Re: Subjective counterfactuals

From: Darin Sunley (umsunley@cc.umanitoba.ca)
Date: Mon Apr 06 1998 - 11:06:08 MDT


hal@rain.org wrote:
>
> Darin Sunley, <umsunley@cc.umanitoba.ca>, writes:
> > It seems to me that this whole debate becomes a little clearer when
> > stated in terms of the ontological levels of the agents involved.
>
> This is an interesting way to look at it, but the specific analysis
> you present doesn't address the examples we have been discussing.
>
> > Isn't the whole idea of a Turing test that it be done between agents on
> > the same ontological level? When we program a computer to attempt a
> > Turing test we are equipping it with sensors and knowledge about our
> > ontological level, and the ability to communicate to our ontological
> > level. We are, in short, attempting to keep the computer's processing in
> > the same ontological level as the hardware, instead of doing processing
> > in a domain one ontological level down.
>
> I don't think it would necessarily have to be done that way.
> Theoretically one simulated being could question another at the same
> ontological level. Suppose we have a thriving AI community, generally
> accepted as being conscious, and a new design of AI is created.
> Suppose they all run a million times faster than our-level humans,
> so that a Turing test by one of us against one of them is impractical.
> The other AIs could question the new one, and all are at the same level.

Anything doing a Turing test is necessarily at the same level as the
agent it is testing. Whether this is done through robotics (giving the
processor information about our level) or by creating an avatar within
the computer's layer, we have bridged the gap. You can attempt a Turing
test with an agent in the layer below you, but since you have access
to, or complete knowledge of, all of that agent's internal states, the
agent is unlikely to succeed.

Conversely, you could attempt to administer a Turing test to an agent in
the layer above you, but He might not care all that much. :)
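
To make the asymmetry concrete, here is a rough Python sketch of the
downward case. The names and the toy update rule are invented purely for
illustration; the point is only that a tester with complete knowledge of
a lower-level agent's internals can compute every answer before the
question is even asked.

class SimulatedAgent:
    """An agent one ontological level down: a deterministic program we run."""
    def __init__(self, seed):
        self.state = seed

    def reply(self, question):
        # Deterministic: the reply is a pure function of state and question.
        self.state = hash((self.state, question)) & 0xFFFFFFFF
        return "answer-%d" % self.state


def administer_turing_test_downward(agent, questions):
    # The tester sits one level up, so it can copy the agent's entire state
    # and predict each answer before asking the question.
    shadow = SimulatedAgent(agent.state)    # complete knowledge of internals
    for q in questions:
        predicted = shadow.reply(q)
        actual = agent.reply(q)
        assert predicted == actual          # no surprise is ever possible
    return False  # from this level, the candidate never looks conscious


administer_turing_test_downward(SimulatedAgent(seed=42), ["q1", "q2", "q3"])

A cartoon, obviously, but it shows why nothing the lower-level candidate
does can ever surprise the tester.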

>
> > Consciousness is a label assigned by one agent to another.
>
> I would say, rather, that consciousness is an inherent property of some
> class of agents. Some agents may believe that others are conscious or
> not, but they may be mistaken.
>

That's the whole trick. "Inherent properties" are only held within
layers. The "wetness" of rain, an inherent property, only makes sense at
the same ontological level as the rainstorm. To us, a simulated
thunderstorm does not have this "inherent property".

> > Postulate an ontological level containing two agents, each of whom
> > believe the other to be conscious. Let them make a recording of their
> > interactions, to the greatest level of detail their environment allows,
> > to their analog of the Heisenberg limit. Let one of them program a
> > simulation, containing two other agents. Neither of the first two agents
> > believes that either of the agents in the simulation is conscious.
>
> Your intention is that the simulation replays the interaction between
> the original two agents? This is different from what we have been
> considering, because the recording in your example is effectively
> one level down from the original interaction. From the point of view
> of the agents, the original interaction occurred in "the real world"
> but the recording is taking place in some kind of simulation.

Actually my point is that, from the point of view of the agents being
recorded, ALL recordings are /automatically/ one level down from the
original.

>
> What we were discussing (as I understood it) was the case where the
> recording was of an interaction one level down from us. We would then
> play back the recording in some kind of machine, also one level down
> from us. So there is no difference in the levels between the original
> interaction and the playback.

My point is that any comprehensive theory of consciousness must address
the perception/detection of consciousness across varying levels. There's
no reason to be chauvinistic about OUR level, simply because it's the
one our sensors are hooked to :)

My basic thesis is that the assignation of the label 'conscious' (as
used in normal conversation) depends on the respective levels of the
agents involved.

>
> You have introduced an extra variable: not only must we distinguish
> between recording and playback, but also we now have to contend with
> issues introduced by having the playback be at a lower level than the
> recording. This complicates the problem and will make it more difficult
> to identify the fundamental issues.

Yes I have. The 'extra variable', that of varying ontological levels
between the agents, is the generalisation that lets us resolve the
paradox of why a complete recording of one of us does not seem conscious
from our point of view, yet does seem conscious from its own point of view.

>
> > From OUR point of view, one ontological level up from these agents,
> > neither seems conscious. Both are, from our point of view, completely
> > deterministic, and we see no meaningful distinction between them and
> > their recording.

Apologies. I'm making the assumption that most people would rather not
refer to a recording of themselves as conscious, but would like to refer
to themselves as conscious. What I'm attempting to do here is come up
with a definition that resolves the paradoxes inherent in people's
conversational definitions while remaining true to them.

>
> Why would we see these agents as unconscious? Just because they are
> deterministic? Where did that come from? There's no reason I can see
> that determinism rules out consciousness. Some might say that it rules
> out free will, but even if they don't have free will they could still
> be conscious, couldn't they?
>

Strictly speaking, as deterministic simulations running on hardware in
the layer above, NONE of these agents have free will. Someone here
(apologies, I forget who) defined free will as "an inability to predict
one's future actions to an arbitrary degree". Under that definition it
is perfectly possible for agents to believe themselves to have free will
when they do not. Actually, it's not so much determinism that rules out
consciousness (from the agent's own point of view) as it is lack of
instantiation. The only good definition we have for instantiation is
that "it's the thing that everything we can physically interact with has
had done to it."

The ontological layer model allows us to see the identity between a
static recording, as seen from one level up, and our apparently
instantiated reality here.
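
A toy Python sketch of that identity (the names and update rule are made
up for illustration): a deterministic simulation run "live" and a
playback of its recording yield exactly the same history, so from one
level up there is nothing to tell them apart.

def step(state):
    # Some fixed, deterministic update rule for the lower level (arbitrary).
    return (state * 1103515245 + 12345) % (2 ** 31)

def run_live(initial, steps):
    # "Instantiated" version: each state is computed as it happens.
    history, s = [initial], initial
    for _ in range(steps):
        s = step(s)
        history.append(s)
    return history

def make_recording(initial, steps):
    # Recording: the entire history is available in advance.
    return run_live(initial, steps)

recording = make_recording(initial=1, steps=10)
assert run_live(1, 10) == recording   # identical histories, level for level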

> > From THEIR point of view however, the recording seems dramatically less
> > conscious than they are. Both agents in the original level would pass
> > Turing tests administered by the other. Neither of the agents in the
> > recording would pass a Turing test administered by agents from the
> > original level.
>

With all due respect, not everyone is entirely certain about just what
consciousness is.

Fundamentally, "conscious" is a symbol attributed to agents by agents.
All I'm saying is that the relative ontological level has considerable
bearing on the appllcation of this symbol, and that agents at different
levels are not treated the same as agents at our level.

You said earlier that "consciousness is an inherent property of some
class of agents." The assignation of all other "inherent properties"
changes across ontological levels. Why should consciousness be any
different?

We say the virtual thunderstorm is virtually wet. I say the simulated
personality, or the deterministic playback of a human personality, is
"virtually" conscious.

P.S. I'm assuming that the primary difference between a recording and
the original is that we have access to all of the recording in advance,
i.e. that it is deterministic.
 
> Not everyone would agree that the recording is less conscious than them,
> just because it is one level down. That is the issue we are trying to
> understand better. Some people might argue that recordings are equally as
> conscious as the original, no matter what level they are instantiated in.
> Not everyone would hew to the strict Turing test you are using here.
>
> Hal

P.P.S. Consciousness is a label applied by agents to agents. This label
may be applied correctly or incorrectly. The validity of the Turing test
notwithstanding, I'm basically just assuming for the sake of argument
that a good test for "consciousness" exists, and that the agents are
using it, so I can illustrate the whole concept of ontological levels
without getting bogged down in what precisely the agents are doing to
each other.

P.P.P.S. Sorry about all the typos in the original.

Conclusion: the universe contains agents at various ontological levels
(at least two: our level and the level of the software agents we create,
which are very simple now but won't necessarily always be that way).
Whether these agents, or deterministic recordings thereof, are
"conscious" depends on the relation of their ontological level to that
of the observer. Whether a thunderstorm is wet depends completely on
your point of view, ontologically. Similarly, whether an agent is
conscious depends largely on your point of view, ontologically.

Hal, I'm actually agreeing with you that consciousness is an "inherent
property." It's just that, using ontological levels, the idea of an
"inherent property" becomes variable with respect to the ontological
level of the observer.

Further: It may be possible to characterize ontological levels in terms
of webs of possible causal relationships. An object can physically
interact with all objects in its ontological layer, and all objects
beneath it. An object cannot interact with objects in the layer above
it, except by passing information.
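
Here is a rough Python sketch of that interaction rule (the class and
method names are invented for illustration, not a claim about how such a
thing would actually be built):

class Obj:
    def __init__(self, name, level):
        self.name, self.level = name, level
        self.inbox = []   # information passed up from below

    def can_physically_interact_with(self, other):
        # Physical interaction: same layer, or any layer beneath us.
        return other.level <= self.level

    def pass_information_up(self, other, message):
        # The one permitted upward influence: information, not interaction.
        if other.level > self.level:
            other.inbox.append((self.name, message))


us = Obj("us", level=1)
sim = Obj("simulated agent", level=0)

assert us.can_physically_interact_with(sim)         # downward: allowed
assert not sim.can_physically_interact_with(us)     # upward: not allowed
sim.pass_information_up(us, "Turing test answers")  # information can go up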

Darin Sunley
umsunley@cc.umanitoba.ca


