Darin Sunley, <umsunley@cc.umanitoba.ca>, writes:
> It seems to me that this whole debate becomes a little clearer when
> stated in terms of the ontological levels of the agents involved.
This is an interesting way to look at it, but the specific analysis you present doesn't address the examples we have been discussing.
> Isn't the whole idea of a Turing test that it be done between agents on
> the same ontological level? When we program a computer to attempt a
> Turing test we are equipping it with sensors and knowledge about our
> ontological level, and the ability to communicate to our ontological
> level. We are, in short, attempting to keep the computer's processing in
> the same ontological level as the hardware, instead of doing processing
> in a domain one ontological level down.
I don't think it would necessarily have to be done that way. Theoretically one simulated being could question another at the same ontological level. Suppose we have a thriving AI community, generally accepted as being conscious, and a new design of AI is created. Suppose they all run a million times faster than our-level humans, so that a Turing test by one of us against one of them is impractical. The other AIs could question the new one, and all are at the same level.
> Consciousness is a label assigned by one agent to another.
I would say, rather, that consciousness is an inherent property of some class of agents. Some agents may believe that others are conscious or not, but they may be mistaken.
> Postulate an ontological level containing two agents, each of whom
> believes the other to be conscious. Let them make a recording of their
> interactions, to the greatest level of detail their environment allows,
> to their analog of the Heisenberg limit. Let one of them program a
> simulation, containing two other agents. Neither of the first two agents
> believes that either of the agents in the simulation is conscious.
Your intention is that the simulation replays the interaction between the original two agents? This is different from what we have been considering, because the recording in your example is effectively one level down from the original interaction. From the point of view of the agents, the original interaction occurred in "the real world" but the recording is taking place in some kind of simulation.
> From OUR point of view, one ontological level up from these agents,
> neither seems conscious. Both are, from our point of view, completely
> deterministic, and we see no meaningful distinction between them and
> their recording.
> From THEIR point of view, however, the recording seems dramatically less
> conscious than they are. Both agents in the original level would pass
> Turing tests administered by the other. Neither of the agents in the
> recording would pass a Turing test administered by agents from the
> original level.