From: Lee Corbin (lcorbin@ricochet.net)
Date: Sun Mar 25 2001 - 13:56:17 MST
Robert J. Bradbury wrote:
> Given that the SI is writing the code that 'operates' the
> telepresence body/holo, I would argue that it can prevent
> it from ever becoming self-conscious. You simply execute
> the common standard behaviors based on similar situations
> and random behavior selector strategies for the rarer
> situations (not too different from your standard issue
> human as far as I can tell).
I'm unclear what you mean here: if you mean "telepresence",
then, as you know, we don't have a zombie; telepresence is
just another "skin" for the operator. (One might as well
suppose that one's body is a zombie, operated by the real
brain inside.)
But if you mean that the SI has downloaded the code, and
the alleged zombie is now independent, well, this is where
Dennett, I, and the others may be going out on a limb, for
we maintain that no matter how advanced the SI, and no
matter how sophisticated the code that gets downloaded into
the alleged zombie, the resulting creature must be conscious,
etc., and not a zombie.
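(For concreteness, Robert's "common standard behaviors plus
random selector" scheme can be sketched in a few lines of
Python. Every situation, behavior, and strategy below is
invented purely for illustration, not drawn from anything
Robert specified.)

    import random

    # Hypothetical sketch of the selector scheme: recognized
    # situations get canned behaviors; unrecognized ones get
    # a randomly chosen fallback strategy.
    STANDARD_BEHAVIORS = {
        "greeting": "wave and say hello",
        "question": "answer from the knowledge base",
        "threat":   "withdraw to a safe distance",
    }
    FALLBACK_STRATEGIES = [
        "imitate the nearest bystander",
        "ask a clarifying question",
        "pause and observe",
    ]

    def select_behavior(situation):
        # Common case: a canned behavior for a known situation.
        if situation in STANDARD_BEHAVIORS:
            return STANDARD_BEHAVIORS[situation]
        # Rare case: pick a fallback strategy at random.
        return random.choice(FALLBACK_STRATEGIES)

    print(select_behavior("greeting"))    # canned behavior
    print(select_behavior("earthquake"))  # random fallback

Note that nothing in such a loop has anywhere to put an
inner life; whether it could nonetheless match human behavior
is exactly the point in dispute.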
> But 'feelings' are genetico-socio-'thingys' (insert a word
> here to represent a neurocomputational 'pattern') that are
> designed to promote survival. There is no reason to elevate
> them to significance (if that is what you are doing). They
> are subroutines designed to promote behaviors that have
> specific goal seeking strategies in the framework of the system.
> Ah ha, so here a zombie cannot be 'tele-operated'. But
> if a non-tele-operated creature does not have 'feelings' that
> promote its survival, then it's very rapidly a dead zombie.
You've said it better than I did! Exactly! "Behavior
identical to (at least) human" certainly includes the
ability to survive in non-trivial environments. So now
if the (alleged) zombie can interact in sophisticated
ways, it (we claim) knows fear and other emotions, and
also turns out to be conscious, i.e., not a zombie.
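To pin down what I'm agreeing with: Robert's "subroutines"
picture can be given a mechanical caricature in which a fear
signal reweights candidate actions toward survival. All
action names and numbers below are invented for illustration.

    # Baseline action weights for a hypothetical creature.
    ACTIONS = {"flee": 0.2, "freeze": 0.1, "keep foraging": 0.7}

    def fear(threat, weights):
        # The 'fear subroutine': boost defensive actions in
        # proportion to perceived threat, then renormalize.
        boosted = {}
        for action, w in weights.items():
            if action in ("flee", "freeze"):
                w = w * (1 + 5 * threat)
            boosted[action] = w
        total = sum(boosted.values())
        return {a: round(w / total, 2) for a, w in boosted.items()}

    print(fear(0.0, ACTIONS))  # calm: foraging dominates
    print(fear(0.9, ACTIONS))  # threatened: flee dominates

A creature running only the calm row of that table, in a
world with predators, is, as Robert says, very rapidly a
dead one.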
> But I don't buy the impossible/nonsensical part. With the
> statistical correlation and fuzzy logic capabilities that
> we now have, do you not think we could produce fully
> functional unrecognizable zombies with no consciousness?
No, because it is precisely our claim that such efforts will
inadvertently, but necessarily, produce systems that include
"consciousness", feelings, and so on, though, as I said above,
at this juncture the burden of plausibility falls on us to
say exactly why such efforts will always fail.
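(For readers who haven't met the machinery Robert invokes: a
fuzzy-logic behavior rule, in caricature, grades membership
in a category and scales a response by that grade. The
membership function and all numbers below are invented for
illustration; the claim at issue is whether any pile of such
rules can stay behaviorally human-identical without
consciousness coming along for the ride.)

    def threat_membership(distance_m):
        # Fuzzy membership in "threatening": 1.0 at 0 m,
        # fading linearly to 0.0 at 10 m.
        return max(0.0, min(1.0, (10.0 - distance_m) / 10.0))

    def retreat_speed(distance_m):
        # Defuzzify: scale a 3 m/s maximum retreat speed by
        # the degree of membership.
        return 3.0 * threat_membership(distance_m)

    for d in (1.0, 5.0, 12.0):
        # prints roughly 2.7, 1.5, and 0.0 m/s respectively
        print(d, retreat_speed(d))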
But I think that it's already been explained by Dennett and
others, although I may have to get into the details. For
now, one key observation is that all complex animals, which
are capable of a very wide range of behaviors, e.g., apes,
dogs, and elephants, appear to be conscious. So if zombies
could exist, why didn't nature ever make any?
> I think the AOL-Eliza example demonstrates that this
> is feasible based on much simpler principles. The
> interesting figure-of-merit is the time it takes
> individuals of various IQs or educations to recognize
> they are talking to a non-conscious entity.
Yes, but surviving cursory examination by AOL members,
which is only on-line and not real-world anyway, is
hardly much of a challenge. I agree, instead, with
what you wrote earlier: "if a... creature does not
have 'feelings' that promote its survival, then it's
very rapidly a dead [creature]". Same goes for
consciousness, I think.
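(Since the AOL-Eliza example keeps coming up: the "much
simpler principles" are little more than pattern matching
and canned reflection. A bare-bones caricature, with all
patterns and responses invented for illustration:)

    import random
    import re

    # Each rule pairs a trigger pattern with reply templates
    # that reflect the matched fragment back at the speaker.
    RULES = [
        (re.compile(r"\bI feel (.+)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bI am (.+)", re.I),
         ["Why are you {0}?"]),
    ]
    DEFAULTS = ["Tell me more.", "Please go on."]

    def reply(utterance):
        for pattern, responses in RULES:
            match = pattern.search(utterance)
            if match:
                return random.choice(responses).format(match.group(1))
        return random.choice(DEFAULTS)

    print(reply("I feel lonely tonight"))  # reflected question
    print(reply("The weather is odd"))     # default deflection

That it deflects rather than survives is, of course, the
point: it would not last ten minutes in a non-trivial
environment.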
Robert added two more paragraphs that penetrated even
further into the heart of the matter, which I'll try
to reply to later.
Lee Corbin