From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Mon Mar 26 2001 - 11:17:09 MST
On Sun, 25 Mar 2001, Lee Corbin wrote:
> The whole question is, how plausible really is the existence of a
> statistical response unit, and how well could it survive on its own?
Ah ha, he says, rubbing his hands with an evil grin on his face...
(How I love it when a trap I didn't even know I had set snaps shut...)
But Lee, *you* ARE a "statistical response unit". Presumably you
survive quite well 'on your own'. I'd say 90+% of human behaviors
are completely canned. Do you actually consciously 'think' about
brushing your teeth, driving your car, or what is being said by
someone you are 'politely' listening to, etc.?
We all have 'canned', 'mechanical' behaviors that we have learned
that get executed at a sub-conscious level. We have no awareness
of them at all unless the toothpaste tastes funny, someone runs
in front of the car, or the person going 'blah, blah, blah' suddenly
says 'and then I'm going to take this knife and stab you with it...'.
In those situations our subprocessors bring those 'variances' to
our attention and we make decisions about them. If we had enough
experiences with the exceptions to the rules, those too would become
rules. Since humanity collectively 'survives', I would say that
the statistical behavior of humanity, programmed into a zombie,
would similarly be likely to survive. The world has gone from
being a very dangerous place to being for the most part a very
safe place. That has allowed us to relax the amount of consciousness
we need for survival.
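(To make that loop concrete, here is a toy sketch of my own --
the stimuli and responses are invented, it only shows the shape
of 'canned response plus escalation on variance':)

    # Toy sketch of a 'statistical response unit': canned behaviors
    # run without attention; only a variance gets escalated, and a
    # variance seen often enough becomes just another canned rule.
    CANNED_BEHAVIORS = {
        "toothpaste tastes normal": "keep brushing",
        "road ahead is clear": "keep driving",
        "blah, blah, blah": "nod politely",
    }

    def respond(stimulus):
        if stimulus in CANNED_BEHAVIORS:
            return CANNED_BEHAVIORS[stimulus]   # sub-conscious, no 'thinking'
        return conscious_attention(stimulus)    # the variance gets noticed

    def conscious_attention(stimulus):
        # Deliberate about the exception, then cache the decision so
        # that next time it is handled as a rule.
        decision = "decide what to do about: " + stimulus
        CANNED_BEHAVIORS[stimulus] = decision
        return decision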
Interesting to consider that the simulations may 'have' to run this
way, because as the population increases you want to make the
simulation a more zombie-safe world so there are rarely those
situations in which the lookup table comes up empty. We can
all tolerate weird behaviors by people "some" of the time,
we just can't tolerate them "all" of the time. By 'safety-izing'
the simulation, you make it computationally less burdensome.
>
> I contend that these "common responses" are inadequate for survival in
> the real world. To be very concrete, let us ask what is the minimal
> programmable unit that could (a) hold down a job at Microsoft or Sun,
> (b) drive to work every day, (c) shop and do all the mundane things
> (e.g., fill out drivers license applications) necessary to 21st century
> existence?
(b) & (c) are largely statistical. (a) I'm going to have to think
about. How do zombies handle 'creativity'. Or does this get
swept aside by the fact that zombies don't have to really be creative?
Is the SI is the one really writing the code they produce, not the zombie?
> Forget Silicon Valley: even if it was possible in *ancient Sumeria*
> for someone to do what it took to survive back there, why wasn't
> there a series of natural mutations that got rid of all the excess
> baggage like consciousness, feelings, etc.?
Ancient Sumeria was most likely much harder to survive in than SV.
(The average lifespan *was* much less.) "Consciousness" confers
survival advantages by allowing you to practice future behaviors.
It just doesn't get used once the behaviors have been learned and
proven successful.
> The answer is that it's not possible. It, like so many other
> programming projects, only seems feasible. The first AI that
> would be capable of getting along in Silicon Valley (or ancient
> Sumeria) would be almost exactly as conscious and feeling as
> humans are.
Aw, now don't go putting consciousness & feeling back together
again when I'd worked so hard to separate them. 'Feelings' are
the feedback loops that inform us about, or the irritants that drive,
behaviors that work or don't work. They are our hardware
(genetic) and firmware (learned at a very young age)
'strings' that tell us what is a threat (to be avoided) or
something of benefit (to be sought after). They can
be very nice and wonderful at times (and painful at others),
but they shouldn't be considered at the level of consciousness.
I think the answer is that the zombie, like humans, will have
to have a default subroutine that manufactures a behavior when
the correct one is not known. This doesn't have to be very
sophisticated. You can say "here I laugh" or "here I cry"
or "here I sit and hug myself". What is so complex about
that?
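(A minimal sketch of what I mean -- the table entries and the
canned fallbacks are made up, the point is only how little
machinery the default subroutine needs:)

    import random

    # Tiny behavioral lookup table plus a default subroutine that
    # manufactures *some* behavior when the correct one is not known.
    BEHAVIOR_TABLE = {
        "greeted by coworker": "say hello back",
        "handed a form": "fill it out",
    }

    DEFAULT_BEHAVIORS = ["here I laugh", "here I cry",
                         "here I sit and hug myself"]

    def zombie_behavior(situation):
        try:
            return BEHAVIOR_TABLE[situation]
        except KeyError:
            # Table came up empty -- emit something vaguely plausible.
            return random.choice(DEFAULT_BEHAVIORS)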
> Okay, suppose that we have a creature (that I still don't want
> to call a zombie) which has 10^x times as much storage capability
> as a human being, and it merely does a lookup for everything that
> could conceivably happen to it. It never really calculates
> anything. I will officially concede that if x is a big enough
> number, then the entity is not conscious, and therefore is a
> zombie.
Oh ho, point taken, thank you very much. (Not that I'm really
winning anything here, but we are learning how to make zombies.)
> A. A puppet or apparition run by a remote SI
> B. A self-contained entity with a fantastically large lookup table
> C. A self-contained entity with capabilities of human or above,
> but which calculates its behavior (doesn't look it up)
> D. A self-contained entity that is like C, but isn't conscious
>
Good summary.
> If D, then it's not smart enough to survive in a challenging
> environment (unlike a dog).
Hmmmm, but it is generally thought that the only 'conscious'
animals are from the ape-level to humans. Go back in the
archives and look up the mirror (self-recognition) test
discussions (in fall of '99 I think). That means all other
animals down to the level of fish survive *quite* well with
little or no consciousness. So for your statement above
to be true, you seem to be saying that the natural environment
that these animals live in is not 'challenging'.
For you to link 'consciousness' with 'survival', you are
going to have to link all K-strategy animals (long-lived)
with consciousness!
> Only case B might be considered a zombie, but that case is not
> what people are talking about on this list (until your post).
I'm considering a zombie to be something that looks like a human,
walks like a human, talks like a human, behaves in
general like a human, and survives like a human, but has no
self-consciousness or self-awareness. It's a means to running
a simulation with fewer resources, because you can use one
giant behavioral lookup table for *all* the zombies; you don't
have to actually simulate them as conscious processes.
(But now that I've thought about this a bit more -- see some of
my other posts -- I don't think consciousness is that sophisticated,
so there is going to be an interesting tradeoff between the
memory required for behavioral lookup tables and actual
computational requirements for 'consciousness'.)
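(A back-of-the-envelope way to frame that tradeoff -- every number
below is a placeholder I'm inventing purely for illustration; the
point is the shape of the comparison, not the magnitudes:)

    # Memory cost of ONE shared behavioral lookup table vs. compute
    # cost of simulating every inhabitant as a conscious process.
    def lookup_table_bits(situations, bits_per_entry):
        return situations * bits_per_entry          # paid once, shared by all

    def conscious_sim_ops(inhabitants, ops_per_sec, seconds):
        return inhabitants * ops_per_sec * seconds  # paid per inhabitant

    # e.g. 10^21 situations at ~10^3 bits each, vs. 10^10 inhabitants
    # at 10^16 ops/sec for one simulated year (~3*10^7 s)
    table = lookup_table_bits(10**21, 10**3)
    sims  = conscious_sim_ops(10**10, 10**16, 3 * 10**7)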
>
> So therefore, any creature in your immediate vicinity is either
> a creature with a big lookup table that, because the table isn't
> big enough, still cannot survive on its own---or, it engages in
> calculation, and is really no more efficient about it than we
> are, and so therefore is conscious.
So, now we are back to -- "If it uses an equivalent number of
CPU cycles it is conscious?" I don't buy that at all. Deep Blue
didn't have anywhere near as many CPU cycles as Kasparov,
but it still managed to behave quite 'surprisingly' at times.
It did not, however, have an internal model of 'itself' to think
about -- it did, however, have an internal model of an external
reality which it could simulate. Presumably it had 'feelings'
as well because it evaluated whether a position in that
external reality was 'good' or 'bad'.
What do you call an entity with an internal model of an external
reality, but no concept of 'itself' in that external reality?
[Note -- it's probably possible to argue to some limited extent
that Deep Blue pictured itself as the chess board state in
the external reality, but this gets very swampy.]
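(To pin down what I mean by that question, here's a cartoon of a
Deep-Blue-style evaluator -- grossly simplified, with invented
scoring weights -- that has a model of the external reality and a
'good'/'bad' feeling about it, but no representation of itself
anywhere in that model:)

    # Material-count evaluation of a chess-like position.
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

    def evaluate(board, side):
        # `board` is a list of (piece, owner) tuples -- the external
        # reality. Nothing in it refers to the evaluator itself.
        score = 0
        for piece, owner in board:
            value = PIECE_VALUES[piece]
            score += value if owner == side else -value
        return score   # positive 'feels good', negative 'feels bad'

    # evaluate([("Q", "white"), ("R", "black")], "white")  ->  4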
> (P.S. I'm not even sure that
> 10^21 actions would be enough for a human---remember, it has to
> act human IN EVERY POSSIBLE situation, which might mean millions
> of separate emulations had to go into the lookup table.)
So, the lookups happen very fast (at GHz rates), while humans don't
notice things occurring faster than about 100 Hz. You can
have replicated copies of the table (you have got 10^40+ bits
of memory). You can multi-port the memory accesses. You
can do hashed, nested tables that minimize collisions to
identical locations. There are a host of ways to solve
problems like this.
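(For instance, a rough sketch of the hashed, nested-table idea --
the shard count and the hash choice here are arbitrary, and the
shards are plain dictionaries standing in for banks of replicated,
multi-ported memory:)

    import hashlib

    NUM_SHARDS = 1024
    SHARDS = [dict() for _ in range(NUM_SHARDS)]  # one per memory bank

    def shard_for(situation):
        digest = hashlib.sha256(situation.encode()).digest()
        return int.from_bytes(digest[:4], "big") % NUM_SHARDS

    def store(situation, behavior):
        SHARDS[shard_for(situation)][situation] = behavior

    def lookup(situation):
        # Different situations hash to different shards, so many
        # zombies can be served concurrently with few collisions
        # to identical locations.
        return SHARDS[shard_for(situation)].get(situation)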
> As a slightly relevant aside, I hope you believe that Searle's
> Chinese Room is conscious, intelligent, and has feelings!?
To be honest Lee, I can't remember precisely what this is
at this point (I know I've read it). But taking a page
from Spike's book, I'm too lazy right now to go find it again.
Remember, I'm a computer scientist and/or molecular biologist --
not a philosopher. For that you have to go to Max or Nick.
Robert