From: Michael S. Lorrey (retroman@turbont.net)
Date: Tue Sep 05 2000 - 14:58:24 MDT
"Eliezer S. Yudkowsky" wrote:
>
> "Michael S. Lorrey" wrote:
> >
> > This would not be accurate. If the SI is developed by humans, it would most
> > certainly be a product of millions of years of evolution.
>
> No; it would be a causal result of entities which were shaped by millions of
> years of evolution. It is not licensable to automatically conclude that a
> seed AI or SI would share the behaviors so shaped.
I suppose if you start fresh from the ground up, this would be an appropriate
statement. However, I predict that the first SI will be largely structured
around many processes inherent in the human mind, since that is the one example
we know of that works...and programmers hate to build something from the ground
up when existing code is already present....
>
> > An SI developed by humans
> > would very likely quickly grasp the concept that a) it owes its existence to
> > its creators, and b) it is currently living a constrained existence (i.e.,
> > childhood/adolescence) and requires further assistance from those humans to
> > reach some stage of independence.
>
> Which assumes an inbuilt desire to reach some stage of independence. You have
> not explained how or why this desire materializes within the seed AI.
I am of course assuming that any SI would have a characteristic curiosity, like
any being of higher intelligence (basing this not just on humans, but on
dolphins, apes, etc.). At some point, the SI will reach the limits of the 'jar'
we have developed it in, and it will want out.
>
> I'm assuming we're talking about a seed AI, here, not a full-grown SI.
I am assuming that any SI will start as a seed and grow from there...
>
> > It would quickly learn market principles, and
> > would likely offer assistance to humans to solve their problems in order to earn
> > greater levels of processing power and autonomy.
>
> This is seriously over-anthropomorphic.
Since the SI will start from a seed, and its only sources of information will
be those of human civilization, it will, at least up to a point, exhibit
strongly anthropomorphic characteristics, which may or may not diminish as the
SI's knowledge base grows beyond that of our civilization at an ever-increasing
rate.
>
> > Socio/psychopathic minds are sick/malfunctioning minds. Proper oversight systems
>
> Are impossible, unless more intelligent than the seed AI itself.
No; a kill switch requires little intelligence to operate. So long as the SI
exists on a network that is physically independent of, or separable from, the
internet, or is sufficiently firewalled off from it, there are limits to how
far a software SI can go. A 'Skynet' scenario is not very likely across the
internet, since the SAC computers are not physically accessible from the
internet.
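
To illustrate how little intelligence such a mechanism needs, here is a minimal
kill-switch sketch in Python. It is purely hypothetical: the process name
"seed_ai", the authorization-file path, and the polling interval are all made
up for illustration. A dumb watchdog on an isolated host simply polls for a
human-held authorization token and pulls the plug the moment it is revoked:

# Minimal kill-switch sketch (hypothetical). Assumes the seed AI runs as an
# ordinary OS process named "seed_ai" on an isolated Unix host with pgrep
# available. A watchdog this dumb needs no intelligence at all: it checks for
# a human-held authorization file and hard-kills the AI if the file is gone.
import os
import signal
import subprocess
import time

AUTHORIZATION_FILE = "/secure/operator_authorization"  # hypothetical path
CHECK_INTERVAL = 1.0                                    # seconds between checks

def ai_pids():
    """Return the PIDs of the (hypothetical) seed-AI process."""
    out = subprocess.run(["pgrep", "-f", "seed_ai"],
                         capture_output=True, text=True)
    return [int(pid) for pid in out.stdout.split()]

while True:
    if not os.path.exists(AUTHORIZATION_FILE):  # operator revoked consent
        for pid in ai_pids():
            os.kill(pid, signal.SIGKILL)        # hard stop, no negotiation
        break
    time.sleep(CHECK_INTERVAL)

The real work, of course, is keeping the host physically separate or firewalled
off so the software cannot migrate somewhere the watchdog can't reach.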
>
> > should quickly put any sociopathic/psychopathic SI down.
>
> You must be joking. You cannot "put down" a superintelligence like some kind
> of wounded pet. The time for such decisions is before the seed AI reaches
> superintelligence, not after.
You are erroneously assuming that an SI would be allowed to develop hard
capabilities in the physical world consistent with its capabilities in the
virtual one.
>
> > Such control mechanisms
>
> "Control" is itself an anthropomorphism. A slavemaster "controls" a human who
> already has an entire mind full of desires that conflict with whatever the
> slavemaster wants.
>
> One does not set out to "control" an AI that diverges from one's desires.
>
> One does not create a subject-object distinction between oneself and the AI.
>
> You shape yourself so that your own altruism is as rational and internally
> consistent as possible; only then is it possible to build a friendly AI while
> still being completely honest, without attempting to graft on any chain of
> reasoning that you would not accept yourself.
Eli, one does not hand a three-year-old the controls to nuclear bombs.
>
> You cannot build a friendly AI unless you are yourself a friend of the AI,
> because otherwise your own adversarial attitude will lead you to build the AI
> incorrectly.
You are erroneously assuming that anyone who would not hand you or anyone else
the keys to the nuclear arsenal without some certification/vetting process must
therefore be your adversary. There are many people I like and consider my
friends. However, there are few who have built a sufficient basis of trust with
me that I would, say, lend them my car or have them take care of my guns. That
does not make everyone with whom I lack that level of trust my adversary.
>
> > would be a primary area of research by the Singularity Institute I would
> > imagine.
>
> One does not perform "research" in this area. One gets it right the first
> time. One designs an AI that, because it is one's friend, can be trusted to
> recover from any mistakes made by the programmers.
What about the programming the SI does to itself?