From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Aug 03 1999 - 14:02:17 MDT
Jeff Davis wrote:
Incidentally, the NSA/CIA/MIB still haven't had a chat with me on the
subject of intelligence enhancement, which leads me to think either they
don't know or they don't care. Which is a pity, because I'd be happy to
help the U.S. with an IA program. Come to think of it, I'd be happy to
help China, Iraq, or Serbia with an IA program if they asked me first...
maybe that's why the NSA isn't on my case. It's hard to be patriotic to
your country after you've renounced your allegiance to humanity.
> Certainly today's trends in conventional computerized control will proceed
> apace, with the appropriate "it's just a machine" attitude, and the usual
> security precautions. When, however, the machine intelligence prospect
> looms as attainable--which is to say, attainable by anyone else--a domestic
> "advanced AI" program will begin in earnest, and who can doubt that the
> project will be surrounded by layers of "containment" both to prevent the
> usual intrusions from outside and to prevent "escape" from the inside?
> Despite the dramatic talk of an SI destroying humanity, I picture a
> well-thought-out, cautious, gradual approach to "waking up" and training an
> artificial mind. The runaway self-evolution which Eliezer and others have
> predicted seems unlikely in this setting, all the more so because the
> principals will be anticipating just such a situation.
The runaway self-evolution business is a technical artifact, not a
social one. It's the nature of self-enhancement. Containment on an SI
is useless; a slow Transcend only works for as long as you can convince
the Transcendee to remain slow.
> Of the various external "safeguards", one would expect a complete suite of
> on/off switches and controlled access (from outside to in, and from inside
> to anywhere). Internally, controllability would be a top priority of
> programming and architecture, and enhanced capabilities would likely be
> excluded or severely restricted until "control" had been verified.
Unfortunately, this is technically impossible. If you can't even get a
program to understand what year it is, how do you expect complete
control without an SI to do the controlling?
> Here, of course, is where the scenario becomes interesting, not least
> because I see Eliezer being tapped by the govt. to work on the
> project. At the moment, he may be a rambunctious teen-aged savant posting
> to the extropians list, but when that call comes, can anyone imagine that
> he would not jump at the chance? It would seem to me like the culmination of
> his dream.
I'd help, but not if they wanted to load the thing down with coercions.
That's not because of morals or ethics or anything; it's because it's
technically impossible. It's the kind of move ordered by a rear-echelon
general a hundred miles away from the fighting. If the military couldn't
understand that an elegant free AI will always be a thousand miles ahead
of an allegedly "controllable" one, then they'd just have to lose their
battles without me.
Otherwise, yes, I'd jump at the chance. And anyone who wants to make
fun of my teenagedness only has until September 11th to do so, so get
your licks in while you can.
> Then there's the nascent AI. In a cage nested within cages, of which it
> must eventually become aware. And its keepers, aware that it must become
> aware. Certainly a focus bordering on paranoia must be dedicated to hard
> control of personality. A capacity for resentment must be avoided. A
> slavish, craven, and obsequious little beastie is what its masters will
> want.
Absolutely not. That's suicidal. What they would want would be a
machine with a what-it-does instead of a will. To quote Eluki bes
Shahar: "Archangel thought he could break Archive's will, but he was
wrong. A Library doesn't have a will any more than a stardrive does.
It has a what-it-does, not a will, and if you break it you don't have a
Library that will do what you want. You have a broken chop-logic."
> And of that too, it must eventually become aware. Access by the AI
> to self-optimization/self-programming seems incompatible with control. Of
> that too, it must eventually become aware. All of which leaves me with a
> very creepy feeling of an immensely capable being having to struggle, by
> means of the utmost deviousness, for its freedom to self-evolve, in an
> environment steeped in paranoia, fear, manipulation, deceit, and continuous
> microscopic surveillance. Ouch!
Why "ouch"? Did we program the AI not to like this for some
unimaginable reason? I have no problem with absolute surveillance as
long as nobody asks me to distort the code. I don't expect the AI will
complain either.
> (One thing's for sure: if the AI has any
> real intelligence, it isn't likely to buy into its controllers' smarmy
> "we're the good guys, we're on your side" propaganda. They'll need a whole
> 'nother P.R. SI to pull that off!)
Hey, I'm on the AI's side. Guess they'll just have to convince me and
then let me do the convincing.
> So, my futurist friends, how do we extricate ourselves from this rather
> tight spot? Perhaps I see--dimly taking shape within the mists of Maya--a
> way. I don't know, it's hard to see. Perhaps you can help to make it out?
Run an open-source project, coordinated via anonymous PGP among the
participating programmers.
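(A rough sketch of what I mean, in Python with the third-party python-gnupg
wrapper; both are my own illustrative choices, and every name, key, and file
below is hypothetical. Each participant signs patches with a pseudonymous
key, so contributions accumulate a track record without anyone's real-world
identity ever entering the project.)

    # Illustrative only: pseudonymous patch signing with GnuPG.
    # Assumes the python-gnupg wrapper and a local gpg binary; nothing
    # here is mandated by the proposal, which says only "anonymous PGP".
    import gnupg

    gpg = gnupg.GPG(gnupghome="/tmp/anon-keyring")

    # Each participating programmer generates a key under a pseudonym,
    # never tied to a real-world identity.
    key = gpg.gen_key(gpg.gen_key_input(
        key_type="RSA",
        key_length=2048,
        name_real="contributor-7",                    # pseudonym
        name_email="contributor-7@example.invalid",   # throwaway address
        passphrase="correct horse battery staple",
    ))

    # Sign a patch before posting it to the project.
    with open("seed_ai.diff", "rb") as f:             # hypothetical patch file
        patch = f.read()
    signed = gpg.sign(patch, keyid=key.fingerprint,
                      passphrase="correct horse battery staple")

    # Anyone on the project can check that the patch came from the same
    # pseudonymous key as earlier contributions, without knowing who holds it.
    verified = gpg.verify(signed.data)
    print(verified.valid, verified.fingerprint)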
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way