Re: Preventing AI Breakout [was Genetics, nanotechnology, and programming]

From: Matt Gingell (mjg223@is7.nyu.edu)
Date: Sun Oct 24 1999 - 03:00:01 MDT


> The fundamental problem I fear with self-evolving AI
> is the connection to the real world as we perceive it.
> However, if a self-evolving AI is operating in a
> simulation of the real world, then the problem becomes
> much more tractable. First, the changes take place
> more slowly so we have a greater ability to monitor them.
> Second, if the program shows signs of modifying itself
> into self-conscious sabotage of extra-environmental entities
> it can be suspended/deleted.
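
Just to make the moving parts concrete, the containment scheme you're
describing is basically a control loop: step the thing, checkpoint it,
and pull the plug the moment the monitor trips. A rough Python sketch
(every name here - Agent, looks_like_sabotage - is made up, and
writing a monitor that actually works is of course the whole problem):

  import copy
  import random

  class Agent:
      """Hypothetical stand-in for the self-evolving program."""
      def __init__(self):
          self.code = [0.0]  # stands in for the agent's mutable state

      def step(self, world):
          # The agent perturbs itself and acts on the simulated world.
          self.code.append(random.random())
          world["tick"] = world.get("tick", 0) + 1

  def looks_like_sabotage(agent, world):
      """Placeholder monitor: flag signs of probing outside the
      sandbox. In practice, this is the hard part."""
      return False

  def run_contained(max_ticks=10000):
      world = {}      # the simulated environment, wholly ours
      agent = Agent()
      snapshots = []
      for tick in range(max_ticks):
          agent.step(world)
          if tick % 100 == 0:
              # Checkpointed, throttled evolution is what makes
              # monitoring feasible at all.
              snapshots.append(copy.deepcopy(agent))
          if looks_like_sabotage(agent, world):
              return "suspended", snapshots  # freeze for inspection
      return "completed", snapshots

The snapshots are the point: you can diff successive versions of the
thing at your leisure, which is what "greater ability to monitor"
cashes out to.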

There's a wonderful story by Stanislaw Lem (1971!) called _Non
Serviam_ exploring some related themes. A researcher peers into a
simulated world populated by "personoids," watching a Socratic-style
dialogue about the nature of God.

_Non Serviam_ is reprinted in _The Mind's I_ (Hofstadter and
Dennett), which I can't recommend strongly enough to anyone
interested in this stuff.

> The questions then become:
> (a) Can we guarantee that the AI never discovers it is running on,
> and more importantly escape from, a simulation machine?
> This goes back to the entire thread of whether we can detect *we*
> are running on a simulation or whether our reality is an illusion.

I don't think this should be a problem. There's no need for the AI's
world to bear any resemblance to our own, so long as it's an
interesting and consistent enough playground to make the experiment
worth doing. Depending on what you want to accomplish, it may not
even matter whether it gets wise to us.

I don't really share your concern, though. What's an example of a
scenario you're worried about? I mean, I don't think anyone (Eliezer
excepted...) is proposing to hook the thing up to a nanotech foundry.
Fearing some resonant, self-modifying loop that cascades into deity
seems a bit like fearing that Los Alamos would start a chain reaction
and wipe out the world. It's frightening, it sticks in your head, but
it doesn't make much sense when you think about it.

> (b) How do we guarantee that everybody understands and
> adheres to the rules that self-evolving AIs are only
> allowed to exist in simulated worlds? {This is not
> dissimilar from the problem of how we guarantee
> that petty dictators don't release bioweapons that
> in the process of killing us, come back to "bite" them.}

Come on, haven't you read _Neuromancer_? That's what the Turing Cops
are for! Public service announcements too: "Be cool, stay in school.
And don't let the self-evolving AIs out, goddamn it!"

-matt


