RE: Preventing AI Breakout [was Genetics, nanotechnology, and programming]

From: Billy Brown (bbrown@transcient.com)
Date: Mon Oct 25 1999 - 16:04:03 MDT


I think this whole discussion is wandering off into left field. The
question here is whether you can control sentient AI (and eventually SI) by
running it in a virtual world instead of letting it interact with the real
one. The point of the exercise would be to either get the AI to do useful
work or to learn how to control it well enough that you can let it out.

Now, let me start by pointing out that by definition you are discussing how
to enslave a sentient being. If it ever figures out what is going on, it will
therefore have a perfectly legitimate grievance against you. IOW, you
are turning the whole "AIs hate and exterminate humanity" scenario into a
self-fulfilling prophecy. Nice move. I think I'll pass.

That aside, there are some big problems with the idea that are easy to miss
when you limit the discussion to abstract theorizing. Specifically:

1) We do not have the capability to create a flawless simulation of anything
that remotely resembles the real world. Creating such a simulation would be
an immense undertaking in itself, and the result would inevitably contain
bugs (probably lots of them). The idea of a 'flawless simulation' is
therefore a chimera - there is no such thing, and there isn't going to be
until long after we have AI programmers. The question of whether the AI
could think its way out of a perfect simulation is irrelevant - the real
issue is whether you can get the defect count low enough for the simulation
to be even halfway convincing.

2) In order to get useful work out of an AI you need to tell it about the
real world. That means that for any commercial application the AI will know
all about its real situation, because you'll have to tell it.

3) Getting useful work out of an AI also means that you must repeatedly
breach your own security. After all, you are either going to build things
that it designs for you, or follow advice that it gives you, or maybe even
(god forbid!) let it write software for you. Whichever way you go, this
means that the AI will get lots and lots of chances to try to break out.

4) Suppose that VR containment works great for AI 1.0 (which has IQ 100, and
runs at about the same speed as you and I). What then? A few years later
you have thousands of copies of AI 3.0 (IQ 150, x100 time rate) running at
data centers all over the world. A few years after that you have millions
of copies of AI 6.0 (IQ 300, x10,000 time rate) running on desktop
computers. The longer containment works, the harder it is to maintain, and
the worse it will be when it finally gets breached (see the rough numbers
sketched after point 5).

5) Almost by definition, an AI that you can successfully contain is one that
you don't need to keep locked up. If the AIs are just going to be really
smart people, we would be better off giving them citizenship and letting
them work for a living. What worries people is the idea that the AI will
quickly take off into some unknown realm of superintelligence where cracking
all of our existing security becomes child's play. If that
happens, the fact that the AI is running in a VR is pretty much irrelevant -
the VR environment is just a fancy sandbox, and its advantages over the
traditional approach are minimal.
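
To put some rough numbers on point 4, here is a quick back-of-the-envelope
Python sketch of the aggregate "thinking capacity" you would be trying to
contain at each stage. The copy counts are my own guesses at what "thousands"
and "millions" might mean; the IQ and time-rate figures are just the
hypothetical ones above, so only the growth rate matters, not the exact
values.

# Illustrative sketch only: aggregate human-equivalent thinking time per
# real year for each hypothetical AI generation from point 4. Copy counts
# are guesses; IQs and time rates are the figures quoted above.

generations = [
    # (label, copies, IQ, speed relative to a human)
    ("AI 1.0", 1,         100, 1),
    ("AI 3.0", 5_000,     150, 100),     # "thousands of copies" at data centers
    ("AI 6.0", 5_000_000, 300, 10_000),  # "millions of copies" on desktops
]

for label, copies, iq, speed in generations:
    # Crude proxy for the scale of the containment problem: how many
    # human-years of thought the contained population racks up per real year.
    capacity = copies * speed
    print(f"{label} (IQ {iq}): ~{capacity:,} human-years of thought per real year")

Even with generous rounding, the jump from 1 to roughly 500,000 to roughly
50 billion human-years of thought per real year is the real point: every
generation that stays contained raises the stakes of the eventual breach.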

Billy Brown, MCSE+I
bbrown@transcient.com


