Re: Preventing AI Breakout [was Genetics, nanotechnology, and programming]

From: John Clark (jonkc@worldnet.att.net)
Date: Sun Oct 24 1999 - 08:47:30 MDT


Robert J. Bradbury <bradbury@www.aeiveos.com> wrote:

>if a self-evolving AI is operating in a simulation of the
>real world, then the problem becomes much more tractable.
>First, the changes take place more slowly so we have a greater
>ability to monitor them.

You and I are equally smart, and we both decide to build an AI: you in
a simulated world, me in the real world. Any intelligence needs a
teacher, and the best one is its environment. Since these are still
the early days, the environment you provide is impoverished, a simple
cartoon world, while the environment I provide has enormous variety
and depth; thus my AI is much smarter than your AI. As a result I have
more status, money, and power than you do, so lots of people try to do
things my way and very few your way.

>Can we guarantee that the AI never discovers it is running on

No.

>and more importantly escape from, a simulation machine?

There is not a snowball's chance in hell. It'll either escape on its
own or convince you to let it out.

>This goes back to the entire thread of whether we can detect *we*
>are running on a simulation or whether our reality is an illusion.

I have a hell of a time trying to figure out if I live in a simulation
because I'm stupid and my world is complex; the AI is smart and its
world is simple, so it wouldn't take it long to figure out what was
going on.

>How do we guarantee that everybody understands and
>adheres to the rules that self-evolving AIs are only
>allowed to exist in simulated worlds?

You can't even convince everybody on this list to do that,
much less everybody in the world.

   John K Clark jonkc@att.net
