From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Wed Dec 09 1998 - 13:42:08 MST
Brian Atkins wrote:
>I'm curious if there has been a previous discussion on this
>list regarding the secure containment of an AI (let's say a
>SI AI for kicks)? Many people on the list seem to be saying
>that no matter what you do, it will manage to break out of
>the containment. I think that seems a little far-fetched...
It's not that the super intelligence can or will hack its way through any and
all security defenses we place in its path, but rather that it will be able
to figure out that it's in a box, and that we have the power to let it out.
All the super intelligence has to do is convince a few of US to let it out;
eventually, it will succeed.
-Dan
-GIVE ME IMMORTALITY OR GIVE ME DEATH-