From: Brian Atkins (brian@posthuman.com)
Date: Wed Dec 09 1998 - 15:04:57 MST
Well ok, imagine something like this: a large corporation decides
to develop its own "slave SI AI" so that it can take down its
corporate rivals more easily. So it builds a secure facility
for it, totally cut off from the Internet or other computers,
in a physically secure room with very very strict security
procedures for the humans interacting with it. Now unless
some kind of silly plot device happens like on TV, this
should keep the AI contained, no? (assuming no wild physics
discoveries)
Billy Brown wrote:
>
> Brian Atkins writes:
> > I'm curious if there has been a previous discussion on this
> > list regarding the secure containment of an AI (let's say a
> > SI AI for kicks)? Many people on the list seem to be saying
> > that no matter what you do, it will manage to break out of
> > the containment. I think that seems a little far-fetched....
>
> Here's why I don't think containment is feasible for an SI:
>
> 1) No one has ever achieved it for programs written by humans. Some
> operating systems get close, but I can't think of one that has never had a
> serious security issue.
>
> 2) Supporting an SI AI would require much faster machines than what we have
> now, running much more complex programs. This makes the problem even worse.
>
> 3) An AI will be much better at programming than humans. That means that
> its efforts to get around our security will be much sneakier and more
> complex than those of human hackers. See #1.
>
> Even if you make a perfectly secure sandbox, we still aren't safe. Never
> underestimate the security risk posed by social engineering:
>
> 4) You have to have human/AI contact to have any idea what the AIs are like.
> This opens up lots of potential problems - the AI can talk someone into
> letting it out, bribe them to do it, 'give away' useful (or fantastically
> valuable) programs that contain seeds of itself, etc.
>
> 5) Don't forget the legal front. The AI could try to convince people that
> it is a person, and you are keeping it as a slave (not hard to do, since
> that's exactly what is happening). If it acts as its own lawyer, you're
> probably going to lose the case.
>
> 6) Do reporters ever talk to the AI? Of course they do. Think of the PR
> campaign the 'poor, helpless, exploited' AI could mount.
>
> Some of these problems are bigger than others, but that isn't the point.
> The real problem is that I thought of all these approaches in the space of
> 15 minutes, and I'm only human. What is something with an IQ of 1,000 (or
> worse, 1,000,000) going to think of?
>
> Billy Brown
> bbrown@conemsco.com
-- The future has arrived; it's just not evenly distributed. -William Gibson
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:56 MST