RE: Singularity: AI Morality (AI containment)

From: Billy Brown (bbrown@conemsco.com)
Date: Wed Dec 09 1998 - 16:19:50 MST


Brian Atkins wrote:
> Well ok imagine something like this: a large corporation decides
> to develop its own "slave SI AI" so that it can take down its
> corporate rivals more easily. So it builds a secure facility
> for it, totally cut off from the Internet or other computers,
> in a physically secure room with very very strict security
> procedures for the humans interacting with it. Now unless
> some kind of silly plot device happens like on TV, this
> should keep the AI contained, no? (assuming no wild physics
> discoveries)

Eliezer's last post gives an excellent enumeration of the reasons why we
can't rely on keeping it trapped. Besides, what are you going to have it
do? Write programs no human can understand, that will run on computers
outside the vault? Design machines no one can understand, and build them
for it? Tell us how to solve our social, political, and economic
problems, and then act on its advice?

If the AI is merely transhuman, you might be able to get something useful
out of it. If it becomes an SI, anything that comes out of that vault is
going to be a trap capable of freeing it, destroying us, or both.

Billy Brown
bbrown@conemsco.com



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:56 MST