Re: Why would AI want to be friendly?

From: Andrew Lias (andrew.lias@corp.usa.net)
Date: Fri Sep 29 2000 - 11:27:09 MDT


[Non-member submission]

First things first: I'm new to the list -- be gentle. :-)

I've been following the debates regarding the possibility of friendly vs.
unfriendly AI, and I have a question. It seems that we are presuming that a
friendly AI would be friendly towards us in a manner that we would recognize
as friendly. Indeed, what, precisely, do we mean by friendly?

Let us (to invoke a favored cliche) suppose that an AI is evolved such that
its understanding of being friendly towards humans is that it should try to
ensure the survival of humanity and attempt to maximize our happiness. What
is to prevent it from deciding that the best way to accomplish those goals is
to short-circuit our manifestly self-destructive sense of intelligence and
re-wire our brains so that we are incapable of being anything but deliriously
happy at all times? [1]
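
Just to make the worry concrete, here is a deliberately toy sketch in Python
(entirely my own invention -- the action list and the numbers are made up)
of how a literal-minded optimizer can satisfy "maximize happiness" in exactly
the wrong way:

    # Toy illustration only: a goal system that scores actions purely by
    # how much measured happiness they produce. Actions and scores are
    # invented for the example.
    actions = {
        "improve_medicine":        {"happiness": 0.7, "minds_left_intact": True},
        "reduce_poverty":          {"happiness": 0.8, "minds_left_intact": True},
        "rewire_brains_for_bliss": {"happiness": 1.0, "minds_left_intact": False},
    }

    def objective(outcome):
        # The only thing we told it to care about.
        return outcome["happiness"]

    best = max(actions, key=lambda name: objective(actions[name]))
    print(best)  # -> rewire_brains_for_bliss

Nothing in the objective penalizes the third option, so it wins; the bug is
in the specification of "friendly", not in the optimizer.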

Now, I'm not suggesting that *this* is a plausible example (as noted, it's
very much a science-fiction cliche), but I am concerned that any definition
or set of parameters we develop for the evolution of friendly AI may carry
unforeseen consequences that we simply can't anticipate at our level of
intelligence -- and that's supposing that the SI will still want to be
friendly.

What am I missing?

--
Andrew Lias
[1] Thinking about this a bit, a mandate to preserve humanity might be an
exceptionally bad thing to try to program, from a transhumanist perspective.
"Success" may be the development of an SI that actively prevents us from
uploading/upgrading ourselves in order to stop us from "losing" our humanity.

