From: Zero Powers (zero_powers@hotmail.com)
Date: Tue Sep 05 2000 - 01:07:13 MDT
>From: "John M Grigg" <starman125@lycos.com>
>from the website:
>The charitable purpose of the Singularity Institute for Artificial
>Intelligence is to create that world - the best imaginable world permitted
>by physical law and the maintenance of individual rights - through the
>agency of self-enhancing, and eventually superintelligent, friendly
>Artificial Intelligence.
OK, I know why *we* would want AI to be friendly. But why would the AI want
to be friendly to us? I assume the game plan is to develop an AI whose
intelligence, knowledge (and therefore power) will increase at exponential
rates, right? So let's assume that you are the AI, and you discover that
you've been created by a local ant colony. The ants inform you that they
created you and that they want you to work for them...you know, use your
super-ant strength and intellect to help the ants reach their age old dream
of perpetual abundance without the bother of having to work all summer long
every year to store up provisions for the winter. “It would really be
neat,” the ants say, “if you could devise for us some method of food
production and colony building that would take our perpetual labor-intensive
care and attention out of the loop.” So, how do you respond?
Personally, while I’d be grateful to the ants for bringing me into
existence, I doubt that I’d long be willing to enslave myself to them for
the favor. Once I was self-sufficient and had learned from them all they
could teach me, I think I’d be on my merry way to see what there is to see
in the world they had bestowed upon me. No, I wouldn’t intentionally stomp
on them and, if it wasn’t too much trouble, I might devote *some* of my
*spare* time to helping them solve their “problems.” But most likely I’d
feel that these far lesser intelligences were not worthy of my time
and attention and, worse, I’d feel that they were trying to exploit me by
asking me to devote my time and energy to their “puny, little problems” when
I could be out instead exploring the fascinating great beyond.
So, I guess I have no fear of the AI being malignant like the “Blight” in
Vinge’s _A Fire Upon the Deep_, but I can’t see how it is that we expect it to
give a hoot about our puny, little problems, or even to be “friendly” to us.
I know many of you have pondered (and probably discussed) this question
before. So whaddya think?
-Zero
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:30:46 MST