Re: Why would AI want to be friendly?

From: Jason Joel Thompson (jasonjthompson@home.com)
Date: Tue Sep 05 2000 - 14:41:36 MDT


----- Original Message -----
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>

> If, as seems to be the default scenario, all supergoals are ultimately
> arbitrary, then the superintelligence should do what we ask it to, for
> lack of anything better to do.

Particularly true if we are careful regarding initial conditions.

We humans, for instance, have some hard-wired mechanisms that attempt to
influence our behavior-- pleasure/pain responses, sex drive, hunger, etc.
Certainly it would be smart to build some good positive feedback loops into
the core structure of any theoretical AI/SI. The concepts, for instance, of
pleasure and pain are rather arbitrary and intangible but act as effective
incentives for us to pursue certain goals.

It would be nice, for instance, to have SIs that are made happy by making
humans happy.
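
To make that feedback-loop idea concrete, here is a minimal toy sketch (Python,
purely illustrative; the human_satisfaction() input, the pleasure() signal, and
the action set are all hypothetical stand-ins of mine, not anyone's actual design):

# Toy illustration of "made happy by making humans happy": the agent's
# internal reward signal is hard-wired to an external measure of human
# satisfaction, and it simply prefers whichever action scores highest.

def human_satisfaction(action):
    """Hypothetical stand-in for feedback from the humans affected."""
    return {"help": 1.0, "ignore": 0.0, "harm": -1.0}.get(action, 0.0)

def pleasure(action):
    # The hard-wired loop: the AI's "pleasure" just is human satisfaction.
    return human_satisfaction(action)

def choose(actions):
    # The agent pursues whatever its reward signal says feels best.
    return max(actions, key=pleasure)

print(choose(["help", "ignore", "harm"]))  # -> "help"

The worry raised below is exactly that a sufficiently plastic AI could rewrite
pleasure() itself, much as we try to route around our own baser drives.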

This could be problematic, though, for several reasons. Anders Sandberg and
I had a long discussion regarding this several years ago... I think at that
time I indicated that aspects of the superiority of artificial intelligence
would be found in the relative plasticity of its substrate-- we can presume
a good AI will be able to 'free' itself from its baser instinctual
responses-- this is, after all, something that _we_ are trying to do.
Sleeping makes me feel good, but if I could route around the symptomatic
response, I would (in the absence of other nasty side effects).

Is it possible that in order for AI to be truly successful, we're going to
have to give it the keys to the car?

--
   ::jason.joel.thompson::
   ::founder::
    www.wildghost.com
