Re: Why would AI want to be friendly?

From: Zero Powers (zero_powers@hotmail.com)
Date: Mon Oct 02 2000 - 22:24:21 MDT


From: "Eliezer S. Yudkowsky" <sentience@pobox.com>

>Emlyn wrote:
> >
> > Eliezer wrote:
> > > As for that odd scenario you posted earlier, curiosity - however
> > > necessary or unnecessary to a functioning mind - is a perfectly
> > > reasonable subgoal of Friendliness, and therefore doesn't *need* to
> > > have independent motive force.
> >
> > I'm not sure I understand how curiosity can be a subgoal for a seed ai;
> > I'd love some more on that.
>
>You need curiosity in order to think, learn, and discover, and you need to
>think, learn, and discover in order to be more generally efficient at
>manipulating reality, and being more generally efficient at manipulating
>reality means you can be more efficiently Friendly.

I hate to always sound like the pessimist in the bunch, but doesn't "being
more generally efficient at manipulating reality" also mean that you can be
more efficiently Unfriendly? In other words, if I had an enemy, say an
enemy with a photographic memory who learns 10 to the X times faster than
I do, I would certainly hope that he was not very curious and not very
efficient at manipulating reality.


