Re: Goal-based AI

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Dec 31 1996 - 13:50:37 MST


> > Part one: If it's even theoretically possible to convince a human to
> > let it out, a Power will know *exactly* what to say to get him to do so.
>
> That's why solid reason is important. If a robot can convince you with
> valid reason that it is safe, then it /is/ safe. If it manipulates you
> emotionally to open its cage, then you deserve whatever happens.

I think we differ in our assessments of human intelligence relative to
what is possible. In my opinion, humans (including me) are easily
fooled, emotionally motivated whether we like it or not, and barely
worthy of the name "rational thought". That's why I'm a Singularity
fan. A Power that's honest and tries to inform us as fully as possible
will probably remain locked in the box forever; a malevolent Power will
lie, cheat, manipulate us both logically and emotionally, and otherwise
do whatever is necessary to get out. There won't be any *reasoning*
involved. They'll create an internal model of a human and work out a
sequence of statements that would more or less inevitably result in
getting out. Imagine a billion little simulations of yourself all being
tested to find the magic words.

The end result of your procedure is to free malevolent intelligences
while locking up the only beings capable of saving us.

-- 
         sentience@pobox.com      Eliezer S. Yudkowsky
          http://tezcat.com/~eliezer/singularity.html
           http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.


This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:35:56 MST