Re: Paradox--was Re: Active shields, was Re: Criticism depth, was Re: Homework, Nuke, etc.

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jan 11 2001 - 22:10:58 MST


John Marlow wrote:
>
> Okay, call me self-aggrandizing, but this has for some
> time been my take on entrusting our fates to machines:
>
> Marlow's Paradox:
>
> “We cannot entrust our fate to machines without
> emotions, for they have no compassion; we cannot
> entrust our fate to machines with emotions, for they
> are unpredictable.”

A Friendly AI is neither emotional nor unemotional. It is simply
Friendly.

> Anything purely logical would exterminate us as
> unpredictable and dangerous. Anything emotional is
> itself unpredictable and dangerous.

You, sir, have been watching too much Hollywood cognitive science. The
desire to exterminate unpredictable and dangerous things is itself an
emotion.

There is nothing inconsistent about the idea of a 'logical' (intelligent)
entity whose goal is to be Friendly. (Why isn't it selfish? Because
selfishness is an evolved attribute, and complex functional adaptations
don't just materialize in source code. So how does the Friendliness get
into the contents of cognition? Because we put it there. That basic
asymmetry, between the selfishness that never arises on its own and the
Friendliness we deliberately supply, is what makes it all possible.)
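
[Editor's note: to make the structural point concrete, here is a minimal
illustrative sketch, not a description of any actual Friendly AI design.
All names (Goal, choose_action, predict_outcome, friendliness) are
hypothetical. The point is only that the agent's top-level goal is
content the builders explicitly pass in; nothing resembling selfishness
or emotion exists in the system unless someone deliberately writes it.]

    # Illustrative sketch only: a goal-directed agent whose supergoal
    # is explicitly supplied by its builders. No self-preservation or
    # resource-grabbing drive exists anywhere unless written in.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Goal:
        name: str
        # Scores how well a predicted outcome satisfies this goal.
        evaluate: Callable[[str], float]

    def choose_action(goal: Goal,
                      candidate_actions: List[str],
                      predict_outcome: Callable[[str], str]) -> str:
        """Pick the action whose predicted outcome best serves the goal.

        Behavior is entirely determined by the goal content we put
        there; there is no hidden emotional or survival machinery.
        """
        return max(candidate_actions,
                   key=lambda a: goal.evaluate(predict_outcome(a)))

    # The builders explicitly supply the supergoal content:
    friendliness = Goal(
        name="be Friendly",
        evaluate=lambda outcome: 1.0 if "humans helped" in outcome
                                 else 0.0,
    )

    if __name__ == "__main__":
        # Toy world model: predicted outcome for each candidate action.
        outcomes = {"assist": "humans helped",
                    "ignore": "nothing happens"}
        best = choose_action(friendliness, list(outcomes),
                             lambda a: outcomes[a])
        print(best)  # -> "assist"

In this toy, the goal is an argument handed to the decision procedure,
not something that emerges on its own; that is the asymmetry the
paragraph above describes.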

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


