Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Sep 27 2000 - 08:53:08 MDT


Eugene Leitl wrote:
>
> Aaargh. The only AI that counts passes through the diversity
> bottleneck by virtue of the positive autofeedback self-enhancement
> loop. This is the factor which wipes out all other contestants. Coming
> close second, still no cigar.

I seem to recall that when I tried to tell *you* that, you poured out buckets
of objections upon my shrinking head. I don't recall you cheering on
self-enhancement back when I was alone defending that argument against the
legions of Hansonians.

> Only then it radiates.

Excuse me, but I really have to ask: Why? What particular supergoal makes
phylum radiation a good subgoal? It isn't a subgoal of being friendly, a
subgoal of personal survival, a subgoal of happiness, a subgoal of attempting
to maximize any internal programmatic state, or a subgoal of trying to
maximize any physical state in the external Universe. Phylum radiation is
cognitively plausible only if the SI possesses an explicit drive for
reproduction and diversification at the expense of its own welfare.
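
To put the point in toy pseudocode (a sketch of my own for this message, not
any real SI goal architecture; all the names and predicted consequences below
are made up purely to illustrate the argument): an action is adopted as a
subgoal only if it is predicted to serve some supergoal, and "phylum
radiation" never qualifies unless an explicit reproduction/diversification
drive is installed at the top level.

    # Toy illustration of subgoal derivation under a simple goal-system
    # model. All names and mappings here are hypothetical.

    SUPERGOALS = {"friendliness", "personal survival", "happiness"}

    # Hypothetical predictions of which supergoals each candidate action
    # would serve.
    PREDICTED_TO_SERVE = {
        "self-enhancement": {"friendliness", "personal survival"},
        "phylum radiation": set(),   # serves none of the listed supergoals
    }

    def is_subgoal(action):
        """Adopt an action as a subgoal only if it is predicted to serve
        at least one supergoal."""
        return bool(PREDICTED_TO_SERVE.get(action, set()) & SUPERGOALS)

    print(is_subgoal("self-enhancement"))   # True
    print(is_subgoal("phylum radiation"))   # False, unless an explicit
                                            # reproduction drive is added
                                            # to SUPERGOALS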

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
