From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jul 15 2005 - 20:47:55 MDT
Robin Lee Powell wrote:
> On Fri, Jul 15, 2005 at 04:56:12PM -0400, pdugan wrote:
>
>>Here is a funny idea: what if we launch an AGI that recursively
>>self-improves out the wazoo, and nothing changes at all? Say the
>>AGI keeps a consistent supergoal of minding its own business and
>>letting the world continue to operate without its direct
>>intervention. Or maybe the initial supergoals renormalize into an
>>attitude of going with the flow, letting the wind blow as it may.
>>Would such a transhuman mind count as friendly or unfriendly?
>
> Neither, I'd say, but if I had to pick one, Friendly. No question.
Unless someone created the AGI that way *on purpose*, I'd call it a-friendly.
> "UnFriendly Superintelligent AI", to me, means "being that poses a
> serious threat to the continued existence of life in its vicinity".
These terms are all a tad fuzzy, which is okay so long as we keep track of
what the postulated observed behavior, causal origins, and internal design
actually are. There's no rule saying that the Friendly/UnFriendly distinction
can't have fuzzy borderline cases, just like the "sound" of a tree that falls
in a deserted forest. "Sound" is still a fine and useful term; it's just
possible to break it with a special case that forces you to fall back on more
detailed descriptions of what is actually happening. So too with Friendly.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence