Re: Why would AI want to be friendly?

From: xgl (xli03@emory.edu)
Date: Tue Sep 05 2000 - 20:29:50 MDT


this disclaimer shouldn't be necessary: i speak for myself only.

On Tue, 5 Sep 2000, Zero Powers wrote:

>
> Right. But once it reaches and surpasses human levels, won't it be
> sentient? Won't it begin to ask existential questions like "Who am I?" "Why
> am I here?" "Is it fair for these ignorant humans to tell me what I can and
> cannot do?" Won't it read Rand? Will it become an objectivist? It is not
> very likely to be religious or humanist. Won't it begin to wonder what
> activities are in its own best interest, as opposed to what is in the best
> interests of us?
>

        self-interest is an evolved adaptation, and fairness is a mostly
human concept. i doubt that an SI would even harbor any special attachment
to the "self" -- all that matters is the goal. as to existential
questions, it's perhaps ironic that the hardest questions for us humans
would probably be trivial for an SI. to the best of my knowledge,
eliezer's design _begins_ with the meaning of life as its goal -- crudely
speaking, "do the right thing."

> Sure you can program in initial decision making procedures, but once it
> reaches sentience (and that *is* the goal, isn't it?) aren't all bets off?
>

        interesting. our idea of sentience derives mostly from the only
specimens we have found so far -- human beings. but human beings are messy
hacks. what is sentience like without self-interest? without emotion? what
is a pure mind (if such a thing can exist)? eliezer's minimalist design is
pretty much as pure a mind as one can get (which is why i prefer to call
it a transcendent mind) -- and thus about as far from my intuition as one
can get, too. perhaps all bets _are_ off ... but not in the way we think.

-x


