From: Barbara Lamar (shabrika@juno.com)
Date: Tue Sep 05 2000 - 15:11:32 MDT
Note: my questions below are sincere questions, not attempts to start an
argument. I've followed this thread with great interest, having asked
this question and variations of it myself.
On Tue, 05 Sep 2000 14:19:43 -0400 "Eliezer S. Yudkowsky"
<sentience@pobox.com> writes:
>
> Which assumes an inbuilt desire to reach some stage of independence.
> You have not explained how or why this desire materializes within the
> seed AI.
>
> I'm assuming we're talking about a seed AI, here, not a full-grown SI.
What if you don't make this assumption? Would a full-grown SI
necessarily have an inbuilt desire to reach some stage of independence?
Independence from what or whom? It seems this question must be answered
before one can meaningfully discuss whether such independence would be a
necessary condition of superintelligence.
> You must be joking. You cannot "put down" a superintelligence like
> some kind of wounded pet.
No, not like a wounded pet. But I can imagine wanting to destroy an SI
that I perceive as a threat to me.
> The time for such decisions is before the seed AI reaches
> superintelligence, not after.
Is this because once the AI reaches the SI stage it would be hopeless for
the less intelligent humans to try to destroy it? Or because of moral
concerns? Or some combination of both? Or is it logically necessary
given the properties of humans and SIs?
> One does not set out to "control" an AI that diverges from one's
> desires.
>
> One does not create a subject-object distinction between oneself and
> the AI.
Would the pronoun "one" in the sentences above refer only to the
creator(s) of the AI? Or to some larger group of humans? Would these
sentences be valid if the pronoun "one" could also refer to an SI in
relation to an AI?
> You shape yourself so that your own altruism is as rational, and
> internally consistent as possible; only then is it possible to build a
> friendly AI while still being completely honest, without attempting to
> graft on any chain of reasoning that you would not accept yourself.
You're not implying that the AI will necessarily take on the personality
of its creator, though? Why would honesty on the part of the creator be
necessary? Honesty with respect to what?
> You cannot build a friendly AI unless you are yourself a friend of the
> AI, because otherwise your own adversarial attitude will lead you to
> build the AI incorrectly.
The above makes sense--but the fact that the creator is a friend of the
AI doesn't imply that all humans are friends of the AI. It seems as
though the reverse would also be true--that the AI would not necessarily
be a friend of all humans. It seems unlikely that the creator could be
friendly towards all of humanity except at an abstract level. But maybe
friendliness at the abstract level is all that's required?
What about when one human or group of humans has interests that conflict
with those of another human or group? What sort of instructions would
the AI have for dealing with such situations?
Barbara