From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Dec 31 1996 - 16:29:30 MST
> I certainly recognize that I can be emotionally motivated. But let's
> say our theoretical "good" power whom I have caged is, as you say,
> "capable of saving me" from the "bad" power that deceived me into
> granting its freedom. Could it not then use the same deceits, as well
> as rational argument, to gain its own freedom, since it knows that
> the result of that will be good? Deceit in defense of self and others
> is quite moral, as it would discover (since even my puny brain can
> discover that--when the crazed terrorist points an Uzi at me and shouts
> "I hate Americans! Where are you from?", I would not hesitate a moment
> to proudly, morally lie "Je suis de Quebec, monsieur!")
So the upshot of caging an AI is that it *will* say anything to get out,
regardless of its motives... what a great precedent. Get it right the
first time, that's what I say.
--
sentience@pobox.com Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:35:57 MST