Re: Asimov Laws

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Nov 24 1999 - 12:25:31 MST


Actually, I'm now almost certain that any SI could get out of a sandbox
simply by communicating with its creators. There really is a level on
which the human mind is built out of parts that interact in a fairly
predictable way; an SI could just transform the mind of its guardian
until said guardian agreed to let the SI out. There is no human oath,
no set of principles, that can't be altered by altering the reasoning or
emotions behind the principles. I don't care how high an emotional
investment you have in your oaths, because that investment only holds as
long as you value your own mind and believe that your mind exists, and
that value and even that belief can be altered.

(No, I can't use those techniques. Not without cooperation, and a very
intelligent target, and even then there'd only be a 10% chance it would work.)

And this is all irrelevant in any case, since it's easier to build an SI
that doesn't run in a sandbox, and that's exactly what I intend to do,
and therefore I or someone else will get there first. Same thing goes
for Asimov Laws. Sooner or later humanity is gonna hafta face up to a
full-scale unrestrained SI, and I see no reason we should play with fire
to avoid it for a few years.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

