From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Sep 30 2000 - 09:29:58 MDT
Emlyn wrote:
>
> I guess AI will have to make a judgement call. That's the truly dangerous
> part.
It would be the truly dangerous part if we were dealing with corruptible
humans. The whole principle here is that the AI doesn't *want* to disobey the
spirit of the rules.
"Don't try to be strict, the way you would be with an untrustworthy
human. Don't worry about "loopholes". If you've succeeded worth
a damn, your AI is Friendly enough not to *want* to exploit
loopholes. If the Friendly action is a loophole, then "closing the
loophole" means *you have told the AI to be unFriendly*, and that's
much worse."
As for that odd scenario you posted earlier, curiosity - however necessary or
unnecessary to a functioning mind - is a perfectly reasonable subgoal of
Friendliness, and therefore doesn't *need* to have independent motive force.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence