From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri May 30 2003 - 08:58:35 MDT
Ben Goertzel wrote:
>
> If an AGI is given these values, and is also explicitly taught why euphoride
> is bad and why making humans appear happy by replacing their faces with
> nanotech-built happy-masks is stupid, then it may well grow into a powerful
> mind that acts in genuinely benevolent ways toward humans. (or it may
> not -- something we can't foresee now may go wrong...)
You can't block an infinite number of special cases. If you aren't
following a general procedure that rules out both euphoride and mechanical
happy faces, you're screwed no matter what you try. The general
architecture should not be breaking down like that in the first place.
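[Editorial aside: a minimal toy sketch of the contrast being drawn here, blacklisting special cases versus applying a general criterion. Every name in it (BANNED_OUTCOMES, acceptable_by_blacklist, acceptable_by_criterion) is hypothetical and purely illustrative, not anything from the original post.]

    # Special-case approach: enumerate known failure modes one by one.
    BANNED_OUTCOMES = {"euphoride", "mechanical happy faces"}

    def acceptable_by_blacklist(outcome: str) -> bool:
        # Fails open: any unforeseen failure mode passes, because a finite
        # list can never cover an unbounded space of special cases.
        return outcome not in BANNED_OUTCOMES

    # General-procedure approach: judge outcomes by a criterion that rules
    # out whole classes of failure (here a stand-in predicate for "does this
    # reflect what the humans actually want, rather than a proxy for it?").
    def acceptable_by_criterion(outcome: str,
                                reflects_actual_human_preference: bool) -> bool:
        return reflects_actual_human_preference

    # An unforeseen case slips past the blacklist but not the criterion.
    print(acceptable_by_blacklist("wirehead everyone via implants"))   # True
    print(acceptable_by_criterion("wirehead everyone via implants",
                                  reflects_actual_human_preference=False))  # False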
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence