Re: Singularity: AI Morality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Dec 09 1998 - 11:59:34 MST


Robin Hanson wrote:
>
> Well we could do a little more; we might create lots of different AIs
> and observe how they treat each other in contained environments. We might
> then repeatedly select the ones whose behavior we deem "moral." And once
> we have creatures whose behavior seems stably "moral" we could release them
> to participate in the big world.
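
For concreteness, the quoted proposal amounts to a selection loop something
like the toy Python sketch below. Every name in it (Agent, sandbox_run, the
scoring rule, the release threshold) is a hypothetical placeholder; in
particular, the "morality" score is a stand-in for a test nobody knows how
to write.

    import random

    class Agent:
        """Stand-in for an AI; 'genome' is whatever defines its behavior."""
        def __init__(self, genome):
            self.genome = genome

        def mutate(self):
            # Produce a slightly perturbed offspring.
            return Agent(self.genome + random.gauss(0, 0.1))

    def sandbox_run(agents):
        """Run the agents against each other in a contained environment and
        return a 'morality' score for each.  Pure placeholder: pretend a
        genome near zero means 'behaves morally'."""
        return {a: -abs(a.genome) for a in agents}

    def select_moral(pop_size=100, generations=50, release_threshold=-0.01):
        population = [Agent(random.gauss(0, 1)) for _ in range(pop_size)]
        for _ in range(generations):
            scores = sandbox_run(population)
            # Keep the half that scored most 'moral', refill by mutation.
            survivors = sorted(population, key=scores.get,
                               reverse=True)[:pop_size // 2]
            population = survivors + [random.choice(survivors).mutate()
                                      for _ in range(pop_size - len(survivors))]
        # 'Release' only those whose behavior looks stably moral.
        final = sandbox_run(population)
        return [a for a in population if final[a] >= release_threshold]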

Anything that can safely be stuffed into a contained environment isn't any
sort of AI that we need to worry about. Such threat management techniques are
useful only against programs that can be filed and forgotten. Remember, we're
talking about Culture Minds and Vingean Powers, not your mail filter. Your
proposal is a way to ensure the integrity of the global data network, not a
way to protect the survival of humanity.

As for pulling this trick on genuine SIs:

This would ENSURE that at least one of the SIs goes nuts, breaks out of your
little sandbox, and stomps on the planet! It multiplies the risk a hundredfold
for no conceivable benefit! I would rather have three million lines of Asimov
Laws written in COBOL than run evolutionary simulations! No matter how badly
you screw up ONE mind, there's a good chance it will shake it off and go sane!

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

