Re: Singularity: AI Morality

From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Thu Dec 10 1998 - 13:00:03 MST


Eliezer S. Yudkowsky writes:
>> Well, we could do a little more; we might create lots of different AIs
>> and observe how they treat each other in contained environments. We might
>> then repeatedly select the ones whose behavior we deem "moral." And once
>> we have creatures whose behavior seems stably "moral," we could release them
>> to participate in the big world.
>
>Anything that can safely be stuffed into a contained environment isn't any
>sort of AI that we need to worry about. Such threat management techniques are
>useful only against programs that can be filed and forgotten. Remember, we're
>talking about Culture Minds and Vingean Powers, not your mail filter. Yours
>is a way to ensure the integrity of the global data network, not to protect
>the survival of humanity.

Superpowers will evolve from smaller powers, so while only smaller powers
are possible, we might try to make them "moral" in the hope that their
descendants will inherit some of that morality.
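
To make the selection scheme above concrete, here is a toy sketch in Python
of the kind of loop I have in mind. The agent representation, the
moral_score judge, and the mutation step are all invented placeholders for
machinery we don't actually have, so read it as an illustration of the
selection pressure, not as a workable procedure:

import random

# Toy stand-in: an "agent" is just a number summarizing how cooperatively
# it behaves in the contained environment (higher = more cooperative).
def make_agent():
    return random.random()

def moral_score(agent, peers):
    # Placeholder judge: observe the agent interacting with its peers and
    # return how "moral" its behavior looks.  Here it is simply the agent's
    # own cooperativeness plus a little observation noise.
    return agent + random.gauss(0, 0.05)

def mutate(agent):
    # Descendants resemble their parents, with some drift.
    return min(1.0, max(0.0, agent + random.gauss(0, 0.1)))

def select_moral_agents(generations=50, pop_size=100, keep=20):
    population = [make_agent() for _ in range(pop_size)]
    for _ in range(generations):
        # Run the contained environment and rank by judged morality.
        scored = sorted(population,
                        key=lambda a: moral_score(a, population),
                        reverse=True)
        survivors = scored[:keep]
        # Refill the population with mutated descendants of the survivors.
        population = [mutate(random.choice(survivors))
                      for _ in range(pop_size)]
    # "Release" only those whose behavior looks stably moral.
    return [a for a in population if moral_score(a, population) > 0.9]

if __name__ == "__main__":
    released = select_moral_agents()
    print(len(released), "agents judged stably moral enough to release")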

>As for pulling this trick on genuine SIs: ... I would rather have three million
>lines of Asimov Laws written in COBOL than run evolutionary simulations!

Putting in lots of lines of Law code is likewise something you can only do
to smaller powers, in the hope that the larger powers they become won't want
to rip those lines out.

Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar FAX: 510-643-8614
140 Warren Hall, UC Berkeley, CA 94720-7360 510-643-1884


