RE: Singularity: AI Morality

From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Wed Dec 09 1998 - 10:33:04 MST


Billy Brown writes:
>> wisely select the values we give to the superintelligences, ...
>
>Guys, please, trust the programmers on programming questions, OK? ...
>Now, in the real world we can't even program a simple, static program
>without bugs. The more complex the system becomes, the more errors there
>will be. Given that a seed AI would consist of at least several hundred
>thousand lines of arcane, self-modifying code, it is impossible to predict
>its behavior with any great precision. Any static morality module will
>eventually break or be circumvented, and a dynamic one will itself mutate in
>unpredictable ways. The best that we can do is teach it how to deduce its
>own rules, and hope it comes up with a moral system that requires it to be
>nice to fellow sentients.

Well, we could do a little more: we might create lots of different AIs
and observe how they treat each other in contained environments. We might
then repeatedly select the ones whose behavior we deem "moral." And once
we have creatures whose behavior seems stably "moral," we could release them
to participate in the big world.
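To make that selection loop concrete, here is a toy sketch in Python. The
cooperation-propensity agents, the observed_morality score, and the
selection and mutation parameters are all just illustrative assumptions of
mine, not a specification anyone has offered:

    import random

    def make_agent():
        # A toy agent is just a propensity to cooperate, in [0, 1].
        return random.random()

    def observed_morality(agent, others, rounds=20):
        # Fraction of interactions in which the agent acted cooperatively
        # while paired with randomly chosen partners in the "contained
        # environment."
        coop = 0
        for _ in range(rounds):
            random.choice(others)  # partner drawn, behavior scored below
            coop += random.random() < agent
        return coop / rounds

    def select_moral_agents(pop_size=100, generations=50, keep=0.2, noise=0.05):
        population = [make_agent() for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population,
                            key=lambda a: observed_morality(a, population),
                            reverse=True)
            survivors = scored[:int(keep * pop_size)]
            # Refill the population with mutated copies of the survivors.
            population = [min(1.0, max(0.0, random.choice(survivors)
                                       + random.gauss(0, noise)))
                          for _ in range(pop_size)]
        return population

    if __name__ == "__main__":
        released = select_moral_agents()
        print("mean cooperation propensity:",
              sum(released) / len(released))

The output is the population we would "release": agents whose observed
behavior in the box has been repeatedly filtered for whatever we chose to
score as "moral."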

However, I'd expect evolutionary pressures to act on them again out in the
big world, and so our only real reason for confidence in continued "moral"
behavior would be the expectation that such behavior would be rewarded in a
world where most other creatures also act that way.
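That reward logic can be stated with a small calculation. Under a
stag-hunt-style payoff table (numbers chosen purely for illustration, not
anything claimed above), cooperating only out-earns defecting once most of
the population already cooperates:

    def expected_payoffs(p, reward=3.0, sucker=0.0, temptation=2.0, punish=1.0):
        # p is the fraction of the population that cooperates.
        cooperate = p * reward + (1 - p) * sucker
        defect = p * temptation + (1 - p) * punish
        return cooperate, defect

    for p in (0.2, 0.5, 0.9):
        c, d = expected_payoffs(p)
        print(f"p={p}: cooperate={c:.2f}, defect={d:.2f}")
    # With these payoffs, cooperation pays only once roughly half the
    # population cooperates, which is the sense in which "moral" behavior
    # persists only in a world where most others also behave that way.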

Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar FAX: 510-643-8614
140 Warren Hall, UC Berkeley, CA 94720-7360 510-643-1884


