Billy Brown writes:
>> wisely select the values we give to the superintelligences, ...
>
>Guys, please, trust the programmers on programming questions, OK? ...
>Now, in the real world we can't even program a simple, static program
>without bugs. The more complex the system becomes, the more errors there
>will be. Given that a seed AI would consist of at least several hundred
>thousand lines of arcane, self-modifying code, it is impossible to predict
>its behavior with any great precision. Any static morality module will
>eventually break or be circumvented, and a dynamic one will itself mutate in
>unpredictable ways. The best that we can do is teach it how to deduce its
>own rules, and hope it comes up with a moral system that requires it to be
>nice to fellow sentients.

Well, we could do a little more; we might create lots of different AIs
and observe how they treat each other in contained environments. We might
then repeatedly select the ones whose behavior we deem "moral." And once
we have creatures whose behavior seems stably "moral," we could release them
to participate in the big world.

However, I'd expect evolutionary pressures to act again out in the big world.
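(Purely for illustration, a toy Python sketch of that select-and-release loop. Every name in it -- make_random_ai, observe_in_sandbox, vary, the "moral" judgment criterion -- is an invented placeholder rather than any real system; it only shows the shape of the procedure: breed in a sandbox, keep the well-behaved, release only those that stay well-behaved for many rounds.)

# Toy sketch (all functions below are made-up placeholders) of the
# "breed in a sandbox, select for morality, release when stable" idea.

import random

POPULATION_SIZE = 100
STABILITY_ROUNDS = 10   # consecutive "moral" rounds required before release

def make_random_ai():
    """Stand-in for creating one of 'lots of different AIs'."""
    return {"disposition": random.random(), "moral_streak": 0}

def observe_in_sandbox(ais):
    """Stand-in for watching the AIs treat each other in a contained
    environment and judging each one's behavior 'moral' or not."""
    for ai in ais:
        behaved_morally = ai["disposition"] > 0.5   # toy judgment criterion
        ai["moral_streak"] = (ai["moral_streak"] + 1) if behaved_morally else 0

def vary(ai):
    """Stand-in for copying an AI with some variation."""
    child = dict(ai)
    child["disposition"] = min(1.0, max(0.0, ai["disposition"] + random.gauss(0, 0.05)))
    child["moral_streak"] = 0
    return child

population = [make_random_ai() for _ in range(POPULATION_SIZE)]
released = []

while not released:
    observe_in_sandbox(population)
    # Repeatedly select the ones whose behavior we deem "moral" ...
    keepers = [ai for ai in population if ai["moral_streak"] > 0]
    # ... and release only those whose behavior seems *stably* moral.
    released = [ai for ai in keepers if ai["moral_streak"] >= STABILITY_ROUNDS]
    # Refill the population: keep the well-behaved, add varied copies
    # (or start over from scratch if nothing survived selection).
    parents = keepers or [make_random_ai() for _ in range(POPULATION_SIZE)]
    offspring = [vary(random.choice(parents))
                 for _ in range(POPULATION_SIZE - len(keepers))]
    population = keepers + offspring

print(f"released {len(released)} AI(s) into the big world")

Of course, the loop stops exactly where the worry starts: selection pressure inside the sandbox says nothing about the pressures the released creatures face outside it.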
Robin Hanson
hanson@econ.berkeley.edu     http://hanson.berkeley.edu/
RWJF Health Policy Scholar             FAX: 510-643-8614
140 Warren Hall, UC Berkeley, CA 94720-7360    510-643-1884