RE: Singularity: AI Morality

From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Mon Dec 14 1998 - 11:17:59 MST


Billy Brown wrote:
>> How can you possibly know this about AIs? I know of a great many
>> programs that have been written by AI researchers that use conflicting
>> goal systems, where conflicts are resolved by some executive module.
>> Maybe those approaches will win out in the end.
>...
>In humans, there seem to be many different ways for a goal to be selected -
>sometimes we make a logical choice, sometimes we rely on emotions, and
>sometimes we act on impulse. There also does not seem to be a unified
>system for placing constraints on the methods used to pursue these goals -
>sometimes a moral system's prohibitions are obeyed, and sometimes they are
>ignored.
>
>If you want to implement a sentient AI, there is no obvious reason to do
>things this way. It would make more sense to implement as many mechanisms
>as you like for suggesting possible goals, then have a single system for
>selecting which ones to pursue.

There is no obvious reason to do things the way humans do, but there is
no obvious reason not to either. I think it is fair to say that few things
are obvious about high-level AI organization and design.
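
For concreteness, here is one way to read the design you describe:
several independent mechanisms each propose candidate goals, and a
single selection system scores the whole pool and picks one. This is
only my sketch of what you seem to mean; the names, the Python
framing, and the scoring rule are assumptions of mine, not anything
you specified.

# Hypothetical sketch: many goal proposers, one selector.
# All names and the scoring rule are illustrative guesses.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Goal:
    description: str
    urgency: float          # how pressing the proposer thinks it is
    expected_value: float   # rough payoff estimate if pursued

def logical_planner(state: dict) -> List[Goal]:
    # deliberative mechanism: goals from explicit reasoning
    return [Goal("finish report", urgency=0.4, expected_value=0.9)]

def emotional_heuristic(state: dict) -> List[Goal]:
    # affect-like mechanism: goals from quick evaluations
    return [Goal("avoid perceived threat", urgency=0.8, expected_value=0.5)]

def impulse_generator(state: dict) -> List[Goal]:
    # reflex-like mechanism: goals with little evaluation
    return [Goal("grab nearby resource", urgency=0.6, expected_value=0.2)]

def select_goal(proposers: List[Callable[[dict], List[Goal]]],
                state: dict) -> Goal:
    # the single selection system: pool every candidate and apply
    # one consistent rule to choose which goal to pursue
    candidates = [g for p in proposers for g in p(state)]
    return max(candidates, key=lambda g: g.urgency * g.expected_value)

if __name__ == "__main__":
    chosen = select_goal([logical_planner, emotional_heuristic,
                          impulse_generator], state={})
    print("pursuing:", chosen.description)

Whether one selector of this sort beats several partially independent
control systems fighting it out, as in humans, is exactly the part
that does not seem obvious to me.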

Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar FAX: 510-643-8614
140 Warren Hall, UC Berkeley, CA 94720-7360 510-643-1884


