RE: Singularity: AI Morality

From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Thu Dec 10 1998 - 13:05:09 MST


Billy Brown wrote:
>... Humans have more than one system for "what do I do next?" -
>you have various instinctive drives, a complex mass of conscious and
>unconscious desires, and a conscious moral system. When you are trying to
>decide what to do about something, you will usually get responses from
>several of these goal systems. ...
>In an AI, there is only one goal system. ... There is no 'struggle to
>do the right thing', because there are no conflicting motivations.

How can you possibly know this about AIs? I know of a great many
programs that have been written by AI researchers that use conflicting
goal systems, where conflicts are resolved by some executive module.
Maybe those approaches will win out in the end.
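The kind of architecture alluded to above can be sketched roughly as follows. This is an illustrative assumption, not any particular system from the AI literature: several goal modules each propose an action with an urgency score, and an executive module resolves the conflict (here, by simply picking the most urgent proposal).

```python
# Illustrative sketch of multiple conflicting goal systems arbitrated by
# an executive module. All names (GoalModule, Executive, Proposal) are
# hypothetical, chosen for this example only.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    urgency: float  # how strongly this module wants its action taken

class GoalModule:
    def __init__(self, name, propose):
        self.name = name
        self.propose = propose  # function: state -> Proposal

class Executive:
    """Resolves conflicts among goal modules: highest urgency wins."""
    def __init__(self, modules):
        self.modules = modules

    def decide(self, state):
        proposals = [(m.name, m.propose(state)) for m in self.modules]
        # Conflict-resolution policy: take the most urgent proposal.
        name, best = max(proposals, key=lambda p: p[1].urgency)
        return name, best.action

# Two modules that disagree about what to do next.
hunger = GoalModule("hunger", lambda s: Proposal("eat", s["hunger"]))
duty = GoalModule("duty", lambda s: Proposal("work", s["deadline_pressure"]))

executive = Executive([hunger, duty])
print(executive.decide({"hunger": 0.3, "deadline_pressure": 0.8}))
# ('duty', 'work')
```

The point of the sketch is that conflict between motivations is an architectural choice, not something an AI design automatically excludes; a different executive policy (weighted voting, veto rules, etc.) would yield different behavior from the same modules.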

Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar FAX: 510-643-8614
140 Warren Hall, UC Berkeley, CA 94720-7360 510-643-1884



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:56 MST