From: Samael (Samael@dial.pipex.com)
Date: Thu Dec 10 1998 - 09:25:43 MST
-----Original Message-----
From: Billy Brown <bbrown@conemsco.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 10 December 1998 16:19
Subject: RE: Singularity: AI Morality
>christophe delriviere wrote:
>> A lot of people are assuming that if a "smarter" AI has a particular
>> moral system, it will follow it so long as it believes, even for a
>> little while, that this is its moral system (say 5 microseconds :) )....
>>
>> I can't see why. We probably all have some moral system, but we surely
>> don't always follow it. I'm a strong relativist and I strongly feel that
>> there is no true objective moral system, but there is of course one
>> somewhat hardwired in my brain. Statistically I follow it almost all the
>> time, but from time to time I do something wrong by that moral system, and
>> because I also think it's totally subjective, I feel bad and don't
>> feel bad about it at the same time. The latter usually wins after a
>> little while. I'm sure a greater intelligence will have the ability to be
>> strongly multiplexed in its world views and will be able to deal with
>> strong contradictions and inconsistencies in its beliefs ;)....
>
>The phenomenon you describe is an artifact of the cognitive architecture of
>the human mind. Humans have more than one system for "what do I do next?" -
>you have various instinctive drives, a complex mass of conscious and
>unconscious desires, and a conscious moral system. When you are trying to
>decide what to do about something, you will usually get responses from
>several of these goal systems. Sometimes the moral system wins the
>argument, and sometimes it doesn't. I suspect that the conscious moral
>system takes longer to come up with an opinion than the other systems, which
>would explain why people tend to ignore it in a crisis situation.
>
>In an AI, there is only one goal system. When it is trying to decide if an
>action is moral, it evaluates it against whatever rules it uses for such
>things and comes up with a single answer. There is no 'struggle to do the
>right thing', because there are no conflicting motivations.
Unless it has numerous different factors which contribute towards its rules.
After all, it would probably have the same problems with certain situations
that we would. Would it think that the ends justify the means? What
variance would it allow for different possibilities? It would be better at
predicting outcomes from its actions, but it still wouldn't be perfect.
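
To make that concrete, here is a toy sketch (purely illustrative; the names,
weights and numbers are made up, not anyone's proposed AI design) of how even
a single goal system can come out conflicted when several weighted factors
feed into its rules and its predictions of outcomes are uncertain:

from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    weight: float    # how much this factor counts in the final verdict

    def score(self, predicted_outcome: dict) -> float:
        # 1.0 if the predicted outcome triggers this factor, 0.0 if unknown
        return predicted_outcome.get(self.name, 0.0)

def evaluate(action, factors, predict, tie_margin=0.2):
    """Produce a single verdict, but flag when the factors nearly cancel out."""
    outcome = predict(action)                 # the prediction itself is imperfect
    total = sum(f.weight * f.score(outcome) for f in factors)
    conflicted = abs(total) < tie_margin      # a close call, not a clean answer
    return total > 0, conflicted

# "Ends justify the means" style tension between two factors in one goal system
factors = [Factor("harm_avoided", 1.0), Factor("rule_broken", -0.9)]
predict = lambda action: {"harm_avoided": 1.0, "rule_broken": 1.0}
verdict, conflicted = evaluate("lie_to_save_someone", factors, predict)
print(verdict, conflicted)   # one answer comes out, but it was a near tie

The point is only that a single goal system with several contributing factors
still gets close calls, even if it always emits a single answer in the end.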
Samael