Re: Singularity: AI Morality

From: Samael (Samael@dial.pipex.com)
Date: Fri Dec 11 1998 - 04:36:26 MST


-----Original Message-----
From: Billy Brown <bbrown@conemsco.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 10 December 1998 20:26
Subject: RE: Singularity: AI Morality

>Samael wrote:
>> > In an AI, there is only one goal system. When it is trying to decide
>> > if an action is moral, it evaluates it against whatever rules it uses
>> > for such things and comes up with a single answer. There is no
>> > 'struggle to do the right thing', because there are no conflicting
>> > motivations.
>>
>> Unless it has numerous different factors which contribute towards its
>> rules. After all, it would probably have the same problems with certain
>> situations that we would. Would it think that the ends justify the
>> means? What variance would it allow for different possibilities? It
>> would be better at predicting outcomes from its actions, but it still
>> wouldn't be perfect.
>>
>> Samael
>
>The AI won't necessarily have a clear answer to a moral question, any more
>than we do. However, my point is that it won't have more than one answer -
>there is no 'my heart says yes but my mind says no' phenomenon.

But it might have an 'objective' viewpoint.

"Tell me oh wise AI, is it moral to X'

"Oh suplicant, you are a utilitarian, so it is indeed moral to X. I would
advise you to not let your christian neighbours find out as according to
their moral system, it is wrong to X."
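As a rough illustration of what I mean (a minimal sketch in Python; the
frameworks, rules and the action X are all hypothetical placeholders, not
anyone's actual design): the AI runs one evaluation procedure, and the
"answer" it gives is simply relative to whichever rule set it is asked to
evaluate against, with no internal tug-of-war.

# Sketch only: the AI has a single evaluation procedure, but the verdict
# is relative to the moral framework supplied. All names and rules here
# are made-up illustrations.

# Each framework is just a predicate over a described action.
FRAMEWORKS = {
    "utilitarian": lambda action: action["net_utility"] > 0,
    "christian":   lambda action: not action["breaks_commandment"],
}

def evaluate(action, framework):
    """One goal system: one procedure, one answer per framework."""
    return FRAMEWORKS[framework](action)

def advise(action):
    # No 'heart vs. mind' conflict: the AI simply reports each
    # framework's verdict side by side.
    return {name: evaluate(action, name) for name in FRAMEWORKS}

# A hypothetical action X: positive net utility, but it breaks a rule.
x = {"net_utility": 10, "breaks_commandment": True}
print(advise(x))   # {'utilitarian': True, 'christian': False}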

Samael


