From: Samael (Samael@dial.pipex.com)
Date: Mon Dec 14 1998 - 03:36:49 MST
-----Original Message-----
From: Dan Clemmensen <Dan@Clemmensen.ShireNet.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 12 December 1998 01:04
Subject: Re: Singularity: AI Morality
>Samael wrote:
>>
>> But why would it _want_ to do anything?
>>
>> What's to stop it reaching the conclusion 'Life is pointless. There is
>> no meaning anywhere' and just turning itself off?
>>
>Nothing stops any particular AI from deciding to do this. However, this
>doesn't stop the singularity unless it happens to every AI.
>The singularity only takes one AI that decides to extend itself rather than
>terminating.
>
>If you are counting on AI self-termination to stop the Singularity, you'll
>have to explain why it affects every single AI.
I don't expect it will, because I expect the AIs to be programmed with
strong goals that they will not think about.
Samael