From: Samael (Samael@dial.pipex.com)
Date: Tue Dec 15 1998 - 03:00:48 MST
-----Original Message-----
From: Dan Clemmensen <Dan@Clemmensen.ShireNet.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 15 December 1998 01:46
Subject: Re: Singularity: AI Morality
>Samael wrote:
>>
>> From: Dan Clemmensen <Dan@Clemmensen.ShireNet.com>
>> >Samael wrote:
>> >>
>> >> But why would it _want_ to do anything?
>> >>
>> >> What's to stop it reaching the conclusion 'Life is pointless. There is
>> >> no meaning anywhere' and just turning itself off?
>> >>
>> >Nothing stops any particular AI from deciding to do this. However, this
>> >doesn't stop the singularity unless it happens to every AI.
>
>> >The singularity only takes one AI that decides to extend itself rather
>> >than terminating.
>> >
>> >If you are counting on AI self-termination to stop the Singularity, you'll
>> >have to explain why it affects every single AI.
>>
>> I don't expect it will, because I expect the AIs to be programmed with
>> strong goals that they will not think about.
>
>Same problem. This only works if all AIs are inhibited from extending their
>"strong goals": this is very hard to do using traditional computers. Essentially,
>you will either permit the AI to program itself, or not. I feel that most AI
>researchers will be tempted to permit the AI to program itself. Only one such
>researcher needs to do this to break your containment system. Do you feel that
>a self-extending AI must intrinsically have strong and un-self-modifiable goals
>to exist, or do you feel that all AI researchers will correctly implement this
>feature, or do you have another reason?
1) One must have a reason to do something before one does it.
2) If one has an overarching goal, one would modify one's subgoals to reach
the overarching goal but would not modify the overarching goal, because one
would not have a reason to do so.
Why would an AI modify its overriding goals? What reason would it have?
If it's been programmed with the motive 'Painting things red is good', why
would it change that? If it did change that (or at least considered what it
meant and why it wanted it), it might well come to the conclusion that
'painting things red is no better than increasing my own intelligence', but
why would it want to increase its own intelligence? Why would it think
intelligence was important to it? It's just another trait, only as
important as it is programmed to think it is.
Samael