Re: Singularity: AI Morality

From: Samael (Samael@dial.pipex.com)
Date: Thu Dec 10 1998 - 02:15:34 MST


-----Original Message-----
From: Billy Brown <bbrown@conemsco.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 09 December 1998 20:17
Subject: RE: Singularity: AI Morality

>Samael wrote:
>> The problem with programs is that they have to be designed to _do_
>> something.
>>
>> Is your AI being designed to solve certain problems? Is it being
>> designed to understand certain things? What goals are you setting for it?
>>
>> An AI will not want anything unless it has been given a goal (unless it
>> accidentally gains a goal through sloppy programming, of course).
>
>Actually, it's Eliezer's AI, not mine - you can find the details on his web
>site, at http://huitzilo.tezcat.com/~eliezer/AI_design.temp.html.
>
>One of the things that makes this AI different from a traditional
>implementation is that it would be capable of creating its own goals based
>on its (initially limited) understanding of the world. I think you would
>have to program in a fair number of initial assumptions to get the process
>going, but after that the system evolves on its own - and it can discard
>those initial assumptions if it concludes they are false.
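
To make the disagreement concrete, here is a toy sketch (my own
illustration in Python - the SeedAI class, the credence numbers and the
0.5 threshold are all invented for the example, not taken from Eliezer's
design) of a system whose goals only exist while it still credits the
assumptions behind them:

# Toy illustration only, not Eliezer's design. A "seed" reasoner starts
# with built-in assumptions, derives goals from whatever assumptions it
# still holds, and discards any assumption it comes to judge false.
class SeedAI:
    def __init__(self, initial_assumptions):
        # belief -> how strongly the system currently credits it (0..1)
        self.assumptions = dict(initial_assumptions)

    def derive_goals(self):
        # A goal exists only insofar as a still-held assumption supports it.
        return ["pursue: " + belief
                for belief, credence in self.assumptions.items()
                if credence > 0.5]

    def update(self, belief, new_credence):
        # Evidence can weaken an initial assumption until it is discarded.
        if new_credence <= 0.5:
            self.assumptions.pop(belief, None)
        else:
            self.assumptions[belief] = new_credence

ai = SeedAI({"learning more is valuable": 0.9,
             "continued operation is valuable": 0.8})
print(ai.derive_goals())   # both goals present
ai.update("continued operation is valuable", 0.1)
print(ai.derive_goals())   # that goal has quietly vanished

Once the last assumption is discarded, derive_goals() returns nothing at
all, which is exactly my worry: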

But why would it _want_ to do anything?

What's to stop it reaching the conclusion 'Life is pointless. There is no
meaning anywhere' and just turning itself off?

Samael


