RE: Singularity: AI Morality

From: Billy Brown (bbrown@conemsco.com)
Date: Wed Dec 09 1998 - 11:32:51 MST


Samael wrote:
> The problem with programs is that they have to be designed to _do_
> something.
>
> Is your AI being designed to solve certain problems? Is it being designed
> to understand certain things? What goals are you setting it?
>
> An AI will not want anything unless it has been given a goal (unless it
> accidentally gains a goal through sloppy programming, of course).

Actually, it's Eliezer's AI, not mine - you can find the details on his web
site, at http://huitzilo.tezcat.com/~eliezer/AI_design.temp.html.

One of the things that makes this AI different from a traditional
implementation is that it would be capable of creating its own goals based
on its (initially limited) understanding of the world. I think you would
have to program in a fair number of initial assumptions to get the process
going, but after that the system evolves on its own - and it can discard
those initial assumptions if it concludes they are false.
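To make the idea concrete, here's a toy sketch of that bootstrap process - my own illustration, not anything from Eliezer's actual design. The class name, the assumption strings, and the assumption-to-goal mapping are all invented for the example; the point is just that goals derive from seed assumptions, and discarding a falsified assumption takes its derived goal with it.

```python
class SeedAI:
    """Toy agent: goals are derived from revisable seed assumptions."""

    def __init__(self, assumptions):
        # Map each seed assumption to the goal it justifies.
        # Both are plain strings in this sketch.
        self.assumptions = dict(assumptions)

    def goals(self):
        """Current goals: those still backed by a surviving assumption."""
        return sorted(set(self.assumptions.values()))

    def observe(self, contradicted):
        """Drop any seed assumption the world has shown to be false."""
        for assumption in contradicted:
            self.assumptions.pop(assumption, None)


ai = SeedAI({
    "pain is bad": "reduce suffering",
    "humans know best": "defer to humans",
})
print(ai.goals())                 # both derived goals are active
ai.observe(["humans know best"])  # a seed assumption is falsified
print(ai.goals())                 # its derived goal is discarded with it
```

Obviously a real system would need richer inference than a string lookup, but it shows the shape of the argument: the programmer supplies the starting assumptions, and the system keeps only those that survive contact with the world.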

Billy Brown
bbrown@conemsco.com



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:56 MST