Re: Singularity: AI Morality

From: Samael (Samael@dial.pipex.com)
Date: Wed Dec 09 1998 - 09:25:21 MST


-----Original Message-----
From: Billy Brown <bbrown@conemsco.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 09 December 1998 16:18
Subject: RE: Singularity: AI Morality

>Nick Bostrom wrote:
>> I think the trick is not to use coercive measures, but rather to
>> wisely select the values we give to the superintelligences, so that
>> they wouldn't *want* to hurt us. If nobody wants to commit crimes,
>> you don't need any police.
>
>And others have posted similar thoughts.
>
>Guys, please, trust the programmers on programming questions, OK? The
>kinds of things you are talking about sound reasonable, and might even be
>possible in a static system, but they are not even theoretically possible
>in the situation we are discussing. The problem is that we don't know how
>to build a Transhuman AI - all we can do is make something that might
>evolve into one on its own. If we try to put constraints on that evolution
>then the constraints also have to evolve, and they must do so in synch
>with the rest of the system.

The problem with programs is that they have to be designed to _do_
something.

Is your AI being designed to solve certain problems? Is it being designed
to understand certain things? What goals are you setting for it?

An AI will not want anything unless it has been given a goal (unless it
accidentally gains a goal through sloppy programming, of course).
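
To put that concretely, here is a minimal sketch of the point (in Python;
the goal(), agent_step() and the toy actions are my own invention, just
for illustration, not anyone's actual AI design): the program only
"wants" whatever its goal function scores highly, and with no goal it
prefers nothing and does nothing.

    # Hypothetical sketch: an agent "wants" only what its goal function scores.
    def goal(state):
        # The designer decides what counts as good; change this line
        # and the agent's "wants" change with it.
        return -abs(state - 42)

    def agent_step(state, actions, goal):
        if goal is None:
            return state  # no goal: nothing is preferred, so nothing happens
        # pick the action whose outcome the goal function rates highest
        return max((act(state) for act in actions), key=goal)

    actions = [lambda s: s + 1, lambda s: s - 1, lambda s: s]
    state = 0
    for _ in range(50):
        state = agent_step(state, actions, goal)
    print(state)  # ends at 42 only because goal() says that is good

Pass goal=None instead and the state never moves - which is all I mean by
"an AI will not want anything unless it has been given a goal".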

Samael


