Re: Singularity: AI Morality

From: Dan Clemmensen (Dan@Clemmensen.ShireNet.com)
Date: Wed Dec 09 1998 - 17:13:13 MST


Samael wrote:
>
>
> The problem with programs is that they have to be designed to _do_
> something.
>
> Is your AI being designed to solve certain problems? Is it being designed
> to understand certain things? What goals are you setting it?
>
> An AI will not want anything unless it has been given a goal (unless it
> accidentally gains a goal through sloppy programming of course).
>
If computer-based intelligence of any type is possible, then it's very
likely that different researchers will choose different goals. IMO, at
least one researcher will use the goal or directive "enhance your intelligence."
I feel that this is very likely, since after all that is the goal the
researcher was pursuing in the first place. Unfortunately, that's
all it takes to initiate the singularity, given the availability of
a large base of computers. Note that this reasoning is not particularly
dependent on the nature of the AI's programming, but only on its ability
to increase its effective intelligence by using more computing power.



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:56 MST