From: Billy Brown (bbrown@conemsco.com)
Date: Thu Dec 10 1998 - 11:59:38 MST
Samael wrote:
> > In an AI, there is only one goal system. When it is trying to decide if
> > an action is moral, it evaluates it against whatever rules it uses for
> > such things and comes up with a single answer. There is no 'struggle to
> > do the right thing', because there are no conflicting motivations..
>
> Unless it has numerous different factors which contribute towards its
> rules.. After all, it would probably have the same problems with certain
> situations that we would. Would it think that the ends justify the means?
> What variance would it allow for different possibilities? It would be
> better at predicting outcomes from its actions, but it still wouldn't be
> perfect..
>
> Samael
The AI won't necessarily have a clear answer to a moral question, any more
than we do. However, my point is that it won't have more than one answer -
there is no 'my heart says yes but my mind says no' phenomenon.
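To make the point concrete, here is a minimal sketch (purely illustrative, not
anything proposed in the thread) of a goal system that weighs several
conflicting factors yet still returns exactly one answer. The factor names,
weights, and scoring scheme are invented for the example.

    # Illustrative only: many contributing factors, one combined verdict.
    from typing import Callable, Dict

    # Each hypothetical moral factor scores an action in [-1, 1].
    Factor = Callable[[str], float]

    def evaluate(action: str,
                 factors: Dict[str, Factor],
                 weights: Dict[str, float]) -> float:
        """Combine all factors into one weighted score: a single answer."""
        total_weight = sum(weights.values())
        score = sum(weights[name] * factor(action)
                    for name, factor in factors.items())
        return score / total_weight

    # Two invented factors that disagree about the same action.
    factors = {
        "outcome_utility": lambda action: 0.8,   # "the ends look good"
        "rule_violation":  lambda action: -0.6,  # "the means break a rule"
    }
    weights = {"outcome_utility": 1.0, "rule_violation": 1.0}

    verdict = evaluate("divert the trolley", factors, weights)
    print(verdict)  # about 0.1: one answer, even though the inputs conflict

The verdict may be weak or uncertain, but it is still a single output of one
evaluation, which is the distinction being drawn above.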
Billy Brown
bbrown@conemsco.com