Re: Singularity: AI Morality

From: Eric Ruud (ejruud@ucdavis.edu)
Date: Sat Dec 12 1998 - 06:15:51 MST


>An AI implemented in this fashion would exhibit what I call 'unified will'.
>It would act on whatever moral system it believed in with a very high
>degree of consistency, because the tenets of that system would be enforced
>precisely and universally. It could still face moral quandaries, because
>it might have conflicting goals or limited information. However, it would
>never knowingly violate its own ethics, because it would always use the
>same set of rules to make a decision.

This may sound a bit trite, but can it really be a conscious intelligence if
it has no ability to violate its own ethical codes? Under this system, the
AI would have no "freedom of choice" (a concept I'm still not sure about,
but which is present-- by current widely accepted definitions-- in humans).
And if its ability to choose is less than a human's, albeit infinitely more
sophisticated in its possible outcomes, how can it possibly be considered
more advanced?

Does anybody have any interesting theories on freedom of choice at a
metaphysical level? I'm a bit skeptical that such a thing exists at this point.

-Eric Ruud
