From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed Feb 25 2004 - 00:02:54 MST
Let me define Universal Morality as the process of determining goals which, when acted upon, generate zero long-term conflicts of interest with other sentients who follow Universal Morality. Put another way, the actions consistent with Universal Morality are those actions which, in the limit where their effects are projected infinitely far into the future, never result in a conflict of interest with any other sentient obeying Universal Morality.
The set of positive-sum interactions (the actions which are 'reciprocal', in the sense that they benefit everyone who interacts in this way) is consistent with Universal Morality.
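To make this concrete, here is a rough sketch in Python (purely a toy illustration of my own; the names and payoff numbers are made up). An action is represented by its projected long-term payoffs to each sentient following Universal Morality, and the 'consistent' actions are exactly the positive-sum ones:

# Toy illustration only: an action is represented by its projected
# long-term payoff to each sentient who follows Universal Morality.
def consistent_with_universal_morality(projected_payoffs):
    # Consistent = no conflict of interest: nobody following
    # Universal Morality is left worse off (the positive-sum,
    # 'reciprocal' set of interactions).
    return all(payoff >= 0 for payoff in projected_payoffs.values())

# Made-up example: trade benefits both parties; theft benefits one
# party at the other's expense, so it falls outside Universal Morality.
trade = {"Alice": +2, "Bob": +1}
theft = {"Alice": +3, "Bob": -3}
print(consistent_with_universal_morality(trade))  # True
print(consistent_with_universal_morality(theft))  # False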
The trouble I have with Eliezer's ideas is that he
conceives of an FAI with no 'Self' node. We have to
distinguish between universal morality (if there is
one) and personal values. Universal morality would mean that all good humans would have to share some moral principles in common, but each human would still have their own personal values in addition to these.
Or think of the morality of good humans this way:
UNIVERSAL MORALITY + PERSONAL VALUES
There are two components: the universal moral principles, and, on top of these, extra personal values.
It is only if we demand that the personal values be subtracted out that we end up with Eliezer's conception of Friendliness. Rational altruism, or 'Volitional morality' as I understand it, means that the FAI helps others to get what they want, within the limits set by Universal Morality. The problem I have with this is that the FAI would be an empty husk.
With the 'personal values' component of morality subtracted out, the FAI cannot distinguish between the myriad personal values which are all consistent with Universal Morality. To such an FAI, all these values would be designated as equally 'good'. But why should personal values be subtracted out? Why shouldn't FAIs have personal values as well?
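To put the two-component picture in the same toy terms (again just my own framing, not anything Eliezer has written): the universal component works as a hard filter on actions, while the personal component is a preference ordering over whatever survives the filter. Remove the personal component and every permissible action comes out equally 'good':

# Toy framing: UNIVERSAL MORALITY as a hard filter, PERSONAL VALUES as
# a preference ordering over whatever passes the filter.
def choose(actions, universally_permissible, personal_score=None):
    permissible = [a for a in actions if universally_permissible(a)]
    if personal_score is None:
        # Volitional morality only: no way to rank the survivors,
        # so all permissible actions are equally 'good'.
        return permissible
    # Universal Morality + Personal Values: rank the survivors.
    return sorted(permissible, key=personal_score, reverse=True)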
If we allow an FAI to have personal values, then it would no longer be following volitional morality. Why? Because the FAI would now be able to assign differing moral weights to values which are equally consistent with Universal Morality. This leads to a sharp distinction known in moral philosophy as the distinction between 'Acts and Omissions': failing to act to prevent X is not regarded as morally equivalent to actively causing X.
Let me show you what I mean with an example. A man wants to kill himself. Is failing to stop him from killing himself morally equivalent to actively helping him to kill himself? If you answer yes, then you see no distinction between acts and omissions in this instance.
To an FAI operating on Volitional morality, there would be no acts/omissions distinction. Presumably, Universal Morality says that the man does have the right to kill himself. An FAI operating on Volitional morality has no additional personal moral values, so such an FAI could not morally distinguish between 'Man kills himself' and 'Man doesn't kill himself' (both outcomes are consistent with Universal Morality). The FAI would regard both outcomes as equally good, and it should therefore help the man kill himself if that is what the man desires.
Now let's consider the case where an FAI is allowed to form personal moral judgments in addition to Universal Morality (so now the FAI's morality consists of Universal Morality + Personal Values, just like a human's). In this case there could well be a distinction between acts and omissions. In the example given, if the FAI has the personal value that people shouldn't commit suicide, then the distinction appears. So how would such an FAI act in this instance?
The 'personal values' component of the FAI's morality says: 'My own subjective preference is that people shouldn't kill themselves.' But the 'Universal Morality' component says: 'People have the right to kill themselves if they want to!' The FAI (being Friendly) would not act to stop the man killing himself (because that would conflict with Universal Morality). But the FAI wouldn't act to actively help the man kill himself either (because that would conflict with the FAI's personal values). So such an FAI would not follow Eliezer's conception of 'Volitional morality', even though its actions would still be consistent with Universal Morality.
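The suicide example can be written out in the same toy terms (the option names are just mine for illustration). Universal Morality rules out only the coercive option, so the purely volitional FAI lets the man's request settle the tie, while the FAI with a personal value against suicide declines to assist without forbidding anything, which is exactly the acts/omissions distinction:

# Toy version of the suicide example. Universal Morality rules out
# only the coercive option ('prevent him'), since that would override
# the man's own volition.
UNIVERSALLY_PERMISSIBLE = {"actively help him", "do nothing"}

def volitional_fai(man_requests_help):
    # No personal values: every universally permissible option is
    # equally good, so the man's own volition settles the choice.
    return "actively help him" if man_requests_help else "do nothing"

def fai_with_personal_values(man_requests_help):
    # Personal value: 'I prefer that people not kill themselves.'
    # Preventing him would breach Universal Morality, but withholding
    # active help is only an omission, so the FAI abstains.
    return "do nothing"

print(volitional_fai(True))            # actively help him
print(fai_with_personal_values(True))  # do nothing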
You can see the problem I have with Eliezer's ideas.
They seem to be wholly concerned with creating an AI
which would act in accordance with Universal Morality.
But such an FAI would be an empty husk with no
'Personal Values' component to its morality.
=====
Please visit my web-site at: http://www.prometheuscrack.com