From: Lee Corbin (lcorbin@tsoft.com)
Date: Sun Jun 02 2002 - 17:04:27 MDT
Eliezer writes
> Since, under my best understanding of FAI, the same architecture
> that makes the goal system stable and what we would regard as commonsensical
> is the same architecture that gives an FAI the power to conduct its own
> moral reasoning, a deliberate attempt to exert undue influence seems to me
> to indicate a profound misunderstanding of what FAI is about and, more
> importantly, how to build it, and hence doomed to end in disaster.
I guess that "undue influence" means either that bias is
inserted so the FAI's builders unduly prosper, or that the
FAI reflects only their own narrow views of what moral
behavior is.
> > This reminds me of the noble yet patronizing urge to extend
> > a helping hand to the "less fortunate". As I said earlier
> > about "minding your own business", once a person's stomach is
> > full, the urge to meddle in other people's affairs becomes
> > irresistible. But where exactly, or how, to draw the line
> > between true charity that actually improves the lot of life
> > in the universe, and that which only makes the giver feel
> > good and incidentally extends his power?
>
> One good start lies in studying the evolutionary psychology which reveals
> why it is that, once your stomach is full, you experience an urge to meddle
> in others' affairs.
Well, I thought the origin of this urge to "meddle in
other people's affairs" was quite simple. Humans have an
urge to control their surroundings. It's why we create
comfortable dwellings, convert forests to farmland, and (in
part) colonize the universe. But this urge to control, fine
as it is in most ways, should wane a little when that control
starts to affect other sentients. In other words, people
should think twice before rushing off to cure poverty in a
third-world country or fight injustice in a remote county.
> Emergent thoughts on how to benefit others will tend to become fixed
> as attractors to the extent that they benefit the inclusive reproductive
> fitness of the thinker, not the supposed beneficiaries.
That's how evolution works.
> Figuring out how to build an FAI that can solve this moral problem
>> (where exactly, or how, to draw the line between true charity that
>> actually improves the lot of life in the universe, and that which
>> only makes the giver feel good and incidentally extends his power)
> - at least as well as the civilization that built it could have - is not
> as satisfying to our political instincts as directly arguing about morality,
> but it is also far more useful, since I tend to take it as given that any
> morality arrived at by human intelligence will not be optimal[!]. Of course,
> this is a far deeper question than most moral arguments, in the same way
> that building an AI is more difficult than solving a problem yourself; but
> unlike moral argument, there is a hope of arriving at an adequate answer.
Your last two sentences appear to say: (1) arguing about morality
is easier than determining how the moral problem above can be
solved, and (2) there is hope of arriving at an objective, optimal
answer, a *truly* moral solution.
I don't agree with the last point, because I doubt whether
morality can ever be objective. Hence an absolutely rational
FAI machine can never derive what true morality would dictate
from some platonic heaven. (Sorry if I've mischaracterized
your position.) If morality is more empirical than rational,
and I think it is, then the FAI needs a lot of data points
about what moral behavior is, and it's up to the FAI to
determine the most consistent principles based on that data.
Curve fitting, in other words.
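To make the curve-fitting analogy concrete, here is a toy sketch
in Python (the numbers, the variable names, and the choice of a
straight-line fit are purely illustrative assumptions of mine, not
a proposal for how an FAI would actually work):

import numpy as np

# Hypothetical data: situations on one axis, observed moral
# judgments on the other (all numbers invented for illustration).
situations = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
judgments = np.array([0.1, 0.9, 2.2, 2.8, 4.1, 5.0])

# Least-squares fit of a low-degree polynomial to the data; here a
# straight line stands in for the "most consistent principle"
# extracted from the examples.
coefficients = np.polyfit(situations, judgments, deg=1)
principle = np.poly1d(coefficients)

# Applying the inferred principle to a new, unseen situation.
print(principle(2.5))

The point is only that the principles come out of the data rather
than out of a priori reasoning; a more sophisticated learner would
change the fitting method, not the dependence on examples.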
But then this probably crosses the line into what you would
consider "undue influence". I don't see how any machine,
including a human, can derive, deduce, or infer moral principles
without consulting a table of values. A pointer, if you have
one, or an explanation if you don't?
Lee