From: Lee Corbin (lcorbin@tsoft.com)
Date: Mon Jun 03 2002 - 00:11:56 MDT
Eliezer writes
> [Lee writes]
> > Well, I thought that the origins of this urge to "meddle in
> > other people's affairs" were quite simple. Humans have an
> > urge to control their surroundings. It's why we create
> > comfortable dwellings, convert forests to farmland, and (in
> > part) colonize the universe. But this urge to control, fine
> > as it is in most ways, should wane a little when that control
> > starts to affect other sentients. In other words, people
> > should think twice before rushing off to cure poverty in
> > a third-world country or to fight injustice in a remote county.
>
> No, people have a specific urge to meddle in the affairs of
> other sentients because with meddling comes power and with
> power comes inclusive reproductive fitness.
I think you are being too cynical. It's primarily an urge to
control (which, of course, had evolutionary origins). Yes, sometimes
a conscious or unconscious lust for power is involved. The warfare
among the Greek city states, for example, disrupted trade that was
important to the Romans, and they had an urge to "fix it" just as
you might want to fix an unfordable river by building a bridge.
But they were also motivated by desire for power, and to extend
their empire. This is a case that does *not* disconfirm your view.
But a young and idealistic civil rights activist who reads about
discrimination in the South would board a bus to spend a summer
trying to *correct* a situation, the same way that he'd want to fix
a bridge. He's going to go out into the world, and make something
right. There is no lust for power in that odyssey, wise or unwise
as it may be. That urge to meddle is *not* based on wanting power.
> > If morality is more empirical than rational, and I think it is, then
> > the FAI [will] need a lot of data points about what moral behavior
> > is, and it's up to the FAI to determine the most consistent
> > principles based on that data. Curve fitting, in other words.
>
> You curve fit the elements of the model, not the model itself. Human
> morality is a process that contains many perfectible elements;
I'm not sure that I understand the difference between curve
fitting the elements of the model, and the model itself.
But you go on:
> moral argument is a multistep process that includes, for example, "rational
> reasoning" steps. These steps can be mistaken and can be corrected and the
> result is a "better" morality. An FAI curve fits (experientially learns)
> the steps in the process, not the final outputs.
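A minimal sketch of the quoted distinction, in modern terms (the two-step "moral argument" process, the function names, and all the numbers below are illustrative assumptions, not anything stated in the thread): each step of a multistep process is fitted from examples of *that step*, and the steps are then composed, rather than fitting situations directly to final verdicts.

```python
# Sketch: curve fitting the *steps* of a process rather than its final outputs.
# Everything here (the two-step process, the toy data) is an illustrative assumption.

def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b from paired one-variable examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Step 1 examples: raw situation score -> assessed harm (here, harm = 2*raw + 1)
step1_xs, step1_ys = [0, 1, 2, 3], [1, 3, 5, 7]
# Step 2 examples: assessed harm -> final judgment (here, judgment = -harm)
step2_xs, step2_ys = [1, 3, 5, 7], [-1, -3, -5, -7]

a1, b1 = fit_linear(step1_xs, step1_ys)
a2, b2 = fit_linear(step2_xs, step2_ys)

def judge(raw):
    # Compose the separately learned steps; a mistaken step can be
    # corrected on its own, without refitting the end-to-end mapping.
    harm = a1 * raw + b1
    return a2 * harm + b2

print(judge(4))  # composes the steps: -(2*4 + 1) = -9.0
```

The point of the toy: if step 1 turned out to be a "mistaken" reasoning step, only its examples need correcting, and the corrected step composes into a "better" overall output.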
Moral argument includes "rational reasoning"?? Of course all
coherent thinking includes rationality, but a very concrete
example might help me understand what you are getting at.
Or you might like to work with this example: suppose that
at the end of time I shared a jail cell with Adolf Hitler
who steadfastly and consistently believed that Jews were
subhuman and that they warranted elimination. He would
argue for their destruction in exactly the same terms that
I would argue for the destruction of the smallpox virus.
What could I say? I would attempt to show him (starting at
the molecular level) that Jews were hardly at all different
from other Caucasians whom he treasured. I'd quote Shakespeare's
"Hath not a Jew eyes?", and provide as many examples as possible
that on any close analysis, human beings simply *never* differ
enough to warrant his intense dichotomy. Perhaps you wish to
call this "moral reasoning" but that sounds way too high
falutin' to me. Hitler has many of the same data points we
do, and his hatred of Jews is just an outlier. Finally, it
would be, it seems to me, incredibly difficult for him to
claim consistency. It may be necessary to disabuse him of
the belief in souls, or something, but given enough time,
I don't think that rational people with plenty of food and
nothing to do but study can believe nonsense forever.
What is there to moral reasoning besides the Platonic "as
this is like this, so that is like that"?
Lee
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:14:34 MST