Robin Hanson wrote:
>
> To appear, Summer 2002, Social Philosophy & Policy 19(2).
> Current version at http://hanson.gmu.edu/bioerror.pdf or .ps
This is a really fascinating paper... from a Friendly AI standpoint. A
lot of the concepts introduced in the opening sections sound like the sort
of thing a maturing AI would formulate to learn and quantify human
ethics. Is the paper likely to stick around on your website, and is it
available for linkery and/or citation?
I'm surprised at the degree of useful analysis that ethics would seem to
have undergone, in particular the concept of reflective equilibria. Are
there any moderately technical works you would care to recommend on the
subject?
I have a few objections to the way you phrase your evolutionary reasoning,
in particular:
"Often a moral intuition about the worthiness of some action comes
packaged with a rationale that its purpose is to benefit some third party,
even though a careful study of the details and origin of that intuition
suggests that it functions primarily to benefit the actor or his close
associates. Such moral intuitions are commonly considered to be
especially likely to be in error, all else equal, through some complex
process of self-deception."
It is an error to characterize the political emotions as being "complex
processes of self-deception". Political emotions are not self-deceptions;
they are evolved biases in systems of honestly held beliefs. Political
emotions evolve in imperfectly deceptive social organisms; all else being
equal, the actor is more likely to convince others of a given statement if
he believes it himself. The evolutionary rationale is real, but is
nowhere represented in cognition. A slave-owner who believes that slaves
cannot be trusted with freedom is not deceiving himself; he is rather
being deceived by evolution - he is making cognitive errors which have
adaptive value (for him).
Some other minor but evolutionarily bothersome errors scattered through
the document:
"When people want to signal their health, wealth, and intelligence to
potential mates, they still use the ancient signals of physique, sports,
fancy clothes, and witty conversation. They do not use the plausibly more
reliable modern substitutes of medical tests, bank statements, educational
degrees, and IQ scores."
This presumes that the female of the species is seeking health, wealth,
and intelligence as declarative goals, rather than responding to cues
that were adaptive in the ancestral environment; similarly, that the
male is executing a consciously reasoned mate-acquisition strategy rather
than acting on instincts for "how to acquire mates" which were adaptive in
the ancestral environment. However, these errors are comparatively
trivial.
"Infrequent large costs, in contrast, can signal long term allegiance.
[...] Health care is thus our one remaining ancient strong signal of
long-term loyalty to associates, a signal that has likely stuck even as
the world has changed."
Health care is a signal of loyalty not just because of its sparse
temporal distribution, but also because of its context-insensitivity.  Someone
who is sick or injured has apparently decreased in value as an ally; thus,
caring for such an ally sends a signal to nearby observers that the carer
is someone who can be relied on to remain allied even under extreme
circumstances. Again, this motive does not need to be declaratively
represented to become enshrined as an adaptive type of reasoning; indeed,
it is evolutionarily more adaptive if the ulterior motive is *not*
represented - if the carer *genuinely* cares about the recipient. An
unconditional ally is substantially more valuable than a conditional ally;
thus, humans are evolved to admire unconditionality and be disgusted at
conditionality; thus, humans are now evolved to have and display
unconditional emotions, in defiance of short-term payoffs and penalties.
(For the record, please do not interpret the above as a statement that I
believe unconditionality to be irrational. More the reverse; there are
some emotionally buried forms of context-sensitivity that I regard as
naturalistic infringements on valued moral principles.)
"If we think of status as having many good allies, then you want them to
act as if they were sure to be of high status."
Why?
"So if investing in one’s health is more attractive when one has many
allies,"
Why?
"then you will want your ally to invest more in health than he or she
would choose for himself or herself."
This conclusion follows from the premises, but both premises strike me as
non sequiturs. I do not see why either premised behavior is adaptive, or
how the final conclusion is adaptive.
Personally, I would conclude the reverse; in terms of evolutionary
rationale, a patron is more likely to want the ally to expend resources on
group ventures, while the ally is more likely to expend resources on
increasing context-insensitive personal effectiveness, which includes
health care. A person is much more likely to rationalize the group
utility of personal health care than a third-party observer. But the main
thing I'm objecting to is that you went *way* too fast in that paragraph
and totally lost me.
"Thus your allies should care more about your health than about your
happiness."
*This* makes perfect sense, and you don't need the earlier stuff to lead
up to it. Health makes someone a more valuable ally. Happiness may or
may not. If an ally wants to be happy at the expense of their health, it
is adaptive for you if they don't take that action. This is a valued part
of friendship! Some people want to resist temptation, but they can't
resist temptation in contexts where giving in to temptation was adaptive
in the ancestral environment. Thus it's the role of the friend to counsel
the befriended against eating too many cookies or committing adultery. An
elegant and symbiotic relationship.
"While we believe that our apparent paternalism is a response to the
ignorance of those we are supposedly helping, it seems to actually be the
direct result of an ancient fear that such people will not remain in our
group of allies."
I have to say that I found this totally unconvincing. It looks to me like
it's just standard paternalism, the political emotion whereby gaining
power over others (context-insensitive, adaptive power) is rationalized as
a group benefit.
I see no domain specificity for health care; it is a special case of
paternalism on the group scale, and "friends help you resist temptation"
on the individual scale.
"Also, while many believe they want NHI to deal with some internal market
failure, the actual function of NHI appears to be to promote national
solidarity."
Very shaky. Again, I see no reason to hypothesize domain specificity for
health care; as far as I can tell, advocacy of NHI is generated by the
same set of causes that generate advocacy of Welfare. (Social Security
does plausibly have domain specificity with respect to our instincts about
how to treat elders.) To be specific, both Welfare and NHI derive
argumentative force from our intuitions about how to treat tribal members
who are the victims of unpredictable, major, temporally sparse
catastrophes. Opponents of both Welfare and NHI invoke mental imagery
regarding "savings" or "insurance", i.e. that the catastrophes are
predictable and preventable.
But your point about why people don't seek information about the quality
of care provided to others, despite their declared concern, is probably correct.
Pages 19 and 20 struck me as very hard to follow - you seemed to keep
switching between individual definitions of benefit, evolutionary
definitions of benefit, and morally normative definitions of benefit. And
I didn't understand how any of it was relevant to health care.
In the appendix... well, basically I disagreed with just about everything
you said about health care and status, for the reasons given above.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence