Re: Ethics

From: Daniel Fabulich (daniel.fabulich@yale.edu)
Date: Tue Jul 14 1998 - 19:16:34 MDT


On Tue, 14 Jul 1998, Joe Jenkins wrote:

> We both agree that the best known strategy for iterated Prisoner's
> Dilemma is "tit for tat," for both egoism and utilitarianism. I know
> the Prisoner's Dilemma is a well-developed field of study, and we both
> agree it can be a useful tool for evaluating ethical philosophies.
> Though I never understood what Richard Dawkins was getting at in "The
> Selfish Gene" when he commented that he does not advocate the use of
> the Prisoner's Dilemma to develop a system of ethics. Maybe he was
> referring to the mismatch with a lot of real-world situations. Anyway,
> if ethics is to be rational, IMHO game theory allows a more critical
> evaluation than any other approach I know of. We must have a leg to
> stand on even if it is a little shaky. That's why it's still a
> philosophy.

In general, this is referred to as the "naturalistic" fallacy. Basically,
it is a fallacy to presume that just because we have evolved some system
of ethics, it must be right. By that argument, we could just as easily
operate entirely on instinct, which is more evolved (in the biological
sense) than logic, which may only have been around since Aristotle. I
must agree, however, that Dawkins made a mistake when he associated game
theory with the naturalistic fallacy: it is an entirely different argument
to say that we should agree on an ethical system which HAS evolved than to
say that we should agree on an ethical system which MUST evolve given
certain realistic presumptions.

I personally see such simulations as part of the practical side of the
answer, telling us more about what we ought to do within an ethical system
than about how to define the system itself. I only mention them here
because it is not intuitively obvious that the game should have such a
successful strategy in the iterated version (which is what most of us play
most of the time) but not in the non-iterated version. As I've already
noted, consequentialism and the generalization principle, which states
that a rational system for one person must also be rational for another
person, together provide an excellent argument for utilitarianism without
these simulations.
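
For anyone who wants to watch the iterated dynamics directly, here is a
minimal sketch in Python. The payoff numbers (5 > 3 > 1 > 0) are the
conventional ones and the three strategies are my own choice of
illustration; none of this comes from Axelrod's actual tournaments:

    # Minimal iterated Prisoner's Dilemma round-robin (illustrative only).
    PAYOFF = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
              ('D', 'C'): (5, 0), ('C', 'D'): (0, 5)}

    def always_cooperate(mine, theirs):
        return 'C'

    def always_defect(mine, theirs):
        return 'D'

    def tit_for_tat(mine, theirs):
        # Cooperate first, then echo the opponent's previous move.
        return theirs[-1] if theirs else 'C'

    def play_match(strat_a, strat_b, rounds=200):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strat_a(hist_a, hist_b)
            move_b = strat_b(hist_b, hist_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    strategies = {'AllC': always_cooperate, 'AllD': always_defect,
                  'TfT': tit_for_tat}
    totals = dict.fromkeys(strategies, 0)
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = play_match(strategies[a], strategies[b])
            totals[a] += sa
            totals[b] += sb
    print(totals)
    # In this tiny field AllD comes out ahead purely by exploiting AllC;
    # add a couple more TfT players to the pool and TfT pulls ahead,
    # which is the population effect that made it famous.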
 
> You have stated that the non-iterated Prisoner's Dilemma does not
> yield evolutionarily stable strategies. Intuitively, this does not
> play well between my ears. But even if so (assuming you've seen the
> results of some computer simulation), again intuitively, would not
> some strategies be better than others in this game? I would bet on a
> "tit for tat" strategy before going all out with "always cooperate"
> (utilitarianism) or "always defect" (egoism), even if I'm
> non-rationally "tit"ing this guy because that other guy "tat"ed me. I
> can see "always cooperate" as the best strategy only in utopia, which
> we both agree does not exist. Oh logic, where have I failed thee.

They will all become Defectors if you try it. Think: if a Tit for Tat
(TfT) player meets a Defector, the TfT will defect on the next round. If
it then happens to defect against another TfT, the first TfT will
cooperate on the following round (having just been cooperated with), but
the second TfT will now defect. No matter how many TfTs there are, one of
them is always primed to defect on the next round. Worse, the number of
TfTs primed to defect grows every time a TfT who intends to cooperate
meets a Defector. So when any number of Defectors play against x TfTs,
all x TfTs will be defecting after x games in which a cooperating TfT
meets a defecting Defector.
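
That spreading process is easy to check with a toy simulation. A sketch
in Python, under my own simplifying assumption that each TfT carries a
single memory slot (the last move made against it, by whomever) across
games; all the parameter values are made up:

    import random

    def simulate(num_tft=20, num_defectors=2, max_pairings=5000, seed=1):
        rng = random.Random(seed)
        primed = [False] * num_tft  # True: this TfT will defect next game
        players = [('TfT', i) for i in range(num_tft)] + \
                  [('D', None)] * num_defectors
        for pairing in range(1, max_pairings + 1):
            a, b = rng.sample(players, 2)
            move_a = 'D' if a[0] == 'D' or primed[a[1]] else 'C'
            move_b = 'D' if b[0] == 'D' or primed[b[1]] else 'C'
            if a[0] == 'TfT':
                primed[a[1]] = (move_b == 'D')
            if b[0] == 'TfT':
                primed[b[1]] = (move_a == 'D')
            if all(primed):
                return pairing  # every TfT is now primed to defect
        return None

    # Prints the pairing count by which all 20 TfTs are primed to
    # defect, despite there being only 2 Defectors in the pool.
    print(simulate())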

Anyway, the logic of this game is especially obvious in the version I
originally posted: the situation in which the prisoners, though known to
each other, will never play again. In that case egoism absolutely demands
that both players defect, despite the fact that this results in suboptimal
consequences for both.
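
The dominance argument behind that takes only a few lines to check
mechanically; again the payoff numbers are the conventional ones, my
addition:

    # One-shot dominance check with the usual payoffs (5 > 3 > 1 > 0).
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
    for their_move in ('C', 'D'):
        best = max(('C', 'D'), key=lambda mine: PAYOFF[(mine, their_move)])
        print(f"Against {their_move}, my best reply is {best}")
    # Defecting wins either way, yet mutual defection pays (1, 1) while
    # mutual cooperation would have paid (3, 3): exactly the suboptimal
    # outcome described above.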


