From: Lee Corbin (lcorbin@tsoft.com)
Date: Fri Nov 22 2002 - 08:20:35 MST
Anders writes
> > > I have always thought there should be a children's
> > > book about [tit for tat].
> >
> > Well, I think that *children* already understand it quite well:
> > just observe three year olds on a playground.
>
> They get the tit-for-tat bit well, but they often are too
> retaliatory. Just some fine-tuning needed :-)
Too retaliatory? I guess you mean "two tats for one tit".
But while you're at it, could you explain why EFAE (an eye
for an eye) is not tit-for-tat? (It was claimed not to be
tit-for-tat in some earlier post in the old thread.)
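The "too retaliatory" point can be made concrete with a toy sketch. This is my own illustration, not anything from the thread: plain tit-for-tat echoes a single stray defection back and forth, while an over-retaliatory "two tats for one tit" player escalates it into permanent mutual defection.

```python
# Toy comparison (invented for illustration): tit-for-tat versus an
# over-retaliatory "two tats for one tit" strategy, after one accidental
# opening defection by player A.

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then copy the opponent's previous move."""
    return their_hist[-1] if their_hist else "C"

def two_tats(my_hist, their_hist):
    """Defect if the opponent defected in either of the last two rounds."""
    return "D" if "D" in their_hist[-2:] else "C"

def play(strat_a, strat_b, rounds, opening=("C", "C")):
    """Play the iterated game, seeding round 1 with the given moves."""
    hist_a, hist_b = [opening[0]], [opening[1]]
    for _ in range(rounds - 1):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return "".join(hist_a), "".join(hist_b)

# One stray defection by A in round 1:
print(play(tit_for_tat, tit_for_tat, 6, opening=("D", "C")))  # ('DCDCDC', 'CDCDCD')
print(play(tit_for_tat, two_tats, 6, opening=("D", "C")))     # ('DCDDDD', 'CDDDDD')
```

Two tit-for-tat players bounce the single defection back and forth forever; against the two-tats player, one stray defection locks both sides into defecting for good.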
> > The most interesting scenarios are those that start from
> > MAD and then slowly evolve towards
> >
> >                  Cooperate       Defect
> >   Cooperate        1,1        -1000,  100
> >   Defect         100,-1000     -100, -100
> >
> > Yikes! It's like slowly raising the temperature in
> > a kettle holding a frog. Is one to be "hyper-rational"
> > and start the war immediately as soon as one realizes
> > that the evolution towards the latter table has started?
> > Or is one to grimly hang on, hoping against hope that
> > either the evolution stops or the other party grimly
> > also hangs on?
>
> If the rate of change is slow, then it is likely that new factors
> will appear, making the hyperrational choice cooperation in the
> expectation of new options. Also, since both players to some extent
> can influence the matrix, they might set out to stop the evolution.
That sounds prudent :-) but uninteresting from a game theory
POV :-(
> One neat solution would be to have each side emplace bombs wherever
> they want in the other side's country.
I've been thinking of that! Since the U.S. can target every town
in Iraq, wouldn't it be symmetrical and fair for Iraq to have
smuggled an A-bomb into every American town?
> Tamperproofed bombs that could be detonated remotely, but would
> do so (say) half an hour after the signal was given. That would
> firmly force the matrix to the first form (retaliatory capacity
> on subs is of course a less advanced and more reliable solution,
> but the above is the cool and sf-friendly solution :-)
The stronger (or more technologically advanced) side would have to
be out of its mind to allow that. But it's interesting, and I
wonder if the U.S. during the Cold War---despite its internal
propaganda---felt itself holding the best cards. It seems that
the U.S. always had a big edge, and that the Soviet planners
knew it.
But to return to the hard case. Let us suppose that the planners
on both sides must accept evolution towards the table above, in
which progressively it becomes more and more advantageous to
strike first, and more and more dangerous not to.
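The drift toward the table above can be sketched in a few lines. This is my own illustration, not anything from the thread: I assume a starting MAD-style matrix in which striking first is near-suicidal (the -2000 entries are invented), linearly interpolate toward the quoted first-strike matrix, and report when defection first becomes the dominant move.

```python
# Sketch: slowly "heat the kettle" from an assumed MAD matrix toward the
# first-strike matrix quoted earlier, and find the point where defection
# first dominates.  The MAD payoffs of -2000 are invented for illustration.
# Keys give the row player's payoff for (my move, their move).

MAD          = {"CC": 1, "CD": -1000, "DC": -2000, "DD": -2000}
FIRST_STRIKE = {"CC": 1, "CD": -1000, "DC":   100, "DD":  -100}

def matrix_at(t):
    """Row player's payoffs after fraction t of the evolution (0 <= t <= 1)."""
    return {k: (1 - t) * MAD[k] + t * FIRST_STRIKE[k] for k in MAD}

def defection_dominates(m):
    """D strictly beats C whether the opponent cooperates or defects."""
    return m["DC"] > m["CC"] and m["DD"] > m["CD"]

for step in range(101):
    if defection_dominates(matrix_at(step / 100)):
        print(f"defection becomes dominant at t = {step / 100:.2f}")
        break
```

Under these invented starting values, defection becomes dominant only quite late in the evolution, which is exactly the frog-in-the-kettle problem: each small step looks tolerable until the table has quietly flipped.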
I submit that one can put off attacking one's opponent by
noticing that real life is not a zero-sum game, and that any
delay is itself a payoff. Thus, IMO, the situation becomes
very similar to the finitely iterated PD.
The finitely iterated PD goes like this. You and the other
contestant get to play the following table 200 times:
            C          D
   C    300,300      0,500      (denominations in
   D    500,0      100,100       Euros or dollars)
Now on the last round (for those who haven't seen this sort
of thing), there is no incentive to cooperate. (The reason
that there is some incentive to cooperate in earlier rounds
is that it is profitable for both sides to milk the situation
for a while.) But once you realize that there is no incentive
to C in the last round, its play is fixed at D regardless of
history; the round before it then becomes effectively the last,
so there is no incentive to C there either, and so on all the
way back.
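The unravelling can be sketched directly. This is my own illustration, not from the post, and it assumes one-round payoffs in the standard PD ordering (temptation 500 > reward 300 > punishment 100 > sucker 0), under which D strictly dominates C in any single round.

```python
# Backward-induction sketch for the finitely iterated PD.  Payoffs assume
# the standard ordering T=500 > R=300 > P=100 > S=0, so that D strictly
# dominates C in every single round.

PAYOFF = {("C", "C"): 300, ("C", "D"): 0,      # row player's payoff
          ("D", "C"): 500, ("D", "D"): 100}

def dominant_move():
    """Return the move that is strictly best against both opponent moves."""
    for mine, other in (("C", "D"), ("D", "C")):
        if all(PAYOFF[(mine, opp)] > PAYOFF[(other, opp)] for opp in "CD"):
            return mine
    return None

def backward_induction(rounds):
    """Since the dominant move is played in the last round no matter what,
    the next-to-last round is effectively the last, and so on: every round
    collapses to mutual defection."""
    move = dominant_move()                    # "D" in every round
    return move, rounds * PAYOFF[(move, move)]

move, total = backward_induction(200)
print(move, total)   # D 20000 -- 200 rounds of mutual defection
```

Each player thus collects 200 rounds of the punishment payoff, instead of the far larger sum that sustained cooperation would have paid both sides.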
So just when *would* you defect? (Of course, for newbies
in game theory, it is not allowed to consult your altruism
module.)
Lee
This archive was generated by hypermail 2.1.5 : Wed Jan 15 2003 - 17:58:18 MST