Re: How You Do Not Tell the Truth

From: hal@finney.org
Date: Thu May 03 2001 - 19:18:36 MDT


Eliezer S. Yudkowsky, <sentience@pobox.com>, writes:
> Essentially, Robin's paper gives a rigorous mathematical proof that for
> two people to (a) disagree and (b) maintain their disagreement after
> interaction, one or both of the parties must believe that they are more
> likely to be rational than the other person.

Do they have to assume that the other party is *not* rational, or is it
enough that the other party is *less* rational than they are? (The first,
right?)

And by rational, do you mean being a Bayesian truth-seeking truth-teller?

Robin describes it as, "Thus the decision to disagree with someone
depends crucially on an estimate that one is more meta-rational, i.e.,
that one is more likely to be a truth-seeker with a rational core who
understands the irrationality of disagreement."

This suggests that being a Bayesian truth-seeker is not enough; it
is also necessary to understand the irrationality of disagreement.
Of course, since Bayesians draw all correct conclusions from existing
data, they have full, implicit knowledge of all of mathematics and hence
were completely aware of this result long before Aumann published.

One thing which is unclear in the paper is how you should react if you
become convinced that you and the other party are both rational
and fully informed about these results. (Or, if it seems unreasonable
to suppose that any human beings can reach this state, consider it a
hypothetical question to ask how true Bayesians would interact.) As a
practical matter, what means do you apply to resolve your disagreement?
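
Geanakoplos and Polemarchakis analyzed one concrete means in "We
Can't Disagree Forever": the parties take turns announcing their
current posteriors, each announcement publicly rules out the states
in which the speaker would have announced something else, and after
finitely many rounds the announcements must coincide. Here is a toy
run of that back-and-forth in Python; the states, event, and
partitions are my own invention, chosen just so the two start out
disagreeing:

    from fractions import Fraction

    states = {1, 2, 3, 4}           # equally likely under the common prior
    event = {1, 2}                  # the proposition whose probability is at issue
    alice_part = [{1, 2}, {3, 4}]   # what Alice's private information distinguishes
    bob_part = [{1, 3}, {2, 4}]     # what Bob's private information distinguishes
    true_state = 1

    def cell(partition, w):
        # The information set an agent holds when the world is in state w.
        return next(c for c in partition if w in c)

    def opinion(partition, w, possible):
        # Posterior probability of the event, given the agent's private
        # information and the states still consistent with the dialogue.
        info = cell(partition, w) & possible
        return Fraction(len(info & event), len(info))

    possible = set(states)          # states consistent with all announcements so far
    round_no = 0
    while True:
        qa = opinion(alice_part, true_state, possible)
        qb = opinion(bob_part, true_state, possible)
        print(f"round {round_no}: Alice {qa}, Bob {qb}")
        if qa == qb:
            break
        # Alice's announcement rules out every state in which she would
        # have announced something else; Bob then updates and announces
        # in turn.
        possible = {w for w in possible
                    if opinion(alice_part, w, possible) == qa}
        qb = opinion(bob_part, true_state, possible)
        possible = {w for w in possible
                    if opinion(bob_part, w, possible) == qb}
        round_no += 1

This prints Alice at 1 and Bob at 1/2 in round 0, and both at 1 in
round 1. Here Bob capitulates after a single exchange, because
Alice's announcement happens to pin down everything he needs; in
general both parties move, and notice that neither ever exhibits the
evidence behind his opinion. The announcements alone carry the
information.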

Obviously one of you has a more informed opinion, which is the ultimate
cause of the disagreement. You share common priors, but you have
different information (some of which is genetic and not accessible to
your conscious mind). How do you determine which view is more accurate?
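
To make this concrete, here is a toy beta-binomial sketch (my own
example, not anything from Robin's paper): two agents share a uniform
prior on a coin's bias but privately see different flips.

    from fractions import Fraction

    def posterior(heads, tails, a=1, b=1):
        # A Beta(a, b) prior updated on the observed flips.
        return a + heads, b + tails

    def mean(a, b):
        # Posterior mean of a Beta(a, b) distribution.
        return Fraction(a, a + b)

    a_post = posterior(7, 3)            # A saw 7 heads in 10 flips
    b_post = posterior(20, 80)          # B saw 20 heads in 100 flips
    pooled = posterior(7 + 20, 3 + 80)  # all the evidence at once

    print(mean(*a_post), mean(*b_post), mean(*pooled))
    # prints: 2/3 7/34 1/4

The pooled estimate (1/4) is no simple average of the two opinions
(that would be about 0.44); it leans heavily toward B, who holds ten
times the data. "Which view is more accurate" is really "which view
encodes more evidence", and the difficulty is that neither party can
directly inspect how much evidence stands behind the other's opinion.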

The issue is especially difficult because the theorem not only requires
agreement to be reached eventually, but also constrains the path which
must be followed: "Even though agreement may eventually be reached,
rationality is still violated if John and Mary are able to predict
differences of opinions early in the discussion." Seemingly this would
mean that if Bayesians John and Mary gradually converge on a
compromise they both accept, they violate the theorem, because they
could have predicted this path. Yet it seems obvious that something
like this must be how agreement is reached.
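
I read that constraint as the martingale property of Bayesian
beliefs: your current posterior is already the expected value of your
future posterior, so you may know that you will converge without
being able to predict in which direction. A quick exact check, using
the same beta-coin setup as above (again my own sketch):

    from fractions import Fraction

    def post_mean(a, b):
        # Posterior mean of a Beta(a, b) distribution.
        return Fraction(a, a + b)

    a, b = 8, 4                      # current posterior Beta(8, 4)
    m = post_mean(a, b)              # current estimate: 2/3

    p_head = m                       # predictive probability of the next head
    expected = (p_head * post_mean(a + 1, b)           # posterior if heads
                + (1 - p_head) * post_mean(a, b + 1))  # posterior if tails

    assert expected == m             # exactly 2/3: no predictable drift

So what the theorem forbids is not a gradual path to agreement, but a
path whose direction either party could predict in advance. John and
Mary can each know that the other's next announcement will move them,
without being able to say which way.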

> If a 100,000:1 genius is interacting with a 10,000,000:1 genius, but
> neither of them knows the other's percentile, both will rationally assume
> that they are more likely to be rational than the other person. However,
> Robin's paper does prove that in *most* cases, rather than in the rarer
> instances where two geniuses unknowingly interact, people must be
> overestimating their own rationality relative to others, or else must not
> be using rigorous Bayesian reasoning with respect to what they are
> licensed to conclude from their own thoughts.

It seems that if both parties clearly explain their understanding of
this paradoxical result, each would be forced to accept that the other
had made at least a prima facie case for being rational enough for the
result to apply. They should then feel mutually bound to reach agreement.

Hal


