From: CurtAdams@aol.com
Date: Tue May 08 2001 - 17:54:41 MDT
In a message dated 5/8/01 6:37:53 AM, rhanson@gmu.edu writes:
>You were addressing updating on the fact that someone else received
>the prior they did, not on the fact of yourself receiving the prior
>you did. I again point you to: http://hanson.gmu.edu/prior.pdf or .ps
Your consistency assumption is a huge one: that a prior is obtained by
updating the uberprior with the fact of the prior assignment. This
assumption is a generalized equivalent of my condition that prior
probabilities must be a particular function of the world state.
Essentially, you assume that all priors are extremely well founded. An
easy example where this is false: the uberprior offers two possible
priors (25% vs. 75% for Q) but gives Q a 50% chance regardless of which
prior is assigned. Updating the uberprior on the fact of the prior
assignment then leaves the expected worldstate probability unchanged at
50%, at variance with the obtained prior of 25% or 75%.
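To make the arithmetic concrete, here's a minimal Python sketch of the
example (the 50/50 split between the two assignments is my assumption;
the example doesn't fix it, and it doesn't affect the conclusion):

    # The uberprior is a joint distribution over (assigned prior, Q).
    # Assume each assignment is equally likely; Q has probability 0.5
    # independent of which prior is assigned.
    joint = {}
    for assigned in (0.25, 0.75):
        joint[(assigned, True)] = 0.5 * 0.5    # P(assignment) * P(Q)
        joint[(assigned, False)] = 0.5 * 0.5   # P(assignment) * P(not Q)

    def updated_uberprior(assigned):
        # P(Q | this prior was assigned), per the uberprior
        p_assigned = sum(p for (a, q), p in joint.items() if a == assigned)
        return joint[(assigned, True)] / p_assigned

    for assigned in (0.25, 0.75):
        print(assigned, updated_uberprior(assigned))
    # Prints 0.25 0.5 and 0.75 0.5: the updated uberprior stays at 50%
    # either way, at variance with the assigned prior.

No amount of updating on the assignment recovers the 25% or 75% prior,
so the consistency assumption fails here.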
I haven't proved it yet, but if you apply your condition to my
simplified situation, you're going to extract my prior probability
requirement. Because your condition is more generally applicable, it's
not immediately obvious what an incredibly strong assumption it is.
Another way of stating it is that you effectively assume everybody has
the same prior (the uberprior), which has only been altered by events in
perfect accordance with Bayesian inference, even though the inference
process is cloaked; i.e., private information. Given that assumption,
yes, it is rational for John and Mary to update their beliefs on each
other's priors; the differences between their priors result only from
good Bayesian inference on honest data. But that is not my experience of
priors, at all; as far as I can tell, they're pretty random even where
humans should have built-in data, like social interaction, and
completely random for, say, chemistry.
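For what it's worth, here's a toy Python sketch of the situation your
assumption does license (the signal structure and accuracy number are
mine, purely for illustration): John and Mary share the uberprior, each
forms a "prior" by updating on one private noisy signal, and since each
prior is a deterministic function of a signal, exchanging priors is just
exchanging honest data:

    import random

    P_Q = 0.5   # shared uberprior on Q
    ACC = 0.8   # P(a signal matches the truth)

    def update(p_q, signal):
        # Bayesian update of P(Q) on one noisy binary signal
        like_q = ACC if signal else 1 - ACC        # P(signal | Q)
        like_not_q = 1 - ACC if signal else ACC    # P(signal | not Q)
        return like_q * p_q / (like_q * p_q + like_not_q * (1 - p_q))

    random.seed(0)
    q = random.random() < P_Q
    john_sig = (random.random() < ACC) == q
    mary_sig = (random.random() < ACC) == q

    john_prior = update(P_Q, john_sig)
    mary_prior = update(P_Q, mary_sig)

    # Seeing the other's "prior" reveals the other's signal, so each
    # can update on it as ordinary evidence; they end up agreeing.
    print(update(john_prior, mary_sig), update(mary_prior, john_sig))

Of course, my claim is precisely that real priors don't arise this way.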
Technically, you're right that we agree: we've both shown that a strong
restriction on prior assignments is necessary and sufficient for
Bayesian agents holding such priors to hold common beliefs on
encountering each other. [Strictly, I showed it's necessary and you
showed it's sufficient, but the inversions are trivial.] Your derivation
is more general and complete than mine. However, the restriction is
generally false in the real world; that's immediately obvious the way I
state it, but not so obvious the way you do.