Curt Adams wrote:
> >You were addressing updating on the fact that someone else received
> >the prior they did, not on the fact of yourself receiving the prior
> >you did. I again point you to: http://hanson.gmu.edu/prior.pdf or .ps
>
>You make a huge assumption with the consistency assumption: that a prior is
>obtained by updating the uberprior on the fact of the prior assignment.
>... you assume that all priors are extremely well founded. An easy example
>where this is false: the uberprior offers two possible priors (25% for Q
>vs. 75% for Q), while the actual chance of Q is 50% regardless of which
>prior is assigned. ...
>Another way of stating it is that you effectively assume everybody has the
>same prior (the uberprior) ... Given that assumption, yes, it is
>rational for John and Mary to update their beliefs on each other's priors;
>the differences between their priors result only from good Bayesian
>inference on honest data. But that is not my experience of priors at all;
>as far as I can tell, they're pretty random even in domains where humans
>should have built-in data, like social interaction, and completely random
>for, say, chemistry. ... the restriction is generally false in the real
>world; ...
I was making a *normative* argument about rational beliefs, not a descriptive
model of actual beliefs. Yes, people disagree, and that can't be explained
by assuming people are Bayesians with the same priors. That has been the
whole point of this discussion. I argue that not all possible priors are
rational -- rational priors should be obtained from "uberpriors" via
conditioning. I do *not*, in fact, assume that everyone has the same
uberprior, but I do describe plausible constraints on such uberpriors that
imply common priors.
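
To state the consistency condition concretely, here is a minimal sketch in
my own notation (illustrative, not a full statement of the framework in the
paper linked above): let $P$ be an agent's uberprior, and let $E_i$ be the
event that nature assigns agent $i$ the prior $p_i$. Rationality, in my
sense, requires

$$p_i(A) = P(A \mid E_i) \quad \text{for every event } A.$$

Curt's example violates exactly this condition: there
$P(Q \mid E_{25\%}) = P(Q \mid E_{75\%}) = 0.5$, so neither assigned prior
equals the conditioned uberprior, and on my account such priors are
irrational rather than counterexamples.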
Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323