Opinions as Evidence: Should Rational Bayesian Agents Commonize Priors?

From: CurtAdams@aol.com
Date: Sun May 06 2001 - 22:05:43 MDT


I consider the situation of two uninformed Bayesian agents becoming aware of
differences in priors about a binary world-state. I derive each agent's
confidence function in the world-state upon learning the other's prior and
show that the confidence functions coincide only if priors are drawn from
one particular class of distributions strongly informative about the
world-state. I give a non-rigorous argument that the result generalizes to
a world with any finite number
of possible states and to Bayesian agents with access to common public
information. I present arguments for the conjecture that the result further
generalizes to worlds with infinite states and to Bayesians with private
information. Hence, rational Bayesians with initially differing priors
should continue to disagree even when fully informed of each other’s beliefs.

Take a world with two possible states, Q and ~Q. Assume two Bayesian agents
with prior degrees of belief in Q, denoted as A and B, and degrees of belief
in ~Q of (1-A) and (1-B). For notational convenience in text transmission, I
denote the degree of belief of a Bayesian agent with initial degree of belief
X, given information Y, as (X)Y. Priors provide information in that they
depend on the state of the world; for any given prior P there is a
probability (P|Q) of that prior in worlds with Q and (P|~Q) in worlds with
~Q. This defines a function f(P) = (P|Q)/(P|~Q), which simplifies the
notation for confidence functions upon learning of a new agent with prior P.
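
For concreteness, here is a minimal Python sketch of f under one assumed
prior-generating model; the densities below are my arbitrary choice for
illustration, not part of the argument.

    # Illustration only: assume priors A are drawn with density 3*A^2 in
    # worlds with Q and density 3*(1-A)^2 in worlds with ~Q.  Then f(A) is
    # the likelihood ratio (A|Q)/(A|~Q).

    def f(a):
        return (3 * a**2) / (3 * (1 - a)**2)

    print(f(0.5))  # 1.0 -- a prior of 0.5 is equally likely either way
    print(f(0.9))  # ~81 -- a high prior is far more likely in worlds with Q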

On learning of B, A should now have a degree of belief in Q proportional to
A*(B|Q) and a degree of belief in ~Q proportional to (1-A)*(B|~Q).
Normalizing A's total degree of belief to 1 and using f(B) to simplify
notation, I derive:
(A)B = A*f(B)/(A*f(B) + (1-A))    (1)

And by symmetry

(B)A = B*f(A)/(B*f(A) + (1-B))    (2)
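
In code, equations (1) and (2) are a one-line update; the f below is the
illustrative one from the sketch above, and the numbers are arbitrary.

    # Confidence in Q for an agent with prior a who learns the other agent's
    # prior b, per equation (1); swapping the arguments gives (2).  f can be
    # any likelihood-ratio function (P|Q)/(P|~Q).

    def confidence(a, b, f):
        return a * f(b) / (a * f(b) + (1 - a))

    f = lambda x: x**2 / (1 - x)**2   # illustrative f from the sketch above
    A, B = 0.3, 0.8
    print(confidence(A, B, f))   # (A)B ~ 0.87
    print(confidence(B, A, f))   # (B)A ~ 0.42 -- the two do not coincide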

If A and B commonize their priors on learning of each other, then (A)B = (B)A.
Solving for f:

A*f(B)/(A*f(B) + 1-A) = B*f(A)/(B*f(A) + 1-B)
A*f(B)*(B*f(A) + 1-B) = B*f(A)*(A*f(B) + 1-A)
A*B*f(A)*f(B) + A*(1-B)*f(B) = A*B*f(A)*f(B) + B*(1-A)*f(A)
A*(1-B)*f(B) = B*(1-A)*f(A)
f(A)/f(B) = A*(1-B)/(B*(1-A))    (3)
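
A quick numerical check of the derivation; the two f functions below are my
own, chosen only to satisfy or to violate (3), and the test values are
arbitrary.

    # An f proportional to A/(1-A) satisfies (3) for every pair (A, B), so
    # the two confidence functions coincide; a different f does not.

    def confidence(a, b, f):
        return a * f(b) / (a * f(b) + (1 - a))

    f_good = lambda x: 2.5 * x / (1 - x)   # f(A)/f(B) = A*(1-B)/(B*(1-A))
    f_bad  = lambda x: x**2 / (1 - x)**2   # violates (3)

    A, B = 0.3, 0.8
    print(confidence(A, B, f_good) - confidence(B, A, f_good))  # ~0
    print(confidence(A, B, f_bad)  - confidence(B, A, f_bad))   # far from 0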

Since (3) must hold for arbitrary A and B, f(A)/(A/(1-A)) must be constant;
that is, f(A) = cA/(1-A) (4) for some constant c. Hence priors must be
highly dependent on the world-state. In particular,
completely uninformed individuals are almost never profoundly wrong: given
world-state Q, the chance of a prior with low A (relatively strong disbelief
in the actual world-state) goes to 0 as A goes to zero, and does so rather
rapidly. This disagrees markedly with actual experience, which shows most
completely uninformed people have very incorrect beliefs.
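
To make "rather rapidly" concrete, take the simplest density pair giving
f(A) = A/(1-A), namely p(A|Q) = 2A and p(A|~Q) = 2(1-A) on [0,1]; this
particular pair is my own choice for illustration.

    # With p(A|Q) = 2A, the CDF of a prior given Q is A^2, so the chance of
    # strong disbelief in the actual world-state shrinks quadratically.

    cdf_given_Q = lambda a: a**2    # P(prior < a | Q)
    print(cdf_given_Q(0.1))         # ~0.01   -- only 1% of uninformed agents
    print(cdf_given_Q(0.01))        # ~0.0001 -- essentially nobody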

Let us suppose the agents with priors A and B have access to additional
public information E prior to learning each other’s priors. If they concur
at this stage, then ((A)E)B = ((B)E)A. By standard Bayesian inference the
order of updates does not matter, so ((A)E)B = ((A)B)E and ((B)E)A =
((B)A)E; hence ((A)B)E = ((B)A)E. Since updating on E is an invertible
(strictly increasing) function of the pre-E belief, (A)B = (B)A. Hence two
Bayesian agents with access to public information will
concur only under exactly the same restrictive conditions required to concur
in the absence of public information.
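
The step ((A)E)B = ((A)B)E is just the fact that Bayesian updating in odds
form is multiplication by likelihood ratios, which commutes; a small check
with arbitrary numbers:

    # Updating on public evidence E multiplies the odds by a likelihood
    # ratio L_E; learning the other agent's prior B multiplies them by f(B).
    # Multiplication commutes, so the order of the two updates is irrelevant.

    def update(belief, likelihood_ratio):
        odds = belief / (1 - belief) * likelihood_ratio
        return odds / (1 + odds)

    A   = 0.3
    L_E = 4.0   # p(E|Q)/p(E|~Q), arbitrary
    f_B = 0.7   # f(B) = (B|Q)/(B|~Q), arbitrary

    print(update(update(A, L_E), f_B))   # ((A)E)B
    print(update(update(A, f_B), L_E))   # ((A)B)E -- the same number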

A world with three states Q, R, and S can be described with the binary
beliefs Q/~Q and R/~R, given that Q implies ~R. For two agents to agree on
degrees of belief in all three states, they must concur on both Q and R.
The requirements to concur on the binary beliefs Q and R are as above. By
induction, two agents will concur in a finite multi-state world only if,
for each particular state N, the likelihood ratio for the degree of belief
AN in that state, f(AN) = (AN|N)/(AN|~N), follows condition (4): f(AN) =
c*AN/(1-AN).
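
One family that does produce agreement in the multi-state case (my own
construction, offered only as an illustration of how informative the priors
must be): let the density of a prior vector given state N be proportional to
its Nth component. Each agent's updated belief is then the normalized
componentwise product of the two priors, so the beliefs coincide.

    # Three-state illustration with an assumed prior-generating family in
    # which p(other agent's prior | N) is proportional to that prior's Nth
    # component, so each posterior is the normalized componentwise product.

    def learn_other(own, other):
        unnorm = [p * q for p, q in zip(own, other)]   # own_N * p(other|N)
        total = sum(unnorm)
        return [u / total for u in unnorm]

    A = [0.2, 0.5, 0.3]   # agent A's prior over (Q, R, S)
    B = [0.6, 0.1, 0.3]   # agent B's prior over (Q, R, S)
    print(learn_other(A, B))   # A's belief after learning B
    print(learn_other(B, A))   # B's belief after learning A -- identical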

Even if the world consists of an infinite set of possible states, these can
be partitioned into two sets, each of which can be partitioned into two
sets, etc., leading to a sequence of binary possibilities Q1, Q2, Q3, …
For two agents to concur on the confidence function over all such sets, it
seems intuitive that they must concur on each of Q1, Q2, Q3, …, with each
concurrence requiring that the distribution of priors on each Qi follow
condition (4).

Finally, if Bayesian agents with private information meet, the conclusion
above follows by replacing prior A with private-informed degree of belief A.
The requirement in (4) still holds, except that now private-informed beliefs,
rather than priors, must follow the distribution. I conjecture that if, in
some world, (4) holds for private-informed beliefs, gain or loss of private
information would in general cause (4) to no longer hold.

My personal experience is that priors and private information are only weakly
informative; i.e., even when world-state Q obtains, it isn't particularly
difficult to find uninformed individuals with strong disbelief in Q. Given
this, the probability of a given degree of belief A in Q varies only mildly
with whether Q obtains. Hence the information conveyed by another person's
holding a particular degree of belief in Q is small, and a rational
Bayesian should make only a small change in belief on learning another's
opinion. Rational
Bayesians, then, generally should maintain differences of opinion due to
differences in priors. Under most circumstances, for two agents to commonize
priors requires a violation of Bayesian inference.
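
A final sketch of the magnitudes involved, with numbers invented purely for
illustration: a weakly informative f barely moves belief, while the strongly
informative f of condition (4) moves it a great deal.

    # Compare a weakly informative f (stays near 1; realizable with
    # p(A|~Q) uniform and p(A|Q) = 0.9 + 0.2*A) against condition (4).

    def confidence(a, b, f):
        return a * f(b) / (a * f(b) + (1 - a))

    f_weak   = lambda x: 0.9 + 0.2 * x   # ranges only over [0.9, 1.1]
    f_strong = lambda x: x / (1 - x)     # condition (4) with c = 1

    A, B = 0.3, 0.9
    print(confidence(A, B, f_weak))      # ~0.32 -- barely moved from 0.3
    print(confidence(A, B, f_strong))    # ~0.79 -- a large shift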


