From: Robin Hanson (hanson@hss.caltech.edu)
Date: Tue Mar 25 1997 - 18:28:19 MST
Lee Daniel Crocker responded to my response:
>> If someone is a reasonable person, then the fact that they have come
>> to some opinion is information to you. It is an important clue.
>
>Yes, that's what I thought I said. It's a clue; one among many.
>But since I'm the one affected most by my decisions, and I'm the one
>who has to take responsibility for them, I'm the only one I allow
>to make the final decision as to which clues have better proof. I
>give a very high weight to my own observations and experiences over
>the opinions of others. ...
>If you ask how many people whose opinions--in the absence of any
>direct experience of mine--I value so highly that I might actually
>re-examine my own ideas in light of their disagreement, then yes,
>there are very few of those indeed.
I think you are making a serious cognitive error here. Briefly: the
fact that you make final decisions is irrelevant to the weight you
should give to the opinions of others. This weight can be objectively
calculated in principle, and is typically large, as Damien S. said.
Since your post was endorsed by Michael Lorrey, ShawnJ99@aol.com, and
GeoffCobb@aol.com, I'll try to explain in some detail.
Imagine we have some set of alternative theories of how the world is,
and also have some set of relevant evidence. Now consider asking:
does this evidence lend support to one of these theories relative to
the others? If so, how much support?
There is relatively widespread agreement that the answers to these
questions are not a matter of personal taste. Roughly: each
theory can be used to generate expectations regarding the sorts of
possible evidence one is likely to observe, and theories which suggest
a higher likelihood for observing the actual evidence are better
supported. Bayesian decision theory is an exact framework for making
such calculations, but there are many other approaches which roughly
agree in the standard cases.
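To make this concrete, here is a minimal sketch of that calculation
in Python, with made-up numbers standing in for the theories and the
evidence:

    # Two rival theories, A and B, given equal prior credence.
    prior = {"A": 0.5, "B": 0.5}

    # Each theory says how likely the actually observed evidence was.
    likelihood = {"A": 0.8, "B": 0.2}

    # Posterior credence is proportional to prior times likelihood.
    unnorm = {t: prior[t] * likelihood[t] for t in prior}
    total = sum(unnorm.values())
    posterior = {t: unnorm[t] / total for t in unnorm}

    print(posterior)  # roughly {'A': 0.8, 'B': 0.2}: the evidence supports A

The ratio of the likelihoods does all the work; nothing about whose
decision it is, or who bears responsibility for it, enters the
calculation.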
Regarding what weight to give to the opinions of others, the key point
is: opinions are just another kind of evidence. To decide how much
support any one person's opinion gives to any one theory, you ask:
if this theory were true, how likely would it be for this person to
come to this opinion? If this likelihood is relatively high, this
opinion supports this theory. This degree of support is not a matter
of personal taste, nor does it depend on your level of "responsibility".
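In odds form, with made-up numbers, the update from hearing one
person's opinion is just:

    # Suppose a reasonable person would favor theory A with chance 0.7
    # if A were true, but with chance only 0.3 if it were false.
    prior_odds = 1.0                # A vs. not-A, starting even
    likelihood_ratio = 0.7 / 0.3    # how diagnostic their opinion is
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds)           # about 2.3 to 1 in favor of A

The numbers are illustrative; the point is only that the opinion
enters exactly like any other observation.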
Of course people are complicated, so there are a lot of details to
consider in estimating such chances. You should in principle consider
whether this person just likes to be a contrarian or a conformist,
whether you expect them to make many or few cognitive mistakes in this
area, how much weight they give to other people's opinions, what other
evidence they may have seen, whether they might want to deceive you
and others regarding their opinions, and whether they know of your
opinion. Of course you may well want to use heuristic approximations
here in place of detailed exact calculations.
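To give a rough sense of how such details enter: suspected
contrarianism or frequent mistakes just push the likelihood ratio of
this person's opinion toward one, shrinking the update it justifies
(numbers again made up).

    # The less reliable the person, the closer their opinion's
    # likelihood ratio is to one, and the less it moves your odds.
    def opinion_update(prior_odds, p_if_true, p_if_false):
        return prior_odds * (p_if_true / p_if_false)

    print(opinion_update(1.0, 0.70, 0.30))  # careful reasoner: 2.3 to 1
    print(opinion_update(1.0, 0.55, 0.45))  # error-prone: only 1.2 to 1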
There are lots of explicit models of situations like these, most but
not all using Bayesian inference, and the robust result is that other
people's opinions should be given a lot of weight, typically much more
than the weight given to your personal experience. In fact, it turns
out to be difficult to explain persistent divergence in opinions.
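A toy version of such a model, with made-up numbers: suppose your own
direct experience is fairly diagnostic, and each of ten other people's
opinions is only mildly diagnostic. If their opinions were formed
independently (real opinions are only partly so), their combined
weight still swamps yours:

    # Your observation vs. ten independent, modestly informed opinions.
    my_ratio = 0.8 / 0.2            # your experience: 4 to 1
    their_ratio = 0.6 / 0.4         # each of their opinions: 1.5 to 1
    combined = their_ratio ** 10    # together: about 58 to 1
    print(my_ratio, combined)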
For example: should your first vehicle purchase be a car, pickup, or
motorcycle? Well, you might rent or borrow one of each for a week to
try them out, but even then there's a lot you wouldn't know about how
comfortable they'd feel after a year of driving, how much trouble they
are to keep up, how dangerous they are, etc. You are well advised to
take the vehicle choices of more experienced people around you as a
strong clue regarding these uncertainties.
Or consider the winner's curse in common value auctions (such as for
oil tracts or art for resale). You're a fool not to consider the fact
that winning the auction tells you that everyone else made a lower bid
than you, and likely has a lower estimate of the item's value.
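Here is a quick simulation of the effect, under simplified
assumptions (a common value, noisy unbiased estimates, and naive
bidders who simply bid their estimates):

    import random

    random.seed(0)
    true_value = 100.0
    winning_estimates = []
    for auction in range(10000):
        # Ten bidders each get a noisy unbiased estimate of the value.
        estimates = [random.gauss(true_value, 20) for _ in range(10)]
        winning_estimates.append(max(estimates))  # high estimate "wins"

    print(sum(winning_estimates) / len(winning_estimates))
    # roughly 130: the winner's estimate systematically overshoots 100

Rational bidders shade their bids precisely to correct for this
selection effect.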
Similarly, while you might abstractly reason that "paternalism"
(forcing people to do things that are good for them, rather than just
advising them) is a bad idea, the experience of billions of parents,
teachers, employers, and local governments over thousands of years
contains much relevant information. Why do they continue in this
practice if it is
worse than an obvious and widely tried alternative? ("For their own
benefit, not for those they `help'", is not a good answer. I have a
paper on a better answer.)
Robin D. Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/