From: Lee Daniel Crocker (lcrocker@calweb.com)
Date: Tue Mar 25 1997 - 22:26:54 MST
> I think you are making a serious cognitive error here. Briefly: the
> fact that you make final decisions is irrelevant to the weight you
> should give to the opinions of others. This weight can be objectively
> calculated in principle, and is typically large, as Damien S. said.
That is indeed an error, but not a serious one. Let me correct: I have
the final responsibility for my decisions, so I am obligated to use the
best methods I can to choose a course of action. Independently of that,
I also happen to give little weight to the opinions of others, which
may or may not be a mistake on my part.
I will also admit that it is quite possible that the number of
decisions I make using conscious reasoning is numerically overwhelmed
by those I make subconsciously by simply following whatever memes
happen to be floating in the soup. I do not stop to reason the way I
walk, or the way I open a door, or the way I reach for something on
a shelf. I suppose if I cared about such things I could study the
orthopedics involved and find effective ways, but I don't. I just
do it the way I do it. In that sense, it is possible that I /do/
in fact "listen" to the opinions of others in the way I hold a phone
to my ear, because I subconsciously do it the way I see it done.
But in the subset of decisions I make consciously, I do not.
That implies some method of determining which decisions call for my
attention and which do not. I don't think I have any clear view
of that, so that decision seems to be in the automatic set. I'll
have to think about that. I can certainly imagine heuristics: how
large an effect will this decision have on my future? How likely
do I think it is that the majority opinion might be wrong? It may
be that the questions I have chosen to examine closely in my life are
precisely those for which following the majority has failed me, so
my selective memory tends to devalue majority opinion as a result.
> Imagine we have some set of alternative theories of how the world is,
> and also have some set of relevant evidence. Now consider asking:
> does this evidence lend support to one of these theories relative to
> the others? If so, how much support?
>
> There is relatively widespread agreement that the answers to these
> questions are not a matter of personal taste. Roughly: each
> theory can be used to generate expectations regarding the sorts of
> possible evidence one is likely to observe, and theories which suggest
> a higher likelihood for observing the actual evidence are better
> supported. Bayesian decision theory is an exact framework for making
> such calculations, but there are many other approaches which roughly
> agree in the standard cases.
There are many complications here as well. What is the likelihood
that /any/ theory of the set is correct? All of human knowledge
from all of history doesn't amount to a hill of beans compared to
what there is to know. Also, how likely is it that human perception
is wired to make evaluating a particular piece of evidence difficult
(e.g., our motion-detection circuitry fooling us into believing that
the images we see on the movie screen are actually moving)? And how
likely is it that human psychology will predispose us to believe
things in spite of evidence (a widespread phenomenon which I can't
possibly justify discounting)? You mention others as well, such as
the possibility of deliberate deception.
> There are lots of explicit models of situations like these, most but
> not all using Bayesian inference, and the robust result is that other
> people's opinions should be given a lot of weight, typically much more
> than the weight given to your personal experience. In fact, it turns
> out to be difficult to explain persistent divergence in opinions.
Doesn't this treat "evidence" as if it were a continuous quantity?
As if it just piles up in front of each theory to give it weight (in
fact, the very phrase "weight of evidence" assumes this). But history
and experience show us that not all evidence is created equal: one
measurement outweighs 1000 expert opinions. If your mathematical
model shows that generally taking seriously the opinions of others
tends to reliably steer you toward theories supported by the most
evidence, that is of no use if the theory with centuries of history comes
up against a single experiment that not everyone has seen yet, but that
inescapably discredits the majority. Centuries of the practical use
of Newtonian mechanics were no match for a single Michelson-Morley.
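(To make that last point concrete, here is a rough back-of-the-envelope
sketch in Python -- my own illustration with invented numbers, not anything
from the models you describe -- of how one observation with a huge
likelihood ratio can overturn the accumulated log-odds of a thousand weakly
informative opinions.)

# Minimal sketch, invented numbers: many weak signals vs. one decisive one.
import math

def log_odds_update(prior_log_odds, likelihood_ratios):
    """Add the log of each likelihood ratio to the prior log-odds."""
    return prior_log_odds + sum(math.log(lr) for lr in likelihood_ratios)

prior = 0.0                    # even odds between the old and new theory
opinions = [1 / 1.01] * 1000   # 1000 opinions, each only 1.01:1 for the old theory
experiment = [1e6]             # one experiment, a million to one for the new theory

after_opinions = log_odds_update(prior, opinions)               # about -10
after_experiment = log_odds_update(after_opinions, experiment)  # about +3.9

print(after_opinions, after_experiment)
# The single high-likelihood-ratio result flips the sign of the log-odds:
# that is the sense in which one measurement can outweigh 1000 expert opinions.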
> For example: should your first vehicle purchase be a car, pickup, or
> motorcycle? Well you might rent/borrow one of each for a week to try
> out, but even then there's a lot you wouldn't know about how
> comfortable they'd feel after a year of driving, how much trouble they
> are to keep up, how dangerous they are, etc. You are well advised to
> take the vehicle choices of more experienced people around you as a
> strong clue regarding these uncertainties.
A fair strategy, given the inability to observe directly which is
built into that scenario.
> Or consider the winner's curse in common value auctions (such as for
> oil tracts or art for resale). You're a fool not to consider the fact
> that winning the auction tells you that everyone else made a lower bid
> than you, and likely has a lower estimate of the item's value.
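(The mechanism you describe is easy enough to simulate. A rough sketch in
Python -- my own illustration with invented numbers: each bidder sees the
common value plus independent noise and naively bids their own estimate.)

import random

random.seed(1)
TRUE_VALUE = 100.0
N_BIDDERS = 10
N_AUCTIONS = 10000

overshoot = 0.0
for _ in range(N_AUCTIONS):
    estimates = [TRUE_VALUE + random.gauss(0, 10) for _ in range(N_BIDDERS)]
    overshoot += max(estimates) - TRUE_VALUE   # naive bid = own estimate

print(overshoot / N_AUCTIONS)
# With 10 bidders and estimate noise of std. dev. 10, the naive winner
# overpays by roughly 15 on average: winning is itself evidence that your
# estimate was on the high side.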
In the marketplace, opinions /are/ value, because that's what is being
measured. That says nothing at all about the strength of opinions on
other issues.
> Similarly, while you might abstractly reason that "paternalism",
> forcing people to do things good for them, vs. just advising them, is
> a bad idea, the experience of billions of parents, teachers,
> employers, and local governments of thousands of years contains much
> relevant information. Why do they continue in this practice if it is
> worse than an obvious and widely tried alternative? ("For their own
> benefit, not for those they `help'", is not a good answer. I have a
> paper on a better answer.)
(1) All the alternatives have /not/ been tried. (2) Existing practice
is a demonstrable failure in many cases. (3) They may keep practicing
it out of fear, psychological predisposition, insufficient knowledge
of available alternatives, incomplete understanding of the "evidence",
predisposition to favor the status quo and ignore or deny its failures,
and other reasons. (4) By your reasoning, nothing new and innovative
would ever happen, because that /requires/ imagining that every other
opinion on the planet might be wrong. Human progress depends on such
precious arrogance.
-- Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lcrocker.html>
"All inventions or works of authorship original to me, herein and past, are
placed irrevocably in the public domain, and may be used or modified for any
purpose, without permission, attribution, or notification." --LDC