From: Anders Sandberg (asa@nada.kth.se)
Date: Fri May 24 2002 - 08:41:17 MDT
Maybe I'm missing some of the context, since I'm not following the
big noisy thread at the center of the list right now, but I found this
question interesting in its own right, and besides, it gave me the
chance to do a small plug for my latest paper :-)
On Thu, May 23, 2002 at 02:16:41AM -0700, Wei Dai wrote:
> Consider a situation where you absolutely don't have time to judge someone
> on his own merits before having to make some decision. The only
> information you have is that he belongs to a certain group. Should you
> ignore that information and just treat him as a random human being?
Use Bayes' theorem to estimate the probability that he has property X
given membership in group G:

P(has property X | member of group G) = P(X and in G)/P(in G)
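For instance, with made-up numbers: if 1% of all people both have X
and belong to G, while 5% of all people belong to G, then
P(X|G) = 0.01/0.05 = 0.2, possibly far above a base rate P(X) of,
say, 0.02.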
Given the available information you choose the probability P(A|G) of
doing action A given membership in G. The expected cost is

C = P(A|G)(c1*P(X|G) + c2*(1-P(X|G))) + (1-P(A|G))(c3*P(X|G) + c4*(1-P(X|G)))
  = P(A|G)[(c1-c2-c3+c4)P(X|G) + c2-c4] + (c3-c4)P(X|G) + c4
where c1, c2, c3 and c4 are the costs of doing A when he has X, doing
A when he doesn't, not doing A when he has X, and not doing A when he
doesn't. This is linear in P(A|G) (real people have nonlinear
subjective cost functions, but let's skip that for now), so the lowest
cost is reached at one of the endpoints P(A|G) = 0 or 1. You should do
A if

(c1-c2-c3+c4)P(X|G) < c4-c2
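Here is a minimal Python sketch of this (the function names and all
the numbers further down are mine, invented for illustration):

    # Expected cost C as a function of p_act = P(A|G) and p = P(X|G),
    # with c1..c4 defined as in the text.
    def expected_cost(p_act, p, c1, c2, c3, c4):
        return (p_act * (c1 * p + c2 * (1 - p))
                + (1 - p_act) * (c3 * p + c4 * (1 - p)))

    # The linearity argument: do A (p_act = 1) exactly when
    # (c1 - c2 - c3 + c4) * p < c4 - c2.
    def should_act(p, c1, c2, c3, c4):
        return (c1 - c2 - c3 + c4) * p < c4 - c2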
If you just assume him to be a random person, P(X|G) is replaced by
P(X). If we assume P(X) is small, P(G) is relatively small, and people
with X are overrepresented in G, then in general P(X|G) will be larger
than P(X). If type II errors (you don't do A, but he is X) are costly
(large c3), then this will very likely increase your likelihood of
doing A. If type I errors (you do A, but he is not X) are costly
(large c2), then the situation is less clear, but it seems the formula
correctly predicts that you should do A less.
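Plugging made-up costs into the sketch above shows both regimes:

    # Costly type II error (c3 large): act iff -10*p < -1, i.e. p > 0.1,
    # so moving from P(X) = 0.02 to P(X|G) = 0.20 flips the decision.
    print(should_act(p=0.02, c1=1, c2=1, c3=10, c4=0))   # False
    print(should_act(p=0.20, c1=1, c2=1, c3=10, c4=0))   # True

    # Costly type I error (c2 large): act iff -10*p < -10, i.e. never,
    # so you indeed do A less.
    print(should_act(p=0.20, c1=1, c2=10, c3=1, c4=0))   # False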
The problem inherent in all this is of course that estimating P(X&G)
and P(G) on the fly is uncertain, especially if there is a benefit in
appearing to have property X. One can model this by making a prior
assumption P(C) about the probability that a member of G is cheating
(joining just to appear to have X). Assuming a cheater always joins G,
so P(G|C) = 1, and writing P(G) for the membership probability among
non-cheaters:

P(X|G) = P(X&G)/P(G)
       = [P(X&G|C)P(C) + P(X&G|not C)(1-P(C))] / [P(C) + (1-P(C))P(G)]
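As a toy version of this formula (with one more assumption of mine:
a cheater never actually has X, so P(X&G|C) = 0):

    # Discount P(X|G) by a prior probability p_c of cheating,
    # assuming P(G|C) = 1 and P(X&G|C) = 0.
    def p_x_given_g_with_cheaters(p_xg, p_g, p_c):
        numerator = p_xg * (1 - p_c)          # P(X&G|not C)(1 - P(C))
        denominator = p_c + (1 - p_c) * p_g   # P(C) + (1 - P(C))P(G)
        return numerator / denominator

    # With P(X&G) = 0.01 and P(G) = 0.05 as before, a 10% cheating
    # prior drops the estimate from 0.20 to about 0.062.
    print(p_x_given_g_with_cheaters(0.01, 0.05, 0.1))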
Of course, this leads to a kind of infinite regress, which is where we
normally make heuristic decisions. Nobody makes Bayesian calculations
in a critical situation. Instead we rely on the probability estimates
our brains have built up from experience (read all about my neural
network model of this in A. Sandberg, A. Lansner, K. M. Petersson and
Ö. Ekeberg, A Bayesian attractor network with incremental learning,
Network: Computation in Neural Systems, Volume 13, Number 2, May 2002,
http://www.iop.org/EJ/S/UNREG/8TKEdhOjCpOuc5KAEIflfA/toc/0954-898X/13/2 :-)
We are constantly updating our probability estimates of different
things and the likelihood of them co-occurring. The challenge is to
know how fast to update and when to actively search out more
information.
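A trivially simple sketch of that kind of running estimation (just the
flavor of the idea, not the model in the paper), in the same Python
style:

    # Nudge a running probability estimate toward each 0/1 observation;
    # the rate sets how fast old experience is forgotten.
    def update(estimate, observation, rate=0.05):
        return estimate + rate * (observation - estimate)

    p_x_and_g, p_g = 0.5, 0.5                  # agnostic starting guesses
    stream = [(1, 1), (0, 1), (0, 0), (1, 1)]  # (has X?, in G?) experiences
    for x_seen, g_seen in stream:
        p_x_and_g = update(p_x_and_g, x_seen * g_seen)
        p_g = update(p_g, g_seen)
    print(p_x_and_g / p_g)                     # current estimate of P(X|G)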
> What if he chose to join that group voluntarily? Does that change
> anything? What if he was born into that group but had the choice of
> leaving it, perhaps at some cost? Does it depend on the type of group? If
> so, which types should be considered, and which types should be ignored?
It is a game-theoretic issue. Some groups are fairer predictors
(i.e. P(C) is low) than others, but they might still not be good
predictors of X. It all comes down to having prior information when
making decisions, or - if you have time - looking at likely economic
and game-theoretic equilibria. Mathematically it is just more
calculation.
> I would welcome any suggestions on how to derive the answers to these
> questions from Extropian principles.
Hmm, what about this: We seek to maximize our utility (and possibly
the utility of others) [Perpetual Progress - i.e. we seek to achieve
good aims], so we decide to act [Self-Direction - nobody but us acts
for us, makes our decision, or bears responsibility for it] according
to the best information available, seeking to make a decision
that conforms to reality [Rational Thinking - making an approximation
to my above reasoning or some better line of reasoning]. We might
have somewhat more optimistic priors than others [Practical Optimism
- although the principle is more about how to act than what to
expect], but we seek to learn from our experiences to make these
actions produce good outcomes [Self-Transformation and a bit more
Rational Thinking]. Using technology and the knowledge of others
[Intelligent Technology and Open Society - note that my math is an
example of IT, and the open society is conducive to the flow of
information,
helping us to make rational decisions] we can reach even better
decisions.
The Principles are more of a moral framework than a direct
decision-making framework - they do not prescribe whether to use the
Axiom of Choice in math or not, but rather to what end we would like
to use it.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y