Re: copying related probability question

From: Nicholas Bostrom (bostrom@mail.ndirect.co.uk)
Date: Sat Sep 27 1997 - 18:20:19 MDT


I have at last had time to read the interesting postings by Wei, Hal
and Eliezer on this thread. Here are two preliminary remarks:

(1)
Hal wrote:
>In effect, cloning makes it a non-zero-sum game. The changes
>introduced by cloning in potential future earnings cause the gains
>or losses of the bet to be evaluated differently. The result is
>that it may be rational to take a bet which gives an expected profit
>to the other player.

The utility functions of the individual clones may
or may not assign equal utility to the money that goes to the other
clones. A clone need not think an outcome only half as good just
because it means he has to give half of the money to his clone; he
might, for example, regard his clone as another part of himself. Thus
the fact that earnings might need to be divided in the cloning
example is not essential to the rationality problem. It might
therefore be better to shift the example to one where this confusing
aspect does not occur.
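
To make this concrete, here is a minimal sketch (my own illustration,
not part of Hal's example; the weight lam and the dollar amounts are
hypothetical) of how much a clone's valuation of his counterpart's
money matters, in Python:

    # Hypothetical utility function: a clone gets `own` dollars and his
    # counterpart gets `other` dollars; lam says how much he values his
    # clone's money relative to his own (lam = 1: the clone counts as
    # another part of himself).
    def utility(own, other, lam):
        return own + lam * other

    # Splitting $100 evenly between the two clones:
    print(utility(50, 50, lam=0.0))   # 50.0  -- half as good as keeping $100
    print(utility(50, 50, lam=1.0))   # 100.0 -- just as good as keeping $100

With lam = 1 the division of the winnings drops out of the calculation
entirely, which is why it is inessential to the rationality problem.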

(2)
Wei wrote:
>Nicholas wrote:
>> This thread is closely related to the Carter-Leslie Doomsday
>> argument. I have a paper about this:
>>
>> http://www.hedweb.com/nickb/140797/doomsday.html
>>
>> (If anybody has any idea on how we might solve the so-called
>> "problem of the reference class", I would be all ears.)

>I agree with Nicholas that these two problems are closely related.
>The nice thing about the copying problem is that the reference class
>is very clear. It is the class of your (potential) clones.

If the thought experiment is supposed to occur in the actual world,
then there will also be other members of the reference class, namely
all those people not involved in the copying procedure. These are the
"outsiders". As I argued at length in the Doomsday paper, the
Doomsday argument only works (at full strength) if we postulate the
"no-outsider requirement": that there be no outsiders in the world.

> It's
>interesting that if answer set B is correct, then something like the
>self-indication axiom does apply in the copying problem.

Well, that is exactly how things would appear if there were a large
number of outsiders. Let us consider the following thought experiment
(due to John Leslie):

God's coin toss:
In an otherwise lifeless universe, God decides to toss a coin and
create ten humans if it lands heads and one human if it lands tails.
(Really, we should take "God" to mean a randomised human-breeder
machine, so that we need not consider the possibility that you might
be God Himself.) Now, suppose you find yourself in such a universe
but you haven't seen how the coin landed.

The self-indication axiom says that you should think it much more
likely that the coin landed heads, since on that hypothesis more
observers like you exist. The self-indication axiom is very dubious
(and probably false).
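
For concreteness, here is a minimal sketch (my own, not part of
Leslie's thought experiment) of the posterior the self-indication
axiom would assign in the no-outsider case, in Python:

    # Self-indication axiom, no outsiders: weight each hypothesis by
    # the number of observers it implies, then normalise.
    prior_heads, prior_tails = 0.5, 0.5
    observers_heads, observers_tails = 10, 1

    w_heads = prior_heads * observers_heads   # 5.0
    w_tails = prior_tails * observers_tails   # 0.5

    print(w_heads / (w_heads + w_tails))      # 0.909..., i.e. 10/11

So on the self-indication axiom you should give odds of ten to one
that the coin landed heads.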

On the other hand, if there will exist a thousand humans
independently of how the coin landed, then it is likely that you were
not created as a result of the coin toss. But given that you were
created as a result of the coin toss, it is about ten times more
likely that the coin landed heads. For consider the conditional
probability Pr(I was created because of the coin toss | there are
1010 humans). This is clearly greater than Pr(I was created because
of the coin toss | there are 1001 humans): if there are 1010 humans
then about 1% of them are coin-toss humans; if there are 1001, the
figure is about 0.1%. By Bayes' theorem, we conclude that after
conditionalising on "I was created as a result of the coin toss" it
is more likely (by about a factor of 10) that the coin landed heads
than that it landed tails.
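
The arithmetic can be checked with a short sketch (again my own; the
figure of 1000 outsiders is the one implied by the totals 1010 and
1001), in Python:

    # Outsider case: 1000 humans exist however the coin lands, plus the
    # coin-toss humans (10 if heads, 1 if tails). Condition on "I was
    # created as a result of the coin toss" and apply Bayes' theorem.
    outsiders = 1000
    prior_heads, prior_tails = 0.5, 0.5

    p_toss_given_heads = 10 / (outsiders + 10)   # ~1% of 1010 humans
    p_toss_given_tails = 1 / (outsiders + 1)     # ~0.1% of 1001 humans

    p_heads = (prior_heads * p_toss_given_heads) / (
        prior_heads * p_toss_given_heads + prior_tails * p_toss_given_tails)
    print(p_heads)                                   # ~0.908
    print(p_toss_given_heads / p_toss_given_tails)   # ~9.91, "about a factor 10"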

If this is right then it is important to distinguish the case where
the no-outsider requirement is satisfied from cases where it isn't.

Nicholas Bostrom
http://www.hedweb.com/nickb


