Re: reasoning under computational limitations

From: Wei Dai (weidai@eskimo.com)
Date: Thu Apr 01 1999 - 04:04:52 MST


On Mon, Mar 29, 1999 at 05:16:25PM +0000, Nick Bostrom wrote:
> Wei Dai wrote:
> > > And in general when faced with a computational
> > > problem that we can't solve, (1) does it always make sense to reason as if
> > > there is a probability distribution on the answer and (2) how do we come
> > > up with this distribution?
>
> We don't need to assume an objective chance for these cases if we
> don't want to. You can use a person's subjective probability, which
> you find out about by asking him about what odds he would be willing
> to bet at.

If computational uncertainty is covered under probability theory, maybe it
can shed some light on the Self-Indication Axiom (SIA). Suppose you are in
a universe with either 1 or 10 people, depending on whether the 100!-th
decimal digit of pi is odd or even. (There is a machine that computes the
digit and then creates either 1 or 10 people.) Should you believe that the
100!-th digit of pi is more likely to be even than odd? This is similar to
the thought experiment in your DA paper where a machine creates 1 or 10
people depending on the outcome of a coin flip, but without the problem of
dealing with what it means for the coin flip to be "fair".
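(The computational uncertainty here is real: 100! is roughly 9.3 x 10^157, so no
physically realizable machine can actually reach that digit, even though the
question has a definite answer. For small n the machine's task is easy; here is
a minimal pure-Python sketch, assuming Machin's formula with fixed-point integer
arithmetic, that finds the n-th decimal digit of pi and its parity. The function
name `nth_pi_digit` is mine, just for illustration.)

```python
# Sketch: parity of the n-th decimal digit of pi, via Machin's formula
#   pi = 16*arctan(1/5) - 4*arctan(1/239)
# computed in fixed-point integer arithmetic. Illustrative only: the
# 100!-th digit in the thought experiment is far beyond any real machine.

def nth_pi_digit(n, guard=10):
    """Return the n-th decimal digit of pi after the decimal point."""
    scale = 10 ** (n + guard)          # fixed-point scale, with guard digits

    def arctan_inv(x):
        # arctan(1/x) * scale = sum over k of (-1)^k * scale / ((2k+1) * x^(2k+1))
        total, power, divisor, sign = 0, scale // x, 1, 1
        while power:                   # terms vanish once x^(2k+1) > scale
            total += sign * (power // divisor)
            power //= x * x
            divisor += 2
            sign = -sign
        return total

    pi_fixed = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    digits = str(pi_fixed)             # "31415926..." -- digits[0] is the 3
    return int(digits[n])

d = nth_pi_digit(100)
print(d, "odd" if d % 2 else "even")   # the 100th decimal digit of pi is 9
```

With ten guard digits the truncation error from the integer divisions stays far
below the digit being read, so the result is exact for moderate n; the point of
the thought experiment is precisely that no amount of guard digits helps at
n = 100!.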



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:03:27 MST