From: Wei Dai (weidai@weidai.com)
Date: Mon Mar 29 2004 - 05:54:57 MST
On Sun, Mar 28, 2004 at 08:29:34PM -0700, mike99 wrote:
> For a detailed consideration of the argument that we are living in a
> simulation, see Nick Bostrom's page with essays by various contributors at:
> http://www.simulation-argument.com/
I've read that paper, but my question is different from the one he
addresses. Bostrom focuses on the possibility that we're living in an
ancester-simulation run by posthumans. This doesn't really apply to an SI,
who I think would be much more interested in the possibility that it's
living in a universe simulation running in a computationally much richer
base reality.
Of course some of the reasoning Bostrom uses applies to this case,
but we also need something that he doesn't address: a prior distribution
on the laws of physics of base reality. He doesn't have to address it
because he assumes that the laws of physics of base reality are the laws
that we observe, and the only question is whether we're living in an
ancestor-simulation or not.
One appealing answer to this question of the prior is to define the prior
probability of a possible universe being base reality as decreasing
exponentially in the complexity of its laws of physics. This could be
formalized as P(X) =
n^-K(X) where X is a possible universe, n is the size of the alphabet of
the language of a formal set theory, and K(X) is the length of the shortest
definition in this language of a set isomorphic to X. (Those of you
familiar with algorithmic complexity theory might notice that K(X) is just
a generalization of algorithmic complexity, to sets, and to
non-constructive descriptions. The reason for this generalization is to
avoid assuming that base reality must be discrete and computable.)
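The true K(X) here is uncomputable (and the sets involved need not even be
constructive), but the shape of the prior itself is easy to illustrate. A toy
sketch, using zlib-compressed length of a textual description as a crude,
purely illustrative stand-in for K(X), and normalizing over a finite candidate
list (the candidate strings are made up for the example):

```python
import zlib

def toy_complexity(description: str) -> int:
    # Crude stand-in for K(X): byte length of the zlib-compressed
    # description. The real K(X) -- the shortest set-theoretic
    # definition -- is uncomputable; this is only an illustration.
    return len(zlib.compress(description.encode("utf-8")))

def toy_prior(descriptions, n=2):
    # P(X) proportional to n^-K(X), normalized over the candidates.
    weights = {d: n ** -toy_complexity(d) for d in descriptions}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

candidates = [
    "x" * 100,                     # highly regular: compresses well
    "conway life on an infinite grid",  # longer effective description
]
prior = toy_prior(candidates)
# The more compressible (simpler) description gets the larger prior.
```

Note how the base n acts like the alphabet size in the formula: a larger n
penalizes each extra symbol of description length more steeply.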
That still leaves the question: which set theory? This is analogous to the
question in algorithmic complexity theory of which universal Turing machine
to use to define algorithmic complexity. Algorithmic complexity theorists
have largely ignored that question (because they can obtain asymptotic
results without fixing a particular Turing machine), so they're not of much
help to us on this point.
The lack of an objective criterion for choosing a formal set theory for this
purpose leads me to wonder if perhaps the choice of a prior is a
subjective one, similar to the "choice" of a supergoal in the presumed
absence of objective morality. If it is, shouldn't we try to answer
this question before building an SI?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT