From: Robert J. Bradbury (bradbury@www.aeiveos.com)
Date: Mon Oct 25 1999 - 06:33:33 MDT
On Sun, 24 Oct 1999, Eliezer S. Yudkowsky wrote:
> Joseph Sterlynne wrote:
> >
> > It seems that everyone is confident that an AI of sufficient intelligence
> > will be able to discover that it exists within a simulation.
[snip]
>
> Maybe it'd accept the laws of physics, if it wasn't smart enough to
> engage in a-priori ontological reasoning. But how is the AI supposed to
> accept itself? We all know that the human body and human mind are the
> result of evolution, right? Conscious design would be just as obvious
> to the AI.
Ah, but bottom-up AI (which is what we are talking about!) involves
a process of "scrambling" the code and selecting the best results.
Now, we as humans are slowly decoding our own code, and while we may
never have a perfect picture (because many of the steps are "lost"),
we *will* be able to reconstruct a highly probable set of mutations,
chromosome translocations, etc. that led to the rise of all existing species.
I very much doubt that in this process we are going to "discover"
that someplace in our evolution someone "tweaked" the code. Similarly,
how would we ever "discover" that an alien SI "tweaked" the orbit
of a comet and sent it crashing into Earth 65 million years ago?
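
To make that "scramble and select" picture concrete, here is a minimal
sketch of a mutate-and-select loop. The bit-string genome, fitness
function, and mutation rate are purely illustrative assumptions, not
anyone's actual AI design; the point is only that the finished product
records which variants survived, not who or what was watching the run.

    import random

    TARGET = [1, 0, 1, 1, 0, 1, 0, 0]          # arbitrary "best" genome

    def fitness(genome):
        # Score a genome by how many positions match the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.1):
        # "Scramble" the code: flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(100):
        # Keep the best results and breed the next generation from them.
        population.sort(key=fitness, reverse=True)
        survivors = population[:5]
        population = [mutate(random.choice(survivors)) for _ in range(20)]

    best = max(population, key=fitness)
    print("best fitness:", fitness(best), "out of", len(TARGET))
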
So long as the people controlling the simulation make their tweaks
in a way "consistent" with the level of noise or chaos in the
system, it is going to be pretty difficult for the AI to
discover the simulation. Similarly, if the simulation environment
is randomly generated in a manner consistent with the physical
laws of the simulation, I don't see how you could discover that
the environment was artificial.
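
To see why a sub-noise tweak is so hard to catch, here is a rough
sketch. The noise level, the size of the nudge, and the outlier test
are all assumptions made up for illustration, not a claim about what
a real detector would look like.

    import random

    random.seed(1999)
    NOISE = 1.0                      # intrinsic scatter of the simulated system
    TWEAK = 0.3 * NOISE              # the controller's nudge, smaller than the noise

    events = [random.gauss(0.0, NOISE) for _ in range(1000)]
    events[500] += TWEAK             # one event (the comet) gets quietly adjusted

    # An AI inside the simulation hunting for tampering can only flag outliers,
    # and a nudge below the noise floor does not make the tweaked event one.
    suspects = [i for i, x in enumerate(events) if abs(x) > 3 * NOISE]
    print("flagged events:", suspects)
    print("was the tweaked event flagged?", 500 in suspects)
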
I look around us and see something like the speed of light and
say "now why does light have to travel at that speed?" I don't
see all of our brilliant physicists suggesting any experiment
that would demonstrate that this speed was arbitrarily set by
the simulation controller. Hell, the simulation controller may
have simply selected that speed at random. Unless you can make
an argument that the AI can discover by reasoning alone
that the speed of light is "rigged", I don't think it can get
out of the box.
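
A toy way of putting it: inside the simulation, the speed of light is
just a parameter that was set somehow, and nothing observable records
why. The class and numbers below are hypothetical, only meant to show
that a deliberately chosen constant and a randomly rolled one look
identical from the inside.

    import random

    class ToyUniverse:
        def __init__(self, speed_of_light):
            # The controller may pick this value deliberately or at random;
            # nothing about *why* is stored anywhere inside the simulation.
            self.c = speed_of_light

        def measure_c(self):
            # Inhabitants can measure the value as precisely as they like.
            return self.c

    deliberate = ToyUniverse(299792458.0)                # chosen on purpose?
    accidental = ToyUniverse(random.uniform(1e8, 1e9))   # rolled at random?

    # Both measurements are just numbers; no experiment run from the inside
    # distinguishes a constant that was "rigged" from one that simply is.
    print(deliberate.measure_c(), accidental.measure_c())
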
The arguments thus far seem to imply that once the AI is much
smarter than us it can argue its way out. But the arguments
seem to rest on the premise that the AI gets "big" relative
to the size of the box (and can recognize the flaws in its matrix).
Interestingly, that argument works just as well for us: when we
get "big" relative to our box (know all the physical laws,
control large amounts of its matter and energy, etc.),
we will discover the ways our universe is "rigged" and start
to talk our way out of the box.
If that turns out to be true, it makes me happy about
our future 10^100 years from now, when this universe starts
to get cold and dark.
Robert