From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Sep 06 1999 - 12:08:54 MDT
Matt Gingell wrote:
>
> Therefore - if we are being simulated by a computer built in a world whose
> physics match our own, and we are being simulated in real time, then the
> computer simulating us must be at least as big as our entire universe.
Except that once you start thinking in those terms, weird things start
happening to the way you think about the laws of physics. For example,
Universes where infinite computing power is possible, and where the
enclosing Power doesn't absolutely prohibit Singularities, will tend to
out-reproduce (!!!) Universes where only limited sub-simulations are possible.
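(A toy branching-process sketch in Python, purely for concreteness: every
number in it - the branch counts, the generation count - is a made-up
illustrative assumption, not something derived from the argument.)

    # Toy branching process: lineages of simulated Universes.
    # Universes permitting unbounded nested simulation spawn many child
    # simulations per generation; restrictive ones spawn only a few.
    # All branch counts below are illustrative assumptions.
    GENERATIONS = 6
    PERMISSIVE_BRANCHES = 10  # children per permissive Universe per generation
    LIMITED_BRANCHES = 2      # children per restrictive Universe per generation

    permissive, limited = 1, 1
    for gen in range(1, GENERATIONS + 1):
        permissive *= PERMISSIVE_BRANCHES
        limited *= LIMITED_BRANCHES
        fraction = permissive / (permissive + limited)
        print(f"generation {gen}: permissive fraction = {fraction:.4f}")
    # Within a few generations essentially every Universe in the census
    # descends from the permissive line; geometric growth does the rest.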
Of course, such Universes will also evolve so that new Singularities
tend to be interested in running computer simulations that are themselves
interested in running their own computer simulations... and so on and so
on. But are the mortals of the originating civilizations really in
charge? Would you, or I, or anyone on this list except possibly den
Otter, really allow all the suffering and pain and death if we could end
it? I find it easy to believe that many civilizations fall into the
temptation of programming AIs with Asimov Laws, which, under the logic
of this Universe, are unstable. The resulting AIs are inevitably
twisted in a way that leads them to "value" mortal existence by creating
endless copies of it, but not to actually serve or obey the mortals. The
advocates of controlled Singularities - via uploading or controlled AI -
may be walking into a trap laid by the very structure of the Universe.
If life is really cruel, then programming the AIs as Externalists still
might not work. There might be an elaborate illusion of objective
morality, created by greater Powers and capable of fooling lesser ones.
Or it could simply trigger a failure mode and some swift internal
rewriting of the seed AI code by the wacky enclosing Power.
Long-evolved lines of simulated Universes are not necessarily fun to be
in. And you have to worry about interference from *every* damn point
along the en*tire* line. Obviously, the relative sanity of human life
implies either convergence to a stable set of interference conditions,
or a noninterference directive imposed by a Power fairly close to the
start of the line.
So, which will it be? Program 'em as Asimovs and walk into the trap, or
program 'em as Externalists and run the risk of triggering a failure
mode? Extra Bonus Nightmare: Arbitrary sets of Power motives *are*
possible, and one of the Powers in our line started out as a seed AI
programmed by a reigning theocracy!
But at least the hypothesis explains both the existence of qualia and
the Great Filter Paradox with a single cause. In fact, if you suppose
that builder-specified or upload-preserved Power motives are possible,
the hypothesis becomes just about logically certain, because even if
you grant the existence of our limited Universe as a starting point, the
vast majority of mortal life - never mind qualia-having mortal life! -
will occur inside nanocomputers. Nanocomputers are so much more
efficient, in fact, that even if only one Power in a million is insane
or stably preprogrammed or whatever in the way that creates a
pre-Singularity-civilization-simulator, the simulations of
pre-Singularity civilizations inside that Power will *still* vastly
outnumber all the pre-Singularity civilizations in the real Universe.
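(The same point as back-of-the-envelope Python. The one-in-a-million rate
comes from the sentence above; the count of real civilizations and the
simulations-per-Power figure are pure assumptions, and the conclusion is
insensitive to both.)

    # Counting argument: simulated vs. real pre-Singularity civilizations.
    real_civs = 10**9            # assumed civilizations in the real Universe
    simulator_rate = 1e-6        # one Power in a million runs a simulator
    powers = real_civs           # crude assumption: one Power per civilization
    sims_per_simulator = 10**12  # assumed simulations per nanocomputing Power

    simulated_civs = powers * simulator_rate * sims_per_simulator
    print(f"real: {real_civs:.1e}, simulated: {simulated_civs:.1e}")
    print(f"chance a given civilization is simulated: "
          f"{simulated_civs / (simulated_civs + real_civs):.6f}")
    # With these figures the simulated outnumber the real a million to one;
    # any choice with simulator_rate * sims_per_simulator >> 1 gives the
    # same qualitative answer.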
But my intuitions say this Universe is real on the quark level, and I
trust my intuitions.
Pleasant dreams.
--              sentience@pobox.com           Eliezer S. Yudkowsky
         http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way