The disadvantage of this scheme (apart from the energy
costs) is that everything else you left behind will have
aged to the same degree as the computer. So unless you can
bring everything and everyone you care about on your voyage
with you, the sacrifice might not seem worth it.
2. Imagine this:
You spawn a closed universe (an S^3 hypersphere), linked
by wormhole to the parent universe, and construct a computer
and a rocket there. You start the computer on some long
calculation, and then use the rocket to tow the wormhole
portal across the child universe at relativistic speeds.
As before, when the circumnavigation is complete,
elapsed computer-time will exceed elapsed rocket-time.
If you stay in the parent universe, your elapsed time
will match rocket-time rather than computer-time. So if
you can spawn a new universe, you can take advantage
of this speed-up without leaving everything behind.
3. This scheme could be applied recursively. Call
the parent universe Universe 1, and the child
Universe 2. While the rocket circumnavigates
Universe 2, why not have the computer spawn
Universe 3, and build a computer and a rocket there?
And so on.
This would be possible if one circumnavigation buys
enough computer-time to spawn a new universe, build a
computer and a rocket there, wait out the inner
circumnavigation, and read out the result, i.e. if
k*t[circum] > t[build] + t[circum] + t[readout]
where
t[circum] = Circumnavigation time
k = "Circumcomputational" speedup obtained
so computer-time elapsed = k*t[circum]
t[build] = Time to spawn universe, build computer
and rocket, start computer and launch rocket
t[readout] = Time to read out the result of the
computation upon the rocket's return
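As a toy illustration (ordinary Python, with made-up
numbers in arbitrary time units, not physics), the
condition is just a comparison:

    # Toy check of the recursion condition
    #   k*t[circum] > t[build] + t[circum] + t[readout]
    # All numbers below are made-up placeholders.

    def recursion_pays_off(k, t_circum, t_build, t_readout):
        # True if one circumnavigation buys enough computer-time
        # to spawn, build, run and read out the next level.
        return k * t_circum > t_build + t_circum + t_readout

    print(recursion_pays_off(k=10, t_circum=1.0, t_build=2.0,
                             t_readout=0.5))   # True: 10 > 3.5
    print(recursion_pays_off(k=10, t_circum=1.0, t_build=8.0,
                             t_readout=2.0))   # False: 10 > 11 fails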
4. Consequences for time-complexity of computational
tasks:
Suppose we have a computation which consists of
the calculation of a series of functions (or the
execution of a series of programs), each of
which uses the output of the last as its input:
i. Initial input is x0.
ii. x1 = f(x0)
iii. x2 = g(x1)
iv. x3 = h(x2) - and so on.
We can imagine performing step (ii) in Universe 2,
step (iii) in Universe 3, and so on. But this will
only be possible if the inequality above continues
to hold. This means that the x's cannot be allowed
to grow arbitrarily large, for eventually t[readout]
would become too long and the inequality would be
violated.
(Of course, the rocket might just go on a second
circumnavigation!)
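As a toy illustration of the readout problem (ordinary
Python, with repeated squaring chosen arbitrarily as the
step function), the intermediate x's can quickly outgrow
any fixed readout budget:

    # If each step squares its input, the number of digits of
    # the intermediate x's roughly doubles per step, so t[readout]
    # cannot stay below any fixed bound.
    x = 12345
    for step in range(1, 7):
        x = x * x                 # x_{n+1} = (x_n)^2
        print(step, len(str(x)), "digits to read out")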
Equally, t[build] includes the time it takes to
tell the computer in Universe N the input and the
function for step N. In fact, since the computer in
Universe N has to build and program the computer
in Universe N+1, it needs to know not just the
function to be executed at step N, but the functions
to be executed at all subsequent steps.
This would of course be possible if it's the same
function at every step.
What if the function at step N is g[N](...),
some member of a family of functions g[i](...),
any of which can be determined given i? (For
example, consider g[i](x) = x^i. If you know
what i is, you know what function g[i](...) is.)
Eventually there would be an N so big that merely
specifying the value of i (which takes ever more
digits as i grows) would inflate t[build] enough
to violate the inequality.
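As a rough sketch of why (the bit budget below is an
arbitrary stand-in for the slack left in the inequality,
not a derived quantity), the cost of merely writing down
i grows without bound:

    # The description of i needs about log2(i) bits, which grows
    # slowly but without bound, so any fixed per-step budget for
    # passing i down the chain is eventually exceeded.
    def first_i_over_budget(budget_bits):
        i = 1
        while i.bit_length() <= budget_bits:
            i *= 2
        return i

    print(first_i_over_budget(16))   # 65536: needs 17 bits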
Nonetheless, it seems that if we have a function
f whose inputs and outputs are sufficiently small
that they can be read into and read out of
the computer in the next universe in less than
t[circum], then we can calculate an arbitrary
iteration of f within t[circum].
But wait! If we want to calculate, say, f^1000000,
we need the last computer in the chain to know
not to continue the process. So the information
about the total number of steps needs to be passed
down the chain, along with the specification of f
and of the input at step (i).
However, all that this requires is that the computer
in Universe 2 be told "f, x0, and 1,000,000". It can
then tell the computer in Universe 3 "f, f(x0), and
999,999". And so on. So if it takes less than t[circum]
to carry out the very first step - telling the computer
in Universe 2 "f, x0, k" - then we can calculate
f^k within t[circum].
(To be more precise, the instruction at each step
would have this form: "Here is a function f, and two
numbers, x and k. If k=1, then calculate f(x) and
tell me the answer when the rocket returns. If k>1,
then calculate f(x), spawn a universe, build a
computer and rocket there [etc], and give the computer
the same instruction I am giving you, only with f(x)
substituted for x, and k-1 substituted for k.")
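In ordinary code (run in a single universe, so with no
speedup, and with the step function below chosen
arbitrarily for the example), the instruction behaves
like this:

    # Each recursive call stands in for "spawn a universe, build a
    # computer and rocket there, and hand over (f, f(x), k-1)".
    def circumcompute(f, x, k):
        if k == 1:
            return f(x)                       # last universe: report back
        return circumcompute(f, f(x), k - 1)  # hand the rest down the chain

    succ_mod = lambda x: (x + 1) % 1000       # bounded output, as required
    print(circumcompute(succ_mod, 7, 25))     # f^25(7) = 32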
5. In short, then, the simplest form of this scheme
should permit major speedups for certain computations,
namely iterations of functions with bounded inputs and
outputs. No doubt 'circumcomputational complexity theory'
could be greatly expanded by considering more elaborate
variations on the basic theme (parallelism, conditional
actions).
6. A final comment. This scenario has been described
in terms of baby universes, computers, and rockets.
If we think in terms of the universe and the matter
we are familiar with, it might seem that all this is
relevant only to computations that would take years,
if not millennia, since the rocket will take that
long to circumnavigate a universe with a wormhole in
tow. But I wonder whether, amongst the myriads of
"vacua" described by string theory, there might be
much more compact baby universes, in which the
circumnavigation process could occur on much smaller
timescales. If we could create these, while retaining
a connection, and control over what happens in them,
we might be able to make "spacetime computers", made
out of wormhole-linked spacetime bubbles, able to
take advantage of circumcomputational acceleration
on timescales we would find useful.
-mitch
http://www.thehub.com.au/~mitch