From: Eugen Leitl (eugen@leitl.org)
Date: Sat Oct 26 2002 - 08:17:32 MDT
We've been discussing clocks in the past, but I'll sum up: it seems to be
possible to create monotonic counters driven by high-precision
oscillators compensated for spacetime curvature and for motion relative
to a common rest frame (there seems to be a global frame of reference if
we look at the Doppler shift of the cosmic microwave background across
both hemispheres). It is possible to represent the time elapsed since the
beginning of this spacetime with a comparatively short binary counter
(with a suitable encoding, such as a Gray code, making bitflips local)
counting in the smallest possible time units. One can subsequently refine
the resolution as faster and more accurate clocks and counters become
available. Similar to overlapping tree rings, one can backdate clocks
from different sources by referring to large-scale events recorded by the
issuers of each time system. Assuming there's a trend towards higher
clock accuracy, backdated events will be associated with larger errors.
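To make the counter idea concrete, here is a minimal Python sketch of a
monotonic counter counting in assumed Planck-time-sized ticks and exposed
in Gray-coded form, so that each increment flips exactly one bit; the
tick size, field names and epoch are illustrative assumptions, not part
of any agreed standard:

    # Monotonic counter over smallest time units, Gray-coded so that each
    # increment changes exactly one bit (keeping bitflips local).
    PLANCK_TIME_S = 5.39e-44          # assumed tick size, in seconds

    def to_gray(n: int) -> int:
        """Binary -> reflected Gray code (adjacent values differ in one bit)."""
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        """Reflected Gray code -> binary."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    class MonotonicCounter:
        def __init__(self, start_ticks: int = 0):
            self.ticks = start_ticks      # ticks elapsed since the epoch

        def advance(self, dt_seconds: float) -> None:
            self.ticks += int(dt_seconds / PLANCK_TIME_S)

        def state(self) -> int:
            return to_gray(self.ticks)    # what the clock would broadcast

    c = MonotonicCounter()
    c.advance(1.0)                        # one second of proper time
    print(f"{from_gray(c.state()):e} ticks")   # roughly 1.9e43 ticks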
Two fully compensated clocks should show no deviation after having been
synchronized, carried along different trajectories through spacetime, and
compared afterwards. (Anybody seeing a reason why they shouldn't? Please
speak up.)
It is possible to label each clock with a unique identifying bitstring
(UID) without prior communication by choosing a random UID (entropy
distilled from a nondeterministic physical system) large enough that a
collision is exceedingly improbable. The UID can also double as a public
key.
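A sketch of the collision argument, assuming 256-bit UIDs drawn from a
physical entropy source (the bit length, and os.urandom standing in for a
true nondeterministic source, are illustrative choices):

    import os

    UID_BITS = 256                       # assumed UID length

    def new_uid() -> bytes:
        # os.urandom stands in for a dedicated nondeterministic physical
        # entropy source on the actual clock.
        return os.urandom(UID_BITS // 8)

    def collision_probability(n_clocks: int, bits: int = UID_BITS) -> float:
        """Birthday bound: P(any collision) <= n^2 / 2^(bits+1)."""
        return n_clocks * n_clocks / 2.0 ** (bits + 1)

    # Even 10^12 independently labelled clocks collide with probability
    # on the order of 1e-54 at 256 bits.
    print(collision_probability(10**12))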
The clock can telecommunicate its state by sending the counter state
along with the UID. It is not possible to figure out the distance
reliably without using a relativistic time-of-flight ping (the clock must
immediately send back your signal, which had better carry a time field
and a UID, so you can compute the round-trip delay and halve it) or a
reference to distant beacons encoded into the signal the clock sends out.
If you know the clock's public key (UID) and the signature of the message
is good, you know that the time signal came from that source, and no
other (unless somebody stole the secret from the clock or broke the
cryptosystem).
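A sketch of the ping-and-verify step, with Ed25519 signatures (via the
Python cryptography package) standing in for whatever signature scheme
the clock actually uses; the UID, message layout and timings are
illustrative assumptions:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    C = 299_792_458.0                  # speed of light, m/s

    # The remote clock signs (UID, counter state) so the origin is provable.
    clock_key = Ed25519PrivateKey.generate()
    clock_uid = b"\x42" * 32           # illustrative 256-bit UID
    clock_pub = clock_key.public_key()

    def beacon(counter_state: int) -> tuple[bytes, bytes]:
        msg = clock_uid + counter_state.to_bytes(16, "big")
        return msg, clock_key.sign(msg)

    # The receiver verifies the signature and estimates distance by ping.
    def one_way_distance(t_sent: float, t_echo_received: float) -> float:
        """Halve the round trip; assumes the clock echoes instantaneously."""
        return C * (t_echo_received - t_sent) / 2.0

    msg, sig = beacon(counter_state=12345)
    clock_pub.verify(sig, msg)         # raises InvalidSignature if forged
    print(one_way_distance(0.0, 0.02)) # 20 ms round trip -> ~3.0e6 m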
So we can create a time standard and distribute it over large distances,
possibly cooperating with remote parties who're willing to play along.
Cloning a clock and taking the clone along with you offers less hassle and
more precision, however.
Monotonic counters are designed to be simple: you can guess the encoding
and predict what the next state will be from a very small sample of
values. We can also define a more complex transformation acting on a
state, and distribute it along with the seed (initial state) and a time
stamp at which the evolution of the system is to start. That way we can
synchronize remote discrete deterministic systems. Typically we don't
bother with that, and let the incoming signal directly drive the system
transformation, e.g. in cryptography, when encrypting traffic to a remote
machine (in this case the initial state is a closely guarded secret,
distributed via a secure channel or via a public-key cryptography
infrastructure, which is vulnerable if there's an undetected malicious
relay manipulating traffic routed through it).
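A toy sketch of the seed-plus-transformation idea: both parties hold the
same seed, transformation and agreed start tick, and can then reproduce
the same state sequence independently, without exchanging messages (the
hash chain below is an arbitrary stand-in for the agreed transformation):

    import hashlib

    def step(state: bytes) -> bytes:
        # Arbitrary deterministic transformation; stands in for whatever
        # state-evolution rule was distributed along with the seed.
        return hashlib.sha256(state).digest()

    def state_at(seed: bytes, start_tick: int, now_tick: int) -> bytes:
        """Evolve the shared seed once per tick since the agreed start."""
        state = seed
        for _ in range(now_tick - start_tick):
            state = step(state)
        return state

    seed = b"seed distributed over a secure channel"
    # Two remote systems compute identical states for the same tick.
    assert state_at(seed, 1000, 1042) == state_at(seed, 1000, 1042)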
Most current discrete deterministic system evolution is driven by a
system-local clock. Because of power dissipation and relativistic latency
issues this approach doesn't scale. Even precisely machined asynchronous
systems will show drift due to system noise. How can we reliably halt the
state of an evolving asynchronous discrete deterministic system? I can't
see any way which wouldn't sacrifice performance (by slowing the system
down somewhat, and by diluting the circuitry concentration in the volume,
which also carries a relativistic delay penalty). I'm open to suggestions
here. It would be interesting to hear how exactly to implement locally
synchronizing assemblies of cells in computronium, and how to read out
the state from evolving computronium via information-carrying waves
propagating through the bulk (thus sacrificing capability otherwise used
for holding more state and/or making existing state evolve faster).
Fortunately, globally asynchronous but locally synchronous computronium
allows deterministic processing on the pattern scale, so one doesn't have
to stick clocks into every cell (which would considerably bloat them).
It seems that maintaining identity of evolution of N distributed
deterministic uploads is hard work, and is associated with performance
penalties (maintaining synchronization of distributed clocks, a shallow
FIFO of the trajectory, checksumming, periodic halts for maintenance and
checksum validation, and the like).
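A toy sketch of the checksum-validation step, assuming each replica can
be halted at the same logical point and hand back a serialized snapshot
of its state (the snapshot format and helper names are illustrative):

    import hashlib

    def state_digest(snapshot: bytes) -> str:
        """Fingerprint of a halted replica's serialized state."""
        return hashlib.sha256(snapshot).hexdigest()

    def replicas_identical(snapshots: list[bytes]) -> bool:
        # All N replicas must agree bit-for-bit at the synchronized halt.
        return len({state_digest(s) for s in snapshots}) == 1

    good = b"\x00" * 1024
    bad  = b"\x00" * 1023 + b"\x01"    # one replica drifted by a single bit
    print(replicas_identical([good, good, good]))   # True
    print(replicas_identical([good, good, bad]))    # False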
To incrementally back up a running upload (fortunately, here we can use
an ACKless protocol, and let the vacuum be our FIFO for storing
serialized state) we obviously need to read out the bulk state, which can
only happen through an interface (first bottleneck), and serialize it
(second, and narrowest, bottleneck). This is a very large and very dead
albatross around our necks. It is much better to periodically halt
execution: since we're the master, and our evolution code is
self-correcting and self-synchronizing, we can just use a special
relativistically constrained wave to issue the halt, while tolerating a
few cell flips occurring before the rest of the volume sees it (this can
be ameliorated by synchronized clocks spread over the volume, as then the
average signalling paths are much shorter), and then use the same
infrastructure to read the state out and copy it into dumb memory, which
we can serialize and fire as tracer bullets into the medium, to be picked
up by the archive.
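A toy sketch of the halt wave: a 1-D array of asynchronously flipping
cells, with a halt signal issued at one end and propagating one cell per
tick; the count at the end is the number of flips that occur after the
halt was issued but before the wave reaches the cell in question (all
numbers here are illustrative):

    import random

    N_CELLS = 64
    random.seed(0)

    cells = [0] * N_CELLS         # toy state: one bit per cell
    late_flips = 0                # flips after halt issue, before arrival

    # Halt issued at cell 0 at tick 0; the wave moves one cell per tick.
    for tick in range(N_CELLS):
        wavefront = tick          # rightmost cell the halt has reached
        for i in range(wavefront + 1, N_CELLS):
            if random.random() < 0.1:   # un-halted cells keep evolving
                cells[i] ^= 1
                late_flips += 1

    print(f"cell flips after halt was issued: {late_flips}")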
Computronium is pretty close to the storage density of molecular memory,
so we're clearly running into problems as soon as resource scarcity rules
the scene. Similarly to today, not many people will run a remote mirror
if resources are not free (this state of affairs should persist).
That's all for today. Please feel free to point out mistakes, offer
dis/agreement, etc.