hal@finney.org writes:
> That may be true in some circumstances, but it's not clear what the
> ultimate architecture will be. There may be tradeoffs or economic
> pressures which force us to be a little more parsimonious with our
> processors.
Even with nanotechnology, one cannot clock individual processors very high (let's say no more than 100 GHz), and the brain's massive parallelism easily compensates for the slower components. Moreover, we *know* what the optimal computer architecture is (Nanotechnology 9 (1998) pp. 162-176): crystalline computation a la the cellular automaton.
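A quick back-of-the-envelope in Python (every number here is an order-of-magnitude guess of mine, not a measurement):

# Rough comparison: aggregate synaptic events per second in a brain
# versus operations per second of a single fast serial processor.
neurons = 1e11               # assumed neuron count
synapses_per_neuron = 1e4    # assumed average fan-out
spike_rate_hz = 1e2          # assumed mean firing rate

brain_ops = neurons * synapses_per_neuron * spike_rate_hz  # ~1e17 events/s
serial_clock_hz = 100e9      # the 100 GHz clock ceiling assumed above

print(f"brain: ~{brain_ops:.0e} synaptic events/s")
print(f"serial machine: ~{serial_clock_hz:.0e} ops/s")
print(f"parallelism wins by ~{brain_ops / serial_clock_hz:.0e}")

Even granting the serial machine its full 100 GHz, the parallelism is worth six orders of magnitude.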
> Even where we do have enough resources for an optimally secure OS,
Edge-of-chaos cellular automata cannot be programmed in conventional languages. I very much doubt there will ever be anything like a BorgOS.
> Parallel processor systems work well with local interactions, but when
> there is a need for global effects they slow way down.
There cannot be any global effects in a relativistic universe. They can only appear if you simulate a physical system at a scale where the simulation ticks are long enough that light propagation looks instantaneous on the system scale. That should not be a problem with a proper networking architecture supporting broadcast on a 3d (or higher-dimensional) lattice. (For obvious reasons, crossbar architectures cannot scale to high node numbers.)
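To put toy numbers on that scaling claim (my own illustration; the node counts are arbitrary):

# Crossbar hardware grows as N^2 crosspoints, while a 3d lattice of N
# nodes needs only ~3 * N^(1/3) hops for a worst-case broadcast.
for n in (1_000, 1_000_000, 1_000_000_000):
    crosspoints = n * n                 # quadratic hardware cost
    lattice_side = round(n ** (1 / 3))
    broadcast_hops = 3 * lattice_side   # worst-case Manhattan distance
    print(f"N={n:>13,}: crossbar ~{crosspoints:.0e} crosspoints, "
          f"lattice broadcast ~{broadcast_hops} hops")

Lattice hardware stays linear in N while broadcast latency creeps up only as the cube root.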
> After each step of the calculation, every processor has to stop and
> wait for any messages
> to come in from distant processors in the network. Then they can go on
> to the next step. Most of the time there aren't any such messages, but
> they have to wait anyway. This ends up running very inefficiently, and
> it gets worse as the network grows.
In simulations of physical systems, the long-range interactions, being less important, can sometimes be omitted without a significant loss of accuracy.
The obvious solution, e.g. for Coulomb forces in an MD simulation, is to tessellate the simulation box into roughly atom-sized voxels, assigning to each an individual "processor" with hardwired physical rules, letting the electrostatics propagate just as voxels of physical vacuum do: a kind of raycasting for physical forces. The same technique would work for an artificial-reality rendering engine: implement gravity, finite elements and light propagation as cellular automaton rules, and cast them in molecular hardware. You can even run it without hardware synchronization, free-running.
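A minimal software sketch of the voxel idea, with Jacobi relaxation of Poisson's equation standing in for the hardwired per-voxel rule (grid size, charge placement and iteration count are all arbitrary):

import numpy as np

# Let the electrostatic potential "propagate" by a purely local rule:
# each voxel only ever looks at its six lattice neighbours, exactly as
# a hardware cellular automaton would.
n = 32
phi = np.zeros((n, n, n))            # electrostatic potential
rho = np.zeros((n, n, n))            # charge density
rho[n // 2, n // 2, n // 2] = 1.0    # one point charge, arbitrary units

for step in range(500):              # ad hoc iteration count
    phi[1:-1, 1:-1, 1:-1] = (
        phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
        phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
        phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] +
        rho[1:-1, 1:-1, 1:-1]
    ) / 6.0

print(phi[n // 2, n // 2, n // 2 + 1])   # potential one voxel from the charge

Since every update touches only nearest neighbours, the loop maps one-to-one onto a 3d lattice of processors, synchronized or free-running.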
> There is some wasted work here, but as long as the remote messages are
> not too frequent, you end up with a more efficient system, which means
> that it runs faster.
Of course, if the remote messages are rare, the window of states you must save for rollback becomes overwhelmingly huge. Since memory gobbles up atoms and occupies space (and space is time, with relativistic signalling), this looks like a pretty rotten strategy to me.
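A toy model of my own makes the point (the checkpoint size and event rates are invented):

# A Time Warp process must keep every saved state newer than global
# virtual time (GVT), since a straggler message may still force a
# rollback into that window. Rare remote messages = a long window.
state_size_bytes = 1_000_000         # assumed size of one checkpoint
local_steps_per_second = 1_000       # assumed local event rate

for secs_between_msgs in (0.01, 1, 100):
    states_hoarded = local_steps_per_second * secs_between_msgs
    memory = states_hoarded * state_size_bytes
    print(f"msg every {secs_between_msgs:>6}s -> "
          f"~{memory / 1e9:.3f} GB of rollback state")

A hundred seconds between remote messages already hoards ~100 GB of checkpoints per process.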
> Now, this architecture would not be a bad choice for simulating the brain.
> Most brain connections are local, but there are some long range neurons.
> You might do very well to run something like Time Warp and roll back
> the local state when a message comes in from a distant part of the brain.
Why bother? Implement the physical connections directly in cellular automaton hardware, just like in the real wetware thing. This way the delays come for free: the spike takes time to propagate along the duct and gets modulated along the way. With a freeze/slice/scan scenario this is in fact the necessary first point of departure for an upload: before you morph the thing, you must have it running natively.
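In one dimension the "delays come for free" point looks like this (a toy; the duct length is arbitrary):

# A spike propagates one cell per tick, so conduction delay falls out
# of the geometry instead of being scheduled by any central clock.
AXON_LENGTH = 10                     # cells along the "duct"
cells = [0] * AXON_LENGTH
cells[0] = 1                         # inject a spike at the proximal end

for tick in range(AXON_LENGTH):
    print(f"t={tick:2d}  " + "".join("*" if c else "." for c in cells))
    cells = [0] + cells[:-1]         # local rule: copy the left neighbour

The spike reaches the distal end only after AXON_LENGTH ticks; modulation along the way would just be a richer local rule.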
The brain is a classical case of a locally coupled, massively parallel system. It makes absolutely no sense to map it onto a sequential computer, unless you want to sacrifice speed for higher density and implement the connectivity virtually. This might make sense on an interstellar journey, where there is time to kill, but would seem foolish in a coevolutionary scenario: you'd be outmaneuvered real quick.
> The interesting question is what effects this might have on consciousness.
> We have the "main path" of the brain calculation constantly moving
> forward, but at the same time there are a number of "side branches"
> where segments of the brain run off and do calculations that later turn
> out to have been mistaken (as in the run from 2102 to 2108 above).
> Would these side branches cause momentary bits of consciousness?
> Would these conscious threads then be erased when we do the rollback?
> And would we be aware of these effects if we were run on a Time Warp
> architecture?
>
> In some sense we can argue that there would be no perceptible effects.
> Certainly no one would be able to say anything about it (we would wait
> for any late messages to arrive before actually doing any output, so
> we would never undo anything we started to say). So it would seem that
> there must be a conscious entity which is unaware of any effects caused
> by Time Warp.
What I really wonder is whether the rules for the neural CA could be sparse enough to allow a HashLife-style technique. Of course, Conway's Life is not an edge-of-chaos CA...
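The kernel of HashLife is just hashing plus memoization of block evolution. A sketch of that kernel on a 1d CA (rule 110; this is not HashLife proper, which adds a recursive quadtree and careful boundary handling on top):

from functools import lru_cache

RULE = 110

def step(bits: tuple) -> tuple:
    """One synchronous update of a 1d CA with fixed zero boundaries."""
    padded = (0,) + bits + (0,)
    return tuple(
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    )

@lru_cache(maxsize=None)
def evolve_block(bits: tuple, ticks: int) -> tuple:
    """Memoized evolution of an entire block for `ticks` steps."""
    for _ in range(ticks):
        bits = step(bits)
    return bits

block = (0, 1, 1, 0, 1, 0, 0, 1)
evolve_block(block, 16)
evolve_block(block, 16)              # identical block: pure cache hit
print(evolve_block.cache_info())     # hits=1, misses=1

Whether a neural CA repeats itself often enough for the cache to pay off is exactly the open question.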
> On the other hand, maybe there are additional entities which have only
> a transient existence and which are constantly being snuffed out. The
> "main path" consciousness would not be aware of these, but they might be
> said to be real nonetheless. In effect, we could be creating and killing
> thousands of variants of ourselves every second.
>
> I think this is one performance optimization which even some hard-nosed
> computationalists would hesitate to embrace. On the other hand, if it
> turns out to be more efficient, there may be pressure to use it. And
> after all, nobody who does ever reports anything untoward. It is another
> example of how philosophy will collide with practicality once these
> technologies become possible.
>
> Hal