Re: Making the Most of What We Have

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Fri Nov 08 1996 - 13:23:09 MST


On Fri, 8 Nov 1996, Anders Sandberg wrote:

> On Thu, 7 Nov 1996, Hara Ra wrote:
>
> > Thanks for a posting which mentions a much avoided topic re
> > transhumanism and
> > uploads - BUGS! If my experience with my PC is any guide, I will be wary
> > of uploading for quite some time...
>
> Yes, creating a stable environment for uploads is a nontrivial problem.

I think the most nontrivial problem might be creating an environment
capable of sustaining an upload at all... Making it stable is easy by
comparison (actually, it's simply a side effect of the implementation
paradigm that naturally suggests itself ;)
 
> We need a large number of processes that interact in a robust, efficient
> way (no, no upload can run as a single process - physics prevents that

No, not even a single biologically realistic neuron can be run in
realtime on a current single-processor machine, however capable. (Just
ask Joe Strout of MURG; he'll quote chapter & verse.)
 
A standard workstation-class system (with or without an FPU) equates to
about 100 realtime _simplistic toy_ neurons (integer ops, shifts,
primitive logic, lots of lookup tables, etc.). Whether these toy neurons
are equivalent to biologically realistic neurons at some abstraction
level (a translation procedure from raw voxel data to labeled geometry,
then to functionality, then reverse-engineering the corresponding IAN
circuit) is currently unknown (the big IF of uploading, since we're up
shit creek should we be forced to stick to actual wetware geometry & co).
The only way to increase the size of the simulation is to increase memory
bandwidth, which means going WSI (wafer-scale integration of CPU/RAM
dies, since on-die accesses are orders of magnitude faster, burn less
power and don't need bond pads) really soon. Since die yield drops
exponentially with die area (roughly exp(-defect density x area) under a
simple Poisson defect model), and we need a goodish fraction of
functional dies per wafer (let's say at least 50%), dies must be small
(=simple). The smaller, the better.
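
To make the toy-neuron claim concrete, here is a throwaway C sketch
(shift-based leak, lookup-table activation, no FPU anywhere -- the
constants and the table are arbitrary illustration, not anybody's
calibrated model):

#include <stdio.h>

#define NSTATES 256

static unsigned char sigmoid_lut[NSTATES];  /* precomputed activation */

static void init_lut(void)
{
    int i;                                  /* crude ramp with saturation */
    for (i = 0; i < NSTATES; i++)
        sigmoid_lut[i] = (i < 64) ? 0
                       : (i >= 192) ? 255
                       : (unsigned char)((i - 64) * 2);
}

/* One timestep: leak by v >> 3 (i.e. multiply by 7/8 using only a
   shift and a subtract), add summed input, clamp, one table lookup. */
static unsigned char neuron_step(int *v, int input)
{
    *v -= *v >> 3;
    *v += input;
    if (*v < 0) *v = 0;
    if (*v > NSTATES - 1) *v = NSTATES - 1;
    return sigmoid_lut[*v];
}

int main(void)
{
    int v = 0, t;
    init_lut();
    for (t = 0; t < 10; t++)
        printf("t=%d out=%u\n", t, neuron_step(&v, 40));
    return 0;
}

A handful of adds, shifts and table lookups per cell per timestep --
once you add the memory traffic for realistic connectivity, the ~100
realtime cells per workstation figure above looks about right.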

> powerful single processor systems, we have to go parallel) that can
> withstand errors, breakdowns and crashes.

I don't think uploads will run on von Neumann machines at all. They will
require nonalgorithmic virtual hardware, e.g. Integer Automaton Networks
(a form of ANNs), which will be mapped to extremely braindead molecular
CAMs (unless strong nanotech proves viable and we can create complex,
compact 3d computing lattices with a better OPS/volume/Watt ratio).

Such hardware can't crash (if anybody is interested in mapping IANs to
molCAM/quantum-dot arrays, I can expand on that). Purely digital
sequential systems should be confined to hybrid use only; everything
else is much too dangerous.
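
For the curious, here is roughly what I mean by an IAN step mapped onto
a CAM-style synchronous lattice (a toy 1d ring with a made-up rule --
real molCAM rules would be dictated by the physics):

#include <stdio.h>
#include <string.h>

#define N 16

static unsigned char cur[N], nxt[N];

/* Synchronous local update: every possible bit pattern in cur[] is a
   legal state, so there is no "illegal instruction" to trap on -- a
   fault can corrupt data, but nothing can halt the machine. */
static void step(void)
{
    int i;
    for (i = 0; i < N; i++) {
        unsigned char l = cur[(i + N - 1) % N];
        unsigned char r = cur[(i + 1) % N];
        nxt[i] = (unsigned char)(((l + r) >> 1) ^ cur[i]);
    }
    memcpy(cur, nxt, N);
}

int main(void)
{
    int i, t;
    cur[0] = 255;                           /* seed a single "spike" */
    for (t = 0; t < 5; t++) {
        step();
        for (i = 0; i < N; i++)
            printf("%3u ", cur[i]);
        putchar('\n');
    }
    return 0;
}

The point: the state space has no illegal configurations and the
control flow has no branch that can wedge. That is what "can't crash"
means here.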

> For example, assume my brain is run on ten processor clusters and one
> crashes (because a nano-squirrel munched up the power cable?). How to
> handle it? One natural solution would be to freeze the other nine
> clusters as soon as the error is detected (which takes a short time
> compared to my emulated mental processes) while error checking takes place
> (was the data sent to the other clusters corrupted before the crash? I
> don't want an involuntary digital ECT). Now we need to get the mental
> state of one tenth of my mind, either from backup or by backtracking, and
> place it on another processor cluster. Clearly nontrivial, especially
> since most of us don't want to lose too much of our minds when the lags
> on our networks become too great and our internal communications become
> asynchronous.
>
> The above example suggests that fine-grained parallelism is the way to go
> with our minds, we might even be able to stand a few emulated neurons
> crashing.

While IANs do not crash, the hardware might die (molCAMs will slowly
deteriorate, requiring periodic hardware "upgrades", which can occur
transparently and incrementally). However, we hardly notice microstrokes,
even though each one means a miniature neuron holocaust (several M of
'em, at a guess). So probably no such elaborate exception handling (which
might be buggy itself, and certainly has a negative impact on realtime
response, which is crucial e.g. in battle) is necessary.
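
To show what dropping the elaborate exception handling buys, a sketch of
graceful degradation in the same toy lattice -- dead cells simply read
as silence, and the update loop is identical for healthy and damaged
hardware (again pure illustration, not a molCAM repair protocol):

#include <stdio.h>

#define N 16

static unsigned char state[N];
static unsigned char dead[N];           /* 1 = cell hardware has failed */

static unsigned char read_cell(int i)
{
    return dead[i] ? 0 : state[i];      /* dead cells read as silent */
}

/* Same synchronous update, but no freeze/rollback/exception path: a
   failed cell degrades the computation locally, it doesn't crash it. */
static void step(void)
{
    unsigned char nxt[N];
    int i;
    for (i = 0; i < N; i++) {
        unsigned char l = read_cell((i + N - 1) % N);
        unsigned char r = read_cell((i + 1) % N);
        nxt[i] = (unsigned char)((l + r) >> 1);
    }
    for (i = 0; i < N; i++)
        if (!dead[i])
            state[i] = nxt[i];
}

int main(void)
{
    int i, t;
    state[0] = 255;
    dead[7] = 1;                        /* a "microstroke": kill one cell */
    for (t = 0; t < 5; t++) {
        step();
        for (i = 0; i < N; i++)
            printf("%3u ", read_cell(i));
        putchar('\n');
    }
    return 0;
}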

_Finest_-grain parallelism is the only possible option.

'gene


