> On Sun, 16 Mar 1997, The Low Golden Willow wrote:
>
> > They're not just trickier to figure out; they require more data for
> > storage. John and Eugene have had their debates on how much data the
> > brain encodes; the article seems to be evidence for Eugene's position.
> > And it may argue against Drexler's extreme miniaturization ideas; the
> > brain could be less shrinkable.
>
> I don't know how shrinkable the brain is, but being the brother/collaborator
> of an Amiga demo programmer I know that a clever hack can achieve plenty
I still own a fully operational Amiga (A2000, this mail is being written
on it), and I have certainly seen miraculous demos. However, we are not
talking about a radical algorithm (such as a raycasting renderer vs. a
polygon one) written by a human hacker; we are talking about long-term
evolutionarily optimized connectionist hardware, which seems to operate
at the threshold of what is at all possible with biological structures
(speed, accuracy, etc.).
(Remember Moravec's doubly bogus argument about the collapsibility of
the retina, and his grand sweeping extrapolation from retina to cortex,
covering merely the most trivial functions and assuming an identical
architecture on no empirical grounds whatsoever.)
> of compression if you have the basic algorithm right. I'm fairly sure
> that once we have succeeded at uploading, we will gradually find ways to
> shrink the upload matrix.
That I agree with absolutely -- we won't have to account for each ion
channel/neurotransmitter vesicle/cytoskeleton element. I think there
might be a whole spectrum of (maspar/connectionist) hardware
architectures in existence capable of emulating biological neuronal
tissue efficiently.
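(As a purely illustrative aside of mine, not something from this
thread: a leaky integrate-and-fire model is roughly the coarsest
abstraction that still spikes, and gives a feel for what emulating
tissue without tracking every ion channel means in practice. All
constants in the Python sketch below are arbitrary placeholders.)

import numpy as np

N = 1000                                 # toy network size
dt, tau = 1e-3, 20e-3                    # 1 ms step, 20 ms membrane time constant
v_thresh, v_reset = 1.0, 0.0

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.05, size=(N, N))   # random synaptic weights
v = np.zeros(N)                          # membrane potentials
spikes = np.zeros(N)                     # spike vector from the previous step

for step in range(100):                  # simulate 100 ms
    i_syn = W @ spikes                   # summed synaptic input
    v += (dt / tau) * (-v) + i_syn       # leaky integration plus input
    spikes = (v >= v_thresh).astype(float)
    v[spikes > 0] = v_reset              # reset neurons that fired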
> Back to the subject: I think the truth is somewhere between Eugene
> (neurons are tricky, simulate them all as carefully as possible) and John
In retrospect I realize that my estimates were somewhat obscurely
stated. I did not mean we must emulate each single biologically
realistic neuron for uploads. This is certainly an option; in fact, I
think the only way to translate a given piece of neural circuitry into
a more efficient, compact form (e.g. a packet-switched integer automaton
network) is by GA-reverse-engineering of that particular (sub)network.
Which requires the transient existence of a detailed emulation, of
course. A population of them, in fact.
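(A minimal sketch of mine of what such a GA-reverse-engineering step
could look like, assuming the detailed reference emulation is already
available to score candidates against. The encoding, the fitness
measure, the mutation-only evolutionary loop and all parameters below
are placeholders, not a method anyone in this thread has specified.)

import numpy as np

rng = np.random.default_rng(1)
N_IN, N_OUT, POP, GENS = 16, 8, 50, 200

ref_w = rng.normal(size=(N_IN, N_OUT))   # stands in for the detailed emulation
probes = rng.normal(size=(32, N_IN))     # probe stimuli

def reference(x):
    # Placeholder for the biologically detailed (sub)network emulation.
    return np.tanh(x @ ref_w)

def candidate(genome, x):
    # genome: a small integer weight matrix -- the "compact" circuit.
    return np.tanh(x @ (genome / 8.0))

def fitness(genome):
    err = candidate(genome, probes) - reference(probes)
    return -np.mean(err ** 2)

pop = [rng.integers(-8, 9, size=(N_IN, N_OUT)) for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)          # best candidates first
    parents = pop[:POP // 2]
    children = []
    for p in parents:
        child = p.copy()
        mask = rng.random(child.shape) < 0.05    # mutate ~5% of the weights
        child[mask] += rng.integers(-1, 2, size=int(mask.sum()))
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)                     # fittest compact circuit found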
What I _wanted_ to convey is tanstaafl: there is no free lunch, and
there is a minimum of computational work to be done to simulate a given
physical system realistically. The smarter (= more complex) the system,
the harder that work gets. And that minimal threshold may lie quite high
for such complex objects as a mammalian brain. Human equivalents the
size of a sugar cube, running at speeds >10^6 times realtime, seem to
reside firmly in the realm of science fiction, and not even very good
science fiction.
> (neurons are important mainly in groups) - neurons indeed do a lot of
> amazing stuff, but they also work together in populations. Evolution would
If we consider the Edelmanian brain, a population is an absolute
prerequisite for thought. Darwin doesn't operate on individuals, but on
populations.
> favor beings whose minds had a fair bit of redundancy over beings where
> every neuron matters (in situations where brain damage is likely during
Molecular hardware will die continuously. You simply cannot avoid it:
it starts having defects right from the start, and it goes on losing
bits up to the end of its usability, when it has to be replaced with
fresh circuitry. We simply can't ditch "redundancy".
> the lifetime of the being). So Eugene is pointing out an upper bound to
> upload matrix capacity, while John is speaking of the "compressed"
> brain-algorithm (where we can chunk neurons).
I think this level of compressibility might be quite limited. An
intrinsically digital system with error-correction redundancy might
have some advantages in attractor regeneration capability vs. a wet
analog system such as ours. But this is pure conjecture; we lack any
data whatsoever.
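(To make "attractor regeneration" concrete: the toy Hopfield-style net
below stores a few binary patterns and pulls a corrupted copy back to
the nearest stored one. This is only an illustrative analogy I am
adding, not a claim about how wet or digital hardware actually does it.)

import numpy as np

rng = np.random.default_rng(2)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))      # stored +/-1 memories

W = (patterns.T @ patterns) / N                  # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

def regenerate(state, steps=20):
    # Synchronous updates; the state falls into the nearest stored attractor.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

corrupted = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)     # corrupt 10 of the 64 bits
corrupted[flip] *= -1
restored = regenerate(corrupted)
print("bits matching original:", int(np.sum(restored == patterns[0])), "of", N)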
> As a neural networks/neuroscience person I would guess that these new
> properties would not increase the upper estimate with more than a
> magnitude. If we assume 10^11 neurons with 10^4 synapses each, and 100
That's very good news.
> parameters in each synapse (I'll chunk the synapses and dendrites they sit
> upon into one unit here), we get 10^17 state variables. Nonlinearity,
> dendritic computation and similar stuff just makes calculating the next
> state more heavy, and doesn't increase the storage needs much. Each
> update, (say) every millisecond might depend on the other states in each
> neuron + signals from connecting neurons, giving around 10^26 flops.
> Diffuse modulation doesn't increase the estimate much. Of course, this is
> a very rough estimate and should be taken with a large grain of salt.
How large is large? Please give the exact weight, and the error range ;)
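(Just to spell out the arithmetic behind the quoted figures: the counts
below are exactly the ones quoted above, while the per-state work factor
is an assumption of mine, chosen to reproduce the quoted total.)

neurons  = 1e11                                  # 10^11 neurons
synapses = 1e4                                   # 10^4 synapses per neuron
params   = 1e2                                   # ~100 parameters per synapse chunk

states = neurons * synapses * params             # ~1e17 state variables

updates_per_sec = 1e3                            # one update per millisecond
# Assumption: each state update touches on the order of the ~1e6 other
# state variables within its own neuron.
ops_per_state = synapses * params                # ~1e6
flops = states * ops_per_state * updates_per_sec # ~1e26 flop/s

print(f"state variables ~ {states:.0e}, sustained rate ~ {flops:.0e} flop/s")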
ciao,
'gene