Re: Neuron Computational Requirements?

From: Robin Hanson (rhanson@gmu.edu)
Date: Thu Apr 20 2000 - 07:40:47 MDT


Anders Sandberg wrote:
> > http://www.cacr.caltech.edu/Publications/annreps/annrep92/schutt.html
> > proves that simulating a single neuron is not a trivial task.
>
>Definitely not trivial. ...
>In most computational neuroscience models of neurons they are divided
>into compartments treated as isopotential; they are each described by
>their membrane potential, the concentrations of sodium, potassium,
>calcium and sometimes chloride ions and the state of various ion
>channels. The number of compartments varies a lot, 100,000 is on the
>largest scale so far - we are quite limited by our lack of really good
>morphological cell data not to mention electrophysiological data.
>
>Other chemical processes need to be added at the synapses, especially
>for plasticity models. A fellow researcher made a simplified (!) model
>involving just 30-40 chemicals (many of which were variously
>phosphorylated states of a protein) of the long term potentiation
>phenomenon.
>
>Yes, neurons are much more complex than simple transistors. But it is
>not obvious that all this complexity makes them *much* more complex -
>there are clever simplifications ... described more compactly
>using a phenomenological model.
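
[For concreteness: one isopotential compartment of the kind Anders
describes amounts to integrating a handful of coupled differential
equations. Below is a minimal sketch -- generic textbook
Hodgkin-Huxley with sodium, potassium, and leak currents, integrated
by forward Euler. The parameters are the standard squid-axon values,
not anything taken from Anders's models.]

import math

# Generic single-compartment Hodgkin-Huxley model (squid-axon values).
# Each isopotential compartment carries one membrane potential V (mV)
# plus the channel gating states m, h, n; a multi-compartment model
# repeats this per compartment and adds axial coupling currents.

C_m = 1.0                     # membrane capacitance, uF/cm^2
g_Na, E_Na = 120.0, 50.0      # sodium conductance (mS/cm^2), reversal (mV)
g_K, E_K = 36.0, -77.0        # potassium
g_L, E_L = 0.3, -54.4         # leak

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def steady(alpha, beta, V):   # resting value of a gating variable
    return alpha(V) / (alpha(V) + beta(V))

V = -65.0                     # start at rest (mV)
m = steady(alpha_m, beta_m, V)
h = steady(alpha_h, beta_h, V)
n = steady(alpha_n, beta_n, V)

dt, I_ext = 0.01, 10.0        # time step (ms), injected current (uA/cm^2)
for step in range(int(50.0 / dt)):    # simulate 50 ms
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    if step % 500 == 0:
        print("t = %5.1f ms   V = %7.2f mV" % (step * dt, V))

[Scale a loop like that up to the 100,000-compartment models and the
synaptic chemistry Anders mentions to get a feel for the per-neuron
cost under discussion.]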

Wow. It looks like Anders has learned enough about this field to
start to answer the questions I'm most interested in. So let me now
try to express my questions more clearly in the hopes of inspiring
Anders and others to answer them.

Robert Bradbury supported Billy Brown in this thread, saying "*when*
one understands exactly what it is the neurons are `computing', you
can develop software and hardware that do it more effectively."
Therefore Moravec's estimates, based on the parts of the brain he
understands, are reasonable estimates of the CPU requirements of the
whole brain once we understand what all brain parts compute.

This argument seems correct to me, but beside the most interesting
point. Estimating how cheap it will be in CPU terms to simulate
a brain "eventually" seems like asking how cheap anything will be
"eventually" -- eventually may never come if improvement continues
on and on, and even if eventually does come, it's not clear why
that estimate is of interest now.

To me the most interesting questions are how good uploading technology
is now, how fast it is improving, and therefore when uploading may
become feasible. The whole point of the uploading strategy for achieving
artificial intelligence is to port the software of a human brain
*without* understanding how it works. The simplest strategy of this
sort is to understand only how each type of neuron works, and then
scan a brain, noting which neurons are of which types and which other
neurons each one connects to.
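
The output of such a scan, at this level of description, is just a
big labeled graph. A toy version of the data structure -- the names
here are mine, purely illustrative -- might look like:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Synapse:
    target: int      # index of the postsynaptic neuron in the brain list
    kind: str        # synapse type, as far as the scan can distinguish it

@dataclass
class Neuron:
    neuron_type: str                   # one entry from the neuron inventory
    synapses: List[Synapse] = field(default_factory=list)

# An upload, on the neuron-level strategy, is a list of such records
# plus one validated dynamical model per neuron_type -- no understanding
# of what the network as a whole computes is required.
brain: List[Neuron] = []
brain.append(Neuron("pyramidal", [Synapse(target=1, kind="excitatory")]))
brain.append(Neuron("interneuron"))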

Focusing on this neuron-level strategy, I want estimates for five
parameters. Two are static parameters:

1) Neuron Inventory -- The number of distinct types of neurons, and
the number of each type in a typical human brain.

2) Resolution required -- The spatial/chemical resolution of a scanning
technology that would be required to reliably distinguish between the
types of neurons, and to determine which neurons connect to which
others via which kinds of synapses. (Assume one is repeatedly slicing
and scanning a cryogenically frozen brain.)
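
To give a feel for what the resolution parameter implies, here is a
back-of-envelope calculation; every number in it is a round
assumption of mine, not a measurement:

# Rough data-volume arithmetic for slice-and-scan at a fixed resolution.
brain_volume_cm3 = 1400.0    # roughly 1.4 liters
resolution_nm = 10.0         # assumed voxel edge needed to trace fine synapses

nm_per_cm = 1.0e7
brain_volume_nm3 = brain_volume_cm3 * nm_per_cm ** 3
voxels = brain_volume_nm3 / resolution_nm ** 3

bytes_per_voxel = 1.0        # assume one byte of stain/density data per voxel
raw_bytes = voxels * bytes_per_voxel
print("voxels to scan: %.1e   raw data: %.1e bytes" % (voxels, raw_bytes))

# Relaxing the resolution from 10 nm to 20 nm cuts the voxel count (and,
# roughly, the scan time) by a factor of eight, which is why the scaling
# of cost with resolution matters so much.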

The other three parameters are dynamic; they concern *rates* of progress:

1) Neuron types modeled -- The fraction of neuron types for which
we have reasonable computational models at *any* level of complexity.
That is, where we have run such models and seen that they correspond
reasonably closely to the behavior of actual neurons of that type.

2) Average CPU/RAM per neuron -- A weighted average across neuron
types of the CPU/RAM requirements for simulating each neuron type.
The weights should be according to the fraction of each type of
neuron in a human brain.

3) Scanning costs -- The time/dollar cost, per unit area scanned, of
scanning at the resolution required for uploading. Costs of scanning at
other resolutions may also be interesting, if time trends in them can
be more accurately estimated, and if one can estimate a scaling
relationship between the costs of scanning at different resolutions.

The rate of progress in neuron types modeled helps us estimate
when it will become possible to upload at any cost, while the other
estimates (together with estimates of how fast CPU/RAM costs fall)
help us estimate how fast the cost of uploading is falling with time.
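
To show how these parameters would combine, here is a toy version of
the calculation. Every number below is a placeholder made up to
illustrate the arithmetic, not an estimate:

import math

# Toy feasibility arithmetic; every number is a made-up placeholder.
# Static parameter: neuron inventory (counts per hypothetical type).
neuron_counts = {"pyramidal": 2e10, "granule": 5e10, "other": 2e10}
# Dynamic parameter: per-neuron model cost, FLOP/s for real-time simulation.
flops_per_neuron = {"pyramidal": 1e7, "granule": 1e6, "other": 5e6}

# Weighted total compute for simulating the whole brain in real time.
total_flops = sum(neuron_counts[t] * flops_per_neuron[t] for t in neuron_counts)

dollars_per_flops_now = 1e-7    # assumed current cost of one sustained FLOP/s
halving_time_years = 1.5        # assumed halving time of that cost
budget_dollars = 1e9            # what a first serious attempt might spend

cost_now = total_flops * dollars_per_flops_now
years = max(0.0, halving_time_years * math.log2(cost_now / budget_dollars))

print("whole-brain compute: %.1e FLOP/s" % total_flops)
print("cost today: $%.1e; within budget in ~%.0f years" % (cost_now, years))
# The same extrapolation, applied to the scanning cost per unit area,
# gives the other half of the feasibility date.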

Anders, tell me if I'm wrong, but it strikes me that we may well have
enough experience modeling neurons to start to estimate these parameters.
If so, constructing such estimates of when uploading will be feasible
and how fast costs will fall seems a very high priority to me.

Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030
703-993-2326 FAX: 703-993-2323


