From: Billy Brown (bbrown@transcient.com)
Date: Thu Apr 20 2000 - 10:13:43 MDT
Paul Hughes wrote:
> Billy Brown wrote:
> > If we know that some particular set of neurons performs a certain
> > function, and we know we can duplicate said function with X MFLOPS,
> > we have an accurate estimate of the computational demands of
> > replacing said neurons. Period.
>
> Huh? Now you are making no sense at all. What "If"?? That's the whole
> point, no one has yet been able to duplicate any said function.
Not true. I assume you've seen
http://www.transhumanist.com/volume1/moravec.htm, since you mentioned
Moravec earlier. Surely you understand the difference between
simulating the internal processes of a neuron and simulating the
computations that it performs?
Moravec argues that, unlike most of the brain, the retina is a structure
whose image-processing operations we now know exactly. We also know how
many MIPS it takes to perform exactly the same operations on a computer
(about 10^3 MIPS). So we have an empirical yardstick for comparing the
actual computational output of a functional unit of the brain to that of
a computer.
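The arithmetic behind that yardstick can be sketched in a few lines. The
specific inputs (points per retinal image, frames per second, instructions
per point) are my reading of Moravec's published figures, not numbers
stated in this post, so treat them as order-of-magnitude assumptions:

```python
# Rough sketch of Moravec's retina yardstick. All three inputs are
# order-of-magnitude assumptions drawn from his published estimates.

points_per_image = 1e6        # resolvable points in the retinal image
frames_per_second = 10        # the retina reports results ~10 times/s
instructions_per_point = 100  # cost of one edge/motion detection in software

retina_ips = points_per_image * frames_per_second * instructions_per_point
retina_mips = retina_ips / 1e6

print(f"retina equivalent: ~{retina_mips:.0f} MIPS")  # ~1000, i.e. 10^3 MIPS
```

Any of the three inputs could be off by a small factor without changing
the conclusion, which is only claimed to be good to an order of magnitude.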
Based on that yardstick, Moravec estimates the total computational power of
the brain at 10^8 MIPS. Now, this is obviously a rough estimate, because not
all parts of the brain are going to be equally efficient or equally easy to
duplicate. However, this doesn't necessarily mean the estimate is low - for
example, one could argue that the retina is likely to be one of the most
efficient, highly optimized neural structures in the human body simply
because of its great evolutionary antiquity, and therefore the estimate is
likely to be high.
All in all, it seems reasonable to think that Moravec's estimate is
within one or two orders of magnitude of being correct. Several
operational supercomputers already run at TeraFLOPS speeds, and a
PetaFLOPS machine is under construction, which puts the brain's
throughput within reach of the best modern hardware.
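The scaling step and the hardware comparison can be checked the same way.
The brain-to-retina ratio of roughly 75,000 is Moravec's figure (by bulk
of neural tissue), and equating one instruction with one floating-point
operation is a crude assumption made only to get orders of magnitude:

```python
import math

# Scale the retina yardstick to the whole brain and compare with
# year-2000 supercomputers. The 75,000x brain/retina ratio is Moravec's;
# 1 instruction ~ 1 FLOP is a crude order-of-magnitude assumption.

retina_mips = 1e3               # from the retina yardstick
brain_to_retina_ratio = 75_000  # Moravec's estimate, by bulk of tissue

brain_mips = retina_mips * brain_to_retina_ratio  # ~7.5e7, i.e. ~10^8 MIPS
brain_ops = brain_mips * 1e6                      # ~10^14 ops/s

teraflops_machine = 1e12  # an operational TeraFLOPS supercomputer
petaflops_machine = 1e15  # the PetaFLOPS machine under construction

print(f"brain: ~1e{round(math.log10(brain_ops))} ops/s")
print(f"brain / TeraFLOPS machine: {brain_ops / teraflops_machine:.0f}x")
print(f"PetaFLOPS machine / brain: {petaflops_machine / brain_ops:.0f}x")
```

On these assumptions a TeraFLOPS machine is still about two orders of
magnitude short of the brain, while a PetaFLOPS machine overshoots it by
about one - both well inside the estimate's stated error bars.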
> As the 1992 study of the Purkinje demonstrates, it was an extremely
> difficult task to simulate the function of a single neuron - and this
> was an admittedly simplified simulation at that. It took an i860
> processor almost 60 minutes to simulate a single firing! So the
> question still remains how many xflops it's going to take to simulate
> a *typical* neuron of the human brain not including any of its
> self-maintenance functions.
That is like arguing that it would take vast oceans of processing power to
duplicate my old 8088 machine, because it is so hard to simulate the physics
of electron flow in a semiconductor from first principles.
If your goal is to simulate the detailed workings of the brain, for medical
purposes or scientific curiosity, you are entirely correct. If, however, you
are interested in the question of how much hardware a human-equivalent AI
would need, it is irrelevant.
It doesn't matter how complicated a neuron is. What matters for AI purposes
is how much useful computation that complexity produces. The empirical
answer appears to be "not very much".
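To make the contrast concrete, here is a back-of-the-envelope sketch of
what Purkinje-style biophysical simulation would cost if naively scaled
to a whole brain, next to the functional estimate above. The i860's
sustained throughput, the neuron count, and the average firing rate are
all my assumptions for illustration, not figures from this thread:

```python
# Naive whole-brain biophysical simulation cost vs the functional estimate.
# ASSUMPTIONS (mine, for illustration): i860 sustained throughput, human
# neuron count, and average firing rate.

i860_flops = 4e7           # assume ~40 MFLOPS sustained on an i860
seconds_per_firing = 3600  # "almost 60 minutes to simulate a single firing"
flops_per_firing = i860_flops * seconds_per_firing  # ~1.4e11 FLOPs

neurons = 1e11             # rough human neuron count (assumption)
firings_per_second = 10    # rough average firing rate (assumption)

detailed_sim = flops_per_firing * neurons * firings_per_second  # ops/s needed
functional_estimate = 1e14  # ~10^8 MIPS expressed as ops/s

print(f"detailed simulation: ~{detailed_sim:.1e} FLOPS")
print(f"functional estimate: ~{functional_estimate:.0e} ops/s")
print(f"ratio: ~{detailed_sim / functional_estimate:.0e}")
```

The nine-orders-of-magnitude gap between the two numbers is the point of
the 8088 analogy: it measures the overhead of simulating the mechanism
rather than the function.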
Billy Brown
bbrown@transcient.com
http://www.transcient.com
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:28:08 MST