Re: Neurons vs. Transistors (IA vs. AI)

From: Paul Hughes (paul@i2.to)
Date: Tue Jul 27 1999 - 23:08:58 MDT


Eugene Leitl wrote:

> paul@i2.to writes:
>
> > How many thousands is the real question. I suspect you're looking at
> > 100's of thousands. Sure, once you achieve the proper transistor
> > density, the next challenge is actually hooking them up in such a way
> > as to accurately emulate the entire spectrum of neurochemical
> > activity! :-)
>
> Talk is cheap. Can you give us numbers?

Well, I'm no neurophysiologist, but let's give it a shot. First of all, how many
'and/or' gates would it take to model a serotonin molecule interacting with a
phosphate molecule? This is outside my expertise, but I suspect it would require
several dozen 'and/or' gates to model even the simplest of interactions. The
next challenge is coming up with a circuit design complex enough to model *all*
of the possible interactions of serotonin along a single neural connection. And
of course this only describes serotonin! The next challenge is moving up the
scale of complexity to design a software/hardware system general enough to
accommodate *all* neurotransmitter activity. You would think that with today's
supercomputers, somebody somewhere has designed a program that can emulate every
conceivable molecular state of a single neuron. If not, then that only
underscores my point about how complex each neuron in fact is.
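To give that estimate some shape, here's a rough back-of-envelope sketch in
Python. Every figure in it - gates per interaction, distinct interactions per
connection, number of transmitter species - is a placeholder I've made up for
illustration, not a measured number:

    # Back-of-envelope only: all of these figures are illustrative guesses.
    gates_per_interaction = 50            # "several dozen" and/or gates per interaction
    interactions_per_connection = 1000    # assumed distinct serotonin interactions per connection
    transmitter_species = 100             # assumed number of transmitter/modulator types

    gates_one_transmitter = gates_per_interaction * interactions_per_connection
    gates_all_transmitters = gates_one_transmitter * transmitter_species

    print(f"gates for serotonin at one connection: {gates_one_transmitter:,}")
    print(f"gates for all transmitters there:      {gates_all_transmitters:,}")

Swap in your own guesses; the point is only how quickly the gate count
multiplies before you've modeled even one connection.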

Assuming someone has designed a program capable of this, conceptually one must
then run it simultaneously and in parallel with thousands of others to match the
average neuronal connectivity. The program would have to do a complete
run-through an average of 10 times/sec. Since it would be wasteful to run a
separate program for every neuron (10 billion?), it would be easier to store
each neuron as rapidly-accessed data until it is used. How much data would be
required to accurately store the state of each neuron? I don't know, but I
suspect it's easily on the order of a megabyte at least, as it would have to
store the entire array of unique molecular states the neuron is in.
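Here's what those guesses imply in aggregate, again as a Python back-of-envelope;
the neuron count, per-neuron state size, and update rate are just the rough
figures from the paragraph above, not established numbers:

    # Back-of-envelope only, using the rough figures guessed above.
    neurons = 10_000_000_000        # ~10 billion neurons
    bytes_per_neuron = 1_000_000    # ~1 megabyte of state per neuron
    runs_per_second = 10            # complete run-through ~10 times/sec

    total_state = neurons * bytes_per_neuron            # bytes of neuron state
    bytes_touched_per_sec = total_state * runs_per_second

    print(f"total neuron state:        {total_state / 1e15:.0f} petabytes")
    print(f"state touched each second: {bytes_touched_per_sec / 1e15:.0f} petabytes/sec")

Even granting an order of magnitude of slack on each guess, the sheer volume of
state to store and move around is the bottleneck, not raw switching speed.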

At this point it's all guesswork, but my point is that even when we achieve
atomic-scale transistor densities, the real challenge will be organizing those
transistors to emulate the entire brain, which is itself a molecular switching
computer.

**The challenge is not the speed or density, which will eventually give us the
_potential_ to run a human brain thousands of times faster than our own. No, the
real challenge is creating something complex and coherent enough to emulate the
brain itself. I suspect the hardware and software bottlenecks in actually doing
so will be severe enough to narrow the gap between brain augmentation (IA) and
human-level AI considerably.

> > Therefore, my primary position is that the gap between uploading
> > and human-level AI is much narrower than is often argued.
>
> Gap in which context? Brute-force uploading (aka neuroemulation)
> requires many orders of magnitude more power than straight bred
> AI. And the translation procedure to the more compact target encoding
> is nonobvious (i.e. I do have hints, but can't be sure it's going to
> work).

That is also completely non-obvious. I have yet to hear a single convincing
argument that sufficient *speed* and density automatically yields human-level
AI. You will need complexity, and where, exactly, will you be getting it?
Until this can be convincingly answered, all human-level AI claims should be
kept in every skeptic's top 5.

**Since no one has actually built or designed a theoretical human-level AI, how
can anyone possibly claim to know what it takes to build one? This seems
completely absurd to the point of self-contradiction! As so many are fond of
saying around here: extraordinary claims require extraordinary proof. To
reiterate the obvious, until someone can prove otherwise, a human-level AI will
have to equal the complexity of the human brain.

Paul Hughes
