Re: Neurons vs. Transistors (IA vs. AI)

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Thu Jul 29 1999 - 23:04:00 MDT


Paul Hughes writes:

> Well, I'm no neurophysiologist, but let's give it a shot. First of all, how many

I don't think a neurophysiologist will have very much to tell you
about the molecular level. It's not their area of expertise, after
all.

> 'and/or' gates would it take to model a serotonin molecule interacting with a
> phosphate molecule? This is outside of my expertise, but I suspect this would
> require several dozen 'and/or' gates to model even the simplest of interactions.

If you want to address the issue at the molecular level of detail
you're facing two problem domains: system representation (a
scratchpad with lots of numbers on it) and system evolution, i.e. the
laws which make it tick. Both require "logic gates" to implement. If
you attach an instance of the physical-law circuitry to each
atom-representation unit (the really smart memory scenario), wiring
the result into a 3d lattice, you'd obviously achieve the fastest
possible execution time, but you'd pay with the highest possible
redundancy. If you have just one instance of the physical-law engine
serving 10^9 or more atom-coding units, you're obviously very far
from realtime. My (more or less educated) guess is that one needs
somewhere from 10^3 to 10^6 transistor equivalents for the
physical-law engine if one does it the digital physics way (in
current lingo, an integer lattice gas with a complicated rule; check
it out on http://xxx.lanl.gov ).
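
To make the tradeoff concrete, here's a toy sketch (Python, purely
for illustration; the update rule is a placeholder, not real physics)
of the single-engine end of the spectrum: one rule engine visiting
every cell of an integer lattice in turn. Wall-clock time per tick
grows linearly with the number of cells each engine has to serve.

  # Toy sketch: one shared "physical law engine" (the update rule)
  # serving every atom-coding unit (cell) in turn. The rule, sum of
  # the six face neighbors mod 4, is a placeholder, not real physics.
  import numpy as np

  def step(lattice):
      """One tick: the single rule engine visits every cell."""
      new = np.empty_like(lattice)
      nx, ny, nz = lattice.shape
      for x in range(nx):
          for y in range(ny):
              for z in range(nz):
                  # Periodic boundaries: % wraps up, negative index wraps down.
                  s = (lattice[(x + 1) % nx, y, z] + lattice[x - 1, y, z] +
                       lattice[x, (y + 1) % ny, z] + lattice[x, y - 1, z] +
                       lattice[x, y, (z + 1) % nz] + lattice[x, y, z - 1])
                  new[x, y, z] = s % 4  # placeholder integer rule
      return new

  lattice = np.random.randint(0, 4, size=(16, 16, 16))
  lattice = step(lattice)  # realtime falls off with cells per engine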

Unless the mind requires nonabstractable quantum-level weirdness to
operate, this brute force approach would obviously work. In the
maximum-redundancy/maximum-speed implementation scenario we're
talking about roughly 1 cubic micron of molecular circuitry for each
atom, with 1 ns to 1 ps ticks each corresponding to 1-2 fs of
simulated time.

If you do the numbers you will notice that this is nothing like a
sugarcube, and is not exactly running at factor-10^9 superrealtime.
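
Back of the envelope (brain mass and atom count below are my rough
assumptions, not hard figures):

  # Back-of-envelope: brain mass and composition are assumptions.
  AVOGADRO = 6.022e23
  atoms = 1.4e3 / 18 * AVOGADRO * 3      # 1.4 kg of water: ~1.4e26 atoms

  volume_m3 = atoms * 1e-18              # 1 um^3 of circuitry per atom
  print(f"cube {volume_m3 ** (1/3):.0f} m on a side")  # ~520 m: no sugarcube

  slowdown = 1e-9 / 1e-15                # 1 ns tick per 1 fs simulated
  print(f"slowdown {slowdown:.0e}x")     # 1e6x *slower* than realtime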

Of course we don't need molecular level of detail to run an upload,
that's quite absurd. Beefed-up compartmental modelling is probably
sufficient: a complex cellular-level model fed with real neuroanatomy
data. This is a far cry from the molecular dynamics level, yet would
require about the same amount of complexity (within two orders of
magnitude, in the currency of circuitry) as the brute force MD
approach.
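
For the curious, a toy sketch of what compartmental modelling boils
down to: the neuron is chopped into coupled RC compartments. All
parameter values below are generic placeholders.

  # Toy compartmental model: a neuron chopped into N cylindrical
  # compartments, each a leaky RC circuit coupled to its neighbors
  # through an axial conductance.
  import numpy as np

  N, C, g_leak, g_axial = 10, 1.0, 0.1, 0.5
  E_rest, dt = -65.0, 0.01               # mV, ms
  V = np.full(N, E_rest)
  I_inj = np.zeros(N)
  I_inj[0] = 2.0                         # inject current into compartment 0

  for _ in range(1000):
      axial = np.zeros(N)
      axial[1:] += g_axial * (V[:-1] - V[1:])    # current from left neighbor
      axial[:-1] += g_axial * (V[1:] - V[:-1])   # current from right neighbor
      V += dt * (-g_leak * (V - E_rest) + axial + I_inj) / C

  print(V)  # depolarization decaying along the cable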

There might be higher abstraction levels, but just what they will be
is currently difficult to tell (probably system state space
dynamics/attractor level), and it is not at all obvious how to
translate the digitized neuroanatomy (scanned wetware, in cheesy
cyberspeak) into that higher level encoding.

> The next challenge is coming up with a complex enough circuit design to model
> *all* of the possible interactions of serotonin along a single neuron connection.
> And of course this only describes serotonin! So the next challenge is moving up
> the scale of complexity to design a software/hardware system general enough to
> accommodate *all* neurotransmitter activity. You would think with today's

It is not done that way. One just needs a set of parameters for each
new class of molecules. In theory a single universal forcefield would
suffice, and people are currently working on developing these
(forcefields which automagically invoke the QM level and fit
parameters whenever the knowledge is not already in the database);
these are maybe 5-10 years away.
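
Schematically, a forcefield is just a per-atom-type parameter table
plus a generic functional form. A toy sketch (the numbers and the
qm_fit fallback are illustrative placeholders, not entries from any
real forcefield):

  LJ = {"C": (0.36, 0.34), "N": (0.71, 0.33), "O": (0.88, 0.30)}

  def qm_fit(atom_type):
      """Stand-in for the 'automagic QM level': a real implementation
      would run an ab initio calculation and fit parameters to it."""
      raise NotImplementedError(f"no parameters for {atom_type}")

  def params(atom_type):
      if atom_type not in LJ:
          LJ[atom_type] = qm_fit(atom_type)  # fit once, cache forever
      return LJ[atom_type]

  def lj_energy(r, a, b):
      """Lennard-Jones pair energy with simple mixing rules."""
      (ea, sa), (eb, sb) = params(a), params(b)
      eps, sig = (ea * eb) ** 0.5, 0.5 * (sa + sb)
      return 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)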

> supercomputers, somebody somewhere has designed a program that can emulate
> every conceivable molecular state of a single neuron. If not, then my point is well
> taken in how complex each neuron in fact is.
 
Right now we can more or less easily simulate a 10^9-particle system
with a short-range potential (otherwise known as a forcefield) on the
best modern mainframes. Assuming digital physics tricks as above
(most simulation people would disagree when currently asked, but
they're obviously turkeys) it is very much conceivable to model a
cubic micron or slightly more of biological tissue over a temporal
domain of 1 ns to 1 us with end-of-the-line silicon (i.e. when Moore
has saturated in 2d photolitho). We're talking about a runtime of ~1
year on a big box of end-Moore silicon here, so don't phone your
local computing center operator yet.
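
The reason short-range potentials make 10^9 particles tractable at
all is the cell-list trick: each particle only sees neighbors within
a cutoff, so the work is O(N) rather than O(N^2). A toy sketch, with
demo values throughout:

  import numpy as np
  from collections import defaultdict

  box, cutoff = 10.0, 1.0
  pos = np.random.rand(1000, 3) * box

  cells = defaultdict(list)              # cell index -> particle indices
  for i, p in enumerate(pos):
      cells[tuple((p // cutoff).astype(int))].append(i)

  def neighbors(i):
      """Particles within the cutoff of i, checking only 27 cells."""
      cx, cy, cz = (pos[i] // cutoff).astype(int)
      for dx in (-1, 0, 1):
          for dy in (-1, 0, 1):
              for dz in (-1, 0, 1):
                  for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                      if j != i and np.linalg.norm(pos[i] - pos[j]) < cutoff:
                          yield j

  print(sum(1 for _ in neighbors(0)))    # a handful, not 10^9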
 
> Assuming someone has designed a program that's capable of this; conceptually then,
> one must have this program run simultaneously and in parallel with thousands
> of others to match the average neuronal connectivity. The program would have

The connectivity has little to do with parallelism: you can simulate
a parallel system on a perfectly sequential machine. Of course it has
everything to do with performance. So one needs at least one
processing instance per biological neuron, if not several.
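
A toy sketch of that point: a "parallel" network stepped on a
strictly sequential machine. Double buffering (the new state is
computed entirely from the old state) makes the sequential loop
equivalent to a simultaneous update. The network is a random toy
graph.

  import numpy as np

  n = 100
  W = np.random.randn(n, n) * 0.1    # toy connectivity matrix
  state = np.random.rand(n)

  for _ in range(50):
      # Reads only the previous state: same result as if all units
      # had updated simultaneously.
      state = np.tanh(W @ state)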

> to do a complete run-through an average of 10 times/sec. Since it would be a
> waste to have a separate program running simultaneously for every neuron
> (10 billion?) it would be

Waste of space, not of time. And make no mistake: we're not talking
about programs here. We're strictly on the dedicated-hardware train
here, with several of these babies allocated to each neuron. You're
in dire trouble when you try to assign a significant number of
neurons to a single processor: there are many orders of magnitude
hidden in the wetware's parallelism.
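
Back of the envelope (neuron and synapse counts are the usual rough
figures, not hard data):

  neurons, synapses_per, update_hz = 1e10, 1e4, 10
  updates_per_sec = neurons * synapses_per * update_hz   # ~1e15/s

  # One update per cycle on a hypothetical 1 GHz sequential processor:
  print(f"~{updates_per_sec / 1e9:.0e} processors needed")  # ~1e6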

> easier to have each neuron be stored as rapidly-accessed data until used. How much
> data would be required to store accurately the state of each neuron? I don't know,
> but I suspect it's easily on the order of a megabyte at least - as it
> would have to store the entire array of unique molecular states the neuron is in.
 
Data is not a problem: molecular memories are damn dense. The real
problem is how quickly you can mangle these data.
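
Back of the envelope, using Paul's own figures (1 MB of state per
neuron, ~10 billion neurons, a full run-through 10 times per second):

  total_bytes = 1e10 * 1e6
  print(f"storage   {total_bytes:.0e} B")        # 1e16 B: dense memory holds it
  print(f"bandwidth {total_bytes * 10:.0e} B/s") # 1e17 B/s: the real problem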

> At this point it's all guess work, but my point is that even when we achieve
> atomic scale transistor densities, the real challenge will be organizing those
> transistors to emulate the entire brain, which is itself a molecular switching
> computer.
 
The underlying hardware must obviously have modest to middling
connectivity, due to constraints derived from the physics of
computation, but the switch topology has very little to do with the
original layout of the wetware circuitry. We're at the emulation
level; the physical level is visible here only dimly (essentially, as
signalling latency).

> **The challenge is not the speed or density, which will eventually give us the
> _potential_ to run a human brain 1000's of times faster than our own. No, the
> real challenge

10^3 is very much possible, 10^6 is stretching it, 10^9? Maybe, but
not very likely.

> is creating something complex and coherent enough to emulate the brain itself.
> I suspect the hardware and software bottlenecks in actually doing so will be
> difficult enough to close the gap between brain augmentation (IA) and
> human-level AI considerably.
 
Well, in principle all you need is a really accurate MD forcefield and
the full structural data. It would be impractical, but in theory it
would work, and with very little code at that.
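
To illustrate how little code: the integrator core of a brute-force
MD engine is just velocity Verlet over a force function. The harmonic
toy force below stands in for the accurate forcefield; that
forcefield and the structural data are the (enormous) missing inputs.

  import numpy as np

  def velocity_verlet(pos, vel, force, mass, dt, steps):
      """Integrate Newton's equations; force(pos) -> forces array."""
      f = force(pos)
      for _ in range(steps):
          vel += 0.5 * dt * f / mass
          pos += dt * vel
          f = force(pos)
          vel += 0.5 * dt * f / mass
      return pos, vel

  # Toy stand-in forcefield: harmonic springs pulling atoms to origin.
  pos, vel = np.random.randn(100, 3), np.zeros((100, 3))
  pos, vel = velocity_verlet(pos, vel, lambda p: -p, 1.0, 0.01, 1000)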

> > > Therefore, my primary position is that the gap between uploading
> > > and human-level AI is much narrower than is often argued.
> >
> > Gap in which context? Brute-force uploading (aka neuroemulation)
> > requires many orders of magnitude more power than straight bred
> > AI. And the translation procedure to the more compact target encoding
> > is nonobvious (i.e. I do have hints, but can't be sure it's going to
> > work).
>
> That is also completely non-obvious. I have yet to hear a single convincing
> argument that sufficient *speed* and density automatically yields human-level
> AI. You will need complexity, and where by chance will you be getting it?
> Until this can be convincingly answered, all human-level AI claims should be
> kept in every Skeptic's top 5.
 
Well, I keep repeating it over and over, but apparently people either
don't listen or keep forgetting it. You can't specify a true AI,
we're not smart enough for that by far (even you, Eliezer). But we
know that AI solves problems, and it is comparatively easy to
evaluate individuals according to their task-solving performance. You
need a testing framework, a darwin-in-machina framework, and really
monstrous power to run it (that's why you probably need nano). One
starts on a small scale, feeding trivial problems to trivial systems
first. As an analogy, our neurons are not much different from those
of many other animals, even evolutionarily old ones. It's all in the
details of several hundred neuron classes, and in how they are
assembled. Of course nothing prevents us from inserting what we think
is useful into the pool manually (i.e. we insert a protoneuron into a
sea of cells). It will of course get blown away eventually, but we
save time by not having to reinvent the wheel.
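
A toy sketch of such a darwin-in-machina loop, where candidates are
scored purely on task-solving performance (genome encoding, tasks,
and rates are all placeholders):

  import random

  def fitness(genome, tasks):
      """Fitness = how many of the trivial problems this candidate solves."""
      return sum(task(genome) for task in tasks)

  def mutate(genome, rate=0.05):
      return [random.random() if random.random() < rate else g for g in genome]

  def evolve(pop, tasks, generations=100):
      for _ in range(generations):
          pop.sort(key=lambda g: fitness(g, tasks), reverse=True)
          survivors = pop[:len(pop) // 2]     # truncation selection
          pop = survivors + [mutate(g) for g in survivors]
      return max(pop, key=lambda g: fitness(g, tasks))

  # Trivial problems for trivial systems: a task is "solved" when one
  # gene exceeds a threshold.
  tasks = [(lambda g, i=i, t=t: g[i] > t) for i, t in enumerate((0.5, 0.7, 0.9))]
  pop = [[random.random() for _ in range(3)] for _ in range(20)]
  pop.append([0.9, 0.9, 0.9])   # a hand-inserted candidate (the "protoneuron")
  best = evolve(pop, tasks)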
 
> **Since no one has actually built or designed a theoretical human-level AI, how can
> anyone possibly claim what it takes to build one? This seems completely absurd

I'm not saying my approach will work, I'm saying why other approaches
are imo doomed, and outlining a model without their listed
shortcomings. If you have a new, better suggestion for how to
bootstrap AI, I'm all ears.

> to the point of self-contradiction! As so many are fond of saying around here -
> extraordinary claims require extraordinary proof. To re-iterate the obvious,
> until someone can prove otherwise, a human-level AI will have to equal the
> complexity of the human brain.
 
No one will argue with that, I think. But of course nobody requires
you to produce the complexity by hand. Your existence didn't require
a creator, either.

> Paul Hughes
 


