RE: Making neural nets more like real neurons

From: Colin Hales (colin@versalog.com.au)
Date: Mon Apr 01 2002 - 22:53:05 MST


Rüdiger Koch
> I am not aware of meta-signals. If you have literature to this
> topic, please point me to it! Hebbian learning seems not to
> be enough, however. Something is missing....

Yeah. I have spent serious time looking in detail at the inner workings of
human neurons, purely from the point of view of deriving the computational
equivalent on my own. A neuron is a beautiful, enormous city of operations.
Garbage department, power plant, central government, local councils,
roadways, bridges, repair crews. Incredible.

When I then went and looked at all the ANN models, I couldn't believe how
far off in the weeds they all were.

It's a really interesting situation. There's a level of interpretation where
you see the computational intent. Assume/simplify more and you get a
watered-down, not very clever NN model. Go the other way, deeper into the
detail, and you start to simulate the electrochemical workings directly,
burying the intent in complexity and really losing the plot. I don't think
the fastest mega-computer
on the planet could simulate the entire workings of even 1 neuron, let alone
a brain-full.

It's a question of knowing what to ignore. After 20 years of musing and then
months of busting my brain on it, here's the 10-point cook's tour of the big
issues that I arrived at:

1) The key currency in a neuron is charge, not potential, although they are
related. It's charge manipulation that is actually going on, from which the
potentials can be observed. A neuron is a leaky membrane bag filled with
charge. In varying-density/type clusters over the surface are 'windows' that
control charge flow through the membrane (in and out). It's like a 3D Mexican
wave of windows, guiding charge to its destination. The niftiest doors (down
the axons) are like time-delayed double glazing! At this level, you really
need to model it with flow dynamics or Maxwell's equations (arrrgghhh!!).
Charge flows in all directions, not just towards the axon hillock. In complex
dendrite systems, you can even get 'firings' within the dendrite structure
(e.g. Purkinje cells). There's a toy 'leaky bag of charge' sketch after this
list, for the code-minded.
2) Parallelism rules. The order of execution of the individual neuron models
in an overall NN application is critical. Synchrony is the key word. Leave
any PC operating system to sort it out for you and you're dead. Well, not
dead: you'll end up with a greyed-out version of what you'd think it should
be. It just won't work very well. Think old age. 99% of the NN models in the
world are kind of senile. (See the update-order sketch after this list.)
3) The glial cells, combined with the electro-chemistry _outside_ the
neurons, play a pivotal role in learning, based on the specific behaviour in
2). It's what everybody has been ignoring (ie what's _not_ a neuron) that
sorts out learning. Individual weights are adjusted, full time, depending on
their instantaneous role in whatever activation regime is happening (there's
a toy modulated-Hebbian sketch after this list). Check out this, hot from Mr
Bradbury: his recent post "SCIENCE: progress on various fronts", down the
bottom. It's typical of what's been coming out of the labs in the last 6
months.
4) It appears there's a huge number of physical types of neurons. The zoo of
dendrite/synapse types alone is phenomenal. However, when you look at it in
detail, this begins to disappear. As far as I can tell, there appear to be
only two basic flavours of neuron needed, say type I and type II, and the
closer you get to the sensing, the more type I is needed. The brain, however,
uses the same basic neurons and simply ignores the difference between I and
II, as far as I can tell. It didn't invent a new neuron; it just uses the
bits it needs, and it has heaps of them to spare. The type I neuron is there
to allow spatio-temporal invariance during property extraction. E.g. if I
talk faster, you can still decode it the same.
5) Numerical 1D, 2D and 3D power spectral densities (Fourier transforms, kind
of) are everywhere in the audio and visual sensory decoding (a short-time
spectrum sketch is after this list too).
6) The brain stores nothing. As in no _thing_. It doesn't have to.
7) Short term memory, long term memory and the learning process are based on
the same synapse creation/alteration system, tuned up by appropriate
chemistry and interconnects. The 'leaky membrane' part of the process is
useful in the 'spatio-temporal invariance' that I mentioned above. Otherwise
it plays no part in memory.
8) The brain is entirely models of reality that transduce to a common
currency in the thalamocortical area, with which any sensory model and any
internal belief model may interact. Left unco-ordinated, purple could smell
like dogshit or hope could feel like your left nipple was on fire, for
example. :-) Yes, I'm crude sometimes. :-)
9) The subjective experiential side of the mind results from symbol
allocation in the models.
10) Consciousness results from the structural re-entrancy and co-ordination
of the models: the way and rate at which the models are executed and
interact.
11) Bonus stop-off. Past, present and future are all models. Goals (desires)
are models of future events, created from interaction with belief models.
Intent is a selection from the goal models, activated in the present to cause
goals to happen, and the past is models of goals completed, which include the
sensory events at the time, as necessary.
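
For the code-minded, here are a few toy sketches of what I'm waving my hands
at above. First, the 'leaky bag of charge' business from 1): a minimal leaky
integrate-and-fire neuron in Python/numpy. It's my own toy, nothing like a
faithful model, and every parameter value (tau, resistance, thresholds) is
just a plausible-looking guess:

import numpy as np

def lif_neuron(input_current, dt=1e-4, tau=20e-3, r_m=1e7,
               v_rest=-70e-3, v_thresh=-54e-3, v_reset=-80e-3):
    # Toy leaky integrate-and-fire neuron. Charge leaks across the
    # membrane (time constant tau) while the input current charges it
    # up; crossing threshold emits a 'spike' and resets the membrane.
    v = v_rest
    spikes = []
    trace = np.empty(len(input_current))
    for i, i_in in enumerate(input_current):
        # dV/dt = (-(V - V_rest) + R*I) / tau : leak plus injected charge
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
        trace[i] = v
    return np.array(spikes), trace

# Constant 2 nA input for 200 ms of simulated time
spikes, trace = lif_neuron(np.full(2000, 2e-9))
print(len(spikes), "spikes in 200 ms")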
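
Second, the synchrony point in 2). The same tiny recurrent net, stepped two
ways: every unit reading the previous tick's state (synchronous), versus
units updating in place in whatever order the loop happens to run. The
network and weights are made up; the point is only that the two schedules
diverge:

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5)) * 0.5   # toy recurrent weights
x0 = rng.normal(size=5)             # initial activations

def step_synchronous(x, W):
    # Every unit sees the *same* previous state: one coherent 'tick'.
    return np.tanh(W @ x)

def step_in_place(x, W):
    # Units update one at a time and immediately see each other's new
    # values, so the result depends on the (arbitrary) update order.
    x = x.copy()
    for i in range(len(x)):
        x[i] = np.tanh(W[i] @ x)
    return x

a, b = x0.copy(), x0.copy()
for _ in range(20):
    a = step_synchronous(a, W)
    b = step_in_place(b, W)
print("synchronous:", np.round(a, 3))
print("in-place   :", np.round(b, 3))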
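
Third, a cartoon of 3): a plain Hebbian weight update gated by a single
'glial/chemistry' modulation factor, so whatever correlations are active
while the modulation is high are the ones that get written in. The modulation
signal here is just a placeholder scalar; the real extra-neuronal chemistry
is obviously far richer:

import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(10, 10))   # toy synaptic weights

def hebbian_step(w, pre, post, modulation, lr=0.01, decay=0.001):
    # dw = modulation * lr * (post outer pre), plus a small decay.
    # With modulation ~ 0 the synapses coast; when it's high, the
    # currently-active correlations get stamped in.
    return w + modulation * lr * np.outer(post, pre) - decay * w

pre = rng.random(10)
post = np.tanh(w @ pre)
for t in range(100):
    modulation = 1.0 if 40 <= t < 60 else 0.1   # a 'learning window'
    w = hebbian_step(w, pre, post, modulation)
    post = np.tanh(w @ pre)
print("mean |w| after the run:", round(float(np.abs(w).mean()), 4))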
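
And last, the power-spectral-density point in 5): a bare-bones short-time
spectrum of a made-up audio-like signal, using numpy's FFT. Sample rate,
window length and hop are arbitrary choices for the example:

import numpy as np

fs = 8000                       # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)
# 'Audio': a 440 Hz tone that jumps to 880 Hz halfway through
sig = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t),
                        np.sin(2 * np.pi * 880 * t))

win, hop = 256, 128
frames = [sig[i:i + win] * np.hanning(win)
          for i in range(0, len(sig) - win, hop)]
# Power spectral density per frame: |FFT|^2 of the windowed slice
psd = np.array([np.abs(np.fft.rfft(f)) ** 2 for f in frames])
freqs = np.fft.rfftfreq(win, 1 / fs)

# Dominant frequency (to bin resolution) in the first and last frame
print("start:", freqs[psd[0].argmax()], "Hz;",
      "end:", freqs[psd[-1].argmax()], "Hz")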

When you look at the brain at the computational-equivalence level and get it
right, it's like a sea of uniformity, as far as I can tell. All the same
neurons, all models. Simplicity in the midst of a maze of complexity.

I'd be interested in anyone else's musings on this kind of thing. Maybe off
list. My Easter de-lurk is now complete. Back to the real world.

cheers

Colin Hales


