Re: Making neural nets more like real neurons

From: Rüdiger Koch (rkoch@rkoch.org)
Date: Mon Apr 01 2002 - 00:30:16 MST


On Monday 01 April 2002 06:41, you wrote:
>
> An ANN consists of a series of nodes designed to mimic biological
> neurons, thus the name. Each node takes a series of inputs, multiplies

Don't confuse ANNs in their widespread implementations with biological
neurons. Most ANN simulators don't model neurons at all. They are
connectionist models; biological neural networks are connectionist too, but
there the similarity ends. There are more realistic models available. See

http://www.bbb.caltech.edu/GENESIS
which tries to model biological neurons in all their complexity, with the aim
of helping us understand NNs.

http://sourceforge.net/projects/amygdala/
For a simulator that tries to capture the information-theoretical aspects in
order to solve problems such as spatio-temporal pattern recognition.
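The quoted text above describes the basic connectionist unit: a node takes a
series of inputs and multiplies each by a weight. A minimal sketch of such a
node follows; the function name and the sigmoid activation are illustrative
choices, not taken from any of the simulators linked above:

```python
import math

def ann_node(inputs, weights, bias=0.0):
    """Classic connectionist unit: a weighted sum of the inputs
    (plus a bias) passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# With zero net input, the sigmoid sits exactly at 0.5;
# any real-valued input maps into the open interval (0, 1).
print(ann_node([1.0, 0.5], [0.8, -0.4]))
```

Note how little of a biological neuron survives here: no membrane dynamics, no
spike timing, just a static input-output mapping.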

>
> This differs from real biological neurons in that there is no abstract
> meta-signal to distinguish good from bad in the living system. A weight
> (amount and other properties of the synapses) gets increased if it
> fires, and decreases (decays) if it does not for a while. (This decay
> rate depends on which properties of the synapse have been enhanced:
> while a given synapse may become less sensitive after not firing for a
> day, newly formed synapses do not go away entirely at nearly as high a
> rate.) Thus, biological neurons can learn by repeating a pattern of
> stimulus - but this requires something to determine whether the stimulus
> repeats. Where humans are involved, say with a parent or teacher of a
> young child, this repetition can be provided by a human, in the hopes
> that not only the stimulus, but also the concept of deliberate
> repetition, is acquired - "learning how to learn".

This is called Hebbian learning and is available in all decent ANN
simulators. I am not aware of meta-signals. If you have literature on this
topic, please point me to it! Hebbian learning seems not to be enough,
however. Something is missing....
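The rule the quoted text describes (a weight strengthens when pre- and
post-synaptic activity coincide, and decays when the synapse stays silent) can
be sketched as follows. The learning rate and decay rate are illustrative
values, not taken from any particular simulator:

```python
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """One step of Hebbian learning with passive decay:
    strengthen w in proportion to correlated pre/post activity,
    and let w relax slowly toward zero otherwise."""
    return w + lr * pre * post - decay * w

w = 0.5
# Correlated activity strengthens the synapse...
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
# ...while silence lets the weight decay again.
for _ in range(10):
    w = hebbian_update(w, pre=0.0, post=0.0)
```

The decay term is what keeps the plain Hebbian rule from growing weights
without bound, but as noted above, this purely local rule still lacks any
global "good vs. bad" signal.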

> Certain patterns, or at least tendencies towards them, are pre-wired at
> birth (the initial weights of the system), with help from evolution.

.... Every brain (of a given species) is locally very different even though
it globally looks the same. It's not plausible that a genome with maybe 100 MB
of data could describe the synapses of a human brain, which amount to well
over a petabyte of data. The genome is not a blueprint! It's rather a
program, the execution of which builds a phenotype. My hypothesis is that the
initial microanatomy of the brain is such that self-organization by Hebbian
and maybe other types of local learning is possible.

> If this is correct...then, would the above work as an algorithm for
> emulating uploaded animals? (Including humans, once Moore's Law gets us

Nope. But I would not worry about this part of uploading technology. Once
scanning technology is developed enough, we can scan a mouse and instantiate
it in different neuron models until we find one in which it eats virtual
cheese ;-)

-- 
Rüdiger Koch
http://rkoch.org
Mobile: +49-179-1101561


This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:13:10 MST