From: Adrian Tymes (wingcat@pacbell.net)
Date: Mon Apr 01 2002 - 14:18:47 MST
Rüdiger Koch wrote:
> On Monday 01 April 2002 06:41, you wrote:
>>An ANN consists of a series of nodes designed to mimic biological
>>neurons, thus the name. Each node takes a series of inputs, multiplies
>
> Don't confuse ANNs as widely implemented with biological neurons. Most
> ANN simulators don't model neurons at all. They are connectionist
> models; neural networks are connectionist too, but there the similarity
> ends. There are more realistic models available.
Great. So, I rediscovered what someone else has discovered. Again.
-_-
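(For the record, here is the sort of node I was describing, as a quick
Python sketch. The function name and numbers are mine, purely for
illustration, not anyone's actual simulator:

    import math

    def node_output(inputs, weights, bias=0.0):
        # multiply each input by its weight, sum, and squash the total
        # through a logistic activation function
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    print(node_output([0.5, 1.0], [0.8, -0.3]))  # prints roughly 0.52

The weighted sum is where the quoted sentence was headed: each input is
multiplied by a per-connection weight before summing.)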
>>This differs from real biological neurons in that the living system has
>>no abstract meta-signal to distinguish good from bad. A weight (the
>>strength and other properties of a synapse) increases if the synapse
>>fires, and decreases (decays) if it does not fire for a while. (The
>>decay rate depends on which properties of the synapse have been
>>enhanced: while a given synapse may become less sensitive after not
>>firing for a day, newly formed synapses do not disappear entirely at
>>nearly so high a rate.) Thus, biological neurons can learn by repeating
>>a pattern of stimulus - but this requires something to determine whether
>>the stimulus repeats. Where humans are involved, say with a parent or
>>teacher of a young child, this repetition can be provided by a person,
>>in the hopes that not only the stimulus but also the concept of
>>deliberate repetition is acquired - "learning how to learn".
>
> This is called Hebbian learning and is available in all decent ANN
> simulators. I am not aware of any meta-signals. If you have literature
> on this topic, please point me to it! Hebbian learning does not seem to
> be enough, however. Something is missing....
Like, say, initial weights geared towards the problem (though probably
not encoding it exactly) and an ability to form connections where none
were before?
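To put numbers on that: here is a toy Hebbian update with decay, in the
same spirit as the description quoted above. The rates and names are
invented for illustration; no real simulator's API is being quoted:

    def hebbian_step(w, pre, post, rate=0.1, decay=0.01):
        # strengthen the weight when pre- and post-synaptic nodes fire
        # together; otherwise let it slowly decay toward zero
        if pre > 0.5 and post > 0.5:
            return w + rate * pre * post
        return w * (1.0 - decay)

    w = 0.2  # the initial weight - the "pre-wiring" I mean above
    for _ in range(10):
        w = hebbian_step(w, pre=1.0, post=1.0)
    print(w)  # repeated co-activation drives the weight up, to 1.2

Note what this rule never does: create a connection where none existed.
That structural growth is exactly the second thing I'm suggesting is
missing.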
>>Certain patterns, or at least tendencies towards them, are pre-wired at
>>birth (the initial weights of the system), with help from evolution.
>
> .... Every brain (of a given species) is locally very different even
> though it globally looks the same. It is not plausible that a genome
> with maybe 100 MB of data could describe the synapses of a human brain,
> which amount to well over a petabyte of data. The genome is not a
> blueprint! It is rather a program, the execution of which builds a
> phenotype. My hypothesis is that the initial microanatomy of the brain
> is such that self-organization by Hebbian and maybe other types of
> local learning is possible.
Like I said. We're in violent agreement here.
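Since we agree, let me make the program-not-blueprint point concrete
with a toy sketch: a "genome" of two wiring rules expands into many
synapses, and a different seed gives a locally different but globally
similar network, which matches the observation above. Every name and
number here is invented:

    import random

    def grow_network(n_neurons, rules, seed=0):
        # execute a tiny "genome" (a list of wiring rules) to build a
        # synapse table far larger than the genome itself
        rng = random.Random(seed)
        synapses = {}
        for i in range(n_neurons):
            for j in range(n_neurons):
                p, w0 = rules[(i + j) % len(rules)]  # rule for this pair
                if i != j and rng.random() < p:
                    synapses[(i, j)] = w0
        return synapses

    genome = [(0.10, 0.5), (0.02, 0.9)]  # two rules stand in for ~100MB
    net = grow_network(100, genome)
    print(len(net), "synapses grown from", len(genome), "rules")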
>>If this is correct...then, would the above work as an algorithm for
>>emulating uploaded animals? (Including humans, once Moore's Law gets us
>
> Nope. But I would not worry about this part of uploading technology.
> Once scanning technology is developed enough, we can scan a mouse and
> instantiate it in different neuron models until we find one in which it
> eats virtual cheese ;-)
But if we don't have the models, we don't know what to scan for...not
exactly, anyway.
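Schematically, the mouse-and-cheese procedure is a search loop like the
one below - but every function in it is a hypothetical stand-in, and the
scan argument is where my objection bites: the scan has to capture
whatever the winning model needs as input, so the models end up
constraining the scanner anyway.

    def find_working_model(scan, candidate_models, behaves):
        # try each neuron model against the same scan data until the
        # simulated animal passes a behavioral test (virtual cheese)
        for model in candidate_models:
            sim = model.instantiate(scan)  # hypothetical scan-to-sim step
            if behaves(sim):
                return model
        return None  # no model reproduced the behavior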