Making neural nets more like real neurons

From: Adrian Tymes (wingcat@pacbell.net)
Date: Sun Mar 31 2002 - 21:41:20 MST


Did a bit of reflecting on artificial neural nets this morning, and came
to an interesting conclusion I'd like to bounce off y'all. Take
everything below this paragraph with the appropriate mass of salt. Most
people on this list are probably already familiar with ANNs, but here's
a quick overview for those who aren't. (Those who are, skip the next
paragraph.)

An ANN consists of a series of nodes designed to mimic biological
neurons, thus the name. Each node takes a series of inputs, multiplies
each input by a weight, then fires if the total exceeds a threshold.
Other nodes take this firing - or lack thereof - as an input, and so on
until one or more nodes connected to an external output fire (or
don't). These networks can "learn" after a test input works its way
through: something outside the network determines whether the output is
good or bad, given the input. If good, the weights on the connections
that fired are increased; if bad, those weights are decreased.
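
To make that concrete, here's a minimal sketch of such a node in
Python. It is only an illustration of the scheme described above; the
class, its names, and the learning rate are my own invention rather
than anything standard.

    import random

    class Node:
        def __init__(self, n_inputs, threshold=1.0, learning_rate=0.1):
            # one weight per input, started small and random
            self.weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
            self.threshold = threshold
            self.learning_rate = learning_rate
            self.last_inputs = None
            self.fired = False

        def step(self, inputs):
            # weighted sum of the inputs; fire if it exceeds the threshold
            total = sum(w * x for w, x in zip(self.weights, inputs))
            self.last_inputs = inputs
            self.fired = total > self.threshold
            return 1.0 if self.fired else 0.0

        def feedback(self, good):
            # the external meta-signal: if the node fired and the outcome
            # was judged good, strengthen the weights that contributed;
            # if bad, weaken them
            if not self.fired or self.last_inputs is None:
                return
            sign = 1.0 if good else -1.0
            self.weights = [w + sign * self.learning_rate * x
                            for w, x in zip(self.weights, self.last_inputs)]

So a run looks like: node = Node(3); node.step([1.0, 0.0, 1.0]); and
then node.feedback(good=True) or node.feedback(good=False), depending
on what the outside judge says about the output.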

This differs from real biological neurons in that there is no abstract
meta-signal to distinguish good from bad in the living system. A weight
(the strength and other properties of a synapse) increases when the
synapse fires, and decays if it does not fire for a while. (This decay
rate depends on which properties of the synapse have been enhanced:
while a given synapse may become less sensitive after not firing for a
day, newly formed synapses do not go away entirely at nearly as high a
rate.) Thus, biological neurons can learn by repeating a pattern of
stimulus - but this requires something to make the stimulus repeat.
Where humans are involved, say with a parent or teacher of a
young child, this repetition can be provided by a human, in the hopes
that not only the stimulus, but also the concept of deliberate
repetition, is acquired - "learning how to learn".
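
A rough sketch of that fire-strengthen/idle-decay rule, again in
Python with made-up constants, might look like this. Note that there
is no good/bad argument anywhere - repetition alone does the teaching:

    def update_synapse(weight, fired, strengthen=0.05, decay=0.01,
                       newly_formed=False):
        # a synapse that fires gets a little stronger; one that sits
        # idle slowly decays (but not below zero).  Newly formed
        # synapses are taken, per the paragraph above, to decay more
        # slowly than an old synapse's enhanced sensitivity does.
        if fired:
            return weight + strengthen
        return max(0.0, weight - (0.2 * decay if newly_formed else decay))

    # synapses on a repeated stimulus keep strengthening; the rest decay
    weights = [0.5, 0.5, 0.5]
    pattern = [True, False, True]   # the stimulus that keeps repeating
    for _ in range(100):
        weights = [update_synapse(w, fired)
                   for w, fired in zip(weights, pattern)]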

Certain patterns, or at least tendencies towards them, are pre-wired at
birth (the initial weights of the system), with help from evolution.
Also, there are occasional random bursts - "noise", as it were - which
are inevitable given the non-perfect nature of organic systems (a
biological machine will not replicate or act as precisely as the metal
and silicon robots we are accustomed to), but which can be tapped and
guided as, among other uses, imagination. (These random bursts also
allow us the eventual ability to make connections between neurons that
are not connected at all to begin with - say, by one neuron running near
another, firing, and the associated electric/magnetic fields stimulating
the neighboring neuron as if a synapse had fired.)
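
In the same toy terms, the pre-wiring and the noise might be sketched
like this (the constants and names are, again, just placeholders):

    import random

    def initial_weights(n, prewired=None):
        # "pre-wiring": start from whatever weights evolution hands us
        # (here just a hard-coded dict standing in for that), falling
        # back to small random values where nothing is specified
        prewired = prewired or {}
        return [prewired.get(i, random.uniform(0.0, 0.1)) for i in range(n)]

    def maybe_noise_fire(fired, noise_rate=0.01):
        # occasional spontaneous firing: with small probability a neuron
        # fires even though its inputs did not push it over threshold,
        # which can forge associations the original wiring never had
        return fired or (random.random() < noise_rate)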

If this is correct... then would the above work as an algorithm for
emulating uploaded animals? (Including humans, once Moore's Law gets us
enough computational power, and presumably after the technology to scan
and translate living neurons, including synapses, into such a net has
been refined on lower animals.)


