From: Mark Crosby (crosby_m@rocketmail.com)
Date: Wed Sep 03 1997 - 07:54:45 MDT
Regarding my complaint that "researchers seem to be modeling the
brain as a vast network of general-purpose neurons", Anders Sandberg
takes me to task:
<Hmm, what models of the brain are you talking about? All the models I
have seen in my work as a computational neuroscientist tend to be
filled with little boxes of semi-independent systems; nobody tries to
model the brain as a "single" neural net (both because it would be too
hard, and because the neuroscientific data shows that everything is
not connected to everything else).>
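To picture what Anders means, here is a rough Python sketch of my own
(the module sizes and the 5% cross-wiring density are invented, purely
illustrative): two "boxes" that are densely connected internally but
only sparsely wired to each other, rather than one uniform net.

import numpy as np

# Two semi-independent modules: dense within-module weights, sparse
# between-module weights (illustrative sizes and sparsity).
rng = np.random.default_rng(3)
n = 10                                    # units per module
dense_a = rng.normal(size=(n, n))         # connections inside module A
dense_b = rng.normal(size=(n, n))         # connections inside module B
mask = rng.random((n, n)) < 0.05          # keep ~5% of cross connections
sparse_ab = rng.normal(size=(n, n)) * mask

# Whole-system matrix: strong blocks on the diagonal, sparse
# off-diagonal blocks, instead of everything-to-everything wiring.
W = np.block([[dense_a, sparse_ab],
              [sparse_ab.T, dense_b]])
print("cross-module density:", (sparse_ab != 0).mean())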
You're right again ;) I'm sitting on the sidelines, not directly
involved in the nitty-gritty of neural network modeling, and probably
creating a straw man based on cursory glances at the few fragments
I've had time to glimpse, plus my impatience to see someone put it all
together into practical applications.
Still, part of what I was referring to is the apparent bias of some
neural and Alife modelers against *anything* that looks like a
rule-based or 'software' approach, in favor of, for example, strictly
memoryless local learning.
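By "strictly memoryless local learning" I mean something like the
following sketch (the sizes, learning rate, and the tanh/normalization
choices are mine, purely illustrative): each weight update uses only
the activity of the two units a synapse connects, and no past examples
are ever stored.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))   # weights from 8 inputs to 4 units
eta = 0.01                               # learning rate

def local_hebbian_step(W, x):
    # One memoryless update: each synapse sees only y_i * x_j.
    y = np.tanh(W @ x)                   # post-synaptic activity
    W += eta * np.outer(y, x)            # purely local Hebbian term
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W

for _ in range(100):                     # a stream of inputs, none retained
    W = local_hebbian_step(W, rng.normal(size=8))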
In that Connectionist List summary post that I mentioned before
(http://neuro.psy.soton.ac.uk/~at/Archive/0026.html), Professor Danny
Silver notes:
< I believe the larger and more important context involves the issues
of what has been called "life-long learning" and "learning to learn"
and "consolidation and transfer of knowledge". I have no idea why so
many researchers continue to pursue the development of the next best
inductive algorithm or architecture (be it ANN or not) when many of
them understand that the percentage gains in predictive accuracy based
solely on an example set are marginal in comparison to the use of prior
knowledge (selection of inductive bias). >
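Silver's point about prior knowledge can be made concrete with a small
sketch (the tasks and data here are invented): the "inductive bias" is
just a weight vector learned on an earlier, related task, reused as
the starting point for a new task with few examples.

import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, w0, steps=200, eta=0.1):
    # Plain gradient descent on logistic loss, starting from w0.
    w = w0.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))    # sigmoid predictions
        w -= eta * X.T @ (p - y) / len(y)     # gradient step
    return w

w_true = rng.normal(size=5)                   # structure shared across tasks
X_old = rng.normal(size=(100, 5))             # plentiful data for task A
y_old = (X_old @ w_true > 0).astype(float)
w_prior = train_logreg(X_old, y_old, np.zeros(5))   # consolidated knowledge

X_new = rng.normal(size=(20, 5))              # scarce data for related task B
y_new = (X_new @ w_true > 0).astype(float)
w_transfer = train_logreg(X_new, y_new, w_prior)    # start from prior
w_scratch = train_logreg(X_new, y_new, np.zeros(5)) # example set alone

With only twenty examples, the transferred start typically lands closer
to the true weights than the from-scratch run, which is Silver's
marginal-gains point in miniature.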
Prof. Vassilis G. Kaburlasos adds:
< That is, to simulate convincingly a biological system we should
probably not be dealing solely with vectors of real numbers. The
capacity to deal with symbols and other types of data also merits
attention. In other words, besides memory and more global learning
capabilities, it will be advantageous to be able to handle jointly
disparate data such as real numbers, fuzzy sets, propositional
statements, etc. >
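Again a toy sketch (the field names are invented) of what "handling
jointly disparate data" might look like: one record mixing a real
number, a fuzzy set, and a propositional statement, with a single
similarity function spanning all three.

from dataclasses import dataclass

@dataclass
class Observation:
    temperature: float          # plain real number
    warmth: dict                # fuzzy set: label -> membership in [0, 1]
    sensor_ok: bool             # propositional statement

def similarity(a: Observation, b: Observation) -> float:
    # Crude joint similarity over heterogeneous fields, averaged.
    s_real = 1.0 / (1.0 + abs(a.temperature - b.temperature))
    labels = set(a.warmth) | set(b.warmth)
    s_fuzzy = 1.0 - sum(abs(a.warmth.get(l, 0) - b.warmth.get(l, 0))
                        for l in labels) / max(len(labels), 1)
    s_prop = 1.0 if a.sensor_ok == b.sensor_ok else 0.0
    return (s_real + s_fuzzy + s_prop) / 3.0

x = Observation(20.5, {"warm": 0.7, "hot": 0.1}, True)
y = Observation(22.0, {"warm": 0.9, "hot": 0.3}, True)
print(similarity(x, y))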
Dr. Peter Cariani mentions a host of additional assumptions in
connectionist research that need to be challenged, such as place
coding, scalar signals, synchronous or near-synchronous operation, and
fan-out of the same signals to all target elements (a toy illustration
of the first two follows his quote below). Cariani concludes:
< I definitely agree with you that once we understand the functional
organization of the brain as an information processing system, then we
will be able to build devices that are far superior to the biological
ones. My motto is: "keep your hands wet, but your mind dry" -- it's
important to pay attention to the biology, to not project one's
preconceptions onto the system, but it's equally important to keep
one's eyes on the essentials, to avoid getting bogged down in largely
irrelevant details. >
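To illustrate two of the assumptions Cariani questions (the spike
times below are invented): the usual connectionist move compresses a
spike train into one scalar rate, which throws away exactly the
temporal structure, the inter-spike intervals, that a temporal code
would use.

import numpy as np

spikes = np.array([0.010, 0.022, 0.034, 0.070, 0.082])  # spike times (s)
window = 0.1                                            # observation window

rate_code = len(spikes) / window   # scalar-signal assumption: one number
intervals = np.diff(spikes)        # temporal structure the rate discards

print(f"rate code: {rate_code:.0f} Hz")
print("inter-spike intervals (s):", intervals)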
The 'you' in the above citations refers to Asim Roy, organizer of a
discussion panel, "Connectionist Learning: Is It Time to Reconsider
the Foundations?", at the June 1997 International Conference on Neural
Networks. This panel discussed the following three questions (a toy
sketch contrasting the first two follows the list):
<1. Should memory be used for learning? Is memoryless learning an
unnecessary restriction on learning algorithms? [Snip]
2. Is local learning a sensible idea? Can better learning algorithms
be developed without this restriction? [Snip]
3. Who designs the network inside an autonomous learning system such
as the brain?>
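As that toy contrast for the first two questions (the data and the
perceptron rule are my own choices, not the panel's): a memoryless,
local learner sees each example once and discards it, while a
memory-using learner stores the example set and sweeps it repeatedly.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]))   # linearly separable labels

# Memoryless: one pass; each example is discarded after its local update.
w_ml = np.zeros(3)
for x_i, y_i in zip(X, y):
    if y_i * (w_ml @ x_i) <= 0:
        w_ml += y_i * x_i                     # local perceptron rule

# Memory-using: keep the whole example set and revisit it many times.
w_mem = np.zeros(3)
for _ in range(20):                           # repeated sweeps over memory
    for x_i, y_i in zip(X, y):
        if y_i * (w_mem @ x_i) <= 0:
            w_mem += y_i * x_i

print("one-pass accuracy:    ", (np.sign(X @ w_ml) == y).mean())
print("with-memory accuracy: ", (np.sign(X @ w_mem) == y).mean())

On this toy problem the learner with stored examples usually separates
the data perfectly, while the single memoryless pass may not; the
panel's question is whether that restriction buys anything.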
But I have to admit I'm just an observer, not really a participant,
and shouldn't be so critical of people doing very *difficult* work.
Mark Crosby