From: Mark Crosby (crosby_m@rocketmail.com)
Date: Tue Sep 02 1997 - 13:37:43 MDT
Regarding Stephen Thaler’s Creativity Machine, I wrote that "this
two-layer approach sounds like traditional client-server architecture
compared to three-tier approaches" and cited a Pat Hayes post to the
Journal of Consciousness Studies Online that referred to the old
Perceptron model and a critique of it by Minsky and Papert. Hayes
*also* seemed (to me) to be suggesting that three-level system
architectures (in general, as opposed to just neural networks) could
do things that could not be done with two levels.
Anders Sandberg responded:
< This is a bit like comparing apples and oranges; what Minsky and Papert
(and likely Hayes) talk about are layered feed-forward networks where
information is passed from A to B to C, while Thaler is using a
recurrent net where information moves between A and B. Recurrent
networks can do more than few-layer feed-forward networks, and Thaler's
scheme could actually be implemented in a single layer if needed
(essentially you divide the connection matrix into semi-independent
parts). >
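To make that distinction concrete, here is a rough sketch of the two
wirings Anders describes. This is my own toy code (Python with numpy);
the sizes and the random, untrained weights are arbitrary stand-ins and
have nothing to do with Thaler's actual networks.

    # Rough sketch only: feed-forward A -> B -> C versus recurrent A <-> B.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8  # units per group (arbitrary)

    # Layered feed-forward: information passes A -> B -> C exactly once.
    W_ab = rng.normal(size=(n, n))
    W_bc = rng.normal(size=(n, n))
    a = rng.normal(size=n)
    b = np.tanh(W_ab @ a)
    c = np.tanh(W_bc @ b)          # no path back from C or B to A

    # Recurrent: information keeps moving back and forth between A and B.
    W_a_to_b = rng.normal(size=(n, n))
    W_b_to_a = rng.normal(size=(n, n))
    a_state = rng.normal(size=n)
    b_state = np.zeros(n)
    for _ in range(10):            # the loop is what the feed-forward net lacks
        b_state = np.tanh(W_a_to_b @ a_state)
        a_state = np.tanh(W_b_to_a @ b_state)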
Anders, you are correct: Hayes was specifically talking about neural
nets and I confused the excerpt I cited with another Hayes essay where
he talks about the more general issue of virtual machines.
Perhaps I was mixing apples and oranges a bit here, but that was sort
of my point. I was not really talking about the number of *layers in a
neural network*, nor about the particular algorithms of any single
neural-net *program*. I was talking about the number of ‘levels’ or
‘tiers’ in composite *systems* or usable applications.
This is somewhat related to complaints I have about neural models of
the brain: many researchers seem to model the brain as a vast network
of general-purpose neurons rather than as a *system* of multiple,
specialized components; they are not, as you say, "dividing the
connection matrix into semi-independent parts".
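For what it is worth, here is a rough sketch of what "dividing the
connection matrix into semi-independent parts" might look like: one
recurrent weight matrix that is dense within each specialized module
and only sparsely connected between modules. The module sizes and the
cross-connection rate are arbitrary assumptions of mine, not anything
taken from the researchers in question.

    # One connection matrix, partitioned into semi-independent modules.
    import numpy as np

    rng = np.random.default_rng(1)
    sizes = [6, 6, 6]                      # three hypothetical specialized modules
    total = sum(sizes)
    W = np.zeros((total, total))

    offsets = np.cumsum([0] + sizes)
    for lo, hi in zip(offsets[:-1], offsets[1:]):
        # dense connections inside each module
        W[lo:hi, lo:hi] = rng.normal(size=(hi - lo, hi - lo))

    # a few weak cross-module links make the parts only *semi*-independent
    mask = rng.random((total, total)) < 0.05
    W += 0.1 * rng.normal(size=(total, total)) * mask

    x = rng.normal(size=total)
    for _ in range(5):                     # one "system" of coupled specialists
        x = np.tanh(W @ x)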
In the context of Thaler’s Creativity Machine, I was specifically
referring to the ‘conscious’ and ‘subconscious’ *functional levels*
that he was touting. I was suggesting that there would need to be more
than just these two ‘levels’ to adequately reflect the functionality
of the human mind.
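As a toy illustration of the two-level scheme as I understand it, here
is a minimal sketch: a noise-perturbed ‘subconscious’ net proposes
patterns and a ‘conscious’ net selects among them. Both nets here are
untrained random matrices of my own invention, so this is emphatically
not Thaler's implementation, just the shape of the idea.

    # Minimal two-level sketch: generate under noise, then select.
    import numpy as np

    rng = np.random.default_rng(2)
    n_in, n_hid = 8, 8

    W_gen = rng.normal(size=(n_hid, n_in))    # 'subconscious' generator weights
    w_critic = rng.normal(size=n_hid)         # 'conscious' critic weights

    def propose(seed, noise=0.5):
        """Perturb the generator's input with noise to get a novel pattern."""
        return np.tanh(W_gen @ (seed + noise * rng.normal(size=n_in)))

    def judge(pattern):
        """The second level scores a proposal; here, an arbitrary projection."""
        return float(w_critic @ pattern)

    seed = rng.normal(size=n_in)
    candidates = [propose(seed) for _ in range(20)]
    best = max(candidates, key=judge)          # two levels: generate, then select

A third level could, for instance, adjust the noise amplitude or the
critic's criteria over time, but that is beyond this sketch.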
Mark Crosby
P.S. Another interesting excerpt from that Pat Hayes and Keith
Sutherland exchange:
KS: . . . all programming models ultimately reduce to a series of
binary digits going through a Turing Machine.
PH: Well, replace 'Turing Machine' with 'hardware' (a Turing machine
can't do anything, it's just a mathematical abstraction), and I'll
agree. But this is like saying that all brains ultimately reduce to
masses and masses of quarks and leptons exchanging photons: true, but
not very useful. This doesn't account for how programmed machines work,
how they do what they do.