"J. R. Molloy" <jr@shasta.com> writes:
> In the textbooks, a neural network is a network of many simple units with simple
> capacities. These units are connected in such a way that the connections can
> change depending on the circumstances. The idea, which became widely known in
> the 1980s, is to model the neurons of the brain, which can knit themselves into
> new configurations to create memories and skills. DB researchers aren't doing
> neural networking in the old-fashioned sense. "This was pretty much gone by the
> end of the eighties. It's much more statistical learning -- we have a very clear
> understanding of the statistical properties of the learning systems we use,"
> says Schaal. In statistical learning, the machine has a statistical model of how
> its sensory data are generated, and it uses learning algorithms that exploit
> this knowledge to ensure statistical convergence to good learning results. DB
> learns much more from 'statistical insights' than from any attempt by the
> researcher to say, "oh, this is how the brain seems to connect neurons, let's
> put it together this way and try to understand it later."
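The quoted bit about "a statistical model of how its sensory data are
generated" is easy to make concrete. A toy sketch in Python (my own
illustration, nothing to do with DB's actual code): assume a sensor
reads an unknown constant corrupted by Gaussian noise, and let a
recursive maximum likelihood update exploit that model. Convergence
then follows from statistics, not from any biological analogy:

  # Toy example of learning under an assumed generative model
  # (my illustration only; DB's algorithms are far more elaborate).
  import numpy as np

  rng = np.random.default_rng(1)
  theta_true = 4.2   # the unknown quantity the "sensor" measures
  sigma = 2.0        # noise level in the assumed generative model

  est = 0.0
  for n in range(1, 10001):
      x = theta_true + sigma * rng.normal()  # one noisy sensor reading
      est += (x - est) / n                   # recursive ML update (running mean)
      if n in (10, 100, 1000, 10000):
          # the standard error sigma/sqrt(n) quantifies the convergence rate
          print(f"n={n:6d}  estimate={est:.4f}  stderr={sigma/np.sqrt(n):.4f}")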
Neural networks and statistics seem to be more similar than people
once thought. Many learning rules turn out to implement statistical
estimators or information-maximization principles. I wouldn't be
surprised (in fact, I would be surprised if it weren't true) if LTP,
LTD and other biological learning mechanisms were approximations of
abstract learning rules that produce relatively good statistical models.
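A concrete case of such an equivalence (a minimal sketch of my own,
not a model of any real synapse): Oja's rule, a purely local
Hebbian-style weight update, provably converges to the first principal
component of its input, so the "neuron" ends up computing a textbook
statistical estimator (PCA):

  # Oja's rule vs. explicit PCA -- my own toy example.
  import numpy as np

  rng = np.random.default_rng(0)

  # 2-D "sensory" data with most variance along axis 0, so the true
  # first principal component is (1, 0) up to sign.
  X = rng.normal(size=(5000, 2)) * np.array([3.0, 1.0])

  w = rng.normal(size=2)  # synaptic weight vector
  eta = 0.001
  for x in X:
      y = w @ x                   # postsynaptic activity (Hebbian product)
      w += eta * y * (x - y * w)  # Oja's rule: Hebb term plus decay
  w /= np.linalg.norm(w)

  # Compare with the explicit estimator: the leading eigenvector of
  # the sample covariance matrix (classical PCA).
  vals, vecs = np.linalg.eigh(np.cov(X.T))
  pc1 = vecs[:, np.argmax(vals)]
  print("Oja weights:    ", w)
  print("PCA eigenvector:", pc1)
  print("alignment:", abs(w @ pc1))  # close to 1.0 if they agree up to sign

Whether LTP and LTD actually approximate updates like this is of
course an empirical question, but it is the kind of correspondence I
would expect.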
As for DB, we'll see how it works out. I am always a bit sceptical of
Big Projects.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y