bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>"Peter C. McCluskey" <pcm@rahul.net> writes:
>I am not sure what you mean. I didn't make the claim that Hebbian
>learning is the only learning mode that occurs in the brain, or that
>it would by itself be sufficient to replicate human performance. So
>if you challenge that, then I agree with you. But I think that
>follows trivially from the definition of the original Hebb rule that
>if we hold the number of synapses per neuron fixed then the time it
>takes to go through one weight update cycle for the network grows
>linearly as the number of neurons increases.
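The quoted claim can be illustrated with a minimal sketch (the function name, the sparse storage scheme, and the constant k are assumptions for illustration, not anything from the thread): if each of N neurons keeps a fixed number k of synapses, one pass of the original Hebb rule touches N*k weights, so the cost of one update cycle grows linearly in N.

```python
import numpy as np

def hebb_update_cycle(weights, pre, post, lr=0.01):
    """One Hebbian update cycle: dw_ij = lr * post_i * pre_j.

    Weights are stored sparsely: per neuron, a fixed list of k
    presynaptic indices plus their strengths. One full cycle is
    therefore O(N * k) work -- linear in N when k is held fixed.
    """
    syn_idx, syn_w = weights                 # shapes (N, k) and (N, k)
    # Gather each neuron's k presynaptic activities, apply Hebb's rule.
    syn_w += lr * post[:, None] * pre[syn_idx]
    return syn_idx, syn_w

# Tiny demo: N neurons, k synapses each; one cycle costs N*k ops.
rng = np.random.default_rng(0)
N, k = 1000, 10
weights = (rng.integers(0, N, size=(N, k)), np.zeros((N, k)))
act = rng.random(N)
weights = hebb_update_cycle(weights, pre=act, post=act)
```

Doubling N while keeping k fixed doubles the work per cycle, which is all the quoted paragraph asserts.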
And if you hold the number of hidden nodes per backprop neuron fixed,
the time it takes per weight update cycle also scales linearly, and
the results are probably about as good as with the Hebb rule. It's
only when you measure the ability of backprop or the Hebb rule to do
something useful that they scale up poorly. Which leaves me confused
about what, if anything, you think your analysis is relevant to.
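The parallel claim for backprop can be sketched the same way (again, all names and the fixed fan-in k are assumptions for illustration): if each output neuron connects to a fixed number k of hidden nodes, one delta-rule weight update is also O(N * k), i.e. linear in N, so per-cycle cost alone does not distinguish the two rules.

```python
import numpy as np

def sparse_delta_update(hidden_idx, W, h, target, out, lr=0.1):
    """Delta-rule update on a sparsely connected output layer.

    hidden_idx: (N, k) indices of the k hidden nodes each of the N
    output neurons connects to; W: (N, k) matching weights. The
    update touches N*k weights -- linear in N for fixed fan-in k.
    """
    err = target - out                       # (N,) output errors
    W += lr * err[:, None] * h[hidden_idx]   # N*k multiply-adds
    return W

# Tiny demo: N output neurons, each seeing k of H hidden nodes.
rng = np.random.default_rng(1)
N, k, H = 500, 8, 200
hidden_idx = rng.integers(0, H, size=(N, k))
W = np.zeros((N, k))
h = rng.random(H)
out = (W * h[hidden_idx]).sum(axis=1)        # all zeros initially
W = sparse_delta_update(hidden_idx, W, h, target=np.ones(N), out=out)
```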
-- 
------------------------------------------------------------------------
Peter McCluskey          | [ASCII-art caffeine molecule in original sig]
pcm@rahul.net            |
http://www.rahul.net/pcm |

Received on Wed Jan 21 17:15:05 1998
This archive was generated by hypermail 2.1.8 : Tue Mar 07 2006 - 14:45:29 PST