Jim Fehlinger wrote:
> It crosses my mind that Edelman (and others) would probably snort at drawing
> parallels between "evolved" FPGAs and human brains for much the same reason
> he snorts at comparing artificial neural networks to human brains: namely,
> that the result has a static physical structure and function.
Perhaps Edelman et al. should not snort too much at what is merely
a single fleeting point in a sequence of architectures. Yes, FPGAs
are stupid; yes, FPGAs don't compare favourably to the brain's fanout
and connectivity factors (nanofilaments packed in 3d are hard to
beat for implementing a high-connectivity infrastructure), nor in
total number of active units. But even current FPGAs need not remain
static. That they typically do is a more or less deliberate
decision on the part of the implementer: she doesn't understand
circuits which flip through configurations on the ms scale, especially
if the circuits themselves decide which configuration to assume
next.
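To make the idea concrete, here is a minimal toy sketch (my own illustration, not any real FPGA's API) of a logic cell whose own output selects its next configuration, so the circuit decides which shape it takes on the following cycle:

```python
# Toy self-reconfiguring logic cell (illustrative only): the cell's
# output feeds back to choose which configuration (truth table) it
# uses on the next cycle.

CONFIGS = {
    0: lambda a, b: a and b,   # configuration 0: AND
    1: lambda a, b: a or b,    # configuration 1: OR
}

def run(inputs, start_config=0):
    """Clock the cell through a list of (a, b) input pairs."""
    config = start_config
    trace = []
    for a, b in inputs:
        out = int(CONFIGS[config](a, b))
        trace.append((config, out))
        config = out           # the output selects the next configuration
    return trace

print(run([(1, 1), (1, 0), (0, 1), (1, 1)]))
# -> [(0, 1), (1, 1), (1, 1), (1, 1)]
```

The point is only that "which circuit am I?" becomes part of the circuit's own state, nothing more.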
Compare: neuronal hardware needs minutes to hours to reconfigure,
while reconfiguration of FPGA cells can (YMMV, depending on what
silicon you have) happen in the ns range.
But, as I already mentioned, current FPGAs are really lousy, and
hence I'm really looking forward to CA-flavoured FPGAs, where a very
small cell carrying a state is directly connected to its neighbour
cells, arranged as a 2d mosaic. It is hard to predict what the minimum
cell size will be, but it will probably not be much larger than a
square micron, and it should switch well in excess of 100 GHz even
at current structure sizes.
And, of course, in computronium the signal-ducting nanofilaments
(encoded as special cell states) will also be packed in 3d, and
a bit more densely than in the real thing. Or some other computation
paradigm might prove more useful, encoding information in glider
configurations and using packet-switched information delivery to
nodes arranged on a grid. We don't know yet, but we will. Figuring
this framework out will be the first step of the parameter
space search.
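For readers who haven't met gliders: in Conway's Game of Life (the best-known 2d CA, which I'm using here only as a stand-in for whatever CA a real architecture would run), a small pattern of live cells translates itself across the grid, so its position and timing can carry information:

```python
# Minimal Game of Life step, to illustrate encoding information in
# glider configurations on a 2d cell mosaic.
from collections import Counter

def step(cells):
    """One synchronous update of a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next tick with 3 neighbours, or 2 if already live.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
moved = glider
for _ in range(4):
    moved = step(moved)
# After 4 steps the same shape reappears, shifted by (1, 1).
print(moved == {(x + 1, y + 1) for (x, y) in glider})  # -> True
```

A self-propagating bit pattern like this is the kind of thing a packet-switched CA substrate would route between nodes.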
> In the "selectionist" theories of human intelligence espoused by Edelman,
> Changeux, Plotkin, et al., "evolution", in some sense, is a never-ending
> process. Of course, there are nested hierarchies of it -- a person is
> born with a fixed genome. But these folks believe there are somatic
> processes, analogous to Darwinian evolution, that continue throughout
> an organism's lifetime. In contrast, with an evolved FPGA as developed
> by Adrian Thompson, there's a prespecified problem, just as there would
> be in a conventional software-design or electrical engineering application
> domain, which is solved without explicit analysis by cranking the handle
> of the magic evolution machine. But at some point the evolution stops
> (when the FPGA is deemed to have solved the problem), the chip is plugged
> into the system and switched on, and becomes just another piece of
> static hardware. Same with neural networks -- there's a training set
> corresponding to the problem domain, the network is trained on it,
> and then it's plugged into the OCR program (or whatever), shrink-wrapped,
> and sold.
Of course, if the prespecified problem itself keeps shifting, and
if you're embedded in such a matrix, things are not nearly as
deterministic.
> Still too static, folks, to be a basis for AI. When are we going to have
> hardware with the sort of continual plasticity and dynamism that nerve tissue has?
> (I know it's going to be hard. And, in the meantime, evolved FPGAs
> might have their uses, if people can trust them to be reliable).
Jim, I think you don't realize what we've already got, even with these
stupid FPGAs. The brain has several levels of dynamics: signalling,
which occurs on the (sub-)ms time scale; adaptation, which takes
seconds or longer; and hardware reconfiguration, which takes minutes,
hours, or days. In FPGAs you already have two layers: the configuration
state, which defines the hardware connectivity, and the state that
hardware holds at time t.
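The two layers are easy to see in a toy model (my own sketch, not any real device's bitstream format): the configuration layer is a LUT loaded rarely, the runtime layer is the registered value that logic holds at time t, and reconfiguring swaps the former without touching the latter's machinery:

```python
# Toy FPGA cell with two state layers: a configuration (4-entry LUT
# truth table, loaded rarely) and a runtime state (flip-flop, updated
# every clock tick).

class Cell:
    def __init__(self, lut_bits):
        self.lut = lut_bits   # configuration layer
        self.ff = 0           # runtime layer

    def clock(self, a, b):
        """One tick: combinational LUT lookup, then register the result."""
        self.ff = self.lut[(a << 1) | b]
        return self.ff

    def reconfigure(self, lut_bits):
        """Swap the configuration; the same fabric now computes new logic."""
        self.lut = lut_bits

cell = Cell([0, 0, 0, 1])        # configured as AND
print(cell.clock(1, 1))          # -> 1
cell.reconfigure([0, 1, 1, 0])   # now XOR, same cell, same flip-flop
print(cell.clock(1, 1))          # -> 0
```

Run the reconfiguration fast enough, under the control of the logic itself, and the layer boundary starts to blur, which is the whole point.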
In flexibility, that framework is comparable to biology. What limits it
is the braindead architecture of current FPGAs, the limited integration
density, the whole thing being confined to flatland, and -- most
importantly -- the limitations inside designers' heads.
None of these is likely to remain constant on a scale of two to three
decades. Even people adapt and learn -- occasionally.
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:03 MDT