From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Jan 26 2000 - 15:54:35 MST
Billy Brown writes:
> That isn't quite how I read it. My understanding is that they encode the
> configuration and state information for each FPGA-sized block of simulated
> neurons in a rather tiny block of binary data, which can be moved to or from
> regular memory in less than one of the FPGA chip's clock cycles. Their

I stand corrected. I have only browsed de Garis' papers, and that was
1-2 years ago. Do you know how long on-the-fly compression/expansion of
the pattern takes?
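To make the question concrete: here is a toy sketch (not de Garis' actual scheme -- every name and the encoding are hypothetical) of run-length compressing a sparse configuration bitstream and expanding it again, which gives a feel for how small a block's config/state can pack and how cheap the expansion step is.

```python
def rle_compress(bits):
    """Compress a bit string like '0001110' into (bit, run-length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_expand(runs):
    """Expand (bit, run-length) pairs back into the original bit string."""
    return ''.join(b * n for b, n in runs)

# a sparse, hypothetical 64-bit configuration pattern
config = '0' * 40 + '1' * 8 + '0' * 16
packed = rle_compress(config)
assert rle_expand(packed) == config
print(len(config), 'bits ->', len(packed), 'runs')
```

Whether the real hardware can do this within a clock cycle is exactly the open question; a software round-trip like this only shows that the expansion is a single linear pass.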
> limitations are that each FPGA can simulate only a very few neurons, so they
> have to do the time-sharing trick to avoid having to buy millions of the
> things. To improve performance they have to get more FPGAs, get denser
> FPGAs so they can put more neurons on each of them, or crank up the speed at
> which the FPGAs run.

Unfortunately, you also have to address the overlapping parts of the
simulation. The volume blocks are coupled at their edges, hence not
completely independent. Also, FPGAs are 2d, and their circuit density
is an order of magnitude lower than that of an ASIC.
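The edge coupling is the standard ghost-cell (halo) problem: before each update, every block needs a copy of the boundary cells of its neighbours. A minimal 1-d sketch (the update rule and all names here are hypothetical, just to show the dependency):

```python
import numpy as np

def step_with_halo(left, block, right):
    """Update `block` using halo cells copied from its two neighbours.

    Each cell becomes the sum of its 3-cell neighbourhood mod 2 --
    a toy rule standing in for whatever the real CA computes.
    """
    padded = np.concatenate(([left[-1]], block, [right[0]]))
    return (padded[:-2] + padded[1:-1] + padded[2:]) % 2

a = np.array([1, 0, 0, 1])
b = np.array([0, 1, 1, 0])
c = np.array([1, 1, 0, 0])
# b's new state depends on a[-1] and c[0]: the blocks are edge-coupled,
# so time-shared blocks must exchange halos every simulated step.
new_b = step_with_halo(a, b, c)
```

This halo traffic is exactly what you cannot hide when you swap blocks in and out of a few FPGAs.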
Hopefully, before long we will get real hardware CAs instead of
FPGAs. In silicon these will be 2d (3d is much more expensive in
terms of connectivity and hence delay), but a lot denser and
faster. As a side effect, with the proper rule you can ignore point
defects (manufacturing defects and hardware failures) and utilize
98..99% of a whole wafer as one contiguous circuit.
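A toy illustration of what a defect-ignoring rule means (the specific rule is hypothetical, not any published wafer-scale CA design): cells flagged as defective are frozen and excluded from their neighbours' vote counts, so a point defect perturbs nothing beyond itself.

```python
import numpy as np

def defect_tolerant_step(state, alive_mask):
    """One 1-d majority-vote CA step that ignores defective cells.

    `alive_mask` marks working cells with 1 and point defects with 0.
    Defective cells neither vote nor update, so the rest of the
    wafer computes around them.
    """
    s = state * alive_mask                      # defective cells cast no vote
    votes = np.roll(s, 1) + s + np.roll(s, -1)
    voters = np.roll(alive_mask, 1) + alive_mask + np.roll(alive_mask, -1)
    new = (2 * votes >= voters).astype(int)     # majority of *working* voters
    return np.where(alive_mask == 1, new, state)  # freeze defective cells

state = np.array([1, 1, 0, 1, 0])
alive = np.array([1, 0, 1, 1, 1])   # cell 1 is a point defect
result = defect_tolerant_step(state, alive)
```

Normalizing the vote by the count of working neighbours, rather than by a fixed neighbourhood size, is what lets the same rule run unchanged on a wafer with scattered dead cells.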
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:26:29 MST