Mike Hall writes:
> I think the limited instruction set refers not to application design, but to
> processor architecture. My own take on this is that they are developing an
This would be my guess also. I surmise the individual processors will be arranged on a 3D lattice (not unlike the T3D), with direct communication links to their immediate neighbours.
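Purely as illustration (the torus dimensions and the wraparound are my assumptions, nothing IBM has announced), nearest-neighbour addressing on such a lattice boils down to something like this in C:

  /* Hypothetical 3D torus of NX*NY*NZ nodes; each node talks only
     to its six face neighbours, T3D-style. */
  #define NX 8
  #define NY 8
  #define NZ 8

  /* flatten lattice coordinates (x,y,z) into a linear node id */
  static int node_id(int x, int y, int z)
  {
      return (z * NY + y) * NX + x;
  }

  /* id of the neighbour one hop along an axis, with wraparound */
  static int neighbour(int x, int y, int z, int dx, int dy, int dz)
  {
      return node_id((x + dx + NX) % NX,
                     (y + dy + NY) % NY,
                     (z + dz + NZ) % NZ);
  }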
> ultra-reduced instruction set processor to maximize hardware speed, by
> reducing the overhead of fetching and decoding instructions prior to
> execution to a minimum, and eliminating complex instructions which invoke
> microcode routines. The trade-off is that you have to execute more machine
Sounds sensible; however, IBM is not known for doing MISC (Minimal Instruction Set Computer) type designs. MISC is lunatic fringe right now, Chuck Moore (the father of FORTH) being its only practitioner.
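For the unfamiliar: a MISC core is essentially a Forth stack machine in silicon, a handful of zero-operand primitives and no microcode. A toy software model of the flavour (the instruction set here is entirely made up):

  #include <stdio.h>

  /* Toy MISC-style stack machine: a few primitive opcodes, no
     addressing modes -- everything goes through the data stack,
     as on Chuck Moore's Forth chips. */
  enum { OP_LIT, OP_ADD, OP_DUP, OP_PRINT, OP_HALT };

  int main(void)
  {
      int prog[] = { OP_LIT, 2, OP_LIT, 3, OP_ADD, OP_DUP,
                     OP_ADD, OP_PRINT, OP_HALT };   /* prints (2+3)*2 */
      int stack[64], sp = 0, pc = 0;

      for (;;) {
          switch (prog[pc++]) {
          case OP_LIT:   stack[sp++] = prog[pc++];        break;
          case OP_ADD:   sp--; stack[sp-1] += stack[sp];  break;
          case OP_DUP:   stack[sp] = stack[sp-1]; sp++;   break;
          case OP_PRINT: printf("%d\n", stack[--sp]);     break;
          case OP_HALT:  return 0;
          }
      }
  }

The point being that fetch/decode for such a machine is trivial, which is exactly the overhead Mike is talking about.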
> instructions to perform a given task vs. a complex instruction set processor,
> but with a good architecture design the improved performance more than
> offsets the increased processor cycles. A good optimizing HLL compiler can
> also aid in reducing object code size and instruction path lengths. An
> efficient application design can also help greatly, but machine efficiency
> isn't often a priority with application designers and coders.
If I were them I would use their embedded RAM technology and implement a generic force field engine in hardware, using particle-in-cell algorithms like SPaSM, but with hardware support for evaluating long-range interactions (DPMTA, FAMUSAMM or others); a sketch of the short-range kernel follows the links below.
http://bifrost.lanl.gov/MD/MD.html
http://ftp.swig.org/papers/Py97/beazley.html
http://www.supercomp.org/sc96/proceedings/SC96PROC/BEAZLEY/INDEX.HTM
http://linux.lanl.gov/~pxl/papers/sc96/INDEX.HTM
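To make the particle-in-cell idea concrete: the short-range half is just a cell-binned pair loop, so a particle only ever sees its own and the 26 adjacent cells, and that regularity is what maps well onto hardware. A stripped-down Lennard-Jones kernel of the kind such a loop would iterate over cell pairs (names and constants mine; the long-range DPMTA/FAMUSAMM part would be layered on top):

  #define CUTOFF 2.5

  /* accumulate the Lennard-Jones force of particle j on particle i;
     particles further apart than CUTOFF contribute nothing, which is
     what the cell binning exploits */
  static void pair_force(const double ri[3], const double rj[3],
                         double fi[3])
  {
      double d[3], r2 = 0.0;
      for (int k = 0; k < 3; k++) {
          d[k] = ri[k] - rj[k];
          r2 += d[k] * d[k];
      }
      if (r2 >= CUTOFF * CUTOFF || r2 == 0.0)
          return;
      double inv2 = 1.0 / r2;
      double inv6 = inv2 * inv2 * inv2;
      /* F(r)/r for U(r) = 4 (r^-12 - r^-6), reduced units */
      double f = 24.0 * inv6 * (2.0 * inv6 - 1.0) * inv2;
      for (int k = 0; k < 3; k++)
          fi[k] += f * d[k];
  }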
> The good news is that if and when the machine is commercially available, it
> will probably be an excellent platform for neural modeling. The bad news is
> that it will still be a massive undertaking to develop the software. I
> expect most of IBM's $100 million will be devoted to software engineering.
Here's the outline (very old, so don't kill me) of an architecture I would use to implement a fast neural engine based on embedded RAM (a toy rendering of the core loop follows the link):
http://www.lrz-muenchen.de/~ui22204/.html/txt/firesyn.txt
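In case the link rots, here is a caricature of the kind of core loop I mean; the constants and the state layout are invented for illustration, the text at the URL is the real outline. Neuron state sits in wide on-chip RAM words and the datapath streams through all of them once per tick, leaky-integrate-and-fire fashion:

  #define N_NEURONS 1024
  #define THRESHOLD 1.0f
  #define LEAK      0.95f

  static float potential[N_NEURONS];     /* membrane state, one RAM row each */
  static float input[N_NEURONS];         /* summed synaptic input this tick */
  static unsigned char fired[N_NEURONS]; /* spike flags for the next tick */

  static void tick(void)
  {
      for (int i = 0; i < N_NEURONS; i++) {
          potential[i] = LEAK * potential[i] + input[i];
          fired[i] = (potential[i] >= THRESHOLD);
          if (fired[i])
              potential[i] = 0.0f;       /* reset after spike */
          input[i] = 0.0f;               /* cleared for next tick */
      }
  }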
> The whole complex consists of 64 processor towers. I wonder if a single
> tower (or subset of towers) can run standalone, and if they are scalable.
I wonder whether one could hijack the PSX2's 4 MByte embedded-RAM gfx engine for purposes other than rendering. The Playstation 2 comes with FireWire and a PC Card slot, and there's a project to make Beowulfs from them (the main CPU is a 64/128-bit MIPS derivative).