Billy Brown writes:
> Eugene Leitl wrote:
> > Large programs are not that bad -- but large monolithic programs
> > are. You'll have trouble porting code monoliths to fine-grain maspar
> > systems.
>
> True, but who writes monolithic code these days? Everything Microsoft
> produces is a bundle of DLLs and COM objects, and I rarely see individual
> files bigger than a few MB. Everyone else I'm familiar with has either
> adopted the same approach, or doesn't write anything big in the first place.
Perhaps I should illustrate what I mean by fine grain. Let's say I
order ~4 k chips like
http://products.analog.com/products/info.asp?product=ADSP-21160M
hot-glue them onto a stack of perforated plexiglas sheets, and
wire-wrap them gluelessly (the link ports need no glue logic) into a
16x16x16 DSP array (adding little LED blinkenlights to each link
port for cuteness value), with the edge node sitting on a PCI card
in a garden-variety Linux box. That gives me 2 GBytes of on-die
memory at 4 Mbit per chip, roughly 10 kW of heat dissipation in toto
(heck, put it into an aquarium filled with Fluorinert and mount an
air-cooled heat exchanger on top), and a 2.4 TFlops peak box in a
half-height 19" rack (for how much? 100..500 k$? ballpark of a
high-end workstation) -- enough horsepower to upload a nematode, or
even a fruit fly.
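
Back of the envelope (the per-chip peak and dissipation below are
just the figures those totals imply, not datasheet numbers I'm
quoting):

  nodes:   16 x 16 x 16        =  4096
  memory:  4096 x 4 Mbit       =  2 GByte on-die
  peak:    4096 x ~600 MFLOPS  = ~2.4 TFLOPS
  power:   4096 x ~2.5 W       = ~10 kW
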
It's comparatively easy (well, possible) to write a nanokernel OS
with a 4..16 kByte memory footprint, which leaves essentially half a
MByte per node to work with -- but how do you expect to port Excel
to such an architecture? How much of the Microsoft (or OpenSource,
for that matter) warez is written as asynchronous message-passing
object soup?
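
To make "message-passing object soup" concrete, here is a rough
host-side sketch in plain C of the programming model I mean. It is
NOT 21160 code: link_send()/link_recv() below are just an in-memory
stub standing in for whatever the real link-port driver would
provide. Each node owns its objects outright, shares no memory, and
does nothing but react to small messages:

/* one node of the array, simulated on the host */
#include <stdio.h>

enum { MSG_SET, MSG_ADD, MSG_GET, MSG_HALT };

typedef struct {      /* small enough for a single link-port burst */
    int op;           /* what to do */
    int obj;          /* which local object it targets */
    int arg;          /* operand */
} msg_t;

/* stub link layer: a tiny FIFO playing the role of one link port */
static msg_t fifo[64];
static int head, tail;
static void link_send(msg_t m) { fifo[tail++ % 64] = m; }
static int  link_recv(msg_t *m)
{
    if (head == tail) return 0;
    *m = fifo[head++ % 64];
    return 1;
}

/* per-node state: a handful of purely local "objects" (counters) */
static int obj[8];

/* the node's whole life: pull a message, dispatch, repeat */
static void node_loop(void)
{
    msg_t m;
    while (link_recv(&m)) {
        switch (m.op) {
        case MSG_SET:  obj[m.obj]  = m.arg; break;
        case MSG_ADD:  obj[m.obj] += m.arg; break;
        case MSG_GET:  printf("obj[%d] = %d\n", m.obj, obj[m.obj]);
                       break;
        case MSG_HALT: return;
        }
    }
}

int main(void)
{
    msg_t a = { MSG_SET, 3, 40 }, b = { MSG_ADD, 3, 2 };
    msg_t c = { MSG_GET, 3, 0 },  d = { MSG_HALT, 0, 0 };
    link_send(a); link_send(b); link_send(c); link_send(d);
    node_loop();   /* prints: obj[3] = 42 */
    return 0;
}

The design point is that all state is node-local and every
interaction is an explicit small message, which is what a
link-port-only fabric forces on you.
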
The only reason machines like this are not widespread is that
./configure ; make ; make install is not sufficient to port your
apps to them. They would make great game machines, though:
modelling, volume viz/rendering, lots of potential.