From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Thu Sep 06 2001 - 13:52:57 MDT
On Thu, 6 Sep 2001, James Rogers wrote:
> A more accurate way of describing it is that there is no OS currently
> in existence (that I know of) that intrinsically supports general
> massive parallelism. The reason for this has more to do with the fact
It's not just the OS, the whole architecture is fux0red. I can imagine
hierarchical FPGAs being anywhere in the ballpark of a small insect, but
the usual Jacquard-derived architectures are just ridiculous. CPU smart,
memory dumb. Mill, mill, mill, iterate sequentially. Bleh. Makes one want
to reexamine which century we're in.
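
To make the contrast concrete, a minimal C sketch (mine, purely
illustrative; the array size and update rule are made up): the von
Neumann machine funnels every element through one ALU, while a
cellular/FPGA-style fabric would update all cells at once from their
local neighbors.

    #define N 1024

    /* Von Neumann style: one instruction stream, one ALU; every
     * element squeezes through the same fetch-decode-execute
     * bottleneck, one after the other. */
    void sequential_relax(double a[N], const double b[N]) {
        for (int i = 1; i < N - 1; i++)
            a[i] = 0.5 * (b[i - 1] + b[i + 1]);
    }

    /* Cellular style: every cell carries its own state and its own
     * tiny bit of logic, and all cells update simultaneously from
     * their immediate neighbors. On real hardware each "cell" would
     * be a logic block; the loops below only simulate one global
     * clock tick. */
    typedef struct { double state, next; } cell;

    void cellular_tick(cell fabric[N]) {
        for (int i = 1; i < N - 1; i++)   /* all at once in hardware */
            fabric[i].next = 0.5 * (fabric[i - 1].state
                                  + fabric[i + 1].state);
        for (int i = 1; i < N - 1; i++)   /* commit the new state */
            fabric[i].state = fabric[i].next;
    }

In the fabric version the compute scales with the silicon area, not
with the clock; that's the whole point of "memory smart" architectures.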
> that supporting GMP as a feature of the OS makes it architecturally a
> good bit different than what we have today and it will generally run
> like a dog on single processor machines, hence the bias. Also, there
If you have 1e6 nodes, you most assuredly would want to cut the
redundancy down to a minimum. No OS, that's for certain. No von Neumann,
that too.
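
What does a node run instead, with no OS and no von Neumann core? A
minimal sketch, assuming memory-mapped nearest-neighbor links (the msg
format, the NEIGHBORS count, and the link_recv/link_send primitives are
all invented for illustration): just local state and a bare event loop.

    #include <stdint.h>

    #define NEIGHBORS 6              /* e.g. a 3D mesh */

    typedef struct { uint32_t tag; uint32_t payload; } msg;

    /* Assumed hardware FIFOs on the neighbor links; on a real part
     * these would be memory-mapped registers, not function calls. */
    extern int  link_recv(int port, msg *m);  /* nonblocking, 0 = empty */
    extern void link_send(int port, const msg *m);

    static uint32_t local_state;

    void node_main(void) {
        for (;;) {                   /* this loop is the whole "OS" */
            msg m;
            for (int p = 0; p < NEIGHBORS; p++) {
                if (link_recv(p, &m)) {
                    local_state += m.payload;   /* trivial update rule */
                    m.payload = local_state;
                    link_send((p + 1) % NEIGHBORS, &m); /* pass it on */
                }
            }
        }
    }

No scheduler, no filesystem, no context switches; at 1e6 nodes anything
more than this is redundancy you're paying for a million times over.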
> isn't a lot of appropriate hardware to run it on lying around.
> Therefore, "massive parallelism" as used today describes an
> unnecessarily brittle and fragile software technology (and to a lesser
> extent, hardware).
Parallelism is parallelism: the brain is massively/embarrassingly
parallel, yet it is certainly not very Rube Goldberg. So we should not
be limited to the state of the art when discussing future architectures.
> Nobody designs computer hardware and operating systems for GMP, as it
> is much cheaper to weakly glue together lots of copies of existing
> hardware so that they can get by with minor tweaks of their existing
> OS code base. In a nutshell, nobody expects their systems to be used
> in this way. This mandates inadequacy.
Clusters are great for prototypes, but prototypes are all they're good
for. Excellent bang for the buck, but they grow stale in realtime.