Chandra Patel wrote:
> I'm trying to develop a formula to predict the Linpack benchmark rating of a
> Beowulf. To make it easy I assume all nodes have the same processor type and
> the same bus and memory specifications. The factors I've identified so far
> are:
>
> 1) processor type (speed, flops rating, etc.)
> 2) memory (subfactors include memory access time, cache speed and size)
> 3) Ethernet capacity (throughput)
>
OK so far, but Ethernet capacity is not a scalar. In fact, a Beowulf's
connectivity (network topology) can be good for some problems and
bad for others. I think you need to rate the capacity with respect to
each problem you intend to attempt. For CPU-intensive problems with minimal
comms requirements, a COW (collection of workstations) or even a
distributed.net-style setup can be very effective. For cellular automata
simulations, a four-way grid net of FDX 100BaseT will likely beat a
hierarchical Gigabit net.

I'm currently putting together a dual 450 MHz Celeron Linux machine (cheap
and fast), and a friend has it connected to a similar machine with two
100BaseT FDX links. He's going to try Beowulf POV-Ray. We want to try
Mosix, but Mosix and SMP don't quite play together yet.
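To make the "rate it per problem" point concrete, here is a toy version of the kind of formula Chandra is asking for, written as a Python sketch. Every constant in it (bytes of memory traffic per flop, the rough comm-volume estimate for the LU factorization) is an assumption of mine for illustration, not a measured or documented value; a real prediction would need benchmarked numbers for each machine.

```python
# Toy Linpack-rating model. All constants here (bytes per flop after
# cache reuse, the comm-volume estimate) are guesses for illustration.

def predict_linpack_mflops(nodes, peak_mflops, mem_bw_mb_s, net_mbit_s, n):
    """Rough whole-cluster MFLOPS estimate for an n x n Linpack run.

    nodes       -- number of identical nodes
    peak_mflops -- vendor peak flops rating per node
    mem_bw_mb_s -- sustained memory bandwidth per node, MB/s
    net_mbit_s  -- per-link Ethernet throughput, Mbit/s
    n           -- Linpack problem size
    """
    # 1) Memory: assume ~2 bytes of main-memory traffic per flop once the
    #    blocked LU kernel is reusing cache (the "2" is a pure guess).
    mem_limited_mflops = mem_bw_mb_s / 2.0
    node_mflops = min(peak_mflops, mem_limited_mflops)

    # 2) Network: LU factorization does ~(2/3) n^3 flops in total but
    #    only moves O(n^2) words between nodes, so parallel efficiency
    #    improves as the problem size grows.
    flops = (2.0 / 3.0) * n ** 3
    compute_s = flops / (nodes * node_mflops * 1e6)
    comm_bytes = 8.0 * n * n  # rough: one 8-byte word per matrix entry
    comm_s = comm_bytes * 8.0 / (net_mbit_s * 1e6)
    efficiency = compute_s / (compute_s + comm_s)

    return nodes * node_mflops * efficiency

# Example: 16 nodes, 450 MFLOPS peak, 200 MB/s memory, 100 Mbit FDX links
print(predict_linpack_mflops(16, 450, 200, 100, 10000))
```

Note how the model behaves: the memory term caps the per-node rating well below peak, and the efficiency term rewards larger problem sizes, which matches the usual observation that Linpack numbers on Ethernet clusters only look good at big n.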
For those extropians who may be confused: Beowulf is software that lets anyone build a supercomputer cheaply by connecting essentially any number of inexpensive computers. You can build a dual-processor Celeron system for about $500, so sixteen such systems give you a 32-processor Beowulf for under $8000. If you believe that progress is constrained at least somewhat by lack of supercomputer availability, you will see Beowulf as another step toward a computer-enhanced future.

Mosix is less well known. It lets a collection of computers share an arbitrary load of "normal" programs, whereas Beowulf requires the collection to run a program written specifically for Beowulf.