Software/Hardware Architectures (WAS: RE: Human minds on Windows(?))

From: Billy Brown (ewbrownv@mindspring.com)
Date: Tue Jul 13 1999 - 09:20:04 MDT


Eugene Leitl wrote:
> Billy Brown writes:
> > AFAIK, all Microsoft apps have been written as "asynchronous
> > message-passing object soup" since sometime in the mid-90s. Also, with
> > the right software
>
> Hey, so they really call OS.send.message() every few 100 machine
> instructions, or so, and context-switch every 1 us? Really?

Why on Earth would you want to do that? Even for massively parallel
architectures you are better off either using large CPU/memory blocks or
running conventional apps in a virtual machine. At any rate, Redmond
designs for the hardware that is actually in use (big surprise), so they
only context switch every millisecond or two.

> Mouse
> handler an object, keyboard handler an object, peripheral storage
> each an object, each widget an object, each Excel cell an object,
> almost always sitting in a different node so they have to send
> messages? We're really talking about Microsoft, Redmond, local
> universe here, are we?

They haven't gotten around to re-writing the entire OS this way yet, but
everything new *is* done that way. In Office 2000, for instance, every
spreadsheet cell is indeed an object (and so is every other recognizable
program element). The same goes for ADO, MTS, and recent versions of most
of their server apps. Occasionally they will encapsulate a big block of
performance-intensive code in a single object instead of breaking it out
into separate objects, but their general principle is to make everything as
object-oriented as is possible on current hardware.

> They can't do it simply because performance on current architectures
> would suck monumentally. You would have to emulate things, and context
> switches on current hardware/software architectures are just a tad too
> expensive for that.

On a modern CPU a context switch isn't any big deal. You don't want to do
one every other instruction, but there isn't any good reason to do that in
the first place. You can certainly switch anywhere there is a reason to
without worrying about the effect on performance.
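
If you want a feel for the numbers, a crude way to estimate the cost is to
bounce a token between two threads and time the round trips, as in the C++
sketch below. This is only a rough illustration (it measures thread handoff
through the scheduler, not a raw kernel context switch), but on typical
hardware it should come out somewhere in the microsecond range, which is
nothing next to a millisecond scheduling quantum.

// Rough microbenchmark sketch: two threads hand a token back and forth
// through a mutex/condition variable. Each round trip forces the scheduler
// to wake the other thread, giving a ballpark figure for switch overhead.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    constexpr int kRounds = 100000;
    std::mutex mu;
    std::condition_variable cv;
    bool ping = true;  // whose turn it is

    auto start = std::chrono::steady_clock::now();

    std::thread other([&] {
        for (int i = 0; i < kRounds; ++i) {
            std::unique_lock<std::mutex> lock(mu);
            cv.wait(lock, [&] { return !ping; });
            ping = true;
            cv.notify_one();
        }
    });

    for (int i = 0; i < kRounds; ++i) {
        std::unique_lock<std::mutex> lock(mu);
        cv.wait(lock, [&] { return ping; });
        ping = false;
        cv.notify_one();
    }
    other.join();

    auto elapsed = std::chrono::steady_clock::now() - start;
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed);
    std::cout << "avg round trip: " << ns.count() / kRounds << " ns\n";
}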

> Multithreading!=asynchronous message passing on many tiny objects.
> We're talking about several thousands primitive (few kBytes) objects
> which send message packets which are routed by hardware directly --
> while the originator code may or may not wait for the ack/result
> to arrive. If this exists at all, it is an academic curiosity at best
> (Thinking Machines might qualify, though I really doubt they exploited
> their options fully every time).

It doesn't exist because there is no reason to do it. The current model
does exactly the same thing, but the objects are 10-100 times as big and
communicate about 10% as often. Asynchronous calls are used whenever they
actually do something for you (in most cases they don't help, because you
can't proceed with the current operation until you get your results back).
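
Here is a quick C++ sketch of that point (std::async standing in for an
asynchronous call to some other object; the function names and delays are
made up purely for illustration). The asynchronous version only wins if you
actually have independent work to overlap with the wait:

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Pretend this is an expensive call into another object or server.
int slowQuery(int x) {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    return x * 2;
}

// Stand-in for work that does not depend on the query result.
void doOtherIndependentWork() {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

int main() {
    // Asynchronous, but with nothing else to do: we block on get() right
    // away, so it behaves exactly like a synchronous call.
    auto f1 = std::async(std::launch::async, slowQuery, 21);
    int r1 = f1.get();

    // Asynchronous with independent work in between: the wait overlaps the
    // other work, and the asynchronous call actually buys you something.
    auto f2 = std::async(std::launch::async, slowQuery, 21);
    doOtherIndependentWork();
    int r2 = f2.get();

    std::cout << r1 << " " << r2 << "\n";
}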

Or, to put it another way, the reason current apps wouldn't benefit from
your fast chip/small memory parallel processing architecture is that most
of the tasks they do are inherently linear, not because they are poorly
written. The only way to speed up a linear process is to give it a single
very fast thread of execution. That's why massively parallel machines are
generally reserved for inherently parallel sorts of computation.
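
That is just the familiar Amdahl's-law arithmetic, and a quick
back-of-the-envelope calculation makes the point (a toy C++ snippet, with
the fractions picked purely for illustration):

#include <iostream>

// If a fraction p of the work can be spread across n processors and the
// rest is inherently sequential, overall speedup is 1 / ((1 - p) + p / n).
double speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // A mostly linear task barely benefits from a thousand CPUs...
    std::cout << speedup(0.10, 1000) << "\n";  // about 1.1x
    // ...while an inherently parallel one scales dramatically better.
    std::cout << speedup(0.99, 1000) << "\n";  // about 91x
}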

Now, if you take a close look at modern PC architectures you'll see that
there is an emerging trend towards increasing parallelism in the areas where
it is useful. Servers often have multiple CPUs, since they have to handle
many different requests simultaneously. Video subsystems often incorporate
several DSPs, and the trend seems to be towards using more and more of them.
With modems, sound cards and other specialized functions turning into
software for DSP chips, it would not be at all surprising if the PC of the
future had a large array of DSPs for parallel-processing tasks. However, it
will still need that fast Pentium-whatever chip for handling more linear
jobs in a timely fashion.

Billy Brown, MCSE+I
ewbrownv@mindspring.com

> You certainly can't program these
> things in standard languages such as C/C++ with any efficiency. And I would
> just love to see a programmer who can handle concurrent nonlinear
> control flow in a distributed kObject application. This can get
> arbitrarily hairy.
>
> There's a dearth of apps even for Beowulfs, and god knows how
> coarse-grain these are. No sir, don't think so.
>
> > you can run a large app over a distributed architecture like
> > this without
> > having to explicitly modularize it to fit the hardware's quirks (see
>
> Sure you can. Put x86 opcode software emulation into every node,
> equipartition the old-generation binary over the nodes and send
> a message to the appropriate node whenever PC leaves your node's
> address space slice. Redundant as hell (OS nanokernel+ x86
> emulator in every node) and efficient as treacle since purely
> sequential, especially considering that you can't do 4 kCPUs
> PIII/550 way, and interpreting/compiling existing code into
> SWAR *would* require true AI.
>
> > http://research.microsoft.com/sn/Millennium/, and especially
> > http://research.microsoft.com/sn/Millennium/Coign.html, for example).
>
> Better yet, the system builds on the existing Windows operating system,
> so today's software won't become obsolete and programmers can write new
> software the same way they do now, for one machine or many.
>
> Right. And pilots will worry about pig collision avoidance issues.
>
> Looks nice as a concept, but even if it works it is no panacea. It will
> alleviate migration psychologically, since it suggests you can run your
> apps on the new platform even if in reality you can't.
>
> > But that isn't especially relevant, anyway. I was talking about
> > programming techniques for existing PC hardware. Obviously, radical
> > changes in hardware
>
> Of what possible relevance is existing PC hardware/programming
> techniques to uploading? I am not sure we're talking in the same
> language here.
>
> > can require equally radical changes in programming methods.
>
> And even worse: vice versa.


