Re: FY;) see, it's still spreading

From: J. R. Molloy (jr@shasta.com)
Date: Fri Mar 23 2001 - 12:28:12 MST


Eugene Leitl has come back to confer positively:
> On the more positive side, I've just come back from a 3-day tutorial on
> MPI, and I'm pretty much impressed with the thing. Any parallel processing
> wonk should take a very good look at it, as it's 1) clever and 2) here to
> stay for a long time.

MPI... it's still spreading... 249,000 hits

MPI parallel processing = 36,300 hits

Introduction to Message Passing Parallel Programming with MPI: Parallel Processing
http://www.psu.edu/dept/cac/ait/nic_group/Edu_Train/VT/IntroMPI/VT-IntroMPI-pp.html
As stated in the introduction, MPI stands for Message Passing Interface. Message passing is a form of parallel processing in which data is explicitly transmitted between individual CPUs by the programmer. Before we get into the thick of message-passing parallel computing, let us take a few steps back and consider why one would program in parallel at all. In this section we will introduce you to the important concepts in the field of parallel programming, including its goals. We'll take a quick glance at the history of parallel processing that led to the development of MPI.
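
A minimal sketch of what "explicitly transmitted by the programmer" means in practice, using standard MPI-1 calls in C (the value 42 and the two-rank setup are just illustrative):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process is this? */

        if (rank == 0) {
            value = 42;
            /* the programmer explicitly sends the data to rank 1... */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* ...and explicitly receives it; nothing moves otherwise */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Run it under any MPI implementation with at least two processes; the point is simply that every byte crossing between CPUs is moved by an explicit call.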
______________________________

As part of its partnership with the
National Computational Science Alliance, the
Scientific Computing and Visualization group
and the Center for Computational Science at
Boston University will be offering a two-day
workshop on high-performance computing on
December 2-3, 1999. There is no fee for the
workshop, and it is open to anyone interested
in high-performance computing, including those
in industry as well as academia. Topics will
include parallel processing with MPI, parallel
processing with OpenMP, Fortran 90, performance
tuning, and debugging. Participants will be
awarded access to Boston University's SGI Power
Challenge and Origin2000 supercomputer systems
during and after the workshop.

LAM / MPI Parallel Computing
http://www.mpi.nd.edu/lam/
LAM (Local Area Multicomputer) is an MPI programming environment and development system for heterogeneous computers on a network. With LAM, a dedicated cluster or an existing network computing infrastructure can act as one parallel computer solving one problem.
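
Assuming a working LAM installation (booted with lamboot and launched with mpirun), a quick way to see the "one parallel computer" illusion is to have every rank report where it is actually running; PCs, Alphas, and Suns all show up as ranks of the same machine. A sketch, not LAM-specific code:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);    /* hostname of this node */

        /* each machine on the network is just another rank */
        printf("rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }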

Fwd. Msg. Follows
--------------------------------------------------------------------------------
http://www.tlug.gr.jp/ML/0009/msg00008.html
Hi everybody.

  I'm building a simulator, which seems to work fine so far except that
it is too slow. Since the algorithms themselves cannot be significantly
further optimized, I decided with my boss to make a version that will
run on a Beowulf cluster.

  I found that LAM/MPI would probably be a very good mix of
efficiency and portability (the Beowulf I am building is a mix of PC,
DEC Alpha, and Sun computers).

  There is a lot of documentation about using or setting up LAM/MPI, and
I find the library very easy to use. But there is not much good
documentation about how to design good, efficient parallel programs. I
already have some ideas about how to make my simulator work efficiently
in parallel, but I'm sure a book could give me many tips and save me
time.

  So, I would really appreciate it if someone could suggest some good
books about designing efficient programs for a heterogeneous cluster of
computers. I don't mind if the book uses Fortran, C, or any other
commonly used language (personally, I'm using C++), as long as it helps
me make a more efficient simulator given the available hardware.

    By the way, a structure in which a master just dispatches jobs to
slave computers is not suitable for me, because each node has to
communicate with some other nodes, and the amount of data is too big
(say, easily over one GB for a single simulation).

  Thank you in advance,

Simon Valiquette
: simon@crl.fujixerox.co.jp

PS: My boss is really amazed that I can produce better quality
software using only free GPL software than by using over $1000 worth of
software running under the other OS.

---
It took the computational power of three Commodore 64s to fly to the
moon.
It takes a 486 to run Windows 95.
Something is wrong here.
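
Since Simon rules out a master/slave structure, the usual MPI alternative is direct peer-to-peer exchange. Here is a hedged sketch of one common shape, a ring in which every rank swaps a block of boundary data with its neighbors via MPI_Sendrecv; the buffer size and datatype are placeholders, not anything from his simulator:

    #include <stdio.h>
    #include <mpi.h>

    #define HALO 1024   /* placeholder size for the exchanged region */

    int main(int argc, char *argv[])
    {
        int rank, size, left, right, i;
        double send_buf[HALO], recv_buf[HALO];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (i = 0; i < HALO; i++)
            send_buf[i] = (double) rank;    /* dummy payload */

        left  = (rank - 1 + size) % size;   /* neighbor ranks in a ring */
        right = (rank + 1) % size;

        /* every node talks directly to its neighbors -- no master.
           MPI_Sendrecv pairs the send and the receive so neighboring
           exchanges cannot deadlock each other. */
        MPI_Sendrecv(send_buf, HALO, MPI_DOUBLE, right, 0,
                     recv_buf, HALO, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, &status);

        MPI_Finalize();
        return 0;
    }

For gigabyte-scale transfers one would send in chunks or overlap communication with computation via nonblocking MPI_Isend/MPI_Irecv, but the topology is the point here.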
_________________________________
Standard MPI Interprocess Communication
http://www.ptf.com/ptf/products/UNIX/current/0538.0.html
    The MPI Forum, which included representatives from every end of the
    parallel computing community, has specified a complete interface for
    message-based interprocess communication, named MPI (Message Passing
    Interface).  Widespread use of MPI will benefit the general advancement
    of parallel processing technology.  The specific semantics of the
    communication interface, and its perceived popularity and future
    prospects, need no longer be a factor in choosing an environment or a
    vendor for developing message passing applications.  Users and buyers
    can focus on other factors such as efficiency of implementation and
    related development tools.
____________________________
And so on...
Stay hungry,
--J. R.
Useless hypotheses:
 consciousness, phlogiston, philosophy, vitalism, mind, free will
Take off every 'Zig'!!

