[Mpi-forum] MPI One-Sided Communication

Jeff Hammond jeff.science at gmail.com
Fri Apr 24 07:38:39 CDT 2009


Greg,

> The only significant public one-sided app that I've run into is
> NWCHEM, which uses a non-MPI one-sided library. And in that case,
> PathScale did an implementation on top of 2-sided MPI-1 send/recv, and
> it scaled better than competing real one-sided hardware. (Check out my
> old whitepaper for numbers.)

You're comparing ("scaled better than") apples ("an implementation on
top of 2-sided MPI-1 send/recv") and oranges ("real one-sided
hardware").  Please post the whitepaper (I cannot find it online) and
clarify what you mean here.

Did PathScale implement Global Arrays on top of MPI-1 instead of ARMCI
and find that NWChem scaled better in this context?  NWChem has many
different modules, some of which scale fine with a message-passing
(MP) model whereas others cannot.  It took a decade after NWChem first
did it with GA for anyone to demonstrate scalable quantum chemical
many-body theories using MPI, and then only with a pseudo-compiler and
a new programming language (ACES3 + SIAL, if anyone cares).  One-sided
provides a much better programming model than MP for complex
algorithms that show up in quantum chemistry, which is why GA, not
MPI, is the basis for every existing large-scale code of this kind.

If MPI-1 were better than ARMCI for the "meat and potatoes" in NWChem,
NWChem would be using it instead.
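
For anyone who has not stared at this kind of code, here is a minimal
sketch (mine, not NWChem's; names and sizes are invented) of the
access pattern that matters: one process pulls an arbitrary block out
of another process's piece of a distributed array while the owner
keeps computing, written with MPI-2 passive-target RMA.

/* Minimal sketch of a one-sided fetch of a remote block using MPI-2
 * passive-target RMA.  Not NWChem/GA code; layout is invented. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    const int n = 1000;   /* this rank's slice of the "global" array */
    double *local;
    /* MPI_Alloc_mem is needed for portable passive-target RMA in MPI-2 */
    MPI_Alloc_mem(n * sizeof(double), MPI_INFO_NULL, &local);
    for (int i = 0; i < n; i++) local[i] = rank + 0.001 * i;

    MPI_Win win;
    MPI_Win_create(local, n * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Rank 0 pulls 10 elements from the middle of the last rank's slice.
     * The target posts no receive and enters no collective here. */
    if (rank == 0 && nproc > 1) {
        double buf[10];
        int target = nproc - 1;
        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
        MPI_Get(buf, 10, MPI_DOUBLE, target, 500, 10, MPI_DOUBLE, win);
        MPI_Win_unlock(target, win);  /* buf is valid only after unlock */
        printf("got %g ... %g from rank %d\n", buf[0], buf[9], target);
    }

    MPI_Win_free(&win);
    MPI_Free_mem(local);
    MPI_Finalize();
    return 0;
}

Emulating that access on top of MPI-1 send/recv forces the owner to
poll or dedicate a thread to servicing requests, which is exactly
where the latency-versus-interference trade-off bites.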

> Now that I'm not an HPC guy anymore, I'll note that the distributed
> database that I'm doing is completely implemented in terms of active messages.

Active messages are very appealing to me, but I'm not aware of a
library implementation (F77- and C-compatible) that runs on BlueGene/P,
Cray XT, InfiniBand and Myrinet.  Can you point me to one?

> They're easier to use correctly than real one-sided
> communications, which are hard because programmers screw up
> synchronization & double-buffering.

If one-sided is evaluated at the level of Global Arrays rather than
ARMCI by itself, those issues disappear.  Perhaps that's an unfair
comparison, but direct use of ARMCI is rare whereas GA is relatively
popular.
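
For contrast, here is roughly what the same fetch looks like at the
GA level.  I am writing the GA C calls from memory, so treat the
exact signatures and type constants as approximate; the point is that
the library owns the distribution, the RMA and the synchronization,
so there is no epoch or buffer-reuse rule for the application to
violate.

/* Rough sketch of the same fetch through Global Arrays.  GA calls are
 * quoted from memory; check ga.h before trusting the details. */
#include <mpi.h>
#include <ga.h>
#include <macdecls.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    GA_Initialize();
    /* GA's allocator; the type constant here is an assumption */
    MA_init(C_DBL, 1000000, 1000000);

    int dims[1]  = {100000};
    int chunk[1] = {-1};            /* let GA choose the distribution */
    int g_a = NGA_Create(C_DBL, 1, dims, "work", chunk);
    double one = 1.0;
    GA_Fill(g_a, &one);
    GA_Sync();

    /* Any process can read any block; locality, RMA and synchronization
     * are handled inside the library.  Nothing here for the caller to
     * screw up. */
    double buf[10];
    int lo[1] = {500}, hi[1] = {509}, ld[1] = {10};
    NGA_Get(g_a, lo, hi, buf, ld);

    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}

That is the sense in which the synchronization and double-buffering
pitfalls disappear: they become the library's problem rather than the
application's.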

Jeff

-- 
Jeff Hammond
The University of Chicago
http://home.uchicago.edu/~jhammond/


