[Mpi-forum] MPI One-Sided Communication
Jeff Hammond
jeff.science at gmail.com
Fri Apr 24 21:10:48 CDT 2009
>> Hmm. There are modules of NWChem that have datasets capable of
>> scaling from 1s to 100's of thousands of cores.
>
> Right. Those don't need a fast interconnect. They probably also would
> run fine with MPI-1. That's a solved problem; we don't need to think
> much about it.
No, that's not true in the slightest for what Vinod is referring to.
NWChem's CCSD(T) code hit 0.357 petaflops at 55% efficiency on
Jaguar, and this method is anything but simple. The most scalable
portion is a seven-loop accumulation in which the outermost loop
computes very large intermediates (by design, they fill the available
memory) that must then be communicated all over the machine in a
non-trivial pattern, along with other intermediates and permanent
data, inside another set of do loops.
Once all the buffers get pushed around, the inner loops are just a
bunch of DGEMMs.
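To make that pattern concrete, here is a minimal sketch (this is not
the NWChem or GA/ARMCI code) of one get/DGEMM/accumulate step written
with MPI-2 passive-target one-sided operations; the tile size, the
ring-neighbor data placement, and the CBLAS call are assumptions made
purely for illustration:

/* Illustrative sketch only (not NWChem or GA/ARMCI): one
 * get -> DGEMM -> accumulate step using MPI-2 passive-target RMA.
 * Tile size T, the ring-neighbor placement, and the CBLAS call are
 * assumptions for illustration. */
#include <mpi.h>
#include <cblas.h>
#include <stdlib.h>

#define T 64   /* hypothetical tile edge length */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    /* Each rank exposes one T x T tile of a distributed intermediate. */
    double *tile = calloc((size_t)T * T, sizeof(double));
    MPI_Win win;
    MPI_Win_create(tile, (MPI_Aint)(T * T * sizeof(double)), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double *a = malloc(T * T * sizeof(double));        /* fetched remote tile */
    double *b = malloc(T * T * sizeof(double));        /* locally held data   */
    double *c = calloc((size_t)T * T, sizeof(double)); /* my contribution     */
    for (int i = 0; i < T * T; i++) b[i] = 1.0;

    int src = (rank + 1) % nproc;          /* owner of the tile I need     */
    int dst = (rank + nproc - 1) % nproc;  /* owner of the result I update */

    /* Fetch a remote tile without any participation by its owner. */
    MPI_Win_lock(MPI_LOCK_SHARED, src, 0, win);
    MPI_Get(a, T * T, MPI_DOUBLE, src, 0, T * T, MPI_DOUBLE, win);
    MPI_Win_unlock(src, win);

    /* Local contraction: c += a * b (the "bunch of DGEMMs"). */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                T, T, T, 1.0, a, T, b, T, 1.0, c, T);

    /* Keep the read phase and the update phase in separate epochs. */
    MPI_Barrier(MPI_COMM_WORLD);

    /* Add my contribution into the remote result tile atomically. */
    MPI_Win_lock(MPI_LOCK_SHARED, dst, 0, win);
    MPI_Accumulate(c, T * T, MPI_DOUBLE, dst, 0, T * T, MPI_DOUBLE,
                   MPI_SUM, win);
    MPI_Win_unlock(dst, win);

    MPI_Win_free(&win);
    free(a); free(b); free(c); free(tile);
    MPI_Finalize();
    return 0;
}

The point of the sketch is that neither the rank owning the fetched
tile nor the rank owning the result tile has to do anything for these
operations to complete.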
See the attached papers for the details of the algorithm. The
GA/ARMCI implementation is fewer than 1000 lines of code. Please let
me know when you can match this with MPI-1.
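For contrast, here is a hypothetical sketch of what even a single
remote fetch looks like when emulated with MPI-1 two-sided messaging;
the tags, tile size, and ring pattern are again invented for
illustration. The owner has to stop and explicitly service the
request, which is exactly what the irregular access pattern described
above makes impractical:

/* Hypothetical contrast (not from NWChem or GA): emulating a single
 * remote "get" with MPI-1 two-sided messaging.  The owner of the data
 * must explicitly service every request, so this request/reply
 * protocol has to be interleaved with its own compute loop.
 * Tags, sizes, and the ring pattern are made up for illustration. */
#include <mpi.h>
#include <stdlib.h>

#define T        64
#define TAG_REQ  1
#define TAG_TILE 2

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    double *mine  = calloc((size_t)T * T, sizeof(double));  /* tile I own  */
    double *fetch = malloc(T * T * sizeof(double));         /* tile I need */
    int src = (rank + 1) % nproc, dst = (rank + nproc - 1) % nproc;
    int req_in, req_out = 0;   /* trivial "which tile" request payload */

    /* Post receives first so the blocking sends below cannot deadlock. */
    MPI_Request r[2];
    MPI_Irecv(fetch, T * T, MPI_DOUBLE, src, TAG_TILE, MPI_COMM_WORLD, &r[0]);
    MPI_Irecv(&req_in, 1, MPI_INT, dst, TAG_REQ, MPI_COMM_WORLD, &r[1]);

    /* Ask the owner for its tile... */
    MPI_Send(&req_out, 1, MPI_INT, src, TAG_REQ, MPI_COMM_WORLD);

    /* ...and, crucially, stop computing to serve the neighbor's request. */
    MPI_Wait(&r[1], MPI_STATUS_IGNORE);
    MPI_Send(mine, T * T, MPI_DOUBLE, dst, TAG_TILE, MPI_COMM_WORLD);

    /* Only now does my own "get" complete. */
    MPI_Wait(&r[0], MPI_STATUS_IGNORE);

    free(mine); free(fetch);
    MPI_Finalize();
    return 0;
}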
> I think this nicely illustrates my point that I don't know of any
> existing HPC application that really needs one-sided hardware to get
> great performance.*
Frankly, this is a religious war that has already been waged in
HPCWire (Myricom versus IBM). You appear to be saying that IBM's and
Cray's support for powerful one-sided hardware is a waste of money.
Is that why their machines, SGI's, and those running InfiniBand
(which has excellent RDMA) occupy the first 26 slots on the Top500
list? Anti-one-sided Myricom's best showing is #27.
If you want to make a real argument against one-sided communication,
show me an MPI-1-based CCSD(T) code that hits a third of a petaflop.
Show me the code.
Jeff
--
Jeff Hammond
The University of Chicago
http://home.uchicago.edu/~jhammond/
Attachments:
- Rendell_perturbative_triples_algorithm_TCA1993.pdf (application/pdf, 1176617 bytes):
  <http://lists.mpi-forum.org/pipermail/mpi-forum/attachments/20090424/1ac38f4c/attachment-0002.pdf>
- Kobayashi_Rendell_parallel_direct_CC_CPL1997.pdf (application/pdf, 581468 bytes):
  <http://lists.mpi-forum.org/pipermail/mpi-forum/attachments/20090424/1ac38f4c/attachment-0003.pdf>