[Mpi-forum] MPI One-Sided Communication

Vinod Tipparaju tipparajuv at hotmail.com
Fri Apr 24 19:43:14 CDT 2009


Hmm. There are modules of NWChem, and corresponding datasets, capable of scaling from ones to hundreds of thousands of cores. You take a module that by its nature doesn't scale, say you prove you can scale it with an incompatible model while allowing NWChem to use only 75% of the compute power (try looking at the utilized flop rate), and make a statement about it ... and this doesn't have anything to do with anything?

The particular hardware did not have good support for one-sided communication. It was a thin card that relied heavily on the host, and in this case using the host gave some performance.
This, in my opinion, is a bad way of making a point when comparing two different models. Bill's point about implementations hits its sweet spot here.
Please excuse me if I am missing your point here.
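
For anyone less familiar with the model difference at issue, here is a minimal sketch (not from this thread; ranks and buffers are hypothetical) contrasting two-sided MPI-1 message passing with MPI-2 one-sided RMA. In the one-sided case only the origin drives the transfer into the target's exposed window, which is exactly where a thin card that leans on the host for progress struggles.

/* Sketch only: 1-integer exchange between ranks 0 and 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Two-sided (MPI-1): both processes participate in every transfer. */
    int val = rank;
    if (rank == 0)
        MPI_Send(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* One-sided (MPI-2 RMA): the origin alone puts data into the target's
       window; the target CPU need not be involved in moving the data. */
    int winbuf = -1;
    MPI_Win win;
    MPI_Win_create(&winbuf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);
    if (rank == 0)
        MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);
    if (rank == 1)
        printf("rank 1 window now holds %d\n", winbuf);
    MPI_Win_free(&win);

    MPI_Finalize();
    return 0;
}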

> Date: Fri, 24 Apr 2009 17:18:27 -0700
> From: lindahl at pbm.com
> To: mpi-forum at lists.mpi-forum.org
> Subject: Re: [Mpi-forum] MPI One-Sided Communication
> 
> On Fri, Apr 24, 2009 at 05:50:23PM -0400, Vinod Tipparaju wrote:
> 
>> The scaling in the chart discussed below is immaterial because the
>> comparision point is flawed. I tried to convey this to some of the
>> developers on the phone a couple of years ago but invain. The fact
>> this claim of performance went no where beyond the white paper
>> should convey something. Did you know dft (siosi6) doesn't scale
>> (for various reasons) beyond 1k? 
> 
> Pretty much every code and dataset have a maximum scalability point. I
> don't see what that has to do with anything.
> 
> The point of the chart is that a pure MPI-1 implementation scaled
> great and had great absolute performance on particular hardware/MPI,
> even though the developers were sure that specialized hardware &
> non-MPI underlying messaging software was needed.
> 
> -- greg
> 
> _______________________________________________
> mpi-forum mailing list
> mpi-forum at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum