[mpiwg-rma] [EXTERNAL] Re: MPI_WIN_CREATE for intercommunicators

Jeff Hammond jeff.science at gmail.com
Wed Jan 22 12:56:16 CST 2014

On Wed, Jan 22, 2014 at 12:16 PM, Barrett, Brian W <bwbarre at sandia.gov> wrote:
> On 1/22/14 11:09 AM, "Jeff Hammond" <jeff.science at gmail.com> wrote:
>>On Wed, Jan 22, 2014 at 11:53 AM, Barrett, Brian W <bwbarre at sandia.gov>
>>> On 1/22/14 3:41 AM, "Thomas Jahns" <jahns at dkrz.de> wrote:
>>>>Hello Maik,
>>>>On 01/22/14 10:14, maik peterson wrote:
>>>>> thomas, today you should start thinking about other possibilities
>>>>> to solve your problem. this mpi rma stuff is crappy.
>>>>I guess that very much depends on your kind of machine. We've found it
>>>>very nice with MVAPICH, not so great with PE and OpenMPI.
>>> Another way to put it: it really depends on how much work your MPI
>>> implementor has put into one-sided for your hardware.  MVAPICH has put
>>> a good deal of effort into using IB-native RDMA semantics when
>>> implementing MPI one-sided.  Open MPI has not put that effort into IB
>>> (it has for another network, but that's a different story), so the
>>> performance is lower.  Open MPI is largely driven by squeaky wheels,
>>> and that wheel's pretty quiet :).
>>Are you saying that if I make a big deal about OpenMPI's RMA warts on
>>your list that you all will make it MPI-3-feature-complete and perform
>>really well?
> Chances of me personally making Open MPI perform really well over IB are
> pretty minimal.  But Mellanox might have a different view ;).  My point
> was that of all the things people complain about, RMA performance
> generally isn't one of them.

Function comes before performance.  Once datatypes work, I'll worry
about bandwidth and latency.

My completely uninformed assumption is that Mellanox's customers are
happy with MVAPICH2 for RMA and thus they don't have any incentive to
duplicate OSU's effort.


Jeff Hammond
jeff.science at gmail.com
