[Mpi-forum] Discussion points from the MPI-<next> discussion today

Jeff Hammond jhammond at alcf.anl.gov
Fri Sep 21 11:16:13 CDT 2012


On Fri, Sep 21, 2012 at 8:34 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Thu, Sep 20, 2012 at 12:14 PM, Jeff Hammond <jhammond at alcf.anl.gov>
> wrote:
>>
>> P2P)
>>
>> I would like MPI_(I)RECV_REDUCE, which - as you might guess - does a
>> reduction to the receive buffer instead of a simple write.  This
>> allows one to avoid having to manually buffer incoming messages to be
>> reduced at the receiver.  Torsten and I have discussed it and it seems
>> there are at least a few use cases.
>
>
> If you do this, _please_ allow a user-defined MPI_Op to be used in the
> reduction (i.e., don't cripple it like one-sided).

There's a reason I called it MPI_RECV_REDUCE and not
MPI_TWOSIDED_ACCUMULATE :-)  None of the reasons that MPI_Accumulate
doesn't support active messages apply here, so of course user-defined
reductions will be supported (if this happens at all).
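
To make that concrete, here is roughly what one has to write today,
plus a straw-man signature for the proposed call.  Both the signature
and the helper below are illustrative only, not anything that has been
proposed formally:

#include <mpi.h>
#include <stdlib.h>

/* Straw-man signature only -- not part of any MPI standard: reduce the
 * incoming message into recvbuf with op instead of overwriting it.
 *
 * int MPI_Recv_reduce(void *recvbuf, int count, MPI_Datatype datatype,
 *                     MPI_Op op, int source, int tag, MPI_Comm comm,
 *                     MPI_Status *status);
 */

/* What users do today: receive into a temporary buffer, then fold it
 * into the accumulation buffer with MPI_Reduce_local (MPI-2.2). */
static int recv_reduce_emulated(double *acc, int count, MPI_Op op,
                                int source, int tag, MPI_Comm comm)
{
    double *tmp = malloc((size_t)count * sizeof(double));
    if (tmp == NULL) return MPI_ERR_NO_MEM;
    int rc = MPI_Recv(tmp, count, MPI_DOUBLE, source, tag, comm,
                      MPI_STATUS_IGNORE);
    if (rc == MPI_SUCCESS)
        rc = MPI_Reduce_local(tmp, acc, count, MPI_DOUBLE, op);
    free(tmp);
    return rc;
}

The proposed call would eliminate the temporary buffer entirely, which
is the point of the suggestion above.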

> Jeff Squyres, I don't know what "fix grequest" meant in your list, but I
> hope that means: "provide a mechanism for users to implement nonblocking
> operations with the same progress semantics as built-in nonblocking
> operations". After writing the blog post below, I learned about additional
> exemplar use cases in dense linear algebra. Lack of this specific feature is
> causing a lot of important applications and libraries to systematically
> over-synchronize and preventing them from hiding communication latency.
>
> https://www.ieeetcsc.org/activities/blog/user_defined_nonblocking_collectives_must_make_progress
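
For anyone not familiar with the grequest issue, the sketch below shows
the existing MPI-2 generalized-request interface (callback bodies are
illustrative).  Note that there is no progress callback: MPI_Wait and
MPI_Test only ever invoke the query/free/cancel functions, none of
which can advance the user's operation, so completion has to be driven
by a user-managed thread.  That is exactly the gap Jed's blog post
describes.

#include <mpi.h>

static int query_fn(void *extra_state, MPI_Status *status)
{
    /* Called by MPI_Wait/MPI_Test to fill in the status once the
     * request has been completed by the user. */
    MPI_Status_set_elements(status, MPI_BYTE, 0);
    MPI_Status_set_cancelled(status, 0);
    status->MPI_SOURCE = MPI_UNDEFINED;
    status->MPI_TAG    = MPI_UNDEFINED;
    return MPI_SUCCESS;
}

static int free_fn(void *extra_state) { return MPI_SUCCESS; }

static int cancel_fn(void *extra_state, int complete) { return MPI_SUCCESS; }

/* Start a user-level nonblocking operation.  The real work must be
 * handed off to a separate thread (or the application's own polling
 * loop), which eventually calls MPI_Grequest_complete(*req); MPI itself
 * will not drive this operation forward. */
void start_user_op(MPI_Request *req)
{
    MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, req);
}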



-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond


