[mpi3-coll] Neighborhood collectives round 2: reductions

Jed Brown jedbrown at mcs.anl.gov
Sat Dec 8 14:49:35 CST 2012


On Sat, Dec 8, 2012 at 11:12 AM, Torsten Hoefler <htor at illinois.edu> wrote:

> Hi all,
>
> We discussed the neighborhood reductions at the last Forum and the straw
> vote if we should include them in the next revision was:
>
>  - yes: 18
>  - no: 0
>  - abstain: 3
>
> I addressed all the issues in the draft that were brought up at the
> Forum. The new draft is now at http://www.unixer.de/sec/topol.pdf . Look
> for ticket "XXX".
>

The other interfaces for MPI_Ineighbor_reduce do not contain the "I" (page
40, lines 25 and 30); MPI_Ineighbor_reducev does not have the "v" (page 41,
lines 26 and 31).


> One open question remains: would a single send buffer for
> neighbor_reduce suffice or do we need one buffer per destination
> process? The second case could always be done with neighbor_reducev
> (with small additional costs). This question is more for the potential
> users (Jed etc.).
>

My understanding of Neighbor_reducev is that I send different sizes to each
neighbor, but receive the same size from every neighbor (reducing into
exactly one buffer). That isn't really a useful operation in
domain-decomposed methods. The useful operation is that I reduce
different-sized received buffers into different (but sometimes overlapping)
parts of my local buffer.
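
For concreteness, here is a minimal sketch of that operation, emulated with
the standard MPI_Neighbor_alltoallv followed by a local summation. Names
like local_offset and staging are illustrative assumptions, not part of any
proposed interface:

#include <mpi.h>
#include <stdlib.h>

/* Sketch only: each in-neighbor contributes a different-sized buffer,
 * and each contribution is summed into a (possibly overlapping) region
 * of the local buffer, local_offset[i] giving where neighbor i's
 * contribution lands. */
void ghost_reduce(MPI_Comm gcomm, double *local,
                  const double *sendbuf, const int *sendcounts,
                  const int *sdispls,
                  const int *recvcounts, const int *local_offset)
{
  int indegree, outdegree, weighted, total = 0;
  MPI_Dist_graph_neighbors_count(gcomm, &indegree, &outdegree, &weighted);

  int *rdispls = malloc(indegree * sizeof(int));
  for (int i = 0; i < indegree; i++) { rdispls[i] = total; total += recvcounts[i]; }
  double *staging = malloc(total * sizeof(double));

  /* Variable-sized exchange with all graph neighbors */
  MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                         staging, recvcounts, rdispls, MPI_DOUBLE, gcomm);

  /* Local reduction: target regions for different neighbors may overlap */
  for (int i = 0; i < indegree; i++)
    for (int j = 0; j < recvcounts[i]; j++)
      local[local_offset[i] + j] += staging[rdispls[i] + j];

  free(staging);
  free(rdispls);
}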

The single send buffer (with different datatype arguments) imposes no
restriction and is natural.
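
As an existence proof, the same "one send buffer, per-neighbor datatype"
pattern can already be expressed with the standard MPI_Neighbor_alltoallw,
building one indexed datatype per out-neighbor over a single buffer. The
index sets indices[i]/counts[i] are assumed application data, and the
receive side is kept contiguous for brevity:

#include <mpi.h>
#include <stdlib.h>

/* Sketch only: one send buffer, a different indexed view of it per
 * out-neighbor.  indices[i] lists which entries of sendbuf go to
 * out-neighbor i; counts[i] is the length of that list. */
void exchange_one_sendbuf(MPI_Comm gcomm, const double *sendbuf,
                          int *const indices[], const int *counts,
                          double *recvbuf, const int *recvcounts)
{
  int indegree, outdegree, weighted;
  MPI_Dist_graph_neighbors_count(gcomm, &indegree, &outdegree, &weighted);

  MPI_Datatype *stypes = malloc(outdegree * sizeof(MPI_Datatype));
  MPI_Datatype *rtypes = malloc(indegree * sizeof(MPI_Datatype));
  int *scounts = malloc(outdegree * sizeof(int));
  MPI_Aint *sdispls = calloc(outdegree, sizeof(MPI_Aint)); /* all 0: same buffer */
  MPI_Aint *rdispls = malloc(indegree * sizeof(MPI_Aint));

  for (int i = 0; i < outdegree; i++) {
    MPI_Type_create_indexed_block(counts[i], 1, indices[i], MPI_DOUBLE,
                                  &stypes[i]);
    MPI_Type_commit(&stypes[i]);
    scounts[i] = 1; /* one instance of the indexed view per neighbor */
  }
  MPI_Aint disp = 0;
  for (int i = 0; i < indegree; i++) { /* contiguous receive for brevity */
    rtypes[i] = MPI_DOUBLE;
    rdispls[i] = disp;
    disp += recvcounts[i] * (MPI_Aint)sizeof(double);
  }

  MPI_Neighbor_alltoallw(sendbuf, scounts, sdispls, stypes,
                         recvbuf, recvcounts, rdispls, rtypes, gcomm);

  for (int i = 0; i < outdegree; i++) MPI_Type_free(&stypes[i]);
  free(stypes); free(rtypes); free(scounts); free(sdispls); free(rdispls);
}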