[mpiwg-coll] backward communication in neighborhood collectives

HOLMES Daniel d.holmes at epcc.ed.ac.uk
Mon Jul 23 05:51:33 CDT 2018


Hi Junchao Zhang,

My understanding of the current API for MPI-3.1 is that:

1) the virtual topology does not actually restrict communication via the communicator to the edges specified in the topology - messages can be sent along any edge in either direction, and even between pairs of processes for which no edge was specified.

2) the virtual topology can be specified as a symmetric graph - for every ‘forward’ edge (e.g. from A to B), the ‘backward’ edge (i.e. from B to A) can be included as well.

3) there is already language in the MPI Standard regarding how MPI handles symmetric and non-symmetric graph topologies for neighbourhood collective operations.

Thus, there is no need to create two distributed graph topology communicators to achieve ‘forward and backward communication along the edges’.
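
For illustration, here is a minimal sketch of that approach: a single distributed graph communicator whose topology is symmetric, so one MPI_Neighbor_alltoall moves data along every edge in both directions. The ring neighbourhood and all variable names are purely illustrative; in a real application the neighbour lists would come from your own graph.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank lists both ring neighbours as sources AND destinations,
       so every edge (A,B) also appears as (B,A): a symmetric topology. */
    int nbrs[2] = { (rank - 1 + size) % size, (rank + 1) % size };

    MPI_Comm graph_comm;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, nbrs, MPI_UNWEIGHTED,  /* sources      */
                                   2, nbrs, MPI_UNWEIGHTED,  /* destinations */
                                   MPI_INFO_NULL, 0, &graph_comm);

    /* One neighbourhood collective now carries traffic along every edge
       in both directions; no second communicator is needed. */
    int sendbuf[2] = { rank, rank };
    int recvbuf[2];
    MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT,
                          recvbuf, 1, MPI_INT, graph_comm);

    printf("rank %d received %d and %d\n", rank, recvbuf[0], recvbuf[1]);

    MPI_Comm_free(&graph_comm);
    MPI_Finalize();
    return 0;
}

Because the forward and backward edge of every pair end up in the same topology, this is exactly the symmetric case that the Standard's advice on neighbourhood collectives already covers.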

Cheers,
Dan.
—
Dr Daniel Holmes PhD
Applications Consultant in HPC Research
d.holmes at epcc.ed.ac.uk
Phone: +44 (0) 131 651 3465
Mobile: +44 (0) 7940 524 088
Address: Room 3415, JCMB, The King’s Buildings, Edinburgh, EH9 3FD
—
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
—

On 15 Jul 2018, at 10:42, Anthony Skjellum <tony at runtimecomputing.com> wrote:

Hi, I just saw this.

We definitely need to consider this concern.

I also need to go review the APIs.

Thanks,
Tony


On Sat, Jul 14, 2018 at 12:27 AM, Junchao Zhang <jczhang at mcs.anl.gov> wrote:
 I want to try MPI neighborhood collectives. I have a communication graph and want to do both forward and backward communication along its edges. With the current APIs, it looks like I need to create two comm_dist_graphs. That is wasteful, since an MPI implementation could apply the same optimizations to both.
 Should MPI support this scenario? Thanks.

--Junchao Zhang
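
For concreteness, a hypothetical sketch of the two-communicator workaround described in the question above; the directed ring and all variable names are purely illustrative:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One directed 'forward' edge per rank: rank -> dest, fed from src. */
    int src  = (rank - 1 + size) % size;
    int dest = (rank + 1) % size;

    MPI_Comm forward_comm, backward_comm;

    /* Forward graph: receive from src, send to dest. */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   1, &src,  MPI_UNWEIGHTED,
                                   1, &dest, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &forward_comm);

    /* Backward graph: the same edges with the roles swapped, i.e. reversed. */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   1, &dest, MPI_UNWEIGHTED,
                                   1, &src,  MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &backward_comm);

    /* Two separate neighborhood collectives, one per direction. */
    int fwd_send = rank, fwd_recv, bwd_send = rank, bwd_recv;
    MPI_Neighbor_alltoall(&fwd_send, 1, MPI_INT, &fwd_recv, 1, MPI_INT, forward_comm);
    MPI_Neighbor_alltoall(&bwd_send, 1, MPI_INT, &bwd_recv, 1, MPI_INT, backward_comm);

    MPI_Comm_free(&forward_comm);
    MPI_Comm_free(&backward_comm);
    MPI_Finalize();
    return 0;
}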





--
Tony Skjellum, PhD
RunTime Computing Solutions, LLC
tony at runtimecomputing.com
direct: +1-205-918-7514
cell: +1-205-807-4968
