[mpiwg-coll] backward communication in neighborhood collectives

Jeff Hammond jeff.science at gmail.com
Mon Jul 23 09:49:50 CDT 2018


On Mon, Jul 23, 2018 at 7:45 AM, Junchao Zhang <jczhang at mcs.anl.gov> wrote:

> Hello, Dan,
>  I am interested in MPI_Neighbor_alltoallv. From its arguments, I do not
> see how one can send a message to a process not specified in the graph
> created by, for example, MPI_Dist_graph_create_adjacent.
>

Why would you expect this?  The whole point of graph communicators is to
specify the communication topology so that it can be optimized.  If you
want to communicate with other ranks, use the parent communicator, which
supports communication with all ranks.
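
For example (a minimal sketch; the 1-D ring topology and the ranks used
here are just an illustration): the neighborhood collective runs on the
graph communicator, while a message to a non-neighbor uses the parent
communicator directly.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Hypothetical 1-D ring: each rank declares its left and right
     * neighbors as both sources and destinations (a symmetric graph). */
    int nbrs[2] = { (rank - 1 + size) % size, (rank + 1) % size };
    MPI_Comm graph_comm;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, nbrs, MPI_UNWEIGHTED,  /* sources */
                                   2, nbrs, MPI_UNWEIGHTED,  /* destinations */
                                   MPI_INFO_NULL, 0, &graph_comm);

    /* Neighborhood collective on the graph communicator: traffic follows
     * the declared edges only. */
    int sendbuf[2] = { rank, rank }, recvbuf[2];
    MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, graph_comm);

    /* A message to a rank that is not a neighbor goes over the parent
     * communicator (here: rank 0 to rank 2, non-adjacent when size > 3). */
    if (size > 3) {
        if (rank == 0)
            MPI_Send(&rank, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
        else if (rank == 2)
            MPI_Recv(recvbuf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    MPI_Comm_free(&graph_comm);
    MPI_Finalize();
    return 0;
}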

Jeff


> --Junchao Zhang
>
> On Mon, Jul 23, 2018 at 5:51 AM, HOLMES Daniel <d.holmes at epcc.ed.ac.uk>
> wrote:
>
>> Hi Junchao Zhang,
>>
>> My understanding of the current API for MPI-3.1 is that:
>>
>> 1) the virtual topology does not actually restrict communication via the
>> communicator to the edges specified in the topology - messages can be sent
>> along any edge in either direction, and even between pairs of processes for
>> which no edge was specified.
>>
>> 2) the virtual topology can be specified as a symmetric graph - for every
>> ‘forward’ edge (e.g. from A to B), the ‘backward’ edge (i.e. from B to A)
>> can be included as well.
>>
>> 3) there is already language in the MPI Standard regarding how MPI
>> handles symmetric and non-symmetric graph topologies for neighbourhood
>> collective operations.
>>
>> Thus, there is no need to create two distributed graph topology
>> communicators to achieve ‘forward and backward communication along the
>> edges’.
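>>
>> As a concrete sketch (the names, buffers and counts are illustrative
>> only): declaring each neighbour as both a source and a destination
>> yields one communicator over which MPI_Neighbor_alltoallv carries
>> traffic in both directions:
>>
>> #include <mpi.h>
>>
>> /* Illustrative helper: exchange 'count' doubles with both 'left' and
>>  * 'right' over a single symmetric graph communicator, i.e. the
>>  * 'forward' and 'backward' edges in one call. */
>> static void exchange_both_ways(MPI_Comm comm, int left, int right,
>>                                const double *sendbuf, double *recvbuf,
>>                                int count)
>> {
>>     int nbrs[2]   = { left, right };  /* each neighbour: source and dest */
>>     int counts[2] = { count, count };
>>     int displs[2] = { 0, count };
>>     MPI_Comm g;
>>
>>     MPI_Dist_graph_create_adjacent(comm, 2, nbrs, MPI_UNWEIGHTED,
>>                                    2, nbrs, MPI_UNWEIGHTED,
>>                                    MPI_INFO_NULL, 0, &g);
>>     MPI_Neighbor_alltoallv(sendbuf, counts, displs, MPI_DOUBLE,
>>                            recvbuf, counts, displs, MPI_DOUBLE, g);
>>     MPI_Comm_free(&g);
>> }
>>
>> (In practice one would create the graph communicator once and reuse it
>> across many exchanges rather than rebuilding it per call.)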
>>
>> Cheers,
>> Dan.
>> --
>> Dr Daniel Holmes PhD
>> Applications Consultant in HPC Research
>> d.holmes at epcc.ed.ac.uk
>> Phone: +44 (0) 131 651 3465
>> Mobile: +44 (0) 7940 524 088
>> Address: Room 3415, JCMB, The King’s Buildings, Edinburgh, EH9 3FD
>>
>> The University of Edinburgh is a charitable body, registered in Scotland,
>> with registration number SC005336.
>>
>> On 15 Jul 2018, at 10:42, Anthony Skjellum <tony at runtimecomputing.com>
>> wrote:
>>
>> Hi, I just saw this.
>>
>> We definitely need to consider this concern.
>>
>> I also need to go review the APIs.
>>
>> Thanks,
>> Tony
>>
>>
>> On Sat, Jul 14, 2018 at 12:27 AM, Junchao Zhang <jczhang at mcs.anl.gov>
>> wrote:
>>
>>>  I want to try MPI neighborhood collectives. I have a communication
>>> graph and want to do both forward and backward communication along the
>>> edges. With the current APIs, it looks like I need to create two
>>> comm_dist_graphs. That seems wasteful, since MPI implementations could
>>> apply the same optimizations to both.
>>>  Should MPI support this scenario?  Thanks.
>>>
>>> --Junchao Zhang
>>>
>>
>>
>> --
>> Tony Skjellum, PhD
>> RunTime Computing Solutions, LLC
>> tony at runtimecomputing.com
>> direct: +1-205-918-7514
>> cell: +1-205-807-4968
>>
>>
>
>


-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/