[mpiwg-coll] backward communication in neighborhood collectives

Junchao Zhang jczhang at mcs.anl.gov
Mon Jul 23 11:20:34 CDT 2018


No, not at all.

--Junchao Zhang

On Mon, Jul 23, 2018 at 11:04 AM, Jeff Hammond <jeff.science at gmail.com>
wrote:

> Is creating two communicators prohibitive?
>
> Sent from my iPhone
>
> On Jul 23, 2018, at 8:02 AM, Junchao Zhang <jczhang at mcs.anl.gov> wrote:
>
> Jeff,
>  No. I do not want to send messages to other ranks (ones that are not in
> the comm_dist_graph).
>  My question is: I create a graph with MPI_Dist_graph_create_adjacent,
> which specifies a communication direction. I can do MPI_Neighbor_alltoallv
> on it. That is great.  But I also want to do backward communication, i.e.,
> reverse the direction of the edges in the graph. How can I do that without
> creating a new communicator?
> The background of this question is: in PETSc, we have an operation called
> VecScatter, which scatters some entries of a parallel vector x to another
> parallel vector y. Sometimes we want to reverse the operation and scatter
> the y entries back to x.
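>
> To make this concrete, here is a minimal sketch of the forward setup, using
> a toy ring pattern (each rank scatters one value to rank+1) rather than the
> real PETSc graph:
>
>   #include <mpi.h>
>
>   int main(int argc, char **argv)
>   {
>     MPI_Init(&argc, &argv);
>     int rank, size;
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>     /* Forward graph: each rank receives from rank-1 and sends to rank+1. */
>     int sources[1]      = { (rank - 1 + size) % size };
>     int destinations[1] = { (rank + 1) % size };
>     MPI_Comm forward;
>     MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
>                                    1, sources,      MPI_UNWEIGHTED,
>                                    1, destinations, MPI_UNWEIGHTED,
>                                    MPI_INFO_NULL, 0, &forward);
>
>     /* Forward scatter (x -> y): one double per out-neighbor. */
>     double x = (double)rank, y = -1.0;
>     int counts[1] = { 1 }, displs[1] = { 0 };
>     MPI_Neighbor_alltoallv(&x, counts, displs, MPI_DOUBLE,
>                            &y, counts, displs, MPI_DOUBLE, forward);
>
>     /* The reverse scatter (y back to x) would need the edge directions
>        flipped, which is exactly the question here. */
>     MPI_Comm_free(&forward);
>     MPI_Finalize();
>     return 0;
>   }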
>
>
> --Junchao Zhang
>
> On Mon, Jul 23, 2018 at 9:49 AM, Jeff Hammond <jeff.science at gmail.com>
> wrote:
>
>> On Mon, Jul 23, 2018 at 7:45 AM, Junchao Zhang <jczhang at mcs.anl.gov>
>> wrote:
>>
>>> Hello, Dan,
>>>  I am interested in MPI_Neighbor_alltoallv. From its arguments, I do not
>>> see how one can send a message to a neighbor that is not specified in the
>>> graph created by, for example, MPI_Dist_graph_create_adjacent.
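>>>
>>> For reference, the MPI-3.1 C binding makes this explicit:
>>>
>>>   int MPI_Neighbor_alltoallv(const void *sendbuf, const int sendcounts[],
>>>                              const int sdispls[], MPI_Datatype sendtype,
>>>                              void *recvbuf, const int recvcounts[],
>>>                              const int rdispls[], MPI_Datatype recvtype,
>>>                              MPI_Comm comm);
>>>
>>> The sendcounts/sdispls arrays have one entry per out-neighbor and
>>> recvcounts/rdispls one entry per in-neighbor of the calling process in the
>>> graph topology of comm, so there is simply no slot for a rank that is not
>>> an adjacent neighbor.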
>>>
>>
>> Why would you expect this?  The whole point of graph communicators is to
>> specify the communication topology so that it can be optimized.  If you
>> want to communicate with other ranks, use the parent communicator, which
>> supports communication among all ranks.
>>
>> Jeff
>>
>>
>>> --Junchao Zhang
>>>
>>> On Mon, Jul 23, 2018 at 5:51 AM, HOLMES Daniel <d.holmes at epcc.ed.ac.uk>
>>> wrote:
>>>
>>>> Hi Junchao Zhang,
>>>>
>>>> My understanding of the current API for MPI-3.1 is that:
>>>>
>>>> 1) the virtual topology does not actually restrict communication via
>>>> the communicator to the edges specified in the topology - point-to-point
>>>> messages can be sent along any edge in either direction, and even between
>>>> pairs of processes for which no edge was specified.
>>>>
>>>> 2) the virtual topology can be specified as a symmetric graph - for
>>>> every ‘forward’ edge (e.g. from A to B), the ‘backward’ edge (i.e. from B
>>>> to A) can be included as well.
>>>>
>>>> 3) there is already language in the MPI Standard regarding how MPI
>>>> handles symmetric and non-symmetric graph topologies for neighbourhood
>>>> collective operations.
>>>>
>>>> Thus, there is no need to create two distributed graph topology
>>>> communicators to achieve ‘forward and backward communication along the
>>>> edges’.
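>>>>
>>>> For example, a minimal sketch of point 2, using a toy ring pattern
>>>> (forward data flows from rank to rank+1, and at least 3 processes are
>>>> assumed so the two neighbours are distinct): each process lists both of
>>>> its neighbours as sources and as destinations, and a given call simply
>>>> passes zero counts on the edges it does not use.
>>>>
>>>>   #include <mpi.h>
>>>>
>>>>   int main(int argc, char **argv)
>>>>   {
>>>>     MPI_Init(&argc, &argv);
>>>>     int rank, size;
>>>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>>>>
>>>>     /* Symmetric graph: both ring neighbours appear as sources AND
>>>>        destinations, although 'forward' data only flows to rank+1. */
>>>>     int nbrs[2] = { (rank - 1 + size) % size, (rank + 1) % size };
>>>>     MPI_Comm sym;
>>>>     MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
>>>>                                    2, nbrs, MPI_UNWEIGHTED,
>>>>                                    2, nbrs, MPI_UNWEIGHTED,
>>>>                                    MPI_INFO_NULL, 0, &sym);
>>>>
>>>>     double sbuf[2] = { (double)rank, (double)rank }, rbuf[2];
>>>>     int displs[2] = { 0, 1 };
>>>>
>>>>     /* Forward: send only to rank+1 (nbrs[1]), receive only from rank-1. */
>>>>     int scnt_fwd[2] = { 0, 1 }, rcnt_fwd[2] = { 1, 0 };
>>>>     MPI_Neighbor_alltoallv(sbuf, scnt_fwd, displs, MPI_DOUBLE,
>>>>                            rbuf, rcnt_fwd, displs, MPI_DOUBLE, sym);
>>>>
>>>>     /* Backward: same communicator, with the counts swapped. */
>>>>     int scnt_bwd[2] = { 1, 0 }, rcnt_bwd[2] = { 0, 1 };
>>>>     MPI_Neighbor_alltoallv(sbuf, scnt_bwd, displs, MPI_DOUBLE,
>>>>                            rbuf, rcnt_bwd, displs, MPI_DOUBLE, sym);
>>>>
>>>>     MPI_Comm_free(&sym);
>>>>     MPI_Finalize();
>>>>     return 0;
>>>>   }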
>>>>
>>>> Cheers,
>>>> Dan.
>>>>
>>>> Dr Daniel Holmes PhD
>>>> Applications Consultant in HPC Research
>>>> d.holmes at epcc.ed.ac.uk
>>>> Phone: +44 (0) 131 651 3465
>>>> Mobile: +44 (0) 7940 524 088
>>>> Address: Room 3415, JCMB, The King’s Buildings, Edinburgh, EH9 3FD
>>>>
>>>> The University of Edinburgh is a charitable body, registered in
>>>> Scotland, with registration number SC005336.
>>>>
>>>> On 15 Jul 2018, at 10:42, Anthony Skjellum <tony at runtimecomputing.com>
>>>> wrote:
>>>>
>>>> Hi, I just saw this.
>>>>
>>>> We definitely need to consider this concern.
>>>>
>>>> I also need to go review the APIs.
>>>>
>>>> Thanks,
>>>> Tony
>>>>
>>>>
>>>> On Sat, Jul 14, 2018 at 12:27 AM, Junchao Zhang <jczhang at mcs.anl.gov>
>>>> wrote:
>>>>
>>>>>  I want to try MPI neighborhood collectives. I have a communication
>>>>> graph and want to do both forward and backward communication along its
>>>>> edges. With the current APIs, it looks like I need to create two
>>>>> comm_dist_graphs. That seems wasteful, since an MPI implementation could
>>>>> apply essentially the same optimizations to both.
>>>>>  Should MPI support this scenario?  Thanks.
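>>>>>
>>>>> Concretely, the workaround with the current API would look something like
>>>>> the sketch below, where src[], dst[], nin and nout are placeholders for
>>>>> the calling rank's actual adjacency lists and degrees:
>>>>>
>>>>>   /* src, dst, nin, nout: placeholder adjacency data, defined elsewhere */
>>>>>   MPI_Comm fwd, bwd;
>>>>>   /* forward graph: receive from src[], send to dst[] */
>>>>>   MPI_Dist_graph_create_adjacent(comm, nin, src, MPI_UNWEIGHTED,
>>>>>                                  nout, dst, MPI_UNWEIGHTED,
>>>>>                                  MPI_INFO_NULL, 0, &fwd);
>>>>>   /* backward graph: the same call with the adjacency lists swapped */
>>>>>   MPI_Dist_graph_create_adjacent(comm, nout, dst, MPI_UNWEIGHTED,
>>>>>                                  nin, src, MPI_UNWEIGHTED,
>>>>>                                  MPI_INFO_NULL, 0, &bwd);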
>>>>>
>>>>> --Junchao Zhang
>>>>>
>>>>
>>>>
>>>> --
>>>> Tony Skjellum, PhD
>>>> RunTime Computing Solutions, LLC
>>>> tony at runtimecomputing.com
>>>> direct: +1-205-918-7514
>>>> cell: +1-205-807-4968
>>>>
>>>
>>
>>
>> --
>> Jeff Hammond
>> jeff.science at gmail.com
>> http://jeffhammond.github.io/
>>