<div dir="ltr">I do not want to combine the forward/backward graphs. Creating two communicators is OK with me. My original thought was: with one graph, if MPI also supports its reverse operation, then MPI implementations can do symmetric optimizations and save some resources. </div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">--Junchao Zhang</div></div></div>
<br><div class="gmail_quote">On Mon, Jul 23, 2018 at 10:35 AM, Jed Brown <span dir="ltr"><<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">For ghost value updates in domain decomposition PDE solvers (usually the<br>
most performance-sensitive use of VecScatter), the graph will usually be<br>
symmetric. It is often nonsymmetric for multilevel restriction and<br>
prolongation and for some field-split methods in multiphysics<br>
applications, in which case I think you'll want to make two neighborhood<br>
communicators rather than the union of the forward and reverse/transpose<br>
operations with some empty messages.<br>
<div class="HOEnZb"><div class="h5"><br>
Junchao Zhang <<a href="mailto:jczhang@mcs.anl.gov">jczhang@mcs.anl.gov</a>> writes:<br>
<br>
> Dan,
> Using your approach, I have to specify zero-size messages in the arguments to MPI_NEIGHBOR_ALLTOALLV, because I only communicate in one direction at a time. Also, my graph may contain both edges A->B and B->A, and they may have different weights. With your approach that becomes quite confusing.
> One uses MPI_NEIGHBOR_ALLTOALLV to get better performance; we have to keep that in mind.
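> To make the zero-message point concrete (a sketch; the neighbor list, the counts, and union_comm are made up): with the union graph, each direction still passes full-length count arrays, with zeros on the edges that are idle in that direction.
>
>   /* Union graph on this rank (sketch): ranks 1, 2 and 3 appear in both
>      the source and destination lists, although the forward operation
>      only receives from rank 1 and only sends to rank 2. */
>   double sendbuf[8] = {0}, recvbuf[8];
>   int sendcounts[3] = {0, 8, 0};   /* forward: real data only to rank 2   */
>   int recvcounts[3] = {8, 0, 0};   /* forward: real data only from rank 1 */
>   int sdispls[3]    = {0, 0, 0};
>   int rdispls[3]    = {0, 0, 0};
>
>   MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
>                          recvbuf, recvcounts, rdispls, MPI_DOUBLE,
>                          union_comm);  /* union_comm: the symmetric graph communicator */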
>
> --Junchao Zhang
>
> On Mon, Jul 23, 2018 at 9:59 AM, HOLMES Daniel <d.holmes@epcc.ed.ac.uk> wrote:
>
>> Hi Junchao Zhang,
>>
>> You are correct - the neighbourhood collectives send messages to (and receive messages from) only neighbouring MPI processes, as defined by the virtual topology. However, it is possible to use point-to-point, single-sided, or normal collective functions to send (and receive) messages between *any* MPI processes in a communicator, irrespective of the presence/structure of a virtual topology.
>>
>> For your use-case (if I understand it correctly), you should specify a symmetric graph - each process specifies to MPI_DIST_GRAPH_CREATE_ADJACENT all the processes it wishes to communicate with (both sending and receiving) during subsequent calls to MPI_NEIGHBOR_ALLTOALLV.
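>> In code, that would look something like the following (a sketch only - the neighbour list is a placeholder that each rank would compute for itself):
>>
>>   /* Every rank this process exchanges data with, in either direction,
>>      goes into both lists, so the resulting graph is symmetric. */
>>   int nbrs[3] = {1, 2, 3};        /* placeholder neighbour ranks */
>>   MPI_Comm graph_comm;
>>
>>   MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
>>       3, nbrs, MPI_UNWEIGHTED,    /* receive from all neighbours */
>>       3, nbrs, MPI_UNWEIGHTED,    /* send to all neighbours      */
>>       MPI_INFO_NULL, 0, &graph_comm);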
>>
>> Cheers,
>> Dan.
>> —
>> Dr Daniel Holmes PhD
>> Applications Consultant in HPC Research
>> d.holmes@epcc.ed.ac.uk
>> Phone: +44 (0) 131 651 3465
>> Mobile: +44 (0) 7940 524 088
>> Address: Room 3415, JCMB, The King’s Buildings, Edinburgh, EH9 3FD
>> —
>> The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
>> —
>>
>> On 23 Jul 2018, at 15:45, Junchao Zhang <jczhang@mcs.anl.gov> wrote:
>>
>> Hello, Dan,
>> I am interested in MPI_Neighbor_alltoallv. From its arguments, I do not see how one can send a message to a neighbor not specified in the graph created by, for example, MPI_Dist_graph_create_adjacent.
>>
>> --Junchao Zhang
>>
>> On Mon, Jul 23, 2018 at 5:51 AM, HOLMES Daniel <d.holmes@epcc.ed.ac.uk> wrote:
>>
>>> Hi Junchao Zhang,
>>>
>>> My understanding of the current API for MPI-3.1 is that:
>>>
>>> 1) the virtual topology does not actually restrict communication via the communicator to the edges specified in the topology - messages can be sent along any edge in either direction, and even between pairs of processes for which no edge was specified.
>>>
>>> 2) the virtual topology can be specified as a symmetric graph - for every ‘forward’ edge (e.g. from A to B), the ‘backward’ edge (i.e. from B to A) can be included as well.
>>>
>>> 3) there is already language in the MPI Standard regarding how MPI handles symmetric and non-symmetric graph topologies for neighbourhood collective operations.
>>>
>>> Thus, there is no need to create two distributed graph topology communicators to achieve ‘forward and backward communication along the edges’.
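>>> As a quick illustration of point 1 (a sketch; graph_comm and the ranks are placeholders), ordinary point-to-point calls work between any two ranks of a graph communicator, whether or not an edge connects them:
>>>
>>>   int me;
>>>   MPI_Comm_rank(graph_comm, &me);
>>>   if (me == 0) {
>>>     double x = 1.0;
>>>     /* Rank 5 need not be a neighbour of rank 0 in the topology. */
>>>     MPI_Send(&x, 1, MPI_DOUBLE, 5, 0, graph_comm);
>>>   } else if (me == 5) {
>>>     double x;
>>>     MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, graph_comm, MPI_STATUS_IGNORE);
>>>   }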
>>>
>>> Cheers,
>>> Dan.
>>> —
>>> Dr Daniel Holmes PhD
>>> Applications Consultant in HPC Research
>>> d.holmes@epcc.ed.ac.uk
>>> Phone: +44 (0) 131 651 3465
>>> Mobile: +44 (0) 7940 524 088
>>> Address: Room 3415, JCMB, The King’s Buildings, Edinburgh, EH9 3FD
>>> —
>>> The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
>>> —
>>>
>>> On 15 Jul 2018, at 10:42, Anthony Skjellum <tony@runtimecomputing.com> wrote:
>>>
>>> Hi, I just saw this.
>>>
>>> We definitely need to consider this concern.
>>>
>>> I also need to go review the APIs.
>>>
>>> Thanks,
>>> Tony
>>>
>>> On Sat, Jul 14, 2018 at 12:27 AM, Junchao Zhang <jczhang@mcs.anl.gov> wrote:
>>>
>>>> I want to try MPI neighborhood collectives. I have a communication graph and want to do both forward and backward communication along its edges. With the current APIs, it looks like I need to create two comm_dist_graphs. That is wasteful, since MPI implementations could apply similar optimizations to both.
>>>> Should MPI support this scenario? Thanks.
>>>>
>>>> --Junchao Zhang
>>>>
>>>
>>> --
>>> Tony Skjellum, PhD
>>> RunTime Computing Solutions, LLC
>>> tony@runtimecomputing.com
>>> direct: +1-205-918-7514
>>> cell: +1-205-807-4968
>
> _______________________________________________
> mpiwg-coll mailing list
> mpiwg-coll@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll