[mpiwg-coll] backward communication in neighborhood collectives
Junchao Zhang
jczhang at mcs.anl.gov
Mon Jul 23 11:02:09 CDT 2018
I do not want to combine the forward and backward graphs; creating two
communicators is OK with me. My original thought was that if, given one
graph, MPI also supported its reverse (transpose) operation, then MPI
implementations could perform symmetric optimizations and save some resources.
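
For example (a rough sketch only; the helper name and the assumption that each
rank already knows its forward source/destination lists are illustrative, not
an established API):

    #include <mpi.h>

    /* Sketch: build a "forward" topology and its transpose from the same
     * adjacency lists.  src[]/dst[] are this rank's incoming/outgoing
     * neighbours in the forward graph. */
    void build_forward_and_backward(MPI_Comm comm,
                                    int nsrc, const int src[],
                                    int ndst, const int dst[],
                                    MPI_Comm *fwd, MPI_Comm *bwd)
    {
        /* Forward graph: receive from src[], send to dst[]. */
        MPI_Dist_graph_create_adjacent(comm,
                                       nsrc, src, MPI_UNWEIGHTED,
                                       ndst, dst, MPI_UNWEIGHTED,
                                       MPI_INFO_NULL, 0 /* reorder */, fwd);

        /* Backward (transpose) graph: sources and destinations are simply
         * swapped, so no new graph data is needed. */
        MPI_Dist_graph_create_adjacent(comm,
                                       ndst, dst, MPI_UNWEIGHTED,
                                       nsrc, src, MPI_UNWEIGHTED,
                                       MPI_INFO_NULL, 0, bwd);
    }

The two calls differ only in that the source and destination arguments are
swapped, which is exactly the information an implementation could in principle
share between the two communicators.
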
--Junchao Zhang
On Mon, Jul 23, 2018 at 10:35 AM, Jed Brown <jed at jedbrown.org> wrote:
> For ghost value updates in domain decomposition PDE solvers (usually the
> most performance-sensitive use of VecScatter), the graph will usually be
> symmetric. It is often nonsymmetric for multilevel restriction and
> prolongation and for some field-split methods in multiphysics
> applications, in which case I think you'll want to make two neighborhood
> communicators rather than the union of the forward and reverse/transpose
> operations with some empty messages.
>
> Junchao Zhang <jczhang at mcs.anl.gov> writes:
>
> > Dan,
> > Using your approach, I would have to specify zero-length messages in the
> > arguments to MPI_NEIGHBOR_ALLTOALLV, because I only communicate in one
> > direction at a time. Also, my graph may contain both edges A->B and B->A,
> > possibly with different weights, which makes this approach quite
> > confusing. One uses MPI_NEIGHBOR_ALLTOALLV for better performance; we
> > have to keep that in mind.
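
To make the objection concrete, a forward-only step on a symmetrized graph
would look roughly like this (a sketch only; the helper and its argument names
are illustrative, and it assumes the symmetrized graph has equal in- and
out-degree on each rank):

    #include <stdlib.h>
    #include <mpi.h>

    /* Sketch: on a symmetrized graph communicator gcomm, do a forward-only
     * exchange by passing count 0 for every neighbour that is not a real
     * destination (or source) in this direction. */
    static void forward_only_exchange(MPI_Comm gcomm, int nbrs,
                                      const int fwd_sendcounts[], /* 0 on reverse-only edges */
                                      const int fwd_recvcounts[], /* 0 on reverse-only edges */
                                      const double *sendbuf, double *recvbuf)
    {
        int *sdispls = malloc(nbrs * sizeof(int));
        int *rdispls = malloc(nbrs * sizeof(int));

        /* Displacements are prefix sums of the counts; zero-length messages
         * simply do not advance the offsets. */
        for (int i = 0, soff = 0, roff = 0; i < nbrs; i++) {
            sdispls[i] = soff;  soff += fwd_sendcounts[i];
            rdispls[i] = roff;  roff += fwd_recvcounts[i];
        }

        MPI_Neighbor_alltoallv(sendbuf, fwd_sendcounts, sdispls, MPI_DOUBLE,
                               recvbuf, fwd_recvcounts, rdispls, MPI_DOUBLE,
                               gcomm);
        free(sdispls);
        free(rdispls);
    }
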
> >
> > --Junchao Zhang
> >
> > On Mon, Jul 23, 2018 at 9:59 AM, HOLMES Daniel <d.holmes at epcc.ed.ac.uk>
> > wrote:
> >
> >> Hi Junchao Zhang,
> >>
> >> You are correct - the neighbourhood collectives send messages to (and
> >> receive messages from) only neighbouring MPI processes, as defined by the
> >> virtual topology. However, it is possible to use point-to-point,
> >> one-sided, or ordinary collective functions to send (and receive) messages
> >> between *any* MPI processes in a communicator, irrespective of the
> >> presence/structure of a virtual topology.
> >>
> >> For your use-case (if I understand it correctly), you should specify a
> >> symmetric graph - each process specifies to MPI_DIST_GRAPH_CREATE_ADJACENT
> >> all the processes it wishes to communicate with (both sending and
> >> receiving) during subsequent calls to MPI_NEIGHBOR_ALLTOALLV.
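
In code, the suggestion amounts to something like the following (a sketch; the
function name and the assumption that the caller has already formed the union
of its forward sources and destinations are illustrative):

    #include <mpi.h>

    /* Sketch: create one symmetric dist-graph communicator in which each
     * rank lists, as both sources and destinations, every process it either
     * sends to or receives from (nbrs[] = union of the forward source and
     * destination lists, deduplicated by the caller). */
    static MPI_Comm make_symmetric_graph(MPI_Comm comm, int nnbrs, const int nbrs[])
    {
        MPI_Comm gcomm;
        /* Using the same list for in- and out-edges means every edge A->B
         * is matched by B->A, so the topology is symmetric. */
        MPI_Dist_graph_create_adjacent(comm,
                                       nnbrs, nbrs, MPI_UNWEIGHTED,  /* sources      */
                                       nnbrs, nbrs, MPI_UNWEIGHTED,  /* destinations */
                                       MPI_INFO_NULL, 0 /* reorder */, &gcomm);
        return gcomm;
    }
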
> >>
> >> Cheers,
> >> Dan.
> >> —
> >> Dr Daniel Holmes PhD
> >> Applications Consultant in HPC Research
> >> d.holmes at epcc.ed.ac.uk
> >> Phone: +44 (0) 131 651 3465
> >> Mobile: +44 (0) 7940 524 088
> >> Address: Room 3415, JCMB, The King’s Buildings, Edinburgh, EH9 3FD
> >> —
> >> The University of Edinburgh is a charitable body, registered in Scotland,
> >> with registration number SC005336.
> >> —
> >>
> >> On 23 Jul 2018, at 15:45, Junchao Zhang <jczhang at mcs.anl.gov> wrote:
> >>
> >> Hello, Dan,
> >> I am interested in MPI_Neighbor_alltoallv. From its arguments, I do not
> >> see how one can send a message to a neighbor not specified in the graph
> >> created by, for example, MPI_Dist_graph_create_adjacent.
> >>
> >> --Junchao Zhang
> >>
> >> On Mon, Jul 23, 2018 at 5:51 AM, HOLMES Daniel <d.holmes at epcc.ed.ac.uk>
> >> wrote:
> >>
> >>> Hi Junchao Zhang,
> >>>
> >>> My understanding of the current API for MPI-3.1 is that:
> >>>
> >>> 1) the virtual topology does not actually restrict communication via the
> >>> communicator to the edges specified in the topology - messages can be
> >>> sent along any edge in either direction, and even between pairs of
> >>> processes for which no edge was specified.
> >>>
> >>> 2) the virtual topology can be specified as a symmetric graph - for every
> >>> ‘forward’ edge (e.g. from A to B), the ‘backward’ edge (i.e. from B to A)
> >>> can be included as well.
> >>>
> >>> 3) there is already language in the MPI Standard regarding how MPI
> >>> handles symmetric and non-symmetric graph topologies for neighbourhood
> >>> collective operations.
> >>>
> >>> Thus, there is no need to create two distributed graph topology
> >>> communicators to achieve ‘forward and backward communication along the
> >>> edges’.
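
As an illustration of point 1, plain point-to-point traffic on a distributed
graph communicator is not restricted to the graph's edges (a minimal sketch;
the ranks and tag are arbitrary):

    #include <mpi.h>

    /* Sketch: on a dist-graph communicator gcomm, rank 0 sends to the last
     * rank whether or not the topology contains such an edge. */
    static void send_outside_topology(MPI_Comm gcomm)
    {
        int rank, size, payload = 42;
        MPI_Comm_rank(gcomm, &rank);
        MPI_Comm_size(gcomm, &size);

        if (rank == 0 && size > 1)
            MPI_Send(&payload, 1, MPI_INT, size - 1, /* tag */ 0, gcomm);
        else if (rank == size - 1 && size > 1)
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, gcomm, MPI_STATUS_IGNORE);
    }
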
> >>>
> >>> Cheers,
> >>> Dan.
> >>> —
> >>> Dr Daniel Holmes PhD
> >>> Applications Consultant in HPC Research
> >>> d.holmes at epcc.ed.ac.uk
> >>> Phone: +44 (0) 131 651 3465
> >>> Mobile: +44 (0) 7940 524 088
> >>> Address: Room 3415, JCMB, The King’s Buildings, Edinburgh, EH9 3FD
> >>> —
> >>> The University of Edinburgh is a charitable body, registered in Scotland,
> >>> with registration number SC005336.
> >>> —
> >>>
> >>> On 15 Jul 2018, at 10:42, Anthony Skjellum <tony at runtimecomputing.com>
> >>> wrote:
> >>>
> >>> Hi, I just saw this.
> >>>
> >>> We definitely need to consider this concern.
> >>>
> >>> I also need to go review the APIs.
> >>>
> >>> Thanks,
> >>> Tony
> >>>
> >>>
> >>> On Sat, Jul 14, 2018 at 12:27 AM, Junchao Zhang <jczhang at mcs.anl.gov>
> >>> wrote:
> >>>
> >>>> I want to try MPI neighborhood collectives. I have a communication
> >>>> graph and want to do both forward and backward communication along the
> >>>> edges. With the current APIs, it looks like I need to create two
> >>>> comm_dist_graphs. That is wasteful, since MPI implementations could do
> >>>> similar optimizations for the two.
> >>>> Should MPI support this scenario? Thanks.
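
Both neighbour lists of the forward graph are in fact already attached to that
one communicator, which is what makes a second, transposed communicator feel
redundant. A sketch (the function name is illustrative, and it assumes the
communicator was created with explicit weight arrays):

    #include <stdlib.h>
    #include <mpi.h>

    /* Sketch: query the incoming/outgoing neighbour lists (and weights) of a
     * dist-graph communicator; in the transpose graph this rank would simply
     * receive from dst[] and send to src[]. */
    static void show_transpose_information(MPI_Comm gcomm)
    {
        int indeg, outdeg, weighted;
        MPI_Dist_graph_neighbors_count(gcomm, &indeg, &outdeg, &weighted);

        int *src = malloc(indeg * sizeof(int)),  *srcw = malloc(indeg * sizeof(int));
        int *dst = malloc(outdeg * sizeof(int)), *dstw = malloc(outdeg * sizeof(int));
        MPI_Dist_graph_neighbors(gcomm, indeg, src, srcw, outdeg, dst, dstw);

        /* ... use the lists ... */
        free(src); free(srcw); free(dst); free(dstw);
    }
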
> >>>>
> >>>> --Junchao Zhang
> >>>>
> >>>
> >>>
> >>> --
> >>> Tony Skjellum, PhD
> >>> RunTime Computing Solutions, LLC
> >>> tony at runtimecomputing.com
> >>> direct: +1-205-918-7514
> >>> cell: +1-205-807-4968