<div dir="ltr">All, please see:<div><br></div><div><a href="https://github.com/mpi-forum/mpi-issues/issues/153">https://github.com/mpi-forum/mpi-issues/issues/153</a><br></div><div><br></div><div>Thanks,<br>Tony</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Sep 27, 2019 at 6:37 PM Anthony Skjellum via mpi-forum <<a href="mailto:mpi-forum@lists.mpi-forum.org">mpi-forum@lists.mpi-forum.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Rolf let’s open a Ticket <br>
<br>
Anthony Skjellum, PhD<br>
205-807-4968<br>
<br>
<br>
> On Sep 27, 2019, at 6:09 PM, Rolf Rabenseifner via mpi-forum <<a href="mailto:mpi-forum@lists.mpi-forum.org" target="_blank">mpi-forum@lists.mpi-forum.org</a>> wrote:<br>
> 
> Dear MPI collective WG,
> 
> you may want to look into a problem with a possibly incorrect
> MPI specification of MPI_NEIGHBOR_ALLTOALL/ALLGATHER.
> 
> Dear MPI Forum member,
> 
> you may own or use an MPI implementation whose
> MPI_NEIGHBOR_ALLTOALL/ALLGATHER exhibits race conditions
> if the number of processes in a dimension is
> only 1 or 2 and periodic==true.
> 
> The problem was reported as a bug in the Open MPI library
> by Simone Chiochetti from DICAM at the University of Trento,
> but it seems to be a bug in the MPI specification,
> or at least a missing advice to implementors.
> 
> I have produced a set of animated slides.
> Please view them in presentation mode with the animations enabled.
> 
> Have fun with a problem that clearly prevents the use
> of the MPI_NEIGHBOR_... routines with cyclic boundary conditions
> if one wants to verify that mpirun -np 1 produces
> the same result as the sequential code.
> 
> Best regards
> Rolf
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner@hlrs.de .
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
> <neighbor_mpi-3_bug.pptx>
_______________________________________________
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum

-- 
Anthony Skjellum, PhD
skjellum@gmail.com
Cell: +1-205-807-4968