[mpiwg-persistence] Completion call before all partitions marked ready

Ryan Grant ryan.grant at queensu.ca
Wed Oct 12 10:09:35 CDT 2022


This is correct: you’re allowed to call test/wait whenever you want, as long as you abide by the rule that all pready calls must be made before the operation can complete. So don’t MPI_Wait on an operation from a process/thread that hasn’t yet made all of the pready calls it is responsible for (similar to your responsibility to make sure Send/Recv don’t deadlock, but obviously a different completion semantic).
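
[For illustration, a minimal single-threaded sketch of the pattern described above; the function name, buffer, partition count, destination, and tag are illustrative and not taken from the thread:]

```C
#include <mpi.h>

void psend_with_polling(double *buf, int dest, int tag)
{
    const int NPART = 8, COUNT = 1024;   /* illustrative sizes */
    MPI_Request req;
    int flag = 0;

    MPI_Psend_init(buf, NPART, COUNT, MPI_DOUBLE, dest, tag,
                   MPI_COMM_WORLD, MPI_INFO_NULL, &req);
    MPI_Start(&req);

    for (int p = 0; p < NPART; ++p) {
        /* ... fill partition p of buf ... */
        MPI_Pready(p, req);
        /* Testing here is legal even though later partitions are not yet
         * ready; flag simply stays 0 until all Pready calls have been made. */
        MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
    }

    /* Safe to block now: every partition has been marked ready. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Request_free(&req);
}
```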

-Ryan

From: James Dinan <james.dinan at gmail.com>
Date: Wednesday, October 12, 2022 at 6:43 AM
To: Joachim Jenke <protze at itc.rwth-aachen.de>, Ryan Grant <ryan.grant at queensu.ca>
Cc: "mpiwg-persistence at lists.mpi-forum.org" <mpiwg-persistence at lists.mpi-forum.org>
Subject: Re: [mpiwg-persistence] Completion call before all partitions marked ready

This came up recently, and I believe it is allowed to wait/test a send request before all partitions are marked as ready. This is not explicitly stated; rather, there is no restriction disallowing the usage. It may be good to clarify this semantic (e.g., in Section 4.2.2) for MPI 4.1, since it is a topic the WG debated at length.

@Ryan Grant <ryan.grant at queensu.ca> Please fact check.

 ~Jim.

On Mon, Oct 10, 2022 at 7:22 AM Joachim Jenke via mpiwg-persistence <mpiwg-persistence at lists.mpi-forum.org> wrote:
Hello wg-persistence,

Looking at the MPI 4.0 document, it is not clear to us whether it is
allowed to make a completion call on a partitioned communication
request before all partitions are marked ready. A simple single-threaded
example would be:

```C
MPI_Psend_init(message, partitions, COUNT, MPI_DOUBLE, dest, tag,
                MPI_COMM_WORLD, MPI_INFO_NULL, &request);
MPI_Start(&request);
/* mark all partitions except the last one ready */
for (int i = 0; i < partitions - 1; ++i)
{
     MPI_Pready(i, request);
}
/* legal to test here, but the request cannot complete yet */
MPI_Test(&request, &flag, MPI_STATUS_IGNORE); // flag will always be 0
MPI_Pready(partitions - 1, request);
MPI_Wait(&request, MPI_STATUS_IGNORE);
MPI_Request_free(&request);
```

The question becomes more relevant in a multi-threaded context: one
thread could finish its work early and call MPI_Wait to detect when all
partitions have been sent.
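
[For illustration, a minimal sketch of that multi-threaded scenario, assuming MPI was initialized with MPI_THREAD_MULTIPLE and using OpenMP; the thread count, partition layout, and names are illustrative:]

```C
#include <mpi.h>
#include <omp.h>

void psend_threads(double *buf, int dest, int tag)
{
    const int NPART = 8, COUNT = 1024;   /* illustrative sizes */
    MPI_Request req;

    MPI_Psend_init(buf, NPART, COUNT, MPI_DOUBLE, dest, tag,
                   MPI_COMM_WORLD, MPI_INFO_NULL, &req);
    MPI_Start(&req);

    #pragma omp parallel num_threads(4)
    {
        int tid = omp_get_thread_num();
        int nthreads = omp_get_num_threads();

        /* Each thread marks the partitions it produced as ready. */
        for (int p = tid; p < NPART; p += nthreads) {
            /* ... fill partition p of buf ... */
            MPI_Pready(p, req);
        }

        /* A thread that finishes early may block here once *its* partitions
         * are ready; the remaining MPI_Pready calls come from the other
         * threads, so this Wait can still complete. */
        if (tid == 0)
            MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    MPI_Request_free(&req);
}
```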

 From my understanding, the only requirement is that all partitions must
be marked ready with explicit pready calls before the operation can
complete. Replacing the test in the above example with a wait call would
result in a deadlock.
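
[For illustration, the deadlocking variant as a fragment, using the same hypothetical variables as the example above:]

```C
/* mark all partitions except the last one ready */
for (int i = 0; i < partitions - 1; ++i)
    MPI_Pready(i, request);
MPI_Wait(&request, MPI_STATUS_IGNORE);   /* deadlock: the last partition is never marked ready */
MPI_Pready(partitions - 1, request);     /* never reached */
```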

Best
Joachim

--
Dr. rer. nat. Joachim Jenke

IT Center
Group: High Performance Computing
Division: Computational Science and Engineering
RWTH Aachen University
Seffenter Weg 23
D 52074  Aachen (Germany)
Tel: +49 241 80-24765
Fax: +49 241 80-624765
protze at itc.rwth-aachen.de
www.itc.rwth-aachen.de
_______________________________________________
mpiwg-persistence mailing list
mpiwg-persistence at lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-persistence

