[Mpi-comments] Make MPI_Precv_init return an array of MPI_Request to avoid duplicating the MPI_Wait*/MPI_Test* routines

BRELLE, EMMANUEL emmanuel.brelle at atos.net
Fri Oct 15 05:40:12 CDT 2021


Dear MPI-forum members,

This comment applies to MPI-4.0 standard (https://www.mpi-forum.org/docs/mpi-4.0/mpi40-report.pdf)

Partitioned communication looks very promising for speeding up applications, but I am wondering why the "MPI_Parrived" routine (page 151) was introduced instead of making "MPI_Precv_init" (page 147) return an array of MPI_Request.
To my understanding, one of the needs for partitioned communication comes from runtimes that exchange sets of "tasks": computations do not immediately need all the partitions in order to continue. Partitioned communication enables message aggregation, but a single partition can be enough to unlock a computation. In that case, a process has to search over each partition to find a completed one that has not already been found. This is exactly what MPI_Waitany (page 116) does with an array of MPI_Request. It would therefore have been more interesting to handle explicit MPI_Request objects.
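For illustration, here is a minimal sketch of what the receiver side has to do today with MPI_Parrived to find any partition that has arrived (the partition count, buffer sizes, peer rank and process_partition are placeholders of mine, not from the standard):

    #include <mpi.h>
    #include <stdlib.h>

    #define NPART          8
    #define COUNT_PER_PART 1024

    /* Receiver side, MPI-4.0 style: poll every partition with MPI_Parrived
     * until one that has not been processed yet is found. */
    void recv_and_process(MPI_Comm comm, int source)
    {
        double *buf = malloc(NPART * COUNT_PER_PART * sizeof(double));
        int processed[NPART] = {0};
        int remaining = NPART;
        MPI_Request req;

        MPI_Precv_init(buf, NPART, COUNT_PER_PART, MPI_DOUBLE,
                       source, /* tag = */ 0, comm, MPI_INFO_NULL, &req);
        MPI_Start(&req);

        while (remaining > 0) {
            /* Manual "wait-any": scan all partitions looking for a newly
             * arrived one; this loop spins until something completes. */
            for (int p = 0; p < NPART; p++) {
                if (!processed[p]) {
                    int flag = 0;
                    MPI_Parrived(req, p, &flag);
                    if (flag) {
                        /* process_partition(buf + p * COUNT_PER_PART); */
                        processed[p] = 1;
                        remaining--;
                    }
                }
            }
        }

        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
        free(buf);
    }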

I would appreciate it if MPI_Precv_init returned an array of MPI_Request in addition to the main MPI_Request (p. 147 line 39). MPI_Parrived could then be deprecated in favor of MPI_Test on these sub-requests. By extension, this would also avoid duplicating MPI_Testany (p. 117), MPI_Waitall (p. 118), MPI_Testall (p. 119), MPI_Waitsome (p. 120) and MPI_Testsome (p. 121).

As a first idea, MPI users would pass an array sized to the number of partitions as an extra parameter to MPI_Precv_init. If the array is valid (for example, not a special value "MPI_NO_REQUESTS"), these MPI_Request objects would be created by MPI_Precv_init and freed by a call to MPI_Request_free on the main request. Waiting on all the sub-requests would be equivalent to waiting on the main request. Like requests from MPI_Irecv, it would be invalid to call MPI_Start on these sub-requests.
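A purely illustrative sketch of what this could look like follows. MPIX_Precv_init_subreqs, its sub_requests parameter and the semantics stated in the comments are hypothetical names and rules taken from the proposal above; none of this exists in MPI-4.0:

    #include <mpi.h>
    #include <stdlib.h>

    #define NPART          8
    #define COUNT_PER_PART 1024

    /* HYPOTHETICAL routine (not in MPI-4.0): same as MPI_Precv_init, plus an
     * array of NPART sub-requests, one per partition (or MPI_NO_REQUESTS to
     * opt out of the feature). */
    int MPIX_Precv_init_subreqs(void *buf, int partitions, MPI_Count count,
                                MPI_Datatype datatype, int source, int tag,
                                MPI_Comm comm, MPI_Info info,
                                MPI_Request sub_requests[],
                                MPI_Request *request);

    void recv_and_process(MPI_Comm comm, int source)
    {
        double *buf = malloc(NPART * COUNT_PER_PART * sizeof(double));
        MPI_Request sub[NPART];
        MPI_Request req;

        MPIX_Precv_init_subreqs(buf, NPART, COUNT_PER_PART, MPI_DOUBLE,
                                source, /* tag = */ 0, comm, MPI_INFO_NULL,
                                sub, &req);
        MPI_Start(&req);   /* MPI_Start on the sub-requests would be invalid */

        /* The existing MPI_Waitany replaces the MPI_Parrived polling loop:
         * each call blocks until some not-yet-returned partition arrives. */
        for (int done = 0; done < NPART; done++) {
            int p;
            MPI_Waitany(NPART, sub, &p, MPI_STATUS_IGNORE);
            /* process_partition(buf + p * COUNT_PER_PART); */
        }

        /* Proposed semantics: waiting on all sub-requests is equivalent to
         * waiting on the main request; MPI_Request_free on the main request
         * also frees the sub-requests. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
        free(buf);
    }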

Regards,
Emmanuel Brelle