[Mpi-forum] MPI "Allocate receive" proposal

Matthieu Dorier matthieu.dorier at irisa.fr
Fri Aug 30 19:27:32 CDT 2013


What I actually meant is a version of MPI_Iprobe that takes an MPI_Request argument instead of the (flag, status) pair.
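
Concretely, the two signatures would compare as follows (the second is the proposed one, i.e. hypothetical and not part of the MPI standard):

    /* Existing non-blocking probe: completion is reported through a
     * (flag, status) pair, so the probe cannot be placed in a
     * Wait/Test array. */
    int MPI_Iprobe(int source, int tag, MPI_Comm comm,
                   int *flag, MPI_Status *status);

    /* Proposed variant (hypothetical): completion is tracked through
     * a request, which could be passed to MPI_Waitany, MPI_Testsome,
     * and friends alongside ordinary receive requests. */
    int MPI_Iprobe(int source, int tag, MPI_Comm comm, MPI_Request *req);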

Matthieu

----- Original Message -----
> From: "Matthieu Dorier" <matthieu.dorier at irisa.fr>
> To: "Main MPI Forum mailing list" <mpi-forum at lists.mpi-forum.org>
> Sent: Saturday, August 31, 2013 02:23:07
> Subject: Re: [Mpi-forum] MPI "Allocate receive" proposal
> 
> Sorry, I have only read half of the exchanges, but I see many open
> questions here, while another solution could be easier and more
> elegant, in my opinion: just provide MPI_Iprobe.
> 
> The main problem here is that it is not possible to use MPI_Probe in
> a Wait/Test array, which I agree is an important problem (I ran into
> it myself as recently as this summer). Yet from the MPI_Arecv
> proposal, my immediate question is: "What if I don't want MPI to use
> malloc but another function (e.g. cudaMalloc, or any other custom
> allocator, such as those from the C++ STL, or one that allocates in
> shared memory, etc.)?"
> 
> MPI_Iprobe(int source, int tag, MPI_Comm comm, MPI_Request* req)
> seems to resolve everything: we can use the request in a Test/Wait
> array, and as soon as it completes we can get the status using
> MPI_Request_get_status and obtain all the information we would have
> had from a normal MPI_Probe. From there, the user is free to
> allocate her buffer however she wants, and to do either an MPI_Recv
> or an MPI_Irecv with that buffer.
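> 
> As a minimal sketch of that flow, assuming the proposed four-argument
> MPI_Iprobe existed (it is hypothetical, so this would not compile
> against a current MPI library; MPI_Wait returns the status directly
> here, so MPI_Request_get_status is not even needed):
> 
>     #include <mpi.h>
>     #include <stdlib.h>
> 
>     /* Receive a message of unknown size on 'comm'. The 4-argument
>      * MPI_Iprobe below is the proposed, hypothetical call. */
>     void recv_unknown_size(MPI_Comm comm)
>     {
>         MPI_Request req;
>         MPI_Status status;
>         int count;
> 
>         /* Post the probe as a request (hypothetical signature). */
>         MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &req);
> 
>         /* The request could sit in a Test/Wait array with other
>          * requests; a plain MPI_Wait stands in for that here. */
>         MPI_Wait(&req, &status);
> 
>         /* The status holds everything a blocking MPI_Probe would
>          * have returned. */
>         MPI_Get_count(&status, MPI_BYTE, &count);
> 
>         /* The user chooses the allocator: malloc here, but it could
>          * just as well be cudaMalloc, an STL allocator, a shared
>          * memory segment, etc. */
>         void *buf = malloc((size_t)count);
> 
>         MPI_Recv(buf, count, MPI_BYTE, status.MPI_SOURCE,
>                  status.MPI_TAG, comm, MPI_STATUS_IGNORE);
>         free(buf);
>     }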
> 
> To be honest, I also don't like the idea of having a library
> internally allocate memory for me that I must free outside of the
> library afterward (unless that is specifically the purpose of the
> library); it is an easy way to forget to free the buffer, and it
> removes any flexibility in how the memory is allocated.
> 
> Matthieu


