[Mpi-forum] MPI "Allocate receive" proposal
Jed Brown
jedbrown at mcs.anl.gov
Mon Aug 26 17:00:47 CDT 2013
Dries Kimpe <dkimpe at mcs.anl.gov> writes:
> However, think about the cost of the straightforward implementation
> without MPI_Arecv.
>
> MPI_Send (count)
> MPI_Send (vector)
for i in target_ranks:
  MPI_Isend(tcount[i],...,&sends[m++]);
  MPI_Isend(tdata[i],...,&sends[m++]);
> Other side:
>
> MPI_Recv (count)
> malloc (count)
> MPI_Recv (...)
The control flow is more like:

for i in source_ranks:
  MPI_Irecv(&scount[i],...,&recvs[n++]);
while MPI_Waitsome(n,recvs):
  for s in first_round:   # completed count receives
    MPI_Irecv(sdata[s.rank],...,&recvs[n++]);
  for s in second_round:  # completed data receives
    do the processing you would have done
MPI_Waitall(m,sends);
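Fleshed out a little (a sketch only; the tags, the count/data bookkeeping, and
the process() helper are hypothetical, and error checking is omitted), the
receiver side of that pattern is:

  /* Phase 1: post receives for the counts from every source rank */
  for (i = 0; i < nsources; i++)
    MPI_Irecv(&scount[i], 1, MPI_INT, source[i], TAG_COUNT, comm, &recvs[n++]);

  /* Phase 2: as counts complete, allocate and post the matching data
     receives; as data completes, process it */
  while (pending > 0) {
    MPI_Waitsome(n, recvs, &outcount, indices, statuses);
    for (j = 0; j < outcount; j++) {
      int k = indices[j];
      if (is_count_recv(k)) {   /* hypothetical: was this a count receive? */
        sdata[k] = malloc(scount[k] * sizeof(double));
        MPI_Irecv(sdata[k], scount[k], MPI_DOUBLE, source[k], TAG_DATA,
                  comm, &recvs[n++]);
      } else {
        process(sdata[k]);      /* data arrived: do the real work */
        pending--;
      }
    }
  }
  MPI_Waitall(m, sends, MPI_STATUSES_IGNORE);

The point is that the count and data receives complete asynchronously and
interleaved, so the "extra" count message does not serialize anything.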
> In the overall picture, the cost of the extra recv/send/MProbe call is
> going to be very minimal.
Agreed, my point was that MPI_Arecv may not address many of the cases
where it would initially seem to apply. While it might be nice to
simplify the control flow, the proposal would be much stronger if it
included a demonstration of a performance advantage.
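For reference, the MProbe-based alternative mentioned above (MPI-3 matched
probe) already avoids the separate count message entirely; a sketch, with a
hypothetical tag and communicator:

  MPI_Message msg;
  MPI_Status status;
  int count;

  /* Match an incoming message without receiving it yet */
  MPI_Mprobe(MPI_ANY_SOURCE, TAG_DATA, comm, &msg, &status);
  MPI_Get_count(&status, MPI_DOUBLE, &count);

  /* Allocate exactly enough space, then receive the matched message */
  double *buf = malloc(count * sizeof(double));
  MPI_Mrecv(buf, count, MPI_DOUBLE, &msg, &status);

This is essentially the semantics MPI_Arecv would provide implicitly, which
is why the proposal needs to show a measurable advantage over it.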