[mpi3-coll] Telecon to discuss DV-collectives (Alltoalldv)

Torsten Hoefler htor at illinois.edu
Thu Oct 13 19:07:11 CDT 2011


Hi Adam,
> Soon after we decided to request alltoallv to be added to the dv ticket,  
> I realized there is one important difference between this and the  
> dynamic sparse data exchange (DSDE) case.  With alltoallv, the receiver  
> knows which ranks it will receive data from, but it doesn't with DSDE.
>
> I think for alltoalldv, you just need each process to provide two lists:  
> a send list and a receive list.  The current API looks like this:
>
> MPI_Alltoallv(
>  sendbuf, sendcounts[], sdispls[], sendtype,  /* O(P) list */
>  recvbuf, recvcounts[], rdispls[], recvtype,  /* O(P) list */
>  comm
> );
>
> Provide a new O(k) interface like so (have to add a count to each list  
> to give its length, and a list of ranks):
>
> MPI_Alltoalldv(
>  nsends, sendbuf, sendranks[], sendcounts[], sdispls[], sendtype,  /*  
> O(k) list */
>  nrecvs, recvbuf, recvranks[], recvcounts[], rdispls[], recvtype,  /*  
> O(k) list */
>  comm
> );
Yes, this would probably be the simplest solution; however, it requires
the user to specify k and the receiver list, which, as we know, is
sometimes hard to determine.  So I was hoping to solve the more general
problem of DSDE with this interface.  However, I agree that such a
solution departs from MPI semantics, and since it requires dynamic
memory allocation, it may not be a good choice for MPI.  But I was
hoping to discuss those issues at the telecon.
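
For reference, here is a minimal sketch (not part of the proposal, just an
illustration) of how an application can implement DSDE today with a
nonblocking barrier, assuming MPI-3 MPI_Ibarrier; buffer handling and the
tag choice are simplified:

  #include <mpi.h>
  #include <stdlib.h>

  /* Each process knows only whom it sends to; receivers discover their
   * sources by probing, which forces dynamic memory allocation. */
  void dsde_exchange(int nsends, const int sendranks[],
                     char *const sendbufs[], const int sendcounts[],
                     MPI_Comm comm)
  {
      MPI_Request *sreqs = malloc(nsends * sizeof(MPI_Request));
      for (int i = 0; i < nsends; i++)
          MPI_Issend(sendbufs[i], sendcounts[i], MPI_CHAR,
                     sendranks[i], 0, comm, &sreqs[i]);

      MPI_Request barrier = MPI_REQUEST_NULL;
      int done = 0;
      while (!done) {
          int flag;
          MPI_Status st;
          MPI_Iprobe(MPI_ANY_SOURCE, 0, comm, &flag, &st);
          if (flag) {
              int count;
              MPI_Get_count(&st, MPI_CHAR, &count);
              char *buf = malloc(count);   /* receiver must allocate */
              MPI_Recv(buf, count, MPI_CHAR, st.MPI_SOURCE, 0, comm,
                       MPI_STATUS_IGNORE);
              /* ... hand buf to the application ... */
          }
          if (barrier == MPI_REQUEST_NULL) {
              int sent;
              MPI_Testall(nsends, sreqs, &sent, MPI_STATUSES_IGNORE);
              if (sent)                    /* all local sends matched */
                  MPI_Ibarrier(comm, &barrier);
          } else {
              MPI_Test(&barrier, &done, MPI_STATUS_IGNORE);
          }
      }
      free(sreqs);
  }

The point is that the receive side cannot preallocate anything, which is
exactly the part that is hard to fit into the usual MPI argument-list
style.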

Putting one of those interfaces into the draft document is simple (once
we agree on how much we want to provide :-).
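
For illustration, a call to the proposed O(k) interface could look like the
following for a process with two send and two receive neighbors (the
function name and argument order are taken from the quoted proposal and do
not exist in MPI yet):

  #include <mpi.h>

  void example(MPI_Comm comm)
  {
      /* This process exchanges data with only k = 2 peers in each
       * direction, out of P ranks in comm. */
      int    sendranks[2]  = { 3, 17 };
      int    sendcounts[2] = { 4, 8 };
      int    sdispls[2]    = { 0, 4 };
      double sendbuf[12];              /* filled by the application */

      int    recvranks[2]  = { 5, 9 };
      int    recvcounts[2] = { 2, 6 };
      int    rdispls[2]    = { 0, 2 };
      double recvbuf[8];

      /* Proposed interface; an O(k) argument sketch, not real MPI. */
      MPI_Alltoalldv(2, sendbuf, sendranks, sendcounts, sdispls, MPI_DOUBLE,
                     2, recvbuf, recvranks, recvcounts, rdispls, MPI_DOUBLE,
                     comm);
  }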

Thanks & All the Best,
  Torsten

-- 
 bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
Torsten Hoefler         | Performance Modeling and Simulation Lead
Blue Waters Directorate | University of Illinois (UIUC)
1205 W Clark Street     | Urbana, IL, 61801
NCSA Building           | +01 (217) 244-7736


