[mpi3-coll] Telecon to discuss DV-collectives (Alltoalldv)

Torsten Hoefler htor at illinois.edu
Mon Oct 3 19:23:59 CDT 2011

Hello Coll-WG,

At the last meeting, we decided to push the scalable (dv) collective
proposal further towards a reading. The forum members present supported
the proposal rather clearly in a straw vote.

We also decided to include alltoalldv in the ticket, a call where every
sender specifies the destinations it sends to as a list. We did not
discuss the specification of the receive buffer, though. If we force it
to be of size P blocks (for P processes in the communicator, a block
being count * extent(datatype) bytes), then we're back to non-scalable
behavior. I see the following alternatives:

1) MPI allocates memory for the received blocks and returns a list of
   nodes where it received from and the allocated buffer with the
   received data
2) the user allocates a buffer of size N (<=P) blocks and provides it to
   the MPI library; the library fills the buffer and returns a list of
   source nodes. If a process receives from more than N nodes, the call
   fails (message truncated).
3) the user specifies a callback function for each received block :-)

I prefer 3; however, it has the same issues as active messages and other
callback-based interfaces and will most likely be discussed to death.
Option 2 thus seems most reasonable. Does anybody have another proposal?

We may want to split the ticket into two parts (separating out
alltoalldv).

I think we should have a quick (~30 min) telecon to discuss this matter.
Please indicate your availability in the following doodle before Friday
10/7 if you're interested in participating in the discussion.


The ticket is https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/264 .

Thanks & Best,
  Torsten Hoefler

 bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
Torsten Hoefler         | Performance Modeling and Simulation Lead
Blue Waters Directorate | University of Illinois (UIUC)
1205 W Clark Street     | Urbana, IL, 61801
NCSA Building           | +01 (217) 244-7736
