[mpi-21] Ballot 4 - RE: clarification text for MPI_Reduce_scatter

Rolf Rabenseifner rabenseifner at [hidden]
Tue Jan 29 04:43:57 CST 2008



This is a follow-up for MPI 2.1, Ballot 4.

I am especially asking
  Rajeev Thakur,
a participant in the 2002 email discussion, to review this proposal.

I propose to close the following track without any proposed
clarification, because the text is already clear once the MPI-1.1
part of MPI_Reduce_scatter is placed in front of the MPI-2.0 part,
i.e., in the upcoming combined document MPI-2.1.

This is a follow-up to:
  MPI_IN_PLACE for MPI_Reduce_scatter 
  in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
  http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/redscat/
___________________________________

The text currently proposed at
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
should not be accepted, because it is a significant modification of
the MPI-2.0 standard and would break existing user code (see the
sketch after the quoted text below):
  
   Proposed text to replace lines 16-20, pg 163 
   The "in place" option for intracommunicators is specified by 
   passing MPI_IN_PLACE in the sendbuf argument. In this case, on 
   each process, the input data is taken from recvbuf. Process i gets 
   the ith segment of the result, and it is stored at the location
   corresponding to segment i in recvbuf. 
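
For illustration, here is a minimal C sketch (not part of the original
mail) of the in-place usage that existing codes rely on, assuming the
established MPI-2.0 reading: with MPI_IN_PLACE, each process supplies
the full input vector in recvbuf, and process i's result segment is
returned at the beginning of recvbuf. The quoted replacement text
would instead place the result at the offset of segment i.

   #include <stdio.h>
   #include <stdlib.h>
   #include <mpi.h>

   int main(int argc, char **argv)
   {
       int rank, size, i;
       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);

       /* One result element per process; the full input vector
        * therefore has 'size' elements on every process. */
       int *recvcounts = malloc(size * sizeof(int));
       for (i = 0; i < size; i++)
           recvcounts[i] = 1;

       /* With MPI_IN_PLACE the whole input vector lives in recvbuf. */
       int *recvbuf = malloc(size * sizeof(int));
       for (i = 0; i < size; i++)
           recvbuf[i] = i;              /* same input on every process */

       MPI_Reduce_scatter(MPI_IN_PLACE, recvbuf, recvcounts,
                          MPI_INT, MPI_SUM, MPI_COMM_WORLD);

       /* Established reading: the result segment of this process is in
        * recvbuf[0..recvcounts[rank]-1]. Here, element rank summed over
        * 'size' processes gives rank * size. */
       printf("rank %d: result segment = %d (expected %d)\n",
              rank, recvbuf[0], rank * size);

       free(recvbuf);
       free(recvcounts);
       MPI_Finalize();
       return 0;
   }

An application like this, which reads its result from recvbuf[0],
would silently receive wrong data if the quoted replacement text
were adopted.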
___________________________________

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)


