[mpi-21] Ballot 4 - Re: MPI-2 thread safety and collectives

Rajeev Thakur thakur at [hidden]
Thu Jan 31 15:06:50 CST 2008



Rolf,
     Karl's original mail in this thread asked a simpler question, namely,
which calls are considered conflicting when they use the same communicator.

His scenario 1 is
Thread 1: MPI_Allreduce(..., comm)
Thread 2: MPI_File_open(..., comm, ...)

His scenario 2 is 
Thread 1: MPI_Allreduce(..., MPI_SUM, comm)
Thread 2: MPI_Allreduce(..., MPI_MAX, comm)
 
I don't think there is any doubt that scenario 2 is conflicting. In my
opinion, even scenario 1 is conflicting, because both are MPI collective
calls issued explicitly on the same communicator (the file handle is not
yet created at that point). He is asking for clarification on scenario 1.
Your proposed advice to users covers collective calls on different objects
(file handles, window objects) derived from the same communicator. (It is
nonetheless useful to have it in addition.)
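
For scenario 1, if a user really wants to issue the two collectives from
different threads, MPI-2 8.7.2 requires that they be issued in the same
order at every process, using interthread synchronization. A minimal C
sketch of one way to do that (the pthread wrappers, the file name, and
the fixed ordering are illustrative only, not from Karl's mail):

#include <mpi.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static int allreduce_done = 0;
static MPI_Comm comm;

/* Thread 1: the collective on comm is always issued first. */
static void *thread_allreduce(void *arg)
{
    int in = 1, out;
    MPI_Allreduce(&in, &out, 1, MPI_INT, MPI_SUM, comm);
    pthread_mutex_lock(&lock);
    allreduce_done = 1;
    pthread_cond_signal(&done);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Thread 2: waits until the Allreduce has completed locally, so the two
   collectives are issued in the same order at every process. */
static void *thread_fileopen(void *arg)
{
    MPI_File fh;
    pthread_mutex_lock(&lock);
    while (!allreduce_done)
        pthread_cond_wait(&done, &lock);
    pthread_mutex_unlock(&lock);
    MPI_File_open(comm, "example.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
    MPI_File_close(&fh);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    pthread_t t1, t2;
    /* Real code should check that provided == MPI_THREAD_MULTIPLE. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    comm = MPI_COMM_WORLD;
    pthread_create(&t1, NULL, thread_allreduce, NULL);
    pthread_create(&t2, NULL, thread_fileopen, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    MPI_Finalize();
    return 0;
}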

Rajeev

> -----Original Message-----
> From: owner-mpi-21_at_[hidden] 
> [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Rolf Rabenseifner
> Sent: Monday, January 21, 2008 11:26 AM
> To: mpi-21_at_[hidden]
> Cc: mpi-21_at_[hidden]
> Subject: [mpi-21] Ballot 4 - Re: MPI-2 thread safety and collectives
> 
> This is a proposal for MPI 2.1, Ballot 4.
> 
> This is a follow up to:
>   Thread safety and collective communication 
>   in 
> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
> with mail discussion in
>   http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/thread-safety/index.htm
> 
> After checking the e-mails and looking at
> - MPI-2 8.7.2 page 195 lines 6-9
>     Collective calls. Matching of collective calls on a communicator,
>     window, or file handle is done according to the order in which the
>     calls are issued at each process. If concurrent threads issue such
>     calls on the same communicator, window or file handle, it is up to
>     the user to make sure the calls are correctly ordered, using
>     interthread synchronization.
> - MPI-2 6.2.1 Window Creation, page 110, lines 10-12:
>     The call returns an opaque object that represents the group of
>     processes that own and access the set of windows, and the attributes
>     of each window, as specified by the initialization call.
> - MPI-2 9.2. Opening a File, page 211, line 46 - page 212, line 2:
>     Note that the communicator comm is unaffected by MPI_FILE_OPEN
>     and continues to be usable in all MPI routines (e.g., MPI_SEND).
>     Furthermore, the use of comm will not interfere with I/O behavior.
> it seems that the standard should be clarified. 
>     
> 
> Proposal for MPI 2.1, Ballot 4:
> -------------------------------
> Add new paragraphs after MPI-2, 8.7.2 page 195 line 9 (the end of the
> clarification on "Collective calls"):
>   
>   Advice to users. 
>   With three concurrent threads in each MPI process of a communicator
>   comm, it is allowed that thread A in each MPI process calls a
>   collective operation on comm, thread B calls a file operation on an
>   existing file handle that was formerly opened on comm, and thread C
>   invokes one-sided operations on an existing window handle that was
>   also formerly created on comm.
>   (End of advice to users.)
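> 
>   A minimal C sketch of this three-thread usage (illustrative only:
>   fh and win are assumed to have been created from comm before the
>   threads start, and error handling is omitted):
> 
>     #include <mpi.h>
>     #include <pthread.h>
> 
>     static MPI_Comm comm;  /* the communicator used by thread A            */
>     static MPI_File fh;    /* opened earlier with MPI_File_open(comm, ...) */
>     static MPI_Win  win;   /* created earlier with MPI_Win_create(..., comm, &win) */
> 
>     static void *thread_A(void *p)   /* collective on comm itself */
>     {
>         int in = 1, out;
>         MPI_Allreduce(&in, &out, 1, MPI_INT, MPI_SUM, comm);
>         return NULL;
>     }
> 
>     static void *thread_B(void *p)   /* I/O on the existing file handle */
>     {
>         int buf[4];
>         MPI_File_read_all(fh, buf, 4, MPI_INT, MPI_STATUS_IGNORE);
>         return NULL;
>     }
> 
>     static void *thread_C(void *p)   /* one-sided on the existing window */
>     {
>         int val;
>         MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
>         MPI_Get(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
>         MPI_Win_unlock(0, win);
>         return NULL;
>     }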
> 
>   Rationale. 
>   As already specified in MPI_FILE_OPEN and MPI_WIN_CREATE, a file
>   handle and a window handle inherit only the group of processes of
>   the underlying communicator, but not the communicator itself.
>   Accesses to communicators, window handles, and file handles cannot
>   affect one another.
>   (End of rationale.) 
> 
>   Advice to implementors. 
>   If the implementation of file or window operations wants to use MPI
>   communication internally, then a duplicated communicator handle may
>   be cached on the file or window handle.
>   (End of advice to implementors.)
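> 
>   As a hypothetical sketch of this technique (the names my_file and
>   my_file_open are illustrative and do not come from any real MPI
>   implementation):
> 
>     #include <mpi.h>
> 
>     struct my_file {
>         MPI_Comm dupcomm;   /* private communicator for internal traffic */
>         /* ... other file state ... */
>     };
> 
>     int my_file_open(MPI_Comm comm, const char *name, struct my_file *f)
>     {
>         /* Duplicate comm once at open time; all internal collectives
>            and messages of the I/O layer then use f->dupcomm, so they
>            cannot interfere with user calls on comm itself.           */
>         MPI_Comm_dup(comm, &f->dupcomm);
>         /* ... open the file named by name ... */
>         return MPI_SUCCESS;
>     }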
> -------------------------------
> 
> Reason: The emails have shown that the current MPI-2 text can easily
>         be misunderstood.
> -------------------------------
> 
> Discussion should be done through the new mailing list
> mpi-21_at_cs.uiuc.edu.
> 
> I have sent out this mail with CC through the old general list
> mpi-21_at_[hidden]
> 
> Best regards
> Rolf
> 
> 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> 
> 


