[Mpi-forum] MPI Forum Virtual Meeting tomorrow (Wednesday 10am Central US time)

Rolf Rabenseifner rabenseifner at hlrs.de
Tue Oct 9 09:30:49 CDT 2018


Martin wrote: 
> ... 
> Just as a reminder, tomorrow we will have our first (of five) virtual meeting in this quarter. The webex information can be reached via 
> https://www.mpi-forum.org/meetings/ 
> The topic will be “Terms and Conventions” and Puri will lead the discussion. 
> ... 



Dear all (whole MPI Forum), 




we prepared new wording for the definitions of 
operation, non-/blocking, non-/local, persistent, collective, and more. 


This was necessary because the old definitions in MPI-3.1 on page 11 no longer fit the new interfaces of nonblocking and persistent collectives, and in part also do not fit older parts of MPI. 





During this Wednesday's MPI Forum virtual meeting, we plan to discuss the new wording below with the whole MPI Forum. 



Additionally, I'm currently preparing an appendix for the MPI standard summarizing the semantics of all communicating MPI routines. 
This work has not yet been discussed within the group, but it may already be helpful when you check the new wording against the many different existing APIs in MPI, with their sometimes small differences in semantics. 
I attached a very early draft. 

The major topic of the meeting is the new wording below. 
For the moment, the attachment is only a further reference (in first-draft quality). 

Best regards 
Rolf 

PS: Current active Terms&Conventions chapter committee members are: Purushotham V. Bangalore, Daniel Holmes, Anthony Skjellum, Guillaume Mercier, Julien Jaeger, Claudia Blaas-Schenner, Rolf Rabenseifner, Bill Gropp, Wesley Bland. 
​​... 





(In 2.3 Procedure Specification, we wanted to add: function == procedure == procedure call == call) 




Replacing MPI-3.1 Section 2.4, page 11, lines 24-48 with: 





2.4 Semantic Terms 




When discussing MPI procedures, the following semantic terms are used. 




An operation is a set of one or more procedures leading from a well-defined input state to a well-defined output state. An operation consists of four stages: initialization, starting, completion, and freeing: 




Initialization hands over the argument list to the operation but not the content of the message data buffers. For an operation it may be specified that array arguments must not be changed until the operation is freed. 





Starting hands over the content of the message data buffer to the associated operation. 





Note that initiation refers to the combination of the initialization and starting stages. 





Completion returns control of the content of the message data buffer and indicates that any output buffers have been updated. 





Freeing returns control of the rest of the argument list. 








Procedures can be blocking or nonblocking: 





A procedure is blocking if return from the procedure indicates the user is allowed to reuse resources specified in the call. 



A procedure is nonblocking if it returns before the user is allowed to reuse resources (such as buffers) specified in the call. 


Operations can be blocking, nonblocking, or persistent: 



For a blocking operation, all four stages are combined in a single blocking procedure call. 





For a nonblocking operation, the initialization and starting stages are combined into a single nonblocking procedure call and the completion and freeing stages are done with a separate single procedure call, which can be blocking or nonblocking. 


For a persistent operation, all four stages are done with separate procedure calls, each of which can be blocking or nonblocking. 


In addition to the concept of blocking and nonblocking there is the orthogonal concept of locality: 



A procedure is local if it returns control to the calling MPI process based only on the state of the local MPI process that invoked it. 




A procedure is non-local if its return may require the execution of some MPI procedure on another MPI process. Such a procedure may require communication occurring with another MPI process. 




Advice to users. Note that for communication-related procedures, in most cases nonblocking procedures are local and blocking procedures are non-local. Exceptions are noted where such procedures are defined. 

In many cases, the additional letter "I" in the procedure name marks nonblocking procedures (as an abbreviation of the word "incomplete") and/or local procedures (as an abbreviation of the word "immediately"). (End of advice to users.) 









Additionally, as a third orthogonal aspect, a procedure can be either collective or not. 


A procedure is collective if all processes in a process group need to invoke the procedure. 

Collective operations are also available as blocking, nonblocking and persistent operations as defined above. 

Collective initialization calls over the same process group must be executed in the same order by all members of the process group. 
Blocking collective procedures and persistent collective initialization procedures may or may not be synchronizing, that is, they may or may not return before all processes in the group have called the procedure. 
Nonblocking collective initiation procedures and the start procedure of persistent collective operations are local and shall not be synchronizing. 
In case of nonblocking or persistent collective operations, the completion stage may or may not finish before all processes in the group have started the operation. 

Advice to users. 
Calling any synchronizing procedure when there is no possibility of concurrent calls at all other processes in the associated group is erroneous because it can cause deadlock. 
Waiting for completion of any operation when there is no possibility that all other processes in the associated group will be able to start the operation is erroneous because it can cause deadlock. 
(End of advice to users.) 








For datatypes, the following terms are defined: 

..... ​ 



MPI-3.1 Section 3.4, MPI_BSEND, page 37, after lines 36-43 


A buffered mode send operation can be started whether or not a matching receive has been posted. It may complete before a matching receive is posted. However, unlike the standard send, this operation is local, and its completion does not depend on the occurrence of a matching receive. Thus, if a send is executed and no matching receive is posted, then MPI must buffer the outgoing message, so as to allow the send call to complete. An error will occur if there is insufficient buffer space. The amount of available buffer space is controlled by the user — see Section 3.6. Buffer allocation by the user may be required for the buffered mode to be effective. 


the following sentence and advice are added 


According to the definitions in Section 2.4, MPI_BSEND is a blocking procedure because the user can re-use all resources given as arguments, including the message data buffer. It is also a local procedure because it returns immediately without depending on the execution of any MPI procedure on any other MPI process. 



Advice to users. This is one of the exceptions in which a blocking procedure is local. (End of advice to users.) 







MPI-3.1 Section 3.7.3, MPI_REQUEST_FREE, page 55, lines 16-18 read 


Mark the request object for deallocation and set request to MPI_REQUEST_NULL. An ongoing communication that is associated with the request will be allowed to complete. The request will be deallocated only after its completion. 

but should read 


Mark the request object for deallocation and set request to MPI_REQUEST_NULL. [An] Ongoing communication that is associated with the request will be allowed to [complete] continue until it is finished. The request will be deallocated only after its [completion] associated communication has finished. 








MPI-3.1 Section 3.8.1, MPI_IPROBE, page 65, after lines 20-22 


If MPI_IPROBE returns flag = true, then the content of the status object can be subsequently accessed as described in Section 3.2.5 to find the source, tag and length of the probed message. 


the following paragraph and advice are added 


MPI_IPROBE is a local procedure since it does not depend on MPI calls in other MPI processes. According to the definitions in Section 2.4, it is a blocking procedure with respect to the status output argument as a resource, although it returns immediately. 





Advice to users. This is one of the exceptions in which a blocking procedure is local. (End of advice to users.) 







MPI-3.1 Section 3.8.2, MPI_IMPROBE, page 68, after lines 30-31 


In addition, it returns in message a handle to the matched message. Otherwise, the call returns flag = false, and leaves status and message undefined. 


the following paragraph is added 


MPI_IMPROBE is a local procedure. According to the definitions in Section 2.4 and in contrast to MPI_IPROBE, it is a nonblocking procedure because it is the initialization of a matched receive operation. 







mpi32-report-ticket25-barcelona-vote-sep2018.pdf 


MPI-3.NEXT #25 Section 5.13 Persistent Collective Operations, page 216, after lines 3-8 
Initialization calls for MPI persistent collective operations are non-local and follow all the existing rules for collective operations, in particular ordering; programs that do not conform to these restrictions are erroneous. After initialization, all arrays associated with input arguments (such as arrays of counts, displacements, and datatypes in the vector versions of the collectives) must not be modified until the corresponding persistent request is freed with MPI_REQUEST_FREE. 


the following sentence and advice are added 


According to the definitions in Section 2.4, the persistent collective initialization procedures are nonblocking. They are also non-local procedures because they may or may not return before they are called in all MPI processes of the process group. 



Advice to users. This is one of the exceptions in which nonblocking procedures are non-local. (End of advice to users.) 








MPI-3.1 Section 13.4.5 Split Collective Data Access Routines, page 528, after lines 20-24 


- An implementation is free to implement any split collective data access routine using the corresponding blocking collective routine when either the begin call (e.g., MPI_FILE_READ_ALL_BEGIN) or the end call (e.g., MPI_FILE_READ_ALL_END) is issued. The begin and end calls are provided to allow the user and MPI implementation to optimize the collective operation. 


the following sentence and advice are added 


According to the definitions in Section 2.4, the begin procedures are nonblocking. They are also non-local procedures because they may or may not return before they are called in all MPI processes of the process group. 



Advice to users. This is one of the exceptions in which nonblocking procedures are non-local. (End of advice to users.) 


































From: "Main MPI Forum mailing list" <mpi-forum at lists.mpi-forum.org> 
To: "Main MPI Forum mailing list" <mpi-forum at lists.mpi-forum.org> 
Cc: "Martin Schulz" <schulzm at in.tum.de> 
Sent: Tuesday, October 9, 2018 9:22:04 AM 
Subject: [Mpi-forum] MPI Forum Virtual Meeting tomorrow (Wednesday 10am Central US time) 




Hi all, 
Just as a reminder, tomorrow we will have our first (of five) virtual meeting in this quarter. The webex information can be reached via 
https://www.mpi-forum.org/meetings/ 
The topic will be “Terms and Conventions” and Puri will lead the discussion. 

Thanks, 
Martin 

— 
Prof. Dr. Martin Schulz, Chair of Computer Architecture and Parallel Systems 
Department of Informatics, TU-Munich, Boltzmannstraße 3, D-85748 Garching 
Email: schulzm at in.tum.de 

_______________________________________________ 
mpi-forum mailing list 
mpi-forum at lists.mpi-forum.org 
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum 




-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de . 
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 . 
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 . 
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner . 
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) . 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MPI-semantics-appendix.pdf
Type: application/pdf
Size: 40191 bytes
Desc: not available
URL: <http://lists.mpi-forum.org/pipermail/mpi-forum/attachments/20181009/b2392d8b/attachment-0001.pdf>

