[Mpi3-hybridpm] [MPI Forum] #208: MPI3 Hybrid Programming: multiple endpoints per collective, option 1

MPI Forum mpi-22 at lists.mpi-forum.org
Thu Jan 7 09:54:17 CST 2010

#208: MPI3 Hybrid Programming: multiple endpoints per collective, option 1
                    Reporter:  dougmill@…                 |                       Owner:  dougmill@…             
                        Type:  Enhancements to standard   |                      Status:  new                    
                    Priority:  Not ready / author rework  |                   Milestone:  2010/01/19 Atlanta, USA
                     Version:  MPI 3.0                    |                    Keywords:                         
              Implementation:  Unnecessary                |           Author_bill_gropp:  0                      
          Author_rich_graham:  0                          |           Author_adam_moody:  0                      
      Author_torsten_hoefler:  0                          |        Author_dick_treumann:  0                      
Author_jesper_larsson_traeff:  0                          |       Author_george_bosilca:  0                      
           Author_david_solt:  0                          |   Author_bronis_de_supinski:  0                      
        Author_rajeev_thakur:  0                          |         Author_jeff_squyres:  0                      
    Author_alexander_supalov:  0                          |    Author_rolf_rabenseifner:  0                      

Comment(by dougmill@…):

 An underlying assumption here is that MPI_Wait (et al.) has the same effect
 on progress as MPI_Endpoint_leave. Thus, a program that makes calls to
 MPI_Wait-like functions will also make progress on any work assigned
 to that endpoint from other endpoints - in addition to making progress on
 any work related to explicit communications started on that endpoint. The
 MPI_Endpoint_leave call is used to ensure that all 3rd-party work is
 completed before leaving a communication phase, but that could include
 explicit communications as well - i.e. the MPI_Endpoint_leave could
 replace an MPI_Wait that waits for all requests.
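
 A minimal sketch of that equivalence. Note that MPI_Endpoint_leave and the
 endpoint attach/leave model are proposal-level only, not part of any MPI
 standard; the names and signatures below are illustrative assumptions:

 ```c
 /* Sketch under the stated assumption: MPI_Wait-like calls progress
  * 3rd-party work assigned to this endpoint, just as
  * MPI_Endpoint_leave does. "ep" and "ep_comm" are hypothetical. */
 MPI_Request reqs[2];
 MPI_Isend(sbuf, n, MPI_DOUBLE, peer, 0, ep_comm, &reqs[0]);
 MPI_Irecv(rbuf, n, MPI_DOUBLE, peer, 0, ep_comm, &reqs[1]);

 /* Option A: wait on the explicit requests (also progresses any
  * 3rd-party work), then leave to drain whatever remains. */
 MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
 MPI_Endpoint_leave(ep);

 /* Option B: per the comment above, the leave call alone could
  * replace the MPI_Wait, completing both the explicit requests
  * and all 3rd-party work before returning:
  *
  *   MPI_Endpoint_leave(ep);
  */
 ```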

Ticket URL: <https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/208#comment:6>
MPI Forum <https://svn.mpi-forum.org/>

More information about the mpiwg-hybridpm mailing list