[Mpi3-hybridpm] [MPI Forum] #211: MPI3 Hybrid Programming: multiple endpoints per collective, option 4

MPI Forum mpi-22 at lists.mpi-forum.org
Wed Jan 6 08:30:40 CST 2010


#211: MPI3 Hybrid Programming: multiple endpoints per collective, option 4
----------------------------------------------------------+-----------------
                    Reporter:  dougmill@…                 |                       Owner:  dougmill@…             
                        Type:  Enhancements to standard   |                      Status:  new                    
                    Priority:  Not ready / author rework  |                   Milestone:  2010/01/19 Atlanta, USA
                     Version:  MPI 3.0                    |                    Keywords:                         
              Implementation:  Unnecessary                |           Author_bill_gropp:  0                      
          Author_rich_graham:  0                          |           Author_adam_moody:  0                      
      Author_torsten_hoefler:  0                          |        Author_dick_treumann:  0                      
Author_jesper_larsson_traeff:  0                          |       Author_george_bosilca:  0                      
           Author_david_solt:  0                          |   Author_bronis_de_supinski:  0                      
        Author_rajeev_thakur:  0                          |         Author_jeff_squyres:  0                      
    Author_alexander_supalov:  0                          |    Author_rolf_rabenseifner:  0                      
----------------------------------------------------------+-----------------

Comment(by dougmill@…):

 One problem is that endpoints may be needed for both point-to-point and
 collective communications, as well as for helper threads. So it is not
 clear how a thread would both initiate communications and become a
 helper. This seems to restrict the use of helper threads to specific
 situations where the exact communication pattern is known to be of a
 single type, as sketched below.

 Also, the detach seems problematic, especially when multiple threads are
 initiating communications. It would seem that some sort of global "all
 done" state (or a barrier?) is needed so that the "watchdog" thread can
 make the call to detach all helper threads. And if helper threads first
 perform explicit communications, there is a timing concern: a thread may
 become a helper just moments after the watchdog thread (thinks it)
 detached all helpers. A sketch of that race follows.
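
 A minimal sketch of that race in C11, again with hypothetical
 MPIX_Helper_join / MPIX_Helper_detach_all placeholders standing in for
 the proposal's attach/detach interface; the atomic counter plays the
 role of the global "all done" state:

    #include <pthread.h>
    #include <stdatomic.h>

    static atomic_int active_initiators;  /* threads still initiating */

    void *initiator(void *arg)
    {
        (void)arg;
        /* ... perform explicit communications ... */
        atomic_fetch_sub(&active_initiators, 1);  /* report "all done" */

        /* Race: nothing stops this thread from attaching as a helper
         * a moment after the watchdog saw zero and detached everyone:
         *     MPIX_Helper_join(...);    -- hypothetical attach call */
        return NULL;
    }

    void *watchdog(void *arg)
    {
        (void)arg;
        /* Wait for the global "all done" state (a barrier would do). */
        while (atomic_load(&active_initiators) != 0)
            ;  /* spin; a condition variable would be kinder */

        /* Detach all helpers -- hypothetical call:
         *     MPIX_Helper_detach_all();
         * The window between the load above and this call is exactly
         * where a late joiner can slip in. */
        return NULL;
    }

    int main(void)
    {
        enum { N = 4 };
        pthread_t w, t[N];
        atomic_init(&active_initiators, N);
        pthread_create(&w, NULL, watchdog, NULL);
        for (int i = 0; i < N; i++)
            pthread_create(&t[i], NULL, initiator, NULL);
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);
        pthread_join(w, NULL);
        return 0;
    }

 Even with the counter (or a barrier), the watchdog's check and its
 detach call are two separate steps, so the window remains open.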

-- 
Ticket URL: <https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/211#comment:1>
MPI Forum <https://svn.mpi-forum.org/>