[Mpi3-hybridpm] [MPI Forum] #208: MPI3 Hybrid Programming: multiple endpoints per collective, option 1

MPI Forum mpi-22 at lists.mpi-forum.org
Thu Jan 7 07:15:13 CST 2010


#208: MPI3 Hybrid Programming: multiple endpoints per collective, option 1
----------------------------------------------------------+-----------------
                    Reporter:  dougmill@…                 |                       Owner:  dougmill@…             
                        Type:  Enhancements to standard   |                      Status:  new                    
                    Priority:  Not ready / author rework  |                   Milestone:  2010/01/19 Atlanta, USA
                     Version:  MPI 3.0                    |                    Keywords:                         
              Implementation:  Unnecessary                |           Author_bill_gropp:  0                      
          Author_rich_graham:  0                          |           Author_adam_moody:  0                      
      Author_torsten_hoefler:  0                          |        Author_dick_treumann:  0                      
Author_jesper_larsson_traeff:  0                          |       Author_george_bosilca:  0                      
           Author_david_solt:  0                          |   Author_bronis_de_supinski:  0                      
        Author_rajeev_thakur:  0                          |         Author_jeff_squyres:  0                      
    Author_alexander_supalov:  0                          |    Author_rolf_rabenseifner:  0                      
----------------------------------------------------------+-----------------

Comment(by dougmill@…):

 An attach without join would be an isolated endpoint/agent, and any
 communication would be restricted to using only that endpoint, with no
 (horizontal) parallelism.
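
 As a rough sketch of that attach-only case (reusing the proposed
 MPI_Endpoint_* calls from the example further below; "endpoints", "buf",
 "count", "peer", and "comm" are assumed names, and all signatures here
 are tentative):

 {{{
 #pragma omp parallel
 {
     int t = omp_get_thread_num();
     MPI_Endpoint_attach(endpoints[t]);  /* bind thread to one endpoint */
     /* no join: this thread communicates through endpoints[t] alone,
        e.g. ordinary point-to-point; no horizontal parallelism */
     MPI_Send(buf, count, MPI_INT, peer, 0, comm);
     MPI_Endpoint_detach();
 }
 }}}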

 Here's an example of how join/leave might work. In the following, "max"
 is the number of endpoints joining the group and "group_id" is some
 number that is unique to each instance of this block of code. If "max"
 were actually the total number of endpoints created, then it would
 suffice to use a constant such as "0" for "group_id".

 One of the big questions is how much objection there is to the barriers
 (explicit and implicit) here. These are intra-process barriers and thus
 should be lightweight, but they still require synchronization among the
 threads and represent a wait for all threads to reach that point.
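
 To make "lightweight" concrete, here is a minimal counter-based, sense-
 reversing barrier sketched with C11 atomics (illustrative only; this
 ticket does not prescribe how such a barrier would be implemented):

 {{{
 #include <stdatomic.h>

 typedef struct {
     atomic_int count;     /* initialized to nthreads */
     atomic_int sense;     /* initialized to 0; flips per phase */
     int        nthreads;
 } intra_barrier;

 void intra_barrier_wait(intra_barrier *b)
 {
     int my_sense = atomic_load(&b->sense);
     if (atomic_fetch_sub(&b->count, 1) == 1) {
         /* last thread to arrive: reset and release the others */
         atomic_store(&b->count, b->nthreads);
         atomic_store(&b->sense, !my_sense);
     } else {
         /* spin on shared memory only; no network traffic */
         while (atomic_load(&b->sense) == my_sense)
             ;
     }
 }
 }}}

 Everything stays in the process's shared memory, so the cost is a few
 cache-line transfers per phase. Returning to the join/leave example: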

 {{{
 #pragma omp parallel num_threads(max)
 {
     int x = omp_get_thread_num();
     MPI_Endpoint_attach(endpoints[x]);  /* bind this thread to endpoint x */
     ...
     /* computation... */
     ...
     /* form a group of "max" endpoints for parallel communication */
     MPI_Endpoint_join(group_id, max);
     #pragma omp barrier                 /* explicit intra-process barrier */
     ...
     /* communication... */
     ...
     MPI_Endpoint_leave();               /* dissolve the group */
     ...
     /* more phases of computation, communication, etc. */
     ...
     MPI_Endpoint_detach();              /* unbind the thread */
 }
 }}}
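
 For contrast, here is a hypothetical variant in which only subsets of
 the endpoints join concurrently, so the count passed to the join is
 smaller than the total; it shows why "group_id" must be unique to each
 join site ("GROUP_A" and "GROUP_B" are assumed, distinct constants):

 {{{
 int half = max / 2;
 if (x < half) {
     /* endpoints 0 .. half-1 form one group */
     MPI_Endpoint_join(GROUP_A, half);
 } else {
     /* endpoints half .. max-1 form a second, concurrent group */
     MPI_Endpoint_join(GROUP_B, max - half);
 }
 /* ... communication within each group ... */
 MPI_Endpoint_leave();
 }}}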

-- 
Ticket URL: <https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/208#comment:4>
MPI Forum <https://svn.mpi-forum.org/>