[Mpi3-hybridpm] [MPI Forum] #211: MPI3 Hybrid Programming: multiple endpoints per collective, option 4
MPI Forum
mpi-22 at lists.mpi-forum.org
Thu Jan 7 07:58:55 CST 2010
#211: MPI3 Hybrid Programming: multiple endpoints per collective, option 4
----------------------------------------------------------+-----------------
Reporter: dougmill@… | Owner: dougmill@…
Type: Enhancements to standard | Status: new
Priority: Not ready / author rework | Milestone: 2010/01/19 Atlanta, USA
Version: MPI 3.0 | Keywords:
Implementation: Unnecessary | Author_bill_gropp: 0
Author_rich_graham: 0 | Author_adam_moody: 0
Author_torsten_hoefler: 0 | Author_dick_treumann: 0
Author_jesper_larsson_traeff: 0 | Author_george_bosilca: 0
Author_david_solt: 0 | Author_bronis_de_supinski: 0
Author_rajeev_thakur: 0 | Author_jeff_squyres: 0
Author_alexander_supalov: 0 | Author_rolf_rabenseifner: 0
----------------------------------------------------------+-----------------
Comment(by dougmill@…):
Some possible example code:
{{{
#pragma omp parallel num_threads(max)
{
    ...
    /* computation... */
    ...
    x = omp_get_thread_num();

    #pragma omp master
    {
        MPI_Endpoint_attach(endpoints[x]);
        ...
        /* communications... */
        ...
        MPI_Endpoint_detach_helper();
        MPI_Endpoint_detach();
    }

    #pragma omp !master /* how is this done? */
    {
        MPI_Endpoint_attach_helper(endpoints[x]);
        /* these threads do no communication! */
    }

    ...
    /* more computation, communication, etc. */
    ...
}
}}}
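Regarding the `!master` marker above: OpenMP has no "not master" construct, so that region would presumably have to be written another way, e.g. by testing the thread number directly (the master thread is always thread 0). A minimal sketch:
{{{
x = omp_get_thread_num();
if (x != 0) {
    /* all threads except the master act as helpers */
    MPI_Endpoint_attach_helper(endpoints[x]);
}
}}}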
In this example, the non-master threads (agents) do not perform any
communication themselves. If they did, some coordination with the master
thread would be needed: each such thread would start its own communication
and only then make the attach-helper call, as in the sketch below. It is
unclear whether that ordering could lead to deadlock.
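A minimal sketch of that variant, assuming the proposed MPI_Endpoint_* calls
from the example above and using an ordinary nonblocking send to stand in for
the per-thread communication (max, buf, count, dest, tag, comm, and endpoints
are placeholders):
{{{
#pragma omp parallel num_threads(max)
{
    int x = omp_get_thread_num();

    if (x == 0) {
        /* master thread owns the endpoint and drives its communication */
        MPI_Endpoint_attach(endpoints[x]);
        /* ... communications ... */
        MPI_Endpoint_detach_helper();
        MPI_Endpoint_detach();
    } else {
        MPI_Request req;
        /* start this thread's own communication first... */
        MPI_Isend(buf, count, MPI_INT, dest, tag, comm, &req);
        /* ...then donate the thread as a helper */
        MPI_Endpoint_attach_helper(endpoints[x]);
        /* whether completing req here can deadlock is the open question */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
}
}}}
The helper thread starts its request before attaching, per the ordering
described above; whether and when that request can then be completed (here
MPI_Wait is placed after the attach-helper call) is exactly the open deadlock
question.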
--
Ticket URL: <https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/211#comment:2>
MPI Forum <https://svn.mpi-forum.org/>