[Mpi3-hybridpm] [MPI Forum] #210: MPI3 Hybrid Programming: multiple endpoints per collective, option 3
MPI Forum
mpi-22 at lists.mpi-forum.org
Thu Jan 7 07:35:47 CST 2010
#210: MPI3 Hybrid Programming: multiple endpoints per collective, option 3
-------------------------------------+------------------------------------
 Reporter: dougmill@…                | Owner: dougmill@…
 Type: Enhancements to standard      | Status: new
 Priority: Not ready / author rework | Milestone: 2010/01/19 Atlanta, USA
 Version: MPI 3.0                    | Keywords:
 Implementation: Unnecessary         | Author_bill_gropp: 0
 Author_rich_graham: 0               | Author_adam_moody: 0
 Author_torsten_hoefler: 0           | Author_dick_treumann: 0
 Author_jesper_larsson_traeff: 0     | Author_george_bosilca: 0
 Author_david_solt: 0                | Author_bronis_de_supinski: 0
 Author_rajeev_thakur: 0             | Author_jeff_squyres: 0
 Author_alexander_supalov: 0         | Author_rolf_rabenseifner: 0
-------------------------------------+------------------------------------
Comment(by dougmill@…):
In order to make this work, I assume that the endpoint must be set to
MPI_THREAD_MULTIPLE, although internally that may not be required. Since
the communication calls are not independent communications but are all
part of the same communication, perhaps MPI_THREAD_MULTIPLE is not
required. But there is still ambiguity when multiple, independent
communications are issued by threads attached to the same endpoint -
unless we declare that to be invalid usage.
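
For illustration, a minimal sketch of the usage I mean, assuming a
hypothetical MPIX_Comm_create_endpoints call (in the spirit of the
endpoints proposal, not any ratified API) plus POSIX threads:

    #include <mpi.h>
    #include <pthread.h>

    #define NTHREADS 4

    static MPI_Comm ep_comm;   /* one endpoint shared by all threads */

    static void *worker(void *arg)
    {
        int val = *(int *)arg, sum = 0;
        /* Concurrent calls on the same endpoint: under this option the
         * NTHREADS calls are not independent operations but pieces of
         * ONE collective. */
        MPI_Allreduce(&val, &sum, 1, MPI_INT, MPI_SUM, ep_comm);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        pthread_t th[NTHREADS];
        int ids[NTHREADS], provided, i;

        /* Request MPI_THREAD_MULTIPLE, as assumed above, even if an
         * implementation could get by with less internally. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        /* Hypothetical call: create one endpoint on this process and
         * return a communicator handle for it. */
        MPIX_Comm_create_endpoints(MPI_COMM_WORLD, 1, MPI_INFO_NULL,
                                   &ep_comm);

        for (i = 0; i < NTHREADS; i++) {
            ids[i] = i;
            pthread_create(&th[i], NULL, worker, &ids[i]);
        }
        for (i = 0; i < NTHREADS; i++)
            pthread_join(th[i], NULL);

        MPI_Finalize();
        return 0;
    }

Whether the NTHREADS concurrent MPI_Allreduce calls on ep_comm really
demand MPI_THREAD_MULTIPLE is exactly the open question above.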
So, this would probably require some additional indicator of how the
multiple calls should relate to each other, which requires changes to
the communication-call APIs (e.g. ticket:209).
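
For concreteness, one shape such an indicator could take; the name
MPIX_Allreduce_x and the constants below are invented for this sketch
and are not what ticket:209 actually proposes:

    /* Hypothetical prototype -- not part of any MPI version.  The two
     * extra arguments say how this call relates to other threads'
     * calls on the same endpoint. */
    int MPIX_Allreduce_x(void *sendbuf, void *recvbuf, int count,
                         MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
                         int call_relation, /* MPIX_CALLS_COMBINED or
                                               MPIX_CALLS_INDEPENDENT */
                         int ncalls);       /* number of calls forming
                                               one collective */

    /* Inside worker(): this call is one of NTHREADS that together
     * form a single collective on the shared endpoint... */
    MPIX_Allreduce_x(&val, &sum, 1, MPI_INT, MPI_SUM, ep_comm,
                     MPIX_CALLS_COMBINED, NTHREADS);

    /* ...whereas this one is an independent collective that must not
     * be merged with other threads' calls on the same endpoint. */
    MPIX_Allreduce_x(&val, &sum, 1, MPI_INT, MPI_SUM, ep_comm,
                     MPIX_CALLS_INDEPENDENT, 1);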
--
Ticket URL: <https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/210#comment:2>
MPI Forum <https://svn.mpi-forum.org/>