[Mpi3-hybridpm] [MPI Forum] #209: MPI3 Hybrid Programming: multiple endpoints per collective, option 2
MPI Forum
mpi-22 at lists.mpi-forum.org
Wed Jan 6 08:06:00 CST 2010
#209: MPI3 Hybrid Programming: multiple endpoints per collective, option 2
----------------------------------------------------------+-----------------
Reporter: dougmill@… | Owner: dougmill@…
Type: Enhancements to standard | Status: new
Priority: Not ready / author rework | Milestone: 2010/01/19 Atlanta, USA
Version: MPI 3.0 | Keywords:
Implementation: Unnecessary | Author_bill_gropp: 0
Author_rich_graham: 0 | Author_adam_moody: 0
Author_torsten_hoefler: 0 | Author_dick_treumann: 0
Author_jesper_larsson_traeff: 0 | Author_george_bosilca: 0
Author_david_solt: 0 | Author_bronis_de_supinski: 0
Author_rajeev_thakur: 0 | Author_jeff_squyres: 0
Author_alexander_supalov: 0 | Author_rolf_rabenseifner: 0
----------------------------------------------------------+-----------------
Comment(by dougmill@…):
A possible alternative to specifying a group would be to use some sort of
"data identifier", or to formally create a shared data object for the
data parameters and pass that object to the allreduce in place of the
discrete data parameters (buffer, count, and datatype for source and
destination). The data object could then be used to associate the
participating calls with one another, although it may still be necessary
to add some sort of "participant count" parameter. Something like:
{{{
/* 'master' thread creates a shared data object holding the buffer,
   count, and datatype arguments (proposed MPI_Dataparams_create): */
MPI_Dataparams_create(sendbuf, recvbuf, 1, MPI_DOUBLE, &data_blob);
...
/* each of the 3 participants passes the shared object, the
   participant count, and the reduction op to the collective: */
MPI_Allreduce(data_blob, 3, MPI_SUM, comm);
}}}
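To make the association between calls concrete, here is a minimal sketch
of the three participants as threads in a single process. Everything
here is illustrative: MPI_Dataparams is a hypothetical handle type, the
MPI_Dataparams_create call and the modified MPI_Allreduce signature are
taken from the example above, and the process is assumed to have been
initialized with MPI_THREAD_MULTIPLE and to have 'comm' set up elsewhere:
{{{
#include <mpi.h>
#include <pthread.h>

static MPI_Dataparams data_blob;  /* hypothetical handle type */
static MPI_Comm comm;             /* assumed initialized elsewhere */

static void *participant(void *arg)
{
    /* all 3 threads pass the same shared data object; that shared
       object is what ties their calls into one collective */
    MPI_Allreduce(data_blob, 3, MPI_SUM, comm);
    return NULL;
}

static void run_collective(double *sendbuf, double *recvbuf)
{
    pthread_t tid[3];
    /* 'master' thread creates the shared data object once */
    MPI_Dataparams_create(sendbuf, recvbuf, 1, MPI_DOUBLE, &data_blob);
    for (int i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, participant, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);
}
}}}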
Perhaps the 'op' should also be part of the data_blob? And possibly the
'3' (number of participants)?
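If both were folded in at creation time, the collective call itself would
need only the data object and the communicator. A possible variant,
purely as a sketch (the signature below is an assumption, not proposal
text):
{{{
/* 'master' thread: op and participant count bound at creation */
MPI_Dataparams_create(sendbuf, recvbuf, 1, MPI_DOUBLE, MPI_SUM, 3,
                      &data_blob);
...
/* each participant then needs only the shared object and the comm: */
MPI_Allreduce(data_blob, comm);
}}}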
--
Ticket URL: <https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/209#comment:1>
MPI Forum <https://svn.mpi-forum.org/>