[mpiwg-hybridpm] Text for new MPI_ALIASED result from MPI_COMM_COMPARE
Daniel Holmes
dholmes at epcc.ed.ac.uk
Tue Apr 22 06:43:28 CDT 2014
Hi all,
I have agreed to co-ordinate the word-smithing for MPI_ALIASED.
Note that we do not need to adjust the return values for
MPI_GROUP_COMPARE due to end-points because all handles to any
particular communicator (whether containing multiple local end-points or
not) will reference exactly the same group, i.e. MPI_IDENT covers this case.
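For concreteness, a minimal sketch of that point, assuming the proposed
MPI_COMM_CREATE_ENDPOINTS call (written in the same pseudo-notation as the
example further below); the group calls are standard MPI:

MPI_Comm comm[2];
MPI_Group g0, g1;
int result;
MPI_COMM_CREATE_ENDPOINTS(parent:=MPI_COMM_WORLD, my_num_ep:=2, &comm[0]);
// both handles reference exactly the same group ...
MPI_COMM_GROUP(comm[0], &g0);
MPI_COMM_GROUP(comm[1], &g1);
MPI_GROUP_COMPARE(g0, g1, &result); // ... so result is MPI_IDENT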
This is the current text for MPI_COMM_COMPARE (MPI-3.0 p237 lines 20-25):
MPI_IDENT results if and only if comm1 and comm2 are handles for the same object (identical groups and same contexts).
MPI_CONGRUENT results if the underlying groups are identical in constituents and rank order; these communicators differ only by context.
MPI_SIMILAR results if the group members of both communicators are the same but the rank order differs.
MPI_UNEQUAL results otherwise.
Here's suggested new text for MPI_COMM_COMPARE:
MPI_IDENT results if comm1 and comm2 are handles for the same rank in
the same communicator (same rank, identical groups, and same contexts).
MPI_ALIASED results if comm1 and comm2 are handles for different ranks
in the same communicator (different ranks, identical groups, and same
contexts).
MPI_CONGRUENT results if the underlying groups are identical in
constituents and rank order; these communicators differ only by context.
MPI_SIMILAR results if the group members of both communicators are the
same but the rank order differs.
MPI_UNEQUAL results otherwise.
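To show how the new value would be consumed, here is a hedged usage sketch
(MPI_ALIASED is the proposed value, not MPI-3.0, and comm[0] and comm[1]
are assumed to be two handles to one endpoints communicator):

int result;
MPI_COMM_COMPARE(comm[0], comm[1], &result);
if (result == MPI_IDENT) {
  // comm[0] and comm[1] are the same rank in the same communicator
} else if (result == MPI_ALIASED) {
  // different ranks in the same communicator (identical groups, same context)
}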
I should probably stop there, but having thought about this wording a bit
more (in particular, should "these communicators differ" in the
MPI_CONGRUENT explanation be changed to "these communicator handles
differ"?), I have a few further edge cases for consideration and discussion:
1) create an endpoints comm with 2 local endpoints (called comm1[2]),
duplicate it (called comm2[2]).
1a) compare handle 0 from the first (comm1[0]) with handle 0 from the
duplicate (comm2[0]), result is MPI_CONGRUENT.
1b) compare handle 0 from the first (comm1[0]) with handle 1 from the
duplicate (comm2[1]), result is MPI_CONGRUENT (but with different ranks).
MPI_Comm comm1[2], comm2[2];
int resultA, resultB;
MPI_COMM_CREATE_ENDPOINTS(parent:=MPI_COMM_WORLD, my_num_ep:=2, &comm1[0]);
#pragma omp parallel
{
  MPI_COMM_DUP(comm1[omp_get_thread_num()], &comm2[omp_get_thread_num()]);
}
MPI_COMM_COMPARE(comm1[0], comm2[0], &resultA); // case 1a: resultA is MPI_CONGRUENT
MPI_COMM_COMPARE(comm1[0], comm2[1], &resultB); // case 1b: resultB is MPI_CONGRUENT,
                                                // but could be MPI_CONGRUENT_ALIAS (or similar)
The handles in case 1a refer to the same endpoint via two different (but
congruent) communicators, whereas in case 1b they refer to two different
endpoints. Is this distinction important?
If so, we could change:
MPI_CONGRUENT results if the underlying groups are identical in
constituents and rank order; these communicators differ only by context.
to:
MPI_CONGRUENT results if the underlying groups are identical in
constituents and rank order; these communicator handles differ only by
context.
and add:
MPI_CONGRUENT_ALIAS results if the underlying groups are identical in
constituents and rank order; these communicator handles differ by
context and by rank.
2) create an endpoints comm with 2 local endpoints (called comm1[2]),
use MPI_COMM_SPLIT to permute the local ranks (called comm2[2]); see the
code sketch after case 2b below.
2a) compare handle 0 from the first (comm1[0]) with handle 0 from the
other (comm2[0]), result is MPI_SIMILAR (but with different ranks).
2b) compare handle 0 from the first (comm1[0]) with handle 1 from the
other (comm2[1]), result is MPI_SIMILAR.
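A sketch of case 2, analogous to the code for case 1 above (the same
caveats apply; in addition, the key arithmetic that swaps the two local
endpoints assumes they have consecutive ranks in comm1, and storing the
result as comm2[1 - i] so that the array index matches the numbering in
2a/2b is my own convention):

MPI_Comm comm1[2], comm2[2];
int resultA, resultB;
MPI_COMM_CREATE_ENDPOINTS(parent:=MPI_COMM_WORLD, my_num_ep:=2, &comm1[0]);
#pragma omp parallel
{
  int i = omp_get_thread_num();
  int rank;
  MPI_COMM_RANK(comm1[i], &rank);
  // same colour everywhere; the key swaps the two local endpoints, so the
  // group members are unchanged but the local rank order is permuted
  MPI_COMM_SPLIT(comm1[i], 0, (i == 0) ? rank + 1 : rank - 1, &comm2[1 - i]);
}
MPI_COMM_COMPARE(comm1[0], comm2[0], &resultA); // case 2a: MPI_SIMILAR (different ranks)
MPI_COMM_COMPARE(comm1[0], comm2[1], &resultB); // case 2b: MPI_SIMILAR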
The handles in case 2a refer to two different endpoints but in case 2b
they refer to the same endpoint via two different groups. Is this
distinction important?
If so, we could change:
MPI_SIMILAR results if the group members of both communicators are the
same but the rank order differs.
to:
MPI_SIMILAR results if the underlying groups are similar (the members
are the same but the order is different) and the two communicator
handles refer to the same endpoint.
and add:
MPI_SIMILAR_ALIAS results if the underlying groups are similar (the
members are the same but the order is different) and the two
communicator handles refer to different endpoints.
3) rinse and repeat for MPI_UNEQUAL and MPI_UNEQUAL_ALIAS
This comparison function is now attempting to say something about three
properties at the same time:
* are the two communicators using the same context?
* are the two underlying groups MPI_IDENT, MPI_SIMILAR or MPI_UNEQUAL?
* are the two communicator handles referring to the same endpoint?
If the two communicators have the same context then the two groups must
be identical but the communicator handles could still refer to
identical/different endpoints (two possibilities, MPI_IDENT and
MPI_ALIASED).
If the two communicators have different contexts then the two groups
could be identical, similar or unequal and the communicator handles
could refer to identical/different endpoints (six possibilities,
MPI_CONGRUENT, MPI_CONGRUENT_ALIAS, MPI_SIMILAR, MPI_SIMILAR_ALIAS,
MPI_UNEQUAL and MPI_UNEQUAL_ALIAS).
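To make the eight possibilities concrete, here is a hypothetical sketch
(the MPI_..._ALIAS names are only the suggestions above, not agreed text)
of how the combined result could be derived from the three properties:

// same_context, group_result (the MPI_GROUP_COMPARE outcome) and
// same_endpoint are the three properties listed above
int combined_result(int same_context, int group_result, int same_endpoint)
{
  if (same_context)                 // identical groups are implied
    return same_endpoint ? MPI_IDENT : MPI_ALIASED;
  if (group_result == MPI_IDENT)
    return same_endpoint ? MPI_CONGRUENT : MPI_CONGRUENT_ALIAS;
  if (group_result == MPI_SIMILAR)
    return same_endpoint ? MPI_SIMILAR : MPI_SIMILAR_ALIAS;
  return same_endpoint ? MPI_UNEQUAL : MPI_UNEQUAL_ALIAS;
}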
Whether two communicator handles refer to the same endpoint or to
different endpoints is entirely orthogonal to whether the two
communicators are identical, congruent, similar or unequal. We are
proposing that these two be combined into one query function for
convenience.
Cheers,
Dan.
--
Dan Holmes
Applications Consultant in HPC Research
EPCC, The University of Edinburgh
James Clerk Maxwell Building
The Kings Buildings
Mayfield Road
Edinburgh, UK
EH9 3JZ
T: +44(0)131 651 3465
E: dholmes at epcc.ed.ac.uk