[Mpi-forum] MPI_Comm_split_type question
George Bosilca
bosilca at icl.utk.edu
Thu Dec 15 13:41:20 CST 2016
Guillaume,
If I understand you correctly, you are trying to figure out how many unique groups (and therefore communicators) were globally created as a result of an MPI_Comm_split_type operation. Unfortunately, depending on the type passed to this function (there is a single one defined by MPI, but in Open MPI we support several extensions), the resulting communicator can be created without any global knowledge, and thus it is impossible to know how many communicators have been created in total.
To extend on what JeffH proposed, you can count the resulting number of communicators by doing an MPI_Allgather on the initial communicator, where each participant provides its rank in the newly created communicator (the one returned by the MPI_Comm_split* operation), and then counting the number of zeros.
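Something along these lines (an untested sketch, assuming the split is done over MPI_COMM_WORLD and omitting error checking):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int wrank, wsize, newrank, ncomms = 0;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    /* Split by shared-memory domain (assumes no process ends up with
       MPI_COMM_NULL, which is the case for MPI_COMM_TYPE_SHARED). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &newcomm);
    MPI_Comm_rank(newcomm, &newrank);

    /* Everyone contributes its rank in the new communicator. */
    int *ranks = malloc(wsize * sizeof(int));
    MPI_Allgather(&newrank, 1, MPI_INT, ranks, 1, MPI_INT, MPI_COMM_WORLD);

    /* Each new communicator contains exactly one rank 0, so the number
       of zeros is the total number of communicators created. */
    for (int i = 0; i < wsize; i++)
        if (0 == ranks[i]) ncomms++;

    if (0 == wrank)
        printf("split produced %d communicator(s)\n", ncomms);

    free(ranks);
    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}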
George.
> On Dec 15, 2016, at 14:05 , Jeff Hammond <jeff.science at gmail.com> wrote:
>
>
>
> On Thu, Dec 15, 2016 at 10:29 AM, Guillaume Mercier <guillaume.mercier at u-bordeaux.fr> wrote:
>
> Hi Jeff,
>
> On 15/12/2016 18:48, Jeff Hammond wrote:
> The number of output communicators from MPI_Comm_split(_type) is always
> one.
>
> Yes, obviously.
>
> Different ranks may get different outputs, but one cannot transfer
> a communicator object from one rank to another.
>
> That's my issue actually.
>
> If you want to know how many total output communicators there are, you can
> perform an MPI_Allgather on the color arguments and see how many unique
> values there are.
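In code, that color-counting idea for plain MPI_Comm_split might look like the following (an untested sketch; count_unique_colors is a hypothetical helper, and a color of MPI_UNDEFINED, which yields MPI_COMM_NULL, would need special-casing):

#include <mpi.h>
#include <stdlib.h>

/* Returns the number of distinct color values across 'comm', i.e. the
   number of subcommunicators MPI_Comm_split(comm, color, key, ...)
   would create.  O(P^2) scan, fine for illustration. */
int count_unique_colors(MPI_Comm comm, int color)
{
    int size, unique = 0;
    MPI_Comm_size(comm, &size);
    int *colors = malloc(size * sizeof(int));
    MPI_Allgather(&color, 1, MPI_INT, colors, 1, MPI_INT, comm);
    for (int i = 0; i < size; i++) {
        int first = 1;
        for (int j = 0; j < i; j++)
            if (colors[j] == colors[i]) { first = 0; break; }
        if (first) unique++;
    }
    free(colors);
    return unique;
}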
>
> I'm not sure about that: take the case of MPI_Comm_split_type with
> MPI_COMM_TYPE_SHARED. According to the standard: "this type splits
> the communicator into subcommunicators each of which can create a shared memory region".
> So, there is only one color, but several subcommunicators.
> Or am I understanding this the wrong way?
>
>
> Each rank gets its own subcommunicator object that captures the ranks of the shared memory domain, as supported by MPI_Win_allocate_shared.
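For example (a minimal fragment, assuming the usual includes and MPI_Init have already been done):

MPI_Comm shm;
double *base;
MPI_Win win;

MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                    MPI_INFO_NULL, &shm);
/* Every process in 'shm' can load/store the memory attached to 'win'. */
MPI_Win_allocate_shared(1024 * sizeof(double), sizeof(double),
                        MPI_INFO_NULL, shm, &base, &win);
/* ... work on the shared segment ... */
MPI_Win_free(&win);
MPI_Comm_free(&shm);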
>
> I'm still not sure I understand what the issue is. What are you trying to do?
>
> Jeff
>
> Regards
> Guillaume
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/