[Mpi-forum] MPI_Comm_split_type question
Jeff Hammond
jeff.science at gmail.com
Thu Dec 15 19:31:11 CST 2016
MPI_Comm_split calls Allgather internally in many implementations...
Jeff
Sent from my iPhone
> On Dec 15, 2016, at 5:15 PM, Martin Schulz <schulzm at llnl.gov> wrote:
>
> Hi Guillaume,
>
> Just to add to George’s suggestion, following the same idea: instead of the Allgather, which could cause scalability problems, you could do a Reduce or Allreduce in which every process with rank 0 in its new communicator contributes 1 and all the others contribute 0, with sum as the reduction operator.
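>
> A completely untested sketch of what I mean, assuming the split is done on MPI_COMM_WORLD and newcomm is whatever MPI_Comm_split_type returned:
>
>   #include <mpi.h>
>   #include <stdio.h>
>
>   int main(int argc, char **argv)
>   {
>       MPI_Init(&argc, &argv);
>
>       MPI_Comm newcomm;
>       MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
>                           MPI_INFO_NULL, &newcomm);
>
>       /* every "leader" (rank 0 of its subcommunicator) contributes 1,
>          everyone else contributes 0; the sum is the global number of
>          subcommunicators created by the split */
>       int newrank, worldrank, ncomms;
>       MPI_Comm_rank(newcomm, &newrank);
>       int contrib = (newrank == 0) ? 1 : 0;
>       MPI_Allreduce(&contrib, &ncomms, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
>
>       MPI_Comm_rank(MPI_COMM_WORLD, &worldrank);
>       if (worldrank == 0)
>           printf("subcommunicators created: %d\n", ncomms);
>
>       MPI_Comm_free(&newcomm);
>       MPI_Finalize();
>       return 0;
>   }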
>
> Martin
>
>
> ________________________________________________________________________
> Martin Schulz, schulzm at llnl.gov, http://scalability.llnl.gov/
> CASC @ Lawrence Livermore National Laboratory, Livermore, USA
>
>
> From: <mpi-forum-bounces at lists.mpi-forum.org> on behalf of George Bosilca <bosilca at icl.utk.edu>
> Reply-To: Main mailing list <mpi-forum at lists.mpi-forum.org>
> Date: Thursday, December 15, 2016 at 11:41 AM
> To: Main mailing list <mpi-forum at lists.mpi-forum.org>
> Subject: Re: [Mpi-forum] MPI_Comm_split_type question
>
> Guillaume,
>
> If I understand you correctly, you are trying to figure out how many unique groups (and therefore communicators) were globally created as a result of an MPI_Comm_split_type operation. Unfortunately, depending on the type passed to this function (there is a single one defined by MPI, but in Open MPI we support several extensions), the resulting communicator can be created without any need for global knowledge, and thus it is impossible to know how many communicators have been created in total.
>
> To extend on what JeffH proposed, you can count the resulting number of communicators by doing an MPI_Allgather on the initial communicator, where each participant provides its rank in the newly created communicator (as returned by the MPI_Comm_split* operation), and then counting the number of zeros.
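>
> Something along these lines (untested; parent is the communicator the split was done on, newcomm the one it returned):
>
>   #include <stdlib.h>
>   #include <mpi.h>
>
>   /* count how many subcommunicators a split of `parent` produced,
>      given this process's result communicator `newcomm` */
>   static int count_subcomms(MPI_Comm parent, MPI_Comm newcomm)
>   {
>       int newrank, psize, ncomms = 0;
>       MPI_Comm_rank(newcomm, &newrank);
>       MPI_Comm_size(parent, &psize);
>
>       int *allranks = malloc(psize * sizeof(int));
>       MPI_Allgather(&newrank, 1, MPI_INT, allranks, 1, MPI_INT, parent);
>
>       /* each subcommunicator has exactly one rank 0 */
>       for (int i = 0; i < psize; i++)
>           if (allranks[i] == 0)
>               ncomms++;
>
>       free(allranks);
>       return ncomms;
>   }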
>
> George.
>
>
>
>> On Dec 15, 2016, at 14:05 , Jeff Hammond <jeff.science at gmail.com> wrote:
>>
>>
>>
>> On Thu, Dec 15, 2016 at 10:29 AM, Guillaume Mercier <guillaume.mercier at u-bordeaux.fr> wrote:
>>>
>>> Hi Jeff,
>>>
>>>> On 15/12/2016 18:48, Jeff Hammond wrote:
>>>> The number of output communicators from MPI_Comm_split(_type) is always
>>>> one.
>>>
>>> Yes, obviously.
>>>
>>>> Different ranks may get different outputs, but one cannot transfer
>>>> a communicator object from one rank to another.
>>>
>>> That's my issue actually.
>>>
>>>> If you want to know how many total output communicators there are, you can
>>>> perform an MPI_Allgather on the color arguments and see how many unique
>>>> values there are.
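>>>>
>>>> Roughly like this (untested), for plain MPI_Comm_split where `color` is the value each rank passed in:
>>>>
>>>>   #include <stdlib.h>
>>>>   #include <mpi.h>
>>>>
>>>>   /* gather every rank's color and count the distinct values;
>>>>      ranks that passed MPI_UNDEFINED would need to be excluded */
>>>>   static int count_colors(MPI_Comm comm, int color)
>>>>   {
>>>>       int size, unique = 0;
>>>>       MPI_Comm_size(comm, &size);
>>>>
>>>>       int *colors = malloc(size * sizeof(int));
>>>>       MPI_Allgather(&color, 1, MPI_INT, colors, 1, MPI_INT, comm);
>>>>
>>>>       for (int i = 0; i < size; i++) {
>>>>           int seen = 0;
>>>>           for (int j = 0; j < i; j++)
>>>>               if (colors[j] == colors[i]) { seen = 1; break; }
>>>>           if (!seen)
>>>>               unique++;
>>>>       }
>>>>       free(colors);
>>>>       return unique;
>>>>   }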
>>>
>>> I'm not sure about that: take the case of MPI_Comm_split_type with
>>> MPI_COMM_TYPE_SHARED. According to the standard: "this type splits
>>> the communicator into subcommunicators each of which can create a shared memory region".
>>> So there is only one color, but several subcommunicators.
>>> Or am I understanding this the wrong way?
>>
>> Each rank gets its own subcommunicator object that captures the ranks of its shared-memory domain, i.e. the ranks with which it can share memory via MPI_Win_allocate_shared.
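>>
>> For illustration, an untested fragment of that pattern (split by shared-memory domain, then allocate a shared window on the node-local communicator):
>>
>>   MPI_Comm nodecomm;
>>   MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
>>                       MPI_INFO_NULL, &nodecomm);
>>
>>   double *base;
>>   MPI_Win win;
>>   /* every rank in nodecomm can load/store this memory directly;
>>      MPI_Win_shared_query gives access to the other ranks' segments */
>>   MPI_Win_allocate_shared(1024 * sizeof(double), sizeof(double),
>>                           MPI_INFO_NULL, nodecomm, &base, &win);
>>
>>   /* ... use the shared segment ... */
>>
>>   MPI_Win_free(&win);
>>   MPI_Comm_free(&nodecomm);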
>>
>> I'm still not sure I understand what the issue is. What are you trying to do?
>>
>> Jeff
>>
>>> Regards
>>> Guillaume
>>>
>>
>>
>>
>> --
>> Jeff Hammond
>> jeff.science at gmail.com
>> http://jeffhammond.github.io/
>