[mpiwg-hybridpm] Can MPI processes request different threading levels?

Michael Knobloch m.knobloch at fz-juelich.de
Tue Aug 1 05:39:19 CDT 2023


I'm not even sure it fully answers the question. This statement says
that MPI processes can request different thread-support levels in
MPI_Init_thread, but it doesn't say whether all processes must be
initialized with a call to the same routine, or whether some can be
initialized with MPI_Init (and assume MPI_THREAD_SINGLE?) and others
with MPI_Init_thread, which was the original question.
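For reference, a minimal sketch of the MPMD scenario being debated (hypothetical file and launcher names; whether mixing levels across ranks like this is portable is exactly the open question). Each binary requests its own thread level via MPI_Init_thread and checks what was actually provided:

```c
/* mt_app.c -- sketch of the multi-threaded binary in the MPMD job.
 * A hypothetical single-threaded counterpart would request
 * MPI_THREAD_SINGLE (or call plain MPI_Init) instead. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int required = MPI_THREAD_MULTIPLE;
    int provided;

    MPI_Init_thread(&argc, &argv, required, &provided);

    /* provided may be lower than required; the standard only obliges
     * the implementation to report what it can actually deliver. */
    if (provided < required) {
        fprintf(stderr, "requested thread level %d, got only %d\n",
                required, provided);
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    /* ... multi-threaded application work ... */

    MPI_Finalize();
    return 0;
}
```

Such a job might be launched with an MPMD command line along the lines of `mpiexec -n 2 ./mt_app : -n 2 ./st_app` (launcher syntax varies by implementation); it requires an MPI library and launcher to build and run.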

-Michael

On 01.08.23 12:24, Joseph Schuchart via mpiwg-hybridpm wrote:
> Funny that this is inside an advice to implementors (which users like
> Joachim are not expected to read) when it really should be part of the
> normative text.
>
> Cheers
> Joseph
>
> On 8/1/23 11:09, Jeff Hammond via mpiwg-hybridpm wrote:
>>
>> It is addressed here (in the text for MPI_INIT_THREAD):
>>
>> Advice to implementors. If provided is not MPI_THREAD_SINGLE then the
>> MPI library should not invoke C or Fortran library calls that are not
>> thread safe, e.g., in an environment where malloc is not thread safe,
>> then malloc should not be used by the MPI library.
>>
>> Some implementors may want to use different MPI libraries for
>> different levels of thread support. They can do so using dynamic
>> linking and selecting which library will be linked when
>> MPI_INIT_THREAD is invoked. If this is not possible, then
>> optimizations for lower levels of thread support will occur only when
>> the level of thread support required is specified at link time.
>>
>> *Note that required need not be the same value on all processes of
>> MPI_COMM_WORLD.* (End of advice to implementors.)
>>
>> Jeff
>>
>> On Tue, Aug 1, 2023 at 11:47 AM Joachim Jenke via mpiwg-hybridpm
>> <mpiwg-hybridpm at lists.mpi-forum.org> wrote:
>> >
>> > Hello,
>> >
>> > I'm not sure whether this is the right group to ask, but it is related
>> > to hybrid execution :)
>> >
>> > Must all MPI processes collectively call MPI_Init or MPI_Init_thread,
>> > or can some processes call one and other processes call the other
>> > function?
>> >
>> > Also, can the processes request different thread-support levels?
>> >
>> > Our use case is MPMD execution, where some processes run
>> > multi-threaded and others single-threaded.
>> >
>> > Thanks,
>> > Joachim
>> >
>> >
>> > --
>> > Dr. rer. nat. Joachim Jenke
>> >
>> > IT Center
>> > Group: High Performance Computing
>> > Division: Computational Science and Engineering
>> > RWTH Aachen University
>> > Seffenter Weg 23
>> > D 52074  Aachen (Germany)
>> > Tel: +49 241 80-24765
>> > Fax: +49 241 80-624765
>> > jenke at itc.rwth-aachen.de
>> > www.itc.rwth-aachen.de <http://www.itc.rwth-aachen.de>
>> > _______________________________________________
>> > mpiwg-hybridpm mailing list
>> > mpiwg-hybridpm at lists.mpi-forum.org
>> > https://lists.mpi-forum.org/mailman/listinfo/mpiwg-hybridpm
>>
>>
>>
>> --
>> Jeff Hammond
>> jeff.science at gmail.com
>> http://jeffhammond.github.io/
>>
>

--
Michael Knobloch
Juelich Supercomputing Centre   (JSC)
Institute for Advanced Simulation (IAS)
Telefon: +49 2461 61-3546
Telefax: +49 2461 61-6656



------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Stefan Müller
Geschaeftsfuehrung: Prof. Dr. Astrid Lambrecht (Vorsitzende),
Karsten Beneke (stellv. Vorsitzender), Dr. Ir. Pieter Jansens
------------------------------------------------------------------------------------------------

