<div dir="ltr">Nothing prohibits a use case where MPI_Init + process-specific environment variables and/or MPI_Init_thread can initialize MPI with varying thread support, and I think the MPMD use case clearly demands it.<div><br></div><div>Implementations are of course capable of (1) rounding up the thread level or (2) returning an inadequate value of provided, if a precise implementation of the application requirement cannot be satisfied.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Aug 1, 2023 at 1:39 PM Michael Knobloch via mpiwg-hybridpm <<a href="mailto:mpiwg-hybridpm@lists.mpi-forum.org">mpiwg-hybridpm@lists.mpi-forum.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I'm not even sure it fully answers the question. This statement says<br>
On Tue, Aug 1, 2023 at 1:39 PM Michael Knobloch via mpiwg-hybridpm <mpiwg-hybridpm@lists.mpi-forum.org> wrote:

I'm not even sure it fully answers the question. This statement says
that MPI processes can pass different values of required to
MPI_Init_thread, but it doesn't say whether all processes must be
initialized with a call to the same routine, or whether some can be
initialized with MPI_Init (and assume MPI_THREAD_SINGLE?) and others
with MPI_Init_thread, which was the original question.

-Michael
On 01.08.23 12:24, Joseph Schuchart via mpiwg-hybridpm wrote:
> Funny that this is inside an advice to implementors (which users like
> Joachim are not expected to read) when it really should be part of the
> normative text.
>
> Cheers
> Joseph
>
> On 8/1/23 11:09, Jeff Hammond via mpiwg-hybridpm wrote:
>>
>> It is addressed here (in the text for MPI_INIT_THREAD):
>>
>> Advice to implementors. If provided is not MPI_THREAD_SINGLE then the
>> MPI library should not invoke C or Fortran library calls that are not
>> thread safe, e.g., in an environment where malloc is not thread safe,
>> then malloc should not be used by the MPI library.
>>
>> Some implementors may want to use different MPI libraries for
>> different levels of thread support. They can do so using dynamic
>> linking and selecting which library will be linked when
>> MPI_INIT_THREAD is invoked. If this is not possible, then
>> optimizations for lower levels of thread support will occur only when
>> the level of thread support required is specified at link time.
>>
>> *Note that required need not be the same value on all processes of
>> MPI_COMM_WORLD.* (End of advice to implementors.)
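(To illustrate the dynamic-linking approach in that advice: a stub MPI_Init_thread along these lines is roughly what it describes. This is only a sketch; the backend library names libmpi_st.so/libmpi_mt.so and the symbol backend_init_thread are invented for illustration, not taken from any real implementation.)

/* Sketch of a stub MPI_Init_thread that loads one of two hypothetical
 * backend libraries depending on the requested thread level. */
#include <dlfcn.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

typedef int (*init_thread_fn)(int *, char ***, int, int *);

int MPI_Init_thread(int *argc, char ***argv, int required, int *provided)
{
    const char *lib = (required == MPI_THREAD_SINGLE) ? "libmpi_st.so"
                                                      : "libmpi_mt.so";
    void *handle = dlopen(lib, RTLD_NOW | RTLD_GLOBAL);
    if (handle == NULL) {
        fprintf(stderr, "dlopen(%s): %s\n", lib, dlerror());
        abort();
    }
    init_thread_fn backend =
        (init_thread_fn)dlsym(handle, "backend_init_thread");
    if (backend == NULL) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        abort();
    }
    return backend(argc, argv, required, provided);
}

Linked with -ldl, the stub forwards the call to whichever backend matches the requested level, so the single-threaded path never pays for locking.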
>>
>> Jeff
>>
>> On Tue, Aug 1, 2023 at 11:47 AM Joachim Jenke via mpiwg-hybridpm
>> <mpiwg-hybridpm@lists.mpi-forum.org> wrote:
>> >
>> > Hello,
>> >
>> > I'm not sure whether this is the right group to ask, but it is
>> > related to hybrid execution :)
>> >
>> > Must all MPI processes collectively call MPI_Init or MPI_Init_thread,
>> > or can some processes call one and other processes call the other
>> > function?
>> >
>> > Also, can the processes request different thread-support levels?
>> >
>> > Our use case is MPMD execution, where some processes run
>> > multi-threaded and others single-threaded.
>> >
>> > Thanks,
>> > Joachim
>> >
>> >
>> > --
>> > Dr. rer. nat. Joachim Jenke
>> >
>> > IT Center
>> > Group: High Performance Computing
>> > Division: Computational Science and Engineering
>> > RWTH Aachen University
>> > Seffenter Weg 23
>> > D-52074 Aachen (Germany)
>> > Tel: +49 241 80-24765
>> > Fax: +49 241 80-624765
>> > jenke@itc.rwth-aachen.de
>> > www.itc.rwth-aachen.de
>>
>>
>> --
>> Jeff Hammond
>> jeff.science@gmail.com
>> http://jeffhammond.github.io/
>>

--
Michael Knobloch
Juelich Supercomputing Centre (JSC)
Institute for Advanced Simulation (IAS)
Tel: +49 2461 61-3546
Fax: +49 2461 61-6656

--
Jeff Hammond
jeff.science@gmail.com
http://jeffhammond.github.io/