<div dir="ltr">It's not normative because it is only telling implementers that they cannot assume something the text never states. The normative text permits the "required" argument to vary across processes because it does not say that it cannot.<div><div><br></div><div>Jeff</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Aug 1, 2023 at 1:24 PM Joseph Schuchart via mpiwg-hybridpm <<a href="mailto:mpiwg-hybridpm@lists.mpi-forum.org">mpiwg-hybridpm@lists.mpi-forum.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Funny that this is inside an advice to implementors (which users like <br>
Joachim are not expected to read) when it really should be part of the <br>
normative text.<br>
<br>
Cheers<br>
Joseph<br>
<br>
On 8/1/23 11:09, Jeff Hammond via mpiwg-hybridpm wrote:<br>
><br>
> It is addressed here (in the text for MPI_INIT_THREAD):<br>
><br>
> Advice to implementors. If provided is not MPI_THREAD_SINGLE then the <br>
> MPI library should not invoke C or Fortran library calls that are not <br>
> thread safe, e.g., in an environment where malloc is not thread safe, <br>
> then malloc should not be used by the MPI library.<br>
><br>
> Some implementors may want to use different MPI libraries for <br>
> different levels of thread support. They can do so using dynamic <br>
> linking and selecting which library will be linked when <br>
> MPI_INIT_THREAD is invoked. If this is not possible, then <br>
> optimizations for lower levels of thread support will occur only when <br>
> the level of thread support required is specified at link time.<br>
><br>
> *Note that required need not be the same value on all processes of <br>
> MPI_COMM_WORLD.* (End of advice to implementors.)<br>
><br>
> Jeff<br>
><br>
> On Tue, Aug 1, 2023 at 11:47 AM Joachim Jenke via mpiwg-hybridpm <br>
> <<a href="mailto:mpiwg-hybridpm@lists.mpi-forum.org" target="_blank">mpiwg-hybridpm@lists.mpi-forum.org</a>> wrote:<br>
> ><br>
> > Hello,<br>
> ><br>
> > I'm not sure whether this is the right group to ask, but it is related<br>
> > to hybrid execution :)<br>
> ><br>
> > Must all MPI processes collectively call MPI_Init or MPI_Init_thread -<br>
> > or can some processes call one and other processes call the other <br>
> function?<br>
> ><br>
> > Also, can the processes request different thread-support levels?<br>
> ><br>
> > Our use case is MPMD execution, where some processes run multi-threaded<br>
> > and others single-threaded.<br>
> ><br>
> > Thanks,<br>
> > Joachim<br>
> ><br>
> ><br>
> > --<br>
> > Dr. rer. nat. Joachim Jenke<br>
> ><br>
> > IT Center<br>
> > Group: High Performance Computing<br>
> > Division: Computational Science and Engineering<br>
> > RWTH Aachen University<br>
> > Seffenter Weg 23<br>
> > D 52074 Aachen (Germany)<br>
> > Tel: +49 241 80-24765<br>
> > Fax: +49 241 80-624765<br>
> > <a href="mailto:jenke@itc.rwth-aachen.de" target="_blank">jenke@itc.rwth-aachen.de</a><br>
> > <a href="http://www.itc.rwth-aachen.de" rel="noreferrer" target="_blank">www.itc.rwth-aachen.de</a> <<a href="http://www.itc.rwth-aachen.de" rel="noreferrer" target="_blank">http://www.itc.rwth-aachen.de</a>><br>
> > _______________________________________________<br>
> > mpiwg-hybridpm mailing list<br>
> > <a href="mailto:mpiwg-hybridpm@lists.mpi-forum.org" target="_blank">mpiwg-hybridpm@lists.mpi-forum.org</a><br>
> > <a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-hybridpm" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-hybridpm</a><br>
><br>
><br>
><br>
> --<br>
> Jeff Hammond<br>
> <a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br>
> <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a><br>
><br>
<br>
</blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>
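P.S. A minimal sketch of the scenario Joachim describes: since the standard allows "required" to differ across processes, each executable in an MPMD launch can pass its own level to MPI_INIT_THREAD. The command-line convention ("multiple" selects MPI_THREAD_MULTIPLE) is purely illustrative, not anything the standard prescribes.

```c
/* Sketch: MPMD processes requesting different thread-support levels.
 * Build with an MPI compiler wrapper, e.g. mpicc. The argv-based
 * selection of "required" is a hypothetical convention for this demo. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    /* Threaded binaries/ranks pass "multiple"; the rest default to
     * MPI_THREAD_SINGLE. Each process chooses independently. */
    int required = (argc > 1 && strcmp(argv[1], "multiple") == 0)
                       ? MPI_THREAD_MULTIPLE
                       : MPI_THREAD_SINGLE;
    int provided, rank;

    MPI_Init_thread(&argc, &argv, required, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* "provided" may exceed "required"; only a lower level is an error
     * for this application's purposes. */
    if (provided < required) {
        fprintf(stderr, "rank %d: required level %d, got only %d\n",
                rank, required, provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    printf("rank %d: required=%d provided=%d\n", rank, required, provided);

    MPI_Finalize();
    return 0;
}
```

Launched with mpiexec's standard colon-separated MPMD syntax, e.g. `mpiexec -n 1 ./a.out multiple : -n 3 ./a.out`, one rank requests MPI_THREAD_MULTIPLE while the others request MPI_THREAD_SINGLE, which is exactly the mixed case the quoted advice says implementations must not rule out.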