What I gathered from the live debate a few months back is that some people want a semantic that differs from my (and others') interpretation of MPI_THREAD_MULTIPLE. Rather than fight forever about what MPI_THREAD_MULTIPLE means, why don't the people who want the more relaxed semantic propose it as MPI_THREAD_CONCURRENT, as was discussed back in 2016?

I will not quarrel one bit with a new thread level, MPI_THREAD_CONCURRENT, that does what Maria's team wants, but I intend to fight until my dying breath against any change to the MPI standard that violates the "as if in some order" text.

Jeff

On 10. May 2023, at 12.50, Holmes, Daniel John <daniel.john.holmes@intel.com> wrote:

Hi Jeff,

1. Yes.
2. If only it were that simple.

Your first quote is compromised by Example 11.17, which follows it: which order is the interleaved execution mimicking? MPI_SEND;MPI_RECV and MPI_RECV;MPI_SEND both result in deadlock (as stated in that example). That sentence needs "interpretation" – your wording is slightly better, but it needs to be something like "as if the stages of the operations were executed atomically in some order". The initiation and starting stages of both the send and the receive operations are local and can happen without dependence on anything else (in particular, without dependence on each other). Once that has happened, both operations are enabled and must complete in finite time. The execution outcome is "as if" each of the blocking procedures were replaced with a nonblocking initiation and completion procedure pair, and both initiation procedures were executed (in some order) before the completion procedures (in some order).
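In code, that reads something like the sketch below (my own illustration, not the standard's verbatim example; it assumes exactly two ranks and uses a synchronous-mode send so that both serialized orders visibly deadlock):

#include <mpi.h>
#include <pthread.h>

static int peer;

static void *send_thread(void *arg) {
  int x = 42;
  /* Synchronous-mode send: cannot complete until the peer's receive starts. */
  MPI_Ssend(&x, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
  return NULL;
}

static void *recv_thread(void *arg) {
  int y;
  MPI_Recv(&y, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  return NULL;
}

int main(int argc, char **argv) {
  int provided, rank;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  peer = 1 - rank; /* assumes exactly two ranks */

  /* Serialized, MPI_Ssend;MPI_Recv deadlocks (both ranks block in the send)
     and MPI_Recv;MPI_Ssend deadlocks (both ranks block in the receive).
     With two threads, both operations are initiated before either must
     complete, so the program terminates. */
  pthread_t s, r;
  pthread_create(&s, NULL, send_thread, NULL);
  pthread_create(&r, NULL, recv_thread, NULL);
  pthread_join(s, NULL);
  pthread_join(r, NULL);

  MPI_Finalize();
  return 0;
}

Joining both threads gives the outcome "as if" each blocking call were split into MPI_Issend/MPI_Irecv followed by MPI_Waitall: both initiations (in some order) precede both completions (in some order).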
The observation and clarification above is necessary, but not sufficient, for resolving the logically concurrent issue. It speaks to execution ordering, but not to message-matching ordering.

However, rather than mash everything together (we've been down that road; we know where it leads), we could consider the merits of just this adjustment on its own. We could call it "The two MPI thread-safety rules conflict with each other."

Best wishes,
Dan.

From: Jeff Hammond <jeff.science@gmail.com>
Sent: 10 May 2023 07:28
To: MPI Forum <mpiwg-hybridpm@lists.mpi-forum.org>
Cc: Holmes, Daniel John <daniel.john.holmes@intel.com>
Subject: Re: [mpiwg-hybridpm] Meeting Tomorrow

"All MPI calls are thread-safe, i.e., two concurrently running threads may make MPI calls and the outcome will be as if the calls executed in some order, even if their execution is interleaved."

I'm going to continue to die on the hill that "as if executed in some order" constrains the implementation behavior to something equivalent to "MPI operations are initiated atomically", because otherwise one cannot be guaranteed that some ordering exists. The text about logically concurrent merely explains the obvious to users: it is impossible to know in what order unsynchronized threads execute operations. The previous sentence makes clear what is meant by logically concurrent, and it is consistent with Chapter 11, i.e., it is logically unordered:

"...if the process is multithreaded, then the semantics of thread execution may not define a relative order between two send operations executed by two distinct threads. The operations are logically concurrent..."
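To make "logically concurrent" concrete, here is a minimal sketch (mine, not the standard's): two unsynchronized threads on rank 0 send with the same destination, tag, and communicator, so the receiver may observe either matching order:

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

static void *send_val(void *arg) {
  int v = *(int *)arg;
  MPI_Send(&v, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
  return NULL;
}

int main(int argc, char **argv) {
  int provided, rank;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
    /* No synchronization between the threads: the two sends are
       logically concurrent, so no relative order is defined. */
    int a = 1, b = 2;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, send_val, &a);
    pthread_create(&t2, NULL, send_val, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
  } else if (rank == 1) {
    int first, second;
    MPI_Recv(&first, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Recv(&second, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("got %d then %d\n", first, second); /* either order is valid */
  }
  MPI_Finalize();
  return 0;
}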
I can't provide the full history of the Intel instruction ENQCMD (https://community.intel.com/legacyfs/online/drupal_files/managed/c5/15/architecture-instruction-set-extensions-programming-reference.pdf), but it appears to address the problem of a large number of semi-independent HW units initiating MPI operations in a manner compliant with the text above.

As I have stated previously, it is possible to relax the constraint "as if executed in some order" with the addition of a new threading level, which Pavan proposed years ago as MPI_THREAD_CONCURRENT (https://github.com/mpiwg-sessions/sessions-issues/wiki/2016-06-07-forum#notes-from-meeting-bold--specific-work-to-do), although details are impossible to find at this point.
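For illustration only: MPI_THREAD_CONCURRENT is a proposed name, not a constant in any ratified standard, so the sketch below fakes its value; the point is just that a relaxed level would slot into the existing MPI_Init_thread negotiation:

#include <mpi.h>
#include <stdio.h>

/* HYPOTHETICAL: MPI_THREAD_CONCURRENT is not defined by any MPI standard;
   this placeholder value exists only so the sketch compiles. */
#ifndef MPI_THREAD_CONCURRENT
#define MPI_THREAD_CONCURRENT (MPI_THREAD_MULTIPLE + 1)
#endif

int main(int argc, char **argv) {
  int provided;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_CONCURRENT, &provided);
  if (provided < MPI_THREAD_CONCURRENT) {
    /* Implementation offers at most MPI_THREAD_MULTIPLE: the "as if
       executed in some order" guarantee holds, and the relaxed
       semantics must not be assumed. */
    printf("concurrent thread level not available (got %d)\n", provided);
  }
  MPI_Finalize();
  return 0;
}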
Jeff

On 9. May 2023, at 17.41, Holmes, Daniel John via mpiwg-hybridpm <mpiwg-hybridpm@lists.mpi-forum.org> wrote:

Hi all,

Unfortunately, I am double-booked for tomorrow's HACC WG time slot – so my answer to the implied question below is "not yet".

The "logically concurrent isn't" issue #117 is now accepted and merged into MPI-4.1 (take a moment to celebrate!) – but it just says "here be dragons".

Do we care enough to defeat those dragons?

Argument FOR: as systems become more heterogeneous, an MPI process is likely to abstractly "contain" more semi-independent HW units that will want to communicate with other MPI processes, which will result in lots of logically concurrent MPI communication operations – exactly the territory in which these dragons live.

Argument AGAINST: we've been throwing brave warriors into this particular dragon fire for about a decade, and we've only now convinced ourselves that the dragons do, in fact, exist. How many more volunteers do we have, and do they have sufficiently pointy sticks?

Best wishes,
Dan.

From: mpiwg-hybridpm <mpiwg-hybridpm-bounces@lists.mpi-forum.org> On Behalf Of Jim Dinan via mpiwg-hybridpm
Sent: 09 May 2023 15:15
To: Hybrid working group mailing list <mpiwg-hybridpm@lists.mpi-forum.org>
Cc: Jim Dinan <james.dinan@gmail.com>
Subject: [mpiwg-hybridpm] Meeting Tomorrow

Hi All,

We had to reschedule the topic planned for tomorrow's meeting, so the agenda is now open. Please let me know if you have a topic you'd like to discuss. If we don't have a topic ahead of time, we will cancel.

Thanks,
~Jim.

_______________________________________________
mpiwg-hybridpm mailing list
mpiwg-hybridpm@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-hybridpm