[mpiwg-hybridpm] Meeting Tomorrow

Jeff Hammond jeff.science at gmail.com
Wed May 10 01:27:58 CDT 2023


"All MPI calls are thread-safe, i.e., two concurrently running threads may make MPI calls and the outcome will be as if the calls executed in some order, even if their execution is interleaved."

I’m going to continue to die on the hill that “as if executed in some order” constrains implementation behavior to something equivalent to “MPI operations are initiated atomically,” because otherwise there is no guarantee that any ordering exists. The text about logically concurrent operations merely states the obvious for users: it is impossible to know in what order unsynchronized threads execute operations. The preceding sentence makes clear what is meant by logically concurrent, and it is consistent with Chapter 11, i.e., the operations are logically unordered (see the sketch after the quote):

“...if the process is multithreaded, then the semantics of thread execution may not define a relative order between two send operations executed by two distinct threads. The operations are logically concurrent..."
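To make the point concrete, here is a minimal sketch (my illustration, not text from the standard; it assumes two ranks and an implementation that provides MPI_THREAD_MULTIPLE). Rank 0 initiates two logically concurrent sends from unsynchronized threads, and the guarantee under discussion is only that the outcome is as if the sends were initiated in some order:

    /* Minimal sketch: two unsynchronized threads on rank 0 each initiate
       a send to rank 1 under MPI_THREAD_MULTIPLE. No synchronization
       defines a relative order between the two sends, so they are
       logically concurrent in the Chapter 11 sense. */
    #include <mpi.h>
    #include <pthread.h>

    static void *send_tagged(void *arg)
    {
        int tag = *(int *)arg;
        int payload = tag;
        /* Logically concurrent with the send on the other thread. */
        MPI_Send(&payload, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            pthread_t t[2];
            int tags[2] = {0, 1};
            pthread_create(&t[0], NULL, send_tagged, &tags[0]);
            pthread_create(&t[1], NULL, send_tagged, &tags[1]);
            pthread_join(t[0], NULL);
            pthread_join(t[1], NULL);
        } else if (rank == 1) {
            int buf;
            /* The receives match in some order consistent with an
               interleaving of the two sends. */
            MPI_Recv(&buf, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Recv(&buf, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }

Rank 1 may match the two messages in either order; what “as if executed in some order” rules out is an outcome consistent with no interleaving at all.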

I can’t provide the full history of the Intel instruction ENQCMD <https://community.intel.com/legacyfs/online/drupal_files/managed/c5/15/architecture-instruction-set-extensions-programming-reference.pdf>, but it appears to address exactly this problem: allowing a large number of semi-independent HW units to initiate MPI operations in a manner compliant with the text above.

As I have stated previously, it is possible to relax the constraint “as if executed in some order” with the addition of a new threading level, which Pavan proposed years ago as MPI_THREAD_CONCURRENT <https://github.com/mpiwg-sessions/sessions-issues/wiki/2016-06-07-forum#notes-from-meeting-bold--specific-work-to-do> (although details are impossible to find at this point).
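For illustration only: MPI_THREAD_CONCURRENT is not part of any MPI standard, so the constant below is a hypothetical placeholder of mine; requesting such a level would presumably look like any other thread-level request, with implementations free to downgrade:

    /* Hypothetical sketch: MPI_THREAD_CONCURRENT is a proposal, not part
       of any MPI standard; the placeholder definition below is mine. */
    #include <mpi.h>
    #include <stdio.h>

    #define MPI_THREAD_CONCURRENT (MPI_THREAD_MULTIPLE + 1) /* hypothetical */

    int main(int argc, char **argv)
    {
        int provided;
        /* Ask for the relaxed level; an implementation that does not
           support it returns a lower level in 'provided'. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_CONCURRENT, &provided);
        if (provided < MPI_THREAD_CONCURRENT)
            printf("fell back to thread level %d\n", provided);
        /* Under the hypothetical relaxed level, "as if executed in some
           order" would no longer be guaranteed for unsynchronized threads. */
        MPI_Finalize();
        return 0;
    }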

Jeff

> On 9. May 2023, at 17.41, Holmes, Daniel John via mpiwg-hybridpm <mpiwg-hybridpm at lists.mpi-forum.org> wrote:
> 
> Hi all,
>
> Unfortunately, I am double-booked for tomorrow’s HACC WG time slot – so my answer to the implied question below is “not yet”.
>
> The “logically concurrent isn’t” issue #117 is now accepted and merged into MPI-4.1 (take a moment to celebrate!) – but it just says “here be dragons”.
>
> Do we care enough to defeat those dragons?
>
> Argument FOR: as systems become more heterogeneous, an MPI process is likely to abstractly “contain” more semi-independent HW units that will want to communicate with other MPI processes, which will result in lots of logically concurrent MPI communication operations – exactly the territory in which these dragons live.
>
> Argument AGAINST: we’ve been throwing brave warriors into this particular dragon fire for about a decade and we’ve only now convinced ourselves that the dragons do, in fact, exist. How many more volunteers do we have and do they have sufficiently pointy sticks?
>
> Best wishes,
> Dan.
>
> From: mpiwg-hybridpm <mpiwg-hybridpm-bounces at lists.mpi-forum.org> On Behalf Of Jim Dinan via mpiwg-hybridpm
> Sent: 09 May 2023 15:15
> To: Hybrid working group mailing list <mpiwg-hybridpm at lists.mpi-forum.org>
> Cc: Jim Dinan <james.dinan at gmail.com>
> Subject: [mpiwg-hybridpm] Meeting Tomorrow
>
> Hi All,
>
> We had to reschedule the topic planned for tomorrow's meeting, so the agenda is now open. Please let me know if you have a topic you'd like to discuss. If we don't have a topic ahead of time, we will cancel.
>
> Thanks,
>
> ~Jim.
> _______________________________________________
> mpiwg-hybridpm mailing list
> mpiwg-hybridpm at lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-hybridpm

