[Mpi3-hybridpm] [EXTERNAL] Re: Threading homeworking / next telecon

Sur, Sayantan sayantan.sur at intel.com
Mon Mar 25 17:31:30 CDT 2013


This is interesting. It might be useful for implementers if the app could inform the MPI library that, in its usage model, per-communicator queues might yield a performance benefit, for example when many threads are in use (among other cases).

Info key? Assert?
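As a rough sketch of what such a hint could look like on the application side, using an info key: the key name below is hypothetical and not defined by the standard; only MPI_Comm_dup_with_info and the MPI_Info calls are real MPI-3 API.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        /* Hypothetical key: not defined by the standard, shown only to
         * illustrate the shape such a hint could take. */
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "mpi_assert_per_comm_matching", "true");

        MPI_Comm comm_threads;
        MPI_Comm_dup_with_info(MPI_COMM_WORLD, info, &comm_threads);
        MPI_Info_free(&info);

        /* ... each thread posts its receives on comm_threads ... */

        MPI_Comm_free(&comm_threads);
        MPI_Finalize();
        return 0;
    }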

Sayantan

From: mpi3-hybridpm-bounces at lists.mpi-forum.org [mailto:mpi3-hybridpm-bounces at lists.mpi-forum.org] On Behalf Of William Gropp
Sent: Monday, March 25, 2013 2:24 PM
To: mpi3-hybridpm at lists.mpi-forum.org
Subject: Re: [Mpi3-hybridpm] [EXTERNAL] Re: Threading homeworking / next telecon

An implementation is free to use separate queues for each communicator; some of us have discussed this in the past, in part to permit the use of lock-free structures for queue updates, particularly since the communicator is the one matching field that can never be a wildcard.  I believe that this is within the existing semantics.  It even has benefits for single-threaded execution, since the communicator match is done once rather than in every query on the queue.
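A minimal sketch of that layout (illustrative only, not how any particular MPI library is implemented): a posted-receive queue hung off each communicator, selected once by context id, so the scan inside a queue only has to consider source and tag wildcards.

    #include <mpi.h>
    #include <stddef.h>

    struct posted_recv {
        int source;                /* may be MPI_ANY_SOURCE */
        int tag;                   /* may be MPI_ANY_TAG    */
        void *buf;
        struct posted_recv *next;
    };

    struct comm_queue {
        int context_id;            /* identifies the communicator; never a wildcard */
        struct posted_recv *head;  /* candidate for lock-free list updates          */
    };

    /* Match an incoming (context_id, source, tag) envelope: pick the queue
     * by context id once, then scan only that queue for source/tag. */
    static struct posted_recv *match(struct comm_queue *queues, int nqueues,
                                     int context_id, int source, int tag)
    {
        for (int i = 0; i < nqueues; i++) {
            if (queues[i].context_id != context_id)
                continue;
            for (struct posted_recv *r = queues[i].head; r != NULL; r = r->next)
                if ((r->source == source || r->source == MPI_ANY_SOURCE) &&
                    (r->tag == tag || r->tag == MPI_ANY_TAG))
                    return r;
            return NULL;           /* exactly one queue per communicator */
        }
        return NULL;
    }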

In terms of progress, the standard is deliberately vague on the details, and thus I don't believe we have the requirement that you quote.  And some of the other interpretations of progress would not be helped by any thread-safety restriction.

Bill

William Gropp
Director, Parallel Computing Institute
Deputy Director for Research
Institute for Advanced Computing Applications and Technologies
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign




On Mar 25, 2013, at 4:15 PM, Jeff Hammond wrote:


On Mon, Mar 25, 2013 at 3:17 PM, William Gropp <wgropp at illinois.edu> wrote:

I was only addressing the issue of calling the thread level routines before
knowing what thread level you had.

Okay, sorry, I cannot tell which tickets people are referring to since
I have a bunch of different ones right now.


I'm not sure what you are looking for.  In the case of MPI_THREAD_MULTIPLE, an implementation can provide significant concurrency today without any change in the MPI standard - that's a major reason for that table (more to the point, the table is meant as a guide to avoiding locks).  Can you give me an example of something that the current MPI semantics prohibit that you'd like to achieve with MPI_THREAD_PER_OBJECT?

It is my understanding of the progress requirements that any call to MPI must make progress on all MPI operations.  This means that two threads calling e.g. MPI_Recv must each walk all of the message queues.  If a thread needs to modify a queue because a match occurs, then this must be done in a thread-safe way, which presumably requires something resembling mutual exclusion or transactions.  If a call to MPI_Recv only had to make progress on its own communicator, then two threads calling MPI_Recv on two different communicators would (1) only have to walk the message queue associated with their respective communicators and (2) require nothing resembling mutual exclusion to update the message queue in the event that matching occurs.
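A minimal sketch of the scenario described above, assuming MPI_THREAD_MULTIPLE and at least two ranks (rank 1 runs the two receiving threads, rank 0 sends on both communicators); the communicator names are illustrative only.

    #include <mpi.h>
    #include <pthread.h>

    static MPI_Comm comm_a, comm_b;

    static void *recv_on(void *arg)
    {
        MPI_Comm comm = *(MPI_Comm *)arg;
        int val;
        /* Under a "progress only on this communicator" reading, this call
         * would touch only comm's queue; under the stricter reading it
         * must make progress on everything. */
        MPI_Recv(&val, 1, MPI_INT, 0, 0, comm, MPI_STATUS_IGNORE);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_dup(MPI_COMM_WORLD, &comm_a);
        MPI_Comm_dup(MPI_COMM_WORLD, &comm_b);

        if (rank == 1) {
            pthread_t ta, tb;
            pthread_create(&ta, NULL, recv_on, &comm_a);
            pthread_create(&tb, NULL, recv_on, &comm_b);
            pthread_join(ta, NULL);
            pthread_join(tb, NULL);
        } else if (rank == 0) {
            int one = 1;
            MPI_Send(&one, 1, MPI_INT, 1, 0, comm_a);
            MPI_Send(&one, 1, MPI_INT, 1, 0, comm_b);
        }

        MPI_Comm_free(&comm_a);
        MPI_Comm_free(&comm_b);
        MPI_Finalize();
        return 0;
    }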

Forgive me if I've got some of the details wrong.  If I've got all of
the details and the big picture wrong, then I'll think about it more.

Jeff


On Mar 25, 2013, at 2:53 PM, Jeff Hammond wrote:

That doesn't do much for me in terms of enabling greater concurrency
in performance-critical operations.

I'd like to propose that we try to make all of "Access Only", "Update RefCount", "Read of List" and "None" thread-safe in all cases.  All of these are read-only except for "Update RefCount", which can be done with atomics (see the sketch below).  I am assuming that concurrent reads are only permitted to happen after the writing calls on the object have completed.  This is the essence of MPI_THREAD_PER_OBJECT.
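A minimal sketch of the "Update RefCount" case done with C11 atomics instead of a lock; the object layout here is illustrative, not any MPI library's internals.

    #include <stdatomic.h>

    struct mpi_object {
        atomic_int refcount;
        /* ... object state, written only before concurrent use begins ... */
    };

    static void obj_addref(struct mpi_object *obj)
    {
        atomic_fetch_add_explicit(&obj->refcount, 1, memory_order_relaxed);
    }

    /* Returns 1 when the caller dropped the last reference. */
    static int obj_release(struct mpi_object *obj)
    {
        return atomic_fetch_sub_explicit(&obj->refcount, 1,
                                         memory_order_acq_rel) == 1;
    }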

Jeff






--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond


