[Mpi3-hybridpm] Threading homework / next telecon

Sur, Sayantan sayantan.sur at intel.com
Mon Mar 25 12:46:13 CDT 2013


> Yes, the primary motivation for this proposal was actually the endpoint
> discussion, 

Glad to hear that!

> but I realized that it also addresses one of the primary challenges
> to upping the requirement of thread-support in the standard, since
> MPI_THREAD_PER_OBJECT is one possible way to achieve
> MPI_THREAD_MULTIPLE-like behavior without requiring locks in
> performance-critical functions.
>

Yes, it could be. It would be great if this proposal indeed solved both problems. I'd be happy to help flesh it out; let me know how I can assist.
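
To make the quoted idea concrete, here is a minimal sketch of the per-object pattern, assuming MPI plus POSIX threads. MPI_THREAD_PER_OBJECT is only a proposed level (ticket #373) and does not exist in the standard, so the sketch requests MPI_THREAD_MULTIPLE and simply arranges for each thread to drive its own duplicated communicator; that is the usage the proposal would let an implementation serve without a global lock.

    #include <mpi.h>
    #include <pthread.h>

    #define NTHREADS 4

    static MPI_Comm comms[NTHREADS];   /* one communicator per thread */

    /* Each thread communicates only on its own communicator, so an
     * implementation could serialize per object rather than lock a
     * global shared state. */
    static void *worker(void *arg)
    {
        int tid = (int)(long)arg, rank, token = tid;
        MPI_Comm_rank(comms[tid], &rank);
        int peer = rank ^ 1;           /* assumes an even number of ranks */
        MPI_Sendrecv_replace(&token, 1, MPI_INT, peer, tid, peer, tid,
                             comms[tid], MPI_STATUS_IGNORE);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided;
        /* Today this pattern requires MPI_THREAD_MULTIPLE; under the
         * proposal one would request the (hypothetical)
         * MPI_THREAD_PER_OBJECT level instead. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            MPI_Abort(MPI_COMM_WORLD, 1);
        pthread_t t[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            MPI_Comm_dup(MPI_COMM_WORLD, &comms[i]);
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)(long)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        for (int i = 0; i < NTHREADS; i++)
            MPI_Comm_free(&comms[i]);
        MPI_Finalize();
        return 0;
    }

Because no two threads ever touch the same communicator, request, or window, per-object serialization suffices; that is the promise an application would make up front under the proposal.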
 
> Of course, this proposal says nothing about the current endpoint proposal
> that attempts to reconcile MPI and UPC/CAF/Charm++ implementations (this
> is just how I read it; please refer to the ticket and comments by Pavan, Jim
> and company as to its actual meaning).  It merely addresses the ticketless
> pseudo-proposal I alluded to in email discussion.

Yes, I understand completely. My gut feeling is that the interoperability may be achievable without endpoints, but I do not yet have a concrete proposal. I will discuss this further with Pavan and Jim.

Thanks,
Sayantan.

> 
> Best,
> 
> Jeff
> 
> On Mon, Mar 25, 2013 at 12:19 PM, Sur, Sayantan <sayantan.sur at intel.com>
> wrote:
> > Jeff,
> >
> > I like the direction you are headed with this proposal. I was also
> > thinking along similar lines: improving the performance of MPI
> > applications in multi-threaded contexts by eliminating the requirement
> > to lock global shared state.
> >
> > Do you think that if this concept becomes reality, you won't need
> > "endpoints" any more (at least for your use case of extracting more
> > performance out of the one-MPI-process, many-threads scenario)?
> >
> > Thanks,
> > Sayantan
> >
> >> -----Original Message-----
> >> From: mpi3-hybridpm-bounces at lists.mpi-forum.org
> >> [mailto:mpi3-hybridpm-bounces at lists.mpi-forum.org] On Behalf Of Jeff
> >> Hammond
> >> Sent: Monday, March 25, 2013 10:08 AM
> >> To: mpi3-hybridpm at lists.mpi-forum.org
> >> Subject: Re: [Mpi3-hybridpm] Threading homework / next telecon
> >>
> >> I finally wrote up another ticket with my attempt at a more
> >> holistic proposal for thread safety.  It's a bit raw, so please try
> >> to extract the essence of it before poking at specific flaws.
> >>
> >> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/373
> >>
> >> Jeff
> >>
> >> On Mon, Mar 25, 2013 at 11:18 AM, Jeff Hammond
> >> <jhammond at alcf.anl.gov> wrote:
> >> > In case people aren't on the call or didn't hear me clearly,
> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/371 is related
> >> > to the two-faced issue of threads.
> >> >
> >> > Best,
> >> >
> >> > Jeff
> >> >
> >> > On Thu, Mar 14, 2013 at 4:45 PM, Rajeev Thakur <thakur at mcs.anl.gov>
> >> > wrote:
> >> >> The analysis of the thread-safety needs of MPI routines that Bill
> >> >> mentioned on Wednesday is available here:
> >> >> http://wiki.mpich.org/mpich/index.php/Thread_Safety . It is
> >> >> referenced in this paper:
> >> >> http://www.mcs.anl.gov/~thakur/papers/threads-parco.pdf
> >> >>
> >> >> Rajeev
> >> >>
> >> >> On Mar 14, 2013, at 3:00 PM, Barrett, Brian W wrote:
> >> >>
> >> >>> Hi all -
> >> >>>
> >> >>> As a reminder, there's a hybrid telecon on Monday, March 25th at
> >> >>> 12:00pm EDT.  During the telecon, we'll be following up on the
> >> >>> threading discussion from this week's meeting (notes below).
> >> >>> Before the telecon, we're supposed to think about a couple of
> >> >>> issues regarding threads and the MPI spec, including:
> >> >>>
> >> >>>  * which MPI functions have functionality or usability deficiencies
> >> >>> if an MPI implementation does not support MPI_THREAD_MULTIPLE?
> >> >>>  * which MPI functions have use cases that are problematic in the
> >> >>> presence of threads? (one such case is sketched below)
> >> >>>
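
One concrete case for the second question, as a sketch (the drain() helper below is hypothetical; the MPI calls are standard): the MPI_Probe / MPI_Recv pair is not atomic, so two threads draining the same communicator can both match one message, and whichever thread receives second blocks forever. MPI-3 added MPI_Mprobe / MPI_Mrecv to close exactly this hole.

    #include <mpi.h>
    #include <stdlib.h>

    /* Run by several threads at once under MPI_THREAD_MULTIPLE: both
     * threads may probe the *same* message, and the Probe/Recv pair is
     * not atomic, so one thread can steal the message the other probed. */
    static void *drain(void *arg)
    {
        MPI_Status st;
        int count;
        MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
        MPI_Get_count(&st, MPI_BYTE, &count);
        char *buf = malloc(count);
        /* Another thread may have matched the message between the Probe
         * above and this Recv, leaving this thread blocked forever. */
        MPI_Recv(buf, count, MPI_BYTE, st.MPI_SOURCE, st.MPI_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        free(buf);
        return NULL;
    }
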
> >> >>> Some notes from this week's working group meeting:
> >> >>>
> >> >>>  * Jeff H. brought up the fact that library initialization is
> >> >>> impossible to do in a thread-safe manner.  MPI_Query_thread, for
> >> >>> example, may not be called from a non-main thread under
> >> >>> MPI_THREAD_FUNNELED (or by two threads simultaneously under
> >> >>> MPI_THREAD_SERIALIZED).
> >> >>>  * The tools working group has some use cases where the
> >> >>> MPI_Initialized / MPI_Init race condition is problematic (see the
> >> >>> first sketch below).
> >> >>>  * Fab would like us to revive the generalized-request discussion;
> >> >>> it is a feature that loses much of its usability without
> >> >>> MPI_THREAD_MULTIPLE, but support for MPI_THREAD_MULTIPLE is slow to
> >> >>> spread (see the second sketch below).
> >> >>>  * Bill brought up the concern that the forum has a tendency to
> >> >>> be two-faced about threads.  We say "just use a thread" when trying
> >> >>> to solve a particular problem, yet are reluctant to require threads
> >> >>> as part of the model; we should figure out how to deal with that.
> >> >>>
> >> >>> Brian
> >> >>>
> >> >>> --
> >> >>>  Brian W. Barrett
> >> >>>  Scalable System Software Group
> >> >>>  Sandia National Laboratories
> >> >>>
> >> >>>
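
Likewise for the generalized-request note: completion must come from MPI_Grequest_complete, which in practice means a second thread calling into MPI concurrently with the thread blocked in MPI_Wait, and that is only legal under MPI_THREAD_MULTIPLE. A minimal sketch, with the callbacks doing the minimum the standard requires:

    #include <mpi.h>
    #include <pthread.h>

    static int query_fn(void *state, MPI_Status *status)
    {
        MPI_Status_set_elements(status, MPI_BYTE, 0);
        MPI_Status_set_cancelled(status, 0);
        status->MPI_SOURCE = MPI_UNDEFINED;
        status->MPI_TAG = MPI_UNDEFINED;
        return MPI_SUCCESS;
    }
    static int free_fn(void *state) { return MPI_SUCCESS; }
    static int cancel_fn(void *state, int complete) { return MPI_SUCCESS; }

    /* The worker finishes some external, non-MPI operation and must then
     * call into MPI, concurrently with the main thread's MPI_Wait. */
    static void *worker(void *arg)
    {
        MPI_Request *req = arg;
        /* ... perform the external operation here ... */
        MPI_Grequest_complete(*req);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Request req;
        pthread_t t;
        MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, &req);
        pthread_create(&t, NULL, worker, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* completed by the worker */
        pthread_join(t, NULL);
        MPI_Finalize();
        return 0;
    }
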
> >> >
> >> >
> >> >
> >> > --
> >> > Jeff Hammond
> >> > Argonne Leadership Computing Facility
> >> > University of Chicago Computation Institute
> >> > jhammond at alcf.anl.gov / (630) 252-5381
> >> > http://www.linkedin.com/in/jeffhammond
> >> > https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> >>
> >>
> >>
> >> --
> >> Jeff Hammond
> >> Argonne Leadership Computing Facility
> >> University of Chicago Computation Institute
> >> jhammond at alcf.anl.gov / (630) 252-5381
> >> http://www.linkedin.com/in/jeffhammond
> >> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> 
> 
> 
> --
> Jeff Hammond
> Argonne Leadership Computing Facility
> University of Chicago Computation Institute
> jhammond at alcf.anl.gov / (630) 252-5381
> http://www.linkedin.com/in/jeffhammond
> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> _______________________________________________
> Mpi3-hybridpm mailing list
> Mpi3-hybridpm at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm



