[mpiwg-rma] Fence + threads

Jim Dinan james.dinan at gmail.com
Wed Jul 2 15:12:17 CDT 2014


I agree; I think this was just an oversight.


On Sat, Jun 21, 2014 at 11:24 AM, William Gropp <wgropp at illinois.edu> wrote:

> I agree with Rajeev, and I suspect the text that only mentions
> communicators was an oversight - it probably came from the language in
> MPI-1.1 where only communicators existed.  I do think that this needs to be
> clarified.  After all, it is (as it should be) incorrect for two threads in
> the same process to call MPI_Barrier on the same communicator, even though
> we could define the semantics in this specific situation.
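
To make that concrete, here is a minimal sketch (illustrative only; the
two-thread setup and all names are made up) of the distinction: both threads
calling MPI_Barrier on MPI_COMM_WORLD is the incorrect case, while giving
each thread its own duplicated communicator, with the duplication done
serially before the threads start, is the usual way to get well-defined
behavior.

  #include <mpi.h>
  #include <pthread.h>

  #define NTHREADS 2
  static MPI_Comm thread_comm[NTHREADS];   /* one communicator per thread */

  static void *worker(void *arg)
  {
      int tid = *(int *)arg;
      /* Incorrect: MPI_Barrier(MPI_COMM_WORLD) called concurrently by both
         threads of the same process.  Correct: each thread issues its
         collectives on its own duplicated communicator. */
      MPI_Barrier(thread_comm[tid]);
      return NULL;
  }

  int main(int argc, char **argv)
  {
      int provided, ids[NTHREADS] = {0, 1};
      pthread_t t[NTHREADS];
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      /* MPI_Comm_dup is itself collective, so duplicate serially on the
         main thread before any worker threads exist. */
      for (int i = 0; i < NTHREADS; i++)
          MPI_Comm_dup(MPI_COMM_WORLD, &thread_comm[i]);
      for (int i = 0; i < NTHREADS; i++)
          pthread_create(&t[i], NULL, worker, &ids[i]);
      for (int i = 0; i < NTHREADS; i++)
          pthread_join(t[i], NULL);
      for (int i = 0; i < NTHREADS; i++)
          MPI_Comm_free(&thread_comm[i]);
      MPI_Finalize();
      return 0;
  }
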
>
> Bill
>
> William Gropp
> Director, Parallel Computing Institute
> Thomas M. Siebel Chair in Computer Science
> University of Illinois Urbana-Champaign
>
>
>
>
>
> On Jun 20, 2014, at 1:36 PM, Rajeev Thakur wrote:
>
> I always assumed that concurrent calls from multiple threads to collective
> functions on the same comm, win, or fh are not allowed. For example, there
> is also a collective MPI_File_sync(fh).
>
> Rajeev
>
>
> On Jun 20, 2014, at 1:32 PM, "Balaji, Pavan" <balaji at anl.gov> wrote:
>
>
> Right.  We don't say anything similar for windows.  Was it intentional or
> just an accidental omission?
>
>
> — Pavan
>
>
> On Jun 20, 2014, at 1:29 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>
>
> FWIW, the collective communication chapter says this on 217:24-27.
>
>
> "Finally, in multithreaded implementations, one can have more than one,
> concurrently
>
> executing, collective communication call at a process. In these
> situations, it is the user's responsibility
>
> to ensure that the same communicator is not used concurrently by two di
> erent
>
> collective communication calls at the same process."
>
>
>
>
> On Jun 20, 2014, at 1:18 PM, "Balaji, Pavan" <balaji at anl.gov> wrote:
>
>
>
> That doesn’t disallow it.  It just says the ordering is arbitrary, which
> is OK in my example.
>
>
> — Pavan
>
>
> On Jun 20, 2014, at 1:16 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>
>
> There is this text on pg 484, ln 18-22
>
>
> "Collective calls: Matching of collective calls on a communicator, window,
> or file handle is done according to the order in which the calls are issued
> at each process. If concurrent threads issue such calls on the same
> communicator, window or file handle, it is up to the user to make sure the
> calls are correctly ordered, using interthread synchronization."
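
For illustration, one way to provide the interthread synchronization that
text asks for, assuming two threads per process that each issue one MPI_Bcast
on the same communicator (the handoff scheme below is just one possible
convention, not something the standard prescribes): a process-local condition
variable ensures that, at every process, thread 0's broadcast completes
locally before thread 1's is issued, so the calls match in the same order
everywhere.

  #include <mpi.h>
  #include <pthread.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
  static int first_done = 0;        /* set once thread 0's broadcast returns */

  static void *thread0(void *arg)
  {
      int value = 42;
      MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);    /* issued first */
      pthread_mutex_lock(&lock);
      first_done = 1;
      pthread_cond_signal(&cond);
      pthread_mutex_unlock(&lock);
      return NULL;
  }

  static void *thread1(void *arg)
  {
      int value = 0;
      pthread_mutex_lock(&lock);    /* wait for thread 0's call to return */
      while (!first_done)
          pthread_cond_wait(&cond, &lock);
      pthread_mutex_unlock(&lock);
      MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);    /* issued second */
      return NULL;
  }

  int main(int argc, char **argv)
  {
      int provided;
      pthread_t t0, t1;
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      pthread_create(&t0, NULL, thread0, NULL);
      pthread_create(&t1, NULL, thread1, NULL);
      pthread_join(t0, NULL);
      pthread_join(t1, NULL);
      MPI_Finalize();
      return 0;
  }
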
>
>
> Rajeev
>
>
> On Jun 20, 2014, at 1:11 PM, "Balaji, Pavan" <balaji at anl.gov>
>
> wrote:
>
>
>
> The standard itself doesn't seem to disallow it.  This means that MPICH,
> at least, is incorrect here.  I'd expect most MPICH derivatives to be
> incorrect for this as well.
>
>
> However, I'm trying to understand whether that was the intention (and it was
> simply never described in the standard).  We never discussed this in the
> MPI-3 working group.
>
>
> — Pavan
>
>
> On Jun 20, 2014, at 12:46 PM, Dave Goodell (dgoodell) <dgoodell at cisco.com>
> wrote:
>
>
> On Jun 20, 2014, at 11:16 AM, "Balaji, Pavan" <balaji at anl.gov> wrote:
>
>
>
> Is the following code correct?  Two threads of a process both do the
> following:
>
>
> MPI_PUT
>
> MPI_PUT
>
> MPI_WIN_FENCE
>
>
> Collectives are not allowed on the same communicator simultaneously.  But
> we don’t say anything similar for windows.
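
To make the question concrete, a rough sketch of the pattern being asked
about (the two-process setup, buffers, and displacements are invented for
illustration): both threads of each process issue two puts and then their own
MPI_Win_fence on the same window, which is exactly the concurrent
collective-on-a-window case whose legality is at issue.

  #include <mpi.h>
  #include <pthread.h>

  static MPI_Win win;
  static int target;                  /* peer rank, set in main() */

  static void *rma_thread(void *arg)
  {
      int tid = *(int *)arg, val[2] = {1, 2};
      MPI_Put(&val[0], 1, MPI_INT, target, 2 * tid,     1, MPI_INT, win);
      MPI_Put(&val[1], 1, MPI_INT, target, 2 * tid + 1, 1, MPI_INT, win);
      MPI_Win_fence(0, win);          /* two concurrent fences on one window */
      return NULL;
  }

  int main(int argc, char **argv)
  {
      int provided, rank, ids[2] = {0, 1}, buf[4] = {0};
      pthread_t t[2];
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      target = 1 - rank;              /* assumes exactly two processes */
      MPI_Win_create(buf, (MPI_Aint)sizeof buf, (int)sizeof(int),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &win);
      MPI_Win_fence(0, win);          /* open the first access epoch */
      pthread_create(&t[0], NULL, rma_thread, &ids[0]);
      pthread_create(&t[1], NULL, rma_thread, &ids[1]);
      pthread_join(t[0], NULL);
      pthread_join(t[1], NULL);
      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
  }
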
>
>
> As I read it, it should be fine according to the standard as long as you
> haven't specified any assertions that would cause you to be lying to the
> implementation.
>
>
> Since we don't specifically disallow concurrent multithreaded operations,
> the general threading text seems to allow this.  See MPI-3, p.483, l.1-11:
>
>
> ----8<----
>
> The two main requirements for a thread-compliant implementation are listed
> below.
>
>
> 1. All MPI calls are thread-safe, i.e., two concurrently running threads
> may make MPI calls and the outcome will be as if the calls executed in some
> order, even if their execution is interleaved.
>
>
> 2. Blocking MPI calls will block the calling thread only, allowing another
> thread to execute, if available. The calling thread will be blocked until
> the event on which it is waiting occurs. Once the blocked communication is
> enabled and can proceed, then the call will complete and the thread will be
> marked runnable, within a finite time. A blocked thread will not prevent
> progress of other runnable threads on the same process, and will not
> prevent them from executing MPI calls.
>
> ----8<----
>
>
> So, absent some external inter-thread synchronization, it will be
> undefined exactly which epoch the MPI_PUTs end up in.
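
For what it's worth, one way to make that deterministic, sketched under the
same two-thread assumptions as the original question (the pthread barrier and
the choice to let only thread 0 call the fence are just one possible
convention): both threads finish issuing their puts, a single designated
thread closes the epoch with one fence, and neither thread moves on until
that fence has returned, so all four puts land in the same, known epoch.

  #include <mpi.h>
  #include <pthread.h>

  static MPI_Win win;
  static int target;                  /* peer rank, set by the driver */
  static pthread_barrier_t bar;       /* initialized for 2 threads */

  static void *rma_thread(void *arg)
  {
      int tid = *(int *)arg, val[2] = {1, 2};
      MPI_Put(&val[0], 1, MPI_INT, target, 2 * tid,     1, MPI_INT, win);
      MPI_Put(&val[1], 1, MPI_INT, target, 2 * tid + 1, 1, MPI_INT, win);
      pthread_barrier_wait(&bar);     /* every put has now been issued */
      if (tid == 0)
          MPI_Win_fence(0, win);      /* exactly one fence per process
                                         closes the epoch */
      pthread_barrier_wait(&bar);     /* keep origin buffers alive and the
                                         window quiet until the fence has
                                         returned */
      return NULL;
  }

The driver would be the same as in the sketch above, plus a
pthread_barrier_init(&bar, NULL, 2) on the main thread before the worker
threads are created.
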
>
>
> -Dave
>
>