[Mpi3-hybridpm] Mpi3-hybridpm Digest, Vol 5, Issue 4

Bronis R. de Supinski bronis at llnl.gov
Thu Aug 6 12:33:41 CDT 2009


Marc:

It is true that you can force persistence of thread
numbering (and threadprivate data) for the first level.
However, if you use OpenMP nested parallelism then the
numbering of threads in inner regions (and threadprivate
data) is not guaranteed to persist. It seems like you
require that in some cases (and this topic is one that
the OpenMP language committee has discussed although
it has not been a major point recently).
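The point about inner-region numbering can be made concrete with a minimal sketch (the editor's illustration, not code from this thread; the function name is invented, and serial stubs are supplied so it also builds without an OpenMP compiler):

```c
/* Sketch: thread numbers restart at 0 inside each nested team, and the
   OpenMP spec gives no guarantee that inner thread N in one instance of
   the inner region is the same underlying thread (with the same
   threadprivate data) as inner thread N in a later instance. */
#include <stdio.h>

#ifdef _OPENMP
#include <omp.h>
#else
/* serial fallbacks so the sketch also builds without OpenMP support */
static int omp_get_thread_num(void) { return 0; }
static void omp_set_max_active_levels(int n) { (void)n; }
#endif

/* Runs a nested parallel region inside an outer one; returns the number
   of outer threads that executed the inner region (>= 1 even serially). */
int nested_demo(void) {
    int outer_count = 0;
    omp_set_max_active_levels(2);   /* allow the inner region to be active */

    #pragma omp parallel num_threads(2) reduction(+:outer_count)
    {
        int outer = omp_get_thread_num();
        #pragma omp parallel num_threads(2)
        {
            /* inner numbering restarts at 0 in every inner team; its
               binding to OS threads may vary between region instances */
            printf("outer %d, inner %d\n", outer, omp_get_thread_num());
        }
        outer_count += 1;
    }
    return outer_count;
}
```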

Bronis


On Thu, 6 Aug 2009, Snir Marc wrote:

>
> On Aug 6, 2009, at 11:00 AM, mpi3-hybridpm-request at lists.mpi-forum.org
> wrote:
>
> > Today's Topics:
> >
> >   1. Re: MPI + threads / MPI +OpenMP (Bronis R. de Supinski)
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Wed, 5 Aug 2009 15:26:51 -0700 (PDT)
> > From: "Bronis R. de Supinski" <bronis at llnl.gov>
> > Subject: Re: [Mpi3-hybridpm] MPI + threads / MPI +OpenMP
> > To: mpi3-hybridpm at lists.mpi-forum.org
> > Message-ID: <Pine.LNX.4.58.0908051511330.25188 at tux213.llnl.gov>
> > Content-Type: TEXT/PLAIN; charset=US-ASCII
> >
> >
> > Marc:
> >
> > Hmm. Interesting reading.
> >
> > If the thread support level is MPI_THREAD_SINGLE,
> > must MPI_ENDPOINT_CREATE be called before more than
> > one thread exists in the process? I assume it is
> > erroneous to have more threads in the process than
> > generated (allocated?) endpoints.
>
> The way I wrote my proposal, you still need to call MPI_ENDPOINT_CREATE.
> >
> > The OpenMP part does not clearly incorporate that
> > OpenMP threads are distinct from kernel or user
> > threads. OpenMP threads go away when a parallel
> > region ends. An implementation can choose to
> > implement them on top of more permanent threads
> > (and most do). However, you are not guaranteed
> > any fixed relationship between the numbering
> > of threads in two distinct parallel regions
> > (there are good reasons for this that I could
> > go into if anyone cares). Maybe this does not matter
> > for your proposal (I have only skimmed the later
> > part and need to cogitate on it further). It may
> > be possible to have OpenMP extensions that change
> > this aspect of the language; do you require them
> > or at least find them useful?
> >
>
> Actually, OpenMP does provide some guarantees on the persistence of
> threadprivate data across parallel regions. I quote from Section 2.9.2
> of the OpenMP 3.0 standard:
>
> "The content of a threadprivate variable can change across a task
> scheduling point if the executing thread switches to another
> schedulable task that modifies the variable. For more details on task
> scheduling, see Section 1.3 on page 11 and Section 2.7 on page 59.
> In parallel regions, references by the master thread will be to the
> copy of the variable in the thread which encountered the parallel
> region. During the sequential part references will be to the initial
> thread's copy of the variable. The values of data in the initial
> thread's copy of a threadprivate variable are guaranteed to persist
> between any two consecutive references to the variable in the program.
> The values of data in the threadprivate variables of non-initial
> threads are guaranteed to persist between two consecutive active
> parallel regions only if all the following conditions hold:
> • Neither parallel region is nested inside another explicit parallel
> region.
> • The number of threads used to execute both parallel regions is the
> same.
> • The value of the dyn-var internal control variable in the enclosing
> task region is false at entry to both parallel regions.
> If these conditions all hold, and if a threadprivate variable is
> referenced in both regions, then threads with the same thread number
> in their respective regions will reference the same copy of that
> variable."
>
>
> So, OpenMP can support a static thread model (fixed number of threads,
> threadprivate variables are persistent), or it can support a dynamic
> task model (tasks are dynamically allocated to threads at scheduling
> points; threadprivate variables persist, but their association with
> tasks may change when tasks are rescheduled). The user can choose one
> or the other. The MPI_THREAD_SINGLE model (or MPI_THREAD_FUNNELED)
> matches a static OpenMP model that power users, who want to explicitly
> control what runs where, are likely to use. The MPI_THREAD_MULTIPLE
> model matches the dynamic OpenMP model, where the user wants automatic
> load balancing and is willing to pay the overhead of sharing resources
> (cores, ports). In the static model, the data structures used by a
> thread to communicate with MPI are threadprivate; in the dynamic
> model, they are shared. In the unlikely case that an OpenMP runtime
> implements the static model while not maintaining a fixed association
> of "OpenMP threads" to kernel threads, then that runtime will have to
> inform the MPI runtime whenever threads switch identities. I assume
> such logic has to be there anyhow, for blocking system calls.
>
>
>
> > Are there other extensions to OpenMP that would
> > be useful? We are actively working on OpenMP 4.0...
>
> Just keep the two models alive.
>
> >
> > Bronis
> >
> >
> >
> >
> > On Wed, 5 Aug 2009, Snir Marc wrote:
> >
> >> I attach a detailed proposal for a possible hybrid model that, I
> >> believe, would be easy to implement. The proposal draft is
> >> attached. I
> >> hope it can be discussed by this working group.
> >>
> >>
> >> Marc Snir
> >> Michael Faiman and Saburo Muroga Professor
> >> Department of Computer Science, University of Illinois at Urbana
> >> Champaign
> >> 4323 Siebel Center, 201 N Goodwin, IL 61801
> >> Tel (217) 244 6568
> >> Web http://www.cs.uiuc.edu/homes/snir
> >>
> >>
> >
> >
> > ------------------------------
> >
> > _______________________________________________
> > Mpi3-hybridpm mailing list
> > Mpi3-hybridpm at lists.mpi-forum.org
> > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm
> >
> >
> > End of Mpi3-hybridpm Digest, Vol 5, Issue 4
> > *******************************************
>
> Marc Snir
> Michael Faiman and Saburo Muroga Professor
> Department of Computer Science, University of Illinois at Urbana
> Champaign
> 4323 Siebel Center, 201 N Goodwin, IL 61801
> Tel (217) 244 6568
> Web http://www.cs.uiuc.edu/homes/snir
>
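The three persistence conditions Marc quotes (no nesting, same thread count, dyn-var false) can be sketched as follows. This is the editor's illustration, assuming a C compiler with OpenMP support (serial stubs are provided otherwise); the function name is invented:

```c
/* Sketch of the threadprivate persistence guarantee from Section 2.9.2:
   a value stored by thread N in one parallel region is seen again by
   thread N in the next region, provided neither region is nested, both
   use the same thread count, and dynamic adjustment is off. */
#ifdef _OPENMP
#include <omp.h>
#else
/* serial fallbacks so the sketch also builds without OpenMP support */
static int omp_get_thread_num(void) { return 0; }
static void omp_set_dynamic(int d) { (void)d; }
#endif

/* One threadprivate slot per thread. */
static int slot = -1;
#pragma omp threadprivate(slot)

/* Writes each thread's number into its threadprivate slot in one region
   and rereads it in the next; returns the number of threads whose value
   did not persist (0 when the quoted conditions hold). */
int persistence_demo(void) {
    int violations = 0;
    omp_set_dynamic(0);                    /* dyn-var false in both regions */

    #pragma omp parallel num_threads(4)    /* first region: not nested */
    slot = omp_get_thread_num();

    #pragma omp parallel num_threads(4) reduction(+:violations)
    {                                      /* same thread count, dyn off */
        if (slot != omp_get_thread_num())
            violations += 1;
    }
    return violations;
}
```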



