[mpi-22] [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc
Bronis R. de Supinski
bronis at [hidden]
Thu Jan 31 11:57:59 CST 2008
Jim and Rich:
Then I suggest that fixing this broken semantic decision is
something we should consider for MPI 2.2. It should not
break any existing programs and might even make some existing
ones standards-conforming.
Although I can imagine ways for the MPI implementation to detect
that the one thread making MPI calls is not the main thread, it is
not at all clear to me how it would matter to the implementation.
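
For what it's worth, here is a minimal sketch (my own illustration, not
proposed standard text) of a program that requests MPI_THREAD_FUNNELED
and uses MPI_Is_thread_main to confirm it is on the thread that called
MPI_Init_thread:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, is_main, rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI_THREAD_FUNNELED not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Called from the thread that invoked MPI_Init_thread,
       so this is guaranteed to report true. */
    MPI_Is_thread_main(&is_main);
    if (is_main) {
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d: calling MPI from the main thread\n", rank);
    }

    MPI_Finalize();
    return 0;
}
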
Bronis
On Thu, 31 Jan 2008, Cownie, James H wrote:
> Because that's how it's always been. We're not adding a restriction with
> the change, merely clarifying the existing restriction.
>
> -- Jim
>
> James Cownie <james.h.cownie_at_[hidden]>
> SSG/DPD/PAT
> Tel: +44 117 9071438
>
> ________________________________
>
> From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On
> Behalf Of Richard Graham
> Sent: 31 January 2008 15:51
> To: Mailing list for discussion of MPI 2.1
> Subject: Re: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re:
> Attending, topics, etc
>
>
>
> Why restrict this to a standard-specified thread (the main thread)? Why not
> word it as a single thread, and let the app decide which thread this is,
> based on whatever criteria it wants to use to select this thread?
>
> Rich
>
>
> On 1/31/08 10:27 AM, "Richard Treumann" <treumann_at_[hidden]> wrote:
>
> How about:
> MPI_THREAD_FUNNELED The process may be multi-threaded, but the application
> must ensure that only the main thread makes MPI calls.
>
>
> Dick Treumann - MPI Team/TCEM
> IBM Systems & Technology Group
> Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
> Tele (845) 433-7846 Fax (845) 433-8363
>
>
> mpi-21-bounces_at_[hidden] wrote on 01/31/2008 09:44:08 AM:
>
> > A simpler change which would seem to achieve the desired clarification
> > would be:
> >
> > MPI_THREAD_FUNNELED The process may be multi-threaded, but only the main
> > thread is allowed to make MPI calls.
> >
> > (and you could add
> > If other threads make MPI calls, the behavior is undefined.
> > if you want to be verbose about it).
> >
> > -- Jim
> >
> > James Cownie <james.h.cownie_at_[hidden]>
> > SSG/DPD/PAT
> > Tel: +44 117 9071438
> >
> >
> >
> >
> > > -----Original Message-----
> > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On
> > > Behalf Of Rolf Rabenseifner
> > > Sent: 31 January 2008 14:31
> > > To: mpi-21_at_[hidden]
> > > Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending,
> > > topics, etc
> > >
> > > This is a proposal for MPI 2.1, Ballot 4.
> > >
> > > I'm asking especially Greg Lindahl and the participants of the
> > > 2007 email discussion to review this proposal.
> > >
> > > This is a follow up to:
> > > Which thread is the funneled thread?
> > > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
> > > with mail discussion in
> > > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/funneled/
> > > ___________________________________
> > >
> > > Proposal:
> > > MPI-2.0 Sect. 8.7.3, MPI_Init_thread, page 196, lines 25-26 read:
> > >
> > > MPI_THREAD_FUNNELED The process may be multi-threaded, but only
> > > the main thread will make MPI calls (all MPI calls are "funneled"
> > > to the main thread).
> > >
> > > but should read:
> > >
> > > MPI_THREAD_FUNNELED The process may be multi-threaded, but only
> > > the main thread will make MPI calls (all MPI calls are "funneled"
> > > to the main thread, e.g., by using the OpenMP directive "master"
> > > in the application program).
> > > ___________________________________
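> > > An illustrative, non-normative sketch of the intended usage pattern
> > > (assuming a hybrid MPI+OpenMP code that funnels a reduction through
> > > the master thread; the funneling is the application's responsibility):
> > >
> > > #include <mpi.h>
> > >
> > > int main(int argc, char **argv)
> > > {
> > >     double sum[4] = {1.0, 2.0, 3.0, 4.0};
> > >     int provided;
> > >
> > >     /* (a real code would check provided >= MPI_THREAD_FUNNELED) */
> > >     MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
> > >
> > >     #pragma omp parallel
> > >     {
> > >         /* ... per-thread computation contributing to sum ... */
> > >
> > >         #pragma omp barrier    /* all threads finished computing  */
> > >         #pragma omp master     /* only the main thread calls MPI  */
> > >         MPI_Allreduce(MPI_IN_PLACE, sum, 4, MPI_DOUBLE,
> > >                       MPI_SUM, MPI_COMM_WORLD);
> > >         #pragma omp barrier    /* "master" has no implied barrier */
> > >     }
> > >
> > >     MPI_Finalize();
> > >     return 0;
> > > }
> > > ___________________________________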
> > > Rationale for this clarification from the email from Greg Lindahl:
> > > The existing document doesn't make it clear that
> > > the MPI user has to funnel the calls to the main thread;
> > > it's not the job of the MPI library. I have seen multiple
> > > MPI users confused by this issue, and when I first read
> > > this section, I was confused by it, too.
> > > ___________________________________
> > >
> > >
> > > Best regards
> > > Rolf
> > >
> > >
> > >
> > >
> > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> > > University of Stuttgart . . . . . . . . .. fax ++49(0)711/685-65832
> > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> > > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)