[Mpi3-hybridpm] Hybrid WG meeting at the Forum
Pavan Balaji
balaji at mcs.anl.gov
Mon Dec 6 17:49:05 CST 2010
Hi Doug,
On 12/06/2010 11:42 AM, Douglas Miller wrote:
> Re: SHMEM ALLOC: I thought we were considering adding a "const char
> *key" param so that shared memory allocations better match OS (e.g.
> POSIX) shmem allocations and also to allow the call to be non-synchronizing.
As I recall, this was still being discussed. Did we decide on adding
the key? Ron?
I'll add it in as a discussion item.
> A) I don't see mention of the exception to MPI threading rules for the
> MPI_HELPER_* calls. Was that intentionally removed? Did I forget a
> discussion of that? I was thinking we need to state to implementers that
> MPI_HELPER_* must be thread safe (in all modes except MPI_THREAD_SINGLE)
> and to users that these calls may be used by any/all threads in any mode
> (except MPI_THREAD_SINGLE). Do we need to discuss this more?
We decided to drop it at the Forum (three or four Forums back, whenever
we had the plenary session). I had accidentally added it in the first
draft, but have removed it since.
Do you mean to say that you do want the modified threading rules back?
> B) We did not finish discussion on whether all communications that are
> started after JOIN must be completed (MPI_WAIT et al.) before JOIN. I
> think this is important, if not for correctness then because it allows
> the implementation to be more efficient, and so far I cannot think of a
> reason one would need to make communications span the LEAVE.
Yes, that's a discussion item as well, though my understanding of folks'
preference was that there shouldn't be any such restriction: the MPI
implementation should be able to choose what it wants to do.
I'll add it in as a discussion item.
> C) here is an example use:
> (if this is not an interesting enough example, let me know what you'd
> like to see)
>
> *Example 1:* OpenMP program with distinct compute/communicate phases
Thanks. This is a good example. I'll add it in. We should also try to
add an example where more than one thread has work.
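For reference, the shape of such an example might look like the sketch
below. This is a non-compilable sketch only: MPI_Helper_join and
MPI_Helper_leave are placeholder names for the proposed helper-threads
interface under discussion in this thread, not standardized MPI, and
compute() is a hypothetical local work function.

```c
/* Sketch of the proposed helper-threads pattern (names are assumptions,
 * not standard MPI). */
#include <mpi.h>
#include <omp.h>

void phase_example(double *sendbuf, double *recvbuf, int n,
                   int left, int right, MPI_Comm comm)
{
    MPI_Request reqs[2];

    #pragma omp parallel
    {
        /* Compute phase: all OpenMP threads do local work. */
        #pragma omp for
        for (int i = 0; i < n; i++)
            sendbuf[i] = compute(sendbuf[i]);  /* compute() is hypothetical */

        /* Communicate phase: otherwise-idle threads "join" so the MPI
         * implementation may use them to progress communication. */
        MPI_Helper_join(comm);                 /* proposed call (assumption) */
        #pragma omp master
        {
            MPI_Irecv(recvbuf, n, MPI_DOUBLE, left,  0, comm, &reqs[0]);
            MPI_Isend(sendbuf, n, MPI_DOUBLE, right, 0, comm, &reqs[1]);
            /* Completed before LEAVE, per the restriction discussed in
             * item (B) above. */
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        }
        #pragma omp barrier
        MPI_Helper_leave(comm);                /* proposed call (assumption) */
    }
}
```

Here all communication started after the join is completed (MPI_Waitall)
before the leave, which is the ordering question raised in item (B).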
-- Pavan
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji