[Mpi3-hybridpm] Meeting agenda for 08/04
Pavan Balaji
balaji at mcs.anl.gov
Tue Aug 4 12:28:03 CDT 2009
All,
Here are the meeting minutes for today's call:
https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MPI3Hybrid/notes-2009-08-04
I might have missed some attendees in that list. If I missed your name,
please add it in.
Thanks,
-- Pavan
On 08/04/2009 01:42 AM, Pavan Balaji wrote:
> Hi all,
>
> Sorry, I forgot to send out a reminder earlier about the meeting later
> this morning. But better late than never -- the telecon is at 11am
> central time on 08/04 (today).
>
> Passcode: 85444
> Domestic dial number: 888-566-1533
> International dial number: 212-547-0383
>
> We got a lot of feedback from the forum, so we have a bunch of things to
> discuss. I've listed this feedback below as agenda items for the
> telecon. Please feel free to add others as needed.
>
> 1. Based on the discussion at the Forum, we need to formalize the
> high-level goals of the working group. We know that the group deals
> with MPI's interactions with other standards, but we need to narrow
> down exactly which models we are considering -- threads, PGAS, ...
>
> 2. Define what model we are assuming for threads. As we discussed
> earlier, we need to specify what properties we expect from the thread
> packages -- e.g., we need threads to make asynchronous progress.
>
> 3. A bunch of proposals assume that the MPI implementation knows what
> the thread-local storage is. This might not be true. MPICH2, for
> example, can be configured with different thread packages, but there
> is no way for it to check whether the user is using the same thread
> package. Maybe we need to add a call that allows the MPI stack to get
> access to the thread ID (or maybe we can specify some other means to
> do this).
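>
> As a rough illustration of what such a call might look like (the
> MPIX_Thread_register/_deregister names and signatures below are purely
> hypothetical, not an actual proposal):
>
>     #include <mpi.h>
>     #include <pthread.h>
>
>     /* Hypothetical interface, for discussion only: the application
>      * hands MPI an identifier for the calling thread, obtained from
>      * whatever thread package the application actually uses, since
>      * the MPI library cannot reliably discover it on its own. */
>     int MPIX_Thread_register(void *thread_id);
>     int MPIX_Thread_deregister(void);
>
>     void *worker(void *arg)
>     {
>         pthread_t self = pthread_self();
>         MPIX_Thread_register(&self);      /* hypothetical call */
>         /* ... MPI communication issued from this thread ... */
>         MPIX_Thread_deregister();         /* hypothetical call */
>         return NULL;
>     }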
>
> 4. Discuss Alexander's proposal on thread
> registration/deregistration. One of the major comments was that
> creating thread-specific communicators for every OpenMP loop is not
> realistic, since OpenMP does not maintain thread IDs persistently.
> We'll probably need an additional API that allows a communicator to be
> created once, with each thread querying its rank from the MPI
> implementation within each OpenMP loop, instead of creating and
> destroying communicators each time.
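>
> The usage pattern under discussion might look roughly like this (the
> MPIX_Comm_thread_rank name is hypothetical, used only to sketch the
> idea; compile with OpenMP enabled, e.g., -fopenmp):
>
>     #include <mpi.h>
>
>     /* Hypothetical query, for illustration only: the communicator is
>      * created once up front, and inside every parallel region each
>      * thread asks the MPI implementation for its own rank instead of
>      * the communicator being rebuilt around each OpenMP loop. */
>     int MPIX_Comm_thread_rank(MPI_Comm comm, int *thread_rank);
>
>     void compute(MPI_Comm thread_comm)
>     {
>         #pragma omp parallel
>         {
>             int trank;
>             MPIX_Comm_thread_rank(thread_comm, &trank);  /* hypothetical */
>             /* ... per-thread MPI communication keyed on trank ... */
>         }
>     }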
>
> 5. Another point that came up is whether resource interactions between
> the different models are being considered, i.e., what resources MPI
> gets vs. what resources OpenMP gets.
>
> 6. Multi-level hybridness: We are taking a piecewise approach to
> hybridness, but we need to allow for multiple stacks to be mixed
> together, e.g., MPI+PGAS+threads.
>
> -- Pavan
>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji