[Mpi3-hybridpm] External interfaces chapter updates

Bronis R. de Supinski bronis at llnl.gov
Mon Nov 1 09:02:34 CDT 2010


Pavan:

Doug's first issue indicates that you need to update
your PDF again. I have fixed that issue (which you
caught earlier) in the current draft in the MPI-3.0
trunk. Please start with the version that is now in
the MPI-3.0-2010-11-draft branch as I have made the
other minor corrections that we have discussed in
that section.

I will look over Doug's other points to see which
should be included in that draft, as well as in your
proposed version once you update it.

Thanks,

Bronis


On Mon, 1 Nov 2010, Douglas Miller wrote:

> Just proofreading again...
>
> Page 11 lines 32-34, this paragraph still says "Advice to implementers" twice.
>
> Page 12 lines 3-4, missing close-paren at "async-signal-safe".
>
> Page 12 lines 46-48, I think MPI_THREAD_FUNNELED should have the same note about excepting helper-thread calls. It can still be allowed, and might even be desirable, to use helper threads in this case (see the sketch below).
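>
> To make the current rule concrete, here is a minimal FUNNELED skeleton
> using only standard MPI and OpenMP; as the text reads today, only the
> main thread may enter the library, and the note I'm asking about would
> carve out an exception for helper threads:
>
>     #include <mpi.h>
>     #include <omp.h>
>
>     int main(int argc, char **argv)
>     {
>         int provided;
>         MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
>
>         #pragma omp parallel
>         {
>             /* all threads may compute here ... */
>             #pragma omp master
>             {
>                 /* ... but only the main thread may call MPI */
>                 MPI_Barrier(MPI_COMM_WORLD);
>             }
>         }
>
>         MPI_Finalize();
>         return 0;
>     }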
>
> Page 14 lines 42.5-48, the wording sounds a little soft, as if the only goal is to pass the linking phase. Should it additionally say something like "must return a meaningful and accurate value"?
>
> Pages 16-17, should we add an "advice to users" note to recommend/remind that all communication between JOIN and LEAVE be self-completing? What I mean is that if a thread does an MPI_ISEND between JOIN and LEAVE, it should also do an MPI_WAIT (or equivalent) on that ISEND before LEAVE, as in the sketch below. Since JOIN and LEAVE have no knowledge of requests, etc., isn't that prudent or even necessary?
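>
> In code, the pattern I'm asking us to recommend looks like this
> (MPIX_Join/MPIX_Leave are placeholder names I'm using for the
> proposal's JOIN and LEAVE calls, not the actual bindings):
>
>     #include <mpi.h>
>
>     void send_within_epoch(void *buf, int count, int dest, MPI_Comm comm)
>     {
>         MPI_Request req;
>
>         MPIX_Join(comm);                  /* hypothetical JOIN */
>         MPI_Isend(buf, count, MPI_BYTE, dest, 0, comm, &req);
>
>         /* complete the request before leaving: JOIN/LEAVE know
>          * nothing about outstanding requests, so an unfinished
>          * ISEND would dangle past the epoch */
>         MPI_Wait(&req, MPI_STATUS_IGNORE);
>
>         MPIX_Leave(comm);                 /* hypothetical LEAVE */
>     }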
>
> Page 17, section 12.5 Shared Memory. Rather than being collective, could these calls mirror the API of something like shm_open(), whereby they take a "key" parameter that uniquely identifies the shared-memory segment? Our experience with DCMF (where we did all shmem allocations in an ordered, synchronized "collective" manner) was that it is fraught with problems and restrictions. We are moving to an API that takes a string "key" so that we need not force such semantics; a sketch of that shape of interface follows below. Are there any OS shmem APIs that require ordered, collective allocation? I know UPC does not use a "key", but wouldn't this allow for better implementations? Are there platforms where these semantics would NOT work? [probably a topic for our meeting]
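>
> For concreteness, this is the shape of interface I have in mind,
> sketched with plain POSIX shm_open()/mmap() rather than any proposed
> MPI binding (on Linux, link with -lrt):
>
>     #include <fcntl.h>
>     #include <sys/mman.h>
>     #include <unistd.h>
>     #include <stddef.h>
>
>     /* any process that presents the same key can attach, in any
>      * order, with no synchronization among the participants;
>      * per POSIX, the key should begin with '/' */
>     void *attach_segment(const char *key, size_t len)
>     {
>         int fd = shm_open(key, O_CREAT | O_RDWR, 0600);
>         if (fd < 0)
>             return NULL;
>         if (ftruncate(fd, (off_t)len) != 0) {   /* size the segment */
>             close(fd);
>             return NULL;
>         }
>         void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
>                        MAP_SHARED, fd, 0);
>         close(fd);                              /* mapping stays valid */
>         return (p == MAP_FAILED) ? NULL : p;
>     }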
>
> [also another topic for the meeting] Should we say something about how to get a communicator of the appropriate ranks for shmem allocation? Many platforms do not support global shared memory (only shmem local to a node), and I don't think MPI has any mechanism for testing or selecting ranks that are node-local; the closest workaround I know of is sketched below.
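>
> The workaround splits by processor name using only standard MPI calls.
> This sketch hashes the name, so two hosts whose names happened to
> collide would be merged incorrectly; a robust version would have to
> exchange and compare the full names:
>
>     #include <mpi.h>
>
>     MPI_Comm node_local_comm(void)
>     {
>         char name[MPI_MAX_PROCESSOR_NAME];
>         int len, rank;
>         unsigned color = 5381;
>
>         MPI_Get_processor_name(name, &len);
>         for (int i = 0; i < len; i++)           /* djb2 string hash */
>             color = color * 33u + (unsigned char)name[i];
>
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>         /* processes with equal colors (same host, barring a
>          * collision) land in the same communicator */
>         MPI_Comm node;
>         MPI_Comm_split(MPI_COMM_WORLD, (int)(color & 0x7fffffffu),
>                        rank, &node);
>         return node;
>     }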
>
> Thanks, Pavan, for doing this integration, by the way.
> I don't know LaTeX myself and can't afford the learning curve right now, so you're really helping out.
>
> _______________________________________________
> Douglas Miller BlueGene Messaging Development
> IBM Corp., Rochester, MN USA Bldg 030-2 A410
> dougmill at us.ibm.com Douglas Miller/Rochester/IBM
>
> Pavan Balaji <balaji at mcs.anl.gov>
> Sent by: mpi3-hybridpm-bounces at lists.mpi-forum.org
> Date: 10/30/2010 07:36 PM
> Reply-To: mpi3-hybridpm at lists.mpi-forum.org
> To: "Bronis R. de Supinski" <bronis at llnl.gov>, mpi3-hybridpm at lists.mpi-forum.org
> Subject: Re: [Mpi3-hybridpm] External interfaces chapter updates
>
> On 10/30/2010 01:49 PM, Bronis R. de Supinski wrote:
>>> I think it's time for us to start dumping in text into the chapter and
>>> start discussing the exact wording. I've included the helper threads and
>>> shared memory extensions proposals into the chapter and uploaded it to
>>> the wiki
>>> (https://svn.mpi-forum.org/trac/mpi-forum-web/attachment/wiki/MPI3Hybrid/ei-2-v0.1.pdf).
>>> Please take a look at it and let me know your comments.
>>
>> I'll try to make a detailed reading next week. In the
>
> I had initially uploaded a change to the MPI_THREAD_SERIALIZED
> semantics that we had decided to drop last time. I've now uploaded
> v0.2 of the document with this correction, as well as the other
> changes suggested by Bronis.
>
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
> _______________________________________________
> Mpi3-hybridpm mailing list
> Mpi3-hybridpm at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm
>
>


