[Mpi3-hybridpm] External interfaces chapter updates
Douglas Miller
dougmill at us.ibm.com
Mon Nov 8 08:55:08 CST 2010
Bronis,
>> >> Page 12 lines 46-48, I think MPI_THREAD_FUNNELED should have the same note
>> >> about excepting helper threads calls? I think it can still be allowed, or
>> >> might be desirable, to use helper threads in this case?
>>
>> I do not understand this comment. Which note? This confusion
>> is probably an issue over which version of Pavan's document
>> you used for page/line numbers. However, I think this comment
>> and all remaining ones only pertain to the version with more
>> significant changes (i.e., the helper threads and shared memory
>> proposals). I don't intend to integrate them into the branch
>> with the small changes yet so I will stop here. Please let
>> me know if I have misinterpreted something.
I was referring to the note on MPI_THREAD_SERIALIZED that says the
MPI_Helper_* calls are an exception to the rule that only one thread makes
MPI calls at a time. I think MPI_THREAD_FUNNELED should perhaps carry the
same, or a similar, exception. We may have discussed this already, but I
don't recall any compelling reason to leave FUNNELED out.
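For reference, the SERIALIZED rule that the note relaxes can be sketched with plain pthreads; `mpi_call_stub` below is a stand-in for an arbitrary MPI call (not a real MPI function), and the mutex enforces the one-call-at-a-time requirement. Under the helper-threads proposal, MPI_Helper_* calls would presumably be exempt from taking this lock:

```c
#include <pthread.h>

/* Stand-in for an arbitrary MPI call; NOT a real MPI function. */
static int call_count = 0;
static void mpi_call_stub(void) { call_count++; }

/* Under MPI_THREAD_SERIALIZED any thread may call MPI, but the
 * application must ensure the calls never overlap, e.g. via a lock. */
static pthread_mutex_t mpi_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&mpi_lock);
    mpi_call_stub();                 /* serialized "MPI" call */
    pthread_mutex_unlock(&mpi_lock);
    return NULL;
}

/* Run nthreads workers (nthreads <= 16); returns how many calls ran. */
int run_serialized(int nthreads) {
    pthread_t t[16];
    for (int i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);
    return call_count;
}
```

With MPI_THREAD_FUNNELED the restriction is stronger still (only the main thread may call MPI at all), which is why exempting helper threads there too seems worth discussing.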
_______________________________________________
Douglas Miller BlueGene Messaging Development
IBM Corp., Rochester, MN USA Bldg 030-2 A410
dougmill at us.ibm.com Douglas Miller/Rochester/IBM
"Bronis R. de Supinski" <bronis at llnl.gov> wrote on 11/05/2010 06:52 PM
to "mpi3-hybridpm at lists.mpi-forum.org" <mpi3-hybridpm at lists.mpi-forum.org>
Subject: Re: [Mpi3-hybridpm] External interfaces chapter updates
Pavan:
Sorry I had to drop out of the call early today; I needed to be somewhere
at noon.
Anyway, I am looking over Doug's comments now.
Pavan and Doug:
Re:
> Pavan:
>
> Doug's first issue indicates that you need to update
> your PDF again. I have fixed that issue (which you
> caught earlier) in the current draft in the MPI-3.0
> trunk. Please start with the version that is now in
> the MPI-3.0-2010-11-draft branch as I have made the
> other minor corrections that we have discussed in
> that section.
Looks good now. Thanks.
> I will look over Doug's other points to see which
> should be included in that draft as well as your
> proposed version once you update it.
>
> Thanks,
>
> Bronis
>
>
> On Mon, 1 Nov 2010, Douglas Miller wrote:
>
>> Just proofreading again...
>>
>> Page 11 lines 32-34, this paragraph still says "Advice to implementers"
>> twice.
This issue should now be fixed in all versions.
>> Page 12 lines 3-4, missing close-paren at "async-signal-safe".
Good catch. Fixing it requires moving the period outside the quotation
marks so it can follow the close-paren. I have made this change and
committed it to the MPI-3.0-2010-11-draft branch, in which we are storing
changes that will not go out in the first release draft because they have
not been discussed/seen by the overall Forum (I know this one seems like
it should be OK to include in that draft, but I am just following my
understanding of the rules; it should get into the next release draft).
>> Page 12 lines 46-48, I think MPI_THREAD_FUNNELED should have the same note
>> about excepting helper threads calls? I think it can still be allowed, or
>> might be desirable, to use helper threads in this case?
I do not understand this comment. Which note? This confusion
is probably an issue over which version of Pavan's document
you used for page/line numbers. However, I think this comment
and all remaining ones only pertain to the version with more
significant changes (i.e., the helper threads and shared memory
proposals). I don't intend to integrate them into the branch
with the small changes yet so I will stop here. Please let
me know if I have misinterpreted something.
Pavan:
Where are you keeping the version with the more significant
changes? Did you have Jeff cut you a branch? We should definitely
do something to keep those working changes available. It would
probably be good if both of us had write access to them in
case something happens and you need someone else to pick it up.
Bronis
>> Page 14 lines 42.5-48, the wording sounds a little soft, as if the only
>> goal is to pass the linking phase. Should it additionally say something
>> like "must return a meaningful and accurate value"?
>>
>> Pages 16-17, should we add an "advice to users" to recommend/remind that
>> all communications between JOIN and LEAVE be self-completing? What I mean
>> is that if a thread does an MPI_ISEND between JOIN and LEAVE, it also
>> does an MPI_WAIT (or equivalent) on that ISEND before LEAVE. Since JOIN
>> and LEAVE have no knowledge of requests, etc., isn't that prudent or even
>> necessary?
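The self-completing pattern Doug is asking the advice-to-users text to recommend might look like the sketch below. JOIN and LEAVE are placeholders for the proposal's helper-thread entry/exit calls, whose final names and signatures are not settled, so this is illustrative only and not compilable against any released MPI:

```
/* Illustrative only: JOIN/LEAVE stand for the proposed helper-thread
 * entry/exit calls; this is not real, compilable MPI code. */
JOIN(...);
MPI_Isend(buf, count, MPI_INT, dest, tag, comm, &req);
/* ... other work by this thread ... */
MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete the request before leaving */
LEAVE(...);
```

The point is that the request opened inside the JOIN/LEAVE region is also completed inside it, so nothing outstanding crosses the LEAVE boundary.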
>>
>> Page 17, section 12.5 Shared Memory. Rather than be collective, could
>> these calls reflect the API of something like shm_open(), whereby they
>> have a "key" parameter that uniquely identifies the segment of shared
>> memory? Our experience with DCMF (where we did all shmem allocations in
>> an ordered, synchronized "collective" manner) was that it is fraught with
>> problems and restrictions. We're moving to an API that takes a string
>> "key" so that we need not force such semantics. Are there any OS shmem
>> APIs that require ordered, collective allocation? I know UPC does not use
>> a "key", but wouldn't this allow for better implementations? Are there
>> platforms where these semantics would NOT work? [probably a topic for our
>> meeting]
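The key-based model Doug describes can be sketched as follows. To keep the example self-contained it uses a file-backed mmap rather than shm_open itself, but the naming semantics are the same: any process presenting the same string key attaches the same segment, with no ordering or collectivity imposed. The `/tmp/seg-` path prefix and the function name are arbitrary choices for this sketch:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Attach (creating if necessary) a shared segment identified by a string
 * key.  shm_open(3) offers the same named-key model; a file-backed mmap
 * is used here so the sketch runs anywhere without librt. */
void *attach_segment(const char *key, size_t len) {
    char path[256];
    snprintf(path, sizeof path, "/tmp/seg-%s", key);
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) return NULL;
    if (ftruncate(fd, (off_t)len) != 0) { close(fd); return NULL; }
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                       /* the mapping survives the close */
    return p == MAP_FAILED ? NULL : p;
}
```

Any two attachers presenting the same key see the same bytes; nothing forces them to call in any particular order, which is exactly the property the collective design gives up.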
>>
>> [also another topic for the meeting] Should we say something about how to
>> get a communicator of appropriate ranks for shmem allocation? Many
>> platforms do not support global shared memory (only shmem local to a
>> node), and I don't think there are any MPI mechanisms for testing or
>> selecting ranks that are node-local.
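One workaround available today, absent a standard mechanism, is to derive an MPI_Comm_split color from the hostname, so ranks on the same node land in the same communicator. The hash below (FNV-1a, folded to a non-negative int as MPI_Comm_split requires) is concrete; the MPI_Comm_split usage is left as a comment since it needs an MPI runtime, and hash collisions between distinct hostnames, while unlikely, would have to be checked in a robust implementation:

```c
#include <stdint.h>

/* Fold a hostname into a non-negative color for MPI_Comm_split using
 * FNV-1a.  Ranks that compute equal colors (same hostname) end up in
 * the same sub-communicator. */
int node_color(const char *host) {
    uint32_t h = 2166136261u;                 /* FNV offset basis */
    for (; *host; host++) {
        h ^= (uint8_t)*host;
        h *= 16777619u;                       /* FNV prime */
    }
    return (int)(h & 0x7fffffff);             /* colors must be >= 0 */
}

/* Usage sketch (requires an MPI runtime, so not compiled here):
 *   char host[256];
 *   gethostname(host, sizeof host);
 *   MPI_Comm_split(MPI_COMM_WORLD, node_color(host), rank, &node_comm);
 */
```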
>>
>> Thanks, Pavan, for doing this integration, by the way. I don't know
>> LaTeX myself and can't afford the learning curve right now, so you're
>> really helping out.
>>
>> _______________________________________________
>> Douglas Miller BlueGene Messaging Development
>> IBM Corp., Rochester, MN USA Bldg 030-2 A410
>> dougmill at us.ibm.com Douglas Miller/Rochester/IBM
>>
>> Pavan Balaji <balaji at mcs.anl.gov> wrote on 10/30/2010 07:36 PM
>> to "Bronis R. de Supinski" <bronis at llnl.gov>,
>> mpi3-hybridpm at lists.mpi-forum.org
>> Subject: Re: [Mpi3-hybridpm] External interfaces chapter updates
>>
>> On 10/30/2010 01:49 PM, Bronis R. de Supinski wrote:
>>>> I think it's time for us to start dumping text into the chapter and
>>>> start discussing the exact wording. I've included the helper threads
>>>> and shared memory extensions proposals in the chapter and uploaded it
>>>> to the wiki
>>>> (https://svn.mpi-forum.org/trac/mpi-forum-web/attachment/wiki/MPI3Hybrid/ei-2-v0.1.pdf).
>>>> Please take a look at it and let me know your comments.
>>>
>>> I'll try to make a detailed reading next week. In the
>>
>> I had initially incorrectly uploaded a change with respect to the
>> MPI_THREAD_SERIALIZED semantics which we decided to drop last time. I've
>> now uploaded v0.2 of the document with this correction as well as the
>> other changes suggested by Bronis.
>>
>> -- Pavan
>>
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
>> _______________________________________________
>> Mpi3-hybridpm mailing list
>> Mpi3-hybridpm at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm
>>
>>
>