[Mpi3-hybridpm] External interfaces chapter updates

Douglas Miller dougmill at us.ibm.com
Mon Nov 1 08:11:17 CDT 2010


Just proofreading again...

Page 11 lines 32-34, this paragraph still says "Advice to implementers"
twice.

Page 12 lines 3-4, missing close-paren at "async-signal-safe".

Page 12 lines 46-48, shouldn't MPI_THREAD_FUNNELED have the same note
excepting helper-thread calls? I think it can still be allowed, and might
even be desirable, to use helper threads in this case.

Page 14 lines 42.5-48, the wording sounds a little soft, as if the only
goal is to pass the linking phase. Should it additionally say something
like "must return a meaningful and accurate value"?

Pages 16-17, should we add an "advice to users" to recommend/remind that
all communication between JOIN and LEAVE be self-completing? What I mean
is that if a thread does an MPI_ISEND between JOIN and LEAVE, it also does
an MPI_WAIT (or equivalent) on that ISEND before LEAVE. Since JOIN and
LEAVE have no knowledge of requests, etc., isn't that prudent or even
necessary? (See the sketch just below.)
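
To illustrate, something like this is what I have in mind (just a sketch;
the JOIN/LEAVE calls are placeholders for whatever names the helper-threads
proposal ends up using, and only MPI_Isend/MPI_Wait are real MPI calls):

    #include <mpi.h>

    void helper_thread_work(void *buf, int count, int dest, MPI_Comm comm)
    {
        MPI_Request req;

        /* ... helper-thread JOIN call from the proposal goes here ... */

        MPI_Isend(buf, count, MPI_BYTE, dest, 0 /* tag */, comm, &req);

        /* Complete the request before leaving: JOIN and LEAVE know
         * nothing about requests, so nothing started in this epoch
         * should still be outstanding when the thread does LEAVE.   */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        /* ... helper-thread LEAVE call from the proposal goes here ... */
    }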

Page 17, section 12.5 Shared Memory. Rather than being collective, could
these calls reflect the API of something like shm_open(), where they take a
"key" parameter that uniquely identifies the segment of shared memory? Our
experience with DCMF (where we did all shmem allocations in an ordered,
synchronized "collective" manner) was that it is fraught with problems and
restrictions. We're moving to an API that takes a string "key" so that we
need not force such semantics (see the sketch below). Are there any OS
shmem APIs that require ordered, collective allocation? I know UPC does not
use a "key", but wouldn't this allow for better implementations? Are there
platforms where these semantics would NOT work? [probably a topic for our
meeting]
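
For reference, here is roughly what I mean by a key-based interface, using
plain POSIX shm_open()/mmap() (error handling omitted; this is only to show
that the processes agree on a string key rather than on an ordering of
collective calls):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    void *attach_shared(const char *key, size_t size)
    {
        /* Any process that uses the same key (e.g. "/my_segment") gets
         * the same segment, regardless of the order in which the
         * processes arrive.                                           */
        int fd = shm_open(key, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
        ftruncate(fd, (off_t)size);   /* size the segment */
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        close(fd);                    /* the mapping survives the close */
        return p;
    }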

[also another topic for the meeting] Should we say something about how to
get a communicator of appropriate ranks for shmem allocation? Many
platforms do not support global shared memory (only shmem local to a node),
and I don't think there are any MPI mechanisms for testing or selecting
ranks that are node-local (a possible workaround is sketched below).
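
One workaround that works today (again, just a sketch; nothing like this is
in the proposal) is to compare processor names and split on the result,
then hand the resulting communicator to the shmem allocation calls:

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    MPI_Comm node_local_comm(MPI_Comm comm)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];
        memset(name, 0, sizeof(name));

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        MPI_Get_processor_name(name, &len);

        /* Gather every rank's processor name. */
        char *all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
        MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                      all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, comm);

        /* Color = lowest rank with the same processor name, so all
         * ranks on one node land in the same sub-communicator.      */
        int color = rank;
        for (int i = 0; i < size; i++) {
            if (strcmp(all + (size_t)i * MPI_MAX_PROCESSOR_NAME, name) == 0) {
                color = i;
                break;
            }
        }
        free(all);

        MPI_Comm node_comm;
        MPI_Comm_split(comm, color, rank, &node_comm);
        return node_comm;
    }

This assumes MPI_Get_processor_name returns the same string for all ranks
on a node, which seems to hold everywhere I've looked but is not guaranteed
by the standard; hence my question about a real mechanism.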

Thanks, Pavan, for doing this integration, by the way. I don't know LaTeX
myself and can't afford the learning curve right now, so you're really
helping out.

_______________________________________________
Douglas Miller                  BlueGene Messaging Development
IBM Corp., Rochester, MN USA                     Bldg 030-2 A410
dougmill at us.ibm.com               Douglas Miller/Rochester/IBM


                                                                           
On 10/30/2010 07:36 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote to
"Bronis R. de Supinski" <bronis at llnl.gov> and mpi3-hybridpm at lists.mpi-forum.org:

On 10/30/2010 01:49 PM, Bronis R. de Supinski wrote:
>> I think it's time for us to start dumping in text into the chapter and
>> start discussing the exact wording. I've included the helper threads and
>> shared memory extensions proposals into the chapter and uploaded it to
>> the wiki
>> (https://svn.mpi-forum.org/trac/mpi-forum-web/attachment/wiki/MPI3Hybrid/ei-2-v0.1.pdf).
>> Please take a look at it and let me know your comments.
>
> I'll try to make a detailed reading next week. In the

I had initially, and incorrectly, uploaded a change to the
MPI_THREAD_SERIALIZED semantics that we decided to drop last time. I've
now uploaded v0.2 of the document with this correction, as well as the
other changes suggested by Bronis.

  -- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji