[Mpi3-hybridpm] Reminder for the hybrid telecon tomorrow

Pavan Balaji balaji at mcs.anl.gov
Mon Apr 22 17:13:44 CDT 2013


I still need to read the rest of the emails, so this reply might not
capture the entire context --

Some MPI implementations might need an explicit *real* finalize at some
point, if they need to clean up OS resources.  For most MPI
implementations this might not be a problem.  For some MPI
implementation + compiler combinations, we can register an "atexit()"
callback with the C runtime to do the cleanup at process exit.  But
this is something we need to consider explicitly before making a
decision.
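
To make this concrete, here's a minimal sketch of the atexit() approach;
the internal names (real_finalize, mpi_ref_count) are hypothetical:

#include <stdlib.h>

/* Hypothetical implementation internals -- names illustrative only. */
static int mpi_ref_count = 0;
static int cleanup_registered = 0;

/* The *real* finalize: release OS resources (file descriptors, shared
 * memory segments, registered buffers) exactly once, at process exit. */
static void real_finalize(void)
{
    /* close endpoints, unregister memory, remove temp files, ... */
}

int MPI_Init(int *argc, char ***argv)
{
    if (mpi_ref_count++ == 0 && !cleanup_registered) {
        /* Defer OS-level cleanup to process exit rather than to the
         * last MPI_Finalize, so MPI can be re-initialized later. */
        atexit(real_finalize);
        cleanup_registered = 1;
    }
    /* ... rest of the usual initialization ... */
    return 0;  /* MPI_SUCCESS */
}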

 -- Pavan

On 04/22/2013 05:09 PM US Central Time, Schulz, Martin wrote:
> Would we actually specify what "not being initialized" means anymore, beyond that users are no longer allowed to call MPI routines until they call MPI_Init again? If that's all it means, the impact on the implementation should be minimal, since it could just keep MPI "alive" under the hood.
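>
> A minimal sketch of the "keep MPI alive under the hood" idea -- the
> internal names here are hypothetical:
>
>     /* MPI_Finalize flips only the user-visible state; the network
>      * connections and other runtime state stay up, so a later
>      * MPI_Init can succeed cheaply. */
>     static int user_initialized = 0;
>
>     int MPI_Finalize(void)
>     {
>         user_initialized = 0;  /* MPI calls are erroneous again */
>         /* runtime resources intentionally left alive */
>         return 0;              /* MPI_SUCCESS */
>     }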
> 
> Martin
>  
> 
> On Apr 22, 2013, at 12:08 PM, Jim Dinan <dinan at mcs.anl.gov> wrote:
> 
>> I think the new semantics for MPI_Init() would be backward compatible, but we could also add a new routine: MPI_Init_awesome().  :)
>>
>> On 4/22/13 11:42 AM, Jeff Squyres (jsquyres) wrote:
>>> Sayantan and I talked about this on the phone today.  It seems like re-initialization might be a good idea, but a good first step might be asking all the hardware/software vendors if there's a technical reason they don't allow re-initialization today (other than "because it's not required by MPI").  I.e., I think that some APIs/networks (like PSM) don't allow re-initialization -- is there a technical reason for that, or is it just an overcome-able software limitation?
>>>
>>> I'm not a big fan of re-defining the default behavior of MPI_INIT, however -- I think there might be a big impact on legacy applications.
>>>
>>>
>>>
>>> On Apr 19, 2013, at 3:06 PM, Jim Dinan <dinan at mcs.anl.gov> wrote:
>>>
>>>> Another issue that we ought to consider is that MPI currently can only be initialized/finalized once.  This requirement breaks the "MPI is a library" semantic and leads to some of the nastiness Jeff S. mentioned below.  I think we should re-evaluate whether this restriction is really required, or whether it's just convenient for implementers.
>>>>
>>>> Another suggestion on this front -- Why not modify the semantics of MPI_Init to match what we want?
>>>>
>>>> MPI_Init:
>>>>
>>>> - Always THREAD_MULTIPLE
>>>> - Always a thread safe call
>>>> - Ref-counted
>>>> - Can be used to initialize/finalize MPI multiple times
>>>> - Cannot be combined with MPI_Init_thread
>>>>
>>>> If apps really care about getting rid of threading overhead, then they should use MPI_Init_thread() and use the thread level argument to give a performance hint.
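>>>>
>>>> To illustrate, here's a sketch of how a ref-counted, always
>>>> thread-safe MPI_Init/MPI_Finalize pair could be implemented -- the
>>>> locking scheme and names are hypothetical, not proposed text:
>>>>
>>>> #include <pthread.h>
>>>>
>>>> static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
>>>> static int init_count = 0;
>>>>
>>>> int MPI_Init(int *argc, char ***argv)
>>>> {
>>>>     pthread_mutex_lock(&init_lock);
>>>>     if (init_count++ == 0) {
>>>>         /* first (or re-) initialization: bring the runtime up
>>>>          * at THREAD_MULTIPLE */
>>>>     }
>>>>     pthread_mutex_unlock(&init_lock);
>>>>     return 0;  /* MPI_SUCCESS */
>>>> }
>>>>
>>>> int MPI_Finalize(void)
>>>> {
>>>>     pthread_mutex_lock(&init_lock);
>>>>     if (--init_count == 0) {
>>>>         /* last matching finalize: tear down (or park) the runtime */
>>>>     }
>>>>     pthread_mutex_unlock(&init_lock);
>>>>     return 0;  /* MPI_SUCCESS */
>>>> }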
>>>>
>>>> ~Jim.
>>>>
>>>> On 4/19/13 1:11 PM, Jeff Squyres (jsquyres) wrote:
>>>>> Points to think about for the Monday teleconf...
>>>>>
>>>>> With regards to ref-counted MPI_INIT / MPI_FINALIZE (https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/302):
>>>>>
>>>>> PROBLEMS IT SOLVES:
>>>>> - multiple, separate libraries in a single process needing access to MPI
>>>>>   ==> works best when all entities call MPI_INIT* at the beginning of time, work for a "long" period of time, and then call MPI_FINALIZE at the end of time (i.e., there's no race condition -- see below)
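>>>>>
>>>>> For example, under the proposed ref-counted semantics the following
>>>>> would be legal (a sketch; libA/libB are hypothetical, mutually
>>>>> unaware libraries living in one process):
>>>>>
>>>>> #include <mpi.h>
>>>>>
>>>>> void libA_init(void) { MPI_Init(NULL, NULL); }
>>>>> void libB_init(void) { MPI_Init(NULL, NULL); }
>>>>> void libA_fini(void) { MPI_Finalize(); }
>>>>> void libB_fini(void) { MPI_Finalize(); }
>>>>>
>>>>> int main(void)
>>>>> {
>>>>>     libA_init();  /* ref count 0 -> 1: real initialization */
>>>>>     libB_init();  /* ref count 1 -> 2: no-op */
>>>>>     /* ... long-running work using both libraries ... */
>>>>>     libA_fini();  /* ref count 2 -> 1: no-op */
>>>>>     libB_fini();  /* ref count 1 -> 0: real finalize */
>>>>>     return 0;
>>>>> }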
>>>>>
>>>>> PROBLEMS IT DOES NOT SOLVE:
>>>>> - Implementation not providing MPI_THREAD_MULTIPLE support (e.g., if the separate libraries are in different threads and someone already initialized MPI with THREAD_SINGLE, other threads can't know if it's safe to call MPI_INIT* or not)
>>>>> - The "finalize" problem (i.e., can't guarantee to know if MPI has been finalized or not -- there's a race between calling MPI_FINALIZED, seeing that MPI is not finalized, and then calling MPI_INIT)
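>>>>>
>>>>> Spelled out, the race looks like this (a sketch; library_init is a
>>>>> hypothetical third-party entry point, and the matching
>>>>> MPI_Initialized check is omitted for brevity):
>>>>>
>>>>> #include <mpi.h>
>>>>>
>>>>> void library_init(void)
>>>>> {
>>>>>     int finalized;
>>>>>     MPI_Finalized(&finalized);
>>>>>     if (!finalized) {
>>>>>         /* another thread/library may call MPI_Finalize right here */
>>>>>         MPI_Init(NULL, NULL);  /* erroneous if that happened */
>>>>>     }
>>>>> }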
>>>>>
>>>>> PROBLEMS IT CREATES:
>>>>> - Will need to change the definition of "main thread"
>>>>> - Possibly also need to change the definitions of MPI_THREAD_SERIALIZED and MPI_THREAD_FUNNELED
>>>>>
>>>>> OPEN QUESTIONS:
>>>>> - Do we still need to keep the restriction that the thread that initializes MPI is the same thread that finalizes MPI?
>>>>> - Should we allow re-initialization?  This effectively solves some (but not all) of the problems that have been discussed, but probably opens a new can of worms...
>>>>>
>>>>>
>>>>>
>>>>> On Apr 12, 2013, at 8:27 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>>>>>
>>>>>> All,
>>>>>>
>>>>>> Jim and I will be late for the April 22nd meeting.  So we decided to
>>>>>> move the endpoints discussion to the telecon after this one.
>>>>>>
>>>>>> I chatted with Jeff Squyres yesterday.  He'll be driving the April 22nd
>>>>>> telecon to discuss more details on the ref-counted init/finalize issue.
>>>>>> He'll be sending out some notes before the call for discussion.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> -- Pavan
>>>>>>
>>>>>> On 04/08/2013 11:47 AM US Central Time, Pavan Balaji wrote:
>>>>>>>
>>>>>>> The next call will be on April 22nd, 11am central.  Same telecon number.
>>>>>>>
>>>>>>> -- Pavan
>>>>>>>
>>>>>>> On 04/08/2013 11:42 AM US Central Time, Jim Dinan wrote:
>>>>>>>> Meeting notes are on the wiki:
>>>>>>>>
>>>>>>>> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MPI3Hybrid/notes-2013-04-08
>>>>>>>>
>>>>>>>>
>>>>>>>> ~Jim.
>>>>>>>>
>>>>>>>> On 4/7/13 12:45 PM, Pavan Balaji wrote:
>>>>>>>>> All,
>>>>>>>>>
>>>>>>>>> This is a reminder that we'll have our hybrid telecon tomorrow at 11am.
>>>>>>>>>  Here's the telecon information:
>>>>>>>>>
>>>>>>>>> International dial-in number: 1-719-234-7800
>>>>>>>>> Domestic dial-in number: 1-888-850-4523
>>>>>>>>> Participant Passcode: 314159
>>>>>>>>>
>>>>>>>>> The main item we'll be discussing is Jeff Squyres' ref-count proposal.
>>>>>>>>>
>>>>>>>>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/302
>>>>>>>>>
>>>>>>>>>  -- Pavan
>>>>>>>>>
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Pavan Balaji
>>>>>> http://www.mcs.anl.gov/~balaji
>>>>>
>>>>>
>>>
>>>
> 
> ________________________________________________________________________
> Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulzm
> CASC @ Lawrence Livermore National Laboratory, Livermore, USA
> 
> 
> 
> 
> 

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji


