[Mpi3-abi] For the April MPI Forum Meeting

Terry Dontje Terry.Dontje at [hidden]
Thu Apr 24 11:01:48 CDT 2008



Am I the only one getting an "all circuits are busy" message from the 
number below?

--td

Jeff Brown wrote:
> all,
>
> I scheduled a telecon to discuss status and get somewhat organized 
> for the meeting:
>
> Thursday April 24, 10:00 MDT
> local number:      606-1201 (6-1201)
> toll free number:  888-343-0702
>
> I'll send out some slides for the 5-minute briefing for the group.
>
> I'm having a hard time finding time to devote to this, but I'll take 
> a first cut at the OpenMPI and LAMPI analysis prior to the telecon.  We 
> need someone to look at MPICH, and the vendor implementations need to 
> be posted.
>
> Jeff
>
>
>
> At 10:03 AM 4/16/2008, Jeff Brown wrote:
>> Yes, it's time to put some cycles toward this.  Let's start
>> populating the matrix and have a telecon toward the end of next
>> week.  I'll schedule a WG working session at the meeting.
>>
>> I'll take a look at OpenMPI and LAMPI, the two primary MPI
>> implementations we use at LANL, and post to the wiki by the end of
>> the week.  Others, please do the same for your MPI implementation
>> (especially the vendors).  Overlap is OK.
>>
>> I'll send out specifics on the telecon.  Let's shoot for Thursday
>> April 24, 9:00 A.M. MST.
>>
>> Jeff
>>
>> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote:
>>> Folks,
>>>
>>> Are we planning on a WG update to report at the April 28-30 Forum
>>> meeting? We have started the process of identifying the mpi.h
>>> differences, but I don't think we have synthesized the data yet, or
>>> come to any conclusions/next steps... Or did I miss something here?
>>>
>>> Thanx!
>>> Kannan
>>>
>>> -----Original Message-----
>>> From: mpi3-abi-bounces_at_[hidden]
>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis
>>> Sent: Monday, March 17, 2008 4:18 AM
>>> To: MPI 3.0 ABI working group
>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
>>>
>>>
>>> I'm not sure how best to express this, but there are a couple of
>>> things that occur to me that might be important:
>>>
>>> 1. The size of the handle types (cf. size of a pointer perhaps?)
>>>
>>> 2. Should we add some sort of table describing the current situation
>>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g.
>>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is
>>> burned into the binary; whereas OpenMPI uses extern pointers - i.e.
>>> ompi_mpi_comm_world is in the initialized data section of libmpi.so,
>>> and the value is resolved at (dynamic) link time.
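>>>
>>> To make (2) concrete, here is a minimal sketch of the two styles
>>> (hypothetical declarations for illustration, not copied verbatim
>>> from either implementation's mpi.h):
>>>
>>>   /* MPICH2 style: the handle is an integer, and the predefined
>>>    * communicator is a compile-time constant, so the value is
>>>    * burned into the application binary. */
>>>   typedef int MPI_Comm;
>>>   #define MPI_COMM_WORLD ((MPI_Comm)0x44000000)
>>>
>>>   /* OpenMPI style: the handle is a pointer to an object exported
>>>    * from libmpi.so, so the value is resolved at dynamic link
>>>    * time rather than baked into the binary. */
>>>   typedef struct ompi_communicator_t *MPI_Comm;
>>>   extern struct ompi_communicator_t ompi_mpi_comm_world;
>>>   #define MPI_COMM_WORLD (&ompi_mpi_comm_world)
>>>
>>> This also ties back to (1): sizeof(MPI_Comm) is sizeof(int) under
>>> the first scheme and pointer-sized under the second.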
>>>
>>> Cheers,
>>>
>>> Edric.
>>>
>>>> -----Original Message-----
>>>> From: mpi3-abi-bounces_at_[hidden]
>>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
>>>> Sent: Thursday, March 13, 2008 10:11 PM
>>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden]
>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
>>>>
>>>> I propose a way we can make progress ...
>>>>
>>>> Let's start populating a matrix (excel spreadsheet) with a column for
>>>> each MPI implementation, and rows for the various MPI datatypes,
>>>> constants, etc. where the internal implementations vary.  I'll kick
>>>> it off for OpenMPI and send it out.
>>>>
>>>> The last column of the matrix can be "ABI" where we propose a common
>>>> approach across the implementations.
>>>>
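>>>> For illustration, a couple of rows might look like the following
>>>> (the MPICH2 and OpenMPI entries reflect the compile-time-constant
>>>> vs. extern-pointer schemes noted earlier in the thread; treat the
>>>> values as placeholders until verified against each mpi.h):
>>>>
>>>>   Item             | MPICH2      | OpenMPI              | ABI
>>>>   -----------------+-------------+----------------------+-----
>>>>   MPI_COMM_WORLD   | 0x44000000  | &ompi_mpi_comm_world | TBD
>>>>   sizeof(MPI_Comm) | sizeof(int) | sizeof(void *)       | TBD
>>>>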
>>>> A couple of driving principles:
>>>> 1. the ABI solution shouldn't negatively impact quality of
>>>> implementation
>>>> 2. minimize platform-specific solutions
>>>>
>>>> I'd like to see if we can produce a single ABI that spans platforms.
>>>>
>>>> comments?
>>>>
>>>> Jeff
>>>>
>>>>
>
>
> _______________________________________________
> mpi3-abi mailing list
> mpi3-abi_at_[hidden]
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi


