[Mpi3-abi] For the April MPI Forum Meeting

Jeff Squyres jsquyres at [hidden]
Thu Apr 24 06:51:58 CDT 2008



Note that Open MPI also varies some of these definitions by platform
and compiler (MPI_Aint is an obvious one); I suspect that MPICH* may,
too...?
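
A quick way to see this for any given build (a minimal probe sketch
only, not taken from any particular test suite):

/* Print the width of MPI_Aint for whatever implementation mpicc
 * points at.  Per the MPICH2 diffs quoted below, expect 8 bytes on
 * glnxa64 (typedef long) and 4 bytes on glnx86 (typedef int). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    printf("sizeof(MPI_Aint) = %u\n", (unsigned)sizeof(MPI_Aint));
    MPI_Finalize();
    return 0;
}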

On Apr 24, 2008, at 6:12 AM, Edric Ellis wrote:

> I've taken our MPICH2-1.0.3 build and extracted the relevant
> definitions into the attached spreadsheet.
>
> I wrote a crufty script to pull these definitions out; for interest,
> here's how MPICH2 changes by platform and by word size:
>
> $ diff glnxa64.csv glnx86.csv
> 8c8
> < MPI_Aint,typedef long
> ---
>> MPI_Aint,typedef int
> 30c30
> < MPI_BSEND_OVERHEAD,95
> ---
>> MPI_BSEND_OVERHEAD,59
> 197c197
> < MPI_LONG,(0x4c000807)
> ---
>> MPI_LONG,(0x4c000407)
> 200c200
> < MPI_LONG_DOUBLE,(0x4c00100c)
> ---
>> MPI_LONG_DOUBLE,(0x4c000c0c)
> 204c204
> < MPI_UNSIGNED_LONG,(0x4c000808)
> ---
>> MPI_UNSIGNED_LONG,(0x4c000408)
>
> $ diff win32.csv glnx86.csv
> 200c200
> < MPI_LONG_DOUBLE,(0x4c00080c)
> ---
>> MPI_LONG_DOUBLE,(0x4c000c0c)
> 214c214
> < MPI_WCHAR,(0x4c00020e)
> ---
>> MPI_WCHAR,(0x4c00040e)
> 218,219c218,219
> < MPI_2COMPLEX,(0x4c001024)
> < MPI_2DOUBLE_COMPLEX,(0x4c002025)
> ---
>> MPI_2COMPLEX,(MPI_DATATYPE_NULL)
>> MPI_2DOUBLE_COMPLEX,(MPI_DATATYPE_NULL)
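>
> (Incidentally, the second byte of the MPICH2 datatype handles above
> appears to encode the type's size in bytes: 0x08 for the 8-byte long
> on glnxa64 vs. 0x04 on glnx86, and 0x02 for Windows' 2-byte wchar_t
> vs. 0x04 on Linux.  That is exactly why the numeric handle values
> drift with platform and word size.)
>
> For reference, a minimal sketch of the kind of extraction program I
> mean (hypothetical, not the actual script): compile it with each
> implementation's mpicc and redirect stdout to a per-platform CSV.
> Note it assumes the handles are integer constants, which holds for
> MPICH2 but not for OpenMPI's pointer-valued handles:
>
> /* Emit selected mpi.h constants as name,value CSV rows. */
> #include <stdio.h>
> #include <mpi.h>
>
> #define ROW(sym) printf(#sym ",(0x%08x)\n", (unsigned)(sym))
>
> int main(void)
> {
>     printf("MPI_BSEND_OVERHEAD,%d\n", MPI_BSEND_OVERHEAD);
>     ROW(MPI_LONG);
>     ROW(MPI_LONG_DOUBLE);
>     ROW(MPI_UNSIGNED_LONG);
>     ROW(MPI_WCHAR);
>     return 0;
> }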
>
> Cheers,
>
> Edric.
>
>> -----Original Message-----
>> From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-
>> bounces_at_[hidden]] On Behalf Of Supalov, Alexander
>> Sent: Thursday, April 24, 2008 9:16 AM
>> To: MPI 3.0 ABI working group
>> Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting
>>
>> Thanks. Can we add MPICH2? It's different from MPICH. Also, there's
>> slight drift between MPICH2 versions. Should we address this at all,
>> or just go for the latest and greatest (1.0.7)?
>>
>> -----Original Message-----
>> From: mpi3-abi-bounces_at_[hidden]
>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
>> Sent: Thursday, April 24, 2008 1:13 AM
>> To: MPI 3.0 ABI working group
>> Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting
>>
>> Attached is the beginning of a spreadsheet to capture the detailed
>> differences among the mpi.h implementations, with the OpenMPI column
>> populated.  I'll get a start on LAMPI before the telecon.
>>
>> I'll post to the wiki.
>>
>> Talk to y'all in the morning (well, my morning).
>>
>> Jeff
>>
>> At 01:42 PM 4/21/2008, Jeff Brown wrote:
>>> all,
>>>
>>> I scheduled a telecon to discuss status and get somewhat organized
>>> for the meeting:
>>>
>>> Thursday April 24, 10:00 MDT
>>> local number:      606-1201(6-1201)
>>> toll free number:  888 343-0702.
>>>
>>> I'll send out some slides for the 5 minute briefing for the group.
>>>
>>> I'm having a hard time finding time to devote to this, but I'll have
>>> a cut at the OpenMPI and LAMPI analysis prior to the telecon.  We
>>> need someone to look at MPICH, and the vendor implementations need  
>>> to
>>> be posted.
>>>
>>> Jeff
>>>
>>>
>>>
>>> At 10:03 AM 4/16/2008, Jeff Brown wrote:
>>>> Yes, it's time to put some cycles toward this.  Let's start
>>>> populating the matrix and have a telecon toward the end of next
>>>> week.  I'll schedule a WG working session at the meeting.
>>>>
>>>> I'll take a look at OpenMPI and LAMPI, the two primary MPI
>>>> implementations we use at LANL, and post to the wiki by the end of
>>>> the week.  Others, please do the same for your MPI implementation
>>>> (especially the vendors).  Overlap is OK.
>>>>
>>>> I'll send out specifics on the telecon.  Let's shoot for Thursday
>>>> April 24, 9:00 A.M. MST.
>>>>
>>>> Jeff
>>>>
>>>> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote:
>>>>> Folks,
>>>>>
>>>>> Are we planning on a WG update to report at the April 28-30 Forum
>>>>> meeting? We have started the process of identifying the mpi.h
>>>>> differences, but I don't think we have synthesized the data yet, or
>>>>> come to any conclusions/next steps... Or did I miss something  
>>>>> here?
>>>>>
>>>>> Thanx!
>>>>> Kannan
>>>>>
>>>>> -----Original Message-----
>>>>> From: mpi3-abi-bounces_at_[hidden]
>>>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis
>>>>> Sent: Monday, March 17, 2008 4:18 AM
>>>>> To: MPI 3.0 ABI working group
>>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
>>>>>
>>>>>
>>>>> I'm not sure how best to express this, but there are a couple of
>>>>> things that occur to me that might be important:
>>>>>
>>>>> 1. The size of the handle types (cf. size of a pointer perhaps?)
>>>>>
>>>>> 2. Should we add some sort of table describing the current situation
>>>>> as to how applications pick up the value of e.g. MPI_COMM_WORLD?
>>>>> E.g. MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value
>>>>> is burned into the binary; whereas OpenMPI uses extern pointers,
>>>>> i.e. ompi_mpi_comm_world is in the initialized data section of
>>>>> libmpi.so, and the value is resolved at (dynamic) link time.
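>>>>>
>>>>> To make the contrast concrete, a hedged sketch of the two header
>>>>> styles (the MPICH2 define matches the value quoted above; the
>>>>> OpenMPI declarations are paraphrased from the description, not
>>>>> copied from their headers, and the selector macro is hypothetical):
>>>>>
>>>>> #ifdef MPICH2_STYLE
>>>>> /* The handle is an integer literal, so its value is baked into
>>>>>  * the application binary at compile time. */
>>>>> typedef int MPI_Comm;
>>>>> #define MPI_COMM_WORLD ((MPI_Comm)0x44000000)
>>>>> #else
>>>>> /* OpenMPI style: the handle is the address of a global object in
>>>>>  * libmpi.so, resolved at dynamic link time. */
>>>>> struct ompi_communicator_t;
>>>>> typedef struct ompi_communicator_t *MPI_Comm;
>>>>> extern struct ompi_communicator_t ompi_mpi_comm_world;
>>>>> #define MPI_COMM_WORLD (&ompi_mpi_comm_world)
>>>>> #endif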
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Edric.
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: mpi3-abi-bounces_at_[hidden]
>>>>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
>>>>>> Sent: Thursday, March 13, 2008 10:11 PM
>>>>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden]
>>>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
>>>>>>
>>>>>> I propose a way we can make progress ...
>>>>>>
>>>>>> Let's start populating a matrix (an Excel spreadsheet) with a
>>>>>> column for each MPI implementation, and rows for the various MPI
>>>>>> datatypes, constants, etc. where the internal implementations vary.
>>>>>> I'll kick it off for OpenMPI and send it out.
>>>>>>
>>>>>> The last column of the matrix can be "ABI", where we propose a
>>>>>> common approach across the implementations.
>>>>>>
>>>>>> A couple of driving principles:
>>>>>> 1. The ABI solution shouldn't negatively impact quality of
>>>>>> implementation.
>>>>>> 2. Minimize platform-specific solutions.
>>>>>>
>>>>>> I'd like to see if we can produce a single ABI that spans
>>>>>> platforms.
>>>>>>
>>>>>> Comments?
>>>>>>
>>>>>> Jeff
>>>>>>
>>>>>>
> <MPI ABI + MPICH2.xls>


-- 
Jeff Squyres
Cisco Systems



