[Mpi3-abi] Minutes of MPI ABI working group 2/21/08 meeting

Erez Haba erezh at [hidden]
Fri Feb 29 11:12:18 CST 2008



Hi all,

I don't think we can avoid breaking some MPI implementations' backward compatibility. However, that should be a one-time thing, and going forward the ABI should maintain backward compatibility. As suggested before, there are solutions for the affected implementations, such as providing an adaptation layer from their interface to the ABI interface, or vice versa.
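
As a minimal sketch of such an adaptation layer (all ABI_MPI_*, hist_*, and abi_to_hist_* names below are hypothetical, invented for illustration): the implementation keeps its historic entry points, and each ABI entry point becomes a thin wrapper that translates handles and forwards.

    /* Hypothetical adaptation layer: a standard ABI entry point
       translates handle representations and tail-calls the
       implementation's historic entry point. */
    #include "mpi_abi.h"    /* standardized ABI header (hypothetical) */
    #include "mpi_hist.h"   /* implementation's historic header (hypothetical) */

    int ABI_MPI_Send(void *buf, int count, ABI_MPI_Datatype datatype,
                     int dest, int tag, ABI_MPI_Comm comm)
    {
        /* per call: one handle lookup per handle argument and one
           extra function call, which the compiler can emit as a
           tail call */
        return hist_MPI_Send(buf, count,
                             abi_to_hist_datatype(datatype),
                             dest, tag,
                             abi_to_hist_comm(comm));
    }

Going the other direction (the historic interface layered over the ABI) is symmetric.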

Thanks,
.Erez

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Supalov, Alexander
Sent: Friday, February 29, 2008 3:54 AM
To: mpi3-abi_at_[hidden]
Subject: Re: [Mpi3-abi] Minutes of MPI ABI working group 2/21/08 meeting

Hi,

Some math libraries are indeed built for several MPIs (think of the
Intel Math Kernel Library, for example - it includes ScaLAPACK, too).

Maintaining backward compatibility is a must for industrial
implementations; thanks for bringing this up. I hope the ABI design
will alleviate the issues you raised as much as possible. If not,
we're talking about MPI-3, and recompilation may become a viable
option at some point.

Bearing in mind the IMPI story, you're right: we should be united in
offering the standard ABI to make it fly. Once ISVs and end users see
compelling value, they will convert sooner or later, and help each
other in the process.

Best regards.

Alexander

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden]
[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
Sent: Tuesday, February 26, 2008 5:54 PM
To: mpi3-abi_at_[hidden]
Subject: Re: [Mpi3-abi] Minutes of MPI ABI working group 2/21/08 meeting

These are good points - thanks.

I hadn't considered the backward compatibility issue.

Jeff

At 09:14 AM 2/26/2008, you wrote:
>Since I was taking notes, I did not voice many of my thoughts in our
>last meeting.  Let me add some of my own thoughts:
>
>1)  HP-MPI's experience with MPI compatibility is primarily around
>the use of third-party libraries such as ScaLAPACK.  I am unsure
>whether these libraries are distributed with multiple builds for
>different MPI implementations.  I do know that HP-MPI's translation
>layer providing MPICH-based compatibility is frequently used in this
>context.  I'm not sure how this information impacts what we are
>trying to accomplish, but I wanted to point out that third-party
>libraries are another motivating factor.
>
>2) I also have concerns about our (ABI working group) efforts.  Some
>MPI implementations maintain ABI compatibility between releases, so
>users can upgrade to the newest release without rebuilding.  If an
>"official MPI ABI" is defined, they may be very unlikely to stop
>providing compatibility with their current ABI.  I think such
>implementations have two options:
>
>         a) Provide a morphing layer from the official MPI ABI to
> their historic ABI.  In this case, I am concerned that if ISVs
> perceive that this morph layer incurs any cost whatsoever, they
> will continue to generate and distribute an executable that is
> linked to the historic implementation-specific ABI.  In the end,
> the effect could be an increase in the number of executables that
> ISVs deliver.
>
>         b) Adopt the "official MPI ABI" as their native
> distribution and provide a morph layer for backwards
> compatibility.  This would take a lot of faith on their part that
> other implementations will embrace the official ABI, making the
> effort and the perceived hit to their historic native ABI
> worthwhile.
>
>After getting bitten by efforts such as IMPI, HP-MPI, for example,
>would have to give this a lot of thought before deciding on option B.
>
>Thanks,
>Dave Solt
>
>
>-----Original Message-----
>From: mpi3-abi-bounces_at_[hidden]
>[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
>Sent: Friday, February 22, 2008 11:15 AM
>To: mpi3-abi_at_[hidden]
>Subject: Re: [Mpi3-abi] Minutes of MPI ABI working group 2/21/08 meeting
>
>Thank you for taking notes.  You captured the key discussion points
>quite well.
>
>Here's what I took away from the meeting:
>1. The group will initially focus on an MPI ABI for the C bindings
>   only (we can't solve all the world's problems at once).
>     - Note that this avoids dealing with the Fortran compiler
>       symbol-name issues.
>     - The goal is run-time dynamic link compatibility (just change
>       LD_LIBRARY_PATH, on Unix).
>2. The work to accomplish this is twofold (a sketch follows this
>   list):
>     - define a reference mpi.h to include "standard" types, scope,
>       and values for constants
>     - ensure consistent linkage conventions (I don't think this is
>       an issue on Unix platforms)
>3. Consider an "MPI ABI compliant" certification rather than
>   including this in the general MPI 3.0 standard.  Frankly, I have
>   concerns about this - I'm afraid implementors may just blow it
>   off.
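>
>For illustration, here is a minimal sketch of what a reference mpi.h
>might pin down.  Every type layout and constant value below is a
>hypothetical placeholder, not a proposal:
>
>    /* Reference mpi.h sketch - illustrative only.  The ABI would
>       have to fix the width of each handle type and the value of
>       each constant; the choices shown here are invented. */
>    #ifndef MPI_ABI_H
>    #define MPI_ABI_H
>
>    /* opaque handles as fixed-width integers, identical across
>       implementations */
>    typedef int MPI_Comm;
>    typedef int MPI_Datatype;
>
>    /* predefined constants with fixed values */
>    #define MPI_COMM_WORLD ((MPI_Comm)1)
>    #define MPI_INT        ((MPI_Datatype)2)
>    #define MPI_SUCCESS    0
>
>    /* prototypes keep the standard C signatures */
>    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
>                 int dest, int tag, MPI_Comm comm);
>
>    #endif /* MPI_ABI_H */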
>
>Next steps:
>- come to consensus on working group goals as input to a proposal
>- develop a short briefing to convey this for the March meeting
>
>If we need another telecon, that can be easily set up.  Let me know
>if you think this is necessary.
>
>Thanks for your participation - we are off and rolling!
>
>Jeff
>
>At 04:33 PM 2/21/2008, you wrote:
>
> >Attendees:
> >
> >Jeff Brown (meeting organizer), David Daniels (Open MPI / LAM
> >developer), Nathan DeBardeleben (Los Alamos), Fredrick Ellis (end
> >user, didn't catch an affiliation), Jeff Squyres (Cisco), Alexander
> >Supalov (Intel), Erez Haba (Microsoft), Kannan Narasimhan (HP),
> >David Solt (HP)
> >
> >1. Introductions.
> >
> >A variety of backgrounds, including developers, end users who work
> >with many applications and MPIs, and industry implementers who work
> >with many ISVs.
> >
> >2. General Statement of Goals:
> >
> >For reference, here is our original charter as put forth by Jeff B.:
> >
> >"To define any additional support needed in the MPI standard to
> >enable static and dynamic linkage compatibility across MPI
> >implementations on a target platform for MPI based Applications."
> >
> >         * Avoid separate compiles for each MPI implementation.
> >         * Note that beyond MPI/application combinations, we also
> >           have compiler/MPI combination issues.
> >         * Possible scopes for our efforts:
> >                 - Compile time (should be covered by the current
> >                   MPI standard)
> >                 - Link time (object-level compatibility)
> >                 - Run time (simply "point" (LD_LIBRARY_PATH, for
> >                   example) to a different MPI).  Clearly more
> >                   difficult but potentially more beneficial than
> >                   link time.
> >         * An ABI should reduce the number of executable flavors
> >           users or ISVs must provide.
> >         * Some ISVs are qualified to a specific build of a specific
> >           MPI and will not benefit from any ABI changes.
> >
> >3. Languages
> >
> >Which bindings should be included in our efforts?
> >
> >         * Should any efforts be directed at all bindings, or just
> >           C?
> >                 - A C-only solution may be perceived as contrary to
> >                   the style of the standard.
> >                 - It was also noted that MPI introduced language
> >                   bindings in phases; we could follow suit.
> >         * Compilers introduce name-mangling issues, and C++ is
> >           particularly difficult due to the inlining that occurs.
> >         * Fortran has more manageable name-mangling issues, but
> >           also has the problem of .TRUE. & .FALSE. definitions
> >           (see the sketch below).
> >         * There appeared to be some consensus that a C-only
> >           solution would be a good starting point.
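> >
> >To illustrate the Fortran symbol issue (a sketch only; the names
> >and the GCC alias attribute are assumptions, not a proposal): the
> >same Fortran call resolves to differently mangled symbols depending
> >on the compiler, and one library can cover several schemes with
> >aliases.
> >
> >    /* One Fortran binding symbol, plus an alias covering a second
> >       mangling scheme (GCC-specific; names are illustrative). */
> >    void mpi_send_(void *buf, int *count, int *datatype, int *dest,
> >                   int *tag, int *comm, int *ierr)
> >    {
> >        /* ... translate arguments and forward to the C entry ... */
> >    }
> >    void mpi_send__(void *buf, int *count, int *datatype, int *dest,
> >                    int *tag, int *comm, int *ierr)
> >        __attribute__((alias("mpi_send_")));
> >
> >The .TRUE./.FALSE. problem is separate: Fortran compilers have
> >historically used different bit patterns for LOGICAL .TRUE. (1 for
> >some, -1 for others), so the ABI cannot fix a single value without
> >breaking some compiler.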
> >
> >4. Details of what we might define
> >
> >         * A morph layer vs. 'true' ABI compatibility.
> >                 - A morph layer is perceived as additional
> >                   overhead.
> >                 - A morph layer is simpler for implementers who may
> >                   be heavily invested in current header files, etc.
> >            [[ NOTE:  I'm unsure whether the morph layer is purely
> >               an implementation issue or genuinely changes our
> >               direction ]]
> >
> >         * Possible targets for standardization: mpi.h, name
> >           resolution of libraries.
> >                 - Almost any interesting definition of ABI
> >                   compatibility will need some level of
> >                   standardization of mpi.h.
> >                 - If we can just point to a different directory
> >                   with LD_LIBRARY_PATH, then applications need to
> >                   be linked against the same library names,
> >                   regardless of MPI implementation (see the dlopen
> >                   sketch below).
> >         * Discussion of whether calling conventions are an issue,
> >           or whether it is just a matter of fixing mpi.h.
> >                 - Most industry-standard platforms have a
> >                   well-defined C calling interface.
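> >
> >To make the library-name point concrete, a small sketch (the soname
> >"libmpi.so" is an assumption, not an agreed name): an already
> >linked binary resolves whichever library of that name the loader's
> >search path finds first, which is exactly what makes the
> >LD_LIBRARY_PATH swap work.  The same resolution can be shown
> >explicitly with dlopen:
> >
> >    #include <dlfcn.h>
> >    #include <stdio.h>
> >
> >    int main(void)
> >    {
> >        /* resolved via LD_LIBRARY_PATH at run time; link with -ldl */
> >        void *h = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
> >        if (!h) {
> >            fprintf(stderr, "no MPI library found: %s\n", dlerror());
> >            return 1;
> >        }
> >        /* the symbol comes from whichever implementation was found */
> >        void *init = dlsym(h, "MPI_Init");
> >        printf("MPI_Init resolved at %p\n", init);
> >        dlclose(h);
> >        return 0;
> >    }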
> >
> >5. Should ABI compliance be optional?
> >
> >         * Forcing all implementations to adhere to a standard may
> >           be too great a burden on implementers (distracting them
> >           from more interesting/useful work).
> >         * Some implementations are on hardware or an OS where only
> >           one implementation exists and is likely to ever exist.
> >           Should they have to 'pay' the cost to claim full MPI
> >           compliance?
> >         * There appeared to be consensus that ABI compliance and
> >           MPI compliance should be separated out:
> >                 - An implementation can be MPI compliant even if it
> >                   is not ABI compliant.
> >                 - Similar to the mpiexec definition (recommended;
> >                   if you provide it, it must look "this way", but
> >                   it is not required).
> >                 - A separate "stamp/claim" for being ABI compliant.
> >
> >6. Next steps
> >
> >         * For this week:        Post minutes to e-mail, then to
> >                                 the Twiki after acceptance.
> >         * For next week:        Schedule the next conference call.
> >         * For March meeting:    Define the goal of the working
> >                                 group - exactly what are we trying
> >                                 to accomplish?
> >
> >         * General:              Discuss via e-mail prior to the
> >                                 next meeting?
> >                 [[ NOTE: there did not appear to be a resolution on
> >                    this ]]
> >         * General:              How will the outcome of our
> >                                 discussions get turned into a
> >                                 proposal?
> >                 [[ NOTE: do we need to assign an owner? ]]
> >         * General:              How will the outcome of our
> >                                 discussions get turned into actual
> >                                 MPI standard text, given that the
> >                                 current presentation style of the
> >                                 Standard derives from its focus on
> >                                 APIs (not ABIs)?
> >
> >
> >
> >
> >
