[Mpi-forum] ABI

Jeff Hammond jeff.science at gmail.com
Wed Dec 4 19:00:12 CST 2013


1) Intel has a patent on how to do this with PMPI. You're welcome to implement that on your own. 

2) Let me know when PETSc and Trilinos are anything-compatible :-P

Jeff

Sent from my iPhone

> On Dec 4, 2013, at 6:23 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> 
> "Jeff Squyres (jsquyres)" <jsquyres at cisco.com> writes:
> 
>> Integer vs. pointer is but one of MANY issues that prevent MPI
>> implementations from being ABI compatible.  Let's not also forget:
> 
> Yes, there are a lot of issues, but I don't think it's as insurmountable
> as it may appear.
> 
>> - Values of constants.  Simple integer values are usually easy to
>> device/resolve (but don't forget that some integer values are
>> specifically chosen on specific platforms). Sentinel values like
>> MPI_STATUS_IGNORE are not.
> 
> Section 2.5.4 (Named Constants) has a very short list of items that are
> actually required to be compile-time constants.  Current implementations
> put the values of far more constants into the ABI than the standard
> requires.
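> 
> A minimal sketch (illustrative only, not any implementation's actual
> mpi.h) of how handle "constants" could be resolved at link/load time
> instead of having their values baked into every compiled application:
> 
> typedef struct mpi_abi_datatype_s *MPI_Datatype;  /* assumed opaque handle type */
> 
> extern MPI_Datatype MPI_INT;     /* defined by whichever libmpi.so is loaded */
> extern MPI_Datatype MPI_DOUBLE;
> 
> Only the short Section 2.5.4 list (e.g. MPI_MAX_PROCESSOR_NAME,
> MPI_VERSION) would still need to be #define'd in mpi.h.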
> 
>> - Size/content of MPI_Status.  MPI implementations hide (different)
>> non-public fields in MPI_Status.
> 
> These are not so wildly different.  Note that the non-public fields can
> be used for different things; they just have to add up to the same total
> size.
> 
> typedef struct MPI_Status {
>    int MPI_SOURCE;
>    int MPI_TAG;
>    int MPI_ERROR;
>    MPI_Count count;
>    int cancelled;
>    int abi_slush_fund[2];    
> } MPI_Status;
> 
> struct ompi_status_public_t {
>    /* These fields are publicly defined in the MPI specification.
>       User applications may freely read from these fields. */
>    int MPI_SOURCE;
>    int MPI_TAG;
>    int MPI_ERROR;
>    /* The following two fields are internal to the Open MPI
>       implementation and should not be accessed by MPI applications.
>       They are subject to change at any time.  These are not the
>       droids you're looking for. */
>    int _cancelled;
>    size_t _ucount;
> };
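> 
> Assuming C11 and both definitions above visible in one translation unit,
> the size requirement can be checked at build time (a sketch, not code
> from either implementation):
> 
> _Static_assert(sizeof(struct ompi_status_public_t) <= sizeof(MPI_Status),
>                "implementation status must fit in the standardized MPI_Status");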
> 
> 
>> - Launcher differences.  The mpirun/mpiexec (or whatever launcher) is
>> inherently different, with different CLI options and configuration,
>> between different MPI implementations.
> 
> Doesn't matter because the launcher sets the environment so that the
> correct library is used.  You would use the MPICH mpiexec to run with
> the MPICH libraries and the OMPI mpiexec to run with OMPI libraries.
> You only ever interact with your library.
> 
>> - Library names.  libmpi.so?  libompi.so?  libmpich.so?  libmpi.a?  And so on.
> 
> libmpi.so, obviously.  ;-)
> 
> This should only expose the standard symbols.  /mpich/lib/libmpi.so and
> /ompi/lib/libmpi.so can link to whatever private libraries they want;
> those are not part of the ABI.
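> 
> A minimal sketch (assumed handle types and GCC/Clang visibility
> attributes, not taken from either implementation) of a libmpi.so built
> with -fvisibility=hidden so that only the standard entry points are
> exported:
> 
> typedef struct mpi_abi_comm_s     *MPI_Comm;       /* assumed opaque handles */
> typedef struct mpi_abi_datatype_s *MPI_Datatype;
> 
> #define MPI_ABI_EXPORT __attribute__((visibility("default")))
> 
> MPI_ABI_EXPORT int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
>                             int dest, int tag, MPI_Comm comm);
> 
> /* Internal entry points stay hidden, so they never become part of the ABI. */
> __attribute__((visibility("hidden"))) int mpi_internal_progress(void);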
> 
>> - Dependent libraries.  What else do you need to link when linking the
>> application?  You can (usually) hide this when the MPI is a shared
>> library, but a) not always, and b) that doesn't help when MPI is a
>> static library.
> 
> A standard ABI is not useful for static libraries because you have to
> relink anyway (it could save you recompiling all the sources, but you're
> back in the build system instead of the runtime).
> 
> Proper shared library etiquette is to always hide dependent libraries.
> 
>> - Compiler ABIs.  MPI middleware cannot solve the fact that C++ and
>> Fortran compilers cannot (and will not) agree on an ABI (for many good
>> reasons, BTW).
> 
> We can't standardize between incompatible compilers, so assume
> compatible compilers.
> 
>> - Compiler options.  Was the MPI library compiled with -O3?  -i8?  -32 or -64?  ...?
> 
> Same problem we have today.  If you compile with the same flags, we'd
> like the ABI to match.
> 
>> The fact of the matter is that the MPI API was *specifically designed
>> with only source compatibility in mind*.  We now have nearly 20 years
>> of momentum in different MPI implementations with different
>> engineering design choices.
>> 
>> The term "ABI" is a catchall for many, many different issues.  People
>> seem to think that simply switching libraries at run time is a silver
>> bullet -- "if only I could just change my LD_LIBRARY_PATH and use a
>> different MPI implementation, then the world would be better".  But
>> let's also not forget that most users barely know how to use
>> LD_LIBRARY_PATH (if at all).
> 
> /mpich/bin/mpiexec -n 2 ./myexecutable
> 
> /ompi/bin/mpiexec -n 2 ./myexecutable
> 
> The launcher sets LD_LIBRARY_PATH.  That is feasible (I think) and is
> the change that would save distributions, packagers, and users countless
> hours.
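> 
> A minimal sketch (not a real launcher; mpiexec also spawns N ranks and
> wires them up) of the environment half of that mechanism, assuming the
> launcher knows its own library directory:
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> 
> int main(int argc, char **argv) {
>     if (argc < 2) {
>         fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
>         return 1;
>     }
>     const char *libdir = "/mpich/lib";   /* this installation's library directory */
>     const char *old = getenv("LD_LIBRARY_PATH");
>     char path[4096];
>     snprintf(path, sizeof path, "%s%s%s", libdir, old ? ":" : "", old ? old : "");
>     setenv("LD_LIBRARY_PATH", path, 1); /* launched process now loads this libmpi.so */
>     execvp(argv[1], &argv[1]);          /* run the user's executable */
>     perror("execvp");
>     return 1;
> }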


