[Mpi3-abi] For the April MPI Forum Meeting

Narasimhan, Kannan kannan.narasimhan at [hidden]
Wed Apr 16 10:51:18 CDT 2008



Folks,

Are we planning on a WG update to report at the April 28-30 Forum meeting? We have started the process of identifying the mpi.h differences, but I don't think we have synthesized the data yet or come to any conclusions/next steps... Or did I miss something here?

Thanx!
Kannan

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis
Sent: Monday, March 17, 2008 4:18 AM
To: MPI 3.0 ABI working group
Subject: Re: [Mpi3-abi] Meeting notes from 10th March

I'm not sure how best to express this, but there are a couple of things that occur to me that might be important:

1. The size of the handle types (cf. size of a pointer perhaps?)

2. Should we add some sort of table describing the current situation as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g.
MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is burned into the application binary; whereas OpenMPI uses extern pointers - i.e.
ompi_mpi_comm_world lives in the initialized data section of libmpi.so, and the value is resolved at (dynamic) link time.

Cheers,

Edric.

> -----Original Message-----
> From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-
> bounces_at_[hidden]] On Behalf Of Jeff Brown
> Sent: Thursday, March 13, 2008 10:11 PM
> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden]
> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
>
> I propose a way we can make progress ...
>
> Let's start populating a matrix (excel spreadsheet) with a column for
> each MPI implementation, and rows for the various MPI datatypes,
> constants, etc., where the internal implementations vary.  I'll kick
> it off for OpenMPI and send out.
>
> The last column of the matrix can be "ABI" where we propose a common
> approach across the implementations.
>
> A couple of driving principles:
> 1. the ABI solution shouldn't negatively impact quality of implementation
> 2. minimize platform specific solutions
>
> I'd like to see if we can produce a single ABI that spans platforms.
>
> comments?
>
> Jeff
>
>
> _______________________________________________
> mpi3-abi mailing list
> mpi3-abi_at_[hidden]
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi

