[Mpi3-abi] Meeting notes from 10th March

Erez Haba erezh at [hidden]
Fri Mar 14 13:46:44 CDT 2008



I have posted the mpi.h files to the wiki page and started a table comparing the various implementations.

See http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Compare_mpi_h
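
To give a flavor of the kinds of differences the table captures, here is a paraphrased sketch of how two implementations might declare the same handle type and predefined constant (the exact typedefs, macro names, and values should be read off the posted headers, not off this message):

    /* Integer-handle style (MPICH-like); the value is illustrative only. */
    #ifdef INTEGER_HANDLE_STYLE
    typedef int MPI_Datatype;
    #define MPI_INT ((MPI_Datatype) 0x4c000405)
    #else
    /* Pointer-handle style (Open MPI-like): predefined handles are the
     * addresses of objects exported by the library. */
    typedef struct ompi_datatype_t *MPI_Datatype;
    extern struct ompi_datatype_t ompi_mpi_int;
    #define MPI_INT ((MPI_Datatype) &ompi_mpi_int)
    #endif

An ABI would have to pick one representation (and one set of values or symbol names) for each such entry in the table.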

Thanks,
.Erez

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Solt, David George
Sent: Friday, March 14, 2008 11:08 AM
To: MPI 3.0 ABI working group
Subject: Re: [Mpi3-abi] Meeting notes from 10th March

I have to wonder if MPI_BOTTOM will cause us headaches.  MPI_BOTTOM is meant to be used directly as an address, so our ABI would have to specify exactly what that address is, using either a constant or the address of a variable exported by the library.  We have also found that many non-compliant codes assume that MPI_BOTTOM is zero.  Assuming MPI_BOTTOM lives on into MPI-3, the ABI effort may want to dictate its value on a per-target architecture/OS basis, with ((void*) 0) the most obvious setting for nearly all targets.
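
To make this concrete, here is a rough sketch of the two choices (the symbol name below is made up for illustration; nothing here is taken from an existing mpi.h):

    /* Option 1: a fixed per-target constant; (void *) 0 on most targets.
     * Codes that (incorrectly) assume MPI_BOTTOM == 0 would keep working. */
    #define MPI_BOTTOM ((void *) 0)

    /* Option 2: the address of a variable exported by the library
     * (hypothetical symbol name):                                 */
    /* extern char mpi_abi_bottom_marker;                          */
    /* #define MPI_BOTTOM ((void *) &mpi_abi_bottom_marker)        */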

Dave

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Friday, March 14, 2008 11:36 AM
To: MPI 3.0 ABI working group
Subject: Re: [Mpi3-abi] Meeting notes from 10th March

Perhaps you could just link to OMPI's header file -- it's a live copy of what's in our SVN repository:

https://svn.open-mpi.org/trac/ompi/browser/trunk/ompi/include/mpi.h.in

On Mar 14, 2008, at 12:21 PM, Erez Haba wrote:

> I will upload the various mpi.h files and start the table on the ABI
> wiki page.
>
> -----Original Message-----
> From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
> Sent: Thursday, March 13, 2008 3:11 PM
> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden]
> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
>
> I propose a way we can make progress ...
>
> Let's start populating a matrix (an Excel spreadsheet) with a column
> for each MPI implementation, and rows for the various MPI datatypes,
> constants, etc. where the internal implementations vary.  I'll kick
> it off for OpenMPI and send it out.
>
> The last column of the matrix can be "ABI" where we propose a common
> approach across the implementations.
>
> A couple of driving principles:
> 1. The ABI solution shouldn't negatively impact quality of implementation.
> 2. Minimize platform-specific solutions.
>
> I'd like to see if we can produce a single ABI that spans platforms.
>
> comments?
>
> Jeff


--
Jeff Squyres
Cisco Systems
_______________________________________________
mpi3-abi mailing list
mpi3-abi_at_[hidden]
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi



