[Mpi3-abi] Meeting notes from 10th March

Erez Haba erezh at [hidden]
Fri Mar 14 18:59:41 CDT 2008



Thank you, Alexander,

I did not configure the MPICH2 mpi.h on purpose. The reason was to help us see that the MPICH2 ABI varies based on configuration.

For MS-MPI we took the approach of including the 64- vs. 32-bit #ifdef in the file itself rather than requiring configuration for the word width.

I think we'll need to understand the various configure issues. I'd like us to focus on 64-bit, as it reveals more issues than 32-bit (e.g., sizeof(void*) differs from sizeof(int) on most 64-bit platforms).
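
To illustrate the approach (a minimal sketch; the macros and type choices here are illustrative, not the actual MS-MPI definitions):

    /* One mpi.h serves both word widths: an #ifdef picks the
     * pointer-sized integer at compile time. Illustrative only. */
    #if defined(_WIN64) || defined(__LP64__)
    typedef long long MPI_Aint;  /* 8 bytes, matches sizeof(void*) */
    #else
    typedef int MPI_Aint;        /* 4 bytes, matches sizeof(void*) */
    #endif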

Thanks,
.Erez

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Supalov, Alexander
Sent: Friday, March 14, 2008 4:45 PM
To: MPI 3.0 ABI working group
Subject: Re: [Mpi3-abi] Meeting notes from 10th March

Dear Erez,

Thank you. I'm afraid the MPICH2 mpi.h has not been configured: it still
carries configure macros in the datatype definitions and elsewhere.
By the way, I'm going to upload MPICH2 1.0.3 mpi.h early next week as a
placeholder for Intel MPI mpi.h, pending legal OK for the latter.
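
For example, an unconfigured MPICH2 mpi.h template still contains autoconf substitution tokens along these lines (token names quoted from memory and may differ):

    /* Before configure runs, the template carries substitution
     * tokens rather than concrete types: */
    typedef @MPI_AINT@ MPI_Aint;
    typedef @MPI_FINT@ MPI_Fint;

    /* Only after configure do these become concrete, e.g.: */
    typedef long MPI_Aint;
    typedef int  MPI_Fint;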

I'll need to instantiate this file. In my experience, the result may
depend on the platform and compiler used. What platform/compiler should
we use for reference? At the very least, what word size should we
target? I want to avoid an accidental apples-to-pears comparison here.

Best regards.

Alexander

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden]
[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Erez Haba
Sent: Friday, March 14, 2008 7:47 PM
To: MPI 3.0 ABI working group
Subject: Re: [Mpi3-abi] Meeting notes from 10th March

I have posted the mpi.h files to the wiki page and started a table
comparing the various implementations.

See http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Compare_mpi_h

Thanks,
.Erez

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden]
[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Solt, David George
Sent: Friday, March 14, 2008 11:08 AM
To: MPI 3.0 ABI working group
Subject: Re: [Mpi3-abi] Meeting notes from 10th March

I have to wonder if MPI_BOTTOM will cause us headaches.  MPI_BOTTOM is
meant to be used directly as an address, so our ABI would really have
to specify exactly what that address is, using either a constant or
the address of a variable in the library.  We have found that many
non-compliant codes exist that assume MPI_BOTTOM is zero.
Assuming MPI_BOTTOM lives on into MPI-3, the ABI effort may want to
dictate its value on a per-target architecture/OS basis, with
((void *) 0) the most obvious setting for nearly all targets.
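
For reference, here is a minimal sketch of the compliant usage pattern (not tied to any particular implementation): the absolute address from MPI_Get_address travels inside the datatype, and the buffer argument is MPI_BOTTOM itself, whatever value the library gives it.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int x = 42, blocklen = 1, rank;
        MPI_Aint disp;
        MPI_Datatype type;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Bake the absolute address of x into a datatype... */
        MPI_Get_address(&x, &disp);
        MPI_Type_create_hindexed(1, &blocklen, &disp, MPI_INT, &type);
        MPI_Type_commit(&type);

        /* ...so the buffer argument is MPI_BOTTOM itself. Compliant
         * code never assumes MPI_BOTTOM == 0; the library may define
         * it as any address it likes. */
        if (rank == 0)
            MPI_Send(MPI_BOTTOM, 1, type, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(MPI_BOTTOM, 1, type, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        MPI_Type_free(&type);
        MPI_Finalize();
        return 0;
    }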

Dave

-----Original Message-----
From: mpi3-abi-bounces_at_[hidden]
[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Friday, March 14, 2008 11:36 AM
To: MPI 3.0 ABI working group
Subject: Re: [Mpi3-abi] Meeting notes from 10th March

Perhaps you could just link to OMPI's header file -- it's a live copy of
what's in our SVN repository:

https://svn.open-mpi.org/trac/ompi/browser/trunk/ompi/include/mpi.h.in

On Mar 14, 2008, at 12:21 PM, Erez Haba wrote:

> I will upload the various mpi.h files and start the table on the ABI
> wiki page.
>
> -----Original Message-----
> From: mpi3-abi-bounces_at_[hidden]
> [mailto:mpi3-abi-bounces_at_[hidden]
> ] On Behalf Of Jeff Brown
> Sent: Thursday, March 13, 2008 3:11 PM
> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden]
> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
>
> I propose a way we can make progress ...
>
> Let's start populating a matrix (Excel spreadsheet) with a column for
> each MPI implementation and rows for the various MPI datatypes,
> constants, etc. where the internal implementations vary.  I'll kick
> it off for Open MPI and send it out.
>
> The last column of the matrix can be "ABI" where we propose a common
> approach across the implementations.
>
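As an illustration of such a row (entries sketched from the MPICH2 and Open MPI headers as I recall them; they would need verification before going into the real table):

    Item         MPICH2 (sketch)               Open MPI (sketch)               ABI
    MPI_Comm     typedef int MPI_Comm          typedef struct ...* MPI_Comm    TBD
    MPI_INT      integer handle (0x4c000405)   address of a global object      TBD
    MPI_BOTTOM   ((void *) 0)                  ((void *) 0)                    TBD
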
> A couple of driving principles:
> 1. The ABI solution shouldn't negatively impact quality of implementation.
> 2. Minimize platform-specific solutions.
>
> I'd like to see if we can produce a single ABI that spans platforms.
>
> comments?
>
> Jeff
>
>


--
Jeff Squyres
Cisco Systems
_______________________________________________
mpi3-abi mailing list
mpi3-abi_at_[hidden]
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi




More information about the Mpi3-abi mailing list