From jeffb at [hidden] Tue Apr 1 09:59:57 2008 From: jeffb at [hidden] (Jeff Brown) Date: Tue, 01 Apr 2008 08:59:57 -0600 Subject: [Mpi3-abi] MPI-3 ABI proposal In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A20130ABB3@swsmsx413.ger.corp.intel.com> Message-ID: <6.2.3.4.2.20080401085359.02cb8700@ccs-mail.lanl.gov> We will follow whatever standard template Rich and company require for the proposal. The meat of the ABI proposal will be the matrix exposing the differences in the implementations and the proposed ABI common solution. We also need to include a process for ensuring that the implementations don't drift apart again. I intend to put some time into this over the next two weeks and then schedule a telecon. I'll also ask for a WG session at the Chicago meeting. I don't think we will be ready to roll out a proposal to the group at the next meeting. Jeff At 05:41 AM 4/1/2008, you wrote: >Dear Jeff, > >How are we doing on the ABI proposal? What format will we use? I'm >working on a couple of other WGs, and the Wiki seems to be becoming the >preferred way of developing proposals for discussion. I bet a >conversion from the Word document to the Wiki, if that's desirable, >won't be too cumbersome. > >Best regards. > >Alexander > >-- >Dr Alexander Supalov >Intel GmbH >Hermuelheimer Strasse 8a >50321 Bruehl, Germany >Phone: +49 2232 209034 >Mobile: +49 173 511 8735 >Fax: +49 2232 209029 > > >--------------------------------------------------------------------- >Intel GmbH >Dornacher Strasse 1 >85622 Feldkirchen/Muenchen Germany >Sitz der Gesellschaft: Feldkirchen bei Muenchen >Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer >Registergericht: Muenchen HRB 47456 Ust.-IdNr. >VAT Registration No.: DE129385895 >Citibank Frankfurt (BLZ 502 109 00) 600119052 > >This e-mail and any attachments may contain confidential material for >the sole use of the intended recipient(s). Any review or distribution >by others is strictly prohibited. If you are not the intended >recipient, please contact the sender and delete all copies. * -------------- next part -------------- An HTML attachment was scrubbed... URL:
From alexander.supalov at [hidden] Tue Apr 1 10:03:26 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Tue, 1 Apr 2008 16:03:26 +0100 Subject: [Mpi3-abi] MPI-3 ABI proposal In-Reply-To: <6.2.3.4.2.20080401085359.02cb8700@ccs-mail.lanl.gov> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A2013386D7@swsmsx413.ger.corp.intel.com> Thanks, got it. There's no LaTeX template for a proposal yet. Rich actually introduced a Wiki page template for the proposals in the FT subgroup, but I'm not sure everybody followed that. I'm just reusing that one for my work. ________________________________ From: Jeff Brown [mailto:jeffb_at_[hidden]] Sent: Tuesday, April 01, 2008 5:00 PM To: Supalov, Alexander Cc: mpi3-abi_at_[hidden] Subject: Re: MPI-3 ABI proposal We will follow whatever standard template Rich and company require for the proposal. The meat of the ABI proposal will be the matrix exposing the differences in the implementations and the proposed ABI common solution. We also need to include a process for ensuring that the implementations don't drift apart again. I intend to put some time into this over the next two weeks and then schedule a telecon. I'll also ask for a WG session at the Chicago meeting. I don't think we will be ready to roll out a proposal to the group at the next meeting. Jeff At 05:41 AM 4/1/2008, you wrote: Dear Jeff, How are we doing on the ABI proposal? 
What format will we use? I'm working on a couple of other WGs, and the Wiki seems to be becoming the preferred way of developing proposals for discussion. I bet a conversion from the Word document to the Wiki, if that's desirable, won't be too cumbersome. Best regards. Alexander -- Dr Alexander Supalov Intel GmbH Hermuelheimer Strasse 8a 50321 Bruehl, Germany Phone: +49 2232 209034 Mobile: +49 173 511 8735 Fax: +49 2232 209029 --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. * -------------- next part -------------- An HTML attachment was scrubbed... URL:
From kannan.narasimhan at [hidden] Wed Apr 16 10:51:18 2008 From: kannan.narasimhan at [hidden] (Narasimhan, Kannan) Date: Wed, 16 Apr 2008 15:51:18 +0000 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: Message-ID: Folks, Are we planning on a WG update to report at the April 28-30 Forum meeting? We have started the process of identifying the mpi.h differences, but I don't think we have synthesized the data yet, or come to any conclusions/next steps... Or did I miss something here? Thanx! Kannan -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis Sent: Monday, March 17, 2008 4:18 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] Meeting notes from 10th March I'm not sure how best to express this, but there are a couple of things that occur to me that might be important: 1. The size of the handle types (cf. the size of a pointer, perhaps?) 2. Should we add some sort of table describing the current situation as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is burned into the binary; whereas OpenMPI uses extern pointers - i.e. ompi_mpi_comm_world is in the initialized data section of libmpi.so, and the value is resolved at (dynamic) link time. Cheers, Edric. > -----Original Message----- > From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi- > bounces_at_[hidden]] On Behalf Of Jeff Brown > Sent: Thursday, March 13, 2008 10:11 PM > To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > I propose a way we can make progress ... 
> > Let's start populating a matrix (excel spreadsheet) with a column for > each MPI implementation, and rows for the various MPI datatypes, > constants, etc. where the internal implementations varys. I'll kick > it off for OpenMPI and send out. > > The last column of the matrix can be "ABI" where we propose a common > approach across the implementations. > > A couple of driving principles: > 1. the ABI solution shouldn't negatively impact quality of implementation > 2. minimize platform specific solutions > > I'd like to see if we can produce a single ABI that spans platforms. > > comments? > > Jeff > > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From jeffb at [hidden] Wed Apr 16 11:03:33 2008 From: jeffb at [hidden] (Jeff Brown) Date: Wed, 16 Apr 2008 10:03:33 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: Message-ID: <6.2.3.4.2.20080416095806.02d139f8@ccs-mail.lanl.gov> Yes, it's time to put some cycles toward this. Let's start populating the matrix and have a telecon toward the end of next week. I'll schedule a WG working session at the meeting. I'll take a look at OpenMPI and LAMPI, the two primary MPI implementations we use at LANL, and post to the wiki by the end of the week. Others, please do the same for your MPI implementation (especially the vendors). Overlap is OK. I'll send out specifics on the telecon. Let's shoot for Thursday April 24, 9:00 A.M. MST. Jeff At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: >Folks, > >Are we planning on a WG update to report at the April 28-30 Forum >meeting? We have started the process of identifying the mpi.h >differences, but I dont think we have synthesized the data yet, or >come to any conclusions/next steps... Or did I miss something here? > >Thanx! >Kannan > >-----Original Message----- >From: mpi3-abi-bounces_at_[hidden] >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis >Sent: Monday, March 17, 2008 4:18 AM >To: MPI 3.0 ABI working group >Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > >I'm not sure how best to express this, but there are a couple of >things that occur to me that might be important: > >1. The size of the handle types (cf. size of a pointer perhaps?) > >2. should we add some sort of table describing the current situation >as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. >MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is >burned into the binary; whereas OpenMPI uses extern pointers - i.e. >ompi_mpi_comm_world is in the initialized data section of libmpi.so, >and the value resolved at (dynamic) link time. > >Cheers, > >Edric. > > > -----Original Message----- > > From: mpi3-abi-bounces_at_[hidden] > [mailto:mpi3-abi- > > bounces_at_[hidden]] On Behalf Of Jeff Brown > > Sent: Thursday, March 13, 2008 10:11 PM > > To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > I propose a way we can make progress ... > > > > Let's start populating a matrix (excel spreadsheet) with a column for > > each MPI implementation, and rows for the various MPI datatypes, > > constants, etc. where the internal implementations varys. I'll kick > > it off for OpenMPI and send out. 
> > > > The last column of the matrix can be "ABI" where we propose a common > > approach across the implementations. > > > > A couple of driving principles: > > 1. the ABI solution shouldn't negatively impact quality of >implementation > > 2. minimize platform specific solutions > > > > I'd like to see if we can produce a single ABI that spans platforms. > > > > comments? > > > > Jeff > > > > > > _______________________________________________ > > mpi3-abi mailing list > > mpi3-abi_at_[hidden] > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From jeffb at [hidden] Mon Apr 21 14:42:12 2008 From: jeffb at [hidden] (Jeff Brown) Date: Mon, 21 Apr 2008 13:42:12 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080416095806.02d139f8@ccs-mail.lanl.gov> Message-ID: <6.2.3.4.2.20080421133743.02d31f30@ccs-mail.lanl.gov> all, I scheduled a telecon to discuss status and get somewhat organized for the meeting: Thursday April 24, 10:00 MDT local number: 606-1201(6-1201) toll free number: 888 343-0702. I'll send out some slides for the 5 minute briefing for the group. I'm having a hard time finding time to devote to this, but I'll have a cut at the OpenMPI and LAMPI analysis prior to the telecon. We need someone to look at MPICH, and the vendor implementations need to be posted. Jeff At 10:03 AM 4/16/2008, Jeff Brown wrote: >Yes, it's time to put some cycles toward this. Let's start >populating the matrix and have a telecon toward the end of next >week. I'll schedule a WG working session at the meeting. > >I'll take a look at OpenMPI and LAMPI, the two primary MPI >implementations we use at LANL, and post to the wiki by the end of >the week. Others, please do the same for your MPI implementation >(especially the vendors). Overlap is OK. > >I'll send out specifics on the telecon. Let's shoot for Thursday >April 24, 9:00 A.M. MST. > >Jeff > >At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > >Folks, > > > >Are we planning on a WG update to report at the April 28-30 Forum > >meeting? We have started the process of identifying the mpi.h > >differences, but I dont think we have synthesized the data yet, or > >come to any conclusions/next steps... Or did I miss something here? > > > >Thanx! > >Kannan > > > >-----Original Message----- > >From: mpi3-abi-bounces_at_[hidden] > >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > >Sent: Monday, March 17, 2008 4:18 AM > >To: MPI 3.0 ABI working group > >Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > >I'm not sure how best to express this, but there are a couple of > >things that occur to me that might be important: > > > >1. The size of the handle types (cf. size of a pointer perhaps?) > > > >2. should we add some sort of table describing the current situation > >as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > >MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > >burned into the binary; whereas OpenMPI uses extern pointers - i.e. > >ompi_mpi_comm_world is in the initialized data section of libmpi.so, > >and the value resolved at (dynamic) link time. > > > >Cheers, > > > >Edric. 
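As a rough illustration of the two handle styles Edric contrasts above, the following C sketch shows both side by side; the type names, symbols, and constant values here are illustrative only and are not taken from any actual mpi.h:

    /* Style 1: handles are compile-time integer constants (MPICH2-like);
     * the illustrative value 0x44000000 is baked into the application binary. */
    typedef int MPI_Comm;
    #define MPI_COMM_WORLD ((MPI_Comm) 0x44000000)

    /* Style 2: handles are addresses of exported objects (Open MPI-like);
     * the application records only the symbol name, and the dynamic linker
     * fills in the address when libmpi.so is loaded. */
    struct ompi_communicator_t;                         /* opaque to the user */
    extern struct ompi_communicator_t ompi_mpi_comm_world;
    typedef struct ompi_communicator_t *MPI_Comm_ptr;   /* second name used only to
                                                            keep both styles in one sketch */
    #define MPI_COMM_WORLD_PTR (&ompi_mpi_comm_world)

The first style fixes the handle width at sizeof(int) and bakes constant values into the executable; the second ties the handle width to the pointer size and defers value resolution to link/load time. This is exactly the kind of difference the comparison matrix needs to capture.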
> > > > > -----Original Message----- > > > From: mpi3-abi-bounces_at_[hidden] > > [mailto:mpi3-abi- > > > bounces_at_[hidden]] On Behalf Of Jeff Brown > > > Sent: Thursday, March 13, 2008 10:11 PM > > > To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > > Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > I propose a way we can make progress ... > > > > > > Let's start populating a matrix (excel spreadsheet) with a column for > > > each MPI implementation, and rows for the various MPI datatypes, > > > constants, etc. where the internal implementations varys. I'll kick > > > it off for OpenMPI and send out. > > > > > > The last column of the matrix can be "ABI" where we propose a common > > > approach across the implementations. > > > > > > A couple of driving principles: > > > 1. the ABI solution shouldn't negatively impact quality of > >implementation > > > 2. minimize platform specific solutions > > > > > > I'd like to see if we can produce a single ABI that spans platforms. > > > > > > comments? > > > > > > Jeff > > > > > > > > > _______________________________________________ > > > mpi3-abi mailing list > > > mpi3-abi_at_[hidden] > > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From jsquyres at [hidden] Tue Apr 22 11:20:35 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Tue, 22 Apr 2008 12:20:35 -0400 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080421133743.02d31f30@ccs-mail.lanl.gov> Message-ID: <72B43D2A-A581-4BAA-9BB6-135A792BDB44@cisco.com> I will unfortunately be unable to attend this teleconference (I have a conflict). I'll comment on the slides, though. On Apr 21, 2008, at 3:42 PM, Jeff Brown wrote: > all, > > I scheduled a telecon to discuss status and get somewhat organized > for the meeting: > > Thursday April 24, 10:00 MDT > local number: 606-1201(6-1201) > toll free number: 888 343-0702. > > I'll send out some slides for the 5 minute briefing for the group. > > I'm having a hard time finding time to devote to this, but I'll have > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > need someone to look at MPICH, and the vendor implementations need to > be posted. > > Jeff > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > >Yes, it's time to put some cycles toward this. Let's start > >populating the matrix and have a telecon toward the end of next > >week. I'll schedule a WG working session at the meeting. > > > >I'll take a look at OpenMPI and LAMPI, the two primary MPI > >implementations we use at LANL, and post to the wiki by the end of > >the week. Others, please do the same for your MPI implementation > >(especially the vendors). Overlap is OK. > > > >I'll send out specifics on the telecon. Let's shoot for Thursday > >April 24, 9:00 A.M. MST. > > > >Jeff > > > >At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > > >Folks, > > > > > >Are we planning on a WG update to report at the April 28-30 Forum > > >meeting? 
We have started the process of identifying the mpi.h > > >differences, but I dont think we have synthesized the data yet, or > > >come to any conclusions/next steps... Or did I miss something here? > > > > > >Thanx! > > >Kannan > > > > > >-----Original Message----- > > >From: mpi3-abi-bounces_at_[hidden] > > >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric > Ellis > > >Sent: Monday, March 17, 2008 4:18 AM > > >To: MPI 3.0 ABI working group > > >Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > > > >I'm not sure how best to express this, but there are a couple of > > >things that occur to me that might be important: > > > > > >1. The size of the handle types (cf. size of a pointer perhaps?) > > > > > >2. should we add some sort of table describing the current > situation > > >as to how applications pick up the value of e.g. MPI_COMM_WORLD? > E.g. > > >MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > > >burned into the binary; whereas OpenMPI uses extern pointers - i.e. > > >ompi_mpi_comm_world is in the initialized data section of > libmpi.so, > > >and the value resolved at (dynamic) link time. > > > > > >Cheers, > > > > > >Edric. > > > > > > > -----Original Message----- > > > > From: mpi3-abi-bounces_at_[hidden] > > > [mailto:mpi3-abi- > > > > bounces_at_[hidden]] On Behalf Of Jeff Brown > > > > Sent: Thursday, March 13, 2008 10:11 PM > > > > To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > > > Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > > > I propose a way we can make progress ... > > > > > > > > Let's start populating a matrix (excel spreadsheet) with a > column for > > > > each MPI implementation, and rows for the various MPI datatypes, > > > > constants, etc. where the internal implementations varys. > I'll kick > > > > it off for OpenMPI and send out. > > > > > > > > The last column of the matrix can be "ABI" where we propose a > common > > > > approach across the implementations. > > > > > > > > A couple of driving principles: > > > > 1. the ABI solution shouldn't negatively impact quality of > > >implementation > > > > 2. minimize platform specific solutions > > > > > > > > I'd like to see if we can produce a single ABI that spans > platforms. > > > > > > > > comments? 
> > > > > > > > Jeff > > > > > > > > > > > > _______________________________________________ > > > > mpi3-abi mailing list > > > > mpi3-abi_at_[hidden] > > > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > > >mpi3-abi mailing list > > >mpi3-abi_at_[hidden] > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > > >mpi3-abi mailing list > > >mpi3-abi_at_[hidden] > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > -- Jeff Squyres Cisco Systems From jeffb at [hidden] Wed Apr 23 18:13:02 2008 From: jeffb at [hidden] (Jeff Brown) Date: Wed, 23 Apr 2008 17:13:02 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080421133743.02d31f30@ccs-mail.lanl.gov> Message-ID: <6.2.3.4.2.20080423171026.02eb4d10@ccs-mail.lanl.gov> Attached is the beginnings of a spreadsheet to capture the detailed differences in the mpi.h implementations with the OpenMPI column populated. I'll get a start at LAMPI before the telecon. I'll post to the wiki. Talk to y'all in the morning (well, my morning). Jeff At 01:42 PM 4/21/2008, Jeff Brown wrote: >all, > >I scheduled a telecon to discuss status and get somewhat organized >for the meeting: > >Thursday April 24, 10:00 MDT >local number: 606-1201(6-1201) >toll free number: 888 343-0702. > >I'll send out some slides for the 5 minute briefing for the group. > >I'm having a hard time finding time to devote to this, but I'll have >a cut at the OpenMPI and LAMPI analysis prior to the telecon. We >need someone to look at MPICH, and the vendor implementations need to >be posted. > >Jeff > > > >At 10:03 AM 4/16/2008, Jeff Brown wrote: > >Yes, it's time to put some cycles toward this. Let's start > >populating the matrix and have a telecon toward the end of next > >week. I'll schedule a WG working session at the meeting. > > > >I'll take a look at OpenMPI and LAMPI, the two primary MPI > >implementations we use at LANL, and post to the wiki by the end of > >the week. Others, please do the same for your MPI implementation > >(especially the vendors). Overlap is OK. > > > >I'll send out specifics on the telecon. Let's shoot for Thursday > >April 24, 9:00 A.M. MST. > > > >Jeff > > > >At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > > >Folks, > > > > > >Are we planning on a WG update to report at the April 28-30 Forum > > >meeting? We have started the process of identifying the mpi.h > > >differences, but I dont think we have synthesized the data yet, or > > >come to any conclusions/next steps... Or did I miss something here? > > > > > >Thanx! > > >Kannan > > > > > >-----Original Message----- > > >From: mpi3-abi-bounces_at_[hidden] > > >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > > >Sent: Monday, March 17, 2008 4:18 AM > > >To: MPI 3.0 ABI working group > > >Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > > > >I'm not sure how best to express this, but there are a couple of > > >things that occur to me that might be important: > > > > > >1. The size of the handle types (cf. size of a pointer perhaps?) > > > > > >2. 
should we add some sort of table describing the current situation > > >as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > > >MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > > >burned into the binary; whereas OpenMPI uses extern pointers - i.e. > > >ompi_mpi_comm_world is in the initialized data section of libmpi.so, > > >and the value resolved at (dynamic) link time. > > > > > >Cheers, > > > > > >Edric. > > > > > > > -----Original Message----- > > > > From: mpi3-abi-bounces_at_[hidden] > > > [mailto:mpi3-abi- > > > > bounces_at_[hidden]] On Behalf Of Jeff Brown > > > > Sent: Thursday, March 13, 2008 10:11 PM > > > > To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > > > Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > > > I propose a way we can make progress ... > > > > > > > > Let's start populating a matrix (excel spreadsheet) with a column for > > > > each MPI implementation, and rows for the various MPI datatypes, > > > > constants, etc. where the internal implementations varys. I'll kick > > > > it off for OpenMPI and send out. > > > > > > > > The last column of the matrix can be "ABI" where we propose a common > > > > approach across the implementations. > > > > > > > > A couple of driving principles: > > > > 1. the ABI solution shouldn't negatively impact quality of > > >implementation > > > > 2. minimize platform specific solutions > > > > > > > > I'd like to see if we can produce a single ABI that spans platforms. > > > > > > > > comments? > > > > > > > > Jeff > > > > > > > > > > > > _______________________________________________ > > > > mpi3-abi mailing list > > > > mpi3-abi_at_[hidden] > > > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > > >mpi3-abi mailing list > > >mpi3-abi_at_[hidden] > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > > >mpi3-abi mailing list > > >mpi3-abi_at_[hidden] > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi * -------------- next part -------------- A non-text attachment was scrubbed... Name: MPI_ABI_analysis.xls Type: application/octet-stream Size: 83456 bytes Desc: MPI_ABI_analysis.xls URL: From sameer at [hidden] Wed Apr 23 19:59:01 2008 From: sameer at [hidden] (Sameer Shende) Date: Wed, 23 Apr 2008 17:59:01 -0700 (PDT) Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080423171026.02eb4d10@ccs-mail.lanl.gov> Message-ID: Jeff, I was wondering if we should add a column for MorphMPI [http://sourceforge.net/projects/morphmpi] to the spreadsheet. Thanks, - Sameer From jeffb at [hidden] Wed Apr 23 22:43:33 2008 From: jeffb at [hidden] (Jeff Brown) Date: Wed, 23 Apr 2008 21:43:33 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: Message-ID: <6.2.3.4.2.20080423214310.02ee7c08@ccs-mail.lanl.gov> go for it! At 06:59 PM 4/23/2008, Sameer Shende wrote: >Jeff, > I was wondering if we should add a column for MorphMPI >[http://sourceforge.net/projects/morphmpi] to the spreadsheet. 
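For context, MorphMPI is an ABI-translation layer: the application is built once against a fixed set of handle types and constants, and a thin per-implementation shim maps those onto whatever native MPI sits underneath. A minimal sketch of the idea (the names, values, and table size are hypothetical, not MorphMPI's actual API):

    #include <mpi.h>                 /* the native implementation's header */

    typedef int Morph_Comm;          /* fixed-size handle seen by the application */
    #define MORPH_COMM_WORLD 1

    static MPI_Comm morph_comms[16]; /* fixed handle -> native handle */

    int Morph_Init(int *argc, char ***argv)
    {
        int rc = MPI_Init(argc, argv);
        morph_comms[MORPH_COMM_WORLD] = MPI_COMM_WORLD;  /* whatever the native value is */
        return rc;
    }

    int Morph_Comm_rank(Morph_Comm comm, int *rank)
    {
        return MPI_Comm_rank(morph_comms[comm], rank);   /* translate, then call through */
    }

Only the shim needs rebuilding against each MPI implementation; the application binary stays fixed, at the cost of a handle translation per call.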
> Thanks, > - Sameer > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From alexander.supalov at [hidden] Thu Apr 24 03:16:12 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Thu, 24 Apr 2008 09:16:12 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080423171026.02eb4d10@ccs-mail.lanl.gov> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201466AA9@swsmsx413.ger.corp.intel.com> Thanks. Can we add MPICH2? It's different from MPICH. Also, there's slight drift between different MPICH2 versions. Should we address this at all, or just go for the latest and greatest (1.0.7)? -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown Sent: Thursday, April 24, 2008 1:13 AM To: MPI 3.0 ABI working group; MPI 3.0 ABI working group; MPI 3.0 ABI working group; MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting Attached is the beginnings of a spreadsheet to capture the detailed differences in the mpi.h implementations with the OpenMPI column populated. I'll get a start at LAMPI before the telecon. I'll post to the wiki. Talk to y'all in the morning (well, my morning). Jeff At 01:42 PM 4/21/2008, Jeff Brown wrote: >all, > >I scheduled a telecon to discuss status and get somewhat organized >for the meeting: > >Thursday April 24, 10:00 MDT >local number: 606-1201(6-1201) >toll free number: 888 343-0702. > >I'll send out some slides for the 5 minute briefing for the group. > >I'm having a hard time finding time to devote to this, but I'll have >a cut at the OpenMPI and LAMPI analysis prior to the telecon. We >need someone to look at MPICH, and the vendor implementations need to >be posted. > >Jeff > > > >At 10:03 AM 4/16/2008, Jeff Brown wrote: > >Yes, it's time to put some cycles toward this. Let's start > >populating the matrix and have a telecon toward the end of next > >week. I'll schedule a WG working session at the meeting. > > > >I'll take a look at OpenMPI and LAMPI, the two primary MPI > >implementations we use at LANL, and post to the wiki by the end of > >the week. Others, please do the same for your MPI implementation > >(especially the vendors). Overlap is OK. > > > >I'll send out specifics on the telecon. Let's shoot for Thursday > >April 24, 9:00 A.M. MST. > > > >Jeff > > > >At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > > >Folks, > > > > > >Are we planning on a WG update to report at the April 28-30 Forum > > >meeting? We have started the process of identifying the mpi.h > > >differences, but I dont think we have synthesized the data yet, or > > >come to any conclusions/next steps... Or did I miss something here? > > > > > >Thanx! > > >Kannan > > > > > >-----Original Message----- > > >From: mpi3-abi-bounces_at_[hidden] > > >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > > >Sent: Monday, March 17, 2008 4:18 AM > > >To: MPI 3.0 ABI working group > > >Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > > > >I'm not sure how best to express this, but there are a couple of > > >things that occur to me that might be important: > > > > > >1. The size of the handle types (cf. size of a pointer perhaps?) > > > > > >2. should we add some sort of table describing the current situation > > >as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. 
> > >MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > > >burned into the binary; whereas OpenMPI uses extern pointers - i.e. > > >ompi_mpi_comm_world is in the initialized data section of libmpi.so, > > >and the value resolved at (dynamic) link time. > > > > > >Cheers, > > > > > >Edric. > > > > > > > -----Original Message----- > > > > From: mpi3-abi-bounces_at_[hidden] > > > [mailto:mpi3-abi- > > > > bounces_at_[hidden]] On Behalf Of Jeff Brown > > > > Sent: Thursday, March 13, 2008 10:11 PM > > > > To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > > > Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > > > I propose a way we can make progress ... > > > > > > > > Let's start populating a matrix (excel spreadsheet) with a column for > > > > each MPI implementation, and rows for the various MPI datatypes, > > > > constants, etc. where the internal implementations varys. I'll kick > > > > it off for OpenMPI and send out. > > > > > > > > The last column of the matrix can be "ABI" where we propose a common > > > > approach across the implementations. > > > > > > > > A couple of driving principles: > > > > 1. the ABI solution shouldn't negatively impact quality of > > >implementation > > > > 2. minimize platform specific solutions > > > > > > > > I'd like to see if we can produce a single ABI that spans platforms. > > > > > > > > comments? > > > > > > > > Jeff > > > > > > > > > > > > _______________________________________________ > > > > mpi3-abi mailing list > > > > mpi3-abi_at_[hidden] > > > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > > >mpi3-abi mailing list > > >mpi3-abi_at_[hidden] > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > > >mpi3-abi mailing list > > >mpi3-abi_at_[hidden] > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From Edric.Ellis at [hidden] Thu Apr 24 05:12:44 2008 From: Edric.Ellis at [hidden] (Edric Ellis) Date: Thu, 24 Apr 2008 11:12:44 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201466AA9@swsmsx413.ger.corp.intel.com> Message-ID: <6C017874065E6343976DCAFAEB1A3C291AFB6C5A09@EXCHANGE-UK.ad.mathworks.com> I've taken our MPICH2-1.0.3 build, and extracted the stuff as per attached. 
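The kind of per-platform values being compared can be dumped with a small probe compiled against each installed mpi.h and run on each target; the following is a hypothetical sketch, separate from the extraction script described next:

    #include <stdio.h>
    #include <mpi.h>

    /* Print the ABI-visible properties being compared in the matrix:
     * handle widths, MPI_Aint width, and a few integer constants.
     * No MPI_Init is needed since only sizeof and constants are used. */
    int main(void)
    {
        printf("sizeof(MPI_Comm)       = %zu\n", sizeof(MPI_Comm));
        printf("sizeof(MPI_Datatype)   = %zu\n", sizeof(MPI_Datatype));
        printf("sizeof(MPI_Request)    = %zu\n", sizeof(MPI_Request));
        printf("sizeof(MPI_Aint)       = %zu\n", sizeof(MPI_Aint));
        printf("MPI_BSEND_OVERHEAD     = %d\n", MPI_BSEND_OVERHEAD);
        printf("MPI_MAX_PROCESSOR_NAME = %d\n", MPI_MAX_PROCESSOR_NAME);
        return 0;
    }

Diffing the output across builds then yields rows for the spreadsheet without reading each header by hand.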
I wrote a crufty script to pull this stuff out - for interest, here's how MPICH2 changes by platform and by word size:

$ diff glnxa64.csv glnx86.csv
8c8
< MPI_Aint,typedef long
---
> MPI_Aint,typedef int
30c30
< MPI_BSEND_OVERHEAD,95
---
> MPI_BSEND_OVERHEAD,59
197c197
< MPI_LONG,(0x4c000807)
---
> MPI_LONG,(0x4c000407)
200c200
< MPI_LONG_DOUBLE,(0x4c00100c)
---
> MPI_LONG_DOUBLE,(0x4c000c0c)
204c204
< MPI_UNSIGNED_LONG,(0x4c000808)
---
> MPI_UNSIGNED_LONG,(0x4c000408)

$ diff win32.csv glnx86.csv
200c200
< MPI_LONG_DOUBLE,(0x4c00080c)
---
> MPI_LONG_DOUBLE,(0x4c000c0c)
214c214
< MPI_WCHAR,(0x4c00020e)
---
> MPI_WCHAR,(0x4c00040e)
218,219c218,219
< MPI_2COMPLEX,(0x4c001024)
< MPI_2DOUBLE_COMPLEX,(0x4c002025)
---
> MPI_2COMPLEX,(MPI_DATATYPE_NULL)
> MPI_2DOUBLE_COMPLEX,(MPI_DATATYPE_NULL)

Cheers, Edric. > -----Original Message----- > From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi- > bounces_at_[hidden]] On Behalf Of Supalov, Alexander > Sent: Thursday, April 24, 2008 9:16 AM > To: MPI 3.0 ABI working group > Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > > Thanks. Can we add MPICH2? It's different from MPICH. Also, there's > slight drift between different MPICH2 versions. Should we address this > at all, or just go for the latest and greatest (1.0.7)? > > -----Original Message----- > From: mpi3-abi-bounces_at_[hidden] > [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown > Sent: Thursday, April 24, 2008 1:13 AM > To: MPI 3.0 ABI working group; MPI 3.0 ABI working group; MPI 3.0 ABI > working group; MPI 3.0 ABI working group > Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > > Attached is the beginnings of a spreadsheet to capture the detailed > differences in the mpi.h implementations with the OpenMPI column > populated. I'll get a start at LAMPI before the telecon. > > I'll post to the wiki. > > Talk to y'all in the morning (well, my morning). > > Jeff > > At 01:42 PM 4/21/2008, Jeff Brown wrote: > >all, > > > >I scheduled a telecon to discuss status and get somewhat organized > >for the meeting: > > > >Thursday April 24, 10:00 MDT > >local number: 606-1201(6-1201) > >toll free number: 888 343-0702. > > > >I'll send out some slides for the 5 minute briefing for the group. > > > >I'm having a hard time finding time to devote to this, but I'll have > >a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > >need someone to look at MPICH, and the vendor implementations need to > >be posted. > > > >Jeff > > > > > > > >At 10:03 AM 4/16/2008, Jeff Brown wrote: > > >Yes, it's time to put some cycles toward this. Let's start > > >populating the matrix and have a telecon toward the end of next > > >week. I'll schedule a WG working session at the meeting. > > > > > >I'll take a look at OpenMPI and LAMPI, the two primary MPI > > >implementations we use at LANL, and post to the wiki by the end of > > >the week. Others, please do the same for your MPI implementation > > >(especially the vendors). Overlap is OK. > > > > > >I'll send out specifics on the telecon. Let's shoot for Thursday > > >April 24, 9:00 A.M. MST. > > > > > >Jeff > > > > > >At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > > > >Folks, > > > > > > > >Are we planning on a WG update to report at the April 28-30 Forum > > > >meeting? We have started the process of identifying the mpi.h > > > >differences, but I don't think we have synthesized the data yet, or > > > >come to any conclusions/next steps... Or did I miss something here? > > > > > > > >Thanx! 
> > > >Kannan > > > > > > > >-----Original Message----- > > > >From: mpi3-abi-bounces_at_[hidden] > > > >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric > Ellis > > > >Sent: Monday, March 17, 2008 4:18 AM > > > >To: MPI 3.0 ABI working group > > > >Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > > > > > > >I'm not sure how best to express this, but there are a couple of > > > >things that occur to me that might be important: > > > > > > > >1. The size of the handle types (cf. size of a pointer perhaps?) > > > > > > > >2. should we add some sort of table describing the current > situation > > > >as to how applications pick up the value of e.g. MPI_COMM_WORLD? > E.g. > > > >MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > > > >burned into the binary; whereas OpenMPI uses extern pointers - i.e. > > > >ompi_mpi_comm_world is in the initialized data section of > libmpi.so, > > > >and the value resolved at (dynamic) link time. > > > > > > > >Cheers, > > > > > > > >Edric. > > > > > > > > > -----Original Message----- > > > > > From: mpi3-abi-bounces_at_[hidden] > > > > [mailto:mpi3-abi- > > > > > bounces_at_[hidden]] On Behalf Of Jeff Brown > > > > > Sent: Thursday, March 13, 2008 10:11 PM > > > > > To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > > > > Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > > > > > I propose a way we can make progress ... > > > > > > > > > > Let's start populating a matrix (excel spreadsheet) with a > column for > > > > > each MPI implementation, and rows for the various MPI datatypes, > > > > > constants, etc. where the internal implementations varys. I'll > kick > > > > > it off for OpenMPI and send out. > > > > > > > > > > The last column of the matrix can be "ABI" where we propose a > common > > > > > approach across the implementations. > > > > > > > > > > A couple of driving principles: > > > > > 1. the ABI solution shouldn't negatively impact quality of > > > >implementation > > > > > 2. minimize platform specific solutions > > > > > > > > > > I'd like to see if we can produce a single ABI that spans > platforms. > > > > > > > > > > comments? > > > > > > > > > > Jeff > > > > > > > > > > > > > > > _______________________________________________ > > > > > mpi3-abi mailing list > > > > > mpi3-abi_at_[hidden] > > > > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > > > >_______________________________________________ > > > >mpi3-abi mailing list > > > >mpi3-abi_at_[hidden] > > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > > > >_______________________________________________ > > > >mpi3-abi mailing list > > > >mpi3-abi_at_[hidden] > > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > > > > >_______________________________________________ > > >mpi3-abi mailing list > > >mpi3-abi_at_[hidden] > > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > --------------------------------------------------------------------- > Intel GmbH > Dornacher Strasse 1 > 85622 Feldkirchen/Muenchen Germany > Sitz der Gesellschaft: Feldkirchen bei Muenchen > Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer > Registergericht: Muenchen HRB 47456 Ust.-IdNr. 
> VAT Registration No.: DE129385895 > Citibank Frankfurt (BLZ 502 109 00) 600119052 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi * -------------- next part -------------- A non-text attachment was scrubbed... Name: MPI_ABI___MPICH2.xls Type: application/vnd.ms-excel Size: 102912 bytes Desc: MPI ABI + MPICH2.xls URL: From jsquyres at [hidden] Thu Apr 24 06:51:58 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Thu, 24 Apr 2008 07:51:58 -0400 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6C017874065E6343976DCAFAEB1A3C291AFB6C5A09@EXCHANGE-UK.ad.mathworks.com> Message-ID: <55A1AF8F-EE72-4908-ACBB-DC5BAF596A9E@cisco.com> Note that Open MPI changes some things by platform/compiler as well (MPI_Aint is an obvious one); I suspect that MPICH* may, too...? On Apr 24, 2008, at 6:12 AM, Edric Ellis wrote: > I've taken our MPICH2-1.0.3 build, and extracted the stuff as per > attached. > > I wrote a crufty script to pull this stuff out - for interest, > here's how MPICH2 changes by platform and by word size: > > $ diff glnxa64.csv glnx86.csv > 8c8 > < MPI_Aint,typedef long > --- >> MPI_Aint,typedef int > 30c30 > < MPI_BSEND_OVERHEAD,95 > --- >> MPI_BSEND_OVERHEAD,59 > 197c197 > < MPI_LONG,(0x4c000807) > --- >> MPI_LONG,(0x4c000407) > 200c200 > < MPI_LONG_DOUBLE,(0x4c00100c) > --- >> MPI_LONG_DOUBLE,(0x4c000c0c) > 204c204 > < MPI_UNSIGNED_LONG,(0x4c000808) > --- >> MPI_UNSIGNED_LONG,(0x4c000408) > > $ diff win32.csv glnx86.csv > 200c200 > < MPI_LONG_DOUBLE,(0x4c00080c) > --- >> MPI_LONG_DOUBLE,(0x4c000c0c) > 214c214 > < MPI_WCHAR,(0x4c00020e) > --- >> MPI_WCHAR,(0x4c00040e) > 218,219c218,219 > < MPI_2COMPLEX,(0x4c001024) > < MPI_2DOUBLE_COMPLEX,(0x4c002025) > --- >> MPI_2COMPLEX,(MPI_DATATYPE_NULL) >> MPI_2DOUBLE_COMPLEX,(MPI_DATATYPE_NULL) > > Cheers, > > Edric. > >> -----Original Message----- >> From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi- >> bounces_at_[hidden]] On Behalf Of Supalov, Alexander >> Sent: Thursday, April 24, 2008 9:16 AM >> To: MPI 3.0 ABI working group >> Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting >> >> Thanks. Can we add MPICH2? It's different from MPICH. Also, there's >> slight drift between different MPICH2 versions. Should we address >> this >> at all, or just go for the latest and greatest (1.0.7)? >> >> -----Original Message----- >> From: mpi3-abi-bounces_at_[hidden] >> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown >> Sent: Thursday, April 24, 2008 1:13 AM >> To: MPI 3.0 ABI working group; MPI 3.0 ABI working group; MPI 3.0 ABI >> working group; MPI 3.0 ABI working group >> Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting >> >> Attached is the beginnings of a spreadsheet to capture the detailed >> differences in the mpi.h implementations with the OpenMPI column >> populated. I'll get a start at LAMPI before the telecon. >> >> I'll post to the wiki. >> >> Talk to y'all in the morning (well, my morning). 
>> >> Jeff >> >> At 01:42 PM 4/21/2008, Jeff Brown wrote: >>> all, >>> >>> I scheduled a telecon to discuss status and get somewhat organized >>> for the meeting: >>> >>> Thursday April 24, 10:00 MDT >>> local number: 606-1201(6-1201) >>> toll free number: 888 343-0702. >>> >>> I'll send out some slides for the 5 minute briefing for the group. >>> >>> I'm having a hard time finding time to devote to this, but I'll have >>> a cut at the OpenMPI and LAMPI analysis prior to the telecon. We >>> need someone to look at MPICH, and the vendor implementations need >>> to >>> be posted. >>> >>> Jeff >>> >>> >>> >>> At 10:03 AM 4/16/2008, Jeff Brown wrote: >>>> Yes, it's time to put some cycles toward this. Let's start >>>> populating the matrix and have a telecon toward the end of next >>>> week. I'll schedule a WG working session at the meeting. >>>> >>>> I'll take a look at OpenMPI and LAMPI, the two primary MPI >>>> implementations we use at LANL, and post to the wiki by the end of >>>> the week. Others, please do the same for your MPI implementation >>>> (especially the vendors). Overlap is OK. >>>> >>>> I'll send out specifics on the telecon. Let's shoot for Thursday >>>> April 24, 9:00 A.M. MST. >>>> >>>> Jeff >>>> >>>> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: >>>>> Folks, >>>>> >>>>> Are we planning on a WG update to report at the April 28-30 Forum >>>>> meeting? We have started the process of identifying the mpi.h >>>>> differences, but I dont think we have synthesized the data yet, or >>>>> come to any conclusions/next steps... Or did I miss something >>>>> here? >>>>> >>>>> Thanx! >>>>> Kannan >>>>> >>>>> -----Original Message----- >>>>> From: mpi3-abi-bounces_at_[hidden] >>>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric >> Ellis >>>>> Sent: Monday, March 17, 2008 4:18 AM >>>>> To: MPI 3.0 ABI working group >>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>>> >>>>> >>>>> I'm not sure how best to express this, but there are a couple of >>>>> things that occur to me that might be important: >>>>> >>>>> 1. The size of the handle types (cf. size of a pointer perhaps?) >>>>> >>>>> 2. should we add some sort of table describing the current >> situation >>>>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? >> E.g. >>>>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is >>>>> burned into the binary; whereas OpenMPI uses extern pointers - >>>>> i.e. >>>>> ompi_mpi_comm_world is in the initialized data section of >> libmpi.so, >>>>> and the value resolved at (dynamic) link time. >>>>> >>>>> Cheers, >>>>> >>>>> Edric. >>>>> >>>>>> -----Original Message----- >>>>>> From: mpi3-abi-bounces_at_[hidden] >>>>> [mailto:mpi3-abi- >>>>>> bounces_at_[hidden]] On Behalf Of Jeff Brown >>>>>> Sent: Thursday, March 13, 2008 10:11 PM >>>>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] >>>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>>>> >>>>>> I propose a way we can make progress ... >>>>>> >>>>>> Let's start populating a matrix (excel spreadsheet) with a >> column for >>>>>> each MPI implementation, and rows for the various MPI datatypes, >>>>>> constants, etc. where the internal implementations varys. I'll >> kick >>>>>> it off for OpenMPI and send out. >>>>>> >>>>>> The last column of the matrix can be "ABI" where we propose a >> common >>>>>> approach across the implementations. >>>>>> >>>>>> A couple of driving principles: >>>>>> 1. the ABI solution shouldn't negatively impact quality of >>>>> implementation >>>>>> 2. 
minimize platform specific solutions >>>>>> >>>>>> I'd like to see if we can produce a single ABI that spans >> platforms. >>>>>> >>>>>> comments? >>>>>> >>>>>> Jeff >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> mpi3-abi mailing list >>>>>> mpi3-abi_at_[hidden] >>>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>>> >>>>> _______________________________________________ >>>>> mpi3-abi mailing list >>>>> mpi3-abi_at_[hidden] >>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>>> >>>>> _______________________________________________ >>>>> mpi3-abi mailing list >>>>> mpi3-abi_at_[hidden] >>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> --------------------------------------------------------------------- >> Intel GmbH >> Dornacher Strasse 1 >> 85622 Feldkirchen/Muenchen Germany >> Sitz der Gesellschaft: Feldkirchen bei Muenchen >> Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer >> Registergericht: Muenchen HRB 47456 Ust.-IdNr. >> VAT Registration No.: DE129385895 >> Citibank Frankfurt (BLZ 502 109 00) 600119052 >> >> This e-mail and any attachments may contain confidential material for >> the sole use of the intended recipient(s). Any review or distribution >> by others is strictly prohibited. If you are not the intended >> recipient, please contact the sender and delete all copies. >> >> >> _______________________________________________ >> mpi3-abi mailing list >> mpi3-abi_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi -- Jeff Squyres Cisco Systems From alexander.supalov at [hidden] Thu Apr 24 07:11:30 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Thu, 24 Apr 2008 13:11:30 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <55A1AF8F-EE72-4908-ACBB-DC5BAF596A9E@cisco.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201466D00@swsmsx413.ger.corp.intel.com> Hi, Indeed. MPICH2 1.0.3 mpi.h files for IA-32 and Intel(R) 64 are available in the Wiki comparison page. Best regards. Alexander -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Thursday, April 24, 2008 1:52 PM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting Note that Open MPI changes some things by platform/compiler as well (MPI_Aint is an obvious one); I suspect that MPICH* may, too...? On Apr 24, 2008, at 6:12 AM, Edric Ellis wrote: > I've taken our MPICH2-1.0.3 build, and extracted the stuff as per > attached. 
> > I wrote a crufty script to pull this stuff out - for interest, > here's how MPICH2 changes by platform and by word size: > > $ diff glnxa64.csv glnx86.csv > 8c8 > < MPI_Aint,typedef long > --- >> MPI_Aint,typedef int > 30c30 > < MPI_BSEND_OVERHEAD,95 > --- >> MPI_BSEND_OVERHEAD,59 > 197c197 > < MPI_LONG,(0x4c000807) > --- >> MPI_LONG,(0x4c000407) > 200c200 > < MPI_LONG_DOUBLE,(0x4c00100c) > --- >> MPI_LONG_DOUBLE,(0x4c000c0c) > 204c204 > < MPI_UNSIGNED_LONG,(0x4c000808) > --- >> MPI_UNSIGNED_LONG,(0x4c000408) > > $ diff win32.csv glnx86.csv > 200c200 > < MPI_LONG_DOUBLE,(0x4c00080c) > --- >> MPI_LONG_DOUBLE,(0x4c000c0c) > 214c214 > < MPI_WCHAR,(0x4c00020e) > --- >> MPI_WCHAR,(0x4c00040e) > 218,219c218,219 > < MPI_2COMPLEX,(0x4c001024) > < MPI_2DOUBLE_COMPLEX,(0x4c002025) > --- >> MPI_2COMPLEX,(MPI_DATATYPE_NULL) >> MPI_2DOUBLE_COMPLEX,(MPI_DATATYPE_NULL) > > Cheers, > > Edric. > >> -----Original Message----- >> From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi- >> bounces_at_[hidden]] On Behalf Of Supalov, Alexander >> Sent: Thursday, April 24, 2008 9:16 AM >> To: MPI 3.0 ABI working group >> Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting >> >> Thanks. Can we add MPICH2? It's different from MPICH. Also, there's >> slight drift between different MPICH2 versions. Should we address >> this >> at all, or just go for the latest and greatest (1.0.7)? >> >> -----Original Message----- >> From: mpi3-abi-bounces_at_[hidden] >> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown >> Sent: Thursday, April 24, 2008 1:13 AM >> To: MPI 3.0 ABI working group; MPI 3.0 ABI working group; MPI 3.0 ABI >> working group; MPI 3.0 ABI working group >> Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting >> >> Attached is the beginnings of a spreadsheet to capture the detailed >> differences in the mpi.h implementations with the OpenMPI column >> populated. I'll get a start at LAMPI before the telecon. >> >> I'll post to the wiki. >> >> Talk to y'all in the morning (well, my morning). >> >> Jeff >> >> At 01:42 PM 4/21/2008, Jeff Brown wrote: >>> all, >>> >>> I scheduled a telecon to discuss status and get somewhat organized >>> for the meeting: >>> >>> Thursday April 24, 10:00 MDT >>> local number: 606-1201(6-1201) >>> toll free number: 888 343-0702. >>> >>> I'll send out some slides for the 5 minute briefing for the group. >>> >>> I'm having a hard time finding time to devote to this, but I'll have >>> a cut at the OpenMPI and LAMPI analysis prior to the telecon. We >>> need someone to look at MPICH, and the vendor implementations need >>> to >>> be posted. >>> >>> Jeff >>> >>> >>> >>> At 10:03 AM 4/16/2008, Jeff Brown wrote: >>>> Yes, it's time to put some cycles toward this. Let's start >>>> populating the matrix and have a telecon toward the end of next >>>> week. I'll schedule a WG working session at the meeting. >>>> >>>> I'll take a look at OpenMPI and LAMPI, the two primary MPI >>>> implementations we use at LANL, and post to the wiki by the end of >>>> the week. Others, please do the same for your MPI implementation >>>> (especially the vendors). Overlap is OK. >>>> >>>> I'll send out specifics on the telecon. Let's shoot for Thursday >>>> April 24, 9:00 A.M. MST. >>>> >>>> Jeff >>>> >>>> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: >>>>> Folks, >>>>> >>>>> Are we planning on a WG update to report at the April 28-30 Forum >>>>> meeting? 
We have started the process of identifying the mpi.h >>>>> differences, but I dont think we have synthesized the data yet, or >>>>> come to any conclusions/next steps... Or did I miss something >>>>> here? >>>>> >>>>> Thanx! >>>>> Kannan >>>>> >>>>> -----Original Message----- >>>>> From: mpi3-abi-bounces_at_[hidden] >>>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric >> Ellis >>>>> Sent: Monday, March 17, 2008 4:18 AM >>>>> To: MPI 3.0 ABI working group >>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>>> >>>>> >>>>> I'm not sure how best to express this, but there are a couple of >>>>> things that occur to me that might be important: >>>>> >>>>> 1. The size of the handle types (cf. size of a pointer perhaps?) >>>>> >>>>> 2. should we add some sort of table describing the current >> situation >>>>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? >> E.g. >>>>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is >>>>> burned into the binary; whereas OpenMPI uses extern pointers - >>>>> i.e. >>>>> ompi_mpi_comm_world is in the initialized data section of >> libmpi.so, >>>>> and the value resolved at (dynamic) link time. >>>>> >>>>> Cheers, >>>>> >>>>> Edric. >>>>> >>>>>> -----Original Message----- >>>>>> From: mpi3-abi-bounces_at_[hidden] >>>>> [mailto:mpi3-abi- >>>>>> bounces_at_[hidden]] On Behalf Of Jeff Brown >>>>>> Sent: Thursday, March 13, 2008 10:11 PM >>>>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] >>>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>>>> >>>>>> I propose a way we can make progress ... >>>>>> >>>>>> Let's start populating a matrix (excel spreadsheet) with a >> column for >>>>>> each MPI implementation, and rows for the various MPI datatypes, >>>>>> constants, etc. where the internal implementations varys. I'll >> kick >>>>>> it off for OpenMPI and send out. >>>>>> >>>>>> The last column of the matrix can be "ABI" where we propose a >> common >>>>>> approach across the implementations. >>>>>> >>>>>> A couple of driving principles: >>>>>> 1. the ABI solution shouldn't negatively impact quality of >>>>> implementation >>>>>> 2. minimize platform specific solutions >>>>>> >>>>>> I'd like to see if we can produce a single ABI that spans >> platforms. >>>>>> >>>>>> comments? >>>>>> >>>>>> Jeff >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> mpi3-abi mailing list >>>>>> mpi3-abi_at_[hidden] >>>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>>> >>>>> _______________________________________________ >>>>> mpi3-abi mailing list >>>>> mpi3-abi_at_[hidden] >>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>>> >>>>> _______________________________________________ >>>>> mpi3-abi mailing list >>>>> mpi3-abi_at_[hidden] >>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> --------------------------------------------------------------------- >> Intel GmbH >> Dornacher Strasse 1 >> 85622 Feldkirchen/Muenchen Germany >> Sitz der Gesellschaft: Feldkirchen bei Muenchen >> Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer >> Registergericht: Muenchen HRB 47456 Ust.-IdNr. 
>> VAT Registration No.: DE129385895 >> Citibank Frankfurt (BLZ 502 109 00) 600119052 >> >> This e-mail and any attachments may contain confidential material for >> the sole use of the intended recipient(s). Any review or distribution >> by others is strictly prohibited. If you are not the intended >> recipient, please contact the sender and delete all copies. >> >> >> _______________________________________________ >> mpi3-abi mailing list >> mpi3-abi_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi -- Jeff Squyres Cisco Systems _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From alexander.supalov at [hidden] Thu Apr 24 11:01:51 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Thu, 24 Apr 2008 17:01:51 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080421133743.02d31f30@ccs-mail.lanl.gov> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201466ECC@swsmsx413.ger.corp.intel.com> Hi, Is there an international number for the bridge? I cannot get to either of the numbers mentioned below. Best regards. Alexander -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown Sent: Monday, April 21, 2008 9:42 PM To: MPI 3.0 ABI working group; MPI 3.0 ABI working group; MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting all, I scheduled a telecon to discuss status and get somewhat organized for the meeting: Thursday April 24, 10:00 MDT local number: 606-1201(6-1201) toll free number: 888 343-0702. I'll send out some slides for the 5 minute briefing for the group. I'm having a hard time finding time to devote to this, but I'll have a cut at the OpenMPI and LAMPI analysis prior to the telecon. We need someone to look at MPICH, and the vendor implementations need to be posted. Jeff At 10:03 AM 4/16/2008, Jeff Brown wrote: >Yes, it's time to put some cycles toward this. Let's start >populating the matrix and have a telecon toward the end of next >week. I'll schedule a WG working session at the meeting. > >I'll take a look at OpenMPI and LAMPI, the two primary MPI >implementations we use at LANL, and post to the wiki by the end of >the week. Others, please do the same for your MPI implementation >(especially the vendors). Overlap is OK. > >I'll send out specifics on the telecon. Let's shoot for Thursday >April 24, 9:00 A.M. MST. > >Jeff > >At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > >Folks, > > > >Are we planning on a WG update to report at the April 28-30 Forum > >meeting? 
We have started the process of identifying the mpi.h > >differences, but I dont think we have synthesized the data yet, or > >come to any conclusions/next steps... Or did I miss something here? > > > >Thanx! > >Kannan > > > >-----Original Message----- > >From: mpi3-abi-bounces_at_[hidden] > >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > >Sent: Monday, March 17, 2008 4:18 AM > >To: MPI 3.0 ABI working group > >Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > >I'm not sure how best to express this, but there are a couple of > >things that occur to me that might be important: > > > >1. The size of the handle types (cf. size of a pointer perhaps?) > > > >2. should we add some sort of table describing the current situation > >as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > >MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > >burned into the binary; whereas OpenMPI uses extern pointers - i.e. > >ompi_mpi_comm_world is in the initialized data section of libmpi.so, > >and the value resolved at (dynamic) link time. > > > >Cheers, > > > >Edric. > > > > > -----Original Message----- > > > From: mpi3-abi-bounces_at_[hidden] > > [mailto:mpi3-abi- > > > bounces_at_[hidden]] On Behalf Of Jeff Brown > > > Sent: Thursday, March 13, 2008 10:11 PM > > > To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > > Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > > > > > I propose a way we can make progress ... > > > > > > Let's start populating a matrix (excel spreadsheet) with a column for > > > each MPI implementation, and rows for the various MPI datatypes, > > > constants, etc. where the internal implementations varys. I'll kick > > > it off for OpenMPI and send out. > > > > > > The last column of the matrix can be "ABI" where we propose a common > > > approach across the implementations. > > > > > > A couple of driving principles: > > > 1. the ABI solution shouldn't negatively impact quality of > >implementation > > > 2. minimize platform specific solutions > > > > > > I'd like to see if we can produce a single ABI that spans platforms. > > > > > > comments? > > > > > > Jeff > > > > > > > > > _______________________________________________ > > > mpi3-abi mailing list > > > mpi3-abi_at_[hidden] > > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. 
If you are not the intended recipient, please contact the sender and delete all copies. From Terry.Dontje at [hidden] Thu Apr 24 11:01:48 2008 From: Terry.Dontje at [hidden] (Terry Dontje) Date: Thu, 24 Apr 2008 12:01:48 -0400 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080421133743.02d31f30@ccs-mail.lanl.gov> Message-ID: <4810AEEC.5090502@sun.com> Am I the only one getting an "all circuits are busy" message from the number below? --td Jeff Brown wrote: > all, > > I scheduled a telecon to discuss status and get somewhat organized > for the meeting: > > Thursday April 24, 10:00 MDT > local number: 606-1201(6-1201) > toll free number: 888 343-0702. > > I'll send out some slides for the 5 minute briefing for the group. > > I'm having a hard time finding time to devote to this, but I'll have > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > need someone to look at MPICH, and the vendor implementations need to > be posted. > > Jeff > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > >> Yes, it's time to put some cycles toward this. Let's start >> populating the matrix and have a telecon toward the end of next >> week. I'll schedule a WG working session at the meeting. >> >> I'll take a look at OpenMPI and LAMPI, the two primary MPI >> implementations we use at LANL, and post to the wiki by the end of >> the week. Others, please do the same for your MPI implementation >> (especially the vendors). Overlap is OK. >> >> I'll send out specifics on the telecon. Let's shoot for Thursday >> April 24, 9:00 A.M. MST. >> >> Jeff >> >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: >> >>> Folks, >>> >>> Are we planning on a WG update to report at the April 28-30 Forum >>> meeting? We have started the process of identifying the mpi.h >>> differences, but I dont think we have synthesized the data yet, or >>> come to any conclusions/next steps... Or did I miss something here? >>> >>> Thanx! >>> Kannan >>> >>> -----Original Message----- >>> From: mpi3-abi-bounces_at_[hidden] >>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis >>> Sent: Monday, March 17, 2008 4:18 AM >>> To: MPI 3.0 ABI working group >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>> >>> >>> I'm not sure how best to express this, but there are a couple of >>> things that occur to me that might be important: >>> >>> 1. The size of the handle types (cf. size of a pointer perhaps?) >>> >>> 2. should we add some sort of table describing the current situation >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, >>> and the value resolved at (dynamic) link time. >>> >>> Cheers, >>> >>> Edric. >>> >>> >>>> -----Original Message----- >>>> From: mpi3-abi-bounces_at_[hidden] >>>> >>> [mailto:mpi3-abi- >>> >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown >>>> Sent: Thursday, March 13, 2008 10:11 PM >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>> >>>> I propose a way we can make progress ... >>>> >>>> Let's start populating a matrix (excel spreadsheet) with a column for >>>> each MPI implementation, and rows for the various MPI datatypes, >>>> constants, etc. where the internal implementations varys. I'll kick >>>> it off for OpenMPI and send out. 
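Edric's point 2 in the quoted thread is worth pinning down with code. Below is a hypothetical, self-contained illustration of the two conventions (invented names throughout; this is neither the real MPICH2 nor the real OpenMPI header, only the shape of each approach):

    /* Two ways an mpi.h can expose a predefined handle such as MPI_COMM_WORLD. */
    #include <stdio.h>

    /* Style 1 (MPICH2-like): the handle is an integer constant, so the bit
     * pattern 0x44000000 is burned into every application binary built
     * against this header. */
    typedef int CommAsInt;
    #define EXAMPLE_COMM_WORLD_INT ((CommAsInt) 0x44000000)

    /* Style 2 (OpenMPI-like): the handle is the address of an object that in
     * real life lives in libmpi.so's initialized data section and is resolved
     * by the dynamic linker.  It is defined locally here only so the sketch
     * compiles and runs on its own. */
    struct example_comm { int unused; };
    struct example_comm example_comm_world;   /* normally exported by the library */
    typedef struct example_comm *CommAsPtr;
    #define EXAMPLE_COMM_WORLD_PTR (&example_comm_world)

    int main(void)
    {
        printf("integer-style handle: 0x%x\n", (unsigned) EXAMPLE_COMM_WORLD_INT);
        printf("pointer-style handle: %p\n", (void *) EXAMPLE_COMM_WORLD_PTR);
        return 0;
    }

The matrix rows effectively record which of these two shapes, and which widths and values, each implementation chose.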
>>>> >>>> The last column of the matrix can be "ABI" where we propose a common >>>> approach across the implementations. >>>> >>>> A couple of driving principles: >>>> 1. the ABI solution shouldn't negatively impact quality of >>>> >>> implementation >>> >>>> 2. minimize platform specific solutions >>>> >>>> I'd like to see if we can produce a single ABI that spans platforms. >>>> >>>> comments? >>>> >>>> Jeff >>>> >>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >> _______________________________________________ >> mpi3-abi mailing list >> mpi3-abi_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> > > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > From kannan.narasimhan at [hidden] Thu Apr 24 11:03:49 2008 From: kannan.narasimhan at [hidden] (Narasimhan, Kannan) Date: Thu, 24 Apr 2008 16:03:49 +0000 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <4810AEEC.5090502@sun.com> Message-ID: I get it too .... -Kannan- -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Terry Dontje Sent: Thursday, April 24, 2008 11:02 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting Am I the only one getting an "all circuits are busy" message from the number below? --td Jeff Brown wrote: > all, > > I scheduled a telecon to discuss status and get somewhat organized for > the meeting: > > Thursday April 24, 10:00 MDT > local number: 606-1201(6-1201) > toll free number: 888 343-0702. > > I'll send out some slides for the 5 minute briefing for the group. > > I'm having a hard time finding time to devote to this, but I'll have a > cut at the OpenMPI and LAMPI analysis prior to the telecon. We need > someone to look at MPICH, and the vendor implementations need to be > posted. > > Jeff > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > >> Yes, it's time to put some cycles toward this. Let's start >> populating the matrix and have a telecon toward the end of next week. >> I'll schedule a WG working session at the meeting. >> >> I'll take a look at OpenMPI and LAMPI, the two primary MPI >> implementations we use at LANL, and post to the wiki by the end of >> the week. Others, please do the same for your MPI implementation >> (especially the vendors). Overlap is OK. >> >> I'll send out specifics on the telecon. Let's shoot for Thursday >> April 24, 9:00 A.M. MST. >> >> Jeff >> >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: >> >>> Folks, >>> >>> Are we planning on a WG update to report at the April 28-30 Forum >>> meeting? We have started the process of identifying the mpi.h >>> differences, but I dont think we have synthesized the data yet, or >>> come to any conclusions/next steps... Or did I miss something here? >>> >>> Thanx! 
>>> Kannan >>> >>> -----Original Message----- >>> From: mpi3-abi-bounces_at_[hidden] >>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric >>> Ellis >>> Sent: Monday, March 17, 2008 4:18 AM >>> To: MPI 3.0 ABI working group >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>> >>> >>> I'm not sure how best to express this, but there are a couple of >>> things that occur to me that might be important: >>> >>> 1. The size of the handle types (cf. size of a pointer perhaps?) >>> >>> 2. should we add some sort of table describing the current situation >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, >>> and the value resolved at (dynamic) link time. >>> >>> Cheers, >>> >>> Edric. >>> >>> >>>> -----Original Message----- >>>> From: mpi3-abi-bounces_at_[hidden] >>>> >>> [mailto:mpi3-abi- >>> >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown >>>> Sent: Thursday, March 13, 2008 10:11 PM >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>> >>>> I propose a way we can make progress ... >>>> >>>> Let's start populating a matrix (excel spreadsheet) with a column >>>> for each MPI implementation, and rows for the various MPI >>>> datatypes, constants, etc. where the internal implementations >>>> varys. I'll kick it off for OpenMPI and send out. >>>> >>>> The last column of the matrix can be "ABI" where we propose a >>>> common approach across the implementations. >>>> >>>> A couple of driving principles: >>>> 1. the ABI solution shouldn't negatively impact quality of >>>> >>> implementation >>> >>>> 2. minimize platform specific solutions >>>> >>>> I'd like to see if we can produce a single ABI that spans platforms. >>>> >>>> comments? >>>> >>>> Jeff >>>> >>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >> _______________________________________________ >> mpi3-abi mailing list >> mpi3-abi_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> > > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From erezh at [hidden] Thu Apr 24 11:06:11 2008 From: erezh at [hidden] (Erez Haba) Date: Thu, 24 Apr 2008 09:06:11 -0700 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: Message-ID: <6B68D01C00C9994A8E150183E62A119E72BD94BEE0@NA-EXMSG-C105.redmond.corp.microsoft.com> Same here -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Narasimhan, Kannan Sent: Thursday, April 24, 2008 9:04 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting I get it too .... 
-Kannan- -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Terry Dontje Sent: Thursday, April 24, 2008 11:02 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting Am I the only one getting an "all circuits are busy" message from the number below? --td Jeff Brown wrote: > all, > > I scheduled a telecon to discuss status and get somewhat organized for > the meeting: > > Thursday April 24, 10:00 MDT > local number: 606-1201(6-1201) > toll free number: 888 343-0702. > > I'll send out some slides for the 5 minute briefing for the group. > > I'm having a hard time finding time to devote to this, but I'll have a > cut at the OpenMPI and LAMPI analysis prior to the telecon. We need > someone to look at MPICH, and the vendor implementations need to be > posted. > > Jeff > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > >> Yes, it's time to put some cycles toward this. Let's start >> populating the matrix and have a telecon toward the end of next week. >> I'll schedule a WG working session at the meeting. >> >> I'll take a look at OpenMPI and LAMPI, the two primary MPI >> implementations we use at LANL, and post to the wiki by the end of >> the week. Others, please do the same for your MPI implementation >> (especially the vendors). Overlap is OK. >> >> I'll send out specifics on the telecon. Let's shoot for Thursday >> April 24, 9:00 A.M. MST. >> >> Jeff >> >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: >> >>> Folks, >>> >>> Are we planning on a WG update to report at the April 28-30 Forum >>> meeting? We have started the process of identifying the mpi.h >>> differences, but I dont think we have synthesized the data yet, or >>> come to any conclusions/next steps... Or did I miss something here? >>> >>> Thanx! >>> Kannan >>> >>> -----Original Message----- >>> From: mpi3-abi-bounces_at_[hidden] >>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric >>> Ellis >>> Sent: Monday, March 17, 2008 4:18 AM >>> To: MPI 3.0 ABI working group >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>> >>> >>> I'm not sure how best to express this, but there are a couple of >>> things that occur to me that might be important: >>> >>> 1. The size of the handle types (cf. size of a pointer perhaps?) >>> >>> 2. should we add some sort of table describing the current situation >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, >>> and the value resolved at (dynamic) link time. >>> >>> Cheers, >>> >>> Edric. >>> >>> >>>> -----Original Message----- >>>> From: mpi3-abi-bounces_at_[hidden] >>>> >>> [mailto:mpi3-abi- >>> >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown >>>> Sent: Thursday, March 13, 2008 10:11 PM >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>> >>>> I propose a way we can make progress ... >>>> >>>> Let's start populating a matrix (excel spreadsheet) with a column >>>> for each MPI implementation, and rows for the various MPI >>>> datatypes, constants, etc. where the internal implementations >>>> varys. I'll kick it off for OpenMPI and send out. >>>> >>>> The last column of the matrix can be "ABI" where we propose a >>>> common approach across the implementations. 
>>>> >>>> A couple of driving principles: >>>> 1. the ABI solution shouldn't negatively impact quality of >>>> >>> implementation >>> >>>> 2. minimize platform specific solutions >>>> >>>> I'd like to see if we can produce a single ABI that spans platforms. >>>> >>>> comments? >>>> >>>> Jeff >>>> >>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >> _______________________________________________ >> mpi3-abi mailing list >> mpi3-abi_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> > > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From Terry.Dontje at [hidden] Thu Apr 24 11:12:26 2008 From: Terry.Dontje at [hidden] (Terry Dontje) Date: Thu, 24 Apr 2008 12:12:26 -0400 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: Message-ID: <4810B16A.5060905@sun.com> I actually got in via the local number 505-606-1201. Jeff is talking to the conference people now to see what is going on. --td Narasimhan, Kannan wrote: > I get it too .... > > -Kannan- > > -----Original Message----- > From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Terry Dontje > Sent: Thursday, April 24, 2008 11:02 AM > To: MPI 3.0 ABI working group > Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > > Am I the only one getting an "all circuits are busy" message from the number below? > > --td > > Jeff Brown wrote: > >> all, >> >> I scheduled a telecon to discuss status and get somewhat organized for >> the meeting: >> >> Thursday April 24, 10:00 MDT >> local number: 606-1201(6-1201) >> toll free number: 888 343-0702. >> >> I'll send out some slides for the 5 minute briefing for the group. >> >> I'm having a hard time finding time to devote to this, but I'll have a >> cut at the OpenMPI and LAMPI analysis prior to the telecon. We need >> someone to look at MPICH, and the vendor implementations need to be >> posted. >> >> Jeff >> >> >> >> At 10:03 AM 4/16/2008, Jeff Brown wrote: >> >> >>> Yes, it's time to put some cycles toward this. Let's start >>> populating the matrix and have a telecon toward the end of next week. >>> I'll schedule a WG working session at the meeting. >>> >>> I'll take a look at OpenMPI and LAMPI, the two primary MPI >>> implementations we use at LANL, and post to the wiki by the end of >>> the week. Others, please do the same for your MPI implementation >>> (especially the vendors). Overlap is OK. >>> >>> I'll send out specifics on the telecon. Let's shoot for Thursday >>> April 24, 9:00 A.M. MST. >>> >>> Jeff >>> >>> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: >>> >>> >>>> Folks, >>>> >>>> Are we planning on a WG update to report at the April 28-30 Forum >>>> meeting? 
We have started the process of identifying the mpi.h >>>> differences, but I dont think we have synthesized the data yet, or >>>> come to any conclusions/next steps... Or did I miss something here? >>>> >>>> Thanx! >>>> Kannan >>>> >>>> -----Original Message----- >>>> From: mpi3-abi-bounces_at_[hidden] >>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric >>>> Ellis >>>> Sent: Monday, March 17, 2008 4:18 AM >>>> To: MPI 3.0 ABI working group >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>> >>>> >>>> I'm not sure how best to express this, but there are a couple of >>>> things that occur to me that might be important: >>>> >>>> 1. The size of the handle types (cf. size of a pointer perhaps?) >>>> >>>> 2. should we add some sort of table describing the current situation >>>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. >>>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is >>>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. >>>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, >>>> and the value resolved at (dynamic) link time. >>>> >>>> Cheers, >>>> >>>> Edric. >>>> >>>> >>>> >>>>> -----Original Message----- >>>>> From: mpi3-abi-bounces_at_[hidden] >>>>> >>>>> >>>> [mailto:mpi3-abi- >>>> >>>> >>>>> bounces_at_[hidden]] On Behalf Of Jeff Brown >>>>> Sent: Thursday, March 13, 2008 10:11 PM >>>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] >>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>>> >>>>> I propose a way we can make progress ... >>>>> >>>>> Let's start populating a matrix (excel spreadsheet) with a column >>>>> for each MPI implementation, and rows for the various MPI >>>>> datatypes, constants, etc. where the internal implementations >>>>> varys. I'll kick it off for OpenMPI and send out. >>>>> >>>>> The last column of the matrix can be "ABI" where we propose a >>>>> common approach across the implementations. >>>>> >>>>> A couple of driving principles: >>>>> 1. the ABI solution shouldn't negatively impact quality of >>>>> >>>>> >>>> implementation >>>> >>>> >>>>> 2. minimize platform specific solutions >>>>> >>>>> I'd like to see if we can produce a single ABI that spans platforms. >>>>> >>>>> comments? 
>>>>> >>>>> Jeff >>>>> >>>>> >>>>> _______________________________________________ >>>>> mpi3-abi mailing list >>>>> mpi3-abi_at_[hidden] >>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>>> >>>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >>> >> _______________________________________________ >> mpi3-abi mailing list >> mpi3-abi_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> >> > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > From jeffb at [hidden] Thu Apr 24 11:18:11 2008 From: jeffb at [hidden] (Jeff Brown) Date: Thu, 24 Apr 2008 10:18:11 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <4810AEEC.5090502@sun.com> Message-ID: <6.2.3.4.2.20080424101301.02f784d8@ccs-mail.lanl.gov> Just talked to our phone folks. Our trunks are down in New Mexico - so this is a bust. We are sort of a third world country out here. I don't think we have time to reschedule at this point. So ... if folks have the time please populate the matrix with your favorite MPI implementation and distribute to the group. We'll get into the guts of all this at the meeting. For my 5 minute briefing, I'll just show folks where we are and give a glimpse into the details. ee you all at the meeting Jeff At 10:01 AM 4/24/2008, Terry Dontje wrote: >Am I the only one getting an "all circuits are busy" message from the >number below? > >--td > >Jeff Brown wrote: > > all, > > > > I scheduled a telecon to discuss status and get somewhat organized > > for the meeting: > > > > Thursday April 24, 10:00 MDT > > local number: 606-1201(6-1201) > > toll free number: 888 343-0702. > > > > I'll send out some slides for the 5 minute briefing for the group. > > > > I'm having a hard time finding time to devote to this, but I'll have > > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > > need someone to look at MPICH, and the vendor implementations need to > > be posted. > > > > Jeff > > > > > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > > > >> Yes, it's time to put some cycles toward this. Let's start > >> populating the matrix and have a telecon toward the end of next > >> week. I'll schedule a WG working session at the meeting. > >> > >> I'll take a look at OpenMPI and LAMPI, the two primary MPI > >> implementations we use at LANL, and post to the wiki by the end of > >> the week. Others, please do the same for your MPI implementation > >> (especially the vendors). Overlap is OK. > >> > >> I'll send out specifics on the telecon. Let's shoot for Thursday > >> April 24, 9:00 A.M. MST. > >> > >> Jeff > >> > >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > >> > >>> Folks, > >>> > >>> Are we planning on a WG update to report at the April 28-30 Forum > >>> meeting? 
We have started the process of identifying the mpi.h > >>> differences, but I dont think we have synthesized the data yet, or > >>> come to any conclusions/next steps... Or did I miss something here? > >>> > >>> Thanx! > >>> Kannan > >>> > >>> -----Original Message----- > >>> From: mpi3-abi-bounces_at_[hidden] > >>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > >>> Sent: Monday, March 17, 2008 4:18 AM > >>> To: MPI 3.0 ABI working group > >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > >>> > >>> > >>> I'm not sure how best to express this, but there are a couple of > >>> things that occur to me that might be important: > >>> > >>> 1. The size of the handle types (cf. size of a pointer perhaps?) > >>> > >>> 2. should we add some sort of table describing the current situation > >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. > >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, > >>> and the value resolved at (dynamic) link time. > >>> > >>> Cheers, > >>> > >>> Edric. > >>> > >>> > >>>> -----Original Message----- > >>>> From: mpi3-abi-bounces_at_[hidden] > >>>> > >>> [mailto:mpi3-abi- > >>> > >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown > >>>> Sent: Thursday, March 13, 2008 10:11 PM > >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > >>>> > >>>> I propose a way we can make progress ... > >>>> > >>>> Let's start populating a matrix (excel spreadsheet) with a column for > >>>> each MPI implementation, and rows for the various MPI datatypes, > >>>> constants, etc. where the internal implementations varys. I'll kick > >>>> it off for OpenMPI and send out. > >>>> > >>>> The last column of the matrix can be "ABI" where we propose a common > >>>> approach across the implementations. > >>>> > >>>> A couple of driving principles: > >>>> 1. the ABI solution shouldn't negatively impact quality of > >>>> > >>> implementation > >>> > >>>> 2. minimize platform specific solutions > >>>> > >>>> I'd like to see if we can produce a single ABI that spans platforms. > >>>> > >>>> comments? 
> >>>> > >>>> Jeff > >>>> > >>>> > >>>> _______________________________________________ > >>>> mpi3-abi mailing list > >>>> mpi3-abi_at_[hidden] > >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>>> > >>> _______________________________________________ > >>> mpi3-abi mailing list > >>> mpi3-abi_at_[hidden] > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>> > >>> _______________________________________________ > >>> mpi3-abi mailing list > >>> mpi3-abi_at_[hidden] > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>> > >> _______________________________________________ > >> mpi3-abi mailing list > >> mpi3-abi_at_[hidden] > >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >> > > > > > > _______________________________________________ > > mpi3-abi mailing list > > mpi3-abi_at_[hidden] > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From erezh at [hidden] Thu Apr 24 11:20:05 2008 From: erezh at [hidden] (Erez Haba) Date: Thu, 24 Apr 2008 09:20:05 -0700 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <4810B16A.5060905@sun.com> Message-ID: <6B68D01C00C9994A8E150183E62A119E72BD94BF01@NA-EXMSG-C105.redmond.corp.microsoft.com> Works for me too -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Terry Dontje Sent: Thursday, April 24, 2008 9:12 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting I actually got in via the local number 505-606-1201. Jeff is talking to the conference people now to see what is going on. --td Narasimhan, Kannan wrote: > I get it too .... > > -Kannan- > > -----Original Message----- > From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Terry Dontje > Sent: Thursday, April 24, 2008 11:02 AM > To: MPI 3.0 ABI working group > Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > > Am I the only one getting an "all circuits are busy" message from the number below? > > --td > > Jeff Brown wrote: > >> all, >> >> I scheduled a telecon to discuss status and get somewhat organized for >> the meeting: >> >> Thursday April 24, 10:00 MDT >> local number: 606-1201(6-1201) >> toll free number: 888 343-0702. >> >> I'll send out some slides for the 5 minute briefing for the group. >> >> I'm having a hard time finding time to devote to this, but I'll have a >> cut at the OpenMPI and LAMPI analysis prior to the telecon. We need >> someone to look at MPICH, and the vendor implementations need to be >> posted. >> >> Jeff >> >> >> >> At 10:03 AM 4/16/2008, Jeff Brown wrote: >> >> >>> Yes, it's time to put some cycles toward this. Let's start >>> populating the matrix and have a telecon toward the end of next week. >>> I'll schedule a WG working session at the meeting. >>> >>> I'll take a look at OpenMPI and LAMPI, the two primary MPI >>> implementations we use at LANL, and post to the wiki by the end of >>> the week. Others, please do the same for your MPI implementation >>> (especially the vendors). Overlap is OK. >>> >>> I'll send out specifics on the telecon. Let's shoot for Thursday >>> April 24, 9:00 A.M. MST. >>> >>> Jeff >>> >>> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: >>> >>> >>>> Folks, >>>> >>>> Are we planning on a WG update to report at the April 28-30 Forum >>>> meeting? 
We have started the process of identifying the mpi.h >>>> differences, but I dont think we have synthesized the data yet, or >>>> come to any conclusions/next steps... Or did I miss something here? >>>> >>>> Thanx! >>>> Kannan >>>> >>>> -----Original Message----- >>>> From: mpi3-abi-bounces_at_[hidden] >>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric >>>> Ellis >>>> Sent: Monday, March 17, 2008 4:18 AM >>>> To: MPI 3.0 ABI working group >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>> >>>> >>>> I'm not sure how best to express this, but there are a couple of >>>> things that occur to me that might be important: >>>> >>>> 1. The size of the handle types (cf. size of a pointer perhaps?) >>>> >>>> 2. should we add some sort of table describing the current situation >>>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. >>>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is >>>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. >>>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, >>>> and the value resolved at (dynamic) link time. >>>> >>>> Cheers, >>>> >>>> Edric. >>>> >>>> >>>> >>>>> -----Original Message----- >>>>> From: mpi3-abi-bounces_at_[hidden] >>>>> >>>>> >>>> [mailto:mpi3-abi- >>>> >>>> >>>>> bounces_at_[hidden]] On Behalf Of Jeff Brown >>>>> Sent: Thursday, March 13, 2008 10:11 PM >>>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] >>>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March >>>>> >>>>> I propose a way we can make progress ... >>>>> >>>>> Let's start populating a matrix (excel spreadsheet) with a column >>>>> for each MPI implementation, and rows for the various MPI >>>>> datatypes, constants, etc. where the internal implementations >>>>> varys. I'll kick it off for OpenMPI and send out. >>>>> >>>>> The last column of the matrix can be "ABI" where we propose a >>>>> common approach across the implementations. >>>>> >>>>> A couple of driving principles: >>>>> 1. the ABI solution shouldn't negatively impact quality of >>>>> >>>>> >>>> implementation >>>> >>>> >>>>> 2. minimize platform specific solutions >>>>> >>>>> I'd like to see if we can produce a single ABI that spans platforms. >>>>> >>>>> comments? 
>>>>> >>>>> Jeff >>>>> >>>>> >>>>> _______________________________________________ >>>>> mpi3-abi mailing list >>>>> mpi3-abi_at_[hidden] >>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>>> >>>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>>> _______________________________________________ >>>> mpi3-abi mailing list >>>> mpi3-abi_at_[hidden] >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>>> >>>> >>> _______________________________________________ >>> mpi3-abi mailing list >>> mpi3-abi_at_[hidden] >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >>> >>> >> _______________________________________________ >> mpi3-abi mailing list >> mpi3-abi_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> >> > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From erezh at [hidden] Thu Apr 24 11:23:45 2008 From: erezh at [hidden] (Erez Haba) Date: Thu, 24 Apr 2008 09:23:45 -0700 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080424101301.02f784d8@ccs-mail.lanl.gov> Message-ID: <6B68D01C00C9994A8E150183E62A119E72BD94BF0B@NA-EXMSG-C105.redmond.corp.microsoft.com> Jeff, are you aware that we started that table on the wiki pages? (or you just prefer it in excel?) http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Compare_mpi_h On that same page you can also find the various mpi.h files. Thanks, .Erez -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown Sent: Thursday, April 24, 2008 9:18 AM To: MPI 3.0 ABI working group; MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting Just talked to our phone folks. Our trunks are down in New Mexico - so this is a bust. We are sort of a third world country out here. I don't think we have time to reschedule at this point. So ... if folks have the time please populate the matrix with your favorite MPI implementation and distribute to the group. We'll get into the guts of all this at the meeting. For my 5 minute briefing, I'll just show folks where we are and give a glimpse into the details. ee you all at the meeting Jeff At 10:01 AM 4/24/2008, Terry Dontje wrote: >Am I the only one getting an "all circuits are busy" message from the >number below? > >--td > >Jeff Brown wrote: > > all, > > > > I scheduled a telecon to discuss status and get somewhat organized > > for the meeting: > > > > Thursday April 24, 10:00 MDT > > local number: 606-1201(6-1201) > > toll free number: 888 343-0702. > > > > I'll send out some slides for the 5 minute briefing for the group. > > > > I'm having a hard time finding time to devote to this, but I'll have > > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > > need someone to look at MPICH, and the vendor implementations need to > > be posted. > > > > Jeff > > > > > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > > > >> Yes, it's time to put some cycles toward this. 
Let's start > >> populating the matrix and have a telecon toward the end of next > >> week. I'll schedule a WG working session at the meeting. > >> > >> I'll take a look at OpenMPI and LAMPI, the two primary MPI > >> implementations we use at LANL, and post to the wiki by the end of > >> the week. Others, please do the same for your MPI implementation > >> (especially the vendors). Overlap is OK. > >> > >> I'll send out specifics on the telecon. Let's shoot for Thursday > >> April 24, 9:00 A.M. MST. > >> > >> Jeff > >> > >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > >> > >>> Folks, > >>> > >>> Are we planning on a WG update to report at the April 28-30 Forum > >>> meeting? We have started the process of identifying the mpi.h > >>> differences, but I dont think we have synthesized the data yet, or > >>> come to any conclusions/next steps... Or did I miss something here? > >>> > >>> Thanx! > >>> Kannan > >>> > >>> -----Original Message----- > >>> From: mpi3-abi-bounces_at_[hidden] > >>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > >>> Sent: Monday, March 17, 2008 4:18 AM > >>> To: MPI 3.0 ABI working group > >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > >>> > >>> > >>> I'm not sure how best to express this, but there are a couple of > >>> things that occur to me that might be important: > >>> > >>> 1. The size of the handle types (cf. size of a pointer perhaps?) > >>> > >>> 2. should we add some sort of table describing the current situation > >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. > >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, > >>> and the value resolved at (dynamic) link time. > >>> > >>> Cheers, > >>> > >>> Edric. > >>> > >>> > >>>> -----Original Message----- > >>>> From: mpi3-abi-bounces_at_[hidden] > >>>> > >>> [mailto:mpi3-abi- > >>> > >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown > >>>> Sent: Thursday, March 13, 2008 10:11 PM > >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > >>>> > >>>> I propose a way we can make progress ... > >>>> > >>>> Let's start populating a matrix (excel spreadsheet) with a column for > >>>> each MPI implementation, and rows for the various MPI datatypes, > >>>> constants, etc. where the internal implementations varys. I'll kick > >>>> it off for OpenMPI and send out. > >>>> > >>>> The last column of the matrix can be "ABI" where we propose a common > >>>> approach across the implementations. > >>>> > >>>> A couple of driving principles: > >>>> 1. the ABI solution shouldn't negatively impact quality of > >>>> > >>> implementation > >>> > >>>> 2. minimize platform specific solutions > >>>> > >>>> I'd like to see if we can produce a single ABI that spans platforms. > >>>> > >>>> comments? 
> >>>> > >>>> Jeff > >>>> > >>>> > >>>> _______________________________________________ > >>>> mpi3-abi mailing list > >>>> mpi3-abi_at_[hidden] > >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>>> > >>> _______________________________________________ > >>> mpi3-abi mailing list > >>> mpi3-abi_at_[hidden] > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>> > >>> _______________________________________________ > >>> mpi3-abi mailing list > >>> mpi3-abi_at_[hidden] > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>> > >> _______________________________________________ > >> mpi3-abi mailing list > >> mpi3-abi_at_[hidden] > >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >> > > > > > > _______________________________________________ > > mpi3-abi mailing list > > mpi3-abi_at_[hidden] > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From jeffb at [hidden] Thu Apr 24 12:33:00 2008 From: jeffb at [hidden] (Jeff Brown) Date: Thu, 24 Apr 2008 11:33:00 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6B68D01C00C9994A8E150183E62A119E72BD94BF0B@NA-EXMSG-C105.r edmond.corp.microsoft.com> Message-ID: <6.2.3.4.2.20080424113225.02f86700@ccs-mail.lanl.gov> I'll sync up with the wiki At 10:23 AM 4/24/2008, Erez Haba wrote: >Jeff, are you aware that we started that table on the wiki pages? >(or you just prefer it in excel?) >http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Compare_mpi_h > >On that same page you can also find the various mpi.h files. > > >Thanks, >.Erez > >-----Original Message----- >From: mpi3-abi-bounces_at_[hidden] >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown >Sent: Thursday, April 24, 2008 9:18 AM >To: MPI 3.0 ABI working group; MPI 3.0 ABI working group >Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > >Just talked to our phone folks. Our trunks are down in New Mexico - >so this is a bust. We are sort of a third world country out here. > >I don't think we have time to reschedule at this point. > >So ... if folks have the time please populate the matrix with your >favorite MPI implementation and distribute to the group. We'll get >into the guts of all this at the meeting. > >For my 5 minute briefing, I'll just show folks where we are and give >a glimpse into the details. > >see you all at the meeting > >Jeff > >At 10:01 AM 4/24/2008, Terry Dontje wrote: > >Am I the only one getting an "all circuits are busy" message from the > >number below? > > > >--td > > > >Jeff Brown wrote: > > > all, > > > > > > I scheduled a telecon to discuss status and get somewhat organized > > > for the meeting: > > > > > > Thursday April 24, 10:00 MDT > > > local number: 606-1201(6-1201) > > > toll free number: 888 343-0702. > > > > > > I'll send out some slides for the 5 minute briefing for the group. > > > > > > I'm having a hard time finding time to devote to this, but I'll have > > > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > > > need someone to look at MPICH, and the vendor implementations need to > > > be posted. > > > > > > Jeff > > > > > > > > > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > > > > > >> Yes, it's time to put some cycles toward this. 
Let's start > > >> populating the matrix and have a telecon toward the end of next > > >> week. I'll schedule a WG working session at the meeting. > > >> > > >> I'll take a look at OpenMPI and LAMPI, the two primary MPI > > >> implementations we use at LANL, and post to the wiki by the end of > > >> the week. Others, please do the same for your MPI implementation > > >> (especially the vendors). Overlap is OK. > > >> > > >> I'll send out specifics on the telecon. Let's shoot for Thursday > > >> April 24, 9:00 A.M. MST. > > >> > > >> Jeff > > >> > > >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > > >> > > >>> Folks, > > >>> > > >>> Are we planning on a WG update to report at the April 28-30 Forum > > >>> meeting? We have started the process of identifying the mpi.h > > >>> differences, but I dont think we have synthesized the data yet, or > > >>> come to any conclusions/next steps... Or did I miss something here? > > >>> > > >>> Thanx! > > >>> Kannan > > >>> > > >>> -----Original Message----- > > >>> From: mpi3-abi-bounces_at_[hidden] > > >>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > > >>> Sent: Monday, March 17, 2008 4:18 AM > > >>> To: MPI 3.0 ABI working group > > >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > >>> > > >>> > > >>> I'm not sure how best to express this, but there are a couple of > > >>> things that occur to me that might be important: > > >>> > > >>> 1. The size of the handle types (cf. size of a pointer perhaps?) > > >>> > > >>> 2. should we add some sort of table describing the current situation > > >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > > >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > > >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. > > >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, > > >>> and the value resolved at (dynamic) link time. > > >>> > > >>> Cheers, > > >>> > > >>> Edric. > > >>> > > >>> > > >>>> -----Original Message----- > > >>>> From: mpi3-abi-bounces_at_[hidden] > > >>>> > > >>> [mailto:mpi3-abi- > > >>> > > >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown > > >>>> Sent: Thursday, March 13, 2008 10:11 PM > > >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > >>>> > > >>>> I propose a way we can make progress ... > > >>>> > > >>>> Let's start populating a matrix (excel spreadsheet) with a column for > > >>>> each MPI implementation, and rows for the various MPI datatypes, > > >>>> constants, etc. where the internal implementations varys. I'll kick > > >>>> it off for OpenMPI and send out. > > >>>> > > >>>> The last column of the matrix can be "ABI" where we propose a common > > >>>> approach across the implementations. > > >>>> > > >>>> A couple of driving principles: > > >>>> 1. the ABI solution shouldn't negatively impact quality of > > >>>> > > >>> implementation > > >>> > > >>>> 2. minimize platform specific solutions > > >>>> > > >>>> I'd like to see if we can produce a single ABI that spans platforms. > > >>>> > > >>>> comments? 
> > >>>>
> > >>>> Jeff
> > >>>>
> > >>>>
> > >>>> _______________________________________________
> > >>>> mpi3-abi mailing list
> > >>>> mpi3-abi_at_[hidden]
> > >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >>>>
> > >>> _______________________________________________
> > >>> mpi3-abi mailing list
> > >>> mpi3-abi_at_[hidden]
> > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >>>
> > >>> _______________________________________________
> > >>> mpi3-abi mailing list
> > >>> mpi3-abi_at_[hidden]
> > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >>>
> > >> _______________________________________________
> > >> mpi3-abi mailing list
> > >> mpi3-abi_at_[hidden]
> > >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >>
> > >
> > >
> > > _______________________________________________
> > > mpi3-abi mailing list
> > > mpi3-abi_at_[hidden]
> > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >
> > >
> >_______________________________________________
> >mpi3-abi mailing list
> >mpi3-abi_at_[hidden]
> >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> >
>_______________________________________________
>mpi3-abi mailing list
>mpi3-abi_at_[hidden]
>http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi

From jeffb at [hidden] Thu Apr 24 12:53:01 2008
From: jeffb at [hidden] (Jeff Brown)
Date: Thu, 24 Apr 2008 11:53:01 -0600
Subject: [Mpi3-abi] For the April MPI Forum Meeting
In-Reply-To: <6B68D01C00C9994A8E150183E62A119E72BD94BF0B@NA-EXMSG-C105.redmond.corp.microsoft.com>
Message-ID: <6.2.3.4.2.20080424114556.02fa6e18@ccs-mail.lanl.gov>

here's what I see on the wiki:

Types           HP                    IBM     Microsoft   MPICH2      OpenMPI                       ABI
MPI_Datatype    struct hpmp_dtype_s*  int     int         int         struct ompi_datatype_t*       TBD
MPI_Op          struct hpmp_op_s*     int     int         int         struct ompi_op_t*             TBD
MPI_Comm        struct hpmp_comm_s*   int     int         int         struct ompi_communicator_t*   TBD
MPI_Errhandler  struct hpmp_err_s*    int     int         int         struct ompi_errhandler_t*     TBD

Examples                    HP                IBM     Microsoft   MPICH2      OpenMPI               ABI
Datatype  MPI_CHAR          &hpmp_char        4       0x4c000101  0x4c000101  &ompi_mpi_char        TBD
Op        MPI_SUM           &hpmp_sum         enum 2  0x58000003  0x58000003  &ompi_mpi_op_sum      TBD
          MPI_COMM_WORLD    &hpmp_comm_world  enum 0  0x44000000  0x44000000  &ompi_mpi_comm_world  TBD
Compare   MPI_IDENT         0                 enum 0  0           0           enum 0                TBD

There's a lot more detail in the spreadsheet.  To do this right, we need
to cover the entire space.

I'd prefer to stick with excel (posted to the wiki) and add columns for
the various implementations.

At 10:23 AM 4/24/2008, Erez Haba wrote:
>Jeff, are you aware that we started that table on the wiki pages?
>(or you just prefer it in excel?)
>http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Compare_mpi_h
>
>On that same page you can also find the various mpi.h files.
>
>
>Thanks,
>.Erez
>
>-----Original Message-----
>From: mpi3-abi-bounces_at_[hidden]
>[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
>Sent: Thursday, April 24, 2008 9:18 AM
>To: MPI 3.0 ABI working group; MPI 3.0 ABI working group
>Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting
>
>Just talked to our phone folks.  Our trunks are down in New Mexico -
>so this is a bust.  We are sort of a third world country out here.
>
>I don't think we have time to reschedule at this point.
>
>So ...
if folks have the time please populate the matrix with your >favorite MPI implementation and distribute to the group. We'll get >into the guts of all this at the meeting. > >For my 5 minute briefing, I'll just show folks where we are and give >a glimpse into the details. > >see you all at the meeting > >Jeff > >At 10:01 AM 4/24/2008, Terry Dontje wrote: > >Am I the only one getting an "all circuits are busy" message from the > >number below? > > > >--td > > > >Jeff Brown wrote: > > > all, > > > > > > I scheduled a telecon to discuss status and get somewhat organized > > > for the meeting: > > > > > > Thursday April 24, 10:00 MDT > > > local number: 606-1201(6-1201) > > > toll free number: 888 343-0702. > > > > > > I'll send out some slides for the 5 minute briefing for the group. > > > > > > I'm having a hard time finding time to devote to this, but I'll have > > > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > > > need someone to look at MPICH, and the vendor implementations need to > > > be posted. > > > > > > Jeff > > > > > > > > > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > > > > > >> Yes, it's time to put some cycles toward this. Let's start > > >> populating the matrix and have a telecon toward the end of next > > >> week. I'll schedule a WG working session at the meeting. > > >> > > >> I'll take a look at OpenMPI and LAMPI, the two primary MPI > > >> implementations we use at LANL, and post to the wiki by the end of > > >> the week. Others, please do the same for your MPI implementation > > >> (especially the vendors). Overlap is OK. > > >> > > >> I'll send out specifics on the telecon. Let's shoot for Thursday > > >> April 24, 9:00 A.M. MST. > > >> > > >> Jeff > > >> > > >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > > >> > > >>> Folks, > > >>> > > >>> Are we planning on a WG update to report at the April 28-30 Forum > > >>> meeting? We have started the process of identifying the mpi.h > > >>> differences, but I dont think we have synthesized the data yet, or > > >>> come to any conclusions/next steps... Or did I miss something here? > > >>> > > >>> Thanx! > > >>> Kannan > > >>> > > >>> -----Original Message----- > > >>> From: mpi3-abi-bounces_at_[hidden] > > >>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > > >>> Sent: Monday, March 17, 2008 4:18 AM > > >>> To: MPI 3.0 ABI working group > > >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > >>> > > >>> > > >>> I'm not sure how best to express this, but there are a couple of > > >>> things that occur to me that might be important: > > >>> > > >>> 1. The size of the handle types (cf. size of a pointer perhaps?) > > >>> > > >>> 2. should we add some sort of table describing the current situation > > >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > > >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > > >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. > > >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, > > >>> and the value resolved at (dynamic) link time. > > >>> > > >>> Cheers, > > >>> > > >>> Edric. 
> > >>> > > >>> > > >>>> -----Original Message----- > > >>>> From: mpi3-abi-bounces_at_[hidden] > > >>>> > > >>> [mailto:mpi3-abi- > > >>> > > >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown > > >>>> Sent: Thursday, March 13, 2008 10:11 PM > > >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > >>>> > > >>>> I propose a way we can make progress ... > > >>>> > > >>>> Let's start populating a matrix (excel spreadsheet) with a column for > > >>>> each MPI implementation, and rows for the various MPI datatypes, > > >>>> constants, etc. where the internal implementations varys. I'll kick > > >>>> it off for OpenMPI and send out. > > >>>> > > >>>> The last column of the matrix can be "ABI" where we propose a common > > >>>> approach across the implementations. > > >>>> > > >>>> A couple of driving principles: > > >>>> 1. the ABI solution shouldn't negatively impact quality of > > >>>> > > >>> implementation > > >>> > > >>>> 2. minimize platform specific solutions > > >>>> > > >>>> I'd like to see if we can produce a single ABI that spans platforms. > > >>>> > > >>>> comments? > > >>>> > > >>>> Jeff > > >>>> > > >>>> > > >>>> _______________________________________________ > > >>>> mpi3-abi mailing list > > >>>> mpi3-abi_at_[hidden] > > >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >>>> > > >>> _______________________________________________ > > >>> mpi3-abi mailing list > > >>> mpi3-abi_at_[hidden] > > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >>> > > >>> _______________________________________________ > > >>> mpi3-abi mailing list > > >>> mpi3-abi_at_[hidden] > > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >>> > > >> _______________________________________________ > > >> mpi3-abi mailing list > > >> mpi3-abi_at_[hidden] > > >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >> > > > > > > > > > _______________________________________________ > > > mpi3-abi mailing list > > > mpi3-abi_at_[hidden] > > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi * -------------- next part -------------- An HTML attachment was scrubbed... URL: From erezh at [hidden] Thu Apr 24 13:14:33 2008 From: erezh at [hidden] (Erez Haba) Date: Thu, 24 Apr 2008 11:14:33 -0700 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080424114556.02fa6e18@ccs-mail.lanl.gov> Message-ID: <6B68D01C00C9994A8E150183E62A119E72BD94C031@NA-EXMSG-C105.redmond.corp.microsoft.com> Okay with me. 
From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown Sent: Thursday, April 24, 2008 10:53 AM To: MPI 3.0 ABI working group; MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting here's what I see on the wiki: Types HP IBM Microsoft MPICH2 OpenMPI ABI MPI_Datatype struct hpmp_dtype_s* int int int struct ompi_datatype_t* TBD MPI_Op struct hpmp_op_s* int int int struct ompi_op_t* TBD MPI_Comm struct hpmp_comm_s* int int int struct ompi_communicator_t* TBD MPI_Errhandler struct hpmp_err_s* int int int struct ompi_errhandler_t* TBD Examples HP IBM Microsoft MPICH2 OpenMPI ABI Datatype MPI_CHAR &hpmp_char 4 0x4c000101 0x4c000101 &ompi_mpi_char TBD Op MPI_SUM &hpmp_sum enum 2 0x58000003 0x58000003 &ompi_mpi_op_sum TBD MPI_COMM_WORLD &hpmp_comm_world enum 0 0x44000000 0x44000000 &ompi_mpi_comm_world TBD Compare MPI_IDENT 0 enum 0 0 0 enum 0 TBD There's a lot more detail in the spreadsheet. To do this right, we need to cover the entire space. I'd prefer to stick with excel (posted to the wiki) and add columns for the various implementations. At 10:23 AM 4/24/2008, Erez Haba wrote: Jeff, are you aware that we started that table on the wiki pages? (or you just prefer it in excel?) http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Compare_mpi_h On that same page you can also find the various mpi.h files. Thanks, .Erez -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [ mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown Sent: Thursday, April 24, 2008 9:18 AM To: MPI 3.0 ABI working group; MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting Just talked to our phone folks. Our trunks are down in New Mexico - so this is a bust. We are sort of a third world country out here. I don't think we have time to reschedule at this point. So ... if folks have the time please populate the matrix with your favorite MPI implementation and distribute to the group. We'll get into the guts of all this at the meeting. For my 5 minute briefing, I'll just show folks where we are and give a glimpse into the details. ee you all at the meeting Jeff At 10:01 AM 4/24/2008, Terry Dontje wrote: >Am I the only one getting an "all circuits are busy" message from the >number below? > >--td > >Jeff Brown wrote: > > all, > > > > I scheduled a telecon to discuss status and get somewhat organized > > for the meeting: > > > > Thursday April 24, 10:00 MDT > > local number: 606-1201(6-1201) > > toll free number: 888 343-0702. > > > > I'll send out some slides for the 5 minute briefing for the group. > > > > I'm having a hard time finding time to devote to this, but I'll have > > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > > need someone to look at MPICH, and the vendor implementations need to > > be posted. > > > > Jeff > > > > > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > > > >> Yes, it's time to put some cycles toward this. Let's start > >> populating the matrix and have a telecon toward the end of next > >> week. I'll schedule a WG working session at the meeting. > >> > >> I'll take a look at OpenMPI and LAMPI, the two primary MPI > >> implementations we use at LANL, and post to the wiki by the end of > >> the week. Others, please do the same for your MPI implementation > >> (especially the vendors). Overlap is OK. > >> > >> I'll send out specifics on the telecon. Let's shoot for Thursday > >> April 24, 9:00 A.M. MST. 
> >> > >> Jeff > >> > >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > >> > >>> Folks, > >>> > >>> Are we planning on a WG update to report at the April 28-30 Forum > >>> meeting? We have started the process of identifying the mpi.h > >>> differences, but I dont think we have synthesized the data yet, or > >>> come to any conclusions/next steps... Or did I miss something here? > >>> > >>> Thanx! > >>> Kannan > >>> > >>> -----Original Message----- > >>> From: mpi3-abi-bounces_at_[hidden] > >>> [ mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > >>> Sent: Monday, March 17, 2008 4:18 AM > >>> To: MPI 3.0 ABI working group > >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > >>> > >>> > >>> I'm not sure how best to express this, but there are a couple of > >>> things that occur to me that might be important: > >>> > >>> 1. The size of the handle types (cf. size of a pointer perhaps?) > >>> > >>> 2. should we add some sort of table describing the current situation > >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. > >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, > >>> and the value resolved at (dynamic) link time. > >>> > >>> Cheers, > >>> > >>> Edric. > >>> > >>> > >>>> -----Original Message----- > >>>> From: mpi3-abi-bounces_at_[hidden] > >>>> > >>> [mailto:mpi3-abi- > >>> > >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown > >>>> Sent: Thursday, March 13, 2008 10:11 PM > >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > >>>> > >>>> I propose a way we can make progress ... > >>>> > >>>> Let's start populating a matrix (excel spreadsheet) with a column for > >>>> each MPI implementation, and rows for the various MPI datatypes, > >>>> constants, etc. where the internal implementations varys. I'll kick > >>>> it off for OpenMPI and send out. > >>>> > >>>> The last column of the matrix can be "ABI" where we propose a common > >>>> approach across the implementations. > >>>> > >>>> A couple of driving principles: > >>>> 1. the ABI solution shouldn't negatively impact quality of > >>>> > >>> implementation > >>> > >>>> 2. minimize platform specific solutions > >>>> > >>>> I'd like to see if we can produce a single ABI that spans platforms. > >>>> > >>>> comments? 
> >>>> > >>>> Jeff > >>>> > >>>> > >>>> _______________________________________________ > >>>> mpi3-abi mailing list > >>>> mpi3-abi_at_[hidden] > >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>>> > >>> _______________________________________________ > >>> mpi3-abi mailing list > >>> mpi3-abi_at_[hidden] > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>> > >>> _______________________________________________ > >>> mpi3-abi mailing list > >>> mpi3-abi_at_[hidden] > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >>> > >> _______________________________________________ > >> mpi3-abi mailing list > >> mpi3-abi_at_[hidden] > >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >> > > > > > > _______________________________________________ > > mpi3-abi mailing list > > mpi3-abi_at_[hidden] > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeffb at [hidden] Thu Apr 24 13:41:10 2008 From: jeffb at [hidden] (Jeff Brown) Date: Thu, 24 Apr 2008 12:41:10 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6B68D01C00C9994A8E150183E62A119E72BD94C031@NA-EXMSG-C105.r edmond.corp.microsoft.com> Message-ID: <6.2.3.4.2.20080424124023.02fba368@ccs-mail.lanl.gov> Do you have time to complete the Microsoft column? At 12:14 PM 4/24/2008, Erez Haba wrote: >Okay with me. > >From: mpi3-abi-bounces_at_[hidden] >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown >Sent: Thursday, April 24, 2008 10:53 AM >To: MPI 3.0 ABI working group; MPI 3.0 ABI working group >Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > >here's what I see on the wiki: >Types HP IBM Microsoft >MPICH2 OpenMPI ABI >MPI_Datatype struct >hpmp_dtype_s* int int int struct >ompi_datatype_t* TBD >MPI_Op struct >hpmp_op_s* int int int struct >ompi_op_t* TBD >MPI_Comm struct >hpmp_comm_s* int int int struct >ompi_communicator_t* TBD >MPI_Errhandler struct >hpmp_err_s* int int int struct >ompi_errhandler_t* TBD >Examples HP IBM >Microsoft MPICH2 OpenMPI ABI >Datatype >MPI_CHAR &hpmp_char 4 0x4c000101 >0x4c000101 &ompi_mpi_char TBD >Op MPI_SUM &hpmp_sum enum >2 0x58000003 0x58000003 &ompi_mpi_op_sum TBD >MPI_COMM_WORLD &hpmp_comm_world enum >0 0x44000000 0x44000000 &ompi_mpi_comm_world TBD >Compare MPI_IDENT 0 enum >0 0 0 enum 0 TBD >There's a lot more detail in the spreadsheet. To do this right, we >need to cover the entire space. I'd prefer to stick with excel >(posted to the wiki) and add columns for the various implementations. > >At 10:23 AM 4/24/2008, Erez Haba wrote: > >Jeff, are you aware that we started that table on the wiki pages? >(or you just prefer it in excel?) >http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Compare_mpi_h > >On that same page you can also find the various mpi.h files. 
> > >Thanks, >.Erez > >-----Original Message----- >From: mpi3-abi-bounces_at_[hidden] [ >mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown >Sent: Thursday, April 24, 2008 9:18 AM >To: MPI 3.0 ABI working group; MPI 3.0 ABI working group >Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > >Just talked to our phone folks. Our trunks are down in New Mexico - >so this is a bust. We are sort of a third world country out here. > >I don't think we have time to reschedule at this point. > >So ... if folks have the time please populate the matrix with your >favorite MPI implementation and distribute to the group. We'll get >into the guts of all this at the meeting. > >For my 5 minute briefing, I'll just show folks where we are and give >a glimpse into the details. > >see you all at the meeting > >Jeff > >At 10:01 AM 4/24/2008, Terry Dontje wrote: > >Am I the only one getting an "all circuits are busy" message from the > >number below? > > > >--td > > > >Jeff Brown wrote: > > > all, > > > > > > I scheduled a telecon to discuss status and get somewhat organized > > > for the meeting: > > > > > > Thursday April 24, 10:00 MDT > > > local number: 606-1201(6-1201) > > > toll free number: 888 343-0702. > > > > > > I'll send out some slides for the 5 minute briefing for the group. > > > > > > I'm having a hard time finding time to devote to this, but I'll have > > > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We > > > need someone to look at MPICH, and the vendor implementations need to > > > be posted. > > > > > > Jeff > > > > > > > > > > > > At 10:03 AM 4/16/2008, Jeff Brown wrote: > > > > > >> Yes, it's time to put some cycles toward this. Let's start > > >> populating the matrix and have a telecon toward the end of next > > >> week. I'll schedule a WG working session at the meeting. > > >> > > >> I'll take a look at OpenMPI and LAMPI, the two primary MPI > > >> implementations we use at LANL, and post to the wiki by the end of > > >> the week. Others, please do the same for your MPI implementation > > >> (especially the vendors). Overlap is OK. > > >> > > >> I'll send out specifics on the telecon. Let's shoot for Thursday > > >> April 24, 9:00 A.M. MST. > > >> > > >> Jeff > > >> > > >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote: > > >> > > >>> Folks, > > >>> > > >>> Are we planning on a WG update to report at the April 28-30 Forum > > >>> meeting? We have started the process of identifying the mpi.h > > >>> differences, but I dont think we have synthesized the data yet, or > > >>> come to any conclusions/next steps... Or did I miss something here? > > >>> > > >>> Thanx! > > >>> Kannan > > >>> > > >>> -----Original Message----- > > >>> From: mpi3-abi-bounces_at_[hidden] > > >>> [ mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis > > >>> Sent: Monday, March 17, 2008 4:18 AM > > >>> To: MPI 3.0 ABI working group > > >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > >>> > > >>> > > >>> I'm not sure how best to express this, but there are a couple of > > >>> things that occur to me that might be important: > > >>> > > >>> 1. The size of the handle types (cf. size of a pointer perhaps?) > > >>> > > >>> 2. should we add some sort of table describing the current situation > > >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g. > > >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is > > >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e. 
> > >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so, > > >>> and the value resolved at (dynamic) link time. > > >>> > > >>> Cheers, > > >>> > > >>> Edric. > > >>> > > >>> > > >>>> -----Original Message----- > > >>>> From: mpi3-abi-bounces_at_[hidden] > > >>>> > > >>> [<mailto:mpi3-abi- >mailto:mpi3-abi- > > >>> > > >>>> bounces_at_[hidden]] On Behalf Of Jeff Brown > > >>>> Sent: Thursday, March 13, 2008 10:11 PM > > >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] > > >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March > > >>>> > > >>>> I propose a way we can make progress ... > > >>>> > > >>>> Let's start populating a matrix (excel spreadsheet) with a column for > > >>>> each MPI implementation, and rows for the various MPI datatypes, > > >>>> constants, etc. where the internal implementations varys. I'll kick > > >>>> it off for OpenMPI and send out. > > >>>> > > >>>> The last column of the matrix can be "ABI" where we propose a common > > >>>> approach across the implementations. > > >>>> > > >>>> A couple of driving principles: > > >>>> 1. the ABI solution shouldn't negatively impact quality of > > >>>> > > >>> implementation > > >>> > > >>>> 2. minimize platform specific solutions > > >>>> > > >>>> I'd like to see if we can produce a single ABI that spans platforms. > > >>>> > > >>>> comments? > > >>>> > > >>>> Jeff > > >>>> > > >>>> > > >>>> _______________________________________________ > > >>>> mpi3-abi mailing list > > >>>> mpi3-abi_at_[hidden] > > >>>> > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >>>> > > >>> _______________________________________________ > > >>> mpi3-abi mailing list > > >>> mpi3-abi_at_[hidden] > > >>> > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >>> > > >>> _______________________________________________ > > >>> mpi3-abi mailing list > > >>> mpi3-abi_at_[hidden] > > >>> > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >>> > > >> _______________________________________________ > > >> mpi3-abi mailing list > > >> mpi3-abi_at_[hidden] > > >> > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >> > > > > > > > > > _______________________________________________ > > > mpi3-abi mailing list > > > mpi3-abi_at_[hidden] > > > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > > > > > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi * -------------- next part -------------- An HTML attachment was scrubbed... URL: From lindahl at [hidden] Thu Apr 24 13:46:47 2008 From: lindahl at [hidden] (Greg Lindahl) Date: Thu, 24 Apr 2008 11:46:47 -0700 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080424124023.02fba368@ccs-mail.lanl.gov> Message-ID: <20080424184647.GB23338@bx9.net> > Do you have time to complete the Microsoft column? Isn't it the same as MPICH2? Likewise, many vendor MPIs are basd on MPICH-1. 
-- greg From erezh at [hidden] Thu Apr 24 13:55:56 2008 From: erezh at [hidden] (Erez Haba) Date: Thu, 24 Apr 2008 11:55:56 -0700 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <20080424184647.GB23338@bx9.net> Message-ID: <6B68D01C00C9994A8E150183E62A119E72BD94C0AE@NA-EXMSG-C105.redmond.corp.microsoft.com> It is the same as MPICH2 for MPI constants. -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Greg Lindahl Sent: Thursday, April 24, 2008 11:47 AM To: mpi3-abi_at_[hidden] Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > Do you have time to complete the Microsoft column? Isn't it the same as MPICH2? Likewise, many vendor MPIs are basd on MPICH-1. -- greg _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From jeffb at [hidden] Thu Apr 24 14:02:29 2008 From: jeffb at [hidden] (Jeff Brown) Date: Thu, 24 Apr 2008 13:02:29 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <20080424184647.GB23338@bx9.net> Message-ID: <6.2.3.4.2.20080424125935.02fc6a60@ccs-mail.lanl.gov> OK - let's document that in the spreadsheet Which vendor implementation are different than the MPICH2 reference? HP? Sun? IBM? SGI? It would be good to go into the meeting with the complete picture. Jeff At 12:46 PM 4/24/2008, Greg Lindahl wrote: > > Do you have time to complete the Microsoft column? > >Isn't it the same as MPICH2? > >Likewise, many vendor MPIs are basd on MPICH-1. > >-- greg >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From howardp at [hidden] Thu Apr 24 14:07:44 2008 From: howardp at [hidden] (Howard Pritchard) Date: Thu, 24 Apr 2008 13:07:44 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080424125935.02fc6a60@ccs-mail.lanl.gov> Message-ID: <4810DA80.5080500@cray.com> Hello Jeff, Cray uses mpich2. With respect to header files, for our x86_64 linux base systems, it should be compatible with any other mpich2 configured for x86_64 linux. Howard Jeff Brown wrote: >OK - let's document that in the spreadsheet > >Which vendor implementation are different than the MPICH2 >reference? HP? Sun? IBM? SGI? > >It would be good to go into the meeting with the complete picture. > >Jeff > >At 12:46 PM 4/24/2008, Greg Lindahl wrote: > > >>>Do you have time to complete the Microsoft column? >>> >>> >>Isn't it the same as MPICH2? >> >>Likewise, many vendor MPIs are basd on MPICH-1. >> >>-- greg >>_______________________________________________ >>mpi3-abi mailing list >>mpi3-abi_at_[hidden] >>http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> >> > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > -- Howard Pritchard Cray Inc. From Terry.Dontje at [hidden] Thu Apr 24 14:06:51 2008 From: Terry.Dontje at [hidden] (Terry Dontje) Date: Thu, 24 Apr 2008 15:06:51 -0400 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080424125935.02fc6a60@ccs-mail.lanl.gov> Message-ID: <4810DA4B.2040704@sun.com> Jeff Brown wrote: > OK - let's document that in the spreadsheet > > Which vendor implementation are different than the MPICH2 > reference? HP? Sun? IBM? SGI? > > Sun's implementation is basically Open MPI. 
--td > It would be good to go into the meeting with the complete picture. > > Jeff > > At 12:46 PM 4/24/2008, Greg Lindahl wrote: > >>> Do you have time to complete the Microsoft column? >>> >> Isn't it the same as MPICH2? >> >> Likewise, many vendor MPIs are basd on MPICH-1. >> >> -- greg >> _______________________________________________ >> mpi3-abi mailing list >> mpi3-abi_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi >> > > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > From kannan.narasimhan at [hidden] Thu Apr 24 14:23:22 2008 From: kannan.narasimhan at [hidden] (Narasimhan, Kannan) Date: Thu, 24 Apr 2008 19:23:22 +0000 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080424125935.02fc6a60@ccs-mail.lanl.gov> Message-ID: I'll work on getting HP-MPI data in the spreadsheet. -Kannan- -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown Sent: Thursday, April 24, 2008 2:02 PM To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting OK - let's document that in the spreadsheet Which vendor implementation are different than the MPICH2 reference? HP? Sun? IBM? SGI? It would be good to go into the meeting with the complete picture. Jeff At 12:46 PM 4/24/2008, Greg Lindahl wrote: > > Do you have time to complete the Microsoft column? > >Isn't it the same as MPICH2? > >Likewise, many vendor MPIs are basd on MPICH-1. > >-- greg >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From jeffb at [hidden] Thu Apr 24 14:29:11 2008 From: jeffb at [hidden] (Jeff Brown) Date: Thu, 24 Apr 2008 13:29:11 -0600 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: Message-ID: <6.2.3.4.2.20080424132904.02f6d6f0@ccs-mail.lanl.gov> thanks At 01:23 PM 4/24/2008, you wrote: >I'll work on getting HP-MPI data in the spreadsheet. > >-Kannan- > >-----Original Message----- >From: mpi3-abi-bounces_at_[hidden] >[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown >Sent: Thursday, April 24, 2008 2:02 PM >To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden] >Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > >OK - let's document that in the spreadsheet > >Which vendor implementation are different than the MPICH2 >reference? HP? Sun? IBM? SGI? > >It would be good to go into the meeting with the complete picture. > >Jeff > >At 12:46 PM 4/24/2008, Greg Lindahl wrote: > > > Do you have time to complete the Microsoft column? > > > >Isn't it the same as MPICH2? > > > >Likewise, many vendor MPIs are basd on MPICH-1. 
> > > >-- greg > >_______________________________________________ > >mpi3-abi mailing list > >mpi3-abi_at_[hidden] > >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > >_______________________________________________ >mpi3-abi mailing list >mpi3-abi_at_[hidden] >http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From lindahl at [hidden] Thu Apr 24 15:05:25 2008 From: lindahl at [hidden] (Greg Lindahl) Date: Thu, 24 Apr 2008 13:05:25 -0700 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6.2.3.4.2.20080424125935.02fc6a60@ccs-mail.lanl.gov> Message-ID: <20080424200525.GA32406@bx9.net> PathScale MPI is MPICH-1. -- greg From alexander.supalov at [hidden] Fri Apr 25 02:58:45 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Fri, 25 Apr 2008 08:58:45 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6B68D01C00C9994A8E150183E62A119E72BD94C0AE@NA-EXMSG-C105.redmond.corp.microsoft.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A2014670F4@swsmsx413.ger.corp.intel.com> Mind the calling convention issues: MPICH2 uses _cdecl by default. MS MPI uses _stdlib. This makes a lot of difference for IA32 platform. -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Erez Haba Sent: Thursday, April 24, 2008 8:56 PM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting It is the same as MPICH2 for MPI constants. -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Greg Lindahl Sent: Thursday, April 24, 2008 11:47 AM To: mpi3-abi_at_[hidden] Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > Do you have time to complete the Microsoft column? Isn't it the same as MPICH2? Likewise, many vendor MPIs are basd on MPICH-1. -- greg _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From alexander.supalov at [hidden] Fri Apr 25 05:21:14 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Fri, 25 Apr 2008 11:21:14 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6B68D01C00C9994A8E150183E62A119E72BD94C0AE@NA-EXMSG-C105.redmond.corp.microsoft.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201467216@swsmsx413.ger.corp.intel.com> Hi, I've reviewed the current spreadsheet. Note that according to our current observations, 64-bit Linux column covers Itanium Linux as well, as far as the contents of the MPICH2 mpi.h is concerned. 
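To illustrate the IA32 calling-convention point (the Microsoft convention referred to above is presumably __stdcall, exposed through a macro along the lines of MPIAPI in the MS header), here is a rough sketch; the prototypes are simplified and the decorated symbol names are only what a typical 32-bit Windows compiler would emit.

/* cdecl (MPICH2 default): the caller pops the arguments; on 32-bit
   Windows the exported symbol is plain _MPI_Send */
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);

/* stdcall (MS MPI style): the callee pops the arguments; the symbol
   carries the argument byte count, e.g. _MPI_Send@24 for six 4-byte
   arguments */
#define MPIAPI __stdcall   /* assumed macro name, modeled on the MS header */
int MPIAPI MPI_Send(void *buf, int count, MPI_Datatype datatype,
                    int dest, int tag, MPI_Comm comm);

An application built against one convention either fails to link against a library built with the other or, if forced, corrupts the stack at the call site, so a common ABI on 32-bit Windows has to pin this down explicitly.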
I take this as an indication that we should not forget that ABI is more than mpi.h, and that we should be very specific in the platform description. Also, we might want to split Linux and Windows into different subsheets. Indeed, splitting into 32- and 64-bit might be helpful as well, as soon as the table becomes densely populated. I think this is something we should discuss when we see a joint ABI emerging. I have a process question here: how do we prevent multiple updates running into each other, or overwriting each other accidentally? Is there a check-in/out feature in Wiki, or should we introduce a manual lock? Say, a file with a well known name to create before editing and delete afterwards? The name of the creator would show who's holding the lock. The modification date of the main file would show whether the copy you're working with is still actual. Best regards. Alexander -----Original Message----- From: Supalov, Alexander Sent: Friday, April 25, 2008 9:59 AM To: 'MPI 3.0 ABI working group' Subject: RE: [Mpi3-abi] For the April MPI Forum Meeting Mind the calling convention issues: MPICH2 uses _cdecl by default. MS MPI uses _stdlib. This makes a lot of difference for IA32 platform. -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Erez Haba Sent: Thursday, April 24, 2008 8:56 PM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting It is the same as MPICH2 for MPI constants. -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Greg Lindahl Sent: Thursday, April 24, 2008 11:47 AM To: mpi3-abi_at_[hidden] Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting > Do you have time to complete the Microsoft column? Isn't it the same as MPICH2? Likewise, many vendor MPIs are basd on MPICH-1. -- greg _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From jsquyres at [hidden] Fri Apr 25 05:41:27 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Fri, 25 Apr 2008 06:41:27 -0400 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201467216@swsmsx413.ger.corp.intel.com> Message-ID: <3993C75C-10F0-438B-A3FA-3AA41FEF64F0@cisco.com> On Apr 25, 2008, at 6:21 AM, Supalov, Alexander wrote: > I've reviewed the current spreadsheet. Note that according to our > current observations, 64-bit Linux column covers Itanium Linux as > well, > as far as the contents of the MPICH2 mpi.h is concerned. 
I take this > as > an indication that we should not forget that ABI is more than mpi.h, > and > that we should be very specific in the platform description. > > Also, we might want to split Linux and Windows into different > subsheets. > Indeed, splitting into 32- and 64-bit might be helpful as well, as > soon > as the table becomes densely populated. I think this is something we > should discuss when we see a joint ABI emerging. What is the goal for all of this analysis? I think we can already tell that: - many MPI implementations have different fixed values for the same MPI constants/etc. - some MPI implementations have run-time determined values (e.g., pointers) - some MPI implementations change values/types based on the compiler +platform that they are operating on Do we really need to make a comprehensive list of all MPI's on all compiler+platforms to see these trends? Is there specific data that would be gleaned from these vs. seeing a representative sample? (just trying to understand the purpose of the ever-expanding spreadsheet) > I have a process question here: how do we prevent multiple updates > running into each other, or overwriting each other accidentally? Is > there a check-in/out feature in Wiki, or should we introduce a manual > lock? Say, a file with a well known name to create before editing and > delete afterwards? The name of the creator would show who's holding > the > lock. The modification date of the main file would show whether the > copy > you're working with is still actual. How about creating separate spreadsheets, one for each MPI implementation? This would allow for more-or-less independent updates. At some point in the future (when most changes have been complete), they can be [re-]merged back into one spreadsheet. -- Jeff Squyres Cisco Systems From alexander.supalov at [hidden] Fri Apr 25 07:32:12 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Fri, 25 Apr 2008 13:32:12 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <3993C75C-10F0-438B-A3FA-3AA41FEF64F0@cisco.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A20146736F@swsmsx413.ger.corp.intel.com> Hi, The purpose of having as much input data as possible before deciding whether and how to proceed is to not miss a potentially important data point. In this vein, I'd love to see some MPICH1 and HP MPI data there. That, with the data already in the spreadsheet, would most likely cover 99% of the currently targeted platforms (IA32/Intel64/Linux/Windows). A split into several files would make sense if we had dozens of actively contributing members. At the moment we are blessed with but a few. Best regards. Alexander -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Friday, April 25, 2008 12:41 PM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting On Apr 25, 2008, at 6:21 AM, Supalov, Alexander wrote: > I've reviewed the current spreadsheet. Note that according to our > current observations, 64-bit Linux column covers Itanium Linux as > well, > as far as the contents of the MPICH2 mpi.h is concerned. I take this > as > an indication that we should not forget that ABI is more than mpi.h, > and > that we should be very specific in the platform description. > > Also, we might want to split Linux and Windows into different > subsheets. > Indeed, splitting into 32- and 64-bit might be helpful as well, as > soon > as the table becomes densely populated. 
I think this is something we > should discuss when we see a joint ABI emerging. What is the goal for all of this analysis? I think we can already tell that: - many MPI implementations have different fixed values for the same MPI constants/etc. - some MPI implementations have run-time determined values (e.g., pointers) - some MPI implementations change values/types based on the compiler +platform that they are operating on Do we really need to make a comprehensive list of all MPI's on all compiler+platforms to see these trends? Is there specific data that would be gleaned from these vs. seeing a representative sample? (just trying to understand the purpose of the ever-expanding spreadsheet) > I have a process question here: how do we prevent multiple updates > running into each other, or overwriting each other accidentally? Is > there a check-in/out feature in Wiki, or should we introduce a manual > lock? Say, a file with a well known name to create before editing and > delete afterwards? The name of the creator would show who's holding > the > lock. The modification date of the main file would show whether the > copy > you're working with is still actual. How about creating separate spreadsheets, one for each MPI implementation? This would allow for more-or-less independent updates. At some point in the future (when most changes have been complete), they can be [re-]merged back into one spreadsheet. -- Jeff Squyres Cisco Systems _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From jeffb at [hidden] Fri Apr 25 08:43:26 2008 From: jeffb at [hidden] (Jeffrey S. Brown) Date: Fri, 25 Apr 2008 07:43:26 -0600 (MDT) Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <3993C75C-10F0-438B-A3FA-3AA41FEF64F0@cisco.com> Message-ID: <22468.128.165.0.81.1209131006.squirrel@webmail.lanl.gov> Well, if the goal is a common ABI then we have to expose the details in order to put together a proposal for the group. I don't see any other way to do it. As for the spreadsheet ... One way to do it would be to assign an "owner" for each of the columns on the spreadsheet. I will do the integration. Let's discuss at the WG session next week. I'd like to start getting into discussion of the ABI column. We should have enough data to get that rolling. Perhaps we will have something to propose at the next meeting (although this seems to pop up on everyones radar a week prior to the meeting - myself included). Jeff > On Apr 25, 2008, at 6:21 AM, Supalov, Alexander wrote: > >> I've reviewed the current spreadsheet. Note that according to our >> current observations, 64-bit Linux column covers Itanium Linux as >> well, >> as far as the contents of the MPICH2 mpi.h is concerned. 
I take this >> as >> an indication that we should not forget that ABI is more than mpi.h, >> and >> that we should be very specific in the platform description. >> >> Also, we might want to split Linux and Windows into different >> subsheets. >> Indeed, splitting into 32- and 64-bit might be helpful as well, as >> soon >> as the table becomes densely populated. I think this is something we >> should discuss when we see a joint ABI emerging. > > What is the goal for all of this analysis? I think we can already > tell that: > > - many MPI implementations have different fixed values for the same > MPI constants/etc. > - some MPI implementations have run-time determined values (e.g., > pointers) > - some MPI implementations change values/types based on the compiler > +platform that they are operating on > > Do we really need to make a comprehensive list of all MPI's on all > compiler+platforms to see these trends? Is there specific data that > would be gleaned from these vs. seeing a representative sample? > > (just trying to understand the purpose of the ever-expanding > spreadsheet) > >> I have a process question here: how do we prevent multiple updates >> running into each other, or overwriting each other accidentally? Is >> there a check-in/out feature in Wiki, or should we introduce a manual >> lock? Say, a file with a well known name to create before editing and >> delete afterwards? The name of the creator would show who's holding >> the >> lock. The modification date of the main file would show whether the >> copy >> you're working with is still actual. > > > How about creating separate spreadsheets, one for each MPI > implementation? This would allow for more-or-less independent updates. > > At some point in the future (when most changes have been complete), > they can be [re-]merged back into one spreadsheet. > > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > mpi3-abi mailing list > mpi3-abi_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi > From jsquyres at [hidden] Fri Apr 25 09:04:14 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Fri, 25 Apr 2008 10:04:14 -0400 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A20146736F@swsmsx413.ger.corp.intel.com> Message-ID: <25CDC614-ADBB-4C43-A3D7-DB5D02D94ACE@cisco.com> On Apr 25, 2008, at 8:32 AM, Supalov, Alexander wrote: > The purpose of having as much input data as possible before deciding > whether and how to proceed is to not miss a potentially important data > point. > > In this vein, I'd love to see some MPICH1 and HP MPI data there. That, > with the data already in the spreadsheet, would most likely cover > 99% of > the currently targeted platforms (IA32/Intel64/Linux/Windows). I'm all for having enough data points. I was questioning how many we need -- it just looked like we we diverging into the "need dozens of datapoints" realm. If we're not, no problem. > A split into several files would make sense if we had dozens of > actively > contributing members. At the moment we are blessed with but a few. The wiki has no "lock" feature. SVN does, but we don't really have a common SVN. This is the canonical problem with binary formats -- more power in the binary-based tool (excel), but less collaboration ability. I suppose a sharepoint server would work...? But [I'm assuming] that would be a nightmare of licensing and access control setup. We could use a tex-based format (latex?) 
that allows merge capabilities, or use the wiki text format (but wikis don't handle simultaneous editing nicely -- someone inevitably "loses", rather than having the ability to merge their new text in). Splitting into mutliple files, each with a discrete author, might still be the best solution. [shrug] -- Jeff Squyres Cisco Systems From erezh at [hidden] Fri Apr 25 11:01:25 2008 From: erezh at [hidden] (Erez Haba) Date: Fri, 25 Apr 2008 09:01:25 -0700 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <25CDC614-ADBB-4C43-A3D7-DB5D02D94ACE@cisco.com> Message-ID: <6B68D01C00C9994A8E150183E62A119E72BEF8E01E@NA-EXMSG-C105.redmond.corp.microsoft.com> I think that we need enough data to put the constants into classes of constants that are different from one implementation to the other (examples for a classes, are "handles", "datatype"). Once done, we'll be able to understand the differences in the different implementations (approach) and in the different platforms. This will enable us to come up with reasonable suggestion for the ABI constants. I think that putting the constants in different spreadsheet (split by os/cpu) is reasonable. Thanks, .Erez -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Friday, April 25, 2008 7:04 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting On Apr 25, 2008, at 8:32 AM, Supalov, Alexander wrote: > The purpose of having as much input data as possible before deciding > whether and how to proceed is to not miss a potentially important data > point. > > In this vein, I'd love to see some MPICH1 and HP MPI data there. That, > with the data already in the spreadsheet, would most likely cover > 99% of > the currently targeted platforms (IA32/Intel64/Linux/Windows). I'm all for having enough data points. I was questioning how many we need -- it just looked like we we diverging into the "need dozens of datapoints" realm. If we're not, no problem. > A split into several files would make sense if we had dozens of > actively > contributing members. At the moment we are blessed with but a few. The wiki has no "lock" feature. SVN does, but we don't really have a common SVN. This is the canonical problem with binary formats -- more power in the binary-based tool (excel), but less collaboration ability. I suppose a sharepoint server would work...? But [I'm assuming] that would be a nightmare of licensing and access control setup. We could use a tex-based format (latex?) that allows merge capabilities, or use the wiki text format (but wikis don't handle simultaneous editing nicely -- someone inevitably "loses", rather than having the ability to merge their new text in). Splitting into mutliple files, each with a discrete author, might still be the best solution. 
[shrug] -- Jeff Squyres Cisco Systems _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi From alexander.supalov at [hidden] Fri Apr 25 13:11:52 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Fri, 25 Apr 2008 19:11:52 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6B68D01C00C9994A8E150183E62A119E72BEF8E01E@NA-EXMSG-C105.redmond.corp.microsoft.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A20148EF1F@swsmsx413.ger.corp.intel.com> Hi, The initial ABI proposal draft (see http://svn.mpi-forum.org/trac/mpi-forum-web/attachment/wiki/AbiWikiPage/MPI%20ABI%200.4.doc) cites Bill Gropp's paper to introduce the MPI constant hierarchy basing on when the items are/should be defined according to the standard. We can use that hierarchy to split all ABI entities into groups. Here is the pertinent excerpt: - Compatible MPI data entities o Compile-time values § Used in declarations (MPI_MAX_ERROR_STRING, etc.) § Other (MPI_ANY_SOURCE, MPI_ERR_TRUNCATE, etc.) o Init-time constants (MPI_INT, etc.) o Opaque objects (MPI_Comm, MPI_Datatype, etc.) o Defined objects (MPI_Status) o Defined pointers (MPI_BOTTOM, MPI_STATUS_NULL, etc.) And while we're so focused on the mpi.h contents, I can't help reiterating that this is but a part of the ABI matter. The following is no less important for the whole thing to work: - Uniform approach to the macro implementation of certain functions (MPI_Wtime, MPI_Wtick, handle conversion calls, possibly others) - Similar calling convention o Argument order and size o Stack frame management policy o Return address storage and handling o Function call and return handling - Common linkage convention o Library file format o MPI library name o MPI library path resolution mechanism o System library dependency resolution Best regards. Alexander -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Erez Haba Sent: Friday, April 25, 2008 6:01 PM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting I think that we need enough data to put the constants into classes of constants that are different from one implementation to the other (examples for a classes, are "handles", "datatype"). Once done, we'll be able to understand the differences in the different implementations (approach) and in the different platforms. This will enable us to come up with reasonable suggestion for the ABI constants. I think that putting the constants in different spreadsheet (split by os/cpu) is reasonable. Thanks, .Erez -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Friday, April 25, 2008 7:04 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting On Apr 25, 2008, at 8:32 AM, Supalov, Alexander wrote: > The purpose of having as much input data as possible before deciding > whether and how to proceed is to not miss a potentially important data > point. > > In this vein, I'd love to see some MPICH1 and HP MPI data there. That, > with the data already in the spreadsheet, would most likely cover > 99% of > the currently targeted platforms (IA32/Intel64/Linux/Windows). I'm all for having enough data points. I was questioning how many we need -- it just looked like we we diverging into the "need dozens of datapoints" realm. If we're not, no problem. 
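A small illustration of what each class in that hierarchy looks like in a header; the values and spellings below are placeholders for discussion, not a proposed ABI.

/* Compile-time values used in declarations: must be real preprocessor
   constants so user code can size arrays with them */
#define MPI_MAX_ERROR_STRING 512            /* placeholder value */
char errmsg[MPI_MAX_ERROR_STRING];

/* Other compile-time values: plain integers an application may embed */
#define MPI_ANY_SOURCE   (-1)               /* placeholder value */
#define MPI_ERR_TRUNCATE 15                 /* placeholder value */

/* Opaque objects: the handle types themselves */
typedef int MPI_Datatype;                   /* could equally be a pointer type */

/* Init-time constants: handles whose objects only exist after MPI_Init,
   whether encoded as fixed integers or as addresses of library objects */
#define MPI_INT ((MPI_Datatype)0x4c000405)  /* integer-handle style, placeholder */

/* Defined objects: the layout is exposed, with at least these public fields */
typedef struct MPI_Status {
    int MPI_SOURCE;
    int MPI_TAG;
    int MPI_ERROR;
} MPI_Status;

/* Defined pointers: address constants with reserved meanings */
#define MPI_BOTTOM      ((void *)0)
#define MPI_STATUS_NULL ((MPI_Status *)0)

Each class constrains the ABI differently: the compile-time values must be bit-identical across implementations, while the init-time handles only need a common representation and linkage.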
> A split into several files would make sense if we had dozens of > actively > contributing members. At the moment we are blessed with but a few. The wiki has no "lock" feature. SVN does, but we don't really have a common SVN. This is the canonical problem with binary formats -- more power in the binary-based tool (excel), but less collaboration ability. I suppose a sharepoint server would work...? But [I'm assuming] that would be a nightmare of licensing and access control setup. We could use a tex-based format (latex?) that allows merge capabilities, or use the wiki text format (but wikis don't handle simultaneous editing nicely -- someone inevitably "loses", rather than having the ability to merge their new text in). Splitting into mutliple files, each with a discrete author, might still be the best solution. [shrug] -- Jeff Squyres Cisco Systems _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From alexander.supalov at [hidden] Fri Apr 25 13:53:39 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Fri, 25 Apr 2008 19:53:39 +0100 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <6B68D01C00C9994A8E150183E 62A119E72BEF8E01E@NA-EXMSG-C105.redmond.corp.microsoft.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A20148EF2B@swsmsx413.ger.corp.intel.com> Hi, I've updated Jeff's file in the Wiki: some formatting, some typos fixed (in red), etc. Best regards. Alexander -----Original Message----- From: Supalov, Alexander Sent: Friday, April 25, 2008 8:12 PM To: 'MPI 3.0 ABI working group' Subject: RE: [Mpi3-abi] For the April MPI Forum Meeting Hi, The initial ABI proposal draft (see http://svn.mpi-forum.org/trac/mpi-forum-web/attachment/wiki/AbiWikiPage/MPI%20ABI%200.4.doc) cites Bill Gropp's paper to introduce the MPI constant hierarchy basing on when the items are/should be defined according to the standard. We can use that hierarchy to split all ABI entities into groups. Here is the pertinent excerpt: - Compatible MPI data entities o Compile-time values § Used in declarations (MPI_MAX_ERROR_STRING, etc.) § Other (MPI_ANY_SOURCE, MPI_ERR_TRUNCATE, etc.) o Init-time constants (MPI_INT, etc.) o Opaque objects (MPI_Comm, MPI_Datatype, etc.) o Defined objects (MPI_Status) o Defined pointers (MPI_BOTTOM, MPI_STATUS_NULL, etc.) And while we're so focused on the mpi.h contents, I can't help reiterating that this is but a part of the ABI matter. 
The following is no less important for the whole thing to work: - Uniform approach to the macro implementation of certain functions (MPI_Wtime, MPI_Wtick, handle conversion calls, possibly others) - Similar calling convention o Argument order and size o Stack frame management policy o Return address storage and handling o Function call and return handling - Common linkage convention o Library file format o MPI library name o MPI library path resolution mechanism o System library dependency resolution Best regards. Alexander -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Erez Haba Sent: Friday, April 25, 2008 6:01 PM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting I think that we need enough data to put the constants into classes of constants that are different from one implementation to the other (examples for a classes, are "handles", "datatype"). Once done, we'll be able to understand the differences in the different implementations (approach) and in the different platforms. This will enable us to come up with reasonable suggestion for the ABI constants. I think that putting the constants in different spreadsheet (split by os/cpu) is reasonable. Thanks, .Erez -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Friday, April 25, 2008 7:04 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting On Apr 25, 2008, at 8:32 AM, Supalov, Alexander wrote: > The purpose of having as much input data as possible before deciding > whether and how to proceed is to not miss a potentially important data > point. > > In this vein, I'd love to see some MPICH1 and HP MPI data there. That, > with the data already in the spreadsheet, would most likely cover > 99% of > the currently targeted platforms (IA32/Intel64/Linux/Windows). I'm all for having enough data points. I was questioning how many we need -- it just looked like we we diverging into the "need dozens of datapoints" realm. If we're not, no problem. > A split into several files would make sense if we had dozens of > actively > contributing members. At the moment we are blessed with but a few. The wiki has no "lock" feature. SVN does, but we don't really have a common SVN. This is the canonical problem with binary formats -- more power in the binary-based tool (excel), but less collaboration ability. I suppose a sharepoint server would work...? But [I'm assuming] that would be a nightmare of licensing and access control setup. We could use a tex-based format (latex?) that allows merge capabilities, or use the wiki text format (but wikis don't handle simultaneous editing nicely -- someone inevitably "loses", rather than having the ability to merge their new text in). Splitting into mutliple files, each with a discrete author, might still be the best solution. 
[shrug] -- Jeff Squyres Cisco Systems _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi _______________________________________________ mpi3-abi mailing list mpi3-abi_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From kannan.narasimhan at [hidden] Fri Apr 25 19:38:25 2008 From: kannan.narasimhan at [hidden] (Narasimhan, Kannan) Date: Sat, 26 Apr 2008 00:38:25 +0000 Subject: [Mpi3-abi] For the April MPI Forum Meeting In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A20146736F@swsmsx413.ger.corp.intel.com> Message-ID: I've added HP-MPI data to the spread sheet, and uploaded it to the Wiki. I've focused on the Linux version of mpi.h (the Windows and HP-UX versions currently have minor differences, but we plan to normalize to the Linux version in future). -Kannan- -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Supalov, Alexander Sent: Friday, April 25, 2008 7:32 AM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting Hi, The purpose of having as much input data as possible before deciding whether and how to proceed is to not miss a potentially important data point. In this vein, I'd love to see some MPICH1 and HP MPI data there. That, with the data already in the spreadsheet, would most likely cover 99% of the currently targeted platforms (IA32/Intel64/Linux/Windows). A split into several files would make sense if we had dozens of actively contributing members. At the moment we are blessed with but a few. Best regards. Alexander -----Original Message----- From: mpi3-abi-bounces_at_[hidden] [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Friday, April 25, 2008 12:41 PM To: MPI 3.0 ABI working group Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting On Apr 25, 2008, at 6:21 AM, Supalov, Alexander wrote: > I've reviewed the current spreadsheet. Note that according to our > current observations, 64-bit Linux column covers Itanium Linux as > well, as far as the contents of the MPICH2 mpi.h is concerned. I take > this as an indication that we should not forget that ABI is more than > mpi.h, and that we should be very specific in the platform > description. > > Also, we might want to split Linux and Windows into different > subsheets. > Indeed, splitting into 32- and 64-bit might be helpful as well, as > soon as the table becomes densely populated. I think this is something > we should discuss when we see a joint ABI emerging. What is the goal for all of this analysis? I think we can already tell that: - many MPI implementations have different fixed values for the same MPI constants/etc. 
From alexander.supalov at [hidden] Sat Apr 26 02:25:13 2008
From: alexander.supalov at [hidden] (Supalov, Alexander)
Date: Sat, 26 Apr 2008 08:25:13 +0100
Subject: [Mpi3-abi] For the April MPI Forum Meeting
In-Reply-To:
Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A20148EF77@swsmsx413.ger.corp.intel.com>

Thanks. I took the liberty of removing the older document, so that we have one source for the MPICH addition we need to complete the rough picture. Who could take that on?
From jeffb at [hidden] Sat Apr 26 18:16:26 2008
From: jeffb at [hidden] (Jeff Brown)
Date: Sat, 26 Apr 2008 17:16:26 -0600
Subject: [Mpi3-abi] For the April MPI Forum Meeting
In-Reply-To:
Message-ID: <6.2.3.4.2.20080426171419.02edfaa0@ccs-mail.lanl.gov>

I added LAMPI (Los Alamos MPI) and documented the vendor implementations mentioned earlier that follow an existing implementation. Uploaded to the wiki.

See everyone at the meeting.

Jeff
From jeffb at [hidden] Mon Apr 28 15:43:02 2008
From: jeffb at [hidden] (Jeffrey S. Brown)
Date: Mon, 28 Apr 2008 14:43:02 -0600 (MDT)
Subject: [Mpi3-abi] For the April MPI Forum Meeting
In-Reply-To: <6.2.3.4.2.20080426171419.02edfaa0@ccs-mail.lanl.gov>
Message-ID: <31080.128.165.0.81.1209415382.squirrel@webmail.lanl.gov>

After speaking with Alexander, I updated the spreadsheet, assigned column owners (see the comments in the column headings), and refined the MPICH columns a bit. Attached and uploaded to the wiki.

Jeff
* -------------- next part --------------
A non-text attachment was scrubbed...
Name: MPI_ABI_OpenMPI___MPICH2___HPMPI___LAMPI___vendors.xls
Type: application/octet-stream
Size: 113152 bytes
Desc: MPI_ABI_OpenMPI___MPICH2___HPMPI___LAMPI___vendors.xls
URL: