From wgropp at [hidden] Mon Sep 1 13:42:29 2008 From: wgropp at [hidden] (William Gropp) Date: Mon, 1 Sep 2008 13:42:29 -0500 Subject: [Mpi-22] MPI 2.2 at MPI Forum Meeting Message-ID: <6785C973-D60D-4B1E-892F-8FA1DEFCFB85@illinois.edu> MPI 2.2 Group, We will be discussing the current proposals that have been added to the wiki. Please look these over before the meeting; the page is https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MpiTwoTwoWikiPage . I hope to see many of you on Wednesday! Bill William Gropp Paul and Cynthia Saylor Professor of Computer Science University of Illinois Urbana-Champaign From jsquyres at [hidden] Tue Sep 2 17:06:49 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Tue, 2 Sep 2008 23:06:49 +0100 Subject: [Mpi-22] Several new proposals / moot points Message-ID: Just in time for the Dublin meeting, I have posted several new MPI-2.2 proposals (it looks like a bunch of others have, too!) based on my items from the following URLs: http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/mpi2-2issues.htm#squyres1 http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/mpi2-2issues.htm#squyres2 http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/mpi2-2issues.htm#squyres3 I numbered each of the proposals with the item number from the above URLs.
Here are the resulting proposals on the wiki: https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/CxxBindingsMissingConst https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/GrequestStartFnPtrArgs https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/FnPtrTypedefNames https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TextUpdatesToLangBindingsChapter https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/MPICancelWrongArgType https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/NamingConventionsPrefixConsistency https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/RemoveCxxDeprecatedSection Additionally, several of the suggestions that I made on the #squyresX URLs are now moot. Here's a listing by issue number, including brief descriptions of why they are moot: ====================================================================== 30.c - p4: Two open source MPI implementations are cited that are no longer relevant. This entire sentence should be removed. Indeed, the language in the overall paragraph is forward-looking -- it should probably be re-worked to be in the present tense. Editor's comment: should be done by the Forum Editor (WDG): The historical context is important. Rather than remove this, it should be updated. --> now moot in the final 2.1 doc; the language was re-worked. ====================================================================== 30.o R2 p186.5: Descriptions for the C++ bindings need to be included here, since they are different from the C bindings. The text from 13.1.7 would seem to be sufficient. (See also 35.e) ---> seems moot now -- MPI-2.1 p195:41 says "Please see Section 16.1.7 on page 455 for further discussion about the C++ bindings for Dup() and Clone()." ====================================================================== 71.c - MPI-2.1 p571 Examples Index: there are still a bunch of repeated names, some in all caps, some in mixed case, etc. 71.d 
- MPI-2.1 p571 Examples Index: There are some examples listed just by MPI function name (e.g., MPI_SEND and MPI_Send) -- are we listing every MPI function in every MPI example? How were these example names selected? Editor's comment: I tried to pick major routines in the examples (sometimes not all). Additionally, for some examples, I defined titles for the Index. 71.e - MPI-2.1 p574 MPI Constant and Predefined Handle Index: The first entry is still "MPI::*_NULL" 71.f - MPI-2.1 p574 MPI Constant and Predefined Handle Index: Are we listing the C++ constants and Fortran constants here, too? Or just the C constants? Or just language neutral? (I only see C++ predefined datatypes listed -- should we list all or none of them?) --> these still need to be cleaned up, but I don't think they need a proposal. ====================================================================== p10.35: due to the decision from last meeting (sort out the IN/OUT/INOUT mess in MPI-2.2), the language should be softened in this paragraph and the full paragraph following this one (because they contradict each other). Specifically, I propose changing: 10.35: Thus, in C++, IN arguments are either references... to Thus, in C++, IN arguments are usually either references... Editor's comment: Is there an exception to this rule? If yes, then the proposal is okay. --> now moot; MPI 2.1 p10:34 says "...usually..." Remember that the Forum was very careful to state that this is *not* a rule; it's a guideline. ====================================================================== 32.j' - Which names should be visible in the Index for Dup and Clone? TODO --> MPI::Comm::Clone() and MPI::Comm::Dup() show up in MPI-2.1 Annex A.4 (C++ bindings) and MPI_COMM_CLONE and MPI_COMM_DUP show up in the MPI Function Index. I think that this is sufficient. ====================================================================== 30.h MPI-2.1 p11.22: "Fortran in this document refers to Fortran 90". 
For MPI-2.1, it is probably suitable to leave this, but we might want to make a statement (footnote or parenthetical) that it is expected to be updated in future MPI spec revisions. --> I no longer think that this is necessary. ====================================================================== 32.k R2 p445.36-38: Remove this entire paragraph ("Compilers that do not support..."). This feature has been a part of C++ since C++98, and exists in all modern C++ compilers. --> Moot: it's gone. (MPI-2.1 p443:46) ====================================================================== 33.n R2 p556.bottom: Missing prototypes for the MPI::Exceptions class --> Moot: now included in MPI-2.1 A.4.12. ====================================================================== 32.r 22 p463.17-18: Delete first sentence; delete "In MPI-2," --> Moot: done in MPI-2.2. ====================================================================== Enjoy. -- Jeff Squyres Cisco Systems From alexander.supalov at [hidden] Wed Sep 3 10:18:59 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Wed, 3 Sep 2008 16:18:59 +0100 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201AF39BC@swsmsx413.ger.corp.intel.com> Dear Kannan, Thanks. Why don't we define an MPI_Count datatype instead, and make that grow if necessary? Best regards. Alexander -----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Narasimhan, Kannan Sent: Friday, August 29, 2008 6:15 PM To: mpi-22_at_[hidden] Subject: [Mpi-22] New proposal: Support for large message counts I have submitted a new 2.2 proposal to address the need for large message counts (i.e. counts greater than the size of a 32-bit integer) for MPI calls that communicate messages. Please refer to https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/LargeMsgCounts for details. Comments and suggestions are welcome. Thanx! 
Kannan _______________________________________________ mpi-22 mailing list mpi-22_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From kannan.narasimhan at [hidden] Wed Sep 3 10:54:46 2008 From: kannan.narasimhan at [hidden] (Narasimhan, Kannan) Date: Wed, 3 Sep 2008 15:54:46 +0000 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201AF39BC@swsmsx413.ger.corp.intel.com> Message-ID: We can certainly consider this option, since it abstracts along the same lines as MPI_Offset. Thanx! Kannan From alexander.supalov at [hidden] Wed Sep 3 11:38:35 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Wed, 3 Sep 2008 17:38:35 +0100 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201AF3A5D@swsmsx413.ger.corp.intel.com> Thanks. I think we may even do without changing the MPI routine names. Let's consider this: 1) #define MPI_Count int will yield the current MPI interface. 2) typedef int MPI_Count will yield the new 32-bit interface, still backward binary compatible with 1). 3) typedef long MPI_Count will yield the new 64-bit interface. 4) typedef long long MPI_Count will safely bring us into the XXII century (or earlier). 
Old applications may be rebuilt using 1) without any changes and dynamically linked against an MPI library built using 2), and will have to be rebuilt to use an MPI library built using 3) or 4). New applications will use a library with the MPI_Count definitions and, properly rebuilt, will work in either 32-, 64-, or 128-bit mode, using MPI libraries built using MPI bindings 2), 3), or 4), respectively. By the way, should MPI_Count really be signed? 
From treumann at [hidden] Wed Sep 3 12:24:24 2008 From: treumann at [hidden] (Richard Treumann) Date: Wed, 3 Sep 2008 13:24:24 -0400 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201AF3A5D@swsmsx413.ger.corp.intel.com> Message-ID: MPI_Aint has the right number of bits to hold an address. For operations that are address-space oriented, count can be the same size as an MPI_Aint. MPI_Offset has the right number of bits to deal with the range in a file. For operations that are file oriented, arguments like count should be an MPI_Offset. A problem is that MPI_Datatype constructors are intended to apply to both, and it is common for a 32-bit address space to operate on files with 64-bit offsets. What type should "count" on the new "long" MPI_Datatype constructors be in this case? The proposal says MPI_Aint, but that is awkward for datatypes used as fileviews. Dick Treumann - MPI Team IBM Systems & Technology Group Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 
From kannan.narasimhan at [hidden] Wed Sep 3 12:31:32 2008 From: kannan.narasimhan at [hidden] (Narasimhan, Kannan) Date: Wed, 3 Sep 2008 17:31:32 +0000 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201AF3A5D@swsmsx413.ger.corp.intel.com> Message-ID: Alexander, Thanx for the suggestion. 1. Maybe I'm missing something here, but without a name change we break backward compatibility between "old" binaries coded to the existing APIs with 32-bit counts and a new MPI 2.2-compatible shared library that supports 64-bit counts --- without a re-build/relink step. Or are you proposing that implementations have multiple libraries, one that supports 32-bit counts, one that supports 64-bit counts, etc.? With a single library, (1) and (2) address backward compatibility but maintain the status quo on support for large message counts, and (3) and (4) require a rebuild. However, having different names will take care of this. 2. If needed, we can explore normalization to a single API that supports MPI_Count in the MPI 3.0 time frame, since MPI 3.0 is open to changes that can break backward compatibility. 3. I cannot think of any use cases for a "signed" count, but I'll let others chime in here. 4. I agree that new applications will have no issues with your approach. Thanx! Kannan 
From jsquyres at [hidden] Thu Sep 4 01:26:23 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Thu, 4 Sep 2008 07:26:23 +0100 Subject: [Mpi-22] MPI::Grequest::Start proposal Message-ID: <80ED4FC7-DAAB-4AEC-9890-403210CD5569@cisco.com> Regarding https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/GrequestStartFnPtrArgs : I have confirmed in Open MPI that if you change this: static Grequest Start(Query_function, Free_function, Cancel_function, void *); to static Grequest Start(Query_function *, Free_function *, Cancel_function *, void *); and recompile and reinstall, then both the implementation and the same C++ user code compile without warning/error. Erez raised the point that this would change the type of any implementation- and user-declared variables that hold the function pointer. It does not; such a variable's type must always be (Free_function *); it is only the type that is passed through a function argument that can be either (Free_function) or (Free_function *). Someone smarter than me in C++ can explain why. 
:-) Specifically, the following compiles and runs without warning/error:
-----
#include <stdio.h>

typedef int Free_function(void *);

int my_function(void *) { printf("In my_function\n"); return 0; }

void foo1(Free_function ptr) { Free_function *save = ptr; save(0); }

void foo2(Free_function *ptr) { Free_function *save = ptr; save(0); }

int main(int argc, char *argv[]) { foo1(my_function); foo2(my_function); return 0; }
-----
I tried 4 different C++ compilers (gnu, intel, pgi, pathscale):
-----
[23:19] svbu-mpi:~/tmp % g++ foo.cc -o foo && ./foo
In my_function
In my_function
[23:21] svbu-mpi:~/tmp % icpc foo.cc -o foo && ./foo
In my_function
In my_function
[23:21] svbu-mpi:~/tmp % pgCC foo.cc -o foo && ./foo
In my_function
In my_function
[23:22] svbu-mpi:~/tmp % pathCC foo.cc -o foo && ./foo
In my_function
In my_function
[23:22] svbu-mpi:~/tmp %
------
So I think the proposal stands as it is written. -- Jeff Squyres Cisco Systems From erezh at [hidden] Thu Sep 4 03:02:17 2008 From: erezh at [hidden] (Erez Haba) Date: Thu, 4 Sep 2008 01:02:17 -0700 Subject: [Mpi-22] MPI::Grequest::Start proposal In-Reply-To: <80ED4FC7-DAAB-4AEC-9890-403210CD5569@cisco.com> Message-ID: <6B68D01C00C9994A8E150183E62A119E790C731A7C@NA-EXMSG-C105.redmond.corp.microsoft.com> I think that this is okay; when we discussed it, I thought it was a different fix. The proposal is just fine. 
-- Jeff Squyres Cisco Systems From balaji at [hidden] Thu Sep 4 03:30:23 2008 From: balaji at [hidden] (Pavan Balaji) Date: Thu, 04 Sep 2008 03:30:23 -0500 Subject: [Mpi-22] Dynamic thread levels Message-ID: <48BF9C9F.7030006@mcs.anl.gov> Hi all, I would like to propose the addition of a function call to dynamically modify the thread level required by the application, instead of only at MPI_Init_thread() time. I'm sending this over email for comments (attached with this email); I'll upload it on the wiki this evening. -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji Dynamic Thread Levels Author: Pavan Balaji, Argonne National Laboratory Background: The MPI 2.1 standard allows users to set the thread level only at MPI_Init_thread time. This means that for applications that dynamically create threads (e.g., hybrid MPI + OpenMP applications), even non-threaded portions of the application have to rely on the maximum thread level used in the application (e.g., MULTIPLE). Proposal: Add an additional function call to dynamically set the thread level during the application. int MPI_Set_thread_level(int required, int *provided); Rationale: The requirement to specify the thread level at MPI_Init_thread time is too restrictive for applications that perform a small amount of communication requiring a high level of thread support. For correctness, the standard requires all of the code to follow the same thread level, and gives applications no way to tell the MPI library more about their behavior. Impact on MPI implementations: Most MPI implementations already provide runtime support for the thread level, i.e., locks are compiled in, but whether they are invoked or not is decided at runtime. 
For implementations that choose not to honor this option, MPI_Set_thread_level() can simply set the provided level to the current level, ignoring the required level specified by the user.

Impact on MPI applications: Existing MPI applications do not need to be modified at all, but newer applications can benefit from this additional functionality. 

From alexander.supalov at [hidden] Thu Sep 4 03:58:11 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Thu, 4 Sep 2008 09:58:11 +0100 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201B1F015@swsmsx413.ger.corp.intel.com> 

Hi, Yes, I was looking into maintaining one library source code with the ability to build several versions out of it. Since 1) and 2) will be essentially ABI compatible, one can ship one version of the 32-bit dynamic library to cover both cases. The 64-bit library will be different, and this is, I think, what customers want anyway - they don't seem to want to mix 32- and 64-bit calls in one application. Best regards. Alexander 

-----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Narasimhan, Kannan Sent: Wednesday, September 03, 2008 7:32 PM To: MPI 2.2 Subject: Re: [Mpi-22] New proposal: Support for large message counts 

Alexander, Thanx for the suggestion. 1. Maybe I'm missing something here, but without a name change we break backward compatibility between "old" binaries coded to the existing APIs with 32-bit counts and a new MPI 2.2-compatible MPI shared library that supports 64-bit counts, absent a rebuild/relink step. Or are you proposing that implementations ship multiple libraries, one that supports 32-bit counts, one that supports 64-bit counts, etc.? With a single library, (1) and (2) address backward compatibility but maintain the status quo on support for large message counts, and (3) and (4) require a rebuild. 
However, having different names will take care of this. 2. If needed, we can explore normalization to a single API that supports MPI_Count in the MPI 3.0 time frame, since MPI 3.0 is open to changes that can break backward compatibility. 3. I cannot think of any use cases for a "signed" count, but I'll let others chime in here. 4. I agree that new applications will have no issues with your approach. Thanx! Kannan 

-----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Supalov, Alexander Sent: Wednesday, September 03, 2008 11:39 AM To: MPI 2.2 Subject: Re: [Mpi-22] New proposal: Support for large message counts 

Thanks. I think we may even do without changing the MPI routine names. Let's consider this:

1) #define MPI_Count int will yield the current MPI interface.
2) typedef int MPI_Count will yield the new 32-bit interface, still backward binary compatible with 1).
3) typedef long MPI_Count will yield the new 64-bit interface.
4) typedef long long MPI_Count will safely bring us into the XXII century (or earlier).

Old applications may be rebuilt using 1) without any changes, dynamically linked against an MPI library built using 2), and will have to be rebuilt to use an MPI library built using 3) or 4). New applications will use the library with the MPI_Count definitions and, properly rebuilt, will work in either 32-, 64-, or 128-bit mode, using MPI libraries built with bindings 2), 3), or 4), respectively. By the way, should MPI_Count really be signed? 

-----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Narasimhan, Kannan Sent: Wednesday, September 03, 2008 5:55 PM To: MPI 2.2 Subject: Re: [Mpi-22] New proposal: Support for large message counts 

We can certainly consider this option, since it abstracts along the same lines as MPI_Offset. Thanx! 
Kannan 

-----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Supalov, Alexander Sent: Wednesday, September 03, 2008 10:19 AM To: MPI 2.2 Subject: Re: [Mpi-22] New proposal: Support for large message counts 

Dear Kannan, Thanks. Why don't we define an MPI_Count datatype instead, and make that grow if necessary? Best regards. Alexander 

-----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Narasimhan, Kannan Sent: Friday, August 29, 2008 6:15 PM To: mpi-22_at_[hidden] Subject: [Mpi-22] New proposal: Support for large message counts 

I have submitted a new 2.2 proposal to address the need for large message counts (i.e., counts greater than a 32-bit integer can hold) for MPI calls that communicate messages. Please refer to https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/LargeMsgCounts for details. Comments and suggestions are welcome. Thanx! Kannan 

_______________________________________________ mpi-22 mailing list mpi-22_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. 
From alexander.supalov at [hidden] Thu Sep 4 04:59:28 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Thu, 4 Sep 2008 10:59:28 +0100 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201B1F0B3@swsmsx413.ger.corp.intel.com> 

Good point. At face value, this seems to suggest that we should have "extended" constructors available on a level with the traditional ones, thus favoring the original name-shifted proposal. If I think about the library internals in this case, however, this addition will effectively push all of the library extent calculations to 64 bits (or more, see below). Another thing is that if we multiply two long values, the result may potentially overflow a long. So, for a library that allows long counts and long datatype extents, the internals of the library will have to be long long. long long (128 bit) arithmetic may be rather expensive on some CPUs. So, the 64-bit interface should probably be optional. 

________________________________ From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Richard Treumann Sent: Wednesday, September 03, 2008 7:24 PM To: MPI 2.2 Subject: Re: [Mpi-22] New proposal: Support for large message counts 

MPI_Aint has the right number of bits to hold an address. For operations that are address-space oriented, count can be the same size as an MPI_Aint. MPI_Offset has the right number of bits to deal with the range in a file. For operations that are file oriented, arguments like count should be an MPI_Offset. A problem is that MPI_Datatype constructors are intended to apply to both, and it is common for a 32 bit address space to operate on files with 64 bit offsets. What type should "count" on the new "long" MPI_Datatype constructors be in this case? The proposal says MPI_Aint, but that is awkward for datatypes used as fileviews. 
Dick Treumann - MPI Team IBM Systems & Technology Group Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 

* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From kannan.narasimhan at [hidden] Thu Sep 4 07:52:00 2008 From: kannan.narasimhan at [hidden] (Narasimhan, Kannan) Date: Thu, 4 Sep 2008 12:52:00 +0000 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201B1F015@swsmsx413.ger.corp.intel.com> Message-ID: 

Alexander, We do have use cases of customers requiring a mix of 32-bit and 64-bit executables (as strange as it sounds). These are not typical MPI applications, though... Thanx! Kannan 
_______________________________________________ mpi-22 mailing list mpi-22_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 

From treumann at [hidden] Thu Sep 4 09:35:58 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 4 Sep 2008 10:35:58 -0400 Subject: [Mpi-22] Dynamic thread levels In-Reply-To: <48BF9C9F.7030006@mcs.anl.gov> Message-ID: 

Pavan, Can you explain what improvements you expect if MPI added this?

If an MPI implementation that is capable of supporting MPI_THREAD_MULTIPLE is running in MPI_THREAD_SINGLE mode, that probably means all protection against multiple threads is being bypassed for rather modest latency savings. The savings probably come from branching over lock operations. Ongoing checking for concurrent threads making MPI calls is expensive enough that doing it would eat up much of the savings from branching over lock operations. Basically, an MPI_THREAD_MULTIPLE-capable MPI running in MPI_THREAD_SINGLE mode is probably defenseless against multi-threaded applications.

What this means for MPI_Set_thread_level(int required, int *provided); is that the application would need to take full responsibility for making sure there is only one thread making MPI calls at the moment a mode switch is requested. If the application gets it wrong, there is probably nothing (affordable) the MPI implementation can do to detect the danger. Applications that turned thread safety on and off while misusing the call would never get an error message and might run without problems 99 times out of 100. 1% of the runs would have mysterious failures, and the failures might each look quite different.

It is certainly possible for an MPI implementation that only supports MPI_THREAD_SINGLE to do some things faster by using simpler global data structures, but such an MPI implementation could not honor a runtime request to convert to MPI_THREAD_MULTIPLE. It may even be possible for an MPI implementation to choose simple vs. thread-safe data structures at INIT time, but again, a switch once everything is up and running is probably not practical. This does not seem to me to offer enough payoff to justify the dangers. Dick 

Dick Treumann - MPI Team IBM Systems & Technology Group Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 

* -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From treumann at [hidden] Thu Sep 4 10:25:08 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 4 Sep 2008 11:25:08 -0400 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201B1F0B3@swsmsx413.ger.corp.intel.com> Message-ID: 

Are there systems today that use a 128 bit long long? Are there virtual memory systems or filesystems coming in the reasonable future that use a 128 bit address range? On IBM Power we support both 32 bit and 64 bit executables. For both 32 bit and 64 bit executables, the C compilers treat int as 32 bits and long long as 64 bits. C long differs: it is 32 bits in a 32 bit executable and 64 bits in a 64 bit executable (i.e., big enough to hold an address/pointer). The jump from 32 bits to 64 bits is a factor of 4 billion, so while it may be goodness in the abstract to think about what comes when 64 bits gives way to 128 bits, should we consider this a real problem? I do not see much reason to worry about the fact that multiplying 2 arbitrary 64 bit values can overflow a 64 bit result unless we think some real situation will call for 128 bit results. 
Dick Treumann - MPI Team IBM Systems & Technology Group Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 

Alexander wrote:
> Another thing is that if we multiply two long values, the result may potentially overflow a long. So, for a library that allows long counts and long datatype extents, the internals of the library will have to be long long. long long (128 bit) arithmetic may be rather expensive on some CPUs. So, the 64-bit interface should probably be optional.

* -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From alexander.supalov at [hidden] Thu Sep 4 12:09:35 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Thu, 4 Sep 2008 18:09:35 +0100 Subject: [Mpi-22] New proposal: Support for large message counts In-Reply-To: Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201B1F44D@swsmsx413.ger.corp.intel.com> 

Terabytes are pushed back and forth already now, maybe not yet in one piece. MPI-3 is a couple of years away, by which time who knows what may happen. Anyway, you're right, this may not be an immediate concern. At the moment, IBM Power apparently implements the LP64 model. We typically see the same thing on Intel 64 based platforms. Our compiler's long long is 64 bits in that case, too. Same on Itanium. The latter, however, already has several multiply/add instructions with a full 128-bit intermediate product from 64-bit integer operands, and the corresponding compiler intrinsics. 

* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balaji at [hidden] Thu Sep 4 12:24:14 2008 From: balaji at [hidden] (Pavan Balaji) Date: Thu, 04 Sep 2008 12:24:14 -0500 Subject: [Mpi-22] Dynamic thread levels In-Reply-To: Message-ID: <48C019BE.3050507@mcs.anl.gov> Dick, I think there are two independent questions here -- one is about the potential performance improvement, and the other is about the ease of use (or rather, complexity of use) for application writers. To some degree, I agree with both concerns, but here are some notes on each. With respect to the first, though the latency improvement is probably only marginal (especially on fast processors), the degradation in the message rate is usually quite high. I've implemented this in MPICH2 and placed a copy here: https://svn.mcs.anl.gov/repos/mpi/mpich2/branches/dev/dyn_thread_level. Based on this implementation, I did some measurements using a single-threaded application, comparing MPI_Init_thread() with MPI_Init(). Experiment 1: Each process sends to MPI_PROC_NULL (emulating an infinitely fast network) MPI_Init -- 29.94 million messages per second MPI_Init_thread -- 15.86 million messages per second Difference -- almost 2X Experiment 2: Two processes communicating over TCP MPI_Init -- 3.947 million messages per second MPI_Init_thread -- 3.231 million messages per second Difference -- about 20% For fast networks, the difference might be somewhere in between. With respect to the second question, I agree that we are placing a lot of faith in the application developers here, but this is probably not too different from trusting the application that only one thread will call a Barrier or a Bcast, for example. If multiple threads call a barrier in an application where only one thread is supposed to, bad things can happen, but we trust that the user will not do that. Basically, there is a potential for bugs here, but I don't think it's any more than what is already present. 
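[Editorial note] The trade-off discussed here -- the application, not the library, serializing its own MPI calls around a mode switch -- can be sketched concretely. This is illustrative pseudocode only: MPI_Set_thread_level() is merely the call proposed later in this thread (it exists in no MPI standard or release), and the usage pattern is an editorial assumption about how such a call would be used.

```c
/* Illustrative pseudocode -- MPI_Set_thread_level() is only a proposal. */
int provided;
MPI_Init_thread(&argc, &argv, MPI_THREAD_SINGLE, &provided);

/* Long single-threaded setup phase: the library can skip all locking. */

/* Before spawning threads, ask for full thread support. */
MPI_Set_thread_level(MPI_THREAD_MULTIPLE, &provided);
if (provided == MPI_THREAD_MULTIPLE) {
    /* e.g., an OpenMP region in which several threads make MPI calls */
}

/* Switching back: exactly one thread may be inside MPI at this point,
   and the application -- not the library -- must guarantee it. This is
   the correctness burden Dick raises below. */
MPI_Set_thread_level(MPI_THREAD_SINGLE, &provided);
```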
If there are any concerns, or if you think we can do this in a better way, please do let me know. I'm open to editing this (even substantially if needed). Thanks. -- Pavan On 09/04/2008 09:35 AM, Richard Treumann wrote: > Pavan, > > Can you explain what improvements you expect if MPI added this? > > If an MPI implementation that is capable of supporting > MPI_THREAD_MULTIPLE is running in MPI_THREAD_SINGLE mode, that probably > means all protection against multiple threads is being bypassed at > rather modest latency savings. The savings probably come from branching > over lock operations. Ongoing checking for concurrent threads making MPI > calls is expensive enough that doing it would eat up much of the savings > from branching over lock operations. Basically, an MPI_THREAD_MULTIPLE- > capable MPI running in MPI_THREAD_SINGLE mode is probably defenseless > against multi-threaded applications. > > What this means for MPI_Set_thread_level(int required, int * provided); > > is that the application would need to take full responsibility for > making sure there is only one thread making MPI calls at the moment a > mode switch is requested. If the application gets it wrong there is > probably nothing (affordable) the MPI implementation can do to detect > the danger. Applications that turned thread safety on and off but were > misusing the call would never get an error message and might run without > problems 99 times out of 100. The other 1% of runs would have mysterious > failures, and the failures might each look quite different. > > It is certainly possible for an MPI implementation that only supports > MPI_THREAD_SINGLE to do some things faster by using simpler global data > structures, but such an MPI implementation could not honor a runtime > request to convert to MPI_THREAD_MULTIPLE. It may even be possible for > an MPI implementation at INIT time to choose simple vs. 
thread-safe data > structures, but again, a switch once everything is up and running is > probably not practical. > > This does not seem to me to offer enough payoff to justify the dangers. > > Dick > > Dick Treumann - MPI Team > IBM Systems & Technology Group > Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-22-bounces_at_[hidden] wrote on 09/04/2008 04:30:23 AM: > > > [image removed] > > > > [Mpi-22] Dynamic thread levels > > > > Pavan Balaji > > > > to: > > > > mpi-22 > > > > 09/04/2008 04:39 AM > > > > Sent by: > > > > mpi-22-bounces_at_[hidden] > > > > Please respond to "MPI 2.2" > > > > Hi all, > > > > I would like to propose the addition of a function call to dynamically > > modify the thread level required by the application, instead of fixing it at > > MPI_Init_thread() time. I'm sending this over email for comments > > (attached with this email); I'll upload it on the wiki this evening. > > > > -- Pavan > > > > -- > > Pavan Balaji > > http://www.mcs.anl.gov/~balaji > > Dynamic Thread Levels > > > > Author: Pavan Balaji, Argonne National Laboratory > > > > Background: > > > > The MPI 2.1 standard allows users to set the thread level only at > > MPI_Init_thread() time. This means that for applications that > > dynamically create threads (e.g., hybrid MPI + OpenMP applications), > > even non-threaded portions of the application have to rely on the > > maximum thread level used in the application (e.g., MULTIPLE). > > > > > > Proposal: > > > > Add an additional function call to dynamically set the thread level during > > the application's execution. > > > > int MPI_Set_thread_level(int required, int * provided); > > > > > > Rationale: > > > > The requirement to specify the thread level at MPI_Init_thread() time is > > too restrictive for applications that perform a small amount of > > communication that requires a high level of thread support. 
For > > correctness, the standard requires all of the code to follow the same > > thread-level, and provides the applications with no way to give the > > MPI library more information about their behavior. > > > > > > Impact on MPI implementations: > > > > Most MPI implementations already provide runtime support for > > thread-level, i.e., locks are compiled in, but whether they are > > invoked or not is decided at runtime. For implementations that choose > > not to respect this option, MPI_Set_thread_level() can just set the > > provided level to the current level, by ignoring the required level > > specified by the user. > > > > > > Impact on MPI applications: > > > > Existing MPI applications do not need to be modified at all. But newer > > applications can benefit from this additional functionality. > > _______________________________________________ > > mpi-22 mailing list > > mpi-22_at_[hidden] > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > > > ------------------------------------------------------------------------ > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 -- Pavan Balaji http://www.mcs.anl.gov/~balaji From balaji at [hidden] Thu Sep 4 12:28:19 2008 From: balaji at [hidden] (Pavan Balaji) Date: Thu, 04 Sep 2008 12:28:19 -0500 Subject: [Mpi-22] Dynamic thread levels In-Reply-To: <48C019BE.3050507@mcs.anl.gov> Message-ID: <48C01AB3.7020805@mcs.anl.gov> > I've implemented this in MPICH2 and placed a copy here: > https://svn.mcs.anl.gov/repos/mpi/mpich2/branches/dev/dyn_thread_level. > Based on this implementation, I did some measurements using a single > threaded application, but using MPI_Init_thread() vs. MPI_Init(). 
> > Experiment 1: Each process sends to MPI_PROC_NULL (emulating an > infinitely fast network) > > MPI_Init -- 29.94 million messages per second > MPI_Init_thread -- 15.86 million messages per second > Difference -- almost 2X > > Experiment 2: Two processes communicating over TCP > > MPI_Init -- 3.947 million messages per second > MPI_Init_thread -- 3.231 million messages per second > Difference -- about 20% Minor clarification -- with the implementation pointed out above (which adds a new MPIX_Set_thread_level() call), MPI_Init_thread() followed by MPIX_Set_thread_level() down to a serial level has the same performance as a regular single-threaded MPI_Init(). -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From jsquyres at [hidden] Thu Sep 4 17:14:50 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Thu, 4 Sep 2008 23:14:50 +0100 Subject: [Mpi-22] MPI::Grequest::Start proposal In-Reply-To: <6B68D01C00C9994A8E150183E62A119E790C731A7C@NA-EXMSG-C105.redmond.corp.microsoft.com> Message-ID: <128E5A36-53E3-4811-ABE8-0BC2123E5B0D@cisco.com> FWIW, I checked this out with Doug Gregor (my C++ go-to guy) and he basically said "yep, that's right." On Sep 4, 2008, at 9:02 AM, Erez Haba wrote: > I think that this is okay; when we discussed it, I thought it was a > different fix. The proposal is just fine. > > -----Original Message----- > From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden] > ] On Behalf Of Jeff Squyres > Sent: Thursday, September 04, 2008 7:26 AM > To: MPI 2.2 > Subject: [Mpi-22] MPI::Grequest::Start proposal > > Regarding https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/GrequestStartFnPtrArgs > : > > I have confirmed in Open MPI that if you change this: > > static Grequest Start(Query_function, Free_function, > Cancel_function, void *); > > to > > static Grequest Start(Query_function *, Free_function *, > Cancel_function *, void *); > > then recompile and reinstall, both the implementation and the same C++ user > code compile without warnings or errors. 
> > Erez raised the point that this would change the type of any > implementation- or user-declared variables that hold the function > pointer. It does not; such a variable's type must always be > (Free_function*); it's only the type that is passed through a function > argument that can be either (Free_function) or (Free_function*). > Someone smarter than me in C++ can explain why. :-) > > Specifically, the following compiles and runs without warning/error: > > ----- > #include <cstdio> > typedef int Free_function(void *); > > int my_function(void*) { > printf("In my_function\n"); > return 0; > } > > void foo1(Free_function ptr) { > Free_function *save = ptr; > save(0); > } > void foo2(Free_function* ptr) { > Free_function *save = ptr; > save(0); > } > int main (int argc, char*argv[]) { > foo1(my_function); > foo2(my_function); > return 0; > } > ----- > > I tried 4 different C++ compilers (gnu, intel, pgi, pathscale): > > ----- > [23:19] svbu-mpi:~/tmp % g++ foo.cc -o foo && ./foo > In my_function > In my_function > [23:21] svbu-mpi:~/tmp % icpc foo.cc -o foo && ./foo > In my_function > In my_function > [23:21] svbu-mpi:~/tmp % pgCC foo.cc -o foo && ./foo > In my_function > In my_function > [23:22] svbu-mpi:~/tmp % pathCC foo.cc -o foo && ./foo > In my_function > In my_function > [23:22] svbu-mpi:~/tmp % > ------ > > So I think the proposal stands as it is written. 
> > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Fri Sep 5 13:57:58 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Fri, 5 Sep 2008 19:57:58 +0100 Subject: [Mpi-22] Slides from Dublin Message-ID: As usual, if you presented slides in Dublin, please e-mail them to me and I will put them on the web site. Thanks. -- Jeff Squyres Cisco Systems From balaji at [hidden] Fri Sep 5 18:09:38 2008 From: balaji at [hidden] (Pavan Balaji) Date: Fri, 05 Sep 2008 18:09:38 -0500 Subject: [Mpi-22] Dynamic thread levels In-Reply-To: Message-ID: <48C1BC32.2010704@mcs.anl.gov> Hi all, Based on feedback from a number of folks (thanks!), I have updated the wiki with some more information, clarifications and examples. Please take a look at it. Thanks. -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From wgropp at [hidden] Sun Sep 7 01:30:42 2008 From: wgropp at [hidden] (William Gropp) Date: Sun, 7 Sep 2008 01:30:42 -0500 Subject: [Mpi-22] MPI 2.2 Summary Message-ID: <62639632-832C-46E2-B6CA-1B76E7B06AC7@illinois.edu> I'd like to thank everyone for a very productive MPI 2.2 session and especially Jeff Squyres for keeping records and updating the wiki with the results in real time - that was an enormous help. I encourage everyone to read the proposals very carefully and add comments directly to the wiki entries. We need more input from users of MPI as well; please involve any you know in these discussions. Thanks! 
Bill William Gropp Paul and Cynthia Saylor Professor of Computer Science University of Illinois Urbana-Champaign From balaji at [hidden] Sun Sep 7 04:47:32 2008 From: balaji at [hidden] (Pavan Balaji) Date: Sun, 07 Sep 2008 04:47:32 -0500 Subject: [Mpi-22] New proposal: Concurrent MPI_Init() and MPI_Init_thread() Message-ID: <48C3A334.9040608@mcs.anl.gov> Hi all, I've added a new proposal on the 2.2 wiki. Here's the gist of it: The MPI 2.1 standard does not explicitly mention whether it is permitted for one process in an application to initialize with a different thread-level than another. This needs to be clarified. Link to the proposal: https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/concurrent_init_initthread -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From wgropp at [hidden] Sun Sep 7 06:41:17 2008 From: wgropp at [hidden] (William Gropp) Date: Sun, 7 Sep 2008 06:41:17 -0500 Subject: [Mpi-22] Slides from Dublin In-Reply-To: Message-ID: <8C58B3D2-1C79-4FB8-9DDB-0084D39D9872@illinois.edu> Here are my slides Bill On Sep 5, 2008, at 1:57 PM, Jeff Squyres wrote: > As usual, if you presented slides in Dublin, please e-mail them to me > and I will put them on the web site. > > Thanks. > > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 William Gropp Paul and Cynthia Saylor Professor of Computer Science University of Illinois Urbana-Champaign * -------------- next part -------------- A non-text attachment was scrubbed... 
Name: MPI-2-2-Sept-03-08.ppt Type: application/vnd.ms-powerpoint Size: 76800 bytes Desc: MPI-2-2-Sept-03-08.ppt URL: From alexander.supalov at [hidden] Mon Sep 8 03:45:20 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Mon, 8 Sep 2008 09:45:20 +0100 Subject: [Mpi-22] Slides from Dublin In-Reply-To: Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201B1FC13@swsmsx413.ger.corp.intel.com> Hi, Thanks. Here are the ABI final set and the collectives slides I showed. Best regards. Alexander -----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Friday, September 05, 2008 8:58 PM To: Main MPI Forum mailing list; MPI 2.2 Subject: [Mpi-22] Slides from Dublin As usual, if you presented slides in Dublin, please e-mail them to me and I will put them on the web site. Thanks. -- Jeff Squyres Cisco Systems _______________________________________________ mpi-22 mailing list mpi-22_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 * -------------- next part -------------- A non-text attachment was scrubbed... Name: MPI_Forum_ABI_WG_session.ppt Type: application/vnd.ms-powerpoint Size: 82944 bytes Desc: MPI Forum ABI WG session.ppt URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: General_Coll_Forum.pdf Type: application/octet-stream Size: 64918 bytes Desc: General_Coll_Forum.pdf URL: From jsquyres at [hidden] Mon Sep 8 15:08:34 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 8 Sep 2008 16:08:34 -0400 Subject: [Mpi-22] Moving proposals to wiki "tickets" Message-ID: <5CC3C203-4B01-46C4-A8D2-B09C0D97946E@cisco.com> Short version: ============== All MPI-2.2 proposal authors are responsible for moving their current wiki proposals to tickets. See this URL for instructions: https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TicketWorkflow More details: ============= We have decided to move all the MPI-2.2 proposals to "tickets" (a la a tracking system, rather than continually editable web pages). This will allow us finer-grained searching capabilities, the ability to run reports (e.g., list tickets in a certain status), automatic e-mail updates when tickets are updated, fixed numbering, and so on. I have moved all of my proposals to tickets; the process is pretty straightforward. Most importantly, the wiki markup is exactly the same. Hence, it's mostly a copy-n-paste operation. All proposal authors are responsible for moving their proposals to tickets. Preserve any comments at the bottom of your current proposals as responses on the ticket. See this URL for information on how to create / edit tickets: https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TicketWorkflow -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Mon Sep 8 15:11:06 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 8 Sep 2008 16:11:06 -0400 Subject: [Mpi-22] Tracking MPI-2.2 progress Message-ID: <3FCFE478-E0C5-4B0E-8AFF-0C0AC98A21AE@cisco.com> Note that there are two ways to track what is happening in the MPI-2.2 effort: 1. 
The Trac we are using has an RSS feed of all activity: https://svn.mpi-forum.org/trac/mpi-forum-web/timeline?ticket=on&milestone=on&wiki=on&max=50&daysback=90&format=rss (you can tweak that URL a bit if you only want to see specific kinds of events, etc.) 2. You can add your e-mail address to the CC field of any ticket. Whenever changes are made to a ticket, the system will automatically send e-mail to the submitter, owner, and any e-mail address that is explicitly listed in the CC field. -- Jeff Squyres Cisco Systems