From wgropp at [hidden] Wed Oct 8 07:30:45 2008 From: wgropp at [hidden] (William Gropp) Date: Wed, 8 Oct 2008 07:30:45 -0500 Subject: [Mpi-22] Reminder to move items into tickets Message-ID: This is a reminder to move MPI 2.2 items into the ticket system. See https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TicketWorkflow or the mail that Jeff sent on September 8th for more details. Only items that have tickets will be discussed at the next MPI Forum meeting. Bill William Gropp Deputy Director for Research Institute for Advanced Computing Applications and Technologies Paul and Cynthia Saylor Professor of Computer Science University of Illinois Urbana-Champaign * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsquyres at [hidden] Mon Oct 13 09:24:06 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 13 Oct 2008 10:24:06 -0400 Subject: [Mpi-22] Higher-level languages proposal Message-ID: Doug Gregor did 98% of the work on this proposal, but it will likely be Torsten or I presenting this proposal next week in Chicago (5-7pm Monday). The short version of the attached text is that we propose the following: - add some extensions to MPI to allow higher-level languages to build their own MPI bindings - if all that works out nicely, deprecate and eventually remove the official MPI C++ bindings Don't let the proposal name fool you -- the features that we're proposing are inspired by letting higher-level languages build their own bindings, but some of the issues (e.g., new MPI_BLOB datatype) are fairly wide-reaching. We haven't yet put a "2.2" or "3.0" label on this proposal (which is why I have not entered it in the MPI-2.2 ticket system); I can see valid arguments for both sides. More feedback is required from the Forum first. -- Jeff Squyres Cisco Systems * -------------- next part -------------- A non-text attachment was scrubbed... Name: high_level_languages.pdf Type: application/pdf Size: 149331 bytes Desc: high_level_languages.pdf URL: From alexander.supalov at [hidden] Mon Oct 13 09:46:26 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Mon, 13 Oct 2008 15:46:26 +0100 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D06BCE@swsmsx413.ger.corp.intel.com> Hi, Thanks. The 2.1.1, which was presented last time, in my opinion does not seem to solve the right problem. Instead of defining a way for unambiguous addressing of the threads in MPI, which would eliminate the MPI_Probe/Recv ambiguity and many other issues, it attempts to add yet another concept (this time, a message id) in the current situation where any thread can do what they please. Best regards. Alexander -----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Monday, October 13, 2008 4:24 PM To: MPI 2.2 Subject: [Mpi-22] Higher-level languages proposal Doug Gregor did 98% of the work on this proposal, but it will likely be Torsten or I presenting this proposal next week in Chicago (5-7pm Monday). 
The short version of the attached text is that we propose the following: - add some extensions to MPI to allow higher-level languages to build their own MPI bindings - if all that works out nicely, deprecate and eventually remove the official MPI C++ bindings Don't let the proposal name fool you -- the features that we're proposing are inspired by letting higher-level languages build their own bindings, but some of the issues (e.g., new MPI_BLOB datatype) are fairly wide-reaching. We haven't yet put a "2.2" or "3.0" label on this proposal (which is why I have not entered it in the MPI-2.2 ticket system); I can see valid arguments for both sides. More feedback is required from the Forum first. -- Jeff Squyres Cisco Systems --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From jsquyres at [hidden] Mon Oct 13 16:47:35 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 13 Oct 2008 17:47:35 -0400 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D06BCE@swsmsx413.ger.corp.intel.com> Message-ID: <7261392A-805A-4827-B422-2A62602330F9@cisco.com> On Oct 13, 2008, at 10:46 AM, Supalov, Alexander wrote: > Thanks. The 2.1.1, which was presented last time, in my opinion does > not > seem to solve the right problem. Instead of defining a way for > unambiguous addressing of the threads in MPI, which would eliminate > the > MPI_Probe/Recv ambiguity and many other issues, it attempts to add yet > another concept (this time, a message id) in the current situation > where > any thread can do what they please. I'm not quite sure I understand your proposal. <... after typing out a lengthy/rambling discourse that made very little sense and was fraught with questions and ambiguities :-) ...> Let's discuss this in Chicago; Rich has allocated 5-7pm on Monday for discussion of this proposal. These are exactly the kinds of larger issues that we want to raise via this proposal. -- Jeff Squyres Cisco Systems From htor at [hidden] Mon Oct 13 17:50:31 2008 From: htor at [hidden] (Torsten Hoefler) Date: Mon, 13 Oct 2008 18:50:31 -0400 Subject: [Mpi-22] Graph Interface Fixes Message-ID: <20081013225031.GY17968@benten.cs.indiana.edu> Hello Group, I finished a first implementation which maps the newly proposed graph interface to the old interface. This implementation is probably suboptimal and of course not scalable (can't be because the old interface isn't ;)). But it should be functional (compiles and works with some toy examples). I also corrected a slight mistake in ticket #33. As usual, I am open to comments. I CC'd the mpi-2.2 mailing list because I think that might be of interest for the discussions next week. Best, Torsten -- bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ ----- Torsten Hoefler | Research Assistant Open Systems Lab | Indiana University 150 S. Woodlawn Ave. | Bloomington, IN, 47405, USA Lindley Hall Room 135 | +01 (812) 855-3608 * -------------- next part -------------- A non-text attachment was scrubbed... Name: virtual_graph.c Type: text/x-csrc Size: 4954 bytes Desc: virtual_graph.c URL: 
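For context, a minimal sketch of the technique such a mapping has to use, assuming only what the message above states (this is not the attached virtual_graph.c, and the helper name build_old_graph is invented for illustration): with the old MPI-1 interface, MPI_Graph_create needs the full global graph on every process, so a distributed edge specification, where each process knows only its own neighbors, must first be allgathered:

    #include <mpi.h>
    #include <stdlib.h>

    /* Illustrative helper: build an old-style graph communicator from a
       distributed edge list. Each process passes only its own degree and
       neighbor ranks; the global index/edges arrays that MPI_Graph_create
       requires are assembled on every process -- hence not scalable. */
    static MPI_Comm build_old_graph(MPI_Comm comm, int my_degree, int *my_edges)
    {
        int size, i, total = 0;
        MPI_Comm graph_comm;

        MPI_Comm_size(comm, &size);
        int *degrees = malloc(size * sizeof(int));
        int *displs  = malloc(size * sizeof(int));
        int *index   = malloc(size * sizeof(int));

        /* Every process learns every other process's degree. */
        MPI_Allgather(&my_degree, 1, MPI_INT, degrees, 1, MPI_INT, comm);

        for (i = 0; i < size; i++) {
            displs[i] = total;
            total += degrees[i];
            index[i] = total;   /* the old interface wants cumulative degrees */
        }

        /* Every process learns all neighbor lists: O(edges) memory each,
           which is exactly the scalability problem being discussed. */
        int *edges = malloc(total * sizeof(int));
        MPI_Allgatherv(my_edges, my_degree, MPI_INT,
                       edges, degrees, displs, MPI_INT, comm);

        MPI_Graph_create(comm, size, index, edges, 0, &graph_comm);

        free(degrees); free(displs); free(index); free(edges);
        return graph_comm;
    }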
From alexander.supalov at [hidden] Tue Oct 14 01:09:05 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Tue, 14 Oct 2008 07:09:05 +0100 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <7261392A-805A-4827-B422-2A62602330F9@cisco.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D06D52@swsmsx413.ger.corp.intel.com> Dear Jeff, Unfortunately, I won't be in Chicago, so we should rather discuss this here. I talked to Torsten last time about this extension. As far as I can remember, the main purpose of this extension is to make sure that the thread that called the MPI_Probe also calls the MPI_Recv and gets the message matched by the aforementioned MPI_Probe. If so, the main problem here is not the matching. The main problem is that one cannot address threads in MPI. If we fix that, the proposed extension with the message handle and such will become superfluous. See what I mean? Best regards. Alexander -----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Monday, October 13, 2008 11:48 PM To: MPI 2.2 Subject: Re: [Mpi-22] Higher-level languages proposal On Oct 13, 2008, at 10:46 AM, Supalov, Alexander wrote: > Thanks. The 2.1.1, which was presented last time, in my opinion does > not > seem to solve the right problem. Instead of defining a way for > unambiguous addressing of the threads in MPI, which would eliminate > the > MPI_Probe/Recv ambiguity and many other issues, it attempts to add yet > another concept (this time, a message id) in the current situation > where > any thread can do what they please. I'm not quite sure I understand your proposal. <... after typing out a lengthy/rambling discourse that made very little sense and was fraught with questions and ambiguities :-) ...> Let's discuss this in Chicago; Rich has allocated 5-7pm on Monday for discussion of this proposal. These are exactly the kinds of larger issues that we want to raise via this proposal. -- Jeff Squyres Cisco Systems _______________________________________________ mpi-22 mailing list mpi-22_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From Terry.Dontje at [hidden] Tue Oct 14 05:45:15 2008 From: Terry.Dontje at [hidden] (Terry Dontje) Date: Tue, 14 Oct 2008 06:45:15 -0400 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D06D52@swsmsx413.ger.corp.intel.com> Message-ID: <48F4783B.4090003@sun.com> Supalov, Alexander wrote: > Dear Jeff, > > Unfortunately, I won't be in Chicago, so we should rather discuss this > here. I talked to Torsten last time about this extension. 
As far as I > can remember, the main purpose of this extension is to make sure that > the thread that called the MPI_Probe also calls the MPI_Recv and gets > the message matched by the aforementioned MPI_Probe. > > If so, the main problem here is not the matching. The main problem is > that one cannot address threads in MPI. If we fix that, the proposed > extension with the message handle and such will become superfluous. > > See what I mean? > > Interesting, so you are basically redefining the MPI_Probe/Recv pair to guarantee a message to go to a specific thread. Or in other words lowering the proposal's MPI_Mprobe/recv to be in the implementation of MPI_Probe/Recv. This seems reasonable to me since MPI_Probe/Recv itself is basically useless unless the programmer assures serialization when that combination is used. --td > Best regards. > > Alexander > > -----Original Message----- > From: mpi-22-bounces_at_[hidden] > [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres > Sent: Monday, October 13, 2008 11:48 PM > To: MPI 2.2 > Subject: Re: [Mpi-22] Higher-level languages proposal > > On Oct 13, 2008, at 10:46 AM, Supalov, Alexander wrote: > > >> Thanks. The 2.1.1, which was presented last time, in my opinion does >> not >> seem to solve the right problem. Instead of defining a way for >> unambiguous addressing of the threads in MPI, which would eliminate >> the >> MPI_Probe/Recv ambiguity and many other issues, it attempts to add yet >> another concept (this time, a message id) in the current situation >> where >> any thread can do what they please. >> > > I'm not quite sure I understand your proposal. > > <... after typing out a lengthy/rambling discourse that made very > little sense and was fraught with questions and ambiguities :-) ...> > > Let's discuss this in Chicago; Rich has allocated 5-7pm on Monday for > discussion of this proposal. These are exactly the kinds of larger > issues that we want to raise via this proposal. > > From alexander.supalov at [hidden] Tue Oct 14 06:07:35 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Tue, 14 Oct 2008 12:07:35 +0100 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <48F4783B.4090003@sun.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D06FF6@swsmsx413.ger.corp.intel.com> Hi, Thanks. I'd rather say that if the purpose of the extension is indeed to serialize the Probe/Recv pair, the better way to solve this and many other problems would be to make threads directly addressable, as if they were MPI processes. One way to do this might be, say, to create a call like MPI_Comm_thread_enroll that creates an intra-communicator out of all threads that call this function in a loosely synchronous fashion, collectively over one or several MPI processes they constitute. If paired with the appropriately extended MPI_Comm_free, this would allow, for example, all threads in an OpenMP parallel section to be addressed as if they were fully fledged MPI processes. Note that this would allow more than one parallel section during the program run. Other threading models would profit from this "opt-in/opt-out" method, too. This may be a more flexible way of dealing with threads than the one-time MPI_Init variety mentioned by George Bosilca in his EuroPVM/MPI keynote, by the way. Best regards. 
Alexander -----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Terry Dontje Sent: Tuesday, October 14, 2008 12:45 PM To: MPI 2.2 Subject: Re: [Mpi-22] Higher-level languages proposal Supalov, Alexander wrote: > Dear Jeff, > > Unfortunately, I won't be in Chicago, so we should rather discuss this > here. I talked to Torsten last time about this extension. As far as I > can remember, the main purpose of this extension is to make sure that > the thread that called the MPI_Probe also calls the MPI_Recv and gets > the message matched by the aforementioned MPI_Probe. > > If so, the main problem here is not the matching. The main problem is > that one cannot address threads in MPI. If we fix that, the proposed > extension with the message handle and such will become superfluous. > > See what I mean? > > Interesting, so you are basically redefining the MPI_Probe/Recv pair to guarrantee a message to go to a specific thread. Or in other words lowering the proposal's MPI_Mprobe/recv to be in the implementation of MPI_Probe/Recv. This seems reasonable to me since MPI_Probe/Recv itself is basically useless unless the programmer assures serialization when that combination is used. --td > Best regards. > > Alexander > > -----Original Message----- > From: mpi-22-bounces_at_[hidden] > [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres > Sent: Monday, October 13, 2008 11:48 PM > To: MPI 2.2 > Subject: Re: [Mpi-22] Higher-level languages proposal > > On Oct 13, 2008, at 10:46 AM, Supalov, Alexander wrote: > > >> Thanks. The 2.1.1, which was presented last time, in my opinion does >> not >> seem to solve the right problem. Instead of defining a way for >> unambiguous addressing of the threads in MPI, which would eliminate >> the >> MPI_Probe/Recv ambiguity and many other issues, it attempts to add yet >> another concept (this time, a message id) in the current situation >> where >> any thread can do what they please. >> > > I'm not quite sure I understand your proposal. > > <... after typing out a lengthy/rambling discourse that made very > little sense and was fraught with questions and ambiguities :-) ...> > > Let's discuss this in Chicago; Rich has allocated 5-7pm on Monday for > discussion of this proposal. These are exactly the kinds of larger > issues that we want to raise via this proposal. > > _______________________________________________ mpi-22 mailing list mpi-22_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From jsquyres at [hidden] Tue Oct 14 06:49:51 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Tue, 14 Oct 2008 07:49:51 -0400 Subject: [Mpi-22] Friendly reminder: tickets!! Message-ID: <568100CA-5063-476F-ACDF-A56BD136BA05@cisco.com> As Bill indicated previously, your proposal won't be discussed next week unless your proposal has been turned into a ticket. 
Please convert them! Instructions: https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TicketWorkflow The authors listed below have proposals that have not yet been converted into tickets (pulled from the MpiTwoTwoWikiPage page): - Erez Haba - Dick Treumann - Adam Moody - Rolf Rabenseifner - Hubert Ritzdorf - Rajeev Thakur - Douglas Gregor - Quincey Koziol - Rainer Keller - Dries Kimpe - David Solt - Jesper Larsson Träff - Martin Schulz - Pavan Balaji - William Gropp -- Jeff Squyres Cisco Systems From traff at [hidden] Tue Oct 14 06:56:24 2008 From: traff at [hidden] (Jesper Larsson Traeff) Date: Tue, 14 Oct 2008 13:56:24 +0200 Subject: [Mpi-22] Friendly reminder: tickets!! In-Reply-To: <568100CA-5063-476F-ACDF-A56BD136BA05@cisco.com> Message-ID: <20081014115624.GA2697@fourier.it.neclab.eu> Hi Jeff, sorry, thought I'd done them... Now, I'm done (I hope) best, Jesper On Tue, Oct 14, 2008 at 07:49:51AM -0400, Jeff Squyres wrote: > As Bill indicated previously, your proposal won't be discussed next > week unless your proposal has been turned into a ticket. Please > convert them! Instructions: > > https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TicketWorkflow > > The authors listed below have proposals that have not yet been > converted into tickets (pulled from the MpiTwoTwoWikiPage page): > > - Erez Haba > - Dick Treumann > - Adam Moody > - Rolf Rabenseifner > - Hubert Ritzdorf > - Rajeev Thakur > - Douglas Gregor > - Quincey Koziol > - Rainer Keller > - Dries Kimpe > - David Solt > - Jesper Larsson Träff > - Martin Schulz > - Pavan Balaji > - William Gropp > > -- > Jeff Squyres > Cisco Systems > > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 * -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 4642 bytes Desc: smime.p7s URL: From jsquyres at [hidden] Tue Oct 14 07:03:30 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Tue, 14 Oct 2008 08:03:30 -0400 Subject: [Mpi-22] Friendly reminder: tickets!! In-Reply-To: <20081014115624.GA2697@fourier.it.neclab.eu> Message-ID: <93D94177-94AD-4BB1-82D1-C40F2CC74874@cisco.com> Many thanks! On Oct 14, 2008, at 7:56 AM, Jesper Larsson Traeff wrote: > > Hi Jeff, > > sorry, thought I'd done them... Now, I'm done (I hope) > > best, > > Jesper > > On Tue, Oct 14, 2008 at 07:49:51AM -0400, Jeff Squyres wrote: >> As Bill indicated previously, your proposal won't be discussed next >> week unless your proposal has been turned into a ticket. Please >> convert them! 
Instructions: >> >> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TicketWorkflow >> >> The authors listed below have proposals that have not yet been >> converted into tickets (pulled from the MpiTwoTwoWikiPage page): >> >> - Erez Haba >> - Dick Treumann >> - Adam Moody >> - Rolf Rabenseifner >> - Hubert Ritzdorf >> - Rajeev Thakur >> - Douglas Gregor >> - Quincey Koziol >> - Rainer Keller >> - Dries Kimpe >> - David Solt >> - Jesper Larsson Träff >> - Martin Schulz >> - Pavan Balaji >> - William Gropp >> >> -- >> Jeff Squyres >> Cisco Systems >> >> >> _______________________________________________ >> mpi-22 mailing list >> mpi-22_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Tue Oct 14 07:35:23 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Tue, 14 Oct 2008 08:35:23 -0400 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D06FF6@swsmsx413.ger.corp.intel.com> Message-ID: <1A356737-CAFB-48D1-B51B-7365E1354B3F@cisco.com> On Oct 14, 2008, at 7:07 AM, Supalov, Alexander wrote: > Thanks. I'd rather say that if the purpose of the extension is > indeed to > serialize the Probe/Recv pair, the better way to solve this and many > other problems would be to make threads directly addressable, as if > they > were MPI processes. > > One way to do this might be, say, to create a call like > MPI_Comm_thread_enroll that creates an intra-communicator out of all > threads that call this function in a loosely synchronous fashion, > collectively over one or several MPI processes they constitute. I'm still not sure I follow. Can you provide more details, perhaps with function prototypes and specific rules? (i.e., an alternate proposal)? > If paired with the appropriately extended MPI_Comm_free, this would > allow, for example, all threads in an OpenMP parallel section to be > addressed as if they were fully fledged MPI processes. Note that this > would allow more than one parallel section during the program run. > > Other threading models would profit from this "opt-in/opt-out" method, > too. This may be a more flexible way of dealing with threads than the > one-time MPI_Init variety mentioned by George Bosilica in his > EuroPVM/MPI keynote, by the way. > > Best regards. > > Alexander > > -----Original Message----- > From: mpi-22-bounces_at_[hidden] > [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Terry Dontje > Sent: Tuesday, October 14, 2008 12:45 PM > To: MPI 2.2 > Subject: Re: [Mpi-22] Higher-level languages proposal > > Supalov, Alexander wrote: >> Dear Jeff, >> >> Unfortunately, I won't be in Chicago, so we should rather discuss >> this >> here. I talked to Torsten last time about this extension. As far as I >> can remember, the main purpose of this extension is to make sure that >> the thread that called the MPI_Probe also calls the MPI_Recv and gets >> the message matched by the aforementioned MPI_Probe. >> >> If so, the main problem here is not the matching. The main problem is >> that one cannot address threads in MPI. If we fix that, the proposed >> extension with the message handle and such will become superfluous. >> >> See what I mean? >> >> > Interesting, so you are basically redefining the MPI_Probe/Recv pair > to > guarrantee a message to go to a specific thread. 
Or in other words > lowering the proposal's MPI_Mprobe/recv to be in the implementation of > MPI_Probe/Recv. This seems reasonable to me since MPI_Probe/Recv > itself > > is basically useless unless the programmer assures serialization when > that combination is used. > > --td >> Best regards. >> >> Alexander >> >> -----Original Message----- >> From: mpi-22-bounces_at_[hidden] >> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres >> Sent: Monday, October 13, 2008 11:48 PM >> To: MPI 2.2 >> Subject: Re: [Mpi-22] Higher-level languages proposal >> >> On Oct 13, 2008, at 10:46 AM, Supalov, Alexander wrote: >> >> >>> Thanks. The 2.1.1, which was presented last time, in my opinion does > >>> not >>> seem to solve the right problem. Instead of defining a way for >>> unambiguous addressing of the threads in MPI, which would eliminate >>> the >>> MPI_Probe/Recv ambiguity and many other issues, it attempts to add > yet >>> another concept (this time, a message id) in the current situation >>> where >>> any thread can do what they please. >>> >> >> I'm not quite sure I understand your proposal. >> >> <... after typing out a lengthy/rambling discourse that made very >> little sense and was fraught with questions and ambiguities :-) ...> >> >> Let's discuss this in Chicago; Rich has allocated 5-7pm on Monday for > >> discussion of this proposal. These are exactly the kinds of larger >> issues that we want to raise via this proposal. >> >> > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > --------------------------------------------------------------------- > Intel GmbH > Dornacher Strasse 1 > 85622 Feldkirchen/Muenchen Germany > Sitz der Gesellschaft: Feldkirchen bei Muenchen > Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer > Registergericht: Muenchen HRB 47456 Ust.-IdNr. > VAT Registration No.: DE129385895 > Citibank Frankfurt (BLZ 502 109 00) 600119052 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 -- Jeff Squyres Cisco Systems From alexander.supalov at [hidden] Tue Oct 14 07:43:34 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Tue, 14 Oct 2008 13:43:34 +0100 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <1A356737-CAFB-48D1-B51B-7365E1354B3F@cisco.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D070A6@swsmsx413.ger.corp.intel.com> Sure. This will most likely be a MPI-3 topic, though. I'll drop in a link here once ready. -----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Tuesday, October 14, 2008 2:35 PM To: MPI 2.2 Subject: Re: [Mpi-22] Higher-level languages proposal On Oct 14, 2008, at 7:07 AM, Supalov, Alexander wrote: > Thanks. I'd rather say that if the purpose of the extension is > indeed to > serialize the Probe/Recv pair, the better way to solve this and many > other problems would be to make threads directly addressable, as if > they > were MPI processes. 
> > One way to do this might be, say, to create a call like > MPI_Comm_thread_enroll that creates an intra-communicator out of all > threads that call this function in a loosely synchronous fashion, > collectively over one or several MPI processes they constitute. I'm still not sure I follow. Can you provide more details, perhaps with function prototypes and specific rules? (i.e., an alternate proposal)? > If paired with the appropriately extended MPI_Comm_free, this would > allow, for example, all threads in an OpenMP parallel section to be > addressed as if they were fully fledged MPI processes. Note that this > would allow more than one parallel section during the program run. > > Other threading models would profit from this "opt-in/opt-out" method, > too. This may be a more flexible way of dealing with threads than the > one-time MPI_Init variety mentioned by George Bosilica in his > EuroPVM/MPI keynote, by the way. > > Best regards. > > Alexander > > -----Original Message----- > From: mpi-22-bounces_at_[hidden] > [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Terry Dontje > Sent: Tuesday, October 14, 2008 12:45 PM > To: MPI 2.2 > Subject: Re: [Mpi-22] Higher-level languages proposal > > Supalov, Alexander wrote: >> Dear Jeff, >> >> Unfortunately, I won't be in Chicago, so we should rather discuss >> this >> here. I talked to Torsten last time about this extension. As far as I >> can remember, the main purpose of this extension is to make sure that >> the thread that called the MPI_Probe also calls the MPI_Recv and gets >> the message matched by the aforementioned MPI_Probe. >> >> If so, the main problem here is not the matching. The main problem is >> that one cannot address threads in MPI. If we fix that, the proposed >> extension with the message handle and such will become superfluous. >> >> See what I mean? >> >> > Interesting, so you are basically redefining the MPI_Probe/Recv pair > to > guarrantee a message to go to a specific thread. Or in other words > lowering the proposal's MPI_Mprobe/recv to be in the implementation of > MPI_Probe/Recv. This seems reasonable to me since MPI_Probe/Recv > itself > > is basically useless unless the programmer assures serialization when > that combination is used. > > --td >> Best regards. >> >> Alexander >> >> -----Original Message----- >> From: mpi-22-bounces_at_[hidden] >> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres >> Sent: Monday, October 13, 2008 11:48 PM >> To: MPI 2.2 >> Subject: Re: [Mpi-22] Higher-level languages proposal >> >> On Oct 13, 2008, at 10:46 AM, Supalov, Alexander wrote: >> >> >>> Thanks. The 2.1.1, which was presented last time, in my opinion does > >>> not >>> seem to solve the right problem. Instead of defining a way for >>> unambiguous addressing of the threads in MPI, which would eliminate >>> the >>> MPI_Probe/Recv ambiguity and many other issues, it attempts to add > yet >>> another concept (this time, a message id) in the current situation >>> where >>> any thread can do what they please. >>> >> >> I'm not quite sure I understand your proposal. >> >> <... after typing out a lengthy/rambling discourse that made very >> little sense and was fraught with questions and ambiguities :-) ...> >> >> Let's discuss this in Chicago; Rich has allocated 5-7pm on Monday for > >> discussion of this proposal. These are exactly the kinds of larger >> issues that we want to raise via this proposal. 
>> >> > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > --------------------------------------------------------------------- > Intel GmbH > Dornacher Strasse 1 > 85622 Feldkirchen/Muenchen Germany > Sitz der Gesellschaft: Feldkirchen bei Muenchen > Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer > Registergericht: Muenchen HRB 47456 Ust.-IdNr. > VAT Registration No.: DE129385895 > Citibank Frankfurt (BLZ 502 109 00) 600119052 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 -- Jeff Squyres Cisco Systems _______________________________________________ mpi-22 mailing list mpi-22_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From alexander.supalov at [hidden] Tue Oct 14 10:15:25 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Tue, 14 Oct 2008 16:15:25 +0100 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <1A356737-CAFB-48D1-B51B-7365E1354B3F@cisco.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D07205@swsmsx413.ger.corp.intel.com> Hi, The proposal is ready in draft, see https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/39 . I targeted it to MPI-2.2 for now. As you will see, it resolves the problem of thread addressability without any extension to the Probe/Recv calls. I bet there are more things that will follow, too. Here's the current text for reference: "A collective MPI call, MPI_Comm_thread_register, with the following syntax (in C): int MPI_Comm_thread_register(MPI_Comm comm, int index, int num, MPI_Comm *newcomm) returns a newcomm for all num threads of the comm that called this function. All threads are treated as MPI processes in the newcomm, and their ranks are ordered according to the index argument that ranges between 0 and num-1. This argument must be unique in every thread in the given MPI process of the comm. From this moment on, all threads contained in the newcomm are considered as MPI processes, with all that this entails, including an individual MPI rank that makes the respective thread addressable in the usual manner. All MPI communicator and group management calls can be applied to the newcomm in order to produce new communicators, reorder the processes in it, etc. (see Figure 1). A slightly modified call MPI_Comm_free with the standard syntax (in C): int MPI_Comm_free(MPI_Comm *comm) can be used to destroy the respective communicator comm and thus "demote" all the threads from the status of MPI processes in the comm back to the unnamed threads typical of the MPI standard. This pair of calls, or their equivalent, allows threads to be addressed directly in all MPI calls, and since the sequence of the MPI_Comm_thread_register and MPI_Comm_free calls can be repeated as needed, OpenMP parallel sections or any equivalent groups of threads in the MPI program can become MPI processes for a while and then return to their original status. If threads use (as they usually do) joint address space with one (former) MPI process, the MPI communication calls can certainly take advantage of this by copying data directly from the source to the destination buffer. This equally applies to all point-to-point, collective, one-sided, and file I/O calls. This call certainly makes sense only at the thread support level MPI_THREAD_MULTIPLE." Best regards. Alexander 
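For illustration only, and not part of the ticket text: a minimal sketch of how the proposed call might be driven from an OpenMP parallel region, assuming an MPI_THREAD_MULTIPLE build. MPI_Comm_thread_register exists only in this proposal, not in any MPI standard or implementation:

    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        #pragma omp parallel
        {
            MPI_Comm newcomm;
            int index = omp_get_thread_num();   /* unique per thread, 0..num-1 */
            int num   = omp_get_num_threads();  /* threads enrolled by this process */
            int rank;

            /* Proposed call from ticket #39 -- hypothetical. */
            MPI_Comm_thread_register(MPI_COMM_WORLD, index, num, &newcomm);

            /* Each thread is now individually addressable as (newcomm, rank). */
            MPI_Comm_rank(newcomm, &rank);

            /* ... point-to-point, collectives, etc., among threads ... */

            MPI_Comm_free(&newcomm);   /* threads revert to anonymous threads */
        }

        MPI_Finalize();
        return 0;
    }

As the later messages in this thread clarify, the thread count is local: each process enrolls its own threads, so different processes may contribute different numbers of threads to newcomm.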
-----Original Message----- From: Supalov, Alexander Sent: Tuesday, October 14, 2008 2:44 PM To: 'MPI 2.2' Subject: RE: [Mpi-22] Higher-level languages proposal Sure. This will most likely be a MPI-3 topic, though. I'll drop in a link here once ready. -----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Tuesday, October 14, 2008 2:35 PM To: MPI 2.2 Subject: Re: [Mpi-22] Higher-level languages proposal On Oct 14, 2008, at 7:07 AM, Supalov, Alexander wrote: > Thanks. I'd rather say that if the purpose of the extension is > indeed to > serialize the Probe/Recv pair, the better way to solve this and many > other problems would be to make threads directly addressable, as if > they > were MPI processes. > > One way to do this might be, say, to create a call like > MPI_Comm_thread_enroll that creates an intra-communicator out of all > threads that call this function in a loosely synchronous fashion, > collectively over one or several MPI processes they constitute. I'm still not sure I follow. Can you provide more details, perhaps with function prototypes and specific rules? (i.e., an alternate proposal)? > If paired with the appropriately extended MPI_Comm_free, this would > allow, for example, all threads in an OpenMP parallel section to be > addressed as if they were fully fledged MPI processes. Note that this > would allow more than one parallel section during the program run. > > Other threading models would profit from this "opt-in/opt-out" method, > too. This may be a more flexible way of dealing with threads than the > one-time MPI_Init variety mentioned by George Bosilca in his > EuroPVM/MPI keynote, by the way. > > Best regards. > > Alexander > > -----Original Message----- > From: mpi-22-bounces_at_[hidden] > [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Terry Dontje > Sent: Tuesday, October 14, 2008 12:45 PM > To: MPI 2.2 > Subject: Re: [Mpi-22] Higher-level languages proposal > > Supalov, Alexander wrote: >> Dear Jeff, >> >> Unfortunately, I won't be in Chicago, so we should rather discuss >> this >> here. I talked to Torsten last time about this extension. As far as I >> can remember, the main purpose of this extension is to make sure that >> the thread that called the MPI_Probe also calls the MPI_Recv and gets >> the message matched by the aforementioned MPI_Probe. >> >> If so, the main problem here is not the matching. 
The main problem is >> that one cannot address threads in MPI. If we fix that, the proposed >> extension with the message handle and such will become superfluous. >> >> See what I mean? >> >> > Interesting, so you are basically redefining the MPI_Probe/Recv pair > to > guarrantee a message to go to a specific thread. Or in other words > lowering the proposal's MPI_Mprobe/recv to be in the implementation of > MPI_Probe/Recv. This seems reasonable to me since MPI_Probe/Recv > itself > > is basically useless unless the programmer assures serialization when > that combination is used. > > --td >> Best regards. >> >> Alexander >> >> -----Original Message----- >> From: mpi-22-bounces_at_[hidden] >> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres >> Sent: Monday, October 13, 2008 11:48 PM >> To: MPI 2.2 >> Subject: Re: [Mpi-22] Higher-level languages proposal >> >> On Oct 13, 2008, at 10:46 AM, Supalov, Alexander wrote: >> >> >>> Thanks. The 2.1.1, which was presented last time, in my opinion does > >>> not >>> seem to solve the right problem. Instead of defining a way for >>> unambiguous addressing of the threads in MPI, which would eliminate >>> the >>> MPI_Probe/Recv ambiguity and many other issues, it attempts to add > yet >>> another concept (this time, a message id) in the current situation >>> where >>> any thread can do what they please. >>> >> >> I'm not quite sure I understand your proposal. >> >> <... after typing out a lengthy/rambling discourse that made very >> little sense and was fraught with questions and ambiguities :-) ...> >> >> Let's discuss this in Chicago; Rich has allocated 5-7pm on Monday for > >> discussion of this proposal. These are exactly the kinds of larger >> issues that we want to raise via this proposal. >> >> > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > --------------------------------------------------------------------- > Intel GmbH > Dornacher Strasse 1 > 85622 Feldkirchen/Muenchen Germany > Sitz der Gesellschaft: Feldkirchen bei Muenchen > Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer > Registergericht: Muenchen HRB 47456 Ust.-IdNr. > VAT Registration No.: DE129385895 > Citibank Frankfurt (BLZ 502 109 00) 600119052 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 -- Jeff Squyres Cisco Systems _______________________________________________ mpi-22 mailing list mpi-22_at_[hidden] http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 --------------------------------------------------------------------- Intel GmbH Dornacher Strasse 1 85622 Feldkirchen/Muenchen Germany Sitz der Gesellschaft: Feldkirchen bei Muenchen Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer Registergericht: Muenchen HRB 47456 Ust.-IdNr. VAT Registration No.: DE129385895 Citibank Frankfurt (BLZ 502 109 00) 600119052 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. 
If you are not the intended recipient, please contact the sender and delete all copies. From jsquyres at [hidden] Wed Oct 15 08:44:26 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Wed, 15 Oct 2008 09:44:26 -0400 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D07205@swsmsx413.ger.corp.intel.com> Message-ID: <2E80681E-AFEB-469E-B3BF-C74EEE05B089@cisco.com> I just added 2 lengthy comments on ticket 39 ( https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/39 ). I suspect that there will need to be a *lot* of discussion about this idea. On Oct 14, 2008, at 11:15 AM, Supalov, Alexander wrote: > Hi, > > The proposal is ready in draft, see > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/39 . I targeted it > to MPI-2.2 for now. As you will see, it resolves the problem of thread > addressability without any extension to the Probe/Recv calls. I bet > there are more things that will follow, too. > > Here's the current text for reference: > > "A collective MPI call, MPI_Comm_thread_register, with the following > syntax (in C): > > int MPI_Comm_thread_register(MPI_Comm comm, int index, int num, > MPI_Comm > *newcomm) > > returns a newcomm for all num threads of the comm that called this > function. All threads are treated as MPI processes in the newcomm, and > their ranks are ordered according to the index argument that ranges > between 0 and num-1. This argument must be unique in every thread on > in > the given MPI process of the comm. > >> From this moment on, all threads contained in the newcomm are >> considered > as MPI processes, with all that this entails, including individual MPI > rank that makes the respective thread addressable in the usual manner. > All MPI communicator and group management calls can be applied to the > newcomm in order to produce new communicators, reorder the processes > in > it, etc. (see Figure 1). > > A slightly modified call MPI_Comm_free with the standard syntax (in > C): > > int MPI_Comm_free(MPI_Comm comm) > > can be used to destroy the respective communicator comm and thus > "demote" all the threads from the status of MPI processes in the comm > back to the unnamed threads typical of the MPI standard. > > This pair of calls, or their equivalent, allow threads to be addressed > directly in all MPI calls, and since the sequence of the > MPI_Comm_thread_register and MPI_Comm_free calls can be repeated as > needed, OpenMP parallel sections or any equivalent groups of threads > in > the MPI program can become MPI processes for a while and then return > to > their original status. > > If threads use (as they usually do) joint address space with one > (former) MPI process, the MPI communication calls can certainly take > advantage of this by copying data directly from the source to the > destination buffer. This equally applies to all point-to-point, > collective, one-sided, and file I/O calls. > > This call certainly makes sense only at the thread support level > MPI_THREAD_MULTIPLE." > > Best regards. > > Alexander > > -----Original Message----- > From: Supalov, Alexander > Sent: Tuesday, October 14, 2008 2:44 PM > To: 'MPI 2.2' > Subject: RE: [Mpi-22] Higher-level languages proposal > > Sure. This will most likely be a MPI-3 topic, though. I'll drop in a > link here once ready. 
> > -----Original Message----- > From: mpi-22-bounces_at_[hidden] > [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres > Sent: Tuesday, October 14, 2008 2:35 PM > To: MPI 2.2 > Subject: Re: [Mpi-22] Higher-level languages proposal > > On Oct 14, 2008, at 7:07 AM, Supalov, Alexander wrote: > >> Thanks. I'd rather say that if the purpose of the extension is >> indeed to >> serialize the Probe/Recv pair, the better way to solve this and many >> other problems would be to make threads directly addressable, as if >> they >> were MPI processes. >> >> One way to do this might be, say, to create a call like >> MPI_Comm_thread_enroll that creates an intra-communicator out of all >> threads that call this function in a loosely synchronous fashion, >> collectively over one or several MPI processes they constitute. > > I'm still not sure I follow. Can you provide more details, perhaps > with function prototypes and specific rules? (i.e., an alternate > proposal)? > >> If paired with the appropriately extended MPI_Comm_free, this would >> allow, for example, all threads in an OpenMP parallel section to be >> addressed as if they were fully fledged MPI processes. Note that this >> would allow more than one parallel section during the program run. >> >> Other threading models would profit from this "opt-in/opt-out" >> method, >> too. This may be a more flexible way of dealing with threads than the >> one-time MPI_Init variety mentioned by George Bosilica in his >> EuroPVM/MPI keynote, by the way. >> >> Best regards. >> >> Alexander >> >> -----Original Message----- >> From: mpi-22-bounces_at_[hidden] >> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Terry Dontje >> Sent: Tuesday, October 14, 2008 12:45 PM >> To: MPI 2.2 >> Subject: Re: [Mpi-22] Higher-level languages proposal >> >> Supalov, Alexander wrote: >>> Dear Jeff, >>> >>> Unfortunately, I won't be in Chicago, so we should rather discuss >>> this >>> here. I talked to Torsten last time about this extension. As far >>> as I >>> can remember, the main purpose of this extension is to make sure >>> that >>> the thread that called the MPI_Probe also calls the MPI_Recv and >>> gets >>> the message matched by the aforementioned MPI_Probe. >>> >>> If so, the main problem here is not the matching. The main problem >>> is >>> that one cannot address threads in MPI. If we fix that, the proposed >>> extension with the message handle and such will become superfluous. >>> >>> See what I mean? >>> >>> >> Interesting, so you are basically redefining the MPI_Probe/Recv pair >> to >> guarrantee a message to go to a specific thread. Or in other words >> lowering the proposal's MPI_Mprobe/recv to be in the implementation >> of >> MPI_Probe/Recv. This seems reasonable to me since MPI_Probe/Recv >> itself >> >> is basically useless unless the programmer assures serialization when >> that combination is used. >> >> --td >>> Best regards. >>> >>> Alexander >>> >>> -----Original Message----- >>> From: mpi-22-bounces_at_[hidden] >>> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff >>> Squyres >>> Sent: Monday, October 13, 2008 11:48 PM >>> To: MPI 2.2 >>> Subject: Re: [Mpi-22] Higher-level languages proposal >>> >>> On Oct 13, 2008, at 10:46 AM, Supalov, Alexander wrote: >>> >>> >>>> Thanks. The 2.1.1, which was presented last time, in my opinion >>>> does >> >>>> not >>>> seem to solve the right problem. 
Instead of defining a way for >>>> unambiguous addressing of the threads in MPI, which would eliminate >>>> the >>>> MPI_Probe/Recv ambiguity and many other issues, it attempts to add >> yet >>>> another concept (this time, a message id) in the current situation >>>> where >>>> any thread can do what they please. >>>> >>> >>> I'm not quite sure I understand your proposal. >>> >>> <... after typing out a lengthy/rambling discourse that made very >>> little sense and was fraught with questions and ambiguities :-) ...> >>> >>> Let's discuss this in Chicago; Rich has allocated 5-7pm on Monday >>> for >> >>> discussion of this proposal. These are exactly the kinds of larger >>> issues that we want to raise via this proposal. >>> >>> >> >> _______________________________________________ >> mpi-22 mailing list >> mpi-22_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 >> --------------------------------------------------------------------- >> Intel GmbH >> Dornacher Strasse 1 >> 85622 Feldkirchen/Muenchen Germany >> Sitz der Gesellschaft: Feldkirchen bei Muenchen >> Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer >> Registergericht: Muenchen HRB 47456 Ust.-IdNr. >> VAT Registration No.: DE129385895 >> Citibank Frankfurt (BLZ 502 109 00) 600119052 >> >> This e-mail and any attachments may contain confidential material for >> the sole use of the intended recipient(s). Any review or distribution >> by others is strictly prohibited. If you are not the intended >> recipient, please contact the sender and delete all copies. >> >> >> _______________________________________________ >> mpi-22 mailing list >> mpi-22_at_[hidden] >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > > > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > --------------------------------------------------------------------- > Intel GmbH > Dornacher Strasse 1 > 85622 Feldkirchen/Muenchen Germany > Sitz der Gesellschaft: Feldkirchen bei Muenchen > Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer > Registergericht: Muenchen HRB 47456 Ust.-IdNr. > VAT Registration No.: DE129385895 > Citibank Frankfurt (BLZ 502 109 00) 600119052 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Wed Oct 15 15:42:55 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Wed, 15 Oct 2008 16:42:55 -0400 Subject: [Mpi-22] Tracking MPI-2.2 progress In-Reply-To: <3FCFE478-E0C5-4B0E-8AFF-0C0AC98A21AE@cisco.com> Message-ID: On Sep 8, 2008, at 4:11 PM, Jeff Squyres wrote: > Note that there are two ways to track what is happening in the > MPI-2.2 effort: > > 1. The Trac we are using has an RSS feed of all activity: > > https://svn.mpi-forum.org/trac/mpi-forum-web/timeline?ticket=on&milestone=on&wiki=on&max=50&daysback=90&format=rss I just discovered that the RSS URL I originally listed (above) does not show individual ticket changes. 
If you want the RSS feed to include individual ticket changes (i.e., whenever someone adds a comment to a ticket), use this URL: https://svn.mpi-forum.org/trac/mpi-forum-web/timeline?ticket=on&ticket_details=on&changeset=on&milestone=on&wiki=on&max=50&daysback=90&format=rss If you want to customize what your RSS feed shows you: - go to the "timeline" page on the wiki ( https://svn.mpi-forum.org/trac/mpi-forum-web/timeline ) - use the checkboxes in the top right corner to indicate what you want to see - click "Update" - once the page refreshes, scroll down to the bottom and click on the RSS feed icon and add the URL to your favorite RSS feed reader -- Jeff Squyres Cisco Systems From alexander.supalov at [hidden] Thu Oct 16 14:45:59 2008 From: alexander.supalov at [hidden] (Supalov, Alexander) Date: Thu, 16 Oct 2008 20:45:59 +0100 Subject: [Mpi-22] Higher-level languages proposal In-Reply-To: <2E80681E-AFEB-469E-B3BF-C74EEE05B089@cisco.com> Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D3015C@swsmsx413.ger.corp.intel.com> Dear Jeff, Thanks. I'm looking forward to a lot of discussion. To start it, I've answered your questions and added some clarifications to the proposal. The main point is that in the newcomm, all processes (i.e., threads) have their own unique rank as do processes in the original "normal" MPI communicator comm. The number of threads per original MPI process that join the newcomm is given by the local_num_threads argument. After this, the usual (comm,rank) addressing works both in the old-fashioned process-only communicators and in the new thread-based ones. By using the usual communicator and group management calls one can cut and splice these new communicators as needed. This is why the syntax comparable to that of the MPI_COMM_SPLIT looks superfluous to me, at least for now. And since we can create and free these communicators as often as needed, we can follow the OpenMP parallel regions and other threading constructs very closely throughout the course of the program execution. Best regards. Alexander -----Original Message----- From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres Sent: Wednesday, October 15, 2008 3:44 PM To: MPI 2.2 Subject: Re: [Mpi-22] Higher-level languages proposal I just added 2 lengthy comments on ticket 39 ( https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/39 ). I suspect that there will need to be a *lot* of discussion about this idea. On Oct 14, 2008, at 11:15 AM, Supalov, Alexander wrote: > Hi, > > The proposal is ready in draft, see > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/39 . I targeted it > to MPI-2.2 for now. As you will see, it resolves the problem of thread > addressability without any extension to the Probe/Recv calls. I bet > there are more things that will follow, too. > > Here's the current text for reference: > > "A collective MPI call, MPI_Comm_thread_register, with the following > syntax (in C): > > int MPI_Comm_thread_register(MPI_Comm comm, int index, int num, > MPI_Comm > *newcomm) > > returns a newcomm for all num threads of the comm that called this > function. All threads are treated as MPI processes in the newcomm, and > their ranks are ordered according to the index argument that ranges > between 0 and num-1. This argument must be unique in every thread on > in > the given MPI process of the comm. 
> >> From this moment on, all threads contained in the newcomm are >> considered > as MPI processes, with all that this entails, including individual MPI > rank that makes the respective thread addressable in the usual manner. > All MPI communicator and group management calls can be applied to the > newcomm in order to produce new communicators, reorder the processes > in > it, etc. (see Figure 1). > > A slightly modified call MPI_Comm_free with the standard syntax (in > C): > > int MPI_Comm_free(MPI_Comm comm) > > can be used to destroy the respective communicator comm and thus > "demote" all the threads from the status of MPI processes in the comm > back to the unnamed threads typical of the MPI standard. > > This pair of calls, or their equivalent, allow threads to be addressed > directly in all MPI calls, and since the sequence of the > MPI_Comm_thread_register and MPI_Comm_free calls can be repeated as > needed, OpenMP parallel sections or any equivalent groups of threads > in > the MPI program can become MPI processes for a while and then return > to > their original status. > > If threads use (as they usually do) joint address space with one > (former) MPI process, the MPI communication calls can certainly take > advantage of this by copying data directly from the source to the > destination buffer. This equally applies to all point-to-point, > collective, one-sided, and file I/O calls. > > This call certainly makes sense only at the thread support level > MPI_THREAD_MULTIPLE." > > Best regards. > > Alexander > > -----Original Message----- > From: Supalov, Alexander > Sent: Tuesday, October 14, 2008 2:44 PM > To: 'MPI 2.2' > Subject: RE: [Mpi-22] Higher-level languages proposal > > Sure. This will most likely be a MPI-3 topic, though. I'll drop in a > link here once ready. > > -----Original Message----- > From: mpi-22-bounces_at_[hidden] > [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres > Sent: Tuesday, October 14, 2008 2:35 PM > To: MPI 2.2 > Subject: Re: [Mpi-22] Higher-level languages proposal > > On Oct 14, 2008, at 7:07 AM, Supalov, Alexander wrote: > >> Thanks. I'd rather say that if the purpose of the extension is >> indeed to >> serialize the Probe/Recv pair, the better way to solve this and many >> other problems would be to make threads directly addressable, as if >> they >> were MPI processes. >> >> One way to do this might be, say, to create a call like >> MPI_Comm_thread_enroll that creates an intra-communicator out of all >> threads that call this function in a loosely synchronous fashion, >> collectively over one or several MPI processes they constitute. > > I'm still not sure I follow. Can you provide more details, perhaps > with function prototypes and specific rules? (i.e., an alternate > proposal)? > >> If paired with the appropriately extended MPI_Comm_free, this would >> allow, for example, all threads in an OpenMP parallel section to be >> addressed as if they were fully fledged MPI processes. Note that this >> would allow more than one parallel section during the program run. >> >> Other threading models would profit from this "opt-in/opt-out" >> method, >> too. This may be a more flexible way of dealing with threads than the >> one-time MPI_Init variety mentioned by George Bosilica in his >> EuroPVM/MPI keynote, by the way. >> >> Best regards. 
>
> -----Original Message-----
> From: Supalov, Alexander
> Sent: Tuesday, October 14, 2008 2:44 PM
> To: 'MPI 2.2'
> Subject: RE: [Mpi-22] Higher-level languages proposal
>
> Sure. This will most likely be an MPI-3 topic, though. I'll drop in a
> link here once ready.
>
> -----Original Message-----
> From: mpi-22-bounces_at_[hidden]
> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres
> Sent: Tuesday, October 14, 2008 2:35 PM
> To: MPI 2.2
> Subject: Re: [Mpi-22] Higher-level languages proposal
>
> On Oct 14, 2008, at 7:07 AM, Supalov, Alexander wrote:
>
>> Thanks. I'd rather say that if the purpose of the extension is indeed
>> to serialize the Probe/Recv pair, the better way to solve this and
>> many other problems would be to make threads directly addressable, as
>> if they were MPI processes.
>>
>> One way to do this might be, say, to create a call like
>> MPI_Comm_thread_enroll that creates an intra-communicator out of all
>> threads that call this function in a loosely synchronous fashion,
>> collectively over one or several MPI processes they constitute.
>
> I'm still not sure I follow. Can you provide more details, perhaps
> with function prototypes and specific rules? (i.e., an alternate
> proposal)?
>
>> If paired with the appropriately extended MPI_Comm_free, this would
>> allow, for example, all threads in an OpenMP parallel section to be
>> addressed as if they were fully fledged MPI processes. Note that this
>> would allow more than one parallel section during the program run.
>>
>> Other threading models would profit from this "opt-in/opt-out"
>> method, too. This may be a more flexible way of dealing with threads
>> than the one-time MPI_Init variety mentioned by George Bosilca in his
>> EuroPVM/MPI keynote, by the way.
>>
>> Best regards.
>>
>> Alexander
>>
>> -----Original Message-----
>> From: mpi-22-bounces_at_[hidden]
>> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Terry Dontje
>> Sent: Tuesday, October 14, 2008 12:45 PM
>> To: MPI 2.2
>> Subject: Re: [Mpi-22] Higher-level languages proposal
>>
>> Supalov, Alexander wrote:
>>> Dear Jeff,
>>>
>>> Unfortunately, I won't be in Chicago, so we should rather discuss
>>> this here. I talked to Torsten last time about this extension. As
>>> far as I can remember, the main purpose of this extension is to make
>>> sure that the thread that called the MPI_Probe also calls the
>>> MPI_Recv and gets the message matched by the aforementioned
>>> MPI_Probe.
>>>
>>> If so, the main problem here is not the matching. The main problem
>>> is that one cannot address threads in MPI. If we fix that, the
>>> proposed extension with the message handle and such will become
>>> superfluous.
>>>
>>> See what I mean?
>>>
>> Interesting, so you are basically redefining the MPI_Probe/Recv pair
>> to guarantee that a message goes to a specific thread. Or, in other
>> words, lowering the proposal's MPI_Mprobe/recv into the
>> implementation of MPI_Probe/Recv. This seems reasonable to me, since
>> MPI_Probe/Recv itself is basically useless unless the programmer
>> assures serialization when that combination is used.
>>
>> --td
>>
>> [remainder of the October 13 exchange quoted in full; trimmed]

From jsquyres at [hidden] Thu Oct 16 16:15:01 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Thu, 16 Oct 2008 17:15:01 -0400
Subject: [Mpi-22] Trac ticket e-mails
Message-ID: <9EDC8267-2249-41B4-AB6C-4174CC8E2F67@cisco.com>

IU just discovered a bug in their setup such that e-mails were not
being sent when tickets were changed. They have now fixed the bug; the
following people will get e-mails when tickets are updated:

- the reporter
- the person it was assigned to (if different than the reporter)
- anyone who is explicitly listed in the CC

I can't remember offhand whether everyone who has replied on the ticket
will also get a mail.

-- 
Jeff Squyres
Cisco Systems

From jsquyres at [hidden] Thu Oct 16 16:33:16 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Thu, 16 Oct 2008 17:33:16 -0400
Subject: [Mpi-22] Higher-level languages proposal
In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D3015C@swsmsx413.ger.corp.intel.com>
Message-ID: <49586B33-A96C-47B5-8624-F86E5E4EF8F0@cisco.com>

Ah -- major point of clarification that I missed in your initial
description (sorry): that the number of threads is a *local* number of
threads. That helps my understanding; I glossed over it and thought
that that number was a *total* number of threads.

Replying here on the list because it's a bit easier than lengthy /
inter-threaded replies on the ticket... (I just put the web archive URL
of the reply thread on the ticket):

- THREAD_REGISTER/SPLIT_THREAD: per your comment about this not really
being a "split" kind of action: ok, I can see that.
But the color/key aspect may still be useful here.

- "Must be invoked by >=1 thread" being superfluous: I still don't
quite grok your definition of "collective" here -- it's not the same
definition of "collective" as in other MPI collectives, because it's
*more* than just "every MPI process in the communicator": you now want
every MPI process in the communicator plus other threads.

- Changing addressing to (comm_id, rank, thread_id): I specifically
mentioned the *internals* of an MPI implementation. I realize that your
proposal was aimed at keeping the external interface for MPI_SEND
(etc.) the same. I was stating that this is a fundamental change for
the internals of MPI implementations, even for non-communication
operations such as GROUP_TRANSLATE_RANKS.

- FINALIZE: It was an open question that you didn't really answer: if
THREAD_REGISTER'ed threads *are* MPI processes, do they each have to
call MPI_FINALIZE?

- Can you call THREAD_REGISTER/SPLIT_THREAD with a comm argument that
already contains threads-as-processes? If so, what exactly does it
mean? You said: "Same thing that it normally means. See the
(newcomm,rank) addressing above. The applicability of all communicator
and group management calls is stated in the description."

Can you describe exactly what it means for a thread-now-MPI-process to
call THREAD_REGISTER with a num_threads argument >1? What exactly
happens in this scenario?

main() {
    MPI_INIT();
    spawn_threads(8, thread_main, NULL);
    wait_for_threads();
    MPI_FINALIZE();
}

void thread_main(void *arg) {
    MPI_Comm comm1;
    MPI_THREAD_REGISTER(MPI_COMM_SELF, my_thread_id, 8, &comm1);
    spawn_threads(8, secondary_thread_main, comm1);
}

void secondary_thread_main(void *arg) {
    MPI_Comm comm2, parent = (MPI_Comm) arg;
    MPI_THREAD_REGISTER(parent, my_thread_id, 8, &comm2);
}

Which threads end up in which comm2? (note that there will be 8
comm2's) Since threads are "unbound" to an MPI process before they
invoke THREAD_REGISTER, the grouping is not guaranteed.

- THREAD_MULTIPLE: I now understand the distinction of
num_local_threads; thanks. But I think num_local_threads can only be >1
if the local thread level is MPI_THREAD_MULTIPLE. You didn't really
address this in your answer.

- Abstracting away locality: I respectfully disagree. :-) Yes, we're
enabling thread-specific addressing, and that may be a good thing. But
MPI does not [currently] expose which communicator ranks are "local" to
other communicator ranks. And with this proposal, we now have at least
2 levels of "local" in the MPI spec itself (in the same OS process and
outside of the OS process). The hardware that the MPI job is running on
likely has multiple levels of locality as well (on- vs. off-host, on-
vs. off-processor, etc.). So yes, we may have enabled one good thing,
but made determining locality more difficult. That's my only point
here.

On Oct 16, 2008, at 3:45 PM, Supalov, Alexander wrote:
> [earlier message quoted in full; trimmed]
-- 
Jeff Squyres
Cisco Systems

From jsquyres at [hidden] Thu Oct 16 16:45:03 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Thu, 16 Oct 2008 17:45:03 -0400
Subject: [Mpi-22] Higher-level languages proposal
In-Reply-To: <49586B33-A96C-47B5-8624-F86E5E4EF8F0@cisco.com>
Message-ID: 

On Oct 16, 2008, at 5:33 PM, Jeff Squyres wrote:
> [nested THREAD_REGISTER example quoted; trimmed]

Never mind -- this was a bad example. The grouping above is guaranteed
because of the unique values of the parent communicator.

Hmm. Something still bugs me about this, but I can't quite put my
finger on it. I'll therefore shut up about this specific point until I
can be clear about it. :-)

-- 
Jeff Squyres
Cisco Systems

From traff at [hidden] Fri Oct 17 03:14:42 2008
From: traff at [hidden] (Jesper Larsson Traeff)
Date: Fri, 17 Oct 2008 10:14:42 +0200
Subject: [Mpi-22] Some (minor) updates to tickets #29, #30, #31
In-Reply-To: 
Message-ID: <20081017081442.GA13729@fourier.it.neclab.eu>

jesper

From alexander.supalov at [hidden] Fri Oct 17 06:34:20 2008
From: alexander.supalov at [hidden] (Supalov, Alexander)
Date: Fri, 17 Oct 2008 12:34:20 +0100
Subject: [Mpi-22] Higher-level languages proposal
In-Reply-To: <49586B33-A96C-47B5-8624-F86E5E4EF8F0@cisco.com>
Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D30414@swsmsx413.ger.corp.intel.com>

Dear Jeff,

Thanks. I reply below.

Best regards.

Alexander

-----Original Message-----
From: mpi-22-bounces_at_[hidden]
[mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Thursday, October 16, 2008 11:33 PM
To: MPI 2.2
Subject: Re: [Mpi-22] Higher-level languages proposal

Ah -- major point of clarification that I missed in your initial
description (sorry): that the number of threads is a *local* number of
threads.
That helps my understanding; I glossed over it and thought that that
number was a *total* number of threads.

Replying here on the list because it's a bit easier than lengthy /
inter-threaded replies on the ticket... (I just put the web archive URL
of the reply thread on the ticket):

- THREAD_REGISTER/SPLIT_THREAD: per your comment about this not really
being a "split" kind of action: ok, I can see that. But the color/key
aspect may still be useful here.

AS> I'd rather work around this in a modular way.

- "Must be invoked by >=1 thread" being superfluous: I still don't
quite grok your definition of "collective" here [...] you now want
every MPI process in the communicator plus other threads.

AS> In my opinion, it's superfluous because collectives must be called
by all processes on the comm in a loosely synchronous manner. This is
exactly what will happen, so the requirement of >=1 thread calling the
proposed function will be met automatically. I see your point with
respect to the use of the word "collective" here, though. Maybe we'll
come up with a better wording.

- Changing addressing to (comm_id, rank, thread_id): I specifically
mentioned the *internals* of an MPI implementation. [...]

AS> Agree on internals - this won't be a snap.

- FINALIZE: It was an open question that you didn't really answer: if
THREAD_REGISTER'ed threads *are* MPI processes, do they each have to
call MPI_FINALIZE?

AS> I effectively said this should be possible, as long as MPI_Finalize
is able to handle live communicators of this kind.

- Can you call THREAD_REGISTER/SPLIT_THREAD with a comm argument that
already contains threads-as-processes? If so, what exactly does it
mean? [...] Which threads end up in which comm2? (note that there will
be 8 comm2's) Since threads are "unbound" to an MPI process before they
invoke THREAD_REGISTER, the grouping is not guaranteed.

AS> I defer my reply to your follow-up message.

- THREAD_MULTIPLE: I now understand the distinction of
num_local_threads; thanks. But I think num_local_threads can only be >1
if the local thread level is MPI_THREAD_MULTIPLE. You didn't really
address this in your answer.

AS> I said for starters that the op is only meaningful for
MPI_THREAD_MULTIPLE. Other modes may require additional contemplation.
Calling this function with only one thread should not be a problem. In
the case of MPI_THREAD_SINGLE, this op will be like MPI_COMM_DUP.
This implies that the comm should be an intracomm, by the way.

In MPI_THREAD_FUNNELED, only the main thread can call MPI, so this will
be equivalent to the above. This is not good, because the funneled
model is used very frequently by mixed programs. Maybe we should relax
the requirement here and allow MPI_COMM_THREAD_REGISTER to be called in
this case. This needs additional contemplation.

In MPI_THREAD_SERIALIZED, the call would connect the threads that
called the function. Looks reasonable to me.

Finally, in MPI_THREAD_MULTIPLE everything is fine, even if the number
of threads used is 1 per process.

- Abstracting away locality: I respectfully disagree. :-) [...] So yes,
we may have enabled one good thing, but made determining locality more
difficult. That's my only point here.

AS> I see your point now, thanks. We do address locality already, but
in a different sense and indirectly: we sort of work around this matter
by providing virtual communicators.

On Oct 16, 2008, at 3:45 PM, Supalov, Alexander wrote:
> [earlier message quoted in full; trimmed]
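To make the thread-level cases above concrete, an application might
gate the proposed call on the thread level actually provided. A minimal
sketch, again treating MPI_Comm_thread_register as hypothetical and
using the MPI_THREAD_SINGLE/MPI_COMM_DUP equivalence suggested above:

#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Comm newcomm;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided == MPI_THREAD_MULTIPLE) {
        /* all threads may call MPI: full per-thread registration
           (proposed, hypothetical call) */
        #pragma omp parallel private(newcomm)
        {
            MPI_Comm_thread_register(MPI_COMM_WORLD,
                                     omp_get_thread_num(),
                                     omp_get_num_threads(), &newcomm);
            /* ... per-thread communication ... */
            MPI_Comm_free(&newcomm);
        }
    } else {
        /* one registered thread per process: per the discussion above,
           the proposed call would degenerate to MPI_COMM_DUP */
        MPI_Comm_dup(MPI_COMM_WORLD, &newcomm);
        /* ... */
        MPI_Comm_free(&newcomm);
    }

    MPI_Finalize();
    return 0;
}

How MPI_THREAD_FUNNELED and MPI_THREAD_SERIALIZED should behave is, as
noted above, still open.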
From thakur at [hidden] Sun Oct 19 12:50:35 2008
From: thakur at [hidden] (Rajeev Thakur)
Date: Sun, 19 Oct 2008 12:50:35 -0500
Subject: [Mpi-22] FW: Reminder to move items into tickets
Message-ID: <55E64331D8AD4AB4A93C5DC7C44D1F51@thakurlaptop>

If you have converted your wiki entry to a ticket but not yet added a
link to the ticket from the main page, as others have done, it would be
helpful to add the link.

https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MpiTwoTwoWikiPage

Rajeev

_____

From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]]
On Behalf Of William Gropp
Sent: Wednesday, October 08, 2008 7:31 AM
To: MPI 2.2
Subject: [Mpi-22] Reminder to move items into tickets

[reminder of October 8 quoted in full; trimmed]

From jsquyres at [hidden] Thu Oct 23 10:33:12 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Thu, 23 Oct 2008 11:33:12 -0400
Subject: [Mpi-22] Cross-language attributes example wrong
Message-ID: <12333DE2-23DE-459C-B653-EAA4D09878EC@cisco.com>

I just filed ticket 55, about Example 16.17 in MPI-2.1 p487:

https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/55

The last line of the second sub-example is wrong, but I think the
examples themselves are extremely subtle and worth both making more
explicit and adding some comments. More details are available on the
ticket. I've asked several people for reviews already.

Enjoy.

-- 
Jeff Squyres
Cisco Systems

From jsquyres at [hidden] Thu Oct 23 15:13:22 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Thu, 23 Oct 2008 16:13:22 -0400
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
Message-ID: 

I notice that OMPI, MPICH2, Intel MPI, and HP MPI all do the following:

* define Fortran datatypes in mpi.h (e.g., MPI_INTEGER, MPI_DOUBLE)
* define Fortran datatypes in the C++ MPI namespace (e.g.,
  MPI::INTEGER, MPI::DOUBLE, etc.)
* do *not* define C datatypes in mpif.h (e.g., MPI_INT, MPI_FLOAT,
  etc.)

Why? AFAICT, there is no rule about what constants have to appear in
which language bindings.
But doesn't that implicitly mean that all constants are supposed to
appear in all language bindings?

(The argument for having MPI_DOUBLE available in C, for example, is
that a C routine may be invoked to send or receive a message containing
Fortran data. Similar arguments exist for why you'd want MPI datatypes
from other languages available in your language.)

This is also related to whether the type MPI::Fint should exist or not.

I just filed a ticket about these issues
( https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/56 ) and marked
it as "feedback requested", meaning that I need feedback from the Forum
before a proposal can be made.

What do people think?

-- 
Jeff Squyres
Cisco Systems

From alexander.supalov at [hidden] Fri Oct 24 04:48:28 2008
From: alexander.supalov at [hidden] (Supalov, Alexander)
Date: Fri, 24 Oct 2008 10:48:28 +0100
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
In-Reply-To: 
Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D8CC79@swsmsx413.ger.corp.intel.com>

Hi,

I think we should answer the following questions to decide whether and
how to define the constants:

1. When would a C MPI program call a Fortran MPI subprogram? Probably
never.
2. When would a Fortran MPI program call a C MPI subprogram? Sometimes.
3. When would a C++ MPI program call a Fortran MPI program? Probably
never.
4. When would a Fortran MPI program call a C++ MPI program? Probably
never.

Here, "subprogram" means a part of a program that relies on the use of
MPI or on knowledge of MPI datatypes, etc. This may or may not be a
direct call into the respective MPI library.

From this, what is needed is to define the Fortran datatypes from
mpif.h in C mpi.h. The rest does not seem relevant. Even the existing
C++ definitions may just reflect C heritage.

Best regards.

Alexander

-----Original Message-----
From: mpi-22-bounces_at_[hidden]
[mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Thursday, October 23, 2008 10:13 PM
To: MPI 2.2
Subject: [Mpi-22] Should all constants/types be available in all
language bindings?

[message of October 23 quoted in full; trimmed]
From ritzdorf at [hidden] Fri Oct 24 08:43:02 2008
From: ritzdorf at [hidden] (Hubert Ritzdorf)
Date: Fri, 24 Oct 2008 15:43:02 +0200
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D8CC79@swsmsx413.ger.corp.intel.com>
Message-ID: <4901D0E6.9080309@it.neclab.eu>

Hi,

as far as I remember, it was clarified in the MPI 2.1 standardization
process that "all" datatypes have to be defined in all 3 languages.
Changelog item 2 requires this explicitly for MPI_LONG_LONG,
MPI_LONG_LONG_INT, ... (But I think that some C++ datatypes are missing
since no names were defined for them.)

Possibly, Jeff is not referencing MPI 2.1 implementations.

Supalov, Alexander wrote:
> Hi,
>
> I think we should answer the following questions to decide whether and
> how to define the constants:
>
> 1. When would a C MPI program call a Fortran MPI subprogram? Probably
> never.
> 2. When would a Fortran MPI program call a C MPI subprogram?
> Sometimes.
> 3. When would a C++ MPI program call a Fortran MPI program? Probably
> never.
>
I don't agree. I know some programs of this kind.

Hubert

> 4. When would a Fortran MPI program call a C++ MPI program? Probably
> never.
>
> [remainder quoted in full; trimmed]

From alexander.supalov at [hidden] Fri Oct 24 08:54:31 2008
From: alexander.supalov at [hidden] (Supalov, Alexander)
Date: Fri, 24 Oct 2008 14:54:31 +0100
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
In-Reply-To: <4901D0E6.9080309@it.neclab.eu>
Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D8CE9B@swsmsx413.ger.corp.intel.com>

Thanks. We can surely define all datatypes everywhere in MPI-2.2. It
appears, however, that the current arrangement, with Fortran types
defined in C and C++, may be adequate.

-----Original Message-----
From: mpi-22-bounces_at_[hidden]
[mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Hubert Ritzdorf
Sent: Friday, October 24, 2008 3:43 PM
To: MPI 2.2
Subject: Re: [Mpi-22] Should all constants/types be available in all
language bindings?

[message above quoted in full; trimmed]
From jsquyres at [hidden] Fri Oct 24 08:56:46 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 24 Oct 2008 09:56:46 -0400
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
In-Reply-To: <4901D0E6.9080309@it.neclab.eu>
Message-ID: <6685C57D-28B9-4F00-9C51-FE358A7789D2@cisco.com>

You're right; I was not checking 2.1 implementations.

While 2.1 changelog item 2 does specifically mention those constants in
all 3 languages, what about the other datatypes? (e.g., MPI_INT vs.
MPI_INTEGER, etc.)

Per Alexander's comments:

1. Why would we artificially limit what types can be accessed from each
language? "Probably never" is not really the same thing as "never." It
may not be probable, but it is conceivable that a C++ program could
call a Fortran computational library -- perhaps the C++ program is
essentially the "driver" for a master/worker model, and the Fortran
computational library is what does the back-end work. Exporting new
handles for already-existing MPI objects to a specific implementation's
language bindings is not a lot of work.

2. Regardless of which way we go, I think that the standard should be
explicit about which constants are available in which language.
Otherwise, codes are in danger of not being source-code compatible if
different implementations make different decisions.

On Oct 24, 2008, at 9:43 AM, Hubert Ritzdorf wrote:
> [message above quoted in full; trimmed]

-- 
Jeff Squyres
Cisco Systems
From ritzdorf at [hidden] Fri Oct 24 11:29:36 2008
From: ritzdorf at [hidden] (Hubert Ritzdorf)
Date: Fri, 24 Oct 2008 18:29:36 +0200
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
In-Reply-To: <6685C57D-28B9-4F00-9C51-FE358A7789D2@cisco.com>
Message-ID: <4901F7F0.1010101@it.neclab.eu>

Jeff Squyres wrote:
> You're right; I was not checking 2.1 implementations.
>
> While 2.1 changelog item 2 does specifically mention those constants
> in all 3 languages, what about the other datatypes (e.g., MPI_INT vs.
> MPI_INTEGER, etc.)?
>
MPI-2.1 Section 16.3.6 MPI Opaque Objects, page 483, lines 46-47:

"All predefined datatypes can be used in datatype constructors in any
language."
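Taken literally, that sentence already presumes that the cross-language
handles exist: for a datatype constructor called from C to take a Fortran
predefined datatype, the handle has to be visible in C's mpi.h. A minimal
sketch of what the sentence permits (the function name is hypothetical):

    #include <mpi.h>

    /* p483:46-47 read literally: a C datatype constructor applied to a
       Fortran predefined datatype.  For this to compile, the handle
       MPI_DOUBLE_PRECISION must be defined in C's mpi.h. */
    MPI_Datatype make_five_dp(void)
    {
        MPI_Datatype five_dp;
        MPI_Type_contiguous(5, MPI_DOUBLE_PRECISION, &five_dp);
        MPI_Type_commit(&five_dp);
        return five_dp;  /* describes 5 Fortran DOUBLE PRECISION values */
    }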
From alexander.supalov at [hidden] Fri Oct 24 13:03:13 2008
From: alexander.supalov at [hidden] (Supalov, Alexander)
Date: Fri, 24 Oct 2008 19:03:13 +0100
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
In-Reply-To: <4901F7F0.1010101@it.neclab.eu>
Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D8CF91@swsmsx413.ger.corp.intel.com>

Hi,

If this requirement is to be interpreted literally, then yes, the
standard must define all datatypes in all languages. However, the example
following this statement shows how an MPI_REAL is transformed into an
equivalent C datatype. Whether or not this is indeed MPI_REAL is not
clear. It may well be an MPI_FLOAT.

By the way, this example shows that MPI::Fint may be necessary: MPI_Fint
is used here to pass a Fortran datatype through the C function boundary.
Imagine a C++ function. There was a ticket #4 calling for removal of
MPI::Fint until it was redefined.

Best regards.

Alexander

From jsquyres at [hidden] Fri Oct 24 14:12:50 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 24 Oct 2008 15:12:50 -0400
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
In-Reply-To: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D8CF91@swsmsx413.ger.corp.intel.com>
Message-ID: <074DA491-F527-46A5-9C2D-D836DE48237A@cisco.com>

On Oct 24, 2008, at 2:03 PM, Supalov, Alexander wrote:

> If this requirement is to be interpreted literally, then yes, the
> standard must define all datatypes in all languages. However, the
> example following this statement shows how an MPI_REAL is transformed
> into an equivalent C datatype. Whether or not this is indeed MPI_REAL
> is not clear. It may well be an MPI_FLOAT.

I'm not sure what you mean -- Example 16.16 shows making a combined
datatype with 1 C int and the 5 Fortran REALs. The C code never tries to
use the values in the R array; it just MPI_SEND's them. Using the
MPI_REAL type passed up from Fortran is the right thing to do here (vs.
using MPI_FLOAT).

Alternatively, if the C code knew that the buffer was 5 reals, it
wouldn't need to have a type passed in from Fortran -- just the buffer.
Then it could TYPE_CREATE_STRUCT with MPI_INT and MPI_REAL and send
using the resulting datatype.

That's exactly why the datatypes are supposed to be defined in all
languages, right? Not for dereferencing, but for sending/receiving data
that originally came from another language.

The way I read Hubert's text (p483:46-47), none of us are 2.1 compliant,
because we don't offer *all* the datatypes in all languages. Right?

> By the way, this example shows that MPI::Fint may be necessary:
> MPI_Fint is used here to pass a Fortran datatype through the C function
> boundary. Imagine a C++ function. There was a ticket #4 calling for
> removal of MPI::Fint until it was redefined.

Why not use MPI_Fint?

-- 
Jeff Squyres
Cisco Systems
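For readers without the document at hand, here is a compact sketch in the
spirit of Example 16.16. This is not the standard's verbatim code; the
routine name and message layout are illustrative. Fortran passes an array
R(5) together with the handle of MPI_REAL (an INTEGER on the Fortran
side, hence MPI_Fint in C); the C routine converts the handle, combines
the five REALs with a C int into one struct datatype, and sends:

    #include <mpi.h>

    /* Illustrative, in the spirit of MPI-2.1 Example 16.16.  Called
       from Fortran as something like  CALL SEND_MIXED(R, DTYPE, IERR),
       where DTYPE holds MPI_REAL.  Destination rank and tag are
       arbitrary. */
    void send_mixed_(void *r, MPI_Fint *fdtype, MPI_Fint *ierr)
    {
        int          id = 42;  /* the C int part of the message */
        MPI_Datatype real_type = MPI_Type_f2c(*fdtype);  /* Fortran
                                    MPI_REAL handle -> C handle */
        MPI_Datatype mixed;
        int          blocklens[2] = { 1, 5 };
        MPI_Aint     displs[2];
        MPI_Datatype types[2]     = { MPI_INT, real_type };

        /* Absolute addresses let one datatype span two unrelated
           buffers; the message is then sent relative to MPI_BOTTOM. */
        MPI_Get_address(&id, &displs[0]);
        MPI_Get_address(r,   &displs[1]);

        MPI_Type_create_struct(2, blocklens, displs, types, &mixed);
        MPI_Type_commit(&mixed);
        *ierr = (MPI_Fint)MPI_Send(MPI_BOTTOM, 1, mixed, 0, 0,
                                   MPI_COMM_WORLD);
        MPI_Type_free(&mixed);
    }

Note that the C side only forwards the Fortran handle; it never needs to
know whether the underlying representation matches a C float.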
From alexander.supalov at [hidden] Fri Oct 24 15:27:28 2008
From: alexander.supalov at [hidden] (Supalov, Alexander)
Date: Fri, 24 Oct 2008 21:27:28 +0100
Subject: [Mpi-22] Should all constants/types be available in all language bindings?
In-Reply-To: <074DA491-F527-46A5-9C2D-D836DE48237A@cisco.com>
Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A201D8CFC8@swsmsx413.ger.corp.intel.com>

Hi,

Yes, it appears that if p483:46-47 are to be interpreted literally, any
implementation that does not provide all datatypes in all languages will
be non-compliant with MPI-2.1. More on this below.

For now, a bit of fun: re the C handle produced out of the Fortran
MPI_REAL, imagine it's MPI_FLOAT. Will anything change for the MPI_Send?
Probably not. The datatype signature would be different if the struct was
created directly in Fortran, but the MPI_Send would not suffer a jot.

Re MPI_Fint: C++ defines so many C++ entities that have C equivalents,
like MPI::REAL, MPI::SEEK_SET, etc. Why should the user rely on MPI_Fint
then? More generally, where is the boundary between the C++ and C
interfaces, or, how much of C++ should really look like C++?

So far I was pulling your leg a little. Now it's getting serious. There
are four things to consider:

1) The language-independent datatype definition -- say, MPI_INTEGER (see
p27:13).
2) The language binding for the datatype handle representing 1) in
actual programs -- say, C MPI_Datatype MPI_INTEGER, C++ MPI::INTEGER, and
Fortran INTEGER MPI_INTEGER.
3) The internal representation of the MPI datatype inside the MPI
library, referenced by 1) and 2).
4) The actual format of the data in memory, defined by the language and
platform, and referenced by 1), 2), and 3).

Note that the distinction between 1) and 2) sort of justifies the
existence of the specifically C++ binding MPI::INTEGER for otherwise
mundane entities in C and Fortran. The argument for MPI::Fint arises from
this observation: if we started to do things the C++ way, let's stick to
it.

And what I was talking about, re the example on p483, was basically a
play between 2) and 4): what is different for the user may actually be
the same thing for the platform, in this case typically a machine word
interpreted as a real value. See p. 490 for an acknowledgement of this
fact (about the nonportable possibility of receiving MPI_INTEGER as an
MPI_INT).

Finally, at the moment one part of the standard (see pp. 27-28) does not
seem to differentiate clearly between 1) and 2), while another part (see
pp. 483-484) talks about making 1) available in all languages through 2),
and the Appendix (pp. 494-495) serves as the only definition of datatype
equivalence. This, in addition to the practical aspects mentioned in
earlier messages, is probably why so many implementations are
noncompliant with MPI-2.1.

Best regards.

Alexander
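The p. 490 remark Alexander cites (the nonportable possibility of
receiving MPI_INTEGER as an MPI_INT) reduces to a case like the following
sketch, which is exactly a play between his points 2) and 4): the code
matches only where Fortran INTEGER and C int happen to share a
representation, a property of the platform rather than of MPI. The
function name is hypothetical:

    #include <mpi.h>

    /* Nonportable (cf. MPI-2.1 p. 490).  The matching sender is a
       Fortran program doing something like
           CALL MPI_SEND(BUF, 5, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, IERR)
       while the C receiver names the type MPI_INT instead of
       MPI_INTEGER.  This works only where Fortran INTEGER and C int
       have the same in-memory format. */
    void receive_from_fortran(void)
    {
        int buf[5];
        MPI_Recv(buf, 5, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }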