From rabenseifner at [hidden] Fri Jan 18 08:59:41 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 18 Jan 2008 15:59:41 +0100
Subject: [mpi-21] MPI 2.1 Ackno text
Message-ID:

This is the currently proposed Ackno-text on the title page of MPI 2.1:

----------------------------------------------------------------------
This work was supported in part by ARPA, NSF and DARPA under grant
ASC-9310330, the National Science Foundation Science and Technology
Center Cooperative Agreement No. CCR-8809615, and the NSF contract
CDA-9115428, and by the Commission of the European Community through
Esprit project P6643 and under project HPC Standards (21111).
----------------------------------------------------------------------

It is based on the references on the title pages of MPI 1.1 and 2.0.
Do we need additional references? Is the wording okay?

Discussion should be done through the new mailing list
mpi-21_at_cs.uiuc.edu. I have sent out this mail with CC through the
old general list mpi-21_at_[hidden]

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden] Fri Jan 18 09:23:18 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 18 Jan 2008 16:23:18 +0100
Subject: [mpi-21] MPI 2.1 Abstract text
Message-ID:

This is the currently proposed Abstract for MPI 2.1:

----------------------------------------------------------------------
This document describes the MPI standard version 2.1 in one combined
document. This document combines the content from the previous
standards “MPI: A Message-Passing Interface Standard, June 12, 1995”
(MPI-1.1) and “MPI-2: Extensions to the Message-Passing Interface,
July, 1997” (MPI-1.2 and MPI-2.0) and errata documents from the MPI
Forum. The standard MPI-1.1 includes point-to-point message passing,
collective communications, group and communicator concepts, process
topologies, environmental management, and a profiling interface.
Language bindings for C and Fortran are defined. The MPI-1.2 chapter
of the MPI-2 document contains clarifications and corrections to the
MPI-1.1 standard and defines MPI-1.2. Together with corrections, these
MPI-1 documents were combined into MPI 1.3 (, 2008) which was used as
input for this book. Second input is the MPI-2 part of the MPI-2
document which describes additions to the MPI-1 standard and defines
the MPI standard version 2.0. These include miscellaneous topics,
process creation and management, one-sided communications, extended
collective operations, external interfaces, I/O, and additional
language bindings (C++). Additional clarifications and errata
corrections to MPI-2.0 are also included.
----------------------------------------------------------------------

It is based on the abstract of MPI 2.0. MPI 1.1 did not have an
abstract.

This abstract should present the main features of MPI. It should not
give a full history; that is done by the versions page. Hints from the
MPI Forum meeting in Jan. 2008 are also included, i.e., the major
sources are referenced, but so are the steps between, i.e., the
clarifications and MPI 1.3.

--------------------------------------------------------
Discussion should be done through the new mailing list
mpi-21_at_cs.uiuc.edu.
I have sent out this mail with CC through the old general list
mpi-21_at_[hidden]

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden] Fri Jan 18 09:38:28 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 18 Jan 2008 16:38:28 +0100
Subject: [mpi-21] MPI 2.1 Versions history
Message-ID:

This is the currently proposed Versions History as shown on the
back-page of the title page:

----------------------------------------------------------------------
Version 2.1: , 2008. This document combines the previous documents
MPI 1.3 (????, 2008) and MPI-2.0 (July 18, 1997). Certain parts of
MPI 2.0, such as some sections of Chapter 4, Miscellany, and
Chapter 7, Extended Collective Operations, have been merged into the
chapters of MPI 1.3. Additional errata and clarifications collected
by the MPI Forum are also included in this document.

Version 1.3: , 2008. This document combines the previous documents
MPI 1.1 (June 12, 1995) and the MPI 1.2 Chapter in MPI-2
(July 18, 1997). Additional errata collected by the MPI Forum
referring to MPI 1.1 and MPI 1.2 are also included in this document.

Version 2.0: , 1997. Beginning after the release of MPI 1.1, the MPI
Forum began meeting to consider corrections and extensions. MPI-2 has
been focused on process creation and management, one-sided
communications, extended collective communications, external
interfaces and parallel I/O. A miscellany chapter discusses items
that don't fit elsewhere, in particular language interoperability.

Version 1.2: July 18, 1997. The MPI-2 Forum introduced MPI 1.2 ...

Version 1.1: June, 1995. Beginning in March, 1995, the Message ...

Version 1.0: June, 1994. The Message Passing Interface Forum ...
----------------------------------------------------------------------

It is based on the existing text from MPI 1.1. (For completeness, I
repeated the already voted text about MPI 1.3.) The words
"Version x.x: date, year" will be printed in bold face (as shown in
MPI 1.1 on the second frontmatter page).

Are there proposals for modifications?

Discussion should be done through the new mailing list
mpi-21_at_cs.uiuc.edu. I have sent out this mail with CC through the
old general list mpi-21_at_[hidden]

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From bronis at [hidden] Fri Jan 18 09:51:35 2008
From: bronis at [hidden] (Bronis R. de Supinski)
Date: Fri, 18 Jan 2008 07:51:35 -0800 (PST)
Subject: [mpi-21] MPI 2.1 Abstract text
In-Reply-To:
Message-ID:

Rolf:

Re:
> This is the currently proposed Abstract for MPI 2.1:

A couple of minor editing suggestions:

----------------------------------------------------------------------
This document describes the MPI standard version 2.1 in one combined
document.
This document combines the content from the previous
standards “MPI: A Message-Passing Interface Standard, June 12, 1995”
(MPI-1.1) and “MPI-2: Extensions to the Message-Passing Interface,
July, 1997” (MPI-1.2 and MPI-2.0) and errata documents from the MPI
Forum. The standard MPI-1.1 includes point-to-point message passing,
collective communications, group and communicator concepts, process
topologies, environmental management, and a profiling interface.
Language bindings for C and Fortran are defined. The MPI-1.2 chapter
of the MPI-2 document contains clarifications and corrections to the
MPI-1.1 standard and defines MPI-1.2. Together with corrections, these
MPI-1 documents were combined into MPI 1.3 (, 2008) which was used as
input for this document. The second input is the MPI-2 part of the
MPI-2 document, which describes additions to the MPI-1 standard and
defines the MPI standard version 2.0. These include miscellaneous
topics, process creation and management, one-sided communications,
extended collective operations, external interfaces, I/O, and
additional language bindings (C++). Additional clarifications and
errata corrections to MPI-2.0 are also included.
----------------------------------------------------------------------

I changed "book" to "document" for consistency, and added a "The " and
a comma needed for grammatical correctness.

Bronis

From rabenseifner at [hidden] Fri Jan 18 10:17:44 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 18 Jan 2008 17:17:44 +0100
Subject: [mpi-21] Incorrect use of MPI_IN_PLACE in description of MPI_ALLGATHER and MPI_ALLGATHERV
Message-ID:

This is a proposal for MPI 2.1, Ballot 4.

This is a follow up to: Incorrect use of MPI_IN_PLACE in description
of MPI_ALLGATHER and MPI_ALLGATHERV
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/gatherinplace/

Jeff is right with his error report. I propose to remove the wrong
text, because this (wrong) analogy is not needed for understanding the
meaning of "in place" in MPI_ALLGATHER and MPI_ALLGATHERV.

Proposal for MPI 2.1, Ballot 4:
-------------------------------
On MPI-2.0, page 159, lines 23-28, remove the text
"Specifically, ... ... n-1."

On MPI-2.0, page 158, lines 25-31, remove the text
"Specifically, ... ... n-1."
-------------------------------

Discussion should be done through the new mailing list
mpi-21_at_cs.uiuc.edu. I have sent out this mail with CC through the
old general list mpi-21_at_[hidden]

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
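For illustration (an editorial sketch, not part of the original mail):
a minimal C example of the "in place" variant of MPI_ALLGATHER whose
description the proposal corrects. Each process supplies its
contribution directly in its own rank's block of the receive buffer,
and the send arguments are ignored. The payload (one int per process
on MPI_COMM_WORLD) is an assumption; error checking is omitted.

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        int *recvbuf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process places its contribution in the block of
           recvbuf that corresponds to its own rank; with MPI_IN_PLACE
           the send buffer, count and datatype arguments are ignored. */
        recvbuf = (int *) malloc(size * sizeof(int));
        recvbuf[rank] = rank;

        MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                      recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        /* recvbuf now holds 0, 1, ..., size-1 on every process. */
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }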
From jsquyres at [hidden] Fri Jan 18 10:32:51 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 18 Jan 2008 11:32:51 -0500
Subject: [mpi-21] MPI 2.1 Versions history
In-Reply-To:
Message-ID:

This looks good enough to me. Note that the date for Version 2.0 is
July 18 (as cited in the 2.1 bullet).

On Jan 18, 2008, at 10:38 AM, Rolf Rabenseifner wrote:
> This is the currently proposed Versions History as shown on the
> back-page of the title page:
>
> Version 2.1: , 2008. This document combines the previous documents
> MPI 1.3 (????, 2008) and MPI-2.0 (July 18, 1997). [...]
>
> Version 2.0: , 1997. Beginning after the release of MPI 1.1, the MPI
> Forum began meeting to consider corrections and extensions. [...]
> [...]

-- 
Jeff Squyres
Cisco Systems

From rabenseifner at [hidden] Fri Jan 18 11:03:40 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 18 Jan 2008 18:03:40 +0100
Subject: [mpi-21] MPI-2 MPI_Comm_spawn inconsistency
In-Reply-To: <[mpi-21] MPI-2 MPI_Comm_spawn inconsistency>
Message-ID:

This is a proposal for MPI 2.1, Ballot 4.
This is a follow up to: Significance of non-root arguments to
MPI_COMM_SPAWN
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/spawn/

The problem may be clarified with the following proposal.

Proposal for MPI 2.1, Ballot 4:
-------------------------------
Add in MPI-2.0, page 88, after line 24:

Advice to users. If the non-root processes do not use
MPI_ERRCODES_IGNORE, then they have to allocate the appropriate number
of entries (see maxprocs at the root) in array_of_errcodes, although
the maxprocs argument is unused in non-root processes. (End of advice
to users.)
-------------------------------

Discussion should be done through the new mailing list
mpi-21_at_cs.uiuc.edu. I have sent out this mail with CC through the
old general list mpi-21_at_[hidden]

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
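For illustration (an editorial sketch, not part of the proposal): a
minimal C example of the advice above. Every process that passes a
real error-code array must size it for maxprocs entries, even though
the maxprocs argument itself is only significant at the root. The
executable name "worker" and the count of 4 are assumptions; error
checking is omitted.

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm;
        int maxprocs = 4;   /* only the root's value is significant */
        int *errcodes;

        MPI_Init(&argc, &argv);

        /* Every process that does not pass MPI_ERRCODES_IGNORE must
           provide room for maxprocs entries, non-root processes
           included. */
        errcodes = (int *) malloc(maxprocs * sizeof(int));

        MPI_Comm_spawn("worker", MPI_ARGV_NULL, maxprocs,
                       MPI_INFO_NULL, 0 /* root */, MPI_COMM_WORLD,
                       &intercomm, errcodes);

        free(errcodes);
        MPI_Finalize();
        return 0;
    }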
From erezh at [hidden] Fri Jan 18 11:12:52 2008
From: erezh at [hidden] (Erez Haba)
Date: Fri, 18 Jan 2008 09:12:52 -0800
Subject: [mpi-21] MPI 2.1 Abstract text
In-Reply-To:
Message-ID: <6B68D01C00C9994A8E150183E62A119E6F9BB8E041@NA-EXMSG-C105.redmond.corp.microsoft.com>

Shouldn't the first paragraph use the 1.3 version (rather than 1.1 or
1.2)? E.g.,

This document describes the MPI standard version 2.1 in one combined
document. This document combines the content from the previous
standards "MPI: A Message-Passing Interface Standard, " (MPI-1.3) and
"MPI-2: Extensions to the Message-Passing Interface, July, 1997"
(MPI-1.3 and MPI-2.0) and errata documents from the MPI Forum.

-----Original Message-----
From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Bronis R. de Supinski
Sent: Friday, January 18, 2008 7:52 AM
To: Mailing list for discussion of MPI 2.1
Cc: mpi-21_at_[hidden]
Subject: Re: [mpi-21] MPI 2.1 Abstract text
[...]

From rabenseifner at [hidden] Fri Jan 18 11:55:09 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 18 Jan 2008 18:55:09 +0100
Subject: [mpi-21] problem with MPI_Get_count and MPI_Probe
In-Reply-To: <[mpi-21] problem with MPI_Get_count and MPI_Probe>
Message-ID:

This is a proposal for MPI 2.1, Ballot 4.

This is a follow up to: Datatypes and MPI_PROBE
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/probedatatype/

Proposal for MPI 2.1, Ballot 4:
-------------------------------
MPI 1.1, page 222, line 48 reads

  used after a call to MPI_PROBE. (End of rationale.)

but should read

  used after a call to MPI_PROBE or MPI_IPROBE. With a status returned
  from MPI_PROBE or MPI_IPROBE, the same datatypes are allowed as in a
  call to MPI_RECV to receive this message. (End of rationale.)

Advice to users. To allocate the appropriate amount of memory for the
receive buffer, the same datatype as in the subsequent receive call
should be used to determine the needed space. In portable programs,
due to possible data conversions, it is not guaranteed that the count
returned by MPI_GET_COUNT with datatype MPI_BYTE is the correct amount
of memory needed for the receive buffer (although MPI_BYTE matches
every datatype). (End of advice to users.)
-------------------------------

Reason for the first part: The current MPI-1.1 text says "The datatype
argument should match the argument provided by the receive call that
set the status variable." With MPI_PROBE, there is no such receive
call.

Reason for the advice to users: It helps to write portable code.
Because malloc needs a byte count, users may write wrong programs by
using MPI_BYTE.
-------------------------------

Discussion should be done through the new mailing list
mpi-21_at_cs.uiuc.edu. I have sent out this mail with CC through the
old general list mpi-21_at_[hidden]

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
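For illustration (an editorial sketch, not part of the proposal): a
minimal C example of the advice above -- the datatype passed to
MPI_GET_COUNT is the same one the subsequent receive uses, rather than
MPI_BYTE. The choice of MPI_DOUBLE and MPI_ANY_SOURCE is an assumption
for the example; error checking is omitted.

    #include <stdlib.h>
    #include <mpi.h>

    /* Receive a message of a priori unknown length. */
    static double *recv_any(int *count)
    {
        MPI_Status status;
        double *buf;

        MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

        /* Query the count with the SAME datatype that the following
           MPI_Recv will use -- not with MPI_BYTE. */
        MPI_Get_count(&status, MPI_DOUBLE, count);

        buf = (double *) malloc(*count * sizeof(double));
        MPI_Recv(buf, *count, MPI_DOUBLE, status.MPI_SOURCE,
                 status.MPI_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        return buf;
    }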
From rabenseifner at [hidden] Fri Jan 18 12:13:42 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 18 Jan 2008 19:13:42 +0100
Subject: Re: [mpi-21] MPI 2.1 Abstract text
In-Reply-To: <6B68D01C00C9994A8E150183E62A119E6F9BB8E041@NA-EXMSG-C105.redmond.corp.microsoft.com>
Message-ID:

Erez,

At the beginning, I used the MPI 1.1 and MPI-2 documents to reference
the major documents that were produced by the MPI-1 and MPI-2 Forum,
and I summarize all other input as "and errata documents from the MPI
Forum" (which are small compared to the work of MPI 1.1 and MPI-2).

At the beginning, I intentionally do not separate 1.2 and 2.0; I am
referencing the documents (i.e., "MPI 1.1 and MPI-2" and not
"MPI 1.1 + 1.2 + 2.0"). In the further parts of the abstract, I point
to MPI 1.2 and MPI 1.3.

Best regards
Rolf

On Fri, 18 Jan 2008 09:12:52 -0800 Erez Haba wrote:
>Shouldn't the first paragraph use the 1.3 version (rather than 1.1 or
>1.2)? E.g.,
>
>This document describes the MPI standard version 2.1 in one combined
>document. This document combines the content from the previous
>standards "MPI: A Message-Passing Interface Standard, " (MPI-1.3) and
>"MPI-2: Extensions to the Message-Passing Interface, July, 1997"
>(MPI-1.3 and MPI-2.0) and errata documents from the MPI Forum.
>[...]

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
From jsquyres at [hidden] Fri Jan 18 12:28:39 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 18 Jan 2008 13:28:39 -0500
Subject: [mpi-21] Ballot 4 proposal: "static" predefined MPI C++ handles
Message-ID:

This mail is a proposal for MPI 2.1, ballot 4.

NOTE: This mail is a slight re-formatting of
http://www.mpi-forum.org/mail_archive/mpi-21/2008/01/msg00119.html
to be in ballot proposal format.

In MPI 2.0, page 9, lines 17-18 state:

"MPI provides certain predefined opaque objects and predefined, static
handles to these objects. The user must not free such objects. In C++,
this is enforced by declaring the handles to these predefined objects
to be {\tt static const}."

The "static" in the last sentence should be deleted.

Rationale:

When using namespaces, all the MPI symbols are in the namespace and
objects do not need to be static in a singleton object for the MPI
class.

Specifically: they are static *only* if you are using the singleton
object for the MPI class. The context for the statement is talking
about the constant quality; the "static" is superfluous -- describing
whether "static" is necessary or not would be too much for this
section.

-- 
Jeff Squyres
Cisco Systems

From jsquyres at [hidden] Fri Jan 18 12:38:54 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 18 Jan 2008 13:38:54 -0500
Subject: [mpi-21] Withdraw errata request
Message-ID:

Per http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/ ,
I withdraw my initial query about "MPI_FINALIZE in MPI-2 (with spawn)"
(mail discussing:
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/finalize/).
There is no problem; there is no need for clarification. Let's remove
it from the list of outstanding issues.

-- 
Jeff Squyres
Cisco Systems

From erezh at [hidden] Fri Jan 18 12:53:23 2008
From: erezh at [hidden] (Erez Haba)
Date: Fri, 18 Jan 2008 10:53:23 -0800
Subject: [mpi-21] Ballot 4 proposal: "static" predefined MPI C++ handles
In-Reply-To:
Message-ID: <6B68D01C00C9994A8E150183E62A119E6F9BB8E16E@NA-EXMSG-C105.redmond.corp.microsoft.com>

Aren't we also removing the const? :)
The text would still be incorrect; in some implementations
MPI::COMM_WORLD is not const qualified.

I suggest removing this sentence. (In C++....)

-----Original Message-----
From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Friday, January 18, 2008 10:29 AM
To: mpi-21_at_[hidden]; mpi-21_at_[hidden]
Subject: [mpi-21] Ballot 4 proposal: "static" predefined MPI C++ handles
[...]

From jsquyres at [hidden] Fri Jan 18 12:57:39 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 18 Jan 2008 13:57:39 -0500
Subject: Re: [mpi-21] Ballot 4 proposal: "static" predefined MPI C++ handles
In-Reply-To: <6B68D01C00C9994A8E150183E62A119E6F9BB8E16E@NA-EXMSG-C105.redmond.corp.microsoft.com>
Message-ID: <8505C6C3-4F78-4CF2-B918-B2AAED67925F@cisco.com>

On Jan 18, 2008, at 1:53 PM, Erez Haba wrote:
> Aren't we also removing the const? :)
> The text would still be incorrect; in some implementations
> MPI::COMM_WORLD is not const qualified.
>
> I suggest removing this sentence. (In C++....)

One issue at a time: this proposal is to remove "static" because it is
clearly wrong.

The "const" issue is different, and [much] more complicated. I am
working on another proposal about the "const" issue in this sentence --
I still haven't heard back from you (or others) about what is common
usage for C++ global variables (whether it is common to not specify
whether they should be const or not).

-- 
Jeff Squyres
Cisco Systems
From erezh at [hidden] Fri Jan 18 12:58:20 2008
From: erezh at [hidden] (Erez Haba)
Date: Fri, 18 Jan 2008 10:58:20 -0800
Subject: [mpi-21] MPI 2.1 Abstract text
In-Reply-To:
Message-ID: <6B68D01C00C9994A8E150183E62A119E6F9BB8E184@NA-EXMSG-C105.redmond.corp.microsoft.com>

Doesn't it make it simpler for the reader if you explicitly use 1.3
rather than 1.2 or 1.1? Less ambiguity.

-----Original Message-----
From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Rolf Rabenseifner
Sent: Friday, January 18, 2008 10:14 AM
To: Mailing list for discussion of MPI 2.1
Cc: mpi-21_at_[hidden]
Subject: Re: [mpi-21] MPI 2.1 Abstract text
[...]
From erezh at [hidden] Fri Jan 18 13:02:28 2008
From: erezh at [hidden] (Erez Haba)
Date: Fri, 18 Jan 2008 11:02:28 -0800
Subject: [mpi-21] Ballot 4 proposal: "static" predefined MPI C++ handles
In-Reply-To: <8505C6C3-4F78-4CF2-B918-B2AAED67925F@cisco.com>
Message-ID: <6B68D01C00C9994A8E150183E62A119E6F9BB8E193@NA-EXMSG-C105.redmond.corp.microsoft.com>

Okay; about one issue at a time.

*For this sentence* it does not matter what's a common usage for C++
global variables. Some MPI implementations would need to have
non-const qualified global objects. For the specific definition (which
seems to be presented as a C++ comment in the standard) I agree, you
want to know what to recommend.
Thanks,
.Erez

-----Original Message-----
From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Friday, January 18, 2008 10:58 AM
To: mpi-21_at_[hidden]
Cc: mpi-21_at_[hidden]
Subject: Re: [mpi-21] Ballot 4 proposal: "static" predefined MPI C++ handles
[...]

From jsquyres at [hidden] Fri Jan 18 13:13:33 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 18 Jan 2008 14:13:33 -0500
Subject: [mpi-21] const C++ MPI handles (take 2)
In-Reply-To: <6B68D01C00C9994A8E150183E62A119E6F9BB8E193@NA-EXMSG-C105.redmond.corp.microsoft.com>
Message-ID: <2661455B-7BBB-4ACE-BAFF-172B57F6D41F@cisco.com>

On Jan 18, 2008, at 2:02 PM, Erez Haba wrote:
> Okay; about one issue at a time.

Changing mail subject to reflect the discussion...

> *For this sentence* it does not matter what's a common usage for C++
> global variables. Some MPI implementations would need to have
> non-const qualified global objects.

Why? As I understand it, most (all?) MPI C++ implementations currently
only require some objects to be non-const because of the
standard-related issue that was already raised (Set_attr(),
Set_name(), Set_errhandler() methods not having const variants). Is
there a reason that an implementation would *need* MPI handles to be
non-const?

Per my prior mail, I believe that the standard should specify that
some of the methods on these classes should have const and non-const
variants, and then it should be fine to require that the predefined
handles be const.
So the question is still open: what's common practice in the C++
community regarding const/non-const global variable specification?
This question will be moot if you can demonstrate that an
implementation would need non-const C++ MPI predefined handles.

-- 
Jeff Squyres
Cisco Systems

From jsquyres at [hidden] Fri Jan 18 15:10:49 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 18 Jan 2008 16:10:49 -0500
Subject: [mpi-21] Ballot 4 proposal: fix attribute example 4.13
Message-ID: <0367FBAD-2ACE-4BBD-99D8-9D2FB6E0F92B@cisco.com>

Per http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/ ,
the errata item entitled "Error in Example 4.13 in MPI-2 (Use of
Attributes in C and Fortran)". I believe that this errata item
supersedes the errata item "Interlanguage use of Attributes".

See the mail discussing:

http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/attrcandf/

Proposal:

Change MPI-2:4.12, p58:36 from:

  IF (val.NE.5) THEN CALL ERROR

to

  IF (val.NE.address_of_i) THEN CALL ERROR

Rationale:

MPI-2:4.12 p58:12-13 and 16-18 clearly state that if an attribute is
set by C, retrieving it in Fortran will obtain the address of the
attribute.

See the mails for more discussion, including an exhaustive list of
what happens for each of the 9 possibilities of setting and getting
attribute values between the different languages.

-- 
Jeff Squyres
Cisco Systems
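For illustration (an editorial sketch, not part of the proposal): a
minimal C example of the C side of the rule cited in the rationale. C
attaches the address of the attribute value, so a later MPI_ATTR_GET
in Fortran obtains that address, not the value 5 stored there -- which
is what the corrected comparison tests. The keyval handling is an
assumption for the example; error checking is omitted.

    #include <mpi.h>

    static int i = 5;   /* the attribute value lives at &i */

    void set_attr_from_c(void)
    {
        int keyval;

        /* Attach the ADDRESS of i as the attribute value. Retrieving
           the attribute from Fortran yields this address (as an
           INTEGER), not the value 5 stored at that address. */
        MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN,
                          &keyval, NULL);
        MPI_Attr_put(MPI_COMM_WORLD, keyval, &i);
    }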
From erezh at [hidden] Fri Jan 18 19:35:21 2008
From: erezh at [hidden] (Erez Haba)
Date: Fri, 18 Jan 2008 17:35:21 -0800
Subject: Re: [mpi-21] const C++ MPI handles (take 2)
In-Reply-To: <2661455B-7BBB-4ACE-BAFF-172B57F6D41F@cisco.com>
Message-ID: <6B68D01C00C9994A8E150183E62A119E6F9BB8E52E@NA-EXMSG-C105.redmond.corp.microsoft.com>

For example, an implementation might choose to cache the error handler
for MPI::COMM_WORLD (in the MPI::Comm object) and call it itself on
error so it can pass in the right object to the error handler. Thus
requiring MPI::COMM_WORLD not to be const.

-----Original Message-----
From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Friday, January 18, 2008 11:14 AM
To: mpi-21_at_[hidden]
Cc: mpi-21_at_[hidden]
Subject: [mpi-21] const C++ MPI handles (take 2)
[...]

From jsquyres at [hidden] Fri Jan 18 19:48:52 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Fri, 18 Jan 2008 20:48:52 -0500
Subject: Re: [mpi-21] const C++ MPI handles (take 2)
In-Reply-To: <6B68D01C00C9994A8E150183E62A119E6F9BB8E52E@NA-EXMSG-C105.redmond.corp.microsoft.com>
Message-ID: <2AF5F3B2-803E-44C4-B77D-DF574D07C59D@cisco.com>

Yes, that's the way the original C++ bindings were implemented. But
it's not required or necessary to do that; that C errhandler could
easily be cached somewhere else.

More specifically, isn't it better to have a const object to allow for
compiler optimizations? (I'm not a compiler guru, but I thought the
point of why we originally made the C++ handles be const was the
argument for potential compiler optimizations.)

On Jan 18, 2008, at 8:35 PM, Erez Haba wrote:
> For example, an implementation might choose to cache the error
> handler for MPI::COMM_WORLD (in the MPI::Comm object) and call it
> itself on error so it can pass in the right object to the error
> handler. Thus requiring MPI::COMM_WORLD not to be const.
> [...]

-- 
Jeff Squyres
Cisco Systems
From erezh at [hidden] Sat Jan 19 12:21:41 2008
From: erezh at [hidden] (Erez Haba)
Date: Sat, 19 Jan 2008 10:21:41 -0800
Subject: Re: [mpi-21] const C++ MPI handles (take 2)
In-Reply-To: <2AF5F3B2-803E-44C4-B77D-DF574D07C59D@cisco.com>
Message-ID: <6B68D01C00C9994A8E150183E62A119E6F9BB8E5FB@NA-EXMSG-C105.redmond.corp.microsoft.com>

I agree, const is the better way to implement. The question is: do you
want to *force* the optimized implementation?

-----Original Message-----
From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Friday, January 18, 2008 5:49 PM
To: mpi-21_at_[hidden]
Cc: mpi-21_at_[hidden]
Subject: Re: [mpi-21] const C++ MPI handles (take 2)
[...]

From jsquyres at [hidden] Sat Jan 19 13:16:55 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Sat, 19 Jan 2008 14:16:55 -0500
Subject: Re: [mpi-21] const C++ MPI handles (take 2)
In-Reply-To: <6B68D01C00C9994A8E150183E62A119E6F9BB8E5FB@NA-EXMSG-C105.redmond.corp.microsoft.com>
Message-ID: <482CF2F7-15E3-435A-B588-895713CD81ED@cisco.com>

Well, we pretty much have so far. :-)

The const was removed in ballot 2, but I wonder how many people
actually noticed (I didn't, until a few days ago).

On Jan 19, 2008, at 1:21 PM, Erez Haba wrote:
> I agree, const is the better way to implement. The question is: do
> you want to *force* the optimized implementation?
> [...]

-- 
Jeff Squyres
Cisco Systems
From erezh at [hidden] Sat Jan 19 16:59:57 2008
From: erezh at [hidden] (Erez Haba)
Date: Sat, 19 Jan 2008 14:59:57 -0800
Subject: Re: [mpi-21] const C++ MPI handles (take 2)
In-Reply-To: <482CF2F7-15E3-435A-B588-895713CD81ED@cisco.com>
Message-ID: <6B68D01C00C9994A8E150183E62A119E6F9BB8E61C@NA-EXMSG-C105.redmond.corp.microsoft.com>

If the 'const' was removed in ballot 2, why do you want to leave the
text that refers to it?

-----Original Message-----
From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Jeff Squyres
Sent: Saturday, January 19, 2008 11:17 AM
To: mpi-21_at_[hidden]
Cc: mpi-21_at_[hidden]
Subject: Re: [mpi-21] const C++ MPI handles (take 2)

Well, we pretty much have so far. :-)

The const was removed in ballot 2, but I wonder how many people
actually noticed (I didn't, until a few days ago).
[...]
From jsquyres at [hidden] Sat Jan 19 19:02:46 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Sat, 19 Jan 2008 20:02:46 -0500
Subject: Re: [mpi-21] const C++ MPI handles (take 2)
In-Reply-To: <6B68D01C00C9994A8E150183E62A119E6F9BB8E61C@NA-EXMSG-C105.redmond.corp.microsoft.com>
Message-ID: <19D729A6-C2AA-4F5F-BC12-506E1D7F9E6F@cisco.com>

Because I have a mail proposal pending to restore the "const" that was
[erroneously, IMHO] removed in ballot 2. :-D

On Jan 19, 2008, at 5:59 PM, Erez Haba wrote:
> If the 'const' was removed in ballot 2, why do you want to leave the
> text that refers to it?
> [...]

-- 
Jeff Squyres
Cisco Systems
>> >> -----Original Message----- >> From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] >> On Behalf Of Jeff Squyres >> Sent: Friday, January 18, 2008 5:49 PM >> To: mpi-21_at_[hidden] >> Cc: mpi-21_at_[hidden] >> Subject: Re: [mpi-21] const C++ MPI handles (take 2) >> >> Yes, that's the way the original C++ bindings were implemented. But >> it's not required or necessary to do that; that C errhandler could >> easily be cached somewhere else. >> >> More specifically, isn't it better to have a const object to allow >> for >> compiler optimizations? (I'm not a compiler guru, but I thought the >> point of why we originally made the C++ handles be const was on the >> argument for potential compiler optimizations) >> >> >> On Jan 18, 2008, at 8:35 PM, Erez Haba wrote: >> >>> For example an implementation might choose to cache the error >>> handler for MPI::COMM_WORD (in the MPI::Comm object) and call it >>> itself on error so it can pass in the right object to the error >>> handler. >>> Thus requiring MPI::COMM_WORLD not to be const. >>> >>> >>> -----Original Message----- >>> From: owner-mpi-21_at_[hidden] [mailto:owner-mpi-21_at_[hidden]] >>> On Behalf Of Jeff Squyres >>> Sent: Friday, January 18, 2008 11:14 AM >>> To: mpi-21_at_[hidden] >>> Cc: mpi-21_at_[hidden] >>> Subject: [mpi-21] const C++ MPI handles (take 2) >>> >>> On Jan 18, 2008, at 2:02 PM, Erez Haba wrote: >>> >>>> Okay; about one issue at a time. >>> >>> Changing mail subject to reflect the discussion... >>> >>>> *For this sentence* it does not matter what's a common usage for C >>>> ++ >>>> global variables. Some MPI implementations would need to have non- >>>> const qualified global objects. >>> >>> Why? As I understand it, most (all?) MPI C++ implementations >>> currently only require some objects to be non-const because of the >>> standard-related issue that was already raised (Set_attr(), >>> Set_name(), Set_errhandler() methods not having const variants). Is >>> there a reason that an implementation would *need* MPI handles to be >>> non-const? >>> >>> Per my prior mail, I believe that the standard should specify that >>> some of the methods on these classes should have const and non-const >>> variants, and then it should be fine to require that the predefined >>> handles be const. >>> >>> So the question is still open: what's common practice in the C++ >>> community regarding const/non-const global variable specification? >>> This question will be moot if you can demonstrate that an >>> implementation would need non-const C++ MPI predefined handles. >>> >>> -- >>> Jeff Squyres >>> Cisco Systems >>> >> >> >> -- >> Jeff Squyres >> Cisco Systems >> > > > -- > Jeff Squyres > Cisco Systems > -- Jeff Squyres Cisco Systems From rabenseifner at [hidden] Mon Jan 21 04:35:39 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Mon, 21 Jan 2008 11:35:39 +0100 Subject: [mpi-21] MPI-2.1, 1-sided and MPI_PROC_NULL --> Ballot 4 In-Reply-To: <[mpi-21] MPI-2.1, 1-sided and MPI_PROC_NULL --> Ballot 4> Message-ID: This is a proposal for MPI 2.1, Ballot 4. 
This is a follow up to: MPI_PROCNULL and RMA Part 2 in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/procnull2/ Proposal for MPI 2.1, Ballot 4: ------------------------------- Proposed text MPI-2, page 114, after line 4 (and after the lines added about MPI_PROC_NULL), add After an RMA operation with rank MPI_PROC_NULL, it is still necessary to finish the RMA epoch with the synchronization method that has started the epoch. ------------------------------- Reason: With Ballot 2, the Forum included MPI_PROC_NULL also in RMA calls. The proposed text clarifies a still existing open question. ------------------------------- Discussion should be done through the new mailing list mpi-21_at_cs.uiuc.edu. I have sent out this mail with CC through the old general list mpi-21_at_[hidden] Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From rabenseifner at [hidden] Mon Jan 21 04:46:25 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Mon, 21 Jan 2008 11:46:25 +0100 Subject: [mpi-21] Ballot 4 proposal: fix attribute example 4.13 In-Reply-To: <0367FBAD-2ACE-4BBD-99D8-9D2FB6E0F92B@cisco.com> Message-ID: We'll put it on Ballot 4. It finishes two discussion streams: ----------------------------------------- Interlanguage use of Attributes http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/getattr/ Subject: Re: clarification on inter-language interoperability of attributes See also Error in Example 4.13 in MPI-2 Error in Example 4.13 in MPI-2 (Use of Attributes in C and Fortran) http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/attrcandf/ Subject: MPI-2 attributes question See also Interlanguage use of Attributes. ----------------------------------------- I checked again all the cross references in MPI-1 and MPI-2. It seems that this was the only inconsistency in attribute caching between the Fortran and C interfaces. Best regards Rolf On Fri, 18 Jan 2008 16:10:49 -0500 Jeff Squyres wrote: >Per http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/ , the errata item entitled "Error in Example 4.13 in MPI-2 (Use of Attributes in C and Fortran)". I believe that this errata item supersedes the errata item "Interlanguage use of Attributes". > >See the mail discussing: > >http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/attrcandf/ > >Proposal: > >Change MPI-2:4.12, p58:36 from: > IF (val.NE.5) THEN CALL ERROR >to > IF (val.NE.address_of_i) THEN CALL ERROR > >Rationale: > >MPI-2:4.12 p58:12-13 and 16-18 clearly state that if an attribute is set by C, retrieving it in Fortran will obtain the address of the attribute. > >See the mails for more discussion, including an exhaustive list of what happens for each of the 9 possibilities of setting and getting attribute values between the different languages. > >-- >Jeff Squyres >Cisco Systems > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . ..
fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From jsquyres at [hidden] Mon Jan 21 08:42:02 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 21 Jan 2008 09:42:02 -0500 Subject: [mpi-21] Ballot 4 proposal: INOUT arguments Message-ID: <1806EFCB-A2C1-471D-97E7-2B0FB2F35525@cisco.com> This is not already on Bill's errata page. Proposal: change the INOUT designation of the MPI handle parameters in several MPI-2 functions to be IN, because the values of the handles are not changing. Only the underlying MPI objects are changing. The C bindings for each of these functions do *not* pass the MPI handle by reference, therefore disallowing the possibility of these parameters actually being INOUT. By the same argument, the C++ methods for these functions should be const since the C++ object invoking the method will not be changed. The functions in question are: MPI_*_SET_NAME for communicators, datatypes, windows MPI_*_SET_ATTR for communicators, datatypes, windows MPI_*_SET_ERRHANDLER for communicators, files, windows Locations of specific text to be changed (INOUT -> IN, add "const" to C++ methods): MPI_COMM_SET_NAME: MPI-2:8.4, p177:35,44 MPI-2:A.8.5, p336:28 MPI_TYPE_SET_NAME: MPI-2:8.4, p179:41, p180:2 MPI-2:A.8.5, p337:2 MPI_WIN_SET_NAME: MPI-2:8.4, p181:25,35 MPI-2:A.8.5, p337:36 MPI_COMM_SET_ATTR: MPI-2:8.8.1, p201:6,17 MPI-2:A.8.5, p336:27 MPI_WIN_SET_ATTR: MPI-2:8.8.2, p204:2,12 MPI-2:A.8.5, p337:35 MPI_TYPE_SET_ATTR: MPI-2:8.8.3, p206:26,37 MPI-2:A.8.5, p337:1 MPI_COMM_SET_ERRHANDLER: MPI-2:4.13.1, p62:35,43 MPI-2:A.8.1, p331:38 MPI_WIN_SET_ERRHANDLER: MPI-2:4.13.2, p64:2,10 MPI-2:A.8.1, p333:13 MPI_FILE_SET_ERRHANDLER: MPI-2:4.13.3, p65:14,22 MPI-2:A.8.1, p332:34 NOTE: The "const" issues of this proposal will be dependent upon a 2nd proposal about re-instituting the "const" on MPI predefined handles that was removed in ballot 2. -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Mon Jan 21 09:37:42 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 21 Jan 2008 10:37:42 -0500 Subject: [mpi-21] Ballot 4 proposal: "const" predefined MPI C++ handles Message-ID: <42E85CBF-C97B-4C89-BE2E-F13D77976097@cisco.com> This is not already on Bill's errata page. Proposal: ballot 2 removed "const" from several predefined MPI C++ handles. This was incorrect; they should have been left const. Remove text from the "Updated MPI-2.0 errata" document: p7:33-41. NOTE: this proposal is dependent upon the proposal to change some INOUT parameters to IN and add "const" to the corresponding C++ methods. -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Mon Jan 21 09:40:39 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 21 Jan 2008 10:40:39 -0500 Subject: [mpi-21] Ballot 4 proposal: MPI::COMM_WORLD and MPI::COMM_SELF should be const Message-ID: This is not already on Bill's errata page. Proposal: Make MPI::COMM_WORLD and MPI::COMM_SELF be "const". Change MPI-2:B.2 p345:18 from // Type: MPI::Intracomm to // Type: const MPI::Intracomm Rationale: The COMM_WORLD and COMM_SELF C++ handles were erroneously not marked "const" because of the incorrect INOUT MPI handle parameter designation of the MPI_COMM_SET_NAME, MPI_COMM_SET_ATTR, and MPI_COMM_SET_ERRHANDLER functions. This caused the C++ bindings methods to not be const, resulting in compile errors if COMM_WORLD was const and you invoked any of the methods listed above (e.g., MPI::COMM_WORLD.Set_errhandler(...)).
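For illustration, a minimal sketch of that failure mode (a sketch only; my_errhandler_fn stands for any application-defined error handler callback):

   MPI::Errhandler eh = MPI::Comm::Create_errhandler(my_errhandler_fn);
   MPI::COMM_WORLD.Set_errhandler(eh);  // fails to compile when COMM_WORLD
                                        // is const but Set_errhandler() is
                                        // a non-const method

With the Set_* methods declared const (per the INOUT proposal above), the same call compiles cleanly.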
The proper solution is to have all the MPI handle arguments be IN instead of INOUT (covered in another proposal) and therefore have the Set_* functions be const (also covered in that other proposal). Once that solution is in place, COMM_WORLD and COMM_SELF should be marked as "const". Not that many other predefined MPI C++ handles are already const; some erroneously had their "const" designation removed in ballot 2 -- a different proposal seeks to restore their "const" status. NOTE: This proposal depends on the "change some INOUT parameters to IN" proposal. -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Mon Jan 21 09:43:15 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 21 Jan 2008 10:43:15 -0500 Subject: [mpi-21] Ballot 4 proposal: MPI::COMM_WORLD and MPI::COMM_SELF should be const In-Reply-To: Message-ID: <5C9A93B7-2FD4-4433-8C2B-022CF9FC0B31@cisco.com> On Jan 21, 2008, at 10:40 AM, Jeff Squyres wrote: > Not that many other predefined MPI C++ handles are already const; ... Gaa! That should be "Note" (vs. "Not"). That somewhat changes the meaning of the sentence. :-) -- Jeff Squyres Cisco Systems From rabenseifner at [hidden] Mon Jan 21 10:48:39 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Mon, 21 Jan 2008 17:48:39 +0100 Subject: [mpi-21] Ballot 4 proposal: MPI::COMM_WORLD and MPI::COMM_SELF should be const In-Reply-To: Message-ID: As far as I know, this proposal is technically wrong: MPI-2, Sect.2.5.4, page 10, lines 40-41 clearly allows the changing of the value of MPI::COMM_WORLD and MPI::COMM_SELF in MPI_Init and MPI_Finalize. Therefore, I do not expect that const is correct. Remember that in Ballot 1&2, the MPI Forum already decided MPI-2 Page 343, line 44 Remove the const from const MPI::Datatype. Page 344, lines 13, 23, 32, 38, and 47 Remove the const from const MPI::Datatype. Page 345, lines 5 and 11 Remove the const from const MPI::Datatype. See http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/errata-20-adopted.pdf And at the Jan. 2008 meeting, positive straw votes were given for (in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/ballot3.pdf ) 6. MPI-2, page 345, line 37: Remove the const from const MPI::Op. MPI-2, page 346, line 20: Remove the const from const MPI::Group. MPI-2, page 346, add after line 34: Advice to implementors. If an implementation does not change the value of predefined handles while execution of MPI Init, the implementation is free to define the predefined operation handles as const MPI::Op and the predefined group handle MPI::GROUP EMPTY as const MPI::Group. Other predefined handles must not be ”const” because they are allowed as INOUT argument in the MPI COMM SET NAME/ATTR and MPI TYPE SET NAME/ATTR routines. I'll move this item from Ballot 3 to Ballot 4 because the last sentence is not needed if your proposal "Ballot 4 proposal: INOUT arguments" is accepted. Best regards Rolf On Mon, 21 Jan 2008 10:40:39 -0500 Jeff Squyres wrote: >This is not already on Bill's errata page. > >Proposal: Make MPI::COMM_WORLD and MPI::COMM_SELF be "const". > >Change MPI-2:B.2 p345:18 from > > // Type: MPI::Intracomm >to > // Type: const MPI::Intracomm > >Rationale: The COMM_WORLD and COMM_SELF C++ handles were erroneously >not marked "const" because of the incorrect INOUT MPI handle parameter >designation of the MPI_COMM_SET_ERRHANDLER, MPI_COMM_SET_ATTR, and >MPI_COMM_SET_ERRHANDLER functions.
This caused the C++ bindings >methods to not be const, resulting in compile errors if COMM_WORLD was >const and you invoked any of the methods listed above (e.g., >MPI::COMM_WORLD.Set_errhandler(...)). > >The proper solution is to have all the MPI handle arguments be IN >instead of INOUT (covered in another proposal) and therefore have the >Set_* functions be const (also covered that other proposal). Once >that solution is in place, COMM_WORLD and COMM_SELF should be marked >as "const". Not that many other predefined MPI C++ handles are >already const; some erroneously had their "const" designation removed >in ballot 2 -- a different proposal seeks to restore their "const" >status. > >NOTE: This proposal depends on the "change some INOUT parameters to >IN" proposal. > >-- >Jeff Squyres >Cisco Systems > >_______________________________________________ >mpi-21 mailing list >mpi-21_at_[hidden] >http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From rabenseifner at [hidden] Mon Jan 21 11:25:45 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Mon, 21 Jan 2008 18:25:45 +0100 Subject: [mpi-21] Ballot 4 - Re: MPI-2 thread safety and collectives Message-ID: This is a proposal for MPI 2.1, Ballot 4. This is a follow up to: Thread safety and collective communication in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/thread-safety/index.htm After checking the e-mails and looking at - MPI-2 8.7.2 page 195 lines 6-9 Collective calls Matching of collective calls on a communicator, window, or file handle is done according to the order in which the calls are issued at each process. If concurrent threads issue such calls on the same communicator, window or file handle, it is up to the user to make sure the calls are correctly ordered, using interthread synchronization. - MPI-2 6.2.1 Window Creation, page 110, lines 10-12: The call returns an opaque object that represents the group of processes that own and access the set of windows, and the attributes of each window, as specified by the initialization call. - MPI-2 9.2. Opening a File, page 211, line 46 - page 212, line 2: Note that the communicator comm is unaffected by MPI FILE OPEN and continues to be usable in all MPI routines (e.g., MPI SEND). Furthermore, the use of comm will not interfere with I/O behavior. it seems that the standard should be clarified. Proposal for MPI 2.1, Ballot 4: ------------------------------- Add new paragraphs after MPI-2, 8.7.2 page 195 line 9 (the end of the clarification on "Collective calls"): Advice to users. With three concurrent threads in each MPI process of a communicator comm, it is allowed that thread A in each MPI process calls a collective operation on comm, thread B calls a file operation on an existing filehandle that was formerly opened on comm, and thread C invokes one-sided operations on an existing window handle that was also formerly created on comm. (End of advice to users.) Rationale. 
As already specified in MPI_FILE_OPEN and MPI_WIN_CREATE, a file handle and a window handle inherit only the group of processes of the underlying communicator, but not the communicator itself. Accesses to communicators, window handles and file handles cannot affect one another. (End of rationale.) Advice to implementors. If the implementation of file or window operations wants to internally use MPI communication, then a duplicated communicator handle may be cached on the file or window handle. (End of advice to implementors.) ------------------------------- Reason: The emails have shown that the current MPI-2 text can easily be misunderstood. ------------------------------- Discussion should be done through the new mailing list mpi-21_at_cs.uiuc.edu. I have sent out this mail with CC through the old general list mpi-21_at_[hidden] Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From rabenseifner at [hidden] Mon Jan 21 15:52:37 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Mon, 21 Jan 2008 22:52:37 +0100 Subject: [mpi-21] Ballot 4 - Re: MPI Process Topologies - discussion? Message-ID: This is a proposal for MPI 2.1, Ballot 4. This is a follow up to: Questions about Graph_create, Cart_create, and Cart_coords in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/topo/ Based on the questions for clarification, I propose: Proposal for MPI 2.1, Ballot 4: ------------------------------- MPI-1.1, Sect. 6.5.3, page 181, lines 1-3 read: If the size, nnodes, of the graph is smaller than the size of the group of comm, then some processes are returned MPI_COMM_NULL, in analogy to MPI_CART_CREATE and MPI_COMM_SPLIT. but should read If the size, nnodes, of the graph is smaller than the size of the group of comm, then some processes are returned MPI_COMM_NULL, in analogy to MPI_CART_CREATE and MPI_COMM_SPLIT. If the graph is empty, i.e., nnodes == 0, then MPI_COMM_NULL is returned in all processes. ------ Rationale for this clarification: As in MPI_COMM_CREATE, empty groups are allowed, but empty groups are described here in a different way, and should therefore be mentioned explicitly. ------------------------------- After MPI-1.1, Sect. 6.5.3, page 181, line 35, the following paragraph should be added: It is allowed that at a process, a neighbor process is defined multiple times in the list of neighbors (i.e., multiple edges). It is also allowed that a process is a neighbor to itself (i.e., a self loop in the graph). It is allowed that the adjacency matrix is not symmetric. Advice to users. Whether the use of multiple edges or a non-symmetric adjacency matrix has performance implications is not defined by this standard. The definition of a node-neighbor edge does not imply a direction of the communication. (End of advice to users.) ------ Rationale for this clarification: Example 6.3, MPI-1.1, page 185, line 29 - page 186, line 13, clearly shows multiple edges between nodes and self loops: the two (multiple) self-loops of node 0 and of node 7. It is nowhere forbidden that the graph has edges only in one direction.
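For illustration, a minimal sketch of such a graph (shown here with the C++ bindings; the concrete values are only an example): node 0 has the neighbor list {0, 0, 1} -- a double self loop plus a one-directional edge to node 1 -- while node 1 lists no neighbors at all:

   int nnodes  = 2;
   int index[] = { 3, 3 };      // cumulative neighbor counts per node
   int edges[] = { 0, 0, 1 };   // neighbors of node 0: itself (twice), then 1
   MPI::Graphcomm graph =
       MPI::COMM_WORLD.Create_graph(nnodes, index, edges, false);
   // On a communicator with more than nnodes processes, the processes
   // with rank >= nnodes are returned MPI::COMM_NULL, as clarified above.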
------------------------------- After MPI-1.1, Sect. 6.4, page 178, end of the sentence on lines 6-7, the following sentence should be added: All input arguments must have identical values on all processes of the group of comm_old. ------ Rationale for this clarification: This statement is missing. ------------------------------- I have split this discussion track into this track on graph topologies and an additional track (will come) on 0-dim MPI_CART_CREATE and MPI_CART_SUB. Discussion should be done through the new mailing list mpi-21_at_cs.uiuc.edu. I have sent out this mail with CC through the old general list mpi-21_at_[hidden] Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From jsquyres at [hidden] Mon Jan 21 19:50:38 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 21 Jan 2008 20:50:38 -0500 Subject: [mpi-21] Ballot 4 proposal: MPI::COMM_WORLD and MPI::COMM_SELF should be const In-Reply-To: Message-ID: <081E6EED-3861-48C3-A1A3-1C5C0DBD7B67@cisco.com> On Jan 21, 2008, at 11:48 AM, Rolf Rabenseifner wrote: > As far as I know, this propsal is technical wrong: > MPI-2, Sect.2.5.4, page 10, lines 40-41 clearly allows the changing of > the value of MPI::COMM_WORLD and MPI::COMM_SELF in MPI_Init and > MPI_Finalize. > Therefore, I do not expect, that const is correct. Hmm. If you include the previous two lines (38-41), I read the overall interpretation as the opposite of what you are saying: "All named constants, with the exceptions noted below for Fortran, can be used in initialization expressions or assignments. These constants do not change values during execution. Opaque objects accessed by constant handles are defined and do not change value between MPI initialization MPI_INIT and MPI completion MPI_FINALIZE." To me, that says that the *MPI objects* ("object" in the MPI sense, not the C++ sense) do not change value between INIT and FINALIZE, but the *handles* must be valid for initialization before INIT. I think that the overloading of the word "object" is what is confusing here. Yes, the predefined MPI C++ handles such as MPI::COMM_WORLD are C++ objects (in the C++ sense of the word "object"). But in the MPI sense, MPI::COMM_WORLD and friends are *handles*, representing back-end MPI objects (in the MPI sense of the word "object"). It is critical to keep the difference between the two in mind. The "opaque objects" referred to in MPI-2:2.5.4 are *MPI* objects, not C++ objects. The C++ MPI named constants are just like the C MPI named constants: they must be suitable for initialization [before MPI_INIT] and cannot change value during execution. For example, this code is valid: int main(int argc, char* argv[]) { MPI::Intracomm cxx_comm = MPI::COMM_WORLD; MPI_Comm c_comm = MPI_COMM_WORLD; MPI_Init(NULL, NULL); if (SENDER == MPI::COMM_WORLD.Get_rank()) { // The cxx_comm and c_comm handles are valid even though they were // assigned before MPI_INIT cxx_comm.Send(...); MPI_Send(..., c_comm); } else if (RECEIVER == MPI::COMM_WORLD.Get_rank()) { // receive stuff } MPI_Finalize(); return 0; } > Remember that in Ballot 1&2, the MPI Forum already decided > MPI-2 Page 343, line 44 > Remove the const from const MPI::Datatype.
> Page 344, lines 13, 23, 32, 38, and 47 > Remove the const from const MPI::Datatype. > Page 345, lines 5 and 11 > Remove the const from const MPI::Datatype. > See http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/errata-20-adopted.pdf Correct. But see my other proposal about restoring those const's (subject: "const" predefined MPI C++ handles"). > And on Jan. 2008 meeting, positve straw vtes wer given for > (in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/ballot3.pdf > ) > 6. MPI-2, page 345, line 37: Remove the const from const MPI::Op. > MPI-2, page 346, line 20: Remove the const from const MPI::Group. > MPI-2, page 346, add after line 34: > Advice to implementors. If an implementation does not change > the value of > predefined handles while execution of MPI Init, the > implementation is free to > define the predefined operation handles as const MPI::Op and > the predefined > group handle MPI::GROUP EMPTY as const MPI::Group. Other > predefined > handles must not be ”const” because they are allowed as INOUT > argument in > the MPI COMM SET NAME/ATTR and MPI TYPE SET NAME/ATTR routines. Note that this exact item did *not* come up in Bill's slides in the Chicago Jan 08 meeting. It was on the ballot, but this specific item did not get raised during the meeting (so it did not get positive straw votes). Hubert pointed out this item to me on Wednesday and I didn't remember it coming up on Monday or Tuesday. So I asked Bill about it on Wednesday after the meeting; he wasn't sure that he had remembered to put this specific item in the ballot 3 discussion during the meeting. I don't have notes about this item in the discussion, either. I may have missed it...? (but I usually tend to perk up at C++ items) > I'll move this item from Ballot 3 to Ballot 4 because the last > sentence > is not needed when your Proposal "Ballot 4 proposal: INOUT arguments" > will be accepted. All of these ballots are unfortunately intertwined, even though they are technically separate issues -- sorry, everyone! Note that these 3 proposals, taken together, represent a very, very complex set of issues. I have another rollup mail coming on these issues... please stand by. I am struggling to distill a short, concise, easy-to-understand explanation of the issues involved -- I spent a lot of time with Andrew Lumsdaine and Erez today discussing exactly these issues... > > Best regards > Rolf > > On Mon, 21 Jan 2008 10:40:39 -0500 > Jeff Squyres wrote: >> This is not already on Bill's errata page. >> >> Proposal: Make MPI::COMM_WORLD and MPI::COMM_SELF be "const". >> >> Change MPI-2:B.2 p345:18 from >> >> // Type: MPI::Intracomm >> to >> // Type: const MPI::Intracomm >> >> Rationale: The COMM_WORLD and COMM_SELF C++ handles were erroneously >> not marked "const" because of the incorrect INOUT MPI handle >> parameter >> designation of the MPI_COMM_SET_ERRHANDLER, MPI_COMM_SET_ATTR, and >> MPI_COMM_SET_ERRHANDLER functions. This caused the C++ bindings >> methods to not be const, resulting in compile errors if COMM_WORLD >> was >> const and you invoked any of the methods listed above (e.g., >> MPI::COMM_WORLD.Set_errhandler(...)). >> >> The proper solution is to have all the MPI handle arguments be IN >> instead of INOUT (covered in another proposal) and therefore have the >> Set_* functions be const (also covered that other proposal). Once >> that solution is in place, COMM_WORLD and COMM_SELF should be marked >> as "const". 
>> Not that many other predefined MPI C++ handles are >> already const; some erroneously had their "const" designation removed >> in ballot 2 -- a different proposal seeks to restore their "const" >> status. >> >> NOTE: This proposal depends on the "change some INOUT parameters to >> IN" proposal. >> >> -- >> Jeff Squyres >> Cisco Systems >> >> _______________________________________________ >> mpi-21 mailing list >> mpi-21_at_[hidden] >> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > -- Jeff Squyres Cisco Systems From rabenseifner at [hidden] Tue Jan 22 10:29:51 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Tue, 22 Jan 2008 17:29:51 +0100 Subject: [mpi-21] Ballot 4 proposal: INOUT arguments In-Reply-To: <1806EFCB-A2C1-471D-97E7-2B0FB2F35525@cisco.com> Message-ID: An alternative proposal would be to add a clarifying sentence in Terms and Conventions, MPI-2.0 Sect. 2.5.1, page 9, line 7: Handle arguments can be marked as INOUT to indicate that the handle itself may be changed by the routine (e.g., in MPI_TYPE_COMMIT), or to indicate that the object referenced by the handle may be changed but the handle itself is kept unchanged (e.g., MPI_TYPE_SET_NAME). Rolf Rabenseifner On Mon, 21 Jan 2008 09:42:02 -0500 Jeff Squyres wrote: >This is not already on Bill's errata page. > >Proposal: change the INOUT designation of the MPI handle parameters in several MPI-2 functions to be IN, because the values of the handles are not changing. Only the underlying MPI objects are changing. The C bindings for each of these functions do *not* pass the MPI handle by reference, therefore disallowing the possibility of these parameters actually being INOUT. By the same argument, the C++ methods for these functions should be const since the C++ object invoking the method will not be changed. > >The functions in question are: > >MPI_*_SET_NAME for communicators, datatypes, windows >MPI_*_SET_ATTR for communicators, datatypes, windows >MPI_*_SET_ERRHANDLER for communicators, files, windows > >Locations of specific text to be changed (INOUT -> IN, add "const" to C ++ methods): > >MPI_COMM_SET_NAME: MPI-2:8.4, p177:35,44 > MPI-2:A.8.5, p336:28 >MPI_TYPE_SET_NAME: MPI-2:8.4, p179:41, p180:2 > MPI-2:A.8.5, p337:2 >MPI_WIN_SET_NAME: MPI-2:8.4, p181:25,35 > MPI-2:A.8.5, p337:36 > >MPI_COMM_SET_ATTR: MPI-2:8.8.1, p201:6,17 > MPI-2:A.8.5, p336:27 >MPI_WIN_SET_ATTR: MPI-2:8.8.2, p204:2,12 > MPI-2:A.8.5, p337:35 >MPI_TYPE_SET_ATTR: MPI-2:8.8.3, p206:26,37 > MPI-2:A.8.5, p337:1 > >MPI_COMM_SET_ERRHANDLER: MPI-2:4.13.1, p62:35,43 > MPI-2:A.8.1, p331:38 >MPI_WIN_SET_ERRHANDLER: MPI-2:4.13.2, p64:2,10 > MPI-2:A.8.1, p333:13 >MPI_FILE_SET_ERRHANDLER: MPI-2:4.13.3, p65:14,22 > MPI-2:A.8.1, p332:34 > >NOTE: The "const" issues of this proposal will be dependent upon a 2nd proposal about re-instituting the "const" on MPI predefined handles that was removed in ballot 2. > >-- >Jeff Squyres >Cisco Systems > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . .
www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From jsquyres at [hidden] Tue Jan 22 19:07:05 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Tue, 22 Jan 2008 20:07:05 -0500 Subject: [mpi-21] C++ predefined MPI handles, const, IN/INOUT/OUT, etc. Message-ID: <1651E063-A0C2-414F-935F-11FDACD675FF@cisco.com> The 3 proposals that I sent about C++ issues are both intertwined and represent a very complex set of issues. Shorter version =============== Does anyone know/remember why the "special case" for the definition of OUT parameters exists in MPI-1:2.2? I ask because the C++ bindings were modeled off the IN/OUT/INOUT designations of the language neutral bindings. MPI_COMM_SET_NAME (and others) use the "special case" definition of the [IN]OUT designation for the MPI communicator handle parameter. Two facts indicate that we should either override this INOUT designation for the C++ binding (and therefore make the method const) and/or revisit the "special case" language in MPI-1:2.2: 1. The C binding does not allow the implementation to change the handle value 2. The following is a valid MPI code: MPI::Intracomm cxx_comm = MPI::COMM_WORLD; cxx_comm.Set_name("foo"); MPI::COMM_WORLD.Get_name(name, len); cout << name << endl; The output will be "foo" even though we set the name on cxx_comm and retrieved it from MPI::COMM_WORLD ***because the state changed on the underlying MPI object, not the upper-level handles*** (the same is true for error handlers). Hence, the Set_name() method should be const because the MPI handle will not (and cannot) change. Similar arguments apply to keeping the MPI predefined C++ handles as "const" (MPI::INT, etc.) -- their values must never change during execution. It then follows that unless there is a good reason for the "special case" language in MPI-1:2.2, it should be removed. Longer version / more details ============================= At the heart of the issue seems to be text from MPI-1:2.2 about the definition of IN, OUT, and INOUT parameters to MPI functions. This text was used to guide many of the decisions about the C++ bindings, such as the const-ness (or not) of C++ methods and MPI predefined C++ handles. The text states: ----- * the call uses but does not update an argument marked IN * the call may update an argument marked OUT * the call both uses and updates an argument marked INOUT There is one special case -- if an argument is a handle to an opaque object (these terms are defined in Section 2.4.1) and the object is updated by the procedure call, then the argument is marked OUT. It is marked this way even though the handle itself is not modified -- we use the OUT attribute to denote that what the handle _references_ is updated. ----- The special case for the OUT definition is important because the C++ bindings were created to mimic the IN, OUT, and INOUT behavior in a language that is stricter than C and Fortran: C++ will fail to compile if an application violates the defined semantics (which is a good thing). *** The big question: does anyone know/remember why this special case *** for the "OUT" definition exists? The special case seems to imply that *explicit* changes to MPI objects should be marked as an [IN]OUT parameter (e.g., SET_NAME and SET_ERRHANDLER). 
Apparently, *implicit* changes to the underlying MPI object (such as MPI_ISEND) do not count / should be IN (i.e., many MPI implementations *do* change the state either on the communicator or something related to the communicator when a send or receive is initiated, even though the communicator is an IN argument). But remember that MPI clearly states that the handle is separate from the underlying MPI object. So why does the binding care if the back-end object is updated? (regardless of whether the change to the object is explicit or implicit) For example, the language-neutral binding for MPI_COMM_SET_NAME has the communicator as an INOUT argument. This clearly falls within the "special case" definition because the function semantics explicitly change state on the underlying MPI object. But note that the C binding is "int MPI_Comm_set_name(MPI_Comm comm, ...)". Notice that the comm is passed by value, not by reference. So even though the language neutral binding called that parameter INOUT, it's not possible for the MPI implementation to change the value of the handle. My claim is that if we want to ensure that the C++ bindings match the C bindings (i.e., that the implementation cannot change the value of the MPI handle), then the method should be const (i.e., cxx_comm.Set_name(...)) *because the handle value will not, and ***cannot***, change*. Simply put: regardless of language or implementation, MPI handles must have true handle semantics. For example: MPI::Intracomm cxx_comm = MPI::COMM_WORLD; cxx_comm.Set_name("C++ r00l3z!"); MPI::COMM_WORLD.Get_name(name, len); cout << name << endl; The above will output "C++ r00l3z!" because cxx_comm and MPI::COMM_WORLD are handles referring to the same underlying communicator. Hence, the only state that the handles have is whatever refers to their back-end MPI object. Having Set_name() be const keeps the *handle* const, not the underlying MPI object. Tying this all together: 1. cxx_comm.Set_name() *cannot* change state on the cxx_comm handle because cxx_comm.Get_name() and MPI::COMM_WORLD.Get_name() must return the same results (the same is true for error handlers). Hence, regardless of the implementation of the C++ bindings, the handle value cannot change. Therefore, this method (and all the others like it) should be const. 2. As a related issue, if no one can remember why the "special case" exists for OUT, then I think we should remove this text and then change all those INOUT parameters for the functions I cited in my earlier proposal to IN. This would make the C++ bindings consistent with the IN/OUT/INOUT specifications of the language-neutral bindings. 3. All the MPI C++ predefined handles should be const for many of the same reasons. Regardless of what happens to the underlying MPI object, the value of the handle cannot ever change. This is guaranteed by MPI-2:2.5.4, page 10, lines 38-41: "All named constants, with the exceptions noted below for Fortran, can be used in initialization expressions or assignments. These constants do not change values during execution. Opaque objects accessed by constant handles are defined and do not change value between MPI initialization MPI_INIT and MPI completion MPI_FINALIZE." Hence, they should all be "const". ----- In short: C++ gives us stronger protections to ensure that applications don't shoot themselves in the foot. If the MPI predefined handles are const, then statements like "MPI::INT = my_dtype;" will fail to compile. This is a Good Thing.
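As a small sketch of what this buys us (assuming the predefined handles are declared const, as these proposals intend):

   MPI::Datatype my_dtype = MPI::INT;      // copying a const handle: fine
   // MPI::INT = my_dtype;                 // fails to compile: assignment
                                           // to a const handle
   int rank = MPI::COMM_WORLD.Get_rank();  // fine: Get_rank() is a const
                                           // method
   MPI::COMM_WORLD.Set_name("world");      // compiles only once Set_name()
                                           // is also declared const (per
                                           // the INOUT -> IN proposal)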
The original C++ bindings tried to take advantage of const, but missed a few points. Ballot two and one of the items in ballot 3 incorrectly tried to fix these points by removing const in several places. That "fixes" the problem, but removes many of the good qualities that we can get in C++ with "const". So let's fix the real problem and leave "const" in the C++ bindings. Are you confused yet? :-) -- Jeff Squyres Cisco Systems From rabenseifner at [hidden] Wed Jan 23 03:33:47 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Wed, 23 Jan 2008 10:33:47 +0100 Subject: [mpi-21] C++ predefined MPI handles, const, IN/INOUT/OUT, etc. In-Reply-To: <1651E063-A0C2-414F-935F-11FDACD675FF@cisco.com> Message-ID: Jeff, you are referring to MPI-1.1, Sect. 2.2. The full Chapter 2 of MPI-1.1 is deprecated and substituted by the full Chapter 2 of MPI-2.0. In the substituting MPI-2.0 Sect. 2.3, there exists the following new paragraph: "MPI’s use of IN, OUT and INOUT is intended to indicate to the user how an argument is to be used, but does not provide a rigorous classification that can be translated directly into all language bindings (e.g., INTENT in Fortran 90 bindings or const in C bindings). For instance, the “constant” MPI BOTTOM can usually be passed to OUT buffer arguments. Similarly, MPI STATUS IGNORE can be passed as the OUT status argument." My comments and alternative proposal from Tue, 22 Jan 2008 17:29:51 were about this paragraph (updated text to reflect also the "IN" of "INOUT"): >An alternative-proposal would be to add a clarifying sentence in >Terms and Conventions, MPI-2.0 Sect. 2.5.1, page 9, line 7: > Handle arguments can be marked at INOUT to indicate that the handle > itself may be changed by the routine (e.g., in MPI_TYPE_COMMIT), > or to indicate that the object referenced by the handle already exists and > may be changed but the handle itself is kept unchanged > (e.g., MPI_TYPE_SET_NAME). Best regards Rolf On Tue, 22 Jan 2008 20:07:05 -0500 Jeff Squyres wrote: >The 3 proposals that I sent about C++ issues are both intertwined and >represent a very complex set of issues. > >Shorter version >=============== > >Does anyone know/remember why the "special case" for the definition of >OUT parameters exists in MPI-1:2.2? > >I ask because the C++ bindings were modeled off the IN/OUT/INOUT >designations of the language neutral bindings. MPI_COMM_SET_NAME (and >others) use the "special case" definition of the [IN]OUT designation >for the MPI communicator handle parameter. Two facts indicate that we >should either override this INOUT designation for the C++ binding (and >therefore make the method const) and/or revisit the "special case" >language in MPI-1:2.2: > >1. The C binding does not allow the implementation to change the >handle value >2. The following is a valid MPI code: > > MPI::Intracomm cxx_comm = MPI::COMM_WORLD; > cxx_comm.Set_name("foo"); > MPI::COMM_WORLD.Get_name(name, len); > cout << name << endl; > > The output will be "foo" even though we set the name on cxx_comm >and retrieved it from MPI::COMM_WORLD ***because the state changed on >the underlying MPI object, not the upper-level handles*** (the same is >true for error handlers). > >Hence, the Set_name() method should be const because the MPI handle >will not (and cannot) change. Similar arguments apply to keeping the >MPI predefined C++ handles as "const" (MPI::INT, etc.) -- their values >must never change during execution.
It then follows that unless there >is a good reason for the "special case" language in MPI-1:2.2, it >should be removed. > >Longer version / more details >============================= > >At the heart of the issue seems to be text from MPI-1:2.2 about the >definition of IN, OUT, and INOUT parameters to MPI functions. This >text was used to guide many of the decisions about the C++ bindings, >such as the const-ness (or not) of C++ methods and MPI predefined C++ >handles. The text states: > >----- > * the call uses but does not update an argument marked IN > * the call may update an argument marked OUT > * the call both uses and updates an argument marked INOUT > >There is one special case -- if an argument is a handle to an opaque >object (these terms are defined in Section 2.4.1) and the object is >updated by the procedure call, then the argument is marked OUT. It is >marked this way even though the handle itself is not modified -- we >use the OUT attribute to denote that what the handle _references_ is >updated. >----- > >The special case for the OUT definition is important because the C++ >bindings were created to mimic the IN, OUT, and INOUT behavior in a >language that is stricter than C and Fortran: C++ will fail to compile >if an application violates the defined semantics (which is a good >thing). > >*** The big question: does anyone know/remember why this special case >*** for the "OUT" definition exists? > >The special case seems to imply that *explicit* changes to MPI objects >should be marked as an [IN]OUT parameter (e.g., SET_NAME and >SET_ERRHANDLER). Apparently, *implicit* changes to the underlying MPI >object (such as MPI_ISEND) do not count / should be IN (i.e., many MPI >implementation *do* change the state either on the communicator or >something related to the communicator when a send or receive is >initiated, even though the communicator is an IN argument). > >But remember that MPI clearly states that the handle is separate from >the underlying MPI object. So why does the binding care if the back- >end object is updated? (regardless of whether the change to the >object is explicit or implicit) > >For example, the language-neutral binding for MPI_COMM_SET_NAME has >the communicator as an INOUT argument. This clearly falls within the >"special case" definition because the function semantics explicitly >change state on the underlying MPI object. > >But note that the C binding is "int MPI_Comm_set_name(MPI_Comm >comm, ...)". Notice that the comm is passed by value, not by >reference. So even though the language neutral binding called that >parameter INOUT, it's not possible for the MPI implementation to >change the value of the handle. > >My claim is that if we want to ensure that the C++ bindings match the >C bindings (i.e., that the implementation cannot change the value of >the MPI handle), then the method should be const (i.e., >cxx_comm.Set_name(...)) *because the handle value will not, and >***cannot***, change*. > >Simply put: regardless of language or implementation, MPI handles must >have true handle semantics. For example: > > MPI::Intracomm cxx_comm = MPI::COMM_WORLD; > cxx_comm.Set_name("C++ r00l3z!"); > > MPI::COMM_WORLD.Get_name(name, len); > cout << name << endl; > >The above will output "C++ r00l3z!" because cxx_comm and >MPI::COMM_WORLD are handles referring to the same underlying >communicator. Hence, the only state that the handles have is whatever >refers to their back-end MPI object. 
Having Set_name() be const >keeps the *handle* const, not the underlying MPI object. > >Tying this all together: > >1. cxx_comm.Set_name() *cannot* change state on the cxx_comm handle >because cxx_comm.Get_name() and MPI::COMM_WORLD.Get_name() must return >the same results (the same is true for error handlers). Hence, >regardless of the implementation of the C++ bindings, the handle value >cannot change. Therefore, this method (and all the others like it) >should be const. > >2. As a related issue, if no one can remember why the "special case" >exists for OUT, then I think we should remove this text and then >change all those INOUT parameters for the functions I cited in my >earlier proposal to IN. This would make the C++ bindings consistent >with the IN/OUT/INOUT specifications of the language-neutral bindings. > >3. All the MPI C++ predefined handles should be const for many of the >same reasons. Regardless of what happens to the underlying MPI >object, the value of the handle cannot ever change. This is >guaranteed by MPI-2:2.5.4 pages 10 lines 38-41: > >"All named constants, with the exceptions noted below for Fortran, can >be used in initialization expressions or assignments. These constants >do not change values during execution. Opaque objects accessed by >constant handles are defined and do not change value between MPI >initialization MPI_INIT and MPI completion MPI_FINALIZE." > >Hence, they should all be "const". > >----- > >In short: C++ gives us stronger protections to ensure that >applications don't shoot themselves in the foot. If the MPI >predefined handles are const, then statements like "MPI::INT = >my_dtype;" will fail to compile. This is a Good Thing. > >The original C++ bindings tried to take advantage of const, but missed >a few points. Ballot two and one of the items in ballot 3 incorrectly >tried to fix these points by removing const in several places. That >"fixes" the problem, but removes many of the good qualities that we >can get in C++ with "const". So let's fix the real problem and leave >"const" in the C++ bindings. > >Are you confused yet? :-) > >-- >Jeff Squyres >Cisco Systems > >_______________________________________________ >mpi-21 mailing list >mpi-21_at_[hidden] >http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From jsquyres at [hidden] Wed Jan 23 06:16:23 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Wed, 23 Jan 2008 07:16:23 -0500 Subject: [mpi-21] Ballot 4 proposal: INOUT arguments In-Reply-To: Message-ID: <21404E79-7F61-46F2-820A-B77225748C49@cisco.com> This would be in accordance with the existing MPI-1:2.2 language, and therefore (I think) redundant. However, see my summary e-mail from last night about the C++ bindings -- I am wondering why the special case for [IN]OUT handle arguments exists to begin with (that a parameter is OUT if the underlying MPI object is changed when the handle is not). On Jan 22, 2008, at 11:29 AM, Rolf Rabenseifner wrote: > An alternative-proposal would be to add a clarifying sentence in > Terms and Conventions, MPI-2.0 Sect. 
2.5.1, page 9, line 7: > Handle arguments can be marked at INOUT to indicate that the handle > itself may be changed by the routine (e.g., in MPI_TYPE_COMMIT), > or to indicate that the object referenced by the handle may be > changed but the handle itself is kept unchanged > (e.g., MPI_TYPE_SET_NAME). > > Rolf Rabenseifner > > > On Mon, 21 Jan 2008 09:42:02 -0500 > Jeff Squyres wrote: >> This is not already on Bill's errata page. >> >> Proposal: change the INOUT designation of the MPI handle parameters >> in several MPI-2 functions to be IN, because the values of the >> handles are not changing. Only the underlying MPI objects are >> changing. The C bindings for each of these functions do *not* >> pass the MPI handle by reference, therefore disallowing the >> possibility of these parameters actually being INOUT. By the same >> argument, the C++ methods for these functions should be const >> since the C++ object invoking the method will not be changed. >> >> The functions in question are: >> >> MPI_*_SET_NAME for communicators, datatypes, windows >> MPI_*_SET_ATTR for communicators, datatypes, windows >> MPI_*_SET_ERRHANDLER for communicators, files, windows >> >> Locations of specific text to be changed (INOUT -> IN, add "const" >> to C ++ methods): >> >> MPI_COMM_SET_NAME: MPI-2:8.4, p177:35,44 >> MPI-2:A.8.5, p336:28 >> MPI_TYPE_SET_NAME: MPI-2:8.4, p179:41, p180:2 >> MPI-2:A.8.5, p337:2 >> MPI_WIN_SET_NAME: MPI-2:8.4, p181:25,35 >> MPI-2:A.8.5, p337:36 >> >> MPI_COMM_SET_ATTR: MPI-2:8.8.1, p201:6,17 >> MPI-2:A.8.5, p336:27 >> MPI_WIN_SET_ATTR: MPI-2:8.8.2, p204:2,12 >> MPI-2:A.8.5, p337:35 >> MPI_TYPE_SET_ATTR: MPI-2:8.8.3, p206:26,37 >> MPI-2:A.8.5, p337:1 >> >> MPI_COMM_SET_ERRHANDLER: MPI-2:4.13.1, p62:35,43 >> MPI-2:A.8.1, p331:38 >> MPI_WIN_SET_ERRHANDLER: MPI-2:4.13.2, p64:2,10 >> MPI-2:A.8.1, p333:13 >> MPI_FILE_SET_ERRHANDLER: MPI-2:4.13.3, p65:14,22 >> MPI-2:A.8.1, p332:34 >> >> NOTE: The "const" issues of this proposal will be dependent upon a >> 2nd proposal about re-instituting the "const" on MPI predefined >> handles that was removed in ballot 2. >> >> -- >> Jeff Squyres >> Cisco Systems >> > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Wed Jan 23 07:04:35 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Wed, 23 Jan 2008 08:04:35 -0500 Subject: [mpi-21] C++ predefined MPI handles, const, IN/INOUT/OUT, etc. In-Reply-To: Message-ID: On Jan 23, 2008, at 4:33 AM, Rolf Rabenseifner wrote: > you are referencing to MPI-1.1, Sect. 2.2. > The full Chapter 2 of MPI-1.1 is deprecated and substituted by > the full Chapter 2 of MPI-2.0. > In the substituting MPI-2.0 Sect. 2.3, there exist the > following new paragraph: > "MPI’s use of IN, OUT and INOUT is intended to indicate to the user > how an argument is to be used, but does not provide a rigorous > classification that can be translated directly into all language > bindings (e.g., INTENT in Fortran 90 bindings or const in C > bindings). > For instance, the “constant” MPI BOTTOM can usually be passed to OUT > buffer arguments. Similarly, MPI STATUS IGNORE can be passed as the > OUT status argument." Ok; I missed that text -- thanks. 
So replace every instance of "MPI-1:2.2" in my original text with "MPI-2:2.3", where exactly the same special case definition exists. > My comments and alternative-proposal from Tue, 22 Jan 2008 17:29:51 > was about this paragraph (updated text to reflect also the "IN" of > "INOUT"): > >> An alternative-proposal would be to add a clarifying sentence in >> Terms and Conventions, MPI-2.0 Sect. 2.5.1, page 9, line 7: >> Handle arguments can be marked at INOUT to indicate that the handle >> itself may be changed by the routine (e.g., in MPI_TYPE_COMMIT), >> or to indicate that the object referenced by the handle > already exists and >> may be changed but the handle itself is kept unchanged >> (e.g., MPI_TYPE_SET_NAME). Per the text in my summary mail (below), I think that that would be exactly the wrong thing to say (and it would be redundant with MPI-2:2.3). My point is that this special case definition for [IN]OUT should go away if no one can specify the reason why it's there. Indeed, the text that Rolf cites in MPI-2:2.3 both strengthens and weakens my arguments: - STRENGTHEN: "MPI's use of IN, OUT and INOUT is intended to indicate to the user how an argument is to be used..." - WEAKEN: "...but does not provide a rigorous classification that can be translated directly into all language bindings (e.g., INTENT in Fortran 90 bindings or const in C++ bindings)." However, the latter part of the sentence can be ignored because it says that IN/OUT/INOUT do not *have* to be used as a rigorous classification. This clears the way for the 3 proposals that I sent the other day: 1. Make all the SET_NAME and SET_ERRHANDLER functions have IN MPI handle arguments (vs. INOUT) -- by the new sentence in MPI-2:2.3, the value of the argument is not being changed by the function call (per the already-existing C bindings). So these are now clearly IN arguments. Also, make the C++ bindings for these functions be const. 2. Restore the "const" to the C++ MPI predefined handles that were removed in ballot 2. 3. Make MPI::COMM_WORLD and MPI::COMM_SELF be const. Additionally, I would like to request/motion/whatever that the "remove const from more C++ constants" item be withdrawn from what was supposed to be on ballot 3 but got moved to ballot 4. *** The end goal is to make all the MPI C++ constants be "const" (except MPI::BOTTOM, as we discussed in Chicago) and have all the SET_NAME and SET_ERRHANDLER methods be const. Thanks. - Jeff > Best regards > Rolf > > > On Tue, 22 Jan 2008 20:07:05 -0500 > Jeff Squyres wrote: >> The 3 proposals that I sent about C++ issues are both intertwined and >> represent a very complex set of issues. >> >> Shorter version >> =============== >> >> Does anyone know/remember why the "special case" for the definition >> of >> OUT parameters exists in MPI-1:2.2? >> >> I ask because the C++ bindings were modeled off the IN/OUT/INOUT >> designations of the language neutral bindings. MPI_COMM_SET_NAME >> (and >> others) use the "special case" definition of the [IN]OUT designation >> for the MPI communicator handle parameter. Two facts indicate that >> we >> should either override this INOUT designation for the C++ binding >> (and >> therefore make the method const) and/or revisit the "special case" >> language in MPI-1:2.2: >> >> 1. The C binding does not allow the implementation to change the >> handle value >> 2. 
The following is a valid MPI code: >> >> MPI::Intracomm cxx_comm = MPI::COMM_WORLD; >> cxx_comm.Set_name("foo"); >> MPI::COMM_WORLD.Get_name(name, len); >> cout << name << endl; >> >> The output will be "foo" even though we set the name on cxx_comm >> and retrieved it from MPI::COMM_WORLD ***because the state changed on >> the underlying MPI object, not the upper-level handles*** (the same >> is >> true for error handlers). >> >> Hence, the Set_name() method should be const because the MPI handle >> will not (and cannot) change. Similar arguments apply to keeping the >> MPI predefined C++ handles as "const" (MPI::INT, etc.) -- their >> values >> must never change during execution. It then follows that unless >> there >> is a good reason for the "special case" language in MPI-1:2.2, it >> should be removed. >> >> Longer version / more details >> ============================= >> >> At the heart of the issue seems to be text from MPI-1:2.2 about the >> definition of IN, OUT, and INOUT parameters to MPI functions. This >> text was used to guide many of the decisions about the C++ bindings, >> such as the const-ness (or not) of C++ methods and MPI predefined C++ >> handles. The text states: >> >> ----- >> * the call uses but does not update an argument marked IN >> * the call may update an argument marked OUT >> * the call both uses and updates an argument marked INOUT >> >> There is one special case -- if an argument is a handle to an opaque >> object (these terms are defined in Section 2.4.1) and the object is >> updated by the procedure call, then the argument is marked OUT. It >> is >> marked this way even though the handle itself is not modified -- we >> use the OUT attribute to denote that what the handle _references_ is >> updated. >> ----- >> >> The special case for the OUT definition is important because the C++ >> bindings were created to mimic the IN, OUT, and INOUT behavior in a >> language that is stricter than C and Fortran: C++ will fail to >> compile >> if an application violates the defined semantics (which is a good >> thing). >> >> *** The big question: does anyone know/remember why this special case >> *** for the "OUT" definition exists? >> >> The special case seems to imply that *explicit* changes to MPI >> objects >> should be marked as an [IN]OUT parameter (e.g., SET_NAME and >> SET_ERRHANDLER). Apparently, *implicit* changes to the underlying >> MPI >> object (such as MPI_ISEND) do not count / should be IN (i.e., many >> MPI >> implementation *do* change the state either on the communicator or >> something related to the communicator when a send or receive is >> initiated, even though the communicator is an IN argument). >> >> But remember that MPI clearly states that the handle is separate from >> the underlying MPI object. So why does the binding care if the back- >> end object is updated? (regardless of whether the change to the >> object is explicit or implicit) >> >> For example, the language-neutral binding for MPI_COMM_SET_NAME has >> the communicator as an INOUT argument. This clearly falls within the >> "special case" definition because the function semantics explicitly >> change state on the underlying MPI object. >> >> But note that the C binding is "int MPI_Comm_set_name(MPI_Comm >> comm, ...)". Notice that the comm is passed by value, not by >> reference. So even though the language neutral binding called that >> parameter INOUT, it's not possible for the MPI implementation to >> change the value of the handle. 
>> My claim is that if we want to ensure that the C++ bindings match the
>> C bindings (i.e., that the implementation cannot change the value of
>> the MPI handle), then the method should be const (i.e.,
>> cxx_comm.Set_name(...)) *because the handle value will not, and
>> ***cannot***, change*.
>>
>> Simply put: regardless of language or implementation, MPI handles must
>> have true handle semantics. For example:
>>
>> MPI::Intracomm cxx_comm = MPI::COMM_WORLD;
>> cxx_comm.Set_name("C++ r00l3z!");
>>
>> MPI::COMM_WORLD.Get_name(name, len);
>> cout << name << endl;
>>
>> The above will output "C++ r00l3z!" because cxx_comm and
>> MPI::COMM_WORLD are handles referring to the same underlying
>> communicator. Hence, the only state that the handles have is whatever
>> refers to their back-end MPI object. Having Set_name() be const keeps
>> the *handle* const, not the underlying MPI object.
>>
>> Tying this all together:
>>
>> 1. cxx_comm.Set_name() *cannot* change state on the cxx_comm handle
>> because cxx_comm.Get_name() and MPI::COMM_WORLD.Get_name() must return
>> the same results (the same is true for error handlers). Hence,
>> regardless of the implementation of the C++ bindings, the handle value
>> cannot change. Therefore, this method (and all the others like it)
>> should be const.
>>
>> 2. As a related issue, if no one can remember why the "special case"
>> exists for OUT, then I think we should remove this text and then
>> change all those INOUT parameters for the functions I cited in my
>> earlier proposal to IN. This would make the C++ bindings consistent
>> with the IN/OUT/INOUT specifications of the language-neutral bindings.
>>
>> 3. All the MPI C++ predefined handles should be const for many of the
>> same reasons. Regardless of what happens to the underlying MPI object,
>> the value of the handle cannot ever change. This is guaranteed by
>> MPI-2:2.5.4, page 10, lines 38-41:
>>
>> "All named constants, with the exceptions noted below for Fortran, can
>> be used in initialization expressions or assignments. These constants
>> do not change values during execution. Opaque objects accessed by
>> constant handles are defined and do not change value between MPI
>> initialization MPI_INIT and MPI completion MPI_FINALIZE."
>>
>> Hence, they should all be "const".
>>
>> -----
>>
>> In short: C++ gives us stronger protections to ensure that
>> applications don't shoot themselves in the foot. If the MPI predefined
>> handles are const, then statements like "MPI::INT = my_dtype;" will
>> fail to compile. This is a Good Thing.
>>
>> The original C++ bindings tried to take advantage of const, but missed
>> a few points. Ballot two and one of the items in ballot 3 incorrectly
>> tried to fix these points by removing const in several places. That
>> "fixes" the problem, but removes many of the good qualities that we
>> can get in C++ with "const". So let's fix the real problem and leave
>> "const" in the C++ bindings.
>>
>> Are you confused yet? :-)
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>> _______________________________________________
>> mpi-21 mailing list
>> mpi-21_at_[hidden]
>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
--
Jeff Squyres
Cisco Systems

From jsquyres at [hidden] Wed Jan 23 07:05:34 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Wed, 23 Jan 2008 08:05:34 -0500
Subject: [mpi-21] Ballot 4 proposal: INOUT arguments
In-Reply-To: <21404E79-7F61-46F2-820A-B77225748C49@cisco.com>
Message-ID: <70CD810D-AEFA-4D78-BA9D-A0EFB2428957@cisco.com>

On Jan 23, 2008, at 7:16 AM, Jeff Squyres wrote:

> This would be in accordance with the existing MPI-1:2.2 language,
> and therefore (I think) redundant.
>
> However, see my summary e-mail from last night about the C++
> bindings -- I am wondering why the special case for [IN]OUT handle
> arguments exists to begin with (that a parameter is OUT if the
> underlying MPI object is changed when the handle is not).

I just now saw your other mail about all of MPI-1:2 being deprecated in favor of MPI-2:2 -- you're right; I followed up in that thread.

> On Jan 22, 2008, at 11:29 AM, Rolf Rabenseifner wrote:
>
>> An alternative-proposal would be to add a clarifying sentence in
>> Terms and Conventions, MPI-2.0 Sect. 2.5.1, page 9, line 7:
>> Handle arguments can be marked as INOUT to indicate that the handle
>> itself may be changed by the routine (e.g., in MPI_TYPE_COMMIT),
>> or to indicate that the object referenced by the handle may be
>> changed but the handle itself is kept unchanged
>> (e.g., MPI_TYPE_SET_NAME).
>>
>> Rolf Rabenseifner
>>
>> On Mon, 21 Jan 2008 09:42:02 -0500 Jeff Squyres wrote:
>>> This is not already on Bill's errata page.
>>>
>>> Proposal: change the INOUT designation of the MPI handle
>>> parameters in several MPI-2 functions to be IN, because the
>>> values of the handles are not changing. Only the underlying MPI
>>> objects are changing. The C bindings for each of these functions
>>> do *not* pass the MPI handle by reference, therefore disallowing
>>> the possibility of these parameters actually being INOUT. By the
>>> same argument, the C++ methods for these functions should be
>>> const since the C++ object invoking the method will not be changed.
>>>
>>> The functions in question are:
>>>
>>> MPI_*_SET_NAME for communicators, datatypes, windows
>>> MPI_*_SET_ATTR for communicators, datatypes, windows
>>> MPI_*_SET_ERRHANDLER for communicators, files, windows
>>>
>>> Locations of specific text to be changed (INOUT -> IN, add "const"
>>> to C++ methods):
>>>
>>> MPI_COMM_SET_NAME: MPI-2:8.4, p177:35,44
>>> MPI-2:A.8.5, p336:28
>>> MPI_TYPE_SET_NAME: MPI-2:8.4, p179:41, p180:2
>>> MPI-2:A.8.5, p337:2
>>> MPI_WIN_SET_NAME: MPI-2:8.4, p181:25,35
>>> MPI-2:A.8.5, p337:36
>>>
>>> MPI_COMM_SET_ATTR: MPI-2:8.8.1, p201:6,17
>>> MPI-2:A.8.5, p336:27
>>> MPI_WIN_SET_ATTR: MPI-2:8.8.2, p204:2,12
>>> MPI-2:A.8.5, p337:35
>>> MPI_TYPE_SET_ATTR: MPI-2:8.8.3, p206:26,37
>>> MPI-2:A.8.5, p337:1
>>>
>>> MPI_COMM_SET_ERRHANDLER: MPI-2:4.13.1, p62:35,43
>>> MPI-2:A.8.1, p331:38
>>> MPI_WIN_SET_ERRHANDLER: MPI-2:4.13.2, p64:2,10
>>> MPI-2:A.8.1, p333:13
>>> MPI_FILE_SET_ERRHANDLER: MPI-2:4.13.3, p65:14,22
>>> MPI-2:A.8.1, p332:34
>>>
>>> NOTE: The "const" issues of this proposal will be dependent upon a
>>> 2nd proposal about re-instituting the "const" on MPI predefined
>>> handles that was removed in ballot 2.
>>>
>>> --
>>> Jeff Squyres
>>> Cisco Systems
>>
>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
>> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
>> University of Stuttgart . . . . . . . . ..
>> fax ++49(0)711 / 685-65832
>> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
>> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

--
Jeff Squyres
Cisco Systems

From rabenseifner at [hidden] Wed Jan 23 11:42:43 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Wed, 23 Jan 2008 18:42:43 +0100
Subject: [mpi-21] Ballot 4 - 0-dim MPI_CART_CREATE and MPI_CART_SUB
Message-ID:

This is a discussion-point for MPI 2.1, Ballot 4.

This is a follow up to: Questions about Graph_create, Cart_create, and Cart_coords in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/topo/

I'm starting a new 2nd stream for the zero-dim discussion.
-------------------------------

This report shows significant differences among MPI implementations in the (extreme) case of zero-dimensional topologies! Therefore clarifications may be necessary. (Although hopefully no application will ever produce zero-dimensional Cartesian topologies!)

If there are no objections, I would produce clarifications for MPI-2.1 according to the behavior of mpich2 and MPI from IBM (except the rank -767705708, see below).

MPI-1.1 Sect. 6.5.6, page 187, routine MPI_CART_SUB defines on lines 38-42:

  If a cartesian topology has been created with MPI_CART_CREATE, the
  function MPI_CART_SUB can be used to partition the communicator group
  into subgroups that form lower-dimensional cartesian subgrids, and to
  build for each subgroup a communicator with the associated subgrid
  cartesian topology. (This function is closely related to MPI_COMM_SPLIT.)

The text clearly says that the new communicator must
(1) have a Cartesian topology associated
(2) be lower-dimensional in the case of a subgrid

There is no restriction on the input array remain_dims. Therefore all MPI implementations (that I tested) allow all entries in remain_dims to be "false".

I tested several MPI libraries by creating a subgrid with MPI_Cart_sub(remain_dims=0) from a 1-dim Cartesian topology. I tested the subgrid communicator with MPI_Topo_test. In the case of MPI_CART, I used MPI_Cartdim_get to retrieve ndims, and MPI_Cart_get for further details:

- mpich2: MPI_CART, ndims=0,
  1.0.3 MPI_Cart_get works, but keeps all OUT arguments unchanged

- IBM: MPI_CART, ndims=0,
  (on SP) MPI_Cart_get works, but keeps all OUT arguments unchanged

- OpenMPI: MPI_CART, ndims=1,
  1.2.4 MPI_Cart_get works and returns dims=1, periods=0, coords=0
  independent from process or periods in the original comm.
  (may be wrong because (2) is not fulfilled)

- NEC MP/EX: not MPI_CART
  (may be wrong because (1) is not fulfilled)

- Voltaire: not MPI_CART
  (mpich1) (may be wrong because (1) is not fulfilled)

With the implementations that return a correct zero-dim Cartesian topology, I tested further usage of this zero-dim communicator:

- MPI_Comm_size returns 1 and MPI_Comm_rank returns 0 because this communicator is like MPI_COMM_SELF, but with a Cartesian topology associated.

- MPI_Cart_rank(IN ZeroDimComm, IN coords=0, OUT rank)
  Rationale. A zero-dim communicator has zero coords, i.e., this routine should not examine the coords input argument.
  --> mpich2: rank = 0 is returned (may be because this is the only existing rank in this communicator; this value may make sense, independent of the coords, which should not be analyzed)
  --> IBM: rank = -767705708 is returned (strange value, not MPI_PROC_NULL, not MPI_UNDEFINED)

- MPI_Cart_coords(IN ZeroDimComm, IN rank=0, OUT coords)
  Rationale. A zero-dim communicator has zero coords, i.e., this routine should not return anything in the coords output argument.
  --> mpich2 and IBM: coords is not modified (as expected)

- MPI_Cart_sub(IN ZeroDimComm, IN remain_dims=0, OUT subsubcomm)
  Rationale. A zero-dim communicator has zero dimensions, i.e., this routine should not examine remain_dims, and the returned communicator should again be a zero-dim Cartesian communicator.
  --> mpich2 and IBM: subsubcomm is a zero-dim Cartesian communicator (as expected)

- MPI_Cart_shift(IN ZeroDimComm, IN direction=0, IN disp=1, OUT src, OUT dest)
  Rationale. This call is erroneous because in a zero-dim communicator, the direction=0 does not exist.
  mpich2 and IBM: They detect the error and abort.

(In OpenMPI, all these calls work as expected on a 1-dimensional topology on a "MPI_COMM_SELF")

The last test is not addressed by MPI-1.1: Is it possible to build a zero-dim Cartesian topology directly by calling MPI_Cart_create:

  MPI_Cart_create(IN MPI_COMM_SELF, IN ndims=0, IN dims=1,
                  IN Periods, IN reorder, OUT ZeroDimComm)

Results: All tested MPI implementations return an error and abort. (Same on MPI_COMM_WORLD)

The latter question was raised by Jesper Traeff.

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

* -------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: mpi_0-dim_topology_test.c
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: mpi_0-dim_topology_test_protocol.txt
URL:
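For illustration, a minimal C probe along the lines described above (my
reconstruction for the reader; the scrubbed mpi_0-dim_topology_test.c
attachment is not recoverable, so this is only an assumption about its
shape, and the output varies by implementation exactly as the report shows):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int dims[1], periods[1] = {0}, remain_dims[1] = {0};
        int size, status, ndims;
        MPI_Comm cart, zerodim;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        dims[0] = size;
        /* 1-dim Cartesian topology over all processes, no reordering */
        MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);
        /* all remain_dims false -> zero-dimensional subgrid */
        MPI_Cart_sub(cart, remain_dims, &zerodim);
        MPI_Topo_test(zerodim, &status);
        if (status == MPI_CART) {
            MPI_Cartdim_get(zerodim, &ndims);
            printf("MPI_CART, ndims=%d\n", ndims); /* 0 on mpich2 and IBM */
        } else {
            printf("not MPI_CART\n");
        }
        MPI_Finalize();
        return 0;
    }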
From wyu at [hidden] Wed Jan 23 15:02:47 2008
From: wyu at [hidden] (William Yu)
Date: Thu, 24 Jan 2008 05:02:47 +0800
Subject: [mpi-21] topo mpi (was Re: Ballot 4 - 0-dim MPI_CART_CREATE and MP I_CART_SUB)
Message-ID: <9FrdWCx1nB3y.bNBznGK5@smtp.gmail.com>

Hi Rolf,

Just curious. Is this implemented already? Sounds like you have a reference implementation built against different MPIs.

Thanks.

________________ Reply Header ________________
Subject: [mpi-21] Ballot 4 - 0-dim MPI_CART_CREATE and MPI_CART_SUB
Author: "Rolf Rabenseifner"
Date: January 23rd 2008 5:44 pm

This is a discussion-point for MPI 2.1, Ballot 4. ...

_______________________________________________
mpi-21 mailing list
mpi-21_at_[hidden]
http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21
From rabenseifner at [hidden] Thu Jan 24 02:20:29 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 24 Jan 2008 09:20:29 +0100
Subject: [mpi-21] topo mpi (was Re: Ballot 4 - 0-dim MPI_CART_CREATE and MP I_CART_SUB)
In-Reply-To: <9FrdWCx1nB3y.bNBznGK5@smtp.gmail.com>
Message-ID:

William,

yes, I reported the behavior of 5 different existing MPI implementations. And I attached my test program that calls the MPI topology functions and reports the behavior of the MPI library to which it is compiled and linked.

Using this test program, everybody can detect that different MPI implementations are giving quite different answers and that there is therefore a need for clarifying the MPI 1.1 standard.

Best regards
Rolf

On Thu, 24 Jan 2008 05:02:47 +0800 "William Yu" wrote:
>Hi Rolf,
>
>Just curious. Is this implemented already? Sounds like you have a reference implementation built against different MPIs.
>
>Thanks.
>
>________________ Reply Header ________________
>Subject: [mpi-21] Ballot 4 - 0-dim MPI_CART_CREATE and MPI_CART_SUB
>Author: "Rolf Rabenseifner"
>Date: January 23rd 2008 5:44 pm
>
>This is a discussion-point for MPI 2.1, Ballot 4. ...

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
From wyu at [hidden] Thu Jan 24 07:26:09 2008
From: wyu at [hidden] (William Yu)
Date: Thu, 24 Jan 2008 21:26:09 +0800
Subject: [mpi-21] topo mpi (was Re: Ballot 4 - 0-dim MPI_CART_CREATE an d MP I_CART_SUB)
In-Reply-To: <[mpi-21] topo mpi (was Re: Ballot 4 - 0-dim MPI_CART_CREATE an d MP I_CART_SUB)>
Message-ID:

Understood. Nice work. Thanks!

________________ Reply Header ________________
Subject: Re: [mpi-21] topo mpi (was Re: Ballot 4 - 0-dim MPI_CART_CREATE and MP I_CART_SUB)
Author: "Rolf Rabenseifner"
Date: January 24th 2008 11:57 am

...
From rabenseifner at [hidden] Thu Jan 24 10:30:03 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 24 Jan 2008 17:30:03 +0100
Subject: [mpi-21] Ballot 4 - 0-dim MPI_CART_CREATE and MPI_CART_SUB
In-Reply-To:
Message-ID:

On Thu, 24 Jan 2008 21:26:09 +0800 "William Yu" wrote:
>Understood. Nice work. Thanks!

Thank you. And here is the proposal to correct all the unclear specifications on Cartesian topologies.

This is a proposal for MPI 2.1, Ballot 4.
________________________________________________________________

Proposal about handling zero-dimensional Cartesian communicators that are produced with MPI_Cart_sub if all remain_dims are false.

MPI-1.1 Sect. 6.5.6, page 187, lines 38-42 (definition of MPI_Cart_sub) reads

  If a cartesian topology has been created with MPI_CART_CREATE, the
  function MPI_CART_SUB can be used to partition the communicator group
  into subgroups that form lower-dimensional cartesian subgrids, and to
  build for each subgroup a communicator with the associated subgrid
  cartesian topology. (This function is closely related to MPI_COMM_SPLIT.)

but should read

  If a cartesian topology has been created with MPI_CART_CREATE, the
  function MPI_CART_SUB can be used to partition the communicator group
  into subgroups that form lower-dimensional cartesian subgrids, and to
  build for each subgroup a communicator with the associated subgrid
  cartesian topology. If all entries in remain_dims are false or comm is
  already associated with a zero-dimensional Cartesian topology then
  newcomm is associated with a zero-dimensional Cartesian topology.
  (This function is closely related to MPI_COMM_SPLIT.)

Rationale for this clarification (not to be included into the MPI standard): Several MPI implementations have implemented the MPI-1 standard, which requires a "lower-dimensional cartesian" subgrid and that newcomm is "associated" with the "subgrid Cartesian topology". Other MPI implementations did not, or did so only partially. Therefore, a clarification may help ensure that all MPI implementations implement the same interface.

________________________

MPI-1.1 Sect. 6.5.4, page 183, lines 29-30 (definition of MPI_Cartdim_get and MPI_Cart_get) reads:

  The functions MPI_CARTDIM_GET and MPI_CART_GET return the cartesian
  topology information that was associated with a communicator by
  MPI_CART_CREATE.

but should read

  The functions MPI_CARTDIM_GET and MPI_CART_GET return the cartesian
  topology information that was associated with a communicator by
  MPI_CART_CREATE. If comm is associated with a zero-dimensional
  Cartesian topology, MPI_Cartdim_get returns ndims=0 and MPI_Cart_get
  will keep all output arguments unchanged.
Rationale for this clarification (not to be included into the MPI standard): Zero-dimensional topologies have zero coords, i.e., they have no coords and therefore also no dims or periods.

________________________

MPI-1.1 Sect. 6.5.4, page 184, lines 17-23 (definition of MPI_Cart_rank) reads

  For a process group with cartesian structure, the function
  MPI_CART_RANK translates the logical process coordinates to process
  ranks as they are used by the point-to-point routines. For dimension i
  with periods(i) = true, if the coordinate, coords(i), is out of range,
  that is, coords(i) < 0 or coords(i) >= dims(i), it is shifted back to
  the interval 0 <= coords(i) < dims(i) automatically. Out-of-range
  coordinates are erroneous for non-periodic dimensions.

but should read

  For a process group with cartesian structure, the function
  MPI_CART_RANK translates the logical process coordinates to process
  ranks as they are used by the point-to-point routines. For dimension i
  with periods(i) = true, if the coordinate, coords(i), is out of range,
  that is, coords(i) < 0 or coords(i) >= dims(i), it is shifted back to
  the interval 0 <= coords(i) < dims(i) automatically. Out-of-range
  coordinates are erroneous for non-periodic dimensions. If comm is
  associated with a zero-dimensional Cartesian topology, coords is not
  significant and 0 is returned in rank.

Rationale for this clarification (not to be included into the MPI standard): Zero-dimensional topologies have zero coords, i.e., they do not have coords. The communicator has only one process, and therefore only rank 0 is valid.

________________________

MPI-1.1 Sect. 6.5.4, page 184, lines 38-39 (definition of MPI_Cart_coords) reads

  The inverse mapping, rank-to-coordinates translation is provided by
  MPI_CART_COORDS.

but should read

  The inverse mapping, rank-to-coordinates translation is provided by
  MPI_CART_COORDS. If comm is associated with a zero-dimensional
  Cartesian topology, coords will be unchanged.

Rationale for this clarification (not to be included into the MPI standard): Zero-dimensional topologies have zero coords, i.e., they do not have coords.

________________________

Alternative 1:
--------------
MPI-1.1 Sect. 6.5.5, page 186, after line 47 (end of definition of MPI_Cart_shift), the following paragraph is added:

  It is erroneous to call MPI_CART_SHIFT with a comm that is associated
  with a zero-dimensional Cartesian topology.

Rationale for this clarification (not to be included into the MPI standard): It is impossible to define a correct input for direction because no directions exist in a zero-dimensional topology. (This is currently implemented by several MPI implementations.)

Alternative 2:
--------------
MPI-1.1 Sect. 6.5.5, page 186, after line 47 (end of definition of MPI_Cart_shift), the following paragraph is added:

  If comm is associated with a zero-dimensional Cartesian topology, then
  the input arguments direction and disp are ignored and always
  MPI_PROC_NULL is returned in rank_source and rank_dest.

Rationale for this clarification (not to be included into the MPI standard): It is impossible to define a correct input for direction because no directions exist in a zero-dimensional topology. For convenience, the routine returns MPI_PROC_NULL.

_______________________

MPI-1.1 Sect. 6.5.1, page 179, lines 29-30 (end of definition of MPI_Cart_create) reads

  The call is erroneous if it specifies a grid that is larger than the
  group size.
but should read

  The call is erroneous if it specifies a grid that is larger than the
  group size or ndims is zero or negative.

Rationale for this clarification (not to be included into the MPI standard): Although it is allowed to produce zero-dimensional subgrids with MPI_CART_SUB, it makes no sense to establish a zero-dimensional grid a priori with MPI_CART_CREATE.

_______________________

MPI-1.1 Sect. 6.5.4, page 184, line 30 (definition of MPI_Cart_coords) reads

  IN maxdims length of vector coord in the calling program (integer)

but should read (missing “s” at coords)

  IN maxdims length of vector coords in the calling program (integer)

Rationale for this clarification (not to be included into the MPI standard): Typo.

_______________________

Best regards
Rolf Rabenseifner

PS: All background information, with reports from 5 different MPI implementations and their differences, is in my previous mail of Wed, 23 Jan 2008 18:42:43.

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From robl at [hidden] Thu Jan 24 10:41:52 2008
From: robl at [hidden] (Robert Latham)
Date: Thu, 24 Jan 2008 10:41:52 -0600
Subject: [mpi-21] Proposal: MPI_OFFSET built-in type
Message-ID: <20080124164152.GL24508@mcs.anl.gov>

I hope this is less contentious than adding 'const' keywords...

I would like to propose a new built-in type MPI_OFFSET, defined to be a type corresponding to INTEGER(KIND=MPI_OFFSET_KIND) or MPI_Offset.

This is a minor addition to the standard, which would have no impact on existing code while serving to simplify code which exchanges file offsets among processes. There is a workaround in the standard: a user can define a type from MPI_BYTE:

  MPI_Type_contiguous(sizeof(MPI_Offset), MPI_BYTE, &offtype);

However, it would clearly be more convenient to operate on built-in types.

MPI Datatype: MPI_OFFSET
Corresponding C type: long long int
Corresponding Fortran type: INTEGER(KIND=MPI_OFFSET_KIND)

Thanks
==rob

--
Rob Latham
Mathematics and Computer Science Division A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA B29D F333 664A 4280 315B

From rabenseifner at [hidden] Fri Jan 25 11:00:42 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 25 Jan 2008 18:00:42 +0100
Subject: [mpi-21] MPI_GET_PROCESSOR_NAME Fortran and C
In-Reply-To: <[mpi-21] MPI_GET_PROCESSOR_NAME Fortran and C>
Message-ID:

This is a discussion-point for MPI 2.1, Ballot 4.

This is a follow up to: MPI_GET_PROCESSOR_NAME and Fortran in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/procname/

_________________________________________________________________

MPI_GET_PROCESSOR_NAME in Fortran and C, and all MPI_xxxx_GET_NAME routines
-------------------------------------------

Summary: The returning of strings is defined quite differently in MPI_GET_PROCESSOR_NAME and in MPI_xxxx_GET_NAME. Not all implementations are doing the same with zero-filling. And what they do is, at least with MPI_GET_PROCESSOR_NAME, different from what the current standard requires. I propose to adapt the standard to the common reasonable implementations.
The very short proposal for clarification can be found at the end of this text, see C. Proposal.

A. MPI_GET_PROCESSOR_NAME
-------------------------

MPI_GET_PROCESSOR_NAME defines the returned string with several sentences:

(1) OUT name A unique specifier for the actual (as opposed to virtual) node.
(2) OUT resultlen Length (in printable characters) of the result returned in name

(3) The argument name must represent storage that is at least MPI_MAX_PROCESSOR_NAME characters long.
(4) MPI_GET_PROCESSOR_NAME may write up to this many characters into name.
(5) The number of characters actually written is returned in the output argument, resultlen.
(6) The user must provide at least MPI_MAX_PROCESSOR_NAME space to write the processor name — processor names can be this long.
(7) The user should examine the ouput argument, resultlen, to determine the actual length of the name.

I tested 5 implementations with C and Fortran.

I called MPI_GET_PROCESSOR_NAME with a string (i.e. character array) with size MPI_MAX_PROCESSOR_NAME+2.

C-Interface:
------------
All tested C implementations returned the processor name in name[0..resultlen-1] and the non-printable character \0 in name[resultlen]. All other elements of name were unchanged.

(1,2,3,4, 6,7) are fulfilled;
(5) is __NOT__ fulfilled, because resultlen+1 characters are written in name.

My opinion: The returned name and resultlen is what the user expects, but the standard needs a clarification.

Fortran-Interface:
------------------
All tested Fortran implementations return the processor name in name(1:resultlen), and the rest of the total string is filled with spaces.

(1, 3, 6,7) are fulfilled;
(2,4,5) are __NOT__ fulfilled, because MPI_MAX_PROCESSOR_NAME+2 characters are written in name.

My opinion: The returned name and resultlen is what the user expects, but the standard needs a clarification.

B. MPI_COMM_GET_NAME (and other MPI_xxxx_GET_NAME)
--------------------------------------------------

The string output is defined with different wording:

(1) OUT comm_name the name previously stored on the communicator, or an empty string if no such name exists (string)
(2) OUT resultlen length of returned name (integer)

(3) name should be allocated so that it can hold a resulting string of length MPI_MAX_OBJECT_NAME characters.
(4) If the user has not associated a name with a communicator, or an error occurs, MPI_COMM_GET_NAME will return an empty string (all spaces in Fortran, "" in C and C++).

and in the definition of MPI_COMM_SET_NAME:

(5) The length of the name which can be stored is limited to the value of MPI_MAX_OBJECT_NAME in Fortran and MPI_MAX_OBJECT_NAME-1 in C and C++ to allow for the null terminator.
(6) Attempts to put names longer than this will result in truncation of the name.
(7) MPI_MAX_OBJECT_NAME must have a value of at least 64.

I called MPI_COMM_GET_NAME with a string (i.e. character array) with size MPI_MAX_OBJECT_NAME+2.

C-Interface:
------------
All tested C implementations returned the communicator name in comm_name[0..resultlen-1] and the non-printable character \0 in comm_name[resultlen]. One implementation filled up the rest until name[MPI_MAX_OBJECT_NAME-1] with \0. In all other implementations, all other elements of comm_name were unchanged.

(1-7) are fulfilled, although the returned zero-filling in comm_name depends on the implementation;

My opinion: A clarification can make the API unambiguous.
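As an illustration of the C behavior just described (my sketch, assuming only the standard C bindings): the caller supplies a buffer of the full maximum size and relies on resultlen, with the terminating \0 observed at name[resultlen]:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char pname[MPI_MAX_PROCESSOR_NAME];
        char cname[MPI_MAX_OBJECT_NAME];
        int plen, clen;

        MPI_Init(&argc, &argv);
        MPI_Get_processor_name(pname, &plen);
        /* observed behavior: pname[plen] == '\0', so strlen(pname) == plen */
        printf("processor: %s (resultlen=%d, strlen=%d)\n",
               pname, plen, (int)strlen(pname));
        MPI_Comm_get_name(MPI_COMM_WORLD, cname, &clen);
        /* predefined communicators carry a default name */
        printf("comm: %s (resultlen=%d)\n", cname, clen);
        MPI_Finalize();
        return 0;
    }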
Fortran-Interface:
------------------
All tested Fortran implementations return the communicator name in comm_name(1:resultlen), and the rest of the total string is filled with spaces.

(1-7) are fulfilled; although it is nowhere specified that the string must be filled up with spaces, and not only until position MPI_MAX_OBJECT_NAME but also with further spaces until the end of comm_name.

My opinion: The returned name and resultlen is what the user expects, but the standard needs a clarification.

C. Proposal:
------------

Add the following sentences to the current interface definitions:
------------------
In C, a \0 is additionally stored at name[resultlen]. resultlen cannot be larger than MPI_MAX_PROCESSOR_NAME-1 (or MPI_MAX_OBJECT_NAME-1). In Fortran, name(resultlen+1:) is filled with spaces. resultlen cannot be larger than MPI_MAX_PROCESSOR_NAME (or MPI_MAX_OBJECT_NAME).
------------------

Typo correction:
----------------
MPI-1.1 Sect. 7.1, page 193, beginning of line 29 reads
  examine the ouput argument
But should read (additional t in output)
  examine the output argument

Okay?
_________________________________________________________________

Best regards
Rolf

PS: Attached my tests and short protocols

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

* -------------- next part --------------
A non-text attachment was scrubbed...
Name: mpi_get_xxx_name.tar.gz
Type: application/x-gzip
Size: 2880 bytes
Desc: mpi_get_xxx_name.tar.gz
URL:

From treumann at [hidden] Fri Jan 25 14:32:07 2008
From: treumann at [hidden] (Richard Treumann)
Date: Fri, 25 Jan 2008 15:32:07 -0500
Subject: [mpi-21] MPI_GET_PROCESSOR_NAME Fortran and C
In-Reply-To:
Message-ID:

We also should decide whether every call to MPI_GET_PROCESSOR_NAME across the life of the task must return the same name. On very large machines running very large jobs, migration of some tasks off of failing nodes and on to robust nodes will become more interesting. Checkpoint/restart raises the same issue. A restarted job will probably not have the same task to node mapping.

We can either require the name to remain constant and allow that it might be a "virtual" name, or require that it return an "actual" name but allow it to change.

Dick

Dick Treumann - MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

mpi-21-bounces_at_[hidden] wrote on 01/25/2008 12:00:42 PM:

> This is a discussion-point for MPI 2.1, Ballot 4. ...
> [attachment "mpi_get_xxx_name.tar.gz" deleted by Richard Treumann/Poughkeepsie/IBM]

_______________________________________________
mpi-21 mailing list
mpi-21_at_[hidden]
http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21

* -------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rabenseifner at [hidden] Mon Jan 28 02:24:32 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Mon, 28 Jan 2008 09:24:32 +0100
Subject: [mpi-21] MPI_GET_PROCESSOR_NAME Fortran and C
In-Reply-To:
Message-ID:

Dick, you're right, and it is already decided:

MPI 1.1 Sect. 7.1, page 193, lines 13-14: "This routine returns the name of the processor on which it was called at the moment of the call."

And lines 22-25: "Rationale. This function allows MPI implementations that do process migration to return the current processor. Note that nothing in MPI requires or defines process migration; this definition of MPI_GET_PROCESSOR_NAME simply allows such an implementation. (End of rationale.)"

I.e., the current location, i.e., it may change in the case of checkpoint/restart and all the other reasons you mentioned.

I would say that the sentences above are clear enough. Okay?

Best regards
Rolf

On Fri, 25 Jan 2008 15:32:07 -0500 Richard Treumann wrote:
>We also should decide whether every call to MPI_GET_PROCESSOR_NAME across
>the life of the task must return the same name. On very large machines
>running very large jobs, migration of some tasks off of failing nodes and
>on to robust nodes will become more interesting. Checkpoint/restart raises
>the same issue. A restarted job will probably not have the same task to
>node mapping.
>
>We can either require the name to remain constant and allow that it might
>be a "virtual" name, or require that it return an "actual" name but allow it
>to change.
>
> Dick
>
>Dick Treumann - MPI Team/TCEM
>IBM Systems & Technology Group
>Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
>Tele (845) 433-7846 Fax (845) 433-8363
>
>mpi-21-bounces_at_[hidden] wrote on 01/25/2008 12:00:42 PM:
>
>> This is a discussion-point for MPI 2.1, Ballot 4. ...

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
This is a follow up to:
Why no MPI_INPLACE for MPI_EXSCAN?
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/exscan/

Bill Gropp has already put a proposal on the web page. I would change the
location: instead of putting the clarification at the end of the Advice to
users, I would add it at the beginning of the 2nd paragraph of the
Rationale.

Proposal for MPI 2.1, Ballot 4:
-------------------------------
MPI-2, Sect. 7.3.6, page 167, lines 6-8 read:
The reason that MPI-1 chose the inclusive scan is that the definition of
behavior on processes zero and one was thought to offer too many
complexities in definition, particularly for user-defined operations.
(End of rationale.)
but should read:
No in-place version is specified for MPI_EXSCAN because it isn't clear
what this means for the process with rank zero. The reason that MPI-1
chose the inclusive scan is that the definition of behavior on processes
zero and one was thought to offer too many complexities in definition,
particularly for user-defined operations. (End of rationale.)
-------------------------------

Background information: the full rationale on MPI-2 page 167 would then
read:
Rationale. The exclusive scan is more general than the inclusive scan
provided in MPI-1 as MPI SCAN. Any inclusive scan operation can be
achieved by using the exclusive scan and then locally combining the local
contribution. Note that for noninvertible operations such as MPI MAX, the
exclusive scan cannot be computed with the inclusive scan. No in-place
version is specified for MPI_EXSCAN because it isn't clear what this means
for the process with rank zero. The reason that MPI-1 chose the inclusive
scan is that the definition of behavior on processes zero and one was
thought to offer too many complexities in definition, particularly for
user-defined operations. (End of rationale.)

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
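As a non-normative aside, the rationale's remark that any inclusive scan
can be recovered from the exclusive scan by one local combine looks like
this in plain C (recvbuf is undefined on rank 0 after MPI_EXSCAN, hence
the explicit reset):
----------------------------------------------------------------------
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, mine, excl = 0, incl;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    mine = rank + 1;                /* each rank contributes rank+1 */

    /* Exclusive scan: on rank i>0, excl = combination of the
       contributions of ranks 0..i-1.  On rank 0, excl is undefined
       after the call. */
    MPI_Exscan(&mine, &excl, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) excl = 0;        /* identity element for MPI_SUM */

    /* One local combine turns the exclusive result into the
       inclusive scan that MPI_Scan would have produced. */
    incl = excl + mine;
    printf("rank %d: inclusive sum = %d\n", rank, incl);

    MPI_Finalize();
    return 0;
}
----------------------------------------------------------------------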
From rabenseifner at [hidden] Mon Jan 28 08:44:19 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Mon, 28 Jan 2008 15:44:19 +0100
Subject: [mpi-21] Ballot 4 - Re: Re: SEND COUNT and RECEIVE COUNT
Message-ID:

Bill, please can you forward this to Yukiya Aoyama, the originator of this
track. Thanks, Rolf
___________________________________
This is a proposal for MPI 2.1, Ballot 4.

This is a follow up to:
Description of the send and receive count arguments to MPI_Alltoallv
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/alltoallv/

Based on the questions for clarification, I propose not to include any
further clarification into the standard.
Reason: MPI-1, Sect. 4.8, MPI_ALLTOALLV, page 112, lines 37-40 clearly
state:
"The type signature associated with sendcount[j], sendtype at process i
must be equal to the type signature associated with recvcount[i], recvtype
at process j. This implies that the amount of data sent must be equal to
the amount of data received, pairwise between every pair of processes.
Distinct type maps between sender and receiver are still allowed."
The same text is also used for MPI-2.0, Sect. 7.3.5 MPI_ALLTOALLW,
page 165, lines 42-45.

Okay?

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden] Mon Jan 28 11:52:54 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Mon, 28 Jan 2008 18:52:54 +0100
Subject: [mpi-21] MPI_FINALIZE
In-Reply-To: <[mpi-21] MPI_FINALIZE> Message-ID:

Mainly to Jeff Squyres, Hubert Ritzdorf, Rolf Rabenseifner, Nicholas Nevin,
Bill Gropp, Dick Treumann

This is a follow up to:
MPI_FINALIZE in MPI-2 (with spawn)
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/finalize/
_________________________________________

If I understand correctly, a clarification is not needed because the MPI
standard already covers all of this, i.e.
- MPI_Finalize need not behave like a barrier
- but it is allowed to have a barrier inside.
- If the user wants to exit one spawned process while the others still
  continue to work, he/she must disconnect this process before calling
  MPI_Finalize on it.

If somebody wants a clarification to be included into the standard and
therefore in Ballot 4, please send me your wording with the page and line
references included.

If all agree that no clarification is needed, then I would finish this
track.

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden] Tue Jan 29 03:06:49 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Tue, 29 Jan 2008 10:06:49 +0100
Subject: [mpi-21] Ballot 4 - Re: MPI_Abort nit
Message-ID:

This is a proposal for MPI 2.1, Ballot 4.

I'm asking especially Karl Feind, Bill Gropp, Andrew Lumsdaine, Dick
Treumann, the participants of the email-discussion in 2001, to review this
proposal.

This is a follow up to:
MPI_Abort
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/abort/
___________________________________

Proposal:
MPI-1.1 Sect. 7.5, MPI_Abort, page 200, lines 23-26 read:
This routine makes a "best attempt" to abort all tasks in the group of
comm. This function does not require that the invoking environment take
any action with the error code. However, a Unix or POSIX environment
should handle this as a return errorcode from the main program or an
abort(errorcode).
but should read (" or an abort(errorcode)" removed):
This routine makes a "best attempt" to abort all tasks in the group of
comm. This function does not require that the invoking environment take
any action with the error code. However, a Unix or POSIX environment
should handle this as a return errorcode from the main program.
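A non-normative sketch of what the proposed wording means in practice; the
error value 42 is arbitrary, and whether mpiexec also propagates it is
exactly the quality-of-implementation question taken up below:
----------------------------------------------------------------------
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* ... an unrecoverable error is detected somewhere ... */
    MPI_Abort(MPI_COMM_WORLD, 42);

    /* With the proposed text, a Unix/POSIX environment should make 42
       visible as the return code of the main program; returning it
       from the startup mechanism (e.g., mpiexec) as well is a quality
       aspect, not a requirement. */
    return 0;   /* not reached */
}
----------------------------------------------------------------------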
___________________________________ Rationale for this clarification: POSIX defines void abort(void). The routine void exit(int status) may be used to implement "handle this as a return errorcode from the main program". abort(errorcode) was not substituted by exit(errorcode) because this is technically not enough, if the MPI implementation wants to return it also from mpiexec, see next proposal. ___________________________________ Proposal: Add after MPI-1.1 Sect. 7.5, MPI_Abort, page 200, line 34 (end of rationale): Advice to users. Whether the errorcode is returned from the executable or from the MPI process startup mechanism (e.g., mpiexec), is an aspect of quality of the MPI library but not mandatory. (End of advice to users.) ___________________________________ Rationale for this clarification: The intent of word "should" in "should handle this as a return errorcode from the main program" is only a quality of implementation aspect and not a must. This is not clear and can be misinterpreted. ___________________________________ Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From rabenseifner at [hidden] Tue Jan 29 04:28:35 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Tue, 29 Jan 2008 11:28:35 +0100 Subject: [mpi-21] Ballot 4 - Re: MPI-2 standard issues - MPI_Type_create_f90_... Message-ID: This is a proposal for MPI 2.1, Ballot 4. I'm asking especially Nicholas Nevin, Bill Gropp, the participants of the email-discussion in 2001, and all implementors of MPI libraries to review this proposal carefully. This is a follow up to: MPI_Type_create_f90_real etc. in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/typef90real/ Problem: An application may repeatedly call (probably with same (p,r) combination) the MPI_TYPE_CREATE_F90_xxxx routines. ___________________________________ Proposal: Add after MPI-2.0 Sect. 10.2.5, MPI_TYPE_CREATE_F90_xxxx, page 295, line 47 (End of advice to users.): Advice to implementors. An application may often repeat a call to MPI_TYPE_CREATE_F90_xxxx with same combination of (xxxx, p, r). The application is not allowed to free the returned predefined, unnamed datatype handles. To prevent the creation of a potentially huge amount of handles, the MPI implementation should always return the same datatype handle for the same (REAL/COMPLEX/INTEGER, p, r) combination. Checking for the combination (p,r) in the preceding call to MPI_TYPE_CREATE_F90_xxxx and using a hash-table to find formerly generated handles should limit the overhead of finding a previously generated with same combination of (type,p,r). (End of advice to implementors.) 
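A rough, purely hypothetical sketch of such a cache; the names f90_cache,
create_unnamed_predefined and CACHE_MAX are invented and belong to no real
MPI library, and a hash table would replace the linear search at scale:
----------------------------------------------------------------------
#include <mpi.h>

#define CACHE_MAX 256          /* sketch only: overflow handling omitted */

/* Placeholder for whatever internal routine builds the unnamed,
   predefined datatype for a given (kind, p, r) request. */
extern MPI_Datatype create_unnamed_predefined(int kind, int p, int r);

struct f90_cache_entry {
    int kind, p, r;            /* REAL/COMPLEX/INTEGER and (p, r)     */
    MPI_Datatype handle;       /* the one handle ever returned for it */
};
static struct f90_cache_entry f90_cache[CACHE_MAX];
static int f90_cache_used = 0;

static MPI_Datatype cached_f90_type(int kind, int p, int r)
{
    /* Return the previously generated handle for this combination,
       so repeated calls never create new handles. */
    for (int i = 0; i < f90_cache_used; i++)
        if (f90_cache[i].kind == kind && f90_cache[i].p == p
                                      && f90_cache[i].r == r)
            return f90_cache[i].handle;

    MPI_Datatype dt = create_unnamed_predefined(kind, p, r);
    f90_cache[f90_cache_used++] =
        (struct f90_cache_entry){ kind, p, r, dt };
    return dt;
}
----------------------------------------------------------------------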
___________________________________
Rationale for this clarification:
Currently, most MPI implementations handle the MPI_TYPE_CREATE_F90_xxxx
functions incorrectly or not with the requested quality. Current
implementations do:

- Return a predefined named handle based on the mapping specified in
  MPI-2.0 page 296, e.g., returning MPI_REAL4, MPI_REAL8, or MPI_REAL16:
  --> wrong return from MPI_TYPE_GET_ENVELOPE
  --> no problem if the application calls MPI_TYPE_CREATE_F90_xxxx in a
      loop with millions of iterations.
  --> A good workaround as long as MPI_TYPE_GET_ENVELOPE is not used
      directly by the application. (Indirect usage in MPI parallel file
      I/O may not be a problem, because with datarep=external32 the same
      mapping is specified by the MPI-2.0 standard, page 296.)
  Example: OpenMPI

- Return a correct predefined, unnamed datatype handle with (type,p,r)
  cached, but in each call with the same input, a new datatype handle is
  returned:
  --> correct return from MPI_TYPE_GET_ENVELOPE
  --> i.e., a totally correct implementation according to MPI-2.0
  --> waste of memory space if the application calls
      MPI_TYPE_CREATE_F90_xxxx in a loop with millions of iterations.
  Example: NEC MPI/EX

I'm not aware of an implementation that is currently correct and that
handles millions of calls with the same combination of (type,p,r) without
performance problems in space or time.
___________________________________

Alternative proposal:

Instead of giving the implementation hint in the form of the advice to
implementors, the MPI Forum can modify the MPI standard and require that
for each call to MPI_TYPE_CREATE_F90_xxxx a new datatype handle is
generated, and that this handle may be freed if no longer in use (so that
a user who must not waste space can free it).

Because this alternative proposal may break existing application code, it
would be an MPI 3.0 proposal.

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From james.h.cownie at [hidden] Tue Jan 29 04:40:46 2008
From: james.h.cownie at [hidden] (Cownie, James H)
Date: Tue, 29 Jan 2008 10:40:46 -0000
Subject: [mpi-21] Ballot 4 - Re: MPI-2 standard issues - MPI_Type_create_f90_...
In-Reply-To: Message-ID:

Trivial wording change in the proposal...

To prevent the creation of a potentially huge amount of handles,
should be
To prevent the creation of a potentially huge number of handles,

-- Jim
James Cownie SSG/DPD/PAT
Tel: +44 117 9071438

> -----Original Message-----
> From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On
> Behalf Of Rolf Rabenseifner
> Sent: 29 January 2008 10:29
> To: mpi-21_at_[hidden]
> Subject: [mpi-21] Ballot 4 - Re: MPI-2 standard issues -
> MPI_Type_create_f90_...
>
> This is a proposal for MPI 2.1, Ballot 4.
>
> I'm asking especially
> Nicholas Nevin, Bill Gropp, the participants of the email-discussion in
> 2001,
> and all implementors of MPI libraries
> to review this proposal carefully.
>
> This is a follow up to:
> MPI_Type_create_f90_real etc.
> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/typef90real/ > > Problem: An application may repeatedly call (probably with same (p,r) > combination) the MPI_TYPE_CREATE_F90_xxxx routines. > ___________________________________ > > Proposal: > Add after MPI-2.0 Sect. 10.2.5, MPI_TYPE_CREATE_F90_xxxx, page 295, line > 47 > (End of advice to users.): > > Advice to implementors. > An application may often repeat a call to MPI_TYPE_CREATE_F90_xxxx > with same combination of (xxxx, p, r). > The application is not allowed to free the returned predefined, unnamed > datatype handles. > To prevent the creation of a potentially huge amount of handles, > the MPI implementation should always return the same datatype handle > for the same (REAL/COMPLEX/INTEGER, p, r) combination. > Checking for the combination (p,r) in the preceding call to > MPI_TYPE_CREATE_F90_xxxx and using a hash-table to find formerly > generated handles should limit the overhead of finding a previously > generated with same combination of (type,p,r). > (End of advice to implementors.) > ___________________________________ > Rationale for this clarification: > Currently most MPI implementations are handling the > MPI_TYPE_CREATE_F90_xxxx functions wrong or not with the requested > quality: > Current implementatios do: > > - Return of a predefined named handle based on the mapping specified > in MPI-2.0 page 296, e.g., returning MPI_REAL4, MPI_REAL8, or > MPI_REAL16: > --> wrong return from MPI_TYPE_GET_ENVELOPE > --> no problem, if the application calls MPI_TYPE_CREATE_F90_xxxx > in a loop with millions of iterations. > --> A good work around as long as MPI_TYPE_GET_ENVELOPE is not > used directly by the application. > Indirect usage in MPI parallel file I/O may not be a problem, > because with datarep=external32 the same mapping is specified > by the MPI-2.0 standard, page 296) > Example: OpenMPI > > - Return of a correct predefined, unnamed datatype handle with > (type,p,r) > cached, but in each call with same input, a new datatype handle is > returned: > --> correct return from MPI_TYPE_GET_ENVELOPE > --> i.e., totally correct implementation according to MPI-2.0 > --> waste of memory space, if the application calls > MPI_TYPE_CREATE_F90_xxxx in a loop with millions of iterations. > Example: NEC MPI/EX > > I'm not aware of an implementation that is currently correct and that > handles milions of calls with same combination of (type,p,r) > without performance problems in space or time. > ___________________________________ > > Alternative proposal: > > Instead of giving the implementation hint in form of the advice to > implementors, the MPI Forum can modify the MPI standard and require > that for each call to MPI_TYPE_CREATE_F90_xxxx a new datatype handle > is generated and that this may be freed if no longer in use (if the > user may not waste space). > > Because this alternative proposal may break existing application code, > it would be an MPI 3.0 proposal. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . 
(Office: Allmandring 30)
> _______________________________________________
> mpi-21 mailing list
> mpi-21_at_[hidden]
> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21

From rabenseifner at [hidden] Tue Jan 29 04:43:57 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Tue, 29 Jan 2008 11:43:57 +0100
Subject: [mpi-21] Ballot 4 - RE: clarification text for MPI_Reduce_scatter
Message-ID:

This is a follow-up for MPI 2.1, Ballot 4.

I'm asking especially Rajeev Thakur, the participant of the
email-discussion in 2002, to review this proposal.

I would finish the following track without any proposed clarification,
because the text is already clear when the MPI-1.1 part of
MPI_Reduce_scatter is put in front of the MPI-2.0 part, i.e., in the
upcoming combined document MPI-2.1.

This is a follow up to:
MPI_IN_PLACE for MPI_Reduce_scatter
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/redscat/
___________________________________

The current text in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
should not be accepted, because it is a significant modification of the
MPI-2.0 standard and would break user code:

Proposed text to replace lines 16-20, pg 163
The "in place" option for intracommunicators is specified by passing
MPI_IN_PLACE in the sendbuf argument. In this case, on each process, the
input data is taken from recvbuf. Process i gets the ith segment of the
result, and it is stored at the location corresponding to segment i in
recvbuf.
___________________________________

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden] Tue Jan 29 06:21:55 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Tue, 29 Jan 2008 13:21:55 +0100
Subject: [mpi-21] Two MPI I/O questions
In-Reply-To: <[mpi-21] Two MPI I/O questions> Message-ID:

Mainly to Leonard Wisniewski and Rajeev Thakur who have contributed to
this mail-discussion thread.

This is a follow up to:
Shared File Pointers
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/sharedfile/
and here only to the second topic in the mails. (The first topic is
handled in the thread "MPI C++ Constants conflict with stdio")
_________________________________________

If I understand correctly, a clarification is not needed in the MPI
standard.

If somebody wants a clarification to be included into the standard and
therefore in Ballot 4, please send me your wording with the page and line
references included.

If all agree that no clarification is needed, then I would finish this
discussion thread.

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany .
(Office: Allmandring 30) From rabenseifner at [hidden] Tue Jan 29 06:33:50 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Tue, 29 Jan 2008 13:33:50 +0100 Subject: [mpi-21] IEEE Floating point In-Reply-To: <[mpi-21] IEEE Floating point> Message-ID: Mainly to Marc Snir and Jim Cownie, who have contributed to this mail-discussion thread. This is a follow up to: IEEE Floating Point Behavior in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/IEEEFloat/ _________________________________________ If you want a clarification to be included into the standard and therefore, e.g., in MPI 2.1 Ballot 4, please send me your wording with the page and line references to MPI 1.1 or/and MPI 2.0 included. If all agree, that no clarification is needed, then I would finish this track. Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From gropp at [hidden] Tue Jan 29 09:29:01 2008 From: gropp at [hidden] (William Gropp) Date: Tue, 29 Jan 2008 09:29:01 -0600 Subject: [mpi-21] MPI_FINALIZE In-Reply-To: Message-ID: <9AF7F463-1B40-4613-AD62-B06E3A9A6A6D@mcs.anl.gov> I think that we're fine as is, and can move this to the discussed but require no change page. Bill On Jan 28, 2008, at 11:52 AM, Rolf Rabenseifner wrote: > Mainly to Jeff Squyres, Hubert Ritzdorf, Rolf Rabenseifner, > Nicholas Nevin, > Bill Gropp, Dick Treumann > > This is a follow up to: > MPI_FINALIZE in MPI-2 (with spawn) > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/finalize/ > _________________________________________ > > When I understand correctly, then a clarification is not needed > because the MPI standard expresses all, i.e. > - MPI_Finalize need not to behave like a barrier > - but is allowed to have a barrier inside. > - If the user wants to exit one spawned process while > the others still continue to work, he/she must > disconnect this process before calling MPI_Finalize on it. > > If somebody wants a clarification to be included into the standard > and therefore in Ballot 4, please send me your wording > with the page and line references included. > > If all agree, that no clarification is needed, then I would finish > this track. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > William Gropp Paul and Cynthia Saylor Professor of Computer Science University of Illinois Urbana-Champaign * -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From gropp at [hidden] Tue Jan 29 09:43:11 2008
From: gropp at [hidden] (William Gropp)
Date: Tue, 29 Jan 2008 09:43:11 -0600
Subject: [mpi-21] Inconsistent error behavior of multiple completion routines
In-Reply-To: Message-ID: <3EBCDADD-D2B0-41D6-B2A4-327A0AD2CD76@mcs.anl.gov>

Rolf,

Answers inline below.

Bill

On Jan 28, 2008, at 11:33 AM, Rolf Rabenseifner wrote:

> Bill,
>
> please, can you specify more precisely:
>
> - Is your proposal necessary for MPI_Wait/Test_some/all
> (but not for MPI_Wait/Test or MPI_Wait/Test_any)

That is correct. It applies only if MPI_ERR_IN_STATUS may be returned by
the routine; that is for MPI_{Wait,Test}_{some,all}.

> - Should it be inserted 4 times at the 4 routines, or can it be done once
> somewhere at errorhandling?

It is probably better to insert it once but have each affected routine
clearly point at the one discussion.

> - should you substitute "valid communicators (including File operations)"
> by "valid communicators and file handles"?

Probably.

> - error handler on MPI_COMM_WORLD --> how is it for file handles?

This is tricky. The idea is to use the error handler that is used when
there is no other obvious error handler, and in the rest of MPI, that is
MPI_COMM_WORLD. If it is clear that the request is an MPI I/O request,
then the error handler on MPI_FILE_NULL should be invoked, to be
consistent with the operations on files.

> - default for file handles is MPI_ERRORS_RETURN!

Yes, but this should be ok. For uses of these routines where all of the
requests are MPI I/O requests, the error handler on the file object will
be used. If the parameters to the routine are bad (e.g., a negative
count), then the error handler on MPI_COMM_WORLD will be invoked. That
seems fine.

> and reply this on the mpi-21_at_[hidden] as a Ballot 4 issue (if
> you want)
>
> I expect it is best to give this still valid topic from 2001
> back to the author of the topic :-)
>
> Thanks and best regards
> Rolf
> ___________________________________
> This is an inquiry for a detailed specification for MPI 2.1, Ballot 4.
>
> This is a follow up to:
> Error handler for multiple completions
> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
> with mail discussion in
> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/waitallerr/
> ___________________________________
>
> Best regards
> Rolf
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

William Gropp
Paul and Cynthia Saylor Professor of Computer Science
University of Illinois Urbana-Champaign
* -------------- next part --------------
An HTML attachment was scrubbed...
URL:
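For readers of this thread, a non-normative C sketch of the behavior under
discussion: per-request error checking when a multiple-completion routine
returns MPI_ERR_IN_STATUS (assumes MPI_ERRORS_RETURN is in effect; the
function and buffer names are invented):
----------------------------------------------------------------------
#include <mpi.h>
#include <stdio.h>

/* 'reqs' holds 'n' outstanding requests started elsewhere;
   sketch assumes n <= 16. */
void wait_and_report(int n, MPI_Request reqs[])
{
    MPI_Status stats[16];
    int rc;

    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    rc = MPI_Waitall(n, reqs, stats);
    if (rc == MPI_ERR_IN_STATUS) {
        /* Only in this case are the MPI_ERROR fields meaningful. */
        for (int i = 0; i < n; i++)
            if (stats[i].MPI_ERROR != MPI_SUCCESS)
                fprintf(stderr, "request %d: error code %d\n",
                        i, stats[i].MPI_ERROR);
    }
}
----------------------------------------------------------------------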
From treumann at [hidden] Tue Jan 29 10:38:02 2008
From: treumann at [hidden] (Richard Treumann)
Date: Tue, 29 Jan 2008 11:38:02 -0500
Subject: [mpi-21] MPI_FINALIZE
In-Reply-To: Message-ID:

I do not see a need for clarification. As long as somewhere in the
standard the following points are clear:

1) Only MPI_BARRIER promises barrier behavior
2) Any collective may be implemented with barrier behavior as a side
effect (performance issues make some collectives (eg. MPI_Bcast) unlikely
to be barrier like, but the standard does not rule it out)

Dick

Dick Treumann - MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

mpi-21-bounces_at_[hidden] wrote on 01/28/2008 12:52:54 PM:

> Mainly to Jeff Squyres, Hubert Ritzdorf, Rolf Rabenseifner, Nicholas Nevin,
> Bill Gropp, Dick Treumann
>
> This is a follow up to:
> MPI_FINALIZE in MPI-2 (with spawn)
> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
> with mail discussion in
> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/finalize/
> _________________________________________
>
> If I understand correctly, a clarification is not needed
> because the MPI standard already covers all of this, i.e.
> - MPI_Finalize need not behave like a barrier
> - but it is allowed to have a barrier inside.
> - If the user wants to exit one spawned process while
> the others still continue to work, he/she must
> disconnect this process before calling MPI_Finalize on it.
>
> If somebody wants a clarification to be included into the standard
> and therefore in Ballot 4, please send me your wording
> with the page and line references included.
>
> If all agree that no clarification is needed, then I would finish
> this track.
>
> Best regards
> Rolf
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> _______________________________________________
> mpi-21 mailing list
> mpi-21_at_[hidden]
> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21
* -------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rabenseifner at [hidden] Tue Jan 29 11:07:18 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Tue, 29 Jan 2008 18:07:18 +0100
Subject: [mpi-21] Ballot 4 proposal: INOUT arguments
In-Reply-To: <70CD810D-AEFA-4D78-BA9D-A0EFB2428957@cisco.com> Message-ID:

Proposal:
The current text in MPI 2.0 Sect. 2.3 Procedure Specification, page 6
lines 30-34 read:
There is one special case — if an argument is a handle to an opaque object
(these terms are defined in Section 2.5.1), and the object is updated by
the procedure call, then the argument is marked OUT. It is marked this way
even though the handle itself is not modified — we use the OUT attribute
to denote that what the handle references is updated. Thus, in C++, IN
arguments are either references or pointers to const objects.
but should read:
There is one special case — if an argument is a handle to an opaque object
(these terms are defined in Section 2.5.1), and the object is updated by
the procedure call but the handle itself is not modified, then the
argument is marked IN/INOUT. We use the first part (IN) to specify the use
of the handle and the second part (INOUT) to specify the use of the opaque
object. Thus, in C++, IN arguments are either references or pointers to
const objects, and IN/INOUT arguments are references to const handles to
non-const objects.

In all routines mentioned in the clarification below, the INOUT handle
declaration (in MPI-2.0) and the IN handle declaration (in MPI-1.1) are
modified into an IN/INOUT handle declaration.
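To make the IN/INOUT distinction concrete, a small non-normative C sketch:
the handle value passed to MPI_COMM_SET_NAME never changes, while the
opaque object behind it visibly does:
----------------------------------------------------------------------
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char name[MPI_MAX_OBJECT_NAME];
    int  len;

    MPI_Init(&argc, &argv);

    MPI_Comm comm = MPI_COMM_WORLD;       /* the handle, passed by value */
    MPI_Comm_set_name(comm, "work_comm"); /* the opaque object changes   */

    /* 'comm' still identifies the same communicator; only the object
       it references was updated, which other routines can observe: */
    MPI_Comm_get_name(comm, name, &len);
    printf("communicator is now named '%s'\n", name);

    MPI_Finalize();
    return 0;
}
----------------------------------------------------------------------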
____________________________
Rationale for this proposal:

I have checked the complete MPI 1.1 and 2.0 standard to find all routines
with an argument specification according to the following declaration
pattern:
Language independent interface: INOUT handle
C interface: MPI_handletype handle

We can find this pattern only in MPI-2.0 at the following routines:
MPI_INFO_SET / _DELETE
MPI_xxxx_SET_ERRHANDLER with xxxx=COMM / TYPE / WIN
MPI_GREQUEST_COMPLETE
MPI_xxxx_SET_NAME with xxxx=COMM / TYPE / WIN
MPI_xxxx_SET_ATTR with xxxx=COMM / TYPE / WIN
MPI_xxxx_DELETE_ATTR with xxxx=COMM / TYPE / WIN
MPI_FILE_SET_SIZE / _PREALLOCATE / _SET_INFO / _SET_VIEW
MPI_FILE_WRITE_AT / _WRITE_AT_ALL / _IWRITE_AT
MPI_FILE_READ / _READ_ALL / _WRITE / _WRITE_ALL
MPI_FILE_IREAD / _IWRITE
MPI_FILE_SEEK
MPI_FILE_READ_SHARED / _WRITE_SHARED
MPI_FILE_IREAD_SHARED / _IWRITE_SHARED
MPI_FILE_READ_ORDERED / _WRITE_ORDERED
MPI_FILE_SEEK_SHARED
MPI_FILE_WRITE_AT_ALL_BEGIN / MPI_FILE_WRITE_AT_ALL_END
MPI_FILE_READ_ALL_BEGIN / MPI_FILE_READ_ALL_END
MPI_FILE_WRITE_ALL_BEGIN / MPI_FILE_WRITE_ALL_END
MPI_FILE_READ_ORDERED_BEGIN / MPI_FILE_READ_ORDERED_END
MPI_FILE_WRITE_ORDERED_BEGIN / MPI_FILE_WRITE_ORDERED_END
MPI_FILE_SET_ATOMICITY / _SYNC

All these routines keep the handle itself unchanged, but the opaque object
is modified in a way that can be detected with other MPI routines. For
example, an attribute is cached or changed, a file pointer is moved, or
the content of a file is modified.

The current text in MPI 2.0 Sect. 2.3 Procedure Specification, page 6
lines 30-34 read:
There is one special case — if an argument is a handle to an opaque object
(these terms are defined in Section 2.5.1), and the object is updated by
the procedure call, then the argument is marked OUT. It is marked this way
even though the handle itself is not modified — we use the OUT attribute
to denote that what the handle references is updated. Thus, in C++, IN
arguments are either references or pointers to const objects.
With this definition (that I want to change later on) the use of INOUT is
correct.

In MPI-1.1 we have several such routines, but in all cases, the handle is
declared only as IN. I hope I could find all of them:
MPI_ATTR_PUT / _DELETE
MPI_ERRHANDLER_SET
(these routines are now deprecated, but compare with the use of INOUT in
MPI_COMM_SET_ATTR and MPI_COMM_DELETE_ATTR in MPI-2.0)

The proposal above should remove this inconsistency and should be a basis
for a correct definition of const.
____________________________

I hope that with this proposal the INOUT handle problem can really be
solved.

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From jsquyres at [hidden] Tue Jan 29 11:19:20 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Tue, 29 Jan 2008 12:19:20 -0500
Subject: [mpi-21] MPI_FINALIZE
In-Reply-To: Message-ID: <43DB3F59-3798-4D74-A958-7AC51E8BEB91@cisco.com>

I agree; there is no need for clarification (I already withdrew this
errata request:
http://lists.cs.uiuc.edu/mailman/private/mpi-21/2008-January/000011.html).
On Jan 28, 2008, at 12:52 PM, Rolf Rabenseifner wrote: > Mainly to Jeff Squyres, Hubert Ritzdorf, Rolf Rabenseifner, Nicholas > Nevin, > Bill Gropp, Dick Treumann > > This is a follow up to: > MPI_FINALIZE in MPI-2 (with spawn) > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/finalize/ > _________________________________________ > > When I understand correctly, then a clarification is not needed > because the MPI standard expresses all, i.e. > - MPI_Finalize need not to behave like a barrier > - but is allowed to have a barrier inside. > - If the user wants to exit one spawned process while > the others still continue to work, he/she must > disconnect this process before calling MPI_Finalize on it. > > If somebody wants a clarification to be included into the standard > and therefore in Ballot 4, please send me your wording > with the page and line references included. > > If all agree, that no clarification is needed, then I would finish > this track. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 -- Jeff Squyres Cisco Systems From treumann at [hidden] Tue Jan 29 11:27:12 2008 From: treumann at [hidden] (Richard Treumann) Date: Tue, 29 Jan 2008 12:27:12 -0500 Subject: [mpi-21] MPI_FINALIZE In-Reply-To: Message-ID: I do not see a need for clarification. As long as somewhere in the standard the following points are clear enough: 1) Only MPI_BARRIER promises barrier behavior 2) Any collective may be implemented with barrier behavior as a side effect (performance issues make some collectives (eg. MPI_Bcast) unlikely to be barrier like. An MPI_Allreduce will always be barrier like. Either way though, the standard does not stipulate) Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/28/2008 12:52:54 PM: > Mainly to Jeff Squyres, Hubert Ritzdorf, Rolf Rabenseifner, Nicholas Nevin, > Bill Gropp, Dick Treumann > > This is a follow up to: > MPI_FINALIZE in MPI-2 (with spawn) > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/finalize/ > _________________________________________ > > When I understand correctly, then a clarification is not needed > because the MPI standard expresses all, i.e. > - MPI_Finalize need not to behave like a barrier > - but is allowed to have a barrier inside. > - If the user wants to exit one spawned process while > the others still continue to work, he/she must > disconnect this process before calling MPI_Finalize on it. > > If somebody wants a clarification to be included into the standard > and therefore in Ballot 4, please send me your wording > with the page and line references included. 
> > If all agree, that no clarification is needed, then I would finish > this track. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From treumann at [hidden] Tue Jan 29 11:40:19 2008 From: treumann at [hidden] (Richard Treumann) Date: Tue, 29 Jan 2008 12:40:19 -0500 Subject: [mpi-21] MPI_GET_PROCESSOR_NAME Fortran and C In-Reply-To: Message-ID: Agreed - the migration and checkpoint/restart issues are already clear in MPI 1.1 Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/28/2008 03:24:32 AM: > Dick, > > your right and it is already decided: > MPI 1.1 Sect. 7.1 page 193, lines 13-14: > "This routine returns the name of the processor on which it > was called at the moment of the call." > And lines 22-25: > "Rationale. This function allows MPI implementations that do > process migration to return the current processor. Note that > nothing in MPI requires or defines process migration; this > definition of MPI GET PROCESSOR NAME simply allows such > an implementation. (End of rationale.)" > > I.e., current location, i.e., it may change in case of > check point/restart and all the other reasons you mentioned. > > I would say, that the sentences above are clear enough. > > Okay? > > Best regards > Rolf > > > On Fri, 25 Jan 2008 15:32:07 -0500 > Richard Treumann wrote: > >We also should decide whether every call to MPI_GET_PROCESSOR_NAME across > >the life of the task must return the same name. On very large machines > >running very large jobs, migration of some tasks off of failing nodes and > >on to robust nodes will become more interesting. Checkpoint/restart raises > >the same issue. A restarted job will probably not have the same task to > >node mapping. > > > >We can either require the name to remain constant and allow that it might > >be a "virtual" name or require that it return an "actual" name but allow it > >to change. > > > > Dick > > > >Dick Treumann - MPI Team/TCEM > >IBM Systems & Technology Group > >Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > >Tele (845) 433-7846 Fax (845) 433-8363 > > > > > >mpi-21-bounces_at_[hidden] wrote on 01/25/2008 12:00:42 PM: > > > >> This is a discussion-point for MPI 2.1, Ballot 4. > >> > >> This is a follow up to: > >> MPI_GET_PROCESSOR_NAME and Fortran > >> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > >> errata/index.html > >> with mail discussion in > >> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > >> errata/discuss/procname/ > >> > >> _________________________________________________________________ > >> > >> MPI_GET_PROCESSOR_NAME and Fortran > >> and in C and all MPI_xxxx_GET_NAME routines > >> ------------------------------------------- > >> > >> Summary: Returning strings is defined in MPI_GET_PROCESSOR_NAME > >> and MPI_xxxx_GET_NAME quite different. 
Not all implementations > >> are doing the same with zero-filling. And what they do is > >> at least with MPI_GET_PROCESSOR_NAME different to what > >> the current standard requires. A propose to adapt the standard > >> to the common reasonable implementations. > >> The very short proposal for clarification can be found at the > >> end of this text, see C. Proposal. > >> > >> A. MPI_GET_PROCESSOR_NAME > ... > >> B. MPI_COMM_GET_NAME (and other MPI_xxxx_GET_NAME) > ... > >> C. Proposal: > >> ------------ > >> > >> Add the following sentences to the current interface definitions: > >> ------------------ > >> In C, a \0 is additionally stored at name[resultlen]. resultlen > >> cannot be larger then MPI_MAX_PROCESSOR_NAME-1 > >> (or MPI_MAX_OBJECT_NAME-1). In Fortran, name(resultlen+1:) > >> is filled with spaces. resultlen cannot be larger then > >> MPI_MAX_PROCESSOR_NAME (or MPI_MAX_OBJECT_NAME). > >> ------------------ > >> > >> Typo correction: > >> ---------------- > >> MPI-1.1 Sect. 7.1, page 193, beginning of line 29 reads > >> examine the ouput argument > >> But should read (additional t in output) > >> examine the output argument > >> > >> > >> Okay? > >> _________________________________________________________________ > >> > >> Best regards > >> Rolf > >> > >> PS: Attached my tests and short protocols > >> > >> > >> > >> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > >> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > >> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > >> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > >> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > >> [attachment "mpi_get_xxx_name.tar.gz" deleted by Richard > >> Treumann/Poughkeepsie/IBM] > >_______________________________________________ > >> mpi-21 mailing list > >> mpi-21_at_[hidden] > >> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rabenseifner at [hidden] Tue Jan 29 11:43:02 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Tue, 29 Jan 2008 18:43:02 +0100 Subject: [mpi-21] Intended meaning of zero blocklengths in MPI_Type_indexed/MPI_Type_create_struct In-Reply-To: <20080129140210.GB24258@fourier.it.neclab.eu> Message-ID: This is a proposal for MPI 2.1, Ballot 4. This is a follow up to: Block lengths of zero in MPI Datatypes in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/blklenzero/ ___________________________________ Proposal: Add the following paragraph in MPI 1.1, Sect. 3.12, page 62, after line 2 (i.e., after ... "of the types defined by Typesig."): Most datatype constructors have replication count or block length arguments. Allowed values are nonnegative integers. 
If the value is zero, no elements are generated in the type map nor in the
type signature for this entry in the argument list.

MPI 1.1, Sect 3.12.1, MPI_TYPE_HINDEXED, page 67, lines 22-24 read:
IN count   number of blocks – also number of entries in
           array_of_displacements and array_of_blocklengths (integer)
but should read:
IN count   number of blocks – also number of entries in
           array_of_displacements and array_of_blocklengths
           (nonnegative integer)

MPI 1.1, Sect 3.12.1, MPI_TYPE_STRUCT, page 68, lines 19-22 read:
IN count   number of blocks (integer) – also number of entries in arrays
           array_of_types, array_of_displacements and array_of_blocklengths
IN array_of_blocklengths   number of elements in each block
           (array of integer)
but should read:
IN count   number of blocks (nonnegative integer) – also number of entries
           in arrays array_of_types, array_of_displacements and
           array_of_blocklengths
IN array_of_blocklengths   number of elements in each block
           (array of nonnegative integer)

MPI 2.0, Sect 4.14.1, MPI_TYPE_CREATE_HINDEXED, page 66, lines 36-38 read:
IN count   number of blocks – also number of entries in
           array_of_displacements and array_of_blocklengths (integer)
but should read:
IN count   number of blocks – also number of entries in
           array_of_displacements and array_of_blocklengths
           (nonnegative integer)

MPI 2.0, Sect 4.14.1, MPI_TYPE_CREATE_STRUCT, page 67, lines 14-18 read:
IN count   number of blocks (integer) – also number of entries in arrays
           array_of_types, array_of_displacements and array_of_blocklengths
IN array_of_blocklengths   number of elements in each block
           (array of integer)
but should read:
IN count   number of blocks (nonnegative integer) – also number of entries
           in arrays array_of_types, array_of_displacements and
           array_of_blocklengths
IN array_of_blocklengths   number of elements in each block
           (array of nonnegative integer)
___________________________________

Rationale for this clarification and modification:
The outcome of zero-count entries in the type map was not defined. For
this, a clarification was needed.
The interfaces of HINDEXED and STRUCT were inconsistent with the rest of
the derived datatype routines. This was probably due to editing errors.
A meaning of negative values was never defined nor intended. Therefore,
portable applications could not use negative values. These editing errors
are fixed by this proposal.
___________________________________

I hope I could catch all the problems Jesper mentioned in his mail.

Best regards
Rolf

On Tue, 29 Jan 2008 15:02:10 +0100 Jesper Larsson Traeff wrote:
>
>Dear Rolf,
>
>(thanks for all the work you are putting into this!)
>
>On Tue, Jan 29, 2008 at 12:47:14PM +0100, Rolf Rabenseifner wrote:
>> Reply is off-line from the mailing list.
>>
>> Dear Jesper and Bill,
>>
>> I believe that this is not a topic for the one and final shot
>> of MPI 2.1 Ballot 4 in the next (March 2008) meeting.
>> Therefore, I would propose to handle it in MPI 2.2.
>>
>In the MPI-1 document (p.66, p.68) index and struct types are treated
>differently. For index, blocklength is an array of non-negative integers
>(p.66, line 12), for struct, just an array of integers (p. 68, line 22).
>Same inconsistency for the new MPI-2 types, p.66 and p.67.
>I believe that this should at least be made consistent.
>
>I would also be in favor of a small remark that 0 blocklengths are allowed,
>with the remarks that these are (of course) not included in the type map.
>
>Btw, I will be coming to the meeting in March
>
>best regards
>
>Jesper
>
>> Jesper, you have not been at Jan meeting.
Therefore a short background. >> We decided, that MPI 2.1 is mainly the combining of the MPI 1.1 >> and 2.0 documents to one MPI 2.1 standard. >> Additionally the existing MPI 1.1 errata, and the MPI-2 electonically >> voted Ballots 1 & 2, and the Jan. meeting Ballot 3 and the March meeting >> Ballot 4 are included. Topics that do not get final official reading >> in these meetings are forwarded to MPI 2.2. >> >> Therefore I try to have mainly items in Ballot 3 and 4 that do not >> need longer discussions and that are mainly obvious, i.e., that can >> be shown on one or two slides together with needed background infos. >> >> Implication for me: I do not put your item on my MPI 2.1 agenda. >> >> Okay? >> >> Best regards >> Rolf >> >> On Mon, 23 Apr 2007 17:10:35 +0200 >> Jesper Larsson Traeff wrote: >> > >> >(ps: also sent to mpi-core_at_[hidden] - did anybody get it from there?) >> > >> >Dear MPI forum, >> > >> >an old discussion (July-September 2001) concluded that blocklengths of >> >value zero are allowed in MPI indexed and struct types. Reading the >> >standard, both text and specification (p.66ff), such "blocks" generate >> >no entries in the corresponding type maps, at least this seems to be >> >the idea - and implementations like mpich2 follow this interpretation, >> >while the original MPICH for instance did not, and maybe with some right? >> > >> >The question is what a derived datatype is intended to specify. If >> >it specifies just a type map, then clearly blocks of length zero should >> >simply be treated as not being there, and the resulting type map alone >> >defines the extent, lower and upper bounds of the datatype. But if a >> >derived datatype is intended as specifying a layout in memory, zero >> >length blocks could possibly have another meaning (as markers of locations >> >in the layout), and in addition to the type map affect the extent, >> >lower and upper bound of the datatype. >> > >> > >> >A concrete example >> > >> > displacements(1)=1 >> > displacements(2)=0 >> > lengths(1)=1 >> > lengths(2)=0 >> > >> > call mpi_type_indexed(2,lengths,displacements,MPI_INTEGER,newtype,code) >> > call mpi_type_commit(newtype,code) >> > >> >In the first interpretation (datatypes specify type map) the above has >> >extent 4 (assuming MPI_INTEGER is 4 bytes) and lower bound 4 >> > >> >In the second interpretation (datatypes specify layout) the above would >> >have lower bound 0 and extent 8 >> > >> > >> >The point is that in the first interpretation (which is the one at least >> >implicitly prescribed by the standard) the layout that was possibly intended >> >by the user ("something starting at displacement soandso" - irrespective of >> >whether there is actual data at displacement soandso) can be quite >> >arbitrarily collapsed. In an application where datatypes are generated >> >automatically, it might be a high burden on the user to take care of >> >the special cases arising by some blocklenghts just happening to >> >be zero (either he would have to resize, or to put in explcit >> >MPI_LB/MPI_UB markers). >> > >> >A point could therefore be made for treating blocks of length >> >zero still as "markers" in the specification of a layout - as did for >> >instance the original MPICH implementation. Some changes in the text >> >would be necessary to put this interpretation on a firm foundation. >> >That is not the intention here, but only to see if there are opinions >> >on this issue? 
A suggestion would be to add a short paragraph stating >> >that zero blocklengths are allowed, and what their effects and side-effects >> >are. >> > >> >Jesper >> > >> > >> >Minor clarifications: >> > >> >p.63, l.39, p.66, l.12, and p.67, l.25 have "nonnegative integer" >> >blocklengths, but p. 68, l.22 only has "integer". >> >To be consistent P. 68, l.22 should also be "nonnegative integer" >> >(negative blocklenghts don't make sense) >> > >> >(as already pointed out by Steven Huss-Lederman) The specification >> >p.66, l.46ff and similarly for MPI_Type_hindexed and MPI_Type_struct >> >makes little sense (well, is plain wrong) when some B[i] is actually zero. >> >First, l.46 forces some elements into the type map [ (type_0,disp_0+D[0]) ,,,] >> >that shouldn't be there (blocks of length zero shouln't give rise to any >> >elements in the type map), and also l.48 has no real meaning >> >when B[0]. Better would be to add two lines after l.48 giving the general >> >case: >> > >> >(type_0,disp_0+D[i]*ex),...,(type_{n-1},disp_{n-1}+D[i]*ex),..., >> >(type_0,disp_0+(D[i]+B[i]-1)*ex,...,(type_{n-1},disp_{n-1}+(D[i]+B[i]-1)*ex, >> > >> >with the proviso that this is empty if B[i]==0 (or even more formally >> >precise/correct) >> > >> >> >> >> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] >> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 >> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 >> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner >> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From treumann at [hidden] Tue Jan 29 12:26:04 2008 From: treumann at [hidden] (Richard Treumann) Date: Tue, 29 Jan 2008 13:26:04 -0500 Subject: [mpi-21] Intended meaning of zero blocklengths in MPI_Type_indexed/MPI_Type_create_struct In-Reply-To: Message-ID: How about this? Most datatype constructors have replication count or block length arguments. Allowed values are nonnegative integers. If the value is zero, no elements are generated in the type map and there is no effect on datatype bounds or extent. I think it is bounds and extent that were questioned. I think it is self evident that a zero value cannot affect the type signature. Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/29/2008 12:43:02 PM: > This is a proposal for MPI 2.1, Ballot 4. > > This is a follow up to: > Block lengths of zero in MPI Datatypes > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/blklenzero/ > ___________________________________ > > Proposal: > Add the following paragraph in MPI 1.1, Sect. 3.12, page 62, > after line 2 (i.e., after ... "of the types defined by Typesig."): > > Most datatype constructors have replication count or block > length arguments. Allowed values are nonnegative integers. 
> If the value is zero, no elements are generated in the type map
> nor in the type signature for this entry in the argument list.
>
> MPI 1.1, Sect 3.12.1, MPI_TYPE_HINDEXED, page 67, line 22-24 read:
> IN count number of blocks – also number of entries in
> array_of_displacements and array_of_blocklengths
> (integer)
> but should read:
> IN count number of blocks – also number of entries in
> array_of_displacements and array_of_blocklengths
> (nonnegative integer)
>
> MPI 1.1, Sect 3.12.1, MPI_TYPE_STRUCT, page 68, line 19-22 read:
> IN count number of blocks (integer) – also number
> of entries in arrays array_of_types,
> array_of_displacements and array_of_blocklengths
> IN array of blocklength number of elements in each
> block (array of integer)
> but should read:
> IN count number of blocks (nonnegative integer) – also number
> of entries in arrays array_of_types,
> array_of_displacements and array_of_blocklengths
> IN array of blocklength number of elements in each
> block (array of nonnegative integer)
>
> MPI 2.0, Sect 4.14.1, MPI_TYPE_CREATE_HINDEXED, page 66, line 36-38 read:
> IN count number of blocks – also number of entries in
> array_of_displacements and array_of_blocklengths
> (integer)
> but should read:
> IN count number of blocks – also number of entries in
> array_of_displacements and array_of_blocklengths
> (nonnegative integer)
>
> MPI 2.0, Sect 4.14.1, MPI_TYPE_CREATE_STRUCT, page 67, line 14-18 read:
> IN count number of blocks (integer) – also number
> of entries in arrays array_of_types,
> array_of_displacements and array_of_blocklengths
> IN array of blocklength number of elements in each
> block (array of integer)
> but should read:
> IN count number of blocks (nonnegative integer) – also number
> of entries in arrays array_of_types,
> array_of_displacements and array_of_blocklengths
> IN array of blocklength number of elements in each
> block (array of nonnegative integer)
> ___________________________________
>
> Rationale for this clarification and modification:
> The outcome of zero-count entries in the type map was not defined.
> For this, a clarification was needed.
> The interfaces of HINDEXED and STRUCT were inconsistent with the rest
> of the derived datatype routines. This was probably due to editing errors.
> A meaning of negative values was never defined nor intended.
> Therefore, portable applications could not use negative values.
> These editing errors are fixed by this proposal.
> ___________________________________
>
> I hope I could catch all the problems Jesper mentioned in his mail.
>
> Best regards
> Rolf
>
> On Tue, 29 Jan 2008 15:02:10 +0100
> Jesper Larsson Traeff wrote:
> >
> >Dear Rolf,
> >
> >(thanks for all the work you are putting into this!)
> >
> >On Tue, Jan 29, 2008 at 12:47:14PM +0100, Rolf Rabenseifner wrote:
> >> Reply is off-line from the mailing list.
> >>
> >> Dear Jesper and Bill,
> >>
> >> I believe that this is not a topic for the one and final shot
> >> of MPI 2.1 Ballot 4 in the next (March 2008) meeting.
> >> Therefore, I would propose to handle it in MPI 2.2.
> >>
> >In the MPI-1 document (p.66, p.68) index and struct types are treated
> >differently. For index, blocklength is an array of non-negative integers
> >(p.66, line 12); for struct, just an array of integers (p.68, line 22).
> >Same inconsistency for the new MPI-2 types, p.66 and p.67.
> >I believe that this should at least be made consistent.
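Jesper's Fortran example translates into the following minimal C sketch (an illustration only, assuming a 4-byte int and the MPI-2 routine MPI_Type_get_extent). Under the type-map reading it prints lb=4 extent=4; the layout reading would instead give lb=0 extent=8:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int          lengths[2]       = {1, 0};
        int          displacements[2] = {1, 0};  /* in multiples of the extent of MPI_INT */
        MPI_Datatype newtype;
        MPI_Aint     lb, extent;

        MPI_Init(&argc, &argv);
        MPI_Type_indexed(2, lengths, displacements, MPI_INT, &newtype);
        MPI_Type_commit(&newtype);
        /* the zero-length block contributes no type-map entries, so only
           the element at displacement 1 determines the bounds */
        MPI_Type_get_extent(newtype, &lb, &extent);
        printf("lb=%ld extent=%ld\n", (long)lb, (long)extent);
        MPI_Type_free(&newtype);
        MPI_Finalize();
        return 0;
    }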
> >
> >I would also be in favor of a small remark that 0 blocklengths are allowed,
> >with the remark that these are (of course) not included in the type map.
> >
> >Btw, I will be coming to the meeting in March.
> >
> >best regards
> >
> >Jesper
> >
> >> [... Rolf's short background on the MPI 2.1 plan and Jesper's original
> >> mail of 23 Apr 2007, both quoted in full earlier in this thread ...]
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From ritzdorf at [hidden] Tue Jan 29 14:46:53 2008 From: ritzdorf at [hidden] (Hubert Ritzdorf) Date: Tue, 29 Jan 2008 21:46:53 +0100 Subject: [mpi-21] MPI_FINALIZE In-Reply-To: <9AF7F463-1B40-4613-AD62-B06E3A9A6A6D@mcs.anl.gov> Message-ID: <479F90BD.3050806@it.neclab.eu>

I agree.

Hubert

William Gropp wrote:
> I think that we're fine as is, and can move this to the discussed-but-
> require-no-change page.
>
> Bill
>
> On Jan 28, 2008, at 11:52 AM, Rolf Rabenseifner wrote:
>
>> Mainly to Jeff Squyres, Hubert Ritzdorf, Rolf Rabenseifner, Nicholas Nevin,
>> Bill Gropp, Dick Treumann
>>
>> This is a follow up to:
>> MPI_FINALIZE in MPI-2 (with spawn)
>> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
>> with mail discussion in
>> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/finalize/
>> _________________________________________
>>
>> If I understand correctly, then no clarification is needed,
>> because the MPI standard already expresses everything, i.e.,
>> - MPI_Finalize need not behave like a barrier,
>> - but it is allowed to have a barrier inside.
>> - If the user wants to exit one spawned process while
>> the others still continue to work, he/she must
>> disconnect this process before calling MPI_Finalize on it.
>>
>> If somebody wants a clarification to be included into the standard,
>> and therefore in Ballot 4, please send me your wording
>> with the page and line references included.
>>
>> If all agree that no clarification is needed, then I will close
>> this track.
>>
>> Best regards
>> Rolf
>>
>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
>> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
>> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
>> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
>
> William Gropp
> Paul and Cynthia Saylor Professor of Computer Science
> University of Illinois Urbana-Champaign
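The disconnect-before-finalize rule that Rolf summarizes in the quoted mail amounts to the following pattern in a spawned child. This is a minimal sketch of that pattern only (error handling omitted), not text proposed for the standard:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm parent;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);
        if (parent != MPI_COMM_NULL) {
            /* ... do this process's share of the work ... */
            MPI_Comm_disconnect(&parent);  /* no longer connected to the parent job */
        }
        MPI_Finalize();                    /* may return while the others keep running */
        return 0;
    }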
From rabenseifner at [hidden] Tue Jan 29 16:36:36 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Tue, 29 Jan 2008 23:36:36 +0100 Subject: [mpi-21] Intended meaning of zero blocklengths in MPI_Type_indexed/MPI_Type_create_struct In-Reply-To: Message-ID:

Yes, agreed - your text fits the clarification needs better.

Rolf

On Tue, 29 Jan 2008 13:26:04 -0500 Richard Treumann wrote:
>How about this?
>
> Most datatype constructors have replication count or block
> length arguments. Allowed values are nonnegative integers.
> If the value is zero, no elements are generated in the type map
> and there is no effect on datatype bounds or extent.
>
>I think it is bounds and extent that were questioned. I think it is self
>evident that a zero value cannot affect the type signature.
>
>Dick Treumann - MPI Team/TCEM
>IBM Systems & Technology Group
>Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
>Tele (845) 433-7846 Fax (845) 433-8363
>
>mpi-21-bounces_at_[hidden] wrote on 01/29/2008 12:43:02 PM:
>> This is a proposal for MPI 2.1, Ballot 4.
>>
>> [... the full Ballot 4 proposal and the preceding exchange with Jesper,
>> quoted in full earlier in this thread ...]
>>
>> Best regards
>> Rolf
>>
>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
>> High Performance Computing Center (HLRS) .
phone ++49(0)711/685-65530 >> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 >> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner >> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) >> _______________________________________________ >> mpi-21 mailing list >> mpi-21_at_[hidden] >> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From rabenseifner at [hidden] Wed Jan 30 08:39:17 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Wed, 30 Jan 2008 15:39:17 +0100 Subject: [mpi-21] Reading of Chapter 4 In-Reply-To: <20080109110724.GA2915@fourier.it.neclab.eu> Message-ID: Thank you for your detailed reading of MPI 2.0 Chapter 4. Some comments and answers are inline below. Most will be for MPI 2.1 Ballot 4. Therefore, I switched the mail distribution to mpi-21_at_[hidden] On Wed, 9 Jan 2008 12:07:24 +0100 Jesper Larsson Traeff wrote: > >Dear all, > >I have read Chapter 4 - here are my corrections, comments and suggestions >(only things not already in the errata sheet from Jan 30, 2002). There is >little of great substance. > >I have prefixed the comments as follows: >(typo) - uncontroversial typo/misspelling, should be corrected >(suggestion) - suggestion for alternative wording/extra comment >(addition) - suggestion for addition >(comment) - my comment, not intended for standard document > >General typo: is "implementor" (with an "o") a word in US English? >Shouldn't it be "Implementers"? (as in "Advice to implementers") > >General comments: the chapter is quite awful, and in the longer run I >would be in favor of having the parts (e.g. on datatypes) moved to >the chapters where they belong properly. I think Rolf will be doing some >of this. >I would also suggest having a special chapter in the new 2.1 >document summarizing all deprecated functionality (with a short rationale >as to why the function was deprecated) > >For mnemonic reasons, I would suggest having all datatype constructors >being of the form > >MPI_TYPE_CREATE_XXX > >and thus deprecate MPI_TYPE_CONTIGUOUS and MPI_TYPE_VECTOR as well. >It could also be argued to have an MPI_TYPE_CREATE_HINDEXED_BLOCK as >counterpart to MPI_TYPE_CREATE_INDEXED_BLOCK (but I am not making a >strong suggestion). > >best regards > >Jesper > >Chapter 4: >---------- > >Page 37, line 44-46: >(suggestion) drop, since this is largely a repetition of lines 34-36 >(except the statement "It is not suggested that this be the only way..." --> Ballot 4 > Page 43, line 33-34: >(suggestion) change "It consists of (key,value) pairs" to >"It stores a(n unordered) set of (key,value) pairs" to emphasize that the >info object is a kind of dictionary (data structure) --> Ballot 4 > >Page 43, line 34: >(suggestion) replace "may have only one value" by "can have only one value" --> Ballot 4 > >Page 43, line 37: > >(addition) "An empty info object is denoted by MPI_INFO_NULL." --> Ballot 4 Page 43, line 37: >(comment/discussion) The default is that the "null handle" is not >allowed as an IN argument (see MPI 1, Page 8, line 28), unless >explicitly allowed. 
This explicit permission is missing at some >places in MPI-2 (I found two): MPI_WIN_CREATE, page 111, and >MPI_FILE_SET_INFO, page 218. Is this default interpretation also intended >for MPI_INFO_NULL? A sentence on this would be in order, since the >info object is introduced here:" It is erroneous to pass MPI_INFO_NULL >as an IN argument, unless where explicitly allowed" (or whatever appropriate >depending on the intended interpretation). No, because MPI-1.1, Sect. 2.4.1, Page 8, line 28 is no longer valid. Whole Chapter 2 is substituted by MPI 2.0, Chapter 2. And in MPI 2.0, Sect. 2.5.1, Pages 8-9 the cited paragraph is removed. >Page 47, line 1: >(suggestion) change "Keys are numbered..." to >"At any point in time, keys are numbered consecutively from 0 to N-1, where..." If it is okay, I would ignore this, because it isn't a clarification. >Page 47, line 32: >(addition) >"Advice to implementers: The info object is a dictionary, and can be >implemented by a suitable data structure. Since the number of (key,value) >pairs can normally be assumed to be small a linear list implementation >will often suffice." This must be handled in the discussion thread "What info keys can be set?" on the errata page and http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/infoset/ I will continue on that thread with a ballot 4 proposal. >Page 49, line 21: >(addition) to align with the Fortran example 4.7, add C comment >"/* no memory is allocated */" --> Ballot 4 > >Page 49, line 22: >(addition) to make consistent with Fortran example, add line: >/* memory allocated */ --> Ballot 4 > >Page 50, line 9: >(type) remove first "in" --> Ballot 4 > >Page 55, line 26: >(comment) is this good Fortran style - scalars, e.g. 5 and MPI_REAL, >as implicit arrays of size 1? For a C programmer this is hurtful, and >some Fortran compilers do complain. I would suggest declaring the >proper arrays, so as to make the example more readily understandable, >also to the Fortran non-expert. Same goes for the similar Example 4.14 >on Page 60. --> Ballot 4 with following details: MPI 2.0, Sect. 4.12.6, Exa. 4.12, page 55, line 21-22 read: INTEGER TYPE, IERR INTEGER (KIND=MPI_ADDRESS_KIND) ADDR but should read: INTEGER TYPE, IERR, AOBLEN(1), AOTYPE(1) INTEGER (KIND=MPI_ADDRESS_KIND) AODISP(1) MPI 2.0, Sect. 4.12.6, Exa. 4.12, page 55, line 25-26 read: CALL MPI_GET_ADDRESS( R, ADDR, IERR) CALL MPI_TYPE_CREATE_STRUCT(1, 5, ADDR, MPI_REAL, TYPE, IERR) but should read: AOBLEN(1) = 5 CALL MPI_GET_ADDRESS( R, AODISP(1), IERR) AOTYPE(1) = MPI_REAL CALL MPI_TYPE_CREATE_STRUCT(1, AOBLEN(1),AODISP(1),AOTYPE(1), TYPE, IERR) MPI 2.0, Sect. 4.12.10, Exa. 4.14, page 60, line 31-32 read: INTEGER TYPE, IERR, MYRANK INTEGER (KIND=MPI_ADDRESS_KIND) ADDR but should read: INTEGER TYPE, IERR, MYRANK, AOBLEN(1), AOTYPE(1) INTEGER (KIND=MPI_ADDRESS_KIND) AODISP(1) MPI 2.0, Sect. 4.12.10, Exa. 4.14, page 55, line 35-36 read: CALL MPI_GET_ADDRESS( R, ADDR, IERR) CALL MPI_TYPE_CREATE_STRUCT(1, 5, ADDR, MPI_REAL, TYPE, IERR) but should read: AOBLEN(1) = 5 CALL MPI_GET_ADDRESS( R, AODISP(1), IERR) AOTYPE(1) = MPI_REAL CALL MPI_TYPE_CREATE_STRUCT(1, AOBLEN(1),AODISP(1),AOTYPE(1), TYPE, IERR) > >Page 56, line 29: >(typo) "assciated" should be "associated" --> Ballot 4 > >Page 67, line 18: >(suggestion) change "array of integer" to "array of nonnegative integer" >(shouldn't it be "integers"?) - to make consistent with HVECTOR and >HINDEXED. 
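A minimal C sketch of the linear-list dictionary suggested in the Page 47 advice above may help; the names info_entry, copy_string, and info_set are invented for this illustration and belong to no real MPI implementation:

    #include <stdlib.h>
    #include <string.h>

    struct info_entry {
        char              *key;
        char              *value;
        struct info_entry *next;
    };

    static char *copy_string(const char *s)
    {
        char *p = malloc(strlen(s) + 1);   /* allocation checks omitted in this sketch */
        strcpy(p, s);
        return p;
    }

    /* Insert or overwrite: a key can have only one value. */
    static void info_set(struct info_entry **head, const char *key, const char *value)
    {
        struct info_entry *e;
        for (e = *head; e != NULL; e = e->next) {
            if (strcmp(e->key, key) == 0) {   /* key already present: replace the value */
                free(e->value);
                e->value = copy_string(value);
                return;
            }
        }
        e = malloc(sizeof *e);                /* unknown key: new pair at the list head */
        e->key   = copy_string(key);
        e->value = copy_string(value);
        e->next  = *head;
        *head    = e;
    }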
>(comment) There was an email discussion about 0-blocklengths, and >although it did not go very far, I think the conclusion was that >0-blocklengths are allowed and give rise to no element in the type >map. I think that some changes in the MPI-1 rules for how type maps >are generated by the INDEXED and STRUCT constructors are needed to >make this precise and correct. I have put this into the thread "Block lengths of zero in MPI Datatypes" http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/blklenzero/ >Page 70-77: >(comment) Two pictures illustrating the subarray and distributed array >constructors would be really helpful. May be nice to have. > >Page 74, line 9: >(typo) "is" missing in "it erroneous" --> Ballot 4 > >Chapter 5: >---------- > >Page 85, line 25: >(typo) "as the as the" should be "as the" --> Ballot 4 > Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From rabenseifner at [hidden] Wed Jan 30 11:07:58 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Wed, 30 Jan 2008 18:07:58 +0100 Subject: [mpi-21] MPI 2.1 protocol from Jan. 2008 meeting of the MPI Forum Message-ID: Dear all, I have done the protocol of the MPI 2.1 sessions of the Jan. 2008 meeting in form of a copy of the slides. I modified the slides in the way that they already reflect the decisions made by the Forum, e.g., names are now MPI 2.1 and MPI 1.3. Bill has stored the protocol on the web-pages at http://mpi-forum.cs.uiuc.edu/meetings/2008/JAN-14/mpi_21_at_MPIforum_2008-01.pdf Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From rabenseifner at [hidden] Wed Jan 30 11:33:34 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Wed, 30 Jan 2008 18:33:34 +0100 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation Message-ID: This is a proposal for MPI 2.1, Ballot 4. I'm asking especially Linda Stanberry, Bill Gropp, rajeev Thakur, Dick Treumann, Raja Daoud, the participants of the email-discussion in 1999, to review this proposal. This is a follow up to: What info keys can be set? in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/infoset/ ___________________________________ Background: The sentence MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38 "If a function does not recognize a key, it will ignore it, unless otherwise specified." was interpreted in two different ways: 1. This sentence is only valid in the routines in other chapters (e.g. processs creation and management, one-sided communication, parallel file I/O) where infos may be used to specify portable and implementation defined hints. 2. This interpretation is also valid for MPI_INFO_DELETE, MPI_INFO_SET, MPI_INFO_GET, and MPI_INFO_DUP. The following proposal ist based on Option 1. 
Option 2 is rejected due to the reason shown after the proposal. ___________________________________ Proposal: MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38 read: If a function does not recognize a key, it will ignore it, unless otherwise specified. but should read: If a function in another section of the MPI standard does not recognize a key, it will ignore it, unless otherwise specified. The routines in this section. The functions in this section must not ignore any (key,value) pair. Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new paragraph: Advice to implementors. For optimization, an MPI implementation may already sort out which (key,value) pair can be recognized for use in other chapters (e.g., in processs creation and management, one-sided communication, parallel file I/O) to guarantee a fast access to the appropriate information when used in routines of those chapters. For porpose of MPI_INFO_GET_NKEYS, MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the implementation must still keep also all other (key,value) pairs that cannot be recognized by that MPI implementation. (End of advice to implementors.) ________________________________ Rationale for this clarification: The first of the two interpretation options mentioned in the background information is the only valid interpretation when we assume, that layered implementation of parts of the MPI-2 standard should be possible. This was a goal of the MPI-2 Forum and the MPI-2.0 specification. ___________________________________ Comment: In the discussion (see the mails from 1999) one can clearly see, that an implementation of option 2 (done by IBM) cannot coexist with a layered implementation of MPI parallel file I/O, because MPI parallel file I/O routine need a different implementation of the INFO routines. Other chapters of MPI (e.g. 1-sided) need the original INF implementation. Two different INFO implementations cannot coexist in the same MPI library. We could see the ouutcome of such problems with MPI_Wait & co. as long as a generalized request concept was not available. ROMIO had to define non-standard MPIO_Wait & co. routines. And they persist for a long time! Without the clarification above, we have the same problem with the INFO handling. Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From bronis at [hidden] Wed Jan 30 12:28:54 2008 From: bronis at [hidden] (Bronis R. de Supinski) Date: Wed, 30 Jan 2008 10:28:54 -0800 (PST) Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Rolf: Re: > This is a proposal for MPI 2.1, Ballot 4. > > I'm asking especially > Linda Stanberry, Bill Gropp, rajeev Thakur, Dick Treumann, Raja Daoud, > the participants of the email-discussion in 1999, to review this proposal. Linda has retired. I do remember this issue. Our position is unchanged - your option 1, which you suggest adopting, is clearly the right behavior to standardize. A couple minor tweaks to the proposed wording below. > This is a follow up to: > What info keys can be set? 
> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/infoset/ > ___________________________________ > Background: > The sentence MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38 > > "If a function does not recognize a key, it will ignore it, > unless otherwise specified." > > was interpreted in two different ways: > 1. This sentence is only valid in the routines in other chapters > (e.g. processs creation and management, one-sided communication, > parallel file I/O) where infos may be used to specify portable > and implementation defined hints. > 2. This interpretation is also valid for MPI_INFO_DELETE, MPI_INFO_SET, > MPI_INFO_GET, and MPI_INFO_DUP. > The following proposal ist based on Option 1. > Option 2 is rejected due to the reason shown after the proposal. > ___________________________________ > > Proposal: > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38 read: > If a function does not recognize a key, > it will ignore it, unless otherwise specified. > but should read: > If a function in another section of the MPI standard > does not recognize a key, > it will ignore it, unless otherwise specified. > The routines in this section. > The functions in this section must not ignore any (key,value) pair. Something was garbled. Change to: If a function in any other section of the MPI standard does not recognize a key, it will ignore it, unless otherwise specified. The functions in this section must not ignore any (key,value) pair. I'm not sure that "section" is well defined. Perhaps it would be best to list the functions that cannot ignore unrecognized pairs explicitly. Opinions? > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > paragraph: > Advice to implementors. > For optimization, an MPI implementation may already sort out > which (key,value) pair can be recognized for use in other chapters > (e.g., in processs creation and management, one-sided communication, > parallel file I/O) to guarantee a fast access to the appropriate > information when used in routines of those chapters. > For porpose of MPI_INFO_GET_NKEYS, MPI_INFO_GET_NTHKEY, > MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the implementation > must still keep also all other (key,value) pairs that cannot be > recognized by that MPI implementation. > (End of advice to implementors.) Tweaking: Advice to implementors. An MPI implementation may restrict (key,value) pairs that are valid for use in routines from other chapters (e.g., in processs creation and management, one-sided communication, or parallel file I/O) to guarantee fast access to the appropriate information. For the purpose of MPI_INFO_GET_NKEYS, MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the implementation must retain all (key,value) pairs so that layered functionality can also use the Info object. (End of advice to implementors.) "For optimization" and "fast access" are redundant. Added rationalization for retaining the others pairs. Various other grammar and spelling changes suggested. Bronis > ________________________________ > Rationale for this clarification: > > The first of the two interpretation options mentioned in the > background information is the only valid interpretation > when we assume, that layered implementation of parts of > the MPI-2 standard should be possible. > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. 
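What the clarified wording guarantees to layered libraries can be shown in a few lines of C; the key name below is invented, and the sketch assumes only that the MPI implementation does not recognize it:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        char     value[64];
        int      flag;

        MPI_Init(&argc, &argv);
        MPI_Info_create(&info);
        MPI_Info_set(info, "mylib_cache_policy", "write_back");  /* unknown to MPI */
        /* ... hand info to MPI_File_open or similar; MPI ignores the key ... */
        MPI_Info_get(info, "mylib_cache_policy", sizeof(value) - 1, value, &flag);
        printf("retained=%d value=%s\n", flag, flag ? value : "(none)");
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }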
> ___________________________________ > > Comment: > In the discussion (see the mails from 1999) > one can clearly see, that an implementation of option 2 (done by IBM) > cannot coexist with a layered implementation of MPI parallel > file I/O, because MPI parallel file I/O routine need a different > implementation of the INFO routines. Other chapters of MPI (e.g. 1-sided) > need the original INF implementation. > Two different INFO implementations cannot coexist in the same MPI library. > > We could see the ouutcome of such problems with MPI_Wait & co. > as long as a generalized request concept was not available. > ROMIO had to define non-standard MPIO_Wait & co. routines. > And they persist for a long time! > > Without the clarification above, we have the same problem with the > INFO handling. > > Best regards > Rolf > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > From treumann at [hidden] Wed Jan 30 14:05:26 2008 From: treumann at [hidden] (Richard Treumann) Date: Wed, 30 Jan 2008 15:05:26 -0500 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: IBM long ago chose the real-politic approach of supporting both interpretations. By default we restrict INFO objects to key:value pairs that mean something to our functions that take INFO arguments and quietly discard key:value pairs we do not recognize. As far as I know this has not been a problem for anyone writing portable MPI applications. I can see that hint filtering would restrict someone who wants to use an INFO for layered libraries. For that reason, any user who wishes to use an INFO object as a general purpose cache for key:value pairs that mean nothing to our MPI implementation can set an environment variable or command line option (MP_HINTS_FILTERED/-hints_filtered = no) and get the behavior Linda was expecting. I do have any problem with clarifying the standard to say that an MPI_Info object should be prepared to manage unrecognized key:value string pairs. I suggest changing: ============== If a function does not recognize a key, it will ignore it, unless otherwise specified. If an implementation recognizes a key but does not recognize the format of the corresponding value, the result is undefined. to An info object is a cache for arbitrary ( key, value) pairs. Each MPI function which takes hints in the form of an MPI_Info must be prepared to ignore any key it does not recognize. ============ I think this approach implies that any statements about what happens when a recognized key has a garbage value belong with the function that recognizes the key. By insulating the info description from the descriptions of particular MPI functions that take MPI_Info hints, we leave people the option of using info objects as hints for third party libraries that work with MPI but are not part of the MPI API. 
Dick Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/30/2008 12:33:34 PM: > This is a proposal for MPI 2.1, Ballot 4. > > I'm asking especially > Linda Stanberry, Bill Gropp, rajeev Thakur, Dick Treumann, Raja Daoud, > the participants of the email-discussion in 1999, to review this proposal. > > This is a follow up to: > What info keys can be set? > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/infoset/ > ___________________________________ > Background: > The sentence MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38 > > "If a function does not recognize a key, it will ignore it, > unless otherwise specified." > > was interpreted in two different ways: > 1. This sentence is only valid in the routines in other chapters > (e.g. processs creation and management, one-sided communication, > parallel file I/O) where infos may be used to specify portable > and implementation defined hints. > 2. This interpretation is also valid for MPI_INFO_DELETE, MPI_INFO_SET, > MPI_INFO_GET, and MPI_INFO_DUP. > The following proposal ist based on Option 1. > Option 2 is rejected due to the reason shown after the proposal. > ___________________________________ > > Proposal: > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38 read: > If a function does not recognize a key, > it will ignore it, unless otherwise specified. > but should read: > If a function in another section of the MPI standard > does not recognize a key, > it will ignore it, unless otherwise specified. > The routines in this section. > The functions in this section must not ignore any (key,value) pair. > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > paragraph: > Advice to implementors. > For optimization, an MPI implementation may already sort out > which (key,value) pair can be recognized for use in other chapters > (e.g., in processs creation and management, one-sided communication, > parallel file I/O) to guarantee a fast access to the appropriate > information when used in routines of those chapters. > For porpose of MPI_INFO_GET_NKEYS, MPI_INFO_GET_NTHKEY, > MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the implementation > must still keep also all other (key,value) pairs that cannot be > recognized by that MPI implementation. > (End of advice to implementors.) > ________________________________ > Rationale for this clarification: > > The first of the two interpretation options mentioned in the > background information is the only valid interpretation > when we assume, that layered implementation of parts of > the MPI-2 standard should be possible. > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. > ___________________________________ > > Comment: > In the discussion (see the mails from 1999) > one can clearly see, that an implementation of option 2 (done by IBM) > cannot coexist with a layered implementation of MPI parallel > file I/O, because MPI parallel file I/O routine need a different > implementation of the INFO routines. Other chapters of MPI (e.g. 1-sided) > need the original INF implementation. > Two different INFO implementations cannot coexist in the same MPI library. > > We could see the ouutcome of such problems with MPI_Wait & co. 
> as long as a generalized request concept was not available. > ROMIO had to define non-standard MPIO_Wait & co. routines. > And they persist for a long time! > > Without the clarification above, we have the same problem with the > INFO handling. > > Best regards > Rolf > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From bronis at [hidden] Wed Jan 30 15:39:44 2008 From: bronis at [hidden] (Bronis R. de Supinski) Date: Wed, 30 Jan 2008 13:39:44 -0800 (PST) Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Dick: Re: > IBM long ago chose the real-politic approach of supporting both > interpretations. By default we restrict INFO objects to key:value pairs > that mean something to our functions that take INFO arguments and quietly > discard key:value pairs we do not recognize. As far as I know this has not > been a problem for anyone writing portable MPI applications. > > I can see that hint filtering would restrict someone who wants to use an > INFO for layered libraries. For that reason, any user who wishes to use an > INFO object as a general purpose cache for key:value pairs that mean > nothing to our MPI implementation can set an environment variable or > command line option (MP_HINTS_FILTERED/-hints_filtered = no) and get the > behavior Linda was expecting. > > I do have any problem with clarifying the standard to say that an MPI_Info Did you mean "do not have any problem"? Otherwise the sentence is hard to parse. > object should be prepared to manage unrecognized key:value string pairs. I > suggest changing: > > ============== > If a function does not recognize a key, it will ignore it, unless otherwise > specified. If an implementation recognizes a key but does not recognize the > format of the corresponding value, the result is undefined. > > to > > An info object is a cache for arbitrary ( key, value) pairs. Each MPI > function which takes hints in the form of an MPI_Info must be prepared to > ignore any key it does not recognize. Minor grammar tweak: An info object is a cache for arbitrary ( key, value) pairs. Each MPI function that takes hints in the form of an MPI_Info must be prepared to ignore any key it does not recognize. I think I see what you mean. I'm a little worried that this could still be considered ambiguous. Perhaps changing the first sentence to: An implementation must support info objects as caches for arbitrary (key, value) pairs, regardless of whether the it recognizes the pairs. Bronis > ============ > > I think this approach implies that any statements about what happens when a > recognized key has a garbage value belong with the function that recognizes > the key. By insulating the info description from the descriptions of > particular MPI functions that take MPI_Info hints, we leave people the > option of using info objects as hints for third party libraries that work > with MPI but are not part of the MPI API. 
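Dick's point about functions that take hints can be sketched from the consumer side as well; mylib_open and its hint key are invented for this illustration:

    #include <stdlib.h>
    #include <mpi.h>

    void mylib_open(MPI_Info info)
    {
        char value[64];
        int  flag;
        int  stripe = 4;                                    /* default */

        if (info != MPI_INFO_NULL) {
            MPI_Info_get(info, "mylib_stripe_count", sizeof(value) - 1, value, &flag);
            if (flag)
                stripe = atoi(value);                       /* recognized hint */
            /* any other keys in info are simply left alone */
        }
        /* ... open with stripe ... */
        (void)stripe;
    }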
> > Dick
> >
> > [... Dick Treumann's signature and the full Ballot 4 proposal,
> > quoted in full earlier in this thread ...]
> >
> > Comment:
> > In the discussion (see the mails from 1999)
> > one can clearly see that an implementation of option 2 (done by IBM)
> > cannot coexist with a layered implementation of MPI parallel
> > file I/O, because the MPI parallel file I/O routines need a different
> > implementation of the INFO routines. Other chapters of MPI (e.g. 1-sided)
> > need the original INFO implementation.
> > Two different INFO implementations cannot coexist in the same MPI
> > library.
> >
> > We could see the outcome of such problems with MPI_Wait & co.
> > as long as a generalized request concept was not available:
> > ROMIO had to define the non-standard MPIO_Wait & co. routines.
> > And they persisted for a long time!
> >
> > Without the clarification above, we have the same problem with the
> > INFO handling.
> >
> > Best regards
> > Rolf
> >
> > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From jsquyres at [hidden] Wed Jan 30 20:34:31 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Wed, 30 Jan 2008 21:34:31 -0500 Subject: [mpi-21] Ballot 4 proposal: INOUT arguments In-Reply-To: Message-ID: <11119E10-72BB-4149-A362-59DD0AFF4FA5@cisco.com>

1. Why do we need to indicate the INOUT status of the back-end MPI object in the language-neutral bindings? All the bindings -- regardless of language -- only deal with the MPI handles, not the back-end MPI objects.

2. Adding qualifiers on what is supposed to happen to the back-end MPI object would seem to require additional semantics on the back-end MPI object. Should we really be specifying what the implementation must/must not do with the back-end MPI object? Who benefits from that?

On Jan 29, 2008, at 12:07 PM, Rolf Rabenseifner wrote:

> Proposal:
> The current text in MPI 2.0 Sect. 2.3 Procedure Specification,
> page 6 lines 30-34 read:
>
> There is one special case — if an argument is a handle to an
> opaque object (these terms are defined in Section 2.5.1), and
> the object is updated by the procedure call, then the argument
> is marked OUT. It is marked this way even though the handle
> itself is not modified — we use the OUT attribute to denote
> that what the handle references is updated. Thus, in C++,
> IN arguments are either references or pointers to const objects.
>
> but should read:
>
> There is one special case — if an argument is a handle to an
> opaque object (these terms are defined in Section 2.5.1), and
> the object is updated by the procedure call
> but the handle itself is not modified, then the argument
> is marked IN/INOUT.
> We use the first part (IN) to specify the use of the handle
> and the second part (INOUT) to specify the use of the opaque
> object.
> Thus, in C++,
> IN arguments are either references or pointers to const objects,
> and IN/INOUT arguments are references to const handles to non-const
> objects.
>
> In all routines mentioned in the clarification below,
> the INOUT handle declaration (in MPI-2.0) and the IN handle declaration
> (in MPI-1.1) is modified into an IN/INOUT handle declaration.
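The handle/object distinction that the proposal draws is easy to demonstrate in C: the handle passed to MPI_Info_set is itself left unchanged (the IN part), while the opaque object behind it is updated (the INOUT part). A minimal sketch:

    #include <assert.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info, saved;

        MPI_Init(&argc, &argv);
        MPI_Info_create(&info);
        saved = info;                       /* copy the handle, not the object */
        MPI_Info_set(info, "k", "v");       /* updates the object behind the handle */
        assert(saved == info);              /* the handle value did not change */
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }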
> ____________________________ > > Rationale for this proposal: > > I have checked the complete MPI 1.1 and 2.0 standards to find > all routines with an argument specification according to > the following declaration pattern: > > Language independent interface: > INOUT handle > C interface > MPI_handletype handle > > We can find this pattern only in MPI-2.0 in the following > routines: > > MPI_INFO_SET / _DELETE > MPI_xxxx_SET_ERRHANDLER with xxxx=COMM / TYPE / WIN > MPI_GREQUEST_COMPLETE > MPI_xxxx_SET_NAME with xxxx=COMM / TYPE / WIN > MPI_xxxx_SET_ATTR with xxxx=COMM / TYPE / WIN > MPI_xxxx_DELETE_ATTR with xxxx=COMM / TYPE / WIN > MPI_FILE_SET_SIZE / _PREALLOCATE / _SET_INFO / _SET_VIEW > MPI_FILE_WRITE_AT / _WRITE_AT_ALL / _IWRITE_AT > MPI_FILE_READ / _READ_ALL / _WRITE / _WRITE_ALL > MPI_FILE_IREAD / _IWRITE > MPI_FILE_SEEK > MPI_FILE_READ_SHARED / _WRITE_SHARED > MPI_FILE_IREAD_SHARED / _IWRITE_SHARED > MPI_FILE_READ_ORDERED / _WRITE_ORDERED > MPI_FILE_SEEK_SHARED > MPI_FILE_WRITE_AT_ALL_BEGIN / MPI_FILE_WRITE_AT_ALL_END > MPI_FILE_READ_ALL_BEGIN / MPI_FILE_READ_ALL_END > MPI_FILE_WRITE_ALL_BEGIN / MPI_FILE_WRITE_ALL_END > MPI_FILE_READ_ORDERED_BEGIN / MPI_FILE_READ_ORDERED_END > MPI_FILE_WRITE_ORDERED_BEGIN / MPI_FILE_WRITE_ORDERED_END > MPI_FILE_SET_ATOMICITY / _SYNC > > All these routines keep the handle itself unchanged, but the > opaque object is modified in a way that, with other MPI routines, > this change can be detected. > For example, an attribute is cached or changed, a file pointer > is moved, or the content of a file is modified. > > The current text in MPI 2.0 Sect. 2.3 Procedure Specification, > page 6 lines 30-34 read: > > There is one special case — if an argument is a handle to an > opaque object (these terms are defined in Section 2.5.1), and > the object is updated by the procedure call, then the argument > is marked OUT. It is marked this way even though the handle > itself is not modified — we use the OUT attribute to denote > that what the handle references is updated. Thus, in C++, > IN arguments are either references or pointers to const objects. > > With this definition (that I want to change later on) the use > of INOUT is correct. > > In MPI-1.1 we have several such routines, but in all cases, > the handle is declared only as IN. > > I hope I could find all of them: > > MPI_ATTR_PUT / _DELETE > MPI_ERRHANDLER_SET > (these routines are now deprecated, but compare with the use of > INOUT in MPI_COMM_SET_ATTR and MPI_COMM_DELETE_ATTR in MPI-2.0) > > The following proposal should remove this inconsistency and > should be a basis for a correct definition of const. > ____________________________ > > I hope that, with this proposal, the INOUT handle problem can really be > solved. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . 
(Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 -- Jeff Squyres Cisco Systems From rabenseifner at [hidden] Thu Jan 31 04:25:46 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 11:25:46 +0100 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: I try to summarize all 3 replies in one proposal: ___________________________________ Proposal: MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: If a function does not recognize a key, it will ignore it, unless otherwise specified. If an implementation recognizes a key but does not recognize the format of the corresponding value, the result is undefined. but should read: An implementation must support info objects as caches for arbitrary (key, value) pairs, regardless of whether it recognizes the pairs. Each MPI function which takes hints in the form of an MPI_Info must be prepared to ignore any key it does not recognize. Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new paragraph: Advice to implementors. Although in MPI functions that take hints in form of an MPI_Info (e.g., in process creation and management, one-sided communication, or parallel file I/O), an implementation must be prepared to ignore keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS, MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the implementation must retain all (key,value) pairs so that layered functionality can also use the Info object. (End of advice to implementors.) _____________________________ Rationale for this clarification: The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs that are not recognized in routines in other chapters that take hints with info arguments. The proposed clarification is necessary when we assume, that layered implementation of parts of the MPI-2 standard should be possible and may use the MPI_Info objects for their needs. This was a goal of the MPI-2 Forum and the MPI-2.0 specification. ___________________________________ Bronis, for me, your wording "an MPI implementation may restrict" was in conflict with the rest of the advice. I hope the formulation above is also okay. It is based on the new wording from you and Dick in first part of the proposal. Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From bronis at [hidden] Thu Jan 31 05:36:58 2008 From: bronis at [hidden] (Bronis R. de Supinski) Date: Thu, 31 Jan 2008 03:36:58 -0800 (PST) Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Rolf: Your new wording works for me. Bronis On Thu, 31 Jan 2008, Rolf Rabenseifner wrote: > I try to summarize all 3 replies in one proposal: > > ___________________________________ > > Proposal: > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: > If a function does not recognize a key, > it will ignore it, unless otherwise specified. > If an implementation recognizes a key but does not recognize > the format of the corresponding value, the result is undefined. 
> but should read: > An implementation must support info objects as caches for arbitrary > (key, value) pairs, regardless of whether it recognizes the pairs. > Each MPI function which takes hints in the form of an MPI_Info must > be prepared to ignore any key it does not recognize. > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > paragraph: > Advice to implementors. > Although in MPI functions that take hints in form of an MPI_Info > (e.g., in process creation and management, one-sided communication, > or parallel file I/O), an implementation must be prepared to ignore > keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS, > MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the > implementation must retain all (key,value) pairs so that layered > functionality can also use the Info object. > (End of advice to implementors.) > _____________________________ > Rationale for this clarification: > > The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, > MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs > that are not recognized in routines in other chapters that > take hints with info arguments. > The proposed clarification is necessary when we assume, that > layered implementation of parts of the MPI-2 standard should > be possible and may use the MPI_Info objects for their needs. > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. > ___________________________________ > > Bronis, for me, your wording "an MPI implementation may restrict" was > in conflict with the rest of the advice. I hope the formulation above > is also okay. It is based on the new wording from you and Dick in first > part of the proposal. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > From rross at [hidden] Thu Jan 31 06:53:31 2008 From: rross at [hidden] (Rob Ross) Date: Thu, 31 Jan 2008 06:53:31 -0600 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: <647852DB-4143-483A-9233-471A82A1404A@mcs.anl.gov> This is excellent. -- Rob On Jan 31, 2008, at 4:25 AM, Rolf Rabenseifner wrote: > I try to summarize all 3 replies in one proposal: > > ___________________________________ > > Proposal: > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: > If a function does not recognize a key, > it will ignore it, unless otherwise specified. > If an implementation recognizes a key but does not recognize > the format of the corresponding value, the result is undefined. > but should read: > An implementation must support info objects as caches for arbitrary > (key, value) pairs, regardless of whether it recognizes the pairs. > Each MPI function which takes hints in the form of an MPI_Info must > be prepared to ignore any key it does not recognize. > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > paragraph: > Advice to implementors. 
> Although in MPI functions that take hints in form of an MPI_Info > (e.g., in process creation and management, one-sided communication, > or parallel file I/O), an implementation must be prepared to ignore > keys that it does not recognize, for the purpose of > MPI_INFO_GET_NKEYS, > MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the > implementation must retain all (key,value) pairs so that layered > functionality can also use the Info object. > (End of advice to implementors.) > _____________________________ > Rationale for this clarification: > > The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, > MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs > that are not recognized in routines in other chapters that > take hints with info arguments. > The proposed clarification is necessary when we assume, that > layered implementation of parts of the MPI-2 standard should > be possible and may use the MPI_Info objects for their needs. > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. > ___________________________________ > > Bronis, for me, your wording "an MPI implementation may restrict" was > in conflict with the rest of the advice. I hope the formulation above > is also okay. It is based on the new wording from you and Dick in > first > part of the proposal. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > From rabenseifner at [hidden] Thu Jan 31 07:01:20 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 14:01:20 +0100 Subject: [mpi-21] Ballot 4 - MPI_File_get_info Message-ID: This is a proposal for MPI 2.1, Ballot 4. I'm asking especially the implementors to check, whether this interpretation is implemented in their MPI implementations, or does not contradict to the existing implementation. This is a follow up to: MPI_File_get_info in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion not yet existing ___________________________________ Proposal: MPI-2.0 Sect. 9.2.8, File Info, page 219, lines 11-13 read: MPI_FILE_GET_INFO returns a new info object containing the hints of the file associated with fh. The current setting of all hints actually used by the system related to this open file is returned in info_used. The user is responsible for freeing info_used via MPI_INFO_FREE. but should read (" or an abort(errorcode)" removed): MPI_FILE_GET_INFO returns a new info object containing the hints of the file associated with fh. The current setting of all hints actually used by the system related to this open file is returned in info_used. If there does not exist such a hint, MPI_INFO_NULL is returned. The user is responsible for freeing info_used via MPI_INFO_FREE if info_used is not MPI_INFO_NULL. ___________________________________ Rationale for this clarification: This text was missing. It was not clear, whether a MPI_Info handle would be returned that would return nkeys=0 from MPI_INFO_GET_NKEYS. 
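A short C sketch of the usage pattern this clarification addresses ("cb_buffer_size" is a common ROMIO hint, used here purely for illustration; whether a given implementation recognizes it is implementation-dependent). The MPI_INFO_NULL guard reflects the proposed text above, not MPI-2.0 as it stands:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info, info_used;
        char key[MPI_MAX_INFO_KEY], value[MPI_MAX_INFO_VAL];
        int i, nkeys, flag;

        MPI_Init(&argc, &argv);

        MPI_Info_create(&info);
        MPI_Info_set(info, "cb_buffer_size", "1048576");
        MPI_File_open(MPI_COMM_WORLD, "ballot4.dat",
                      MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);
        MPI_Info_free(&info);

        /* Dump every hint the system actually uses for this file. */
        MPI_File_get_info(fh, &info_used);
        if (info_used != MPI_INFO_NULL) {   /* guard per this proposal */
            MPI_Info_get_nkeys(info_used, &nkeys);
            for (i = 0; i < nkeys; i++) {
                MPI_Info_get_nthkey(info_used, i, key);
                MPI_Info_get(info_used, key, MPI_MAX_INFO_VAL - 1,
                             value, &flag);
                printf("hint %d: %s = %s\n", i, key, value);
            }
            MPI_Info_free(&info_used);   /* the user frees info_used */
        }

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

Per the companion MPI_File_set_info proposal later in this thread, hints set after the open are merged into this same set rather than replacing it.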
From user's point of view, this behavior might have been expected without this clarification. ___________________________________ As far as I understand, ROMIO is using for all filesystems some default hints and therefore "no-hints" is never returned. MPI_File_set_view and MPI_File_set_info are only modifying values but do not remove keys. Therefore, the info handle cannot become empty. Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From treumann at [hidden] Thu Jan 31 07:28:54 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 08:28:54 -0500 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Your wording works for me Rolf. -- Thanks Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 05:25:46 AM: > I try to summarize all 3 replies in one proposal: > > ___________________________________ > > Proposal: > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: > If a function does not recognize a key, > it will ignore it, unless otherwise specified. > If an implementation recognizes a key but does not recognize > the format of the corresponding value, the result is undefined. > but should read: > An implementation must support info objects as caches for arbitrary > (key, value) pairs, regardless of whether it recognizes the pairs. > Each MPI function which takes hints in the form of an MPI_Info must > be prepared to ignore any key it does not recognize. > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > paragraph: > Advice to implementors. > Although in MPI functions that take hints in form of an MPI_Info > (e.g., in process creation and management, one-sided communication, > or parallel file I/O), an implementation must be prepared to ignore > keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS, > MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the > implementation must retain all (key,value) pairs so that layered > functionality can also use the Info object. > (End of advice to implementors.) > _____________________________ > Rationale for this clarification: > > The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, > MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs > that are not recognized in routines in other chapters that > take hints with info arguments. > The proposed clarification is necessary when we assume, that > layered implementation of parts of the MPI-2 standard should > be possible and may use the MPI_Info objects for their needs. > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. > ___________________________________ > > Bronis, for me, your wording "an MPI implementation may restrict" was > in conflict with the rest of the advice. I hope the formulation above > is also okay. It is based on the new wording from you and Dick in first > part of the proposal. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . 
phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rabenseifner at [hidden] Thu Jan 31 07:24:51 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 14:24:51 +0100 Subject: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement Message-ID: This is a proposal for MPI 2.1, Ballot 4. I'm asking especially the implementors to check, whether this interpretation is implemented in their MPI implementations, or does not contradict to the existing implementation. This is a follow up to: MPI_File_set_info in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion not yet existing ___________________________________ Proposal: Add in MPI-2.0 Sect. 9.2.8, File Info, page 218, after line 18 the following sentences: With MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO, the current setting of all hints used by the system for this open file is updated by the (key,value) pairs in the info argument. ___________________________________ Rationale for this clarification: This text was missing. It was not clear whether the info handles in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO update or replace the current set of used hints. The ROMIO developers decided to update the current set of used hints. Therefore, this behavior should be the expected behavior of a majority of users. ___________________________________ Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From james.h.cownie at [hidden] Thu Jan 31 07:43:04 2008 From: james.h.cownie at [hidden] (Cownie, James H) Date: Thu, 31 Jan 2008 13:43:04 -0000 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: However, you have apparently lost the liberty to have undefined behavior which was there in the previous version. Maybe you should keep that, something like An implementation must support info objects as caches for arbitrary (key, value) pairs, regardless of whether it recognizes the keys. Each MPI function which takes hints in the form of an MPI_Info must be prepared to ignore any key it does not recognize. However if a function recognizes a key but not the associated value, then the behavior is undefined. (Modifications in italics) -- Jim James Cownie SSG/DPD/PAT Tel: +44 117 9071438 ________________________________ From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On Behalf Of Richard Treumann Sent: 31 January 2008 13:29 To: Mailing list for discussion of MPI 2.1 Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation Your wording works for me Rolf. 
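A small C sketch separating the two cases in Jim's added sentence (the key names are illustrative; treating "cb_buffer_size" as a key the I/O layer recognizes is an assumption about the implementation). The Info routines themselves must cache both pairs; an unrecognized key is simply ignored by every consumer, and only a consumer that recognizes a key can run into the undefined-value case:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info;

        MPI_Init(&argc, &argv);
        MPI_Info_create(&info);

        /* Case 1: a key nothing recognizes -- every consumer must
           simply ignore it. */
        MPI_Info_set(info, "JIMS_SECRET_TAG", "99");

        /* Case 2: a key the I/O layer may recognize, with a value
           that is not a valid size.  MPI_Info_set just caches the
           pair; only a consumer such as MPI_File_open can hit the
           undefined-value case. */
        MPI_Info_set(info, "cb_buffer_size", "Hello");

        MPI_File_open(MPI_COMM_WORLD, "ballot4.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }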
-- Thanks Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 05:25:46 AM: > I try to summarize all 3 replies in one proposal: > > ___________________________________ > > Proposal: > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: > If a function does not recognize a key, > it will ignore it, unless otherwise specified. > If an implementation recognizes a key but does not recognize > the format of the corresponding value, the result is undefined. > but should read: > An implementation must support info objects as caches for arbitrary > (key, value) pairs, regardless of whether it recognizes the pairs. > Each MPI function which takes hints in the form of an MPI_Info must > be prepared to ignore any key it does not recognize. > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > paragraph: > Advice to implementors. > Although in MPI functions that take hints in form of an MPI_Info > (e.g., in process creation and management, one-sided communication, > or parallel file I/O), an implementation must be prepared to ignore > keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS, > MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the > implementation must retain all (key,value) pairs so that layered > functionality can also use the Info object. > (End of advice to implementors.) > _____________________________ > Rationale for this clarification: > > The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, > MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs > that are not recognized in routines in other chapters that > take hints with info arguments. > The proposed clarification is necessary when we assume, that > layered implementation of parts of the MPI-2 standard should > be possible and may use the MPI_Info objects for their needs. > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. > ___________________________________ > > Bronis, for me, your wording "an MPI implementation may restrict" was > in conflict with the rest of the advice. I hope the formulation above > is also okay. It is based on the new wording from you and Dick in first > part of the proposal. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 --------------------------------------------------------------------- Intel Corporation (UK) Limited Registered No. 1134945 (England) Registered Office: Pipers Way, Swindon SN3 1RJ VAT No: 860 2173 47 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rabenseifner at [hidden] Thu Jan 31 07:46:34 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 14:46:34 +0100 Subject: [mpi-21] Ballot 4 proposal: INOUT arguments In-Reply-To: <11119E10-72BB-4149-A362-59DD0AFF4FA5@cisco.com> Message-ID: On Wed, 30 Jan 2008 21:34:31 -0500 Jeff Squyres wrote: >1. Why do we need to indicate the INOUT status of the back-end MPI >object in the language neutral bindings? All the bindings -- >regardless of language -- only deal with the MPI handles, not the >back-end MPI objects. > >2. Adding qualifiers on what is supposed to happen to the back-end MPI >object would seem to require additional semantics on the back-end MPI >object. Should we really be specifying what the implementation >must/must not do with the back-end MPI object? Who benefits from that? After all the MPI_BOTTOM discussion and not knowing what future languages will bring, I didn't want to remove existing information from the standard. An opaque object in MPI always consists of two things: the handle and the object itself. The language independent interface should reflect this. I thought that especially for the const discussion it would be good to see the IN for the handle. For future HPCS languages, it may also be necessary to see the INOUT for the object itself. Best regards Rolf > > >On Jan 29, 2008, at 12:07 PM, Rolf Rabenseifner wrote: > >> Proposal: >> The current text in MPI 2.0 Sect. 2.3 Procedure Specification, >> page 6 lines 30-34 read: >> >> There is one special case — if an argument is a handle to an >> opaque object (these terms are defined in Section 2.5.1), and >> the object is updated by the procedure call, then the argument >> is marked OUT. It is marked this way even though the handle >> itself is not modified — we use the OUT attribute to denote >> that what the handle references is updated. Thus, in C++, >> IN arguments are either references or pointers to const objects. >> >> but should read: >> >> There is one special case — if an argument is a handle to an >> opaque object (these terms are defined in Section 2.5.1), and >> the object is updated by the procedure call >> but the handle itself is not modified, then the argument >> is marked IN/INOUT. >> We use the first part (IN) to specify the use of the handle >> and the second part (INOUT) to specify the use of the opaque >> object. >> Thus, in C++, >> IN arguments are either references or pointers to const objects, >> IN/INOUT arguments are references to const handles to non-const >> objects. >> >> In all routines mentioned in the clarification below, >> the INOUT handle declaration (in MPI-2.0) and the IN handle >> declaration >> (in MPI-1.1) is modified into an IN/INOUT handle declaration.
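The distinction Rolf draws here (the handle stays fixed while the object behind it is updated) can be seen in a few lines of C; MPI_COMM_SET_NAME is one of the routines listed in the rationale that follows. A minimal sketch:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm comm, saved;
        char name[MPI_MAX_OBJECT_NAME];
        int len;

        MPI_Init(&argc, &argv);
        MPI_Comm_dup(MPI_COMM_WORLD, &comm);

        saved = comm;                          /* remember the handle value */
        MPI_Comm_set_name(comm, "work_comm");  /* updates only the object   */

        MPI_Comm_get_name(comm, name, &len);
        printf("handle unchanged: %s, name now: \"%s\"\n",
               saved == comm ? "yes" : "no", name);

        MPI_Comm_free(&comm);
        MPI_Finalize();
        return 0;
    }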
>> ____________________________ >> >> Rationale for this proposal: >> >> I have checked the complete MPI 1.1 and 2.0 standards to find >> all routines with an argument specification according to >> the following declaration pattern: >> >> Language independent interface: >> INOUT handle >> C interface >> MPI_handletype handle >> >> We can find this pattern only in MPI-2.0 in the following >> routines: >> >> MPI_INFO_SET / _DELETE >> MPI_xxxx_SET_ERRHANDLER with xxxx=COMM / TYPE / WIN >> MPI_GREQUEST_COMPLETE >> MPI_xxxx_SET_NAME with xxxx=COMM / TYPE / WIN >> MPI_xxxx_SET_ATTR with xxxx=COMM / TYPE / WIN >> MPI_xxxx_DELETE_ATTR with xxxx=COMM / TYPE / WIN >> MPI_FILE_SET_SIZE / _PREALLOCATE / _SET_INFO / _SET_VIEW >> MPI_FILE_WRITE_AT / _WRITE_AT_ALL / _IWRITE_AT >> MPI_FILE_READ / _READ_ALL / _WRITE / _WRITE_ALL >> MPI_FILE_IREAD / _IWRITE >> MPI_FILE_SEEK >> MPI_FILE_READ_SHARED / _WRITE_SHARED >> MPI_FILE_IREAD_SHARED / _IWRITE_SHARED >> MPI_FILE_READ_ORDERED / _WRITE_ORDERED >> MPI_FILE_SEEK_SHARED >> MPI_FILE_WRITE_AT_ALL_BEGIN / MPI_FILE_WRITE_AT_ALL_END >> MPI_FILE_READ_ALL_BEGIN / MPI_FILE_READ_ALL_END >> MPI_FILE_WRITE_ALL_BEGIN / MPI_FILE_WRITE_ALL_END >> MPI_FILE_READ_ORDERED_BEGIN / MPI_FILE_READ_ORDERED_END >> MPI_FILE_WRITE_ORDERED_BEGIN / MPI_FILE_WRITE_ORDERED_END >> MPI_FILE_SET_ATOMICITY / _SYNC >> >> All these routines keep the handle itself unchanged, but the >> opaque object is modified in a way that, with other MPI routines, >> this change can be detected. >> For example, an attribute is cached or changed, a file pointer >> is moved, or the content of a file is modified. >> >> The current text in MPI 2.0 Sect. 2.3 Procedure Specification, >> page 6 lines 30-34 read: >> >> There is one special case — if an argument is a handle to an >> opaque object (these terms are defined in Section 2.5.1), and >> the object is updated by the procedure call, then the argument >> is marked OUT. It is marked this way even though the handle >> itself is not modified — we use the OUT attribute to denote >> that what the handle references is updated. Thus, in C++, >> IN arguments are either references or pointers to const objects. >> >> With this definition (that I want to change later on) the use >> of INOUT is correct. >> >> In MPI-1.1 we have several such routines, but in all cases, >> the handle is declared only as IN. >> >> I hope I could find all of them: >> >> MPI_ATTR_PUT / _DELETE >> MPI_ERRHANDLER_SET >> (these routines are now deprecated, but compare with the use of >> INOUT in MPI_COMM_SET_ATTR and MPI_COMM_DELETE_ATTR in MPI-2.0) >> >> The following proposal should remove this inconsistency and >> should be a basis for a correct definition of const. >> ____________________________ >> >> I hope that, with this proposal, the INOUT handle problem can really be >> solved. >> >> Best regards >> Rolf >> >> >> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] >> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 >> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 >> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner >> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) >> _______________________________________________ >> mpi-21 mailing list >> mpi-21_at_[hidden] >> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > >-- >Jeff Squyres >Cisco Systems > > >_______________________________________________ >mpi-21 mailing list >mpi-21_at_[hidden] >http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 Dr. 
Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From treumann at [hidden] Thu Jan 31 08:20:24 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 09:20:24 -0500 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Jim - I was taking the view that the description of what to do for a recognized key but dubious value belongs to the function that recognizes the specific key. For example if MPI_File_open accepts a "buffer_size" hint with range "32K" to "16M" we may want to define the behavior of hints that are out of range. Once we say an info can have arbitrary keys we need to state that every info consumer must be prepared to ignore keys it does not recognize because we have made unrecognizable keys legitimate. Dick Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:43:04 AM: > However, you have apparently lost the liberty to have undefined > behavior which was there in the previous version. > > Maybe you should keep that, something like > An implementation must support info objects as caches for arbitrary > (key, value) pairs, regardless of whether it recognizes the keys. > Each MPI function which takes hints in the form of an MPI_Info must > be prepared to ignore any key it does not recognize. However if a > function recognizes a key but not the associated value, then the > behavior is undefined. > (Modifications in italics) > -- Jim > > James Cownie > SSG/DPD/PAT > Tel: +44 117 9071438 > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > On Behalf Of Richard Treumann > Sent: 31 January 2008 13:29 > To: Mailing list for discussion of MPI 2.1 > Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation > > Your wording works for me Rolf. -- Thanks > > > Dick Treumann - MPI Team/TCEM > IBM Systems & Technology Group > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 05:25:46 AM: > > > I try to summarize all 3 replies in one proposal: > > > > ___________________________________ > > > > Proposal: > > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: > > If a function does not recognize a key, > > it will ignore it, unless otherwise specified. > > If an implementation recognizes a key but does not recognize > > the format of the corresponding value, the result is undefined. > > but should read: > > An implementation must support info objects as caches for arbitrary > > (key, value) pairs, regardless of whether it recognizes the pairs. > > Each MPI function which takes hints in the form of an MPI_Info must > > be prepared to ignore any key it does not recognize. > > > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > > paragraph: > > Advice to implementors. 
> > Although in MPI functions that take hints in form of an MPI_Info > > (e.g., in process creation and management, one-sided communication, > > or parallel file I/O), an implementation must be prepared to ignore > > keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS, > > MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the > > implementation must retain all (key,value) pairs so that layered > > functionality can also use the Info object. > > (End of advice to implementors.) > > _____________________________ > > Rationale for this clarification: > > > > The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, > > MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs > > that are not recognized in routines in other chapters that > > take hints with info arguments. > > The proposed clarification is necessary when we assume, that > > layered implementation of parts of the MPI-2 standard should > > be possible and may use the MPI_Info objects for their needs. > > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. > > ___________________________________ > > > > Bronis, for me, your wording "an MPI implementation may restrict" was > > in conflict with the rest of the advice. I hope the formulation above > > is also okay. It is based on the new wording from you and Dick in first > > part of the proposal. > > > > Best regards > > Rolf > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > > _______________________________________________ > > mpi-21 mailing list > > mpi-21_at_[hidden] > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > --------------------------------------------------------------------- > Intel Corporation (UK) Limited > Registered No. 1134945 (England) > Registered Office: Pipers Way, Swindon SN3 1RJ > VAT No: 860 2173 47 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rabenseifner at [hidden] Thu Jan 31 08:21:18 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 15:21:18 +0100 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Sorry, that was my fault. Here again the full proposal including your sentence (and nothing else changed): ___________________________________ Proposal: MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: If a function does not recognize a key, it will ignore it, unless otherwise specified. If an implementation recognizes a key but does not recognize the format of the corresponding value, the result is undefined. but should read: An implementation must support info objects as caches for arbitrary (key, value) pairs, regardless of whether it recognizes the pairs. 
Each MPI function which takes hints in the form of an MPI_Info must be prepared to ignore any key it does not recognize. However if a function recognizes a key but not the associated value, then the behavior is undefined. Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new paragraph: Advice to implementors. Although in MPI functions that take hints in form of an MPI_Info (e.g., in process creation and management, one-sided communication, or parallel file I/O), an implementation must be prepared to ignore keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS, MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the implementation must retain all (key,value) pairs so that layered functionality can also use the Info object. (End of advice to implementors.) _____________________________ Rationale for this clarification: The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs that are not recognized in routines in other chapters that take hints with info arguments. The proposed clarification is necessary when we assume, that layered implementation of parts of the MPI-2 standard should be possible and may use the MPI_Info objects for their needs. This was a goal of the MPI-2 Forum and the MPI-2.0 specification. ___________________________________ Best regards Rolf On Thu, 31 Jan 2008 13:43:04 -0000 "Cownie, James H" wrote: >However, you have apparently lost the liberty to have undefined behavior >which was there in the previous version. > > > >Maybe you should keep that, something like > >An implementation must support info objects as caches for arbitrary >(key, value) pairs, regardless of whether it recognizes the keys. Each >MPI function which takes hints in the form of an MPI_Info must >be prepared to ignore any key it does not recognize. However if a >function recognizes a key but not the associated value, then the >behavior is undefined. > >(Modifications in italics) > >-- Jim > >James Cownie >SSG/DPD/PAT >Tel: +44 117 9071438 > >________________________________ > >From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On >Behalf Of Richard Treumann >Sent: 31 January 2008 13:29 >To: Mailing list for discussion of MPI 2.1 >Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation > > > >Your wording works for me Rolf. -- Thanks > > >Dick Treumann - MPI Team/TCEM >IBM Systems & Technology Group >Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 >Tele (845) 433-7846 Fax (845) 433-8363 > > >mpi-21-bounces_at_[hidden] wrote on 01/31/2008 05:25:46 AM: > >> I try to summarize all 3 replies in one proposal: >> >> ___________________________________ >> >> Proposal: >> MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: >> If a function does not recognize a key, >> it will ignore it, unless otherwise specified. >> If an implementation recognizes a key but does not recognize >> the format of the corresponding value, the result is undefined. >> but should read: >> An implementation must support info objects as caches for arbitrary >> (key, value) pairs, regardless of whether it recognizes the pairs. >> Each MPI function which takes hints in the form of an MPI_Info must >> be prepared to ignore any key it does not recognize. >> >> Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new >> paragraph: >> Advice to implementors. 
>> Although in MPI functions that take hints in form of an MPI_Info >> (e.g., in process creation and management, one-sided communication, >> or parallel file I/O), an implementation must be prepared to ignore >> keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS, >> MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the >> implementation must retain all (key,value) pairs so that layered >> functionality can also use the Info object. >> (End of advice to implementors.) >> _____________________________ >> Rationale for this clarification: >> >> The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, >> MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs >> that are not recognized in routines in other chapters that >> take hints with info arguments. >> The proposed clarification is necessary when we assume, that >> layered implementation of parts of the MPI-2 standard should >> be possible and may use the MPI_Info objects for their needs. >> This was a goal of the MPI-2 Forum and the MPI-2.0 specification. >> ___________________________________ >> >> Bronis, for me, your wording "an MPI implementation may restrict" was >> in conflict with the rest of the advice. I hope the formulation above >> is also okay. It is based on the new wording from you and Dick in first >> part of the proposal. >> >> Best regards >> Rolf >> >> >> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] >> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 >> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 >> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner >> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) >> _______________________________________________ >> mpi-21 mailing list >> mpi-21_at_[hidden] >> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > >--------------------------------------------------------------------- >Intel Corporation (UK) Limited >Registered No. 1134945 (England) >Registered Office: Pipers Way, Swindon SN3 1RJ >VAT No: 860 2173 47 > >This e-mail and any attachments may contain confidential material for >the sole use of the intended recipient(s). Any review or distribution >by others is strictly prohibited. If you are not the intended >recipient, please contact the sender and delete all copies. Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From treumann at [hidden] Thu Jan 31 08:25:28 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 09:25:28 -0500 Subject: [mpi-21] Ballot 4 - MPI_File_get_info In-Reply-To: Message-ID: I think this proposal is moot unless it is intended as a loophole to allow an implementation to provide a zero-functionality version of the INFO bindings. The "filename" hint is always present on an MPI_File so there should never be an empty MPI_FILE_GET_INFO return. Dick Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:01:20 AM: > This is a proposal for MPI 2.1, Ballot 4. 
> > I'm asking especially the implementors to check, whether > this interpretation is implemented in their MPI implementations, > or does not contradict to the existing implementation. > > This is a follow up to: > MPI_File_get_info > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion not yet existing > ___________________________________ > > Proposal: > MPI-2.0 Sect. 9.2.8, File Info, page 219, lines 11-13 read: > > MPI_FILE_GET_INFO returns a new info object containing the hints > of the file associated with fh. The current setting of all hints > actually used by the system related to this open file is returned > in info_used. > The user is responsible for freeing info_used via MPI_INFO_FREE. > > but should read (" or an abort(errorcode)" removed): > > MPI_FILE_GET_INFO returns a new info object containing the hints > of the file associated with fh. The current setting of all hints > actually used by the system related to this open file is returned > in info_used. > If there does not exist such a hint, MPI_INFO_NULL is returned. > The user is responsible for freeing info_used via MPI_INFO_FREE > if info_used is not MPI_INFO_NULL. > ___________________________________ > Rationale for this clarification: > This text was missing. It was not clear, whether a MPI_Info handle > would be returned that would return nkeys=0 from MPI_INFO_GET_NKEYS. > From user's point of view, this behavior might have been expected > without this clarification. > ___________________________________ > > As far as I understand, ROMIO is using for all filesystems some default > hints and therefore "no-hints" is never returned. > MPI_File_set_view and MPI_File_set_info are only modifying values > but do not remove keys. > Therefore, the info handle cannot become empty. > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rabenseifner at [hidden] Thu Jan 31 08:31:01 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 15:31:01 +0100 Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc Message-ID: This is a proposal for MPI 2.1, Ballot 4. I'm asking especially Greg Lindahl, the participants of the email-discussion in 2007, to review this proposal. This is a follow up to: Which thread is the funneled thread? in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html with mail discussion in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/funneled/ ___________________________________ Proposal: MPI-2.0 Sect. 8.7.3, MPI_Init_thread, page 196, lines 25-26 read: MPI_THREAD_FUNNELED The process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread). 
but should read: MPI_THREAD_FUNNELED The process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread, e.g., by using the OpenMP directive "master" in the application program). ___________________________________ Rationale for this clarification from the email from Greg Lindahl: The existing document doesn't make it clear that the MPI user has to funnel the calls to the main thread; it's not the job of the MPI library. I have seen multiple MPI users confused by this issue, and when I first read this section, I was confused by it, too. ___________________________________ Best regards Rolf Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From james.h.cownie at [hidden] Thu Jan 31 08:39:52 2008 From: james.h.cownie at [hidden] (Cownie, James H) Date: Thu, 31 Jan 2008 14:39:52 -0000 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: As you are saying, there are two different classes of errors here. 1) Keys which are not understood and need to be ignored by functions which don't grok them ("JIMS_SECRET_TAG","99") 2) Keys which are understood by a function, but with a value which is not ("buffer_size", "Hello") I think allowing the second type to have undefined behavior is the right thing to do, since it's the most general. If your implementation wants to define the behavior of some out-of-range values, that's fine and doesn't make you non-conforming, it just means you defined the previously undefined behavior for some set of values. Having that undefined-ness explicit here (in one central place) seems to make sense (if only because it may be omitted in one of the other places where it should appear). My addition does not alter the existing change which guarantees case 1, it's only concerned with case 2. -- Jim James Cownie SSG/DPD/PAT Tel: +44 117 9071438 ________________________________ From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On Behalf Of Richard Treumann Sent: 31 January 2008 14:20 To: Mailing list for discussion of MPI 2.1 Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation Jim - I was taking the view that the description of what to do for a recognized key but dubious value belongs to the function that recognizes the specific key. For example if MPI_File_open accepts a "buffer_size" hint with range "32K" to "16M" we may want to define the behavior of hints that are out of range. Once we say an info can have arbitrary keys we need to state that every info consumer must be prepared to ignore keys it does not recognize because we have made unrecognizable keys legitimate. Dick Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:43:04 AM: > However, you have apparently lost the liberty to have undefined > behavior which was there in the previous version. > > Maybe you should keep that, something like > An implementation must support info objects as caches for arbitrary > (key, value) pairs, regardless of whether it recognizes the keys. 
> Each MPI function which takes hints in the form of an MPI_Info must > be prepared to ignore any key it does not recognize. However if a > function recognizes a key but not the associated value, then the > behavior is undefined. > (Modifications in italics) > -- Jim > > James Cownie > SSG/DPD/PAT > Tel: +44 117 9071438 > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > On Behalf Of Richard Treumann > Sent: 31 January 2008 13:29 > To: Mailing list for discussion of MPI 2.1 > Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation > > Your wording works for me Rolf. -- Thanks > > > Dick Treumann - MPI Team/TCEM > IBM Systems & Technology Group > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 05:25:46 AM: > > > I try to summarize all 3 replies in one proposal: > > > > ___________________________________ > > > > Proposal: > > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: > > If a function does not recognize a key, > > it will ignore it, unless otherwise specified. > > If an implementation recognizes a key but does not recognize > > the format of the corresponding value, the result is undefined. > > but should read: > > An implementation must support info objects as caches for arbitrary > > (key, value) pairs, regardless of whether it recognizes the pairs. > > Each MPI function which takes hints in the form of an MPI_Info must > > be prepared to ignore any key it does not recognize. > > > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > > paragraph: > > Advice to implementors. > > Although in MPI functions that take hints in form of an MPI_Info > > (e.g., in process creation and management, one-sided communication, > > or parallel file I/O), an implementation must be prepared to ignore > > keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS, > > MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the > > implementation must retain all (key,value) pairs so that layered > > functionality can also use the Info object. > > (End of advice to implementors.) > > _____________________________ > > Rationale for this clarification: > > > > The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, > > MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs > > that are not recognized in routines in other chapters that > > take hints with info arguments. > > The proposed clarification is necessary when we assume, that > > layered implementation of parts of the MPI-2 standard should > > be possible and may use the MPI_Info objects for their needs. > > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. > > ___________________________________ > > > > Bronis, for me, your wording "an MPI implementation may restrict" was > > in conflict with the rest of the advice. I hope the formulation above > > is also okay. It is based on the new wording from you and Dick in first > > part of the proposal. > > > > Best regards > > Rolf > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > > Nobelstr. 19, D-70550 Stuttgart, Germany . 
(Office: Allmandring 30) > > _______________________________________________ > > mpi-21 mailing list > > mpi-21_at_[hidden] > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > --------------------------------------------------------------------- > Intel Corporation (UK) Limited > Registered No. 1134945 (England) > Registered Office: Pipers Way, Swindon SN3 1RJ > VAT No: 860 2173 47 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 --------------------------------------------------------------------- Intel Corporation (UK) Limited Registered No. 1134945 (England) Registered Office: Pipers Way, Swindon SN3 1RJ VAT No: 860 2173 47 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. * -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.h.cownie at [hidden] Thu Jan 31 08:44:08 2008 From: james.h.cownie at [hidden] (Cownie, James H) Date: Thu, 31 Jan 2008 14:44:08 -0000 Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc In-Reply-To: Message-ID: A simpler change which would seem to achieve the desired clarification would be :- MPI_THREAD_FUNNELED The process may be multi-threaded, but only the main thread is allowed to make MPI calls. (and you could add If other threads make MPI calls the behavior is undefined. if you want to be verbose about it). -- Jim James Cownie SSG/DPD/PAT Tel: +44 117 9071438 > -----Original Message----- > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On > Behalf Of Rolf Rabenseifner > Sent: 31 January 2008 14:31 > To: mpi-21_at_[hidden] > Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, > topics, etc > > This is a proposal for MPI 2.1, Ballot 4. > > I'm asking especially > Greg Lindahl, > the participants of the email-discussion in 2007, to review this proposal. > > This is a follow up to: > Which thread is the funneled thread? > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/funneled/ > ___________________________________ > > Proposal: > MPI-2.0 Sect. 8.7.3, MPI_Init_thread, page 196, lines 25-26 read: > > MPI_THREAD_FUNNELED The process may be multi-threaded, but only > the main thread will make MPI calls (all MPI calls are "funneled" > to the main thread). > > but should read: > > MPI_THREAD_FUNNELED The process may be multi-threaded, but only > the main thread will make MPI calls (all MPI calls are "funneled" > to the main thread, e.g., by using the OpenMP directive "master" > in the application program). > ___________________________________ > Rationale for this clarification from the email from Greg Lindahl: > The existing document doesn't make it clear that > the MPI user has to funnel the calls to the main thread; > it's not the job of the MPI library. 
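In code, the user-side responsibility that both formulations describe looks like this minimal C + OpenMP sketch (an illustration only, not normative text; it uses the "master" construct named in Rolf's proposal):

    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        int provided, rank = -1;

        /* The application promises that only the main thread
           will call MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        #pragma omp parallel
        {
            /* ... threaded computation, no MPI calls here ... */

            #pragma omp master    /* the master thread here is the main  */
            {                     /* thread, which called MPI_Init_thread */
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            }
            #pragma omp barrier   /* the other threads never touch MPI */
        }

        printf("rank %d: the program, not the library, did the funneling\n",
               rank);
        MPI_Finalize();
        return 0;
    }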
From jsquyres at [hidden] Thu Jan 31 08:56:48 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Thu, 31 Jan 2008 09:56:48 -0500
Subject: [mpi-21] Ballot 4 proposal: INOUT arguments
In-Reply-To: Message-ID: <8C477515-A452-44A4-BDD3-A1ACCE9E4FF5@cisco.com>

On Jan 31, 2008, at 8:46 AM, Rolf Rabenseifner wrote:

>> 1. Why do we need to indicate the INOUT status of the back-end MPI
>> object in the language neutral bindings? All the bindings --
>> regardless of language -- only deal with the MPI handles, not the
>> back-end MPI objects.
>>
>> 2. Adding qualifiers on what is supposed to happen to the back-end MPI
>> object would seem to require additional semantics on the back-end MPI
>> object. Should we really be specifying what the implementation must/
>> must not do with the back-end MPI object? Who benefits from that?
>
> After all the MPI_BOTTOM discussion and not knowing what future
> languages will bring, I didn't want to remove existing information
> from the standard. An opaque object in MPI always consists
> of two things: the handle and the object itself.
>
> The language independent interface should reflect this.
>
> I thought that especially for the const discussion it would be good
> to see the IN for the handle.

I agree.

> For future HPCS languages, it may also be
> necessary to see the INOUT for the object itself.

I guess my point is that the language bindings don't specify anything
about the object itself anywhere else.

If we also want to specify the behavior of the object, then a) that's
a huge change (and not one that I'm convinced we need), and b) we need
to add a second IN/OUT/INOUT to every language-neutral binding
representing what happens to the object. Adding it to only a few of
the bindings doesn't seem consistent to me.

--
Jeff Squyres
Cisco Systems

From bronis at [hidden] Thu Jan 31 09:04:14 2008
From: bronis at [hidden] (Bronis R. de Supinski)
Date: Thu, 31 Jan 2008 07:04:14 -0800 (PST)
Subject: [mpi-21] Ballot 4 - MPI_File_get_info
In-Reply-To: Message-ID:

Rolf:

Re:
> This is a proposal for MPI 2.1, Ballot 4.

Minor wording tweak suggested below.

> I'm asking especially the implementors to check whether
> this interpretation is implemented in their MPI implementations,
> or does not contradict the existing implementation.
>
> This is a follow up to:
> MPI_File_get_info
> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
> with mail discussion not yet existing
> ___________________________________
>
> Proposal:
> MPI-2.0 Sect. 9.2.8, File Info, page 219, lines 11-13 read:
>
> MPI_FILE_GET_INFO returns a new info object containing the hints
> of the file associated with fh. The current setting of all hints
> actually used by the system related to this open file is returned
> in info_used.
> The user is responsible for freeing info_used via MPI_INFO_FREE.
>
> but should read:
>
> MPI_FILE_GET_INFO returns a new info object containing the hints
> of the file associated with fh. The current setting of all hints
> actually used by the system related to this open file is returned
> in info_used.
> If there does not exist such a hint, MPI_INFO_NULL is returned.

Change above to:

MPI_INFO_NULL is returned if no such hint exists.

Bronis

> The user is responsible for freeing info_used via MPI_INFO_FREE
> if info_used is not MPI_INFO_NULL.
> ___________________________________
> Rationale for this clarification:
> This text was missing. It was not clear whether an MPI_Info handle
> would be returned that would return nkeys=0 from MPI_INFO_GET_NKEYS.
> From the user's point of view, this behavior might have been expected
> without this clarification.
> ___________________________________
>
> As far as I understand, ROMIO uses some default hints for all
> filesystems, and therefore "no hints" is never returned.
> MPI_File_set_view and MPI_File_set_info only modify values
> but do not remove keys.
> Therefore, the info handle cannot become empty.
>
> Best regards
> Rolf
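The proposed text changes what a portable caller must do with
info_used. A short editorial sketch of the resulting user-side pattern
(the function name is invented and error handling is omitted):
----------------------------------------------------------------------
#include <mpi.h>
#include <stdio.h>

static void print_hints(MPI_File fh)
{
    MPI_Info info_used;
    int nkeys, i, flag;
    char key[MPI_MAX_INFO_KEY], value[MPI_MAX_INFO_VAL + 1];

    MPI_File_get_info(fh, &info_used);
    if (info_used == MPI_INFO_NULL)
        return;                       /* proposed: no hints in use */

    MPI_Info_get_nkeys(info_used, &nkeys);
    for (i = 0; i < nkeys; i++) {
        MPI_Info_get_nthkey(info_used, i, key);
        MPI_Info_get(info_used, key, MPI_MAX_INFO_VAL, value, &flag);
        if (flag)
            printf("hint %s = %s\n", key, value);
    }
    MPI_Info_free(&info_used);        /* free only when non-NULL */
}
----------------------------------------------------------------------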
From bronis at [hidden] Thu Jan 31 09:10:39 2008
From: bronis at [hidden] (Bronis R. de Supinski)
Date: Thu, 31 Jan 2008 07:10:39 -0800 (PST)
Subject: [mpi-21] Ballot 4 - Re: Request for interpretation
In-Reply-To: Message-ID:

Rolf:

Re:
> Sorry, that was my fault.
> Here again the full proposal including your sentence
> (and nothing else changed):

Still fine with me, although there is a minor grammar issue that needs
to be fixed. The "which" should be "that" - the correct difference
between the two is to use "that" for "required" clauses (i.e., clauses
without which the sentence no longer makes sense, essentially) and
"which" preceded by a comma for others (typically appositive clauses).
See below.

Bronis

> ___________________________________
>
> Proposal:
> MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read:
> If a function does not recognize a key,
> it will ignore it, unless otherwise specified.
> If an implementation recognizes a key but does not recognize
> the format of the corresponding value, the result is undefined.
> but should read:
> An implementation must support info objects as caches for arbitrary
> (key, value) pairs, regardless of whether it recognizes the pairs.
> Each MPI function which takes hints in the form of an MPI_Info must

Change the above to:

Each MPI function that takes hints in the form of an MPI_Info must

> be prepared to ignore any key it does not recognize.
> However if a function recognizes a key but not the associated value,
> then the behavior is undefined.
>
> Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new
> paragraph:
> Advice to implementors.
> Although in MPI functions that take hints in form of an MPI_Info
> (e.g., in process creation and management, one-sided communication,
> or parallel file I/O), an implementation must be prepared to ignore
> keys that it does not recognize, for the purpose of MPI_INFO_GET_NKEYS,
> MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the
> implementation must retain all (key,value) pairs so that layered
> functionality can also use the Info object.
> (End of advice to implementors.)
> _____________________________
> Rationale for this clarification:
>
> The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET,
> MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs
> that are not recognized in routines in other chapters that
> take hints with info arguments.
> The proposed clarification is necessary when we assume that
> layered implementations of parts of the MPI-2 standard should
> be possible and may use the MPI_Info objects for their needs.
> This was a goal of the MPI-2 Forum and the MPI-2.0 specification.
> ___________________________________
>
> Best regards
> Rolf
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
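What the clarified semantics guarantee to layered libraries can be
shown with a small editorial sketch (the key name below is invented,
and deliberately one that no MPI implementation recognizes):
----------------------------------------------------------------------
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Info info;
    int nkeys, flag;
    char value[MPI_MAX_INFO_VAL + 1];

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    MPI_Info_set(info, "mylib_trace_level", "2"); /* unrecognized key */

    /* per the proposal, the implementation must retain the pair so
       that layered functionality can read it back later */
    value[0] = '\0';
    MPI_Info_get_nkeys(info, &nkeys);
    MPI_Info_get(info, "mylib_trace_level", MPI_MAX_INFO_VAL,
                 value, &flag);
    printf("%d key(s) cached, flag=%d, value=%s\n", nkeys, flag, value);

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
----------------------------------------------------------------------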
From treumann at [hidden] Thu Jan 31 09:23:05 2008
From: treumann at [hidden] (Richard Treumann)
Date: Thu, 31 Jan 2008 10:23:05 -0500
Subject: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement
In-Reply-To: Message-ID:

I think we have an overall ambiguity about what the "current set of
hints" is. This ambiguity is evident in the question about what
MPI_FILE_GET_INFO returns and in this discussion too.

If an implementation supports 5 file hints then it must select a value
for each of these hints at MPI_FILE_OPEN. If there is an MPI_Info that
stipulates 2 of the hints then how many hints are in the "current set
of hints"? 2 or 5? I would say there are 5 and I think it makes sense
for MPI_FILE_GET_INFO to return all 5 (key,value) pairs.

Two more specific points -

1) I would expect that if at MPI_FILE_OPEN the implementation is given
non-default hints ("A","yes") and ("B","no") and then at
MPI_FILE_SET_INFO is given ("B","yes"), the net effect is that hint
"A" remains as set and hint "B" is altered (if possible). If there is
a hint "C" which has never been mentioned, it will have received a
default value at MPI_FILE_OPEN, and an MPI_FILE_SET_INFO which does
not mention "C" will leave that default unchanged.

Is the "clarification" saying hint "A" must return to default when
MPI_FILE_SET_INFO fails to mention it? If that is the intent then I
need to be convinced. If we decide this is to be blessed then we
probably need to say that any use of MPI_FILE_SET_INFO must first call
MPI_FILE_GET_INFO, tweak the INFO it gets back from MPI_FILE_GET_INFO,
and pass that to MPI_FILE_SET_INFO to avoid unexpected changes to the
set of hints that is "in effect".

2) Since a hint is a hint, not a command, it can be rejected. It is
possible that some hint can be honored at MPI_FILE_OPEN but, once it
has been honored, cannot be altered at reasonable cost. For example,
maybe somebody's MPI_FILE_OPEN could accept a hint ("buffer_size",
"dynamic-64MB") meaning "start with a 64MB buffer but be prepared to
accept changes to buffer size". If the user has set hint
("buffer_size", "64MB") at FILE_OPEN, the implementation would omit
whatever synchs are needed to preserve the ability to change on the
fly. Passing ("buffer_size", "dynamic-16MB") to MPI_FILE_SET_INFO
could be honored if the user had chosen "dynamic" at FILE_OPEN but
would need to be ignored if he had not. For most implementations, a
hint like "buffer_size" could not be honored at all after the first
file read or write had been done.

Dick Treumann - MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:24:51 AM:

> This is a proposal for MPI 2.1, Ballot 4.
>
> I'm asking especially the implementors to check whether
> this interpretation is implemented in their MPI implementations,
> or does not contradict the existing implementation.
>
> This is a follow up to:
> MPI_File_set_info
> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
> with mail discussion not yet existing
> ___________________________________
>
> Proposal:
> Add in MPI-2.0 Sect.
9.2.8, File Info, page 218, after line 18 the
> following sentences:
>
> With MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO the current setting
> of all hints used by the system for this open file is updated by
> the (key,value) pairs in the info argument.
> ___________________________________
> Rationale for this clarification:
> This text was missing. It was not clear whether the info handles
> in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO update or replace
> the current set of used hints.
> The developers of ROMIO decided to update the current set of used hints.
> Therefore, this behavior should be the expected behavior of a majority
> of users.
> ___________________________________
>
> Best regards
> Rolf

From rabenseifner at [hidden] Thu Jan 31 09:26:39 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 31 Jan 2008 16:26:39 +0100
Subject: [mpi-21] Ballot 4 proposal: INOUT arguments
In-Reply-To: <8C477515-A452-44A4-BDD3-A1ACCE9E4FF5@cisco.com>
Message-ID:

Jeff, to your statement:

> If we also want to specify the behavior of the object, then a) that's
> a huge change (and not one that I'm convinced we need), and b) we need
> to add a second IN/OUT/INOUT to every language-neutral binding
> representing what happens to the object. Adding it to only a few of
> the bindings doesn't seem consistent to me.

The trick is that we have this problem only with the handle/object
arguments. In most cases we have
- IN/IN, which is abbreviated with IN, e.g., MPI_Send(IN comm), or
- OUT/OUT, which is abbreviated with OUT, e.g., in MPI_Isend(OUT rq), or
- INOUT/INOUT, which is abbreviated with INOUT, e.g., in
  MPI_Type_commit(INOUT datatype).

The change I'm proposing is based on your wish to see the IN for the
handles that are kept constant. Changing no words would continue to
show the INOUT for the object itself (that's the way MPI-2 was
written).

I would not say that it is a huge change to add the handle IN
information to all the interfaces. It is not huge, although there are
more routines affected (also many MPI_FILE routines) than you
mentioned in your first mails.

Okay?

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
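The handle/object distinction Rolf describes is easiest to see in the
C binding, where the argument is the handle and the opaque object
behind it is what actually changes state. A brief editorial sketch
(the helper name is invented):
----------------------------------------------------------------------
#include <mpi.h>

static void build_strided_type(MPI_Datatype *newtype)
{
    /* handle: OUT -- MPI returns a reference to a new object */
    MPI_Type_vector(4, 1, 8, MPI_DOUBLE, newtype);

    /* handle: unchanged (IN); object: moved to the committed state --
       the INOUT that the language-neutral notation tries to express */
    MPI_Type_commit(newtype);
}
----------------------------------------------------------------------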
From treumann at [hidden] Thu Jan 31 09:27:59 2008
From: treumann at [hidden] (Richard Treumann)
Date: Thu, 31 Jan 2008 10:27:59 -0500
Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc
In-Reply-To: Message-ID:

How about:

MPI_THREAD_FUNNELED The process may be multi-threaded, but the
application must ensure that only the main thread makes MPI calls.

Dick Treumann - MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

mpi-21-bounces_at_[hidden] wrote on 01/31/2008 09:44:08 AM:

> A simpler change which would seem to achieve the desired clarification
> would be :-
>
> MPI_THREAD_FUNNELED The process may be multi-threaded, but only the
> main thread is allowed to make MPI calls.
>
> (and you could add
> If other threads make MPI calls the behavior is undefined.
> if you want to be verbose about it).
>
> -- Jim

From rabenseifner at [hidden] Thu Jan 31 09:46:59 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 31 Jan 2008 16:46:59 +0100
Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc
In-Reply-To: Message-ID:

That's fine for me.

On Thu, 31 Jan 2008 10:27:59 -0500 Richard Treumann wrote:
> How about:
> MPI_THREAD_FUNNELED The process may be multi-threaded, but the
> application must ensure that only the main thread makes MPI calls.

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rlgraham at [hidden] Thu Jan 31 09:50:35 2008
From: rlgraham at [hidden] (Richard Graham)
Date: Thu, 31 Jan 2008 10:50:35 -0500
Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc
In-Reply-To: Message-ID:

Why restrict this to a standard-specified thread (the main thread)?
Why not word it as a single thread, and let the app decide which
thread this is, based on whatever criteria it wants to use to select
this thread?

Rich

On 1/31/08 10:27 AM, "Richard Treumann" wrote:
> How about:
> MPI_THREAD_FUNNELED The process may be multi-threaded, but the
> application must ensure that only the main thread makes MPI calls.

From james.h.cownie at [hidden] Thu Jan 31 09:54:55 2008
From: james.h.cownie at [hidden] (Cownie, James H)
Date: Thu, 31 Jan 2008 15:54:55 -0000
Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc
In-Reply-To: Message-ID:

Because that's how it's always been. We're not adding a restriction
with the change, merely clarifying the existing restriction.

-- Jim

James Cownie
SSG/DPD/PAT
Tel: +44 117 9071438

From: Richard Graham
Sent: 31 January 2008 15:51
> Why restrict this to a standard-specified thread (the main thread)?
> Why not word it as a single thread, and let the app decide which
> thread this is?

From rabenseifner at [hidden] Thu Jan 31 10:09:17 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 31 Jan 2008 17:09:17 +0100
Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc
In-Reply-To: Message-ID:

Sorry Rich, but MPI 2.0 has defined "main thread", and not less!
The open question was only who is responsible for guaranteeing this.

Dick's text is fine:

>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the
>> application must ensure that only the main thread makes MPI calls.

My first proposal

>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only
>>>> the main thread will make MPI calls (all MPI calls are "funneled"
>>>> to the main thread, e.g., by using the OpenMP directive "master"
>>>> in the application program).

has the advantage that "main thread" is defined by referring to the
OpenMP standard, where the "OpenMP master thread" is defined (and not
"main").

We can combine this to:

MPI_THREAD_FUNNELED The process may be multi-threaded, but the
application must ensure that only the main thread makes MPI calls,
e.g., by using the OpenMP directive "master".

(This clearly tells that the OpenMP single directive is not enough.)

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden] Thu Jan 31 10:24:25 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 31 Jan 2008 17:24:25 +0100
Subject: [mpi-21] Handling errors in handle transfer functions
In-Reply-To: Message-ID:

Mainly to Bill Gropp, Dick Treumann, and Marc Snir, who have
contributed to this mail-discussion thread.

This is a follow up to:
Reporting invalid handles provided to handle conversion functions
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/handleerrors/
_________________________________________

If I understand correctly, a clarification is not needed in the MPI
standard.
If somebody wants a clarification to be included in the standard, and
therefore in Ballot 4, please send me your wording with the page and
line references included. If all agree that no clarification is
needed, I will close this discussion thread.

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden] Thu Jan 31 10:48:57 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 31 Jan 2008 17:48:57 +0100
Subject: [mpi-21] Ballot 4 - User defined datarep - was Re: Question to MPI/IO
Message-ID:

This is a proposal for MPI 2.1, Ballot 4.

I'm asking especially
Hubert Ritzdorf, Jean-Pierre Prost, John May, Bill Nitzberg,
the participants of the email-discussion in 1999, to review this proposal.

This is a follow up to:
Interpretation of user defined datarep
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/datarep/
___________________________________

Proposal:
MPI-2.0 Sect. 9.5.3 User-defined Data Representations, page 254,
lines 13-15 read:
Then in subsequent calls to the conversion function,
MPI will increment the value in position by the count of items
converted in the previous call.

but should read:
Then in subsequent calls to the conversion function,
MPI will increment the value in position by the count of items
converted in the previous call, and userbuf is kept unchanged.
___________________________________
Rationale for this clarification:
It was not clear whether the userbuf pointer must also be moved
in the subsequent calls. This clarification was already made in
1999 and should already be implemented in existing implementations
of user-defined data representations.
___________________________________
Total text, page 254, lines 8-15:
If MPI cannot allocate a buffer large enough to hold all the
data to be converted from a read operation, it may call the
conversion function repeatedly using the same datatype and
userbuf, and reading successive chunks of data to be converted
in filebuf. For the first call (and in the case when all the data
to be converted fits into filebuf), MPI will call the function
with position set to zero. Data converted during this call will
be stored in the userbuf according to the first count data items
in datatype. Then in subsequent calls to the conversion function,
MPI will increment the value in position by the count of items
converted in the previous call,
(new text added:)
and userbuf is kept unchanged.
___________________________________

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
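How a conversion function experiences the clarified rule can be shown
with an editorial sketch (illustrative only: it pretends the items are
8-byte doubles needing no real conversion; an actual data
representation registered with MPI_REGISTER_DATAREP would decode the
file format and follow the datatype's typemap):
----------------------------------------------------------------------
#include <mpi.h>

int read_conversion_fn(void *userbuf, MPI_Datatype datatype, int count,
                       void *filebuf, MPI_Offset position,
                       void *extra_state)
{
    /* userbuf is the SAME pointer on every repeated call; only
       "position" advances, so the function offsets into userbuf */
    double *dst = (double *) userbuf + position;
    const double *src = (const double *) filebuf;
    int i;

    for (i = 0; i < count; i++)
        dst[i] = src[i];  /* placeholder for real representation decode */

    return MPI_SUCCESS;
}
----------------------------------------------------------------------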
From rabenseifner at [hidden] Thu Jan 31 11:04:03 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 31 Jan 2008 18:04:03 +0100
Subject: [mpi-21] Exactly one of MPI_MODE_RDONLY, MPI_MODE_RDWR, MPI_MODE_WRONLY
Message-ID:

I'm removing the question
MPI_MODE_RDONLY, MPI_MODE_RDWR, MPI_MODE_WRONLY
Is it an error to specify more than one of these?
from the errata page, because MPI-2.0, Sect. 9.2.1, page 213,
lines 3-4 read:
Exactly one of MPI_MODE_RDONLY, MPI_MODE_RDWR, or MPI_MODE_WRONLY
must be specified.
This means the question is already fully answered.

There also seems not to be any mail discussion on this topic.
Was there any mail (not on the Forum reflector) asking this?
If yes, this mail might be forwarded to the questioner.

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden] Thu Jan 31 11:17:45 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 31 Jan 2008 18:17:45 +0100
Subject: [mpi-21] Correction to One-sided communications, Section 6.7: Semantics and Correctness
In-Reply-To: Message-ID:

Dear all,

I'm asking especially Jesper Traeff, Rajeev Thakur, and Dick Treumann,
the participants of the email-discussion in 2003, for a proposal:
Please check whether a clarification or correction is really needed
and whether such a proposal would find consensus.
Please, if you make a proposal, clearly specify the page and line
numbers. If it is done now, it may go into MPI 2.1, Ballot 4.
___________________________________
This is a follow up to:
Are MPI_Win_wait and MPI_Win_post interchanged in the discussion of
one-sided completion?
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/waitpost/
___________________________________

Thank you and best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From thakur at [hidden] Thu Jan 31 11:29:41 2008
From: thakur at [hidden] (Rajeev Thakur)
Date: Thu, 31 Jan 2008 11:29:41 -0600
Subject: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement
In-Reply-To: Message-ID: <008501c8642e$dcf1b380$860add8c@mcs.anl.gov>

The intent is that if the user calls MPI_File_set_info (or
MPI_File_set_view) twice, the 2nd call will only update (if possible)
the key-vals passed in the 2nd call; others are unmodified. If the
2nd call passes MPI_INFO_NULL, nothing will change -- it won't
nullify previously passed hints.

Rajeev
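Rajeev's update-not-replace semantics, as an editorial sketch (the
hint keys are reserved names from MPI-2; the file name is a
placeholder and error handling is omitted):
----------------------------------------------------------------------
#include <mpi.h>

static void hint_update_example(void)
{
    MPI_File fh;
    MPI_Info info;

    MPI_Info_create(&info);
    MPI_Info_set(info, "cb_buffer_size", "4194304");
    MPI_Info_set(info, "striping_factor", "8");
    /* exactly one of MPI_MODE_RDONLY/RDWR/WRONLY, per Sect. 9.2.1 */
    MPI_File_open(MPI_COMM_WORLD, "datafile",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);
    MPI_Info_free(&info);

    MPI_Info_create(&info);
    MPI_Info_set(info, "cb_buffer_size", "8388608");
    MPI_File_set_info(fh, info);  /* updates cb_buffer_size only;
                                     striping_factor stays as set at
                                     open, and MPI_INFO_NULL here
                                     would change nothing */
    MPI_Info_free(&info);

    MPI_File_close(&fh);
}
----------------------------------------------------------------------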
Rajeev _____ From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On Behalf Of Richard Treumann Sent: Thursday, January 31, 2008 9:23 AM To: Mailing list for discussion of MPI 2.1 Subject: Re: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement I think we have an overall ambiguity about what the "current set of hints" is. This ambiguity is evident in the question about what MPI_FILE_INFO_GET returns and in this discussion too. If an implementation supports 5 file hints then it must select a value for each of these hints an MPI_FILE_OPEN. If there is an MPI_Info that stipulates 2 of the hints then how many hints are in the "current set of hints"? 2 or 5? I would say there are 5 and I think it makes sense for MPI_FILE_GET_INFO to return all 5 (key,value) pairs. Two more specific points - 1) I would expect that if at MPI_FILE_OPEN the implementation is given non-default hints ("A","yes") and ("B","no") and then at MPI_FILE_INFO_SET is given ("B","yes") the net effect is that hint "A" remains as set and hint "B" is altered (if possible). If there is a hint "C" which has never been mentioned it will have received a default value at MPI_FILE_OPEN and the MPI_FILE_INFO_SET which does not mention "C" will leave that default unchanged. Is the "clarification" saying hint "A" must return to default when MPI_FILE_INFO_SET fails to mention it? If that is the intent then I need to be convinced. If we decide this is to be blessed then we probably need to say that any use of MPI_FILE_SET_INFO must first call MPI_FILE_GET_INFO, tweek the INFO it gets back from MPI_FILE_GET_INFO and pass that to MPI_FILE_SET_INFO to avoid unexpected changes to the set of hints that is "in effect". 2) Since a hint is a hint, not a command, it can be rejected. It is possible that some hint can be honored at MPI_FILE_OPEN but once it has been honored, cannot be altered at reasonable cost. For example, maybe somebody's MPI_FILE_OPEN could accept a hint ("buffer_size", "dynamic-64MB") meaning "start with a 64MB buffer but be prepared to accept changes to buffer size". If the user has set hint ("buffer_size", "64MB") at FILE_OPEN, the implelentation would omit whatever synchs are needed to preserve the ability to change on the fly. Passing ("buffer_size", "dynamic-16MB") to MPI_FILE_SET_INFO could be honored if the user had chosen "dynamic" at FILE_OPEN but would need to be ignored if he had not. For most implementations, a hint like "buffer_size" could not be honored at all after the first file read or write had been done. Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:24:51 AM: > This is a proposal for MPI 2.1, Ballot 4. > > I'm asking especially the implementors to check, whether > this interpretation is implemented in their MPI implementations, > or does not contradict to the existing implementation. > > This is a follow up to: > MPI_File_set_info > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion not yet existing > ___________________________________ > > Proposal: > Add in MPI-2.0 Sect. 9.2.8, File Info, page 218, after line 18 the > following sentences: > > With MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO the current setting > of all hints used by the system to this open file is updated by > the (key,value) pairs in the info argument. 
> ___________________________________ > Rationale for this clarification: > This text was missing. It was not clear whether an info handle > in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO updates or replaces > the current set of used hints. > The developers from ROMIO decided to update the current set of used hints. > Therefore, this behavior should be the expected behavior of a majority > of users. > ___________________________________ > > Best regards > Rolf > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From thakur at [hidden] Thu Jan 31 11:37:37 2008 From: thakur at [hidden] (Rajeev Thakur) Date: Thu, 31 Jan 2008 11:37:37 -0600 Subject: [mpi-21] Ballot 4 - User defined datarep - was Re: Question to MPI/IO In-Reply-To: Message-ID: <008a01c8642f$f8cbde90$860add8c@mcs.anl.gov> > Then in subsequent calls to the conversion function, > MPI will increment the value in position by the count of items > converted in the previous call, and userbuf is kept unchanged. Maybe we should be more clear and say that "the userbuf pointer is left unchanged" Rajeev > -----Original Message----- > From: mpi-21-bounces_at_[hidden] > [mailto:mpi-21-bounces_at_[hidden]] On Behalf Of Rolf Rabenseifner > Sent: Thursday, January 31, 2008 10:49 AM > To: mpi-21_at_[hidden] > Subject: [mpi-21] Ballot 4 - User defined datarep - was Re: > Question to MPI/IO > > This is a proposal for MPI 2.1, Ballot 4. > > I'm asking especially > Hubert Ritzdorf, Jean-Pierre Prost, John May, Bill Nitzberg, > the participants of the email-discussion in 1999, to review > this proposal. > > This is a follow up to: > Interpretation of user defined datarep > in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/datarep/ > ___________________________________ > > Proposal: > MPI-2.0 Sect. 9.5.3 User-defined Data Representations, page 254, > lines 13-15 read: > Then in subsequent calls to the conversion function, > MPI will increment the value in position by the count of items > converted in the previous call. > > but should read: > Then in subsequent calls to the conversion function, > MPI will increment the value in position by the count of items > converted in the previous call, and userbuf is kept unchanged. > > ___________________________________ > Rationale for this clarification: > It was not clear whether the userbuf pointer must also be moved > in the subsequent calls. This clarification was already done in > 1999 and should already be implemented in existing implementations > of user-defined data representations. > ___________________________________ > Total text page 254 lines 8-15: > If MPI cannot allocate a buffer large enough to hold all the > data to be converted from a read operation, it may call the > conversion function repeatedly using the same datatype and > userbuf, and reading successive chunks of data to be converted > in filebuf.
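(To make the repeated-call semantics of the passage quoted here concrete, a minimal C sketch of a read conversion function with the standard MPI_Datarep_conversion_function signature follows; the function name is invented, and the contiguous 4-byte big-endian item format is an illustrative assumption, since real datatypes need not be contiguous.)

#include <mpi.h>
#include <stdint.h>

/* Sketch: MPI passes the same userbuf on every repeated call and only
   advances `position` (counted in items converted so far), so the
   function offsets into userbuf itself; the userbuf pointer is never
   moved by MPI. */
int read_conversion_fn(void *userbuf, MPI_Datatype datatype, int count,
                       void *filebuf, MPI_Offset position,
                       void *extra_state)
{
    uint32_t *dst = (uint32_t *) userbuf + position; /* same base, new offset */
    const unsigned char *src = (const unsigned char *) filebuf;
    int i;

    for (i = 0; i < count; i++) {   /* byte-swap each 4-byte item */
        dst[i] = ((uint32_t) src[4*i]     << 24) |
                 ((uint32_t) src[4*i + 1] << 16) |
                 ((uint32_t) src[4*i + 2] <<  8) |
                  (uint32_t) src[4*i + 3];
    }
    return MPI_SUCCESS;
}

Such a function would be registered, together with a write conversion function and an extent callback, via MPI_Register_datarep.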
For the first call (and in the case when all the data > to be converted fits into filebuf), MPI will call the function > with position set to zero. Data converted during this call will > be stored in the userbuf according to the first count data items > in datatype. Then in subsequent calls to the conversion function, > MPI will increment the value in position by the count of items > converted in the previous call, > (new text added:) > and userbuf is kept unchanged. > ___________________________________ > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > From bronis at [hidden] Thu Jan 31 11:57:59 2008 From: bronis at [hidden] (Bronis R. de Supinski) Date: Thu, 31 Jan 2008 09:57:59 -0800 (PST) Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc In-Reply-To: Message-ID: Jim and Rich: Then I suggest that fixing this broken semantic decision is something that we should consider for MPI 2.2. It should not break any existing programs and might even make some existing ones standards-conforming. Although I can imagine ways for the MPI implementation to detect that the one thread is not the main thread, it is not at all clear to me how it would matter to the implementation. Bronis On Thu, 31 Jan 2008, Cownie, James H wrote: > Because that's how it's always been. We're not adding a restriction with > the change, merely clarifying the existing restriction. > > -- Jim > > James Cownie > SSG/DPD/PAT > Tel: +44 117 9071438 > > ________________________________ > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On > Behalf Of Richard Graham > Sent: 31 January 2008 15:51 > To: Mailing list for discussion of MPI 2.1 > Subject: Re: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: > Attending, topics, etc > > > > Why restrict this to a standard-specified thread (main thread), why not > word it > as a single thread, and let the app decide what thread this is, based on > whatever criteria it wants to use to select this thread? > > Rich > > > On 1/31/08 10:27 AM, "Richard Treumann" wrote: > > How about: > MPI_THREAD_FUNNELED The process may be multi-threaded, but the > application > must insure that only the main thread makes MPI calls. > > > Dick Treumann - MPI Team/TCEM > IBM Systems & Technology Group > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 09:44:08 AM: > > > A simpler change which would seem to achieve the desired clarification > > would be :- > > > > MPI_THREAD_FUNNELED The process may be multi-threaded, but only the > > main > > thread is allowed to make MPI calls. > > > > (and you could add > > If other threads make MPI calls the behavior is undefined. > > if you want to be verbose about it).
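As a concrete illustration of the funneling being discussed, a minimal C sketch, assuming (as is usual) that the OpenMP master thread is the thread that called MPI_Init_thread:

#include <mpi.h>
#include <omp.h>

/* Sketch: under MPI_THREAD_FUNNELED the application itself must
   ensure that only the main thread calls MPI, e.g. with OpenMP's
   "master" directive; a "single" region would not do, because any
   thread may execute it. */
int main(int argc, char **argv)
{
    int provided, token = 0;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED)
        MPI_Abort(MPI_COMM_WORLD, 1);

    #pragma omp parallel
    {
        /* ... multi-threaded computation, no MPI calls here ... */

        #pragma omp barrier
        #pragma omp master
        {
            /* only the master (= main) thread talks to MPI */
            MPI_Bcast(&token, 1, MPI_INT, 0, MPI_COMM_WORLD);
        }
        #pragma omp barrier    /* "master" has no implied barrier */

        /* ... computation using the broadcast value ... */
    }

    MPI_Finalize();
    return 0;
}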
> > > > -- Jim > > > > James Cownie > > SSG/DPD/PAT > > Tel: +44 117 9071438 > > > > > > > > > > > -----Original Message----- > > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > > > On > > > Behalf Of Rolf Rabenseifner > > > Sent: 31 January 2008 14:31 > > > To: mpi-21_at_[hidden] > > > Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: > Attending, > > > topics, etc > > > > > > This is a proposal for MPI 2.1, Ballot 4. > > > > > > I'm asking especially > > > Greg Lindahl, > > > the participants of the email-discussion in 2007, to review this > > proposal. > > > > > > This is a follow up to: > > > Which thread is the funneled thread? > > > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > > > errata/index.html > > > with mail discussion in > > > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > > > errata/discuss/funneled/ > > > ___________________________________ > > > > > > Proposal: > > > MPI-2.0 Sect. 8.7.3, MPI_Init_thread, page 196, lines 25-26 read: > > > > > > MPI_THREAD_FUNNELED The process may be multi-threaded, but only > > > the main thread will make MPI calls (all MPI calls are "funneled" > > > to the main thread). > > > > > > but should read: > > > > > > MPI_THREAD_FUNNELED The process may be multi-threaded, but only > > > the main thread will make MPI calls (all MPI calls are "funneled" > > > to the main thread, e.g., by using the OpenMP directive "master" > > > in the application program). > > > ___________________________________ > > > Rationale for this clarification from the email from Greg Lindahl: > > > The existing document doesn't make it clear that > > > the MPI user has to funnel the calls to the main thread; > > > it's not the job of the MPI library. I have seen multiple > > > MPI users confused by this issue, and when I first read > > > this section, I was confused by it, too. > > > ___________________________________ > > > > > > > > > Best regards > > > Rolf > > > > > > > > > > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email > rabenseifner_at_[hidden] > > > High Performance Computing Center (HLRS) . phone > ++49(0)711/685-65530 > > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / > 685-65832 > > > Head of Dpmt Parallel Computing . . . > www.hlrs.de/people/rabenseifner > > > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > > > _______________________________________________ > > > mpi-21 mailing list > > > mpi-21_at_[hidden] > > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > --------------------------------------------------------------------- > > Intel Corporation (UK) Limited > > Registered No. 1134945 (England) > > Registered Office: Pipers Way, Swindon SN3 1RJ > > VAT No: 860 2173 47 > > > > This e-mail and any attachments may contain confidential material for > > the sole use of the intended recipient(s). Any review or distribution > > by others is strictly prohibited. If you are not the intended > > recipient, please contact the sender and delete all copies. > > > > > > _______________________________________________ > > mpi-21 mailing list > > mpi-21_at_[hidden] > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > ________________________________ > > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > > > --------------------------------------------------------------------- > Intel Corporation (UK) Limited > Registered No. 
1134945 (England) > Registered Office: Pipers Way, Swindon SN3 1RJ > VAT No: 860 2173 47 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > From bosilca at [hidden] Thu Jan 31 11:58:59 2008 From: bosilca at [hidden] (George Bosilca) Date: Thu, 31 Jan 2008 12:58:59 -0500 Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc In-Reply-To: Message-ID: What is the definition of a "main thread"? The OpenMP example is still vague. I think we should clarify what we expect the "main thread" to be. From my perspective, this "main thread" is the one that called the MPI_Init_thread function, as the MPI library is then allowed to attach some kind of private key(s) to it (pthread_key_create). Thanks, george. On Jan 31, 2008, at 11:09 AM, Rolf Rabenseifner wrote: > Sorry Rich, > > but MPI 2.0 has defined "main thread", and nothing less! > > The open question was only who is responsible for guaranteeing this. > > Dick's text is fine: >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the >>> application >>> must insure that only the main thread makes MPI calls. > > My first proposal >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only >>>>>>> the main thread will make MPI calls (all MPI calls are >>>>>>> "funneled" >>>>>>> to the main thread, e.g., by using the OpenMP directive >>>>>>> "master" >>>>>>> in the application program). > has the advantage that "main thread" is defined by referring to > the OpenMP-Standard where "OpenMP master thread" is defined (and not > "main"). > > We can combine this to: > MPI_THREAD_FUNNELED The process may be multi-threaded, but the > application > must insure that only the main thread makes MPI calls, e.g., by > using the > OpenMP directive "master". > > (This clearly tells that OpenMP single directive is not enough.) > > Best regards > Rolf > > > On Thu, 31 Jan 2008 10:50:35 -0500 > Richard Graham wrote: >> Why restrict this to a standard-specified thread (main thread), why >> not word >> it >> as a single thread, and let the app decide what thread this is, >> based on >> whatever criteria it wants to use to select this thread? >> >> Rich >> >> >> On 1/31/08 10:27 AM, "Richard Treumann" wrote: >> >>> How about: >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the >>> application >>> must insure that only the main thread makes MPI calls. >>> >>> >>> Dick Treumann - MPI Team/TCEM >>> IBM Systems & Technology Group >>> Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 >>> Tele (845) 433-7846 Fax (845) 433-8363 >>> >>> >>> mpi-21-bounces_at_[hidden] wrote on 01/31/2008 09:44:08 AM: >>> >>>>> A simpler change which would seem to achieve the desired >>>>> clarification >>>>> would be :- >>>>> >>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but >>>>> only the >>>>> main >>>>> thread is allowed to make MPI calls. >>>>> >>>>> (and you could add >>>>> If other threads make MPI calls the behavior is undefined. >>>>> if you want to be verbose about it).
>>>>> >>>>> -- Jim >>>>> >>>>> James Cownie >>>>> SSG/DPD/PAT >>>>> Tel: +44 117 9071438 >>>>> >>>>> >>>>> >>>>> >>>>>>> -----Original Message----- >>>>>>> From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden] >>>>>>> ] >>>>> On >>>>>>> Behalf Of Rolf Rabenseifner >>>>>>> Sent: 31 January 2008 14:31 >>>>>>> To: mpi-21_at_[hidden] >>>>>>> Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: >>>>>>> Attending, >>>>>>> topics, etc >>>>>>> >>>>>>> This is a proposal for MPI 2.1, Ballot 4. >>>>>>> >>>>>>> I'm asking especially >>>>>>> Greg Lindahl, >>>>>>> the participants of the email-discussion in 2007, to review this >>>>> proposal. >>>>>>> >>>>>>> This is a follow up to: >>>>>>> Which thread is the funneled thread? >>>>>>> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- >>>>>>> errata/index.html >>>>>>> with mail discussion in >>>>>>> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- >>>>>>> errata/discuss/funneled/ >>>>>>> ___________________________________ >>>>>>> >>>>>>> Proposal: >>>>>>> MPI-2.0 Sect. 8.7.3, MPI_Init_thread, page 196, lines 25-26 >>>>>>> read: >>>>>>> >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only >>>>>>> the main thread will make MPI calls (all MPI calls are >>>>>>> "funneled" >>>>>>> to the main thread). >>>>>>> >>>>>>> but should read: >>>>>>> >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only >>>>>>> the main thread will make MPI calls (all MPI calls are >>>>>>> "funneled" >>>>>>> to the main thread, e.g., by using the OpenMP directive >>>>>>> "master" >>>>>>> in the application program). >>>>>>> ___________________________________ >>>>>>> Rationale for this clarification from the email from Greg >>>>>>> Lindahl: >>>>>>> The existing document doesn't make it clear that >>>>>>> the MPI user has to funnel the calls to the main thread; >>>>>>> it's not the job of the MPI library. I have seen multiple >>>>>>> MPI users confused by this issue, and when I first read >>>>>>> this section, I was confused by it, too. >>>>>>> ___________________________________ >>>>>>> >>>>>>> >>>>>>> Best regards >>>>>>> Rolf >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] >>>>>>> High Performance Computing Center (HLRS) . phone + >>>>>>> +49(0)711/685-65530 >>>>>>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / >>>>>>> 685-65832 >>>>>>> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner >>>>>>> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: >>>>>>> Allmandring 30) >>>>>>> _______________________________________________ >>>>>>> mpi-21 mailing list >>>>>>> mpi-21_at_[hidden] >>>>>>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 >>>>> --------------------------------------------------------------------- >>>>> Intel Corporation (UK) Limited >>>>> Registered No. 1134945 (England) >>>>> Registered Office: Pipers Way, Swindon SN3 1RJ >>>>> VAT No: 860 2173 47 >>>>> >>>>> This e-mail and any attachments may contain confidential >>>>> material for >>>>> the sole use of the intended recipient(s). Any review or >>>>> distribution >>>>> by others is strictly prohibited. If you are not the intended >>>>> recipient, please contact the sender and delete all copies. 
>>>>> > >>>>> _______________________________________________ > >>>>> mpi-21 mailing list > >>>>> mpi-21_at_[hidden] > >>>>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > >>> > >>> > >>> _______________________________________________ > >>> mpi-21 mailing list > >>> mpi-21_at_[hidden] > >>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > >> > >> > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2423 bytes Desc: smime.p7s URL: From treumann at [hidden] Thu Jan 31 12:05:13 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 13:05:13 -0500 Subject: [mpi-21] Handling errors in handle transfer functions In-Reply-To: Message-ID: No clarification needed is still my take on it. Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 11:24:25 AM: > Mainly to Bill Gropp, Dick Treumann, and Marc Snir, > who have contributed to this mail-discussion thread. > > This is a follow up to: > Reporting invalid handles provided to handle conversion functions > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/handleerrors/ > > _________________________________________ > > If I understand correctly, then a clarification is not needed > in the MPI standard. > > If somebody wants a clarification to be included into the standard > and therefore in Ballot 4, please send me your wording > with the page and line references included. > > If all agree that no clarification is needed, then I would finish > this discussion-thread. > > Best regards > Rolf > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From treumann at [hidden] Thu Jan 31 13:03:59 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 14:03:59 -0500 Subject: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement In-Reply-To: <008501c8642e$dcf1b380$860add8c@mcs.anl.gov> Message-ID: Rajeev I think you just agreed with the interpretation I advocated. Neither the proposal nor the rationale made this at all clear to me. How about? Proposal: Add in MPI-2.0 Sect.
9.2.8, File Info, page 218, after line 18 the following sentences: When an info object that mentions a subset of valid hints is passed to MPI_FILE_SET_VIEW or MPI_FILE_SET_INFO, there will be no effect on previously set or defaulted hints that the info does not mention. ___________________________________ Rationale for this clarification: This text was missing. It was not clear, whether an info object in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO was intended to replace only the mentioned hints or was intended to substitute a complete new set of hints for the prior set. ___________________________________ Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 12:29:41 PM: > The intent is that if the user calls MPI_File_set_info (or > MPI_File_set_view) twice, the 2nd call will only update (if > possible) the key-vals passed in the 2nd call; others are > unmodified. If the 2nd call passes MPI_INFO_NULL, nothing will > change -- it won't nullify previously passed hints. > > Rajeev > > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > On Behalf Of Richard Treumann > Sent: Thursday, January 31, 2008 9:23 AM > To: Mailing list for discussion of MPI 2.1 > Subject: Re: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement > I think we have an overall ambiguity about what the "current set of > hints" is. This ambiguity is evident in the question about what > MPI_FILE_INFO_GET returns and in this discussion too. If an > implementation supports 5 file hints then it must select a value for > each of these hints an MPI_FILE_OPEN. If there is an MPI_Info that > stipulates 2 of the hints then how many hints are in the "current > set of hints"? 2 or 5? I would say there are 5 and I think it makes > sense for MPI_FILE_GET_INFO to return all 5 (key,value) pairs. > > Two more specific points - > > 1) I would expect that if at MPI_FILE_OPEN the implementation is > given non-default hints ("A","yes") and ("B","no") and then at > MPI_FILE_INFO_SET is given ("B","yes") the net effect is that hint > "A" remains as set and hint "B" is altered (if possible). If there > is a hint "C" which has never been mentioned it will have received a > default value at MPI_FILE_OPEN and the MPI_FILE_INFO_SET which does > not mention "C" will leave that default unchanged. > > Is the "clarification" saying hint "A" must return to default when > MPI_FILE_INFO_SET fails to mention it? If that is the intent then I > need to be convinced. If we decide this is to be blessed then we > probably need to say that any use of MPI_FILE_SET_INFO must first > call MPI_FILE_GET_INFO, tweek the INFO it gets back from > MPI_FILE_GET_INFO and pass that to MPI_FILE_SET_INFO to avoid > unexpected changes to the set of hints that is "in effect". > > 2) Since a hint is a hint, not a command, it can be rejected. It is > possible that some hint can be honored at MPI_FILE_OPEN but once it > has been honored, cannot be altered at reasonable cost. > > For example, maybe somebody's MPI_FILE_OPEN could accept a hint > ("buffer_size", "dynamic-64MB") meaning "start with a 64MB buffer > but be prepared to accept changes to buffer size". If the user has > set hint ("buffer_size", "64MB") at FILE_OPEN, the implelentation > would omit whatever synchs are needed to preserve the ability to > change on the fly. 
Passing ("buffer_size", "dynamic-16MB") to > MPI_FILE_SET_INFO could be honored if the user had chosen "dynamic" > at FILE_OPEN but would need to be ignored if he had not. > > For most implementations, a hint like "buffer_size" could not be > honored at all after the first file read or write had been done. > > Dick Treumann - MPI Team/TCEM > IBM Systems & Technology Group > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:24:51 AM: > > > This is a proposal for MPI 2.1, Ballot 4. > > > > I'm asking especially the implementors to check, whether > > this interpretation is implemented in their MPI implementations, > > or does not contradict to the existing implementation. > > > > This is a follow up to: > > MPI_File_set_info > > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > > errata/index.html > > with mail discussion not yet existing > > ___________________________________ > > > > Proposal: > > Add in MPI-2.0 Sect. 9.2.8, File Info, page 218, after line 18 the > > following sentences: > > > > With MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO the current setting > > of all hints used by the system to this open file is updated by > > the (key,value) pairs in the info argument. > > ___________________________________ > > Rationale for this clarification: > > This text was missing. It was not clear, whether a info handles > > in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO are updating or replacing > > the current set of used hints. > > The developers from ROMIO decided to update the current set of used hints. > > Therefore, this behavior should be the expected behavior of a majority > > of users. > > ___________________________________ > > > > Best regards > > Rolf > > > > > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > > _______________________________________________ > > mpi-21 mailing list > > mpi-21_at_[hidden] > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From treumann at [hidden] Thu Jan 31 13:13:17 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 14:13:17 -0500 Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc In-Reply-To: Message-ID: See MPI_IS_THREAD_MAIN on page 198. The definition of "main thread" is there. Maybepage 198 is not the best place for a definition of a term that is used on page 196 but the standard does provide one. I am reluctant to mention OpenMP in the context of defining MPI_THREAD_FUNNELED because then we would need to discuss other ways an application could become multi threaded too. Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 12:58:59 PM: > What is the definition of a "main thread" ? The OpenMP example is > still vague. 
> > I think we should clarify what we expect the "main thread" to be. From > my perspective, this "main thread" is the one that called the > MPI_Init_thread function, as the MPI library is then allowed to attach > some kind of private key(s) to it (pthread_key_create). > > Thanks, > george. > > On Jan 31, 2008, at 11:09 AM, Rolf Rabenseifner wrote: > > > Sorry Rich, > > > > but MPI 2.0 has defined "main thread", and not less! > > > > The open question was only, who is responsible for guaranteing this. > > > > Dicks text is fine: > >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the > >>> application > >>> must insure that only the main thread makes MPI calls. > > > > My first proposal > >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only > >>>>>>> the main thread will make MPI calls (all MPI calls are > >>>>>>> "funneled" > >>>>>>> to the main thread, e.g., by using the OpenMP directive > >>>>>>> "master" > >>>>>>> in the application program). > > has the advantage, that "main thread" is defined by refering to > > the OpenMP-Standard where "OpenMP master thread" is defined (and not > > "main"). > > > > We can combine this to: > > MPI_THREAD_FUNNELED The process may be multi-threaded, but the > > application > > must insure that only the main thread makes MPI calls, e.g., by > > using the > > OpenMP directive "master". > > > > (This clearly tells that OpenMP single directive is not enough.) > > > > Best regards > > Rolf > > > > > > On Thu, 31 Jan 2008 10:50:35 -0500 > > Richard Graham wrote: > >> Why restrict this to a standard specified thread (main thread), why > >> not word > >> it > >> as a singe thread, and let the app decide what thread this is, > >> based on > >> what > >> ever criteria it wants to use to select this thread ? > >> > >> Rich > >> > >> > >> On 1/31/08 10:27 AM, "Richard Treumann" wrote: > >> > >>> How about:: > >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the > >>> application > >>> must insure that only the main thread makes MPI calls. > >>> > >>> > >>> Dick Treumann - MPI Team/TCEM > >>> IBM Systems & Technology Group > >>> Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > >>> Tele (845) 433-7846 Fax (845) 433-8363 > >>> > >>> > >>> mpi-21-bounces_at_[hidden] wrote on 01/31/2008 09:44:08 AM: > >>> > >>>>> A simpler change which would seem to achieve the desired > >>>>> clarification > >>>>> would be :- > >>>>> > >>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but > >>>>> only the > >>>>> main > >>>>> thread is allowed to make MPI calls. > >>>>> > >>>>> (and you could add > >>>>> If other threads make MPI calls the behavior is undefined. > >>>>> if you want to be verbose about it). > >>>>> > >>>>> -- Jim > >>>>> > >>>>> James Cownie > >>>>> SSG/DPD/PAT > >>>>> Tel: +44 117 9071438 > >>>>> > >>>>> > >>>>> > >>>>> > >>>>>>> -----Original Message----- > >>>>>>> From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden] > >>>>>>> ] > >>>>> On > >>>>>>> Behalf Of Rolf Rabenseifner > >>>>>>> Sent: 31 January 2008 14:31 > >>>>>>> To: mpi-21_at_[hidden] > >>>>>>> Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: > >>>>>>> Attending, > >>>>>>> topics, etc > >>>>>>> > >>>>>>> This is a proposal for MPI 2.1, Ballot 4. > >>>>>>> > >>>>>>> I'm asking especially > >>>>>>> Greg Lindahl, > >>>>>>> the participants of the email-discussion in 2007, to review this > >>>>> proposal. > >>>>>>> > >>>>>>> This is a follow up to: > >>>>>>> Which thread is the funneled thread? 
> >>>>>>> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > >>>>>>> errata/index.html > >>>>>>> with mail discussion in > >>>>>>> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > >>>>>>> errata/discuss/funneled/ > >>>>>>> ___________________________________ > >>>>>>> > >>>>>>> Proposal: > >>>>>>> MPI-2.0 Sect. 8.7.3, MPI_Init_thread, page 196, lines 25-26 > >>>>>>> read: > >>>>>>> > >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only > >>>>>>> the main thread will make MPI calls (all MPI calls are > >>>>>>> "funneled" > >>>>>>> to the main thread). > >>>>>>> > >>>>>>> but should read: > >>>>>>> > >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only > >>>>>>> the main thread will make MPI calls (all MPI calls are > >>>>>>> "funneled" > >>>>>>> to the main thread, e.g., by using the OpenMP directive > >>>>>>> "master" > >>>>>>> in the application program). > >>>>>>> ___________________________________ > >>>>>>> Rationale for this clarification from the email from Greg > >>>>>>> Lindahl: > >>>>>>> The existing document doesn't make it clear that > >>>>>>> the MPI user has to funnel the calls to the main thread; > >>>>>>> it's not the job of the MPI library. I have seen multiple > >>>>>>> MPI users confused by this issue, and when I first read > >>>>>>> this section, I was confused by it, too. > >>>>>>> ___________________________________ > >>>>>>> > >>>>>>> > >>>>>>> Best regards > >>>>>>> Rolf > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > >>>>>>> High Performance Computing Center (HLRS) . phone + > >>>>>>> +49(0)711/685-65530 > >>>>>>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / > >>>>>>> 685-65832 > >>>>>>> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > >>>>>>> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: > >>>>>>> Allmandring 30) > >>>>>>> _______________________________________________ > >>>>>>> mpi-21 mailing list > >>>>>>> mpi-21_at_[hidden] > >>>>>>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > >>>>> --------------------------------------------------------------------- > >>>>> Intel Corporation (UK) Limited > >>>>> Registered No. 1134945 (England) > >>>>> Registered Office: Pipers Way, Swindon SN3 1RJ > >>>>> VAT No: 860 2173 47 > >>>>> > >>>>> This e-mail and any attachments may contain confidential > >>>>> material for > >>>>> the sole use of the intended recipient(s). Any review or > >>>>> distribution > >>>>> by others is strictly prohibited. If you are not the intended > >>>>> recipient, please contact the sender and delete all copies. > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> mpi-21 mailing list > >>>>> mpi-21_at_[hidden] > >>>>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > >>> > >>> > >>> _______________________________________________ > >>> mpi-21 mailing list > >>> mpi-21_at_[hidden] > >>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > >> > >> > > > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > > Nobelstr. 19, D-70550 Stuttgart, Germany . 
(Office: Allmandring 30) > > _______________________________________________ > > mpi-21 mailing list > > mpi-21_at_[hidden] > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > [attachment "smime.p7s" deleted by Richard > Treumann/Poughkeepsie/IBM] _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From treumann at [hidden] Thu Jan 31 13:16:20 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 14:16:20 -0500 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Jim The sense I get from the phrase "the behavior is undefined" is that defining the behavior is beyond the scope of the standard. The MPI standard does have some predefined keys and requires an implementation that supports any predefined key to support it as described by the standard. That can lead to one place in the standard saying "the behavior is undefined" and another saying "here is the definition of the behavior". Maybe I am reading more than others into the phrase "the behavior is undefined" but it does have this strong implication in my mind. How is this? An implementation must support info objects as caches for arbitrary (key, value) pairs, regardless of whether it recognizes the key. Each function that takes hints in the form of an MPI_Info must be prepared to ignore any key it does not recognize. This description of info objects does not attempt to define how a particular function should react if it recognizes a key but not the associated value. Note: I also changed MPI function to simply function because with this free-form approach to the info object, it seems to me a third-party library that is intended to work as part of an MPI program may want to use MPI_Info objects too. If someone authors a parallel math library and wants the initialization routine to look like: init_pmath(MPI_Info info), why not? They should understand that init_pmath must ignore keys it does not recognize even if it is not an MPI_ routine. Dick Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 09:39:52 AM: > As you are saying, there are two different classes of errors here. > > 1) Keys which are not understood and need to be ignored by functions > which don’t grok them (“JIMS_SECRET_TAG”,”99”) > 2) Keys which are understood by a function, but with a value which > is not (“buffer_size”, “Hello”) > > I think allowing the second type to have undefined behavior is the > right thing to do, since it’s the most general. > If your implementation wants to define the behavior of some out-of- > range values, that’s fine and doesn’t make you non-conforming, it > just means you defined the previously undefined behavior for some > set of values. > > Having that undefined-ness explicit here (in one central place) > seems to make sense (if only because it may be omitted in one of the > other places where it should appear). > > My addition does not alter the existing change which guarantees case > 1, it’s only concerned with case 2.
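A minimal C sketch of Dick's hypothetical init_pmath routine; the key name "pmath_block_size" and the default value are invented for illustration. The routine reads the one key it recognizes and ignores everything else cached in the info object:

#include <mpi.h>
#include <stdlib.h>

/* Sketch: a non-MPI, layered-library routine consuming an MPI_Info.
   Unrecognized keys such as "JIMS_SECRET_TAG" are simply ignored. */
int init_pmath(MPI_Info info)
{
    char value[64];
    int flag = 0;

    if (info != MPI_INFO_NULL)
        MPI_Info_get(info, "pmath_block_size", 63, value, &flag);
    return flag ? atoi(value) : 512;   /* fall back to a default */
}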
> -- Jim > > James Cownie > SSG/DPD/PAT > Tel: +44 117 9071438 > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > On Behalf Of Richard Treumann > Sent: 31 January 2008 14:20 > To: Mailing list for discussion of MPI 2.1 > Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation > > Jim - > > I was taking the view that the description of what to do for a > recognized key but dubious value belongs to the function that > recognizes the specific key. For example if MPI_File_open accepts a > "buffer_size" hint with range "32K" to "16M" we may want to define > the behavior of hints that are out of range. > > Once we say an info can have arbitrary keys we need to state that > every info consumer must be prepared to ignore keys it does not > recognize because we have made unrecognizable keys legitimate. > > Dick > Dick Treumann - MPI Team/TCEM > IBM Systems & Technology Group > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:43:04 AM: > > > However, you have apparently lost the liberty to have undefined > > behavior which was there in the previous version. > > > > Maybe you should keep that, something like > > An implementation must support info objects as caches for arbitrary > > (key, value) pairs, regardless of whether it recognizes the keys. > > Each MPI function which takes hints in the form of an MPI_Info must > > be prepared to ignore any key it does not recognize. However if a > > function recognizes a key but not the associated value, then the > > behavior is undefined. > > (Modifications in italics) > > -- Jim > > > > James Cownie > > SSG/DPD/PAT > > Tel: +44 117 9071438 > > > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > > On Behalf Of Richard Treumann > > Sent: 31 January 2008 13:29 > > To: Mailing list for discussion of MPI 2.1 > > Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation > > > > Your wording works for me Rolf. -- Thanks > > > > > > Dick Treumann - MPI Team/TCEM > > IBM Systems & Technology Group > > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > > Tele (845) 433-7846 Fax (845) 433-8363 > > > > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 05:25:46 AM: > > > > > I try to summarize all 3 replies in one proposal: > > > > > > ___________________________________ > > > > > > Proposal: > > > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: > > > If a function does not recognize a key, > > > it will ignore it, unless otherwise specified. > > > If an implementation recognizes a key but does not recognize > > > the format of the corresponding value, the result is undefined. > > > but should read: > > > An implementation must support info objects as caches for arbitrary > > > (key, value) pairs, regardless of whether it recognizes the pairs. > > > Each MPI function which takes hints in the form of an MPI_Info must > > > be prepared to ignore any key it does not recognize. > > > > > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new > > > paragraph: > > > Advice to implementors. 
> > > Although in MPI functions that take hints in form of an MPI_Info > > > (e.g., in process creation and management, one-sided communication, > > > or parallel file I/O), an implementation must be prepared to ignore > > > keys that it does not recognize, for the purpose of > MPI_INFO_GET_NKEYS, > > > MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the > > > implementation must retain all (key,value) pairs so that layered > > > functionality can also use the Info object. > > > (End of advice to implementors.) > > > _____________________________ > > > Rationale for this clarification: > > > > > > The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, > > > MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs > > > that are not recognized in routines in other chapters that > > > take hints with info arguments. > > > The proposed clarification is necessary when we assume, that > > > layered implementation of parts of the MPI-2 standard should > > > be possible and may use the MPI_Info objects for their needs. > > > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. > > > ___________________________________ > > > > > > Bronis, for me, your wording "an MPI implementation may restrict" was > > > in conflict with the rest of the advice. I hope the formulation above > > > is also okay. It is based on the new wording from you and Dick in first > > > part of the proposal. > > > > > > Best regards > > > Rolf > > > > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > > > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > > > _______________________________________________ > > > mpi-21 mailing list > > > mpi-21_at_[hidden] > > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > --------------------------------------------------------------------- > > Intel Corporation (UK) Limited > > Registered No. 1134945 (England) > > Registered Office: Pipers Way, Swindon SN3 1RJ > > VAT No: 860 2173 47 > > > > This e-mail and any attachments may contain confidential material for > > the sole use of the intended recipient(s). Any review or distribution > > by others is strictly prohibited. If you are not the intended > > recipient, please contact the sender and delete all copies. > > _______________________________________________ > > mpi-21 mailing list > > mpi-21_at_[hidden] > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > --------------------------------------------------------------------- > Intel Corporation (UK) Limited > Registered No. 1134945 (England) > Registered Office: Pipers Way, Swindon SN3 1RJ > VAT No: 860 2173 47 > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... 
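A minimal C sketch of what the proposed advice guarantees to layered code: an info object retains every (key,value) pair it is given, even pairs that no MPI function recognizes, so MPI_INFO_GET_NKEYS and the other accessors can hand them on (the key name is invented for illustration):

#include <mpi.h>
#include <assert.h>

/* Sketch: the unrecognized pair must still be cached in the object. */
void info_retention_demo(void)
{
    MPI_Info info;
    int nkeys;

    MPI_Info_create(&info);
    MPI_Info_set(info, "mylib_cache_policy", "write-back");
    MPI_Info_get_nkeys(info, &nkeys);
    assert(nkeys == 1);
    MPI_Info_free(&info);
}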
URL: From thakur at [hidden] Thu Jan 31 13:59:24 2008 From: thakur at [hidden] (Rajeev Thakur) Date: Thu, 31 Jan 2008 13:59:24 -0600 Subject: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement In-Reply-To: Message-ID: <00c901c86443$c72c8830$860add8c@mcs.anl.gov> Looks ok. Maybe use "specify" instead of "mention". When an info object that specifies a subset of valid hints is passed to MPI_FILE_SET_VIEW or MPI_FILE_SET_INFO, there will be no effect on previously set or defaulted hints that the info does not specify. Rajeev _____ From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] On Behalf Of Richard Treumann Sent: Thursday, January 31, 2008 1:04 PM To: Mailing list for discussion of MPI 2.1 Subject: Re: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement Rajeev I think you just agreed with the interpretation I advocated. Neither the proposal or rationale made this at all clear to me. How about? Proposal: Add in MPI-2.0 Sect. 9.2.8, File Info, page 218, after line 18 the following sentences: When an info object that mentions a subset of valid hints is passed to MPI_FILE_SET_VIEW or MPI_FILE_SET_INFO, there will be no effect on previously set or defaulted hints that the info does not mention. ___________________________________ Rationale for this clarification: This text was missing. It was not clear, whether an info object in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO was intended to replace only the mentioned hints or was intended to substitute a complete new set of hints for the prior set. ___________________________________ Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 12:29:41 PM: > The intent is that if the user calls MPI_File_set_info (or > MPI_File_set_view) twice, the 2nd call will only update (if > possible) the key-vals passed in the 2nd call; others are > unmodified. If the 2nd call passes MPI_INFO_NULL, nothing will > change -- it won't nullify previously passed hints. > > Rajeev > > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > On Behalf Of Richard Treumann > Sent: Thursday, January 31, 2008 9:23 AM > To: Mailing list for discussion of MPI 2.1 > Subject: Re: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement > I think we have an overall ambiguity about what the "current set of > hints" is. This ambiguity is evident in the question about what > MPI_FILE_INFO_GET returns and in this discussion too. If an > implementation supports 5 file hints then it must select a value for > each of these hints an MPI_FILE_OPEN. If there is an MPI_Info that > stipulates 2 of the hints then how many hints are in the "current > set of hints"? 2 or 5? I would say there are 5 and I think it makes > sense for MPI_FILE_GET_INFO to return all 5 (key,value) pairs. > > Two more specific points - > > 1) I would expect that if at MPI_FILE_OPEN the implementation is > given non-default hints ("A","yes") and ("B","no") and then at > MPI_FILE_INFO_SET is given ("B","yes") the net effect is that hint > "A" remains as set and hint "B" is altered (if possible). If there > is a hint "C" which has never been mentioned it will have received a > default value at MPI_FILE_OPEN and the MPI_FILE_INFO_SET which does > not mention "C" will leave that default unchanged. > > Is the "clarification" saying hint "A" must return to default when > MPI_FILE_INFO_SET fails to mention it? 
If that is the intent then I > need to be convinced. If we decide this is to be blessed then we > probably need to say that any use of MPI_FILE_SET_INFO must first > call MPI_FILE_GET_INFO, tweek the INFO it gets back from > MPI_FILE_GET_INFO and pass that to MPI_FILE_SET_INFO to avoid > unexpected changes to the set of hints that is "in effect". > > 2) Since a hint is a hint, not a command, it can be rejected. It is > possible that some hint can be honored at MPI_FILE_OPEN but once it > has been honored, cannot be altered at reasonable cost. > > For example, maybe somebody's MPI_FILE_OPEN could accept a hint > ("buffer_size", "dynamic-64MB") meaning "start with a 64MB buffer > but be prepared to accept changes to buffer size". If the user has > set hint ("buffer_size", "64MB") at FILE_OPEN, the implelentation > would omit whatever synchs are needed to preserve the ability to > change on the fly. Passing ("buffer_size", "dynamic-16MB") to > MPI_FILE_SET_INFO could be honored if the user had chosen "dynamic" > at FILE_OPEN but would need to be ignored if he had not. > > For most implementations, a hint like "buffer_size" could not be > honored at all after the first file read or write had been done. > > Dick Treumann - MPI Team/TCEM > IBM Systems & Technology Group > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:24:51 AM: > > > This is a proposal for MPI 2.1, Ballot 4. > > > > I'm asking especially the implementors to check, whether > > this interpretation is implemented in their MPI implementations, > > or does not contradict to the existing implementation. > > > > This is a follow up to: > > MPI_File_set_info > > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > > errata/index.html > > with mail discussion not yet existing > > ___________________________________ > > > > Proposal: > > Add in MPI-2.0 Sect. 9.2.8, File Info, page 218, after line 18 the > > following sentences: > > > > With MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO the current setting > > of all hints used by the system to this open file is updated by > > the (key,value) pairs in the info argument. > > ___________________________________ > > Rationale for this clarification: > > This text was missing. It was not clear, whether a info handles > > in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO are updating or replacing > > the current set of used hints. > > The developers from ROMIO decided to update the current set of used hints. > > Therefore, this behavior should be the expected behavior of a majority > > of users. > > ___________________________________ > > > > Best regards > > Rolf > > > > > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > > _______________________________________________ > > mpi-21 mailing list > > mpi-21_at_[hidden] > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... 
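The "current set of hints" being discussed can be inspected with the standard accessors; whether an implementation reports only the hints the user set or all hints it supports is exactly the ambiguity at issue. A minimal C sketch:

#include <mpi.h>
#include <stdio.h>

/* Sketch: enumerate the (key,value) pairs that MPI_File_get_info
   reports as the hints actually in effect for an open file. */
void dump_file_hints(MPI_File fh)
{
    MPI_Info info;
    int nkeys, i, flag;
    char key[MPI_MAX_INFO_KEY + 1];
    char value[MPI_MAX_INFO_VAL + 1];

    MPI_File_get_info(fh, &info);   /* returns a new info object */
    MPI_Info_get_nkeys(info, &nkeys);
    for (i = 0; i < nkeys; i++) {
        MPI_Info_get_nthkey(info, i, key);
        MPI_Info_get(info, key, MPI_MAX_INFO_VAL, value, &flag);
        if (flag)
            printf("hint %s = %s\n", key, value);
    }
    MPI_Info_free(&info);   /* caller must free the returned object */
}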
URL: From thakur at [hidden] Thu Jan 31 14:19:18 2008 From: thakur at [hidden] (Rajeev Thakur) Date: Thu, 31 Jan 2008 14:19:18 -0600 Subject: [mpi-21] Two MPI I/O questions In-Reply-To: Message-ID: <00da01c86446$8ef3f4a0$860add8c@mcs.anl.gov> I don't think any clarification is needed for the second topic. Rajeev > -----Original Message----- > From: Rolf Rabenseifner [mailto:rabenseifner_at_[hidden]] > Sent: Tuesday, January 29, 2008 6:22 AM > To: mpi-21_at_[hidden] > Cc: Leonard F. Wisniewski; Rajeev Thakur > Subject: Re: Two MPI I/O questions > > Mainly to Leonard Wisniewski and Rajeev Thakur > who have contributed to this mail-discussion thread. > > This is a follow up to: > Shared File Pointers > in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/sharedfile/ > > and here only to the second topic in the mails. > (The first topic is handled in the thread > "MPI C++ Constants conflict with stdio") > > _________________________________________ > > If I understand correctly, then a clarification is not needed > in the MPI standard. > > If somebody wants a clarification to be included into the standard > and therefore in Ballot 4, please send me your wording > with the page and line references included. > > If all agree that no clarification is needed, then I would finish > this discussion-thread. > > Best regards > Rolf > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > > From rabenseifner at [hidden] Thu Jan 31 14:26:07 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 21:26:07 +0100 Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc In-Reply-To: Message-ID: Dick, I fully agree. I withdraw my proposals with OpenMP and come back to your text, adding the forward reference to MPI_IS_THREAD_MAIN: >> >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the >> >>> application >> >>> must insure that only the main thread makes MPI calls (for the definition of main thread, see MPI_IS_THREAD_MAIN). By the way, this clarification is independent of any additional level of thread support that may be specified in further MPI versions. Best regards Rolf On Thu, 31 Jan 2008 14:13:17 -0500 Richard Treumann wrote: >See MPI_IS_THREAD_MAIN on page 198. The definition of "main thread" is >there. Maybe page 198 is not the best place for a definition of a term that >is used on page 196 but the standard does provide one. > >I am reluctant to mention OpenMP in the context of defining >MPI_THREAD_FUNNELED because then we would need to discuss other ways an >application could become multi-threaded too. > >Dick Treumann - MPI Team/TCEM >IBM Systems & Technology Group >Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 >Tele (845) 433-7846 Fax (845) 433-8363 > > >mpi-21-bounces_at_[hidden] wrote on 01/31/2008 12:58:59 PM: > >> What is the definition of a "main thread"? The OpenMP example is >> still vague.
From >> my perspective, this "main thread" is the one that called the >> MPI_Init_thread function, as the MPI library is then allowed to attach >> some kind of private key(s) to it (pthread_key_create). >> >> Thanks, >> george. >> >> On Jan 31, 2008, at 11:09 AM, Rolf Rabenseifner wrote: >> >> > Sorry Rich, >> > >> > but MPI 2.0 has defined "main thread", and not less! >> > >> > The open question was only, who is responsible for guaranteing this. >> > >> > Dicks text is fine: >> >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the >> >>> application >> >>> must insure that only the main thread makes MPI calls. >> > >> > My first proposal >> >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only >> >>>>>>> the main thread will make MPI calls (all MPI calls are >> >>>>>>> "funneled" >> >>>>>>> to the main thread, e.g., by using the OpenMP directive >> >>>>>>> "master" >> >>>>>>> in the application program). >> > has the advantage, that "main thread" is defined by refering to >> > the OpenMP-Standard where "OpenMP master thread" is defined (and not >> > "main"). >> > >> > We can combine this to: >> > MPI_THREAD_FUNNELED The process may be multi-threaded, but the >> > application >> > must insure that only the main thread makes MPI calls, e.g., by >> > using the >> > OpenMP directive "master". >> > >> > (This clearly tells that OpenMP single directive is not enough.) >> > >> > Best regards >> > Rolf >> > >> > >> > On Thu, 31 Jan 2008 10:50:35 -0500 >> > Richard Graham wrote: >> >> Why restrict this to a standard specified thread (main thread), why >> >> not word >> >> it >> >> as a singe thread, and let the app decide what thread this is, >> >> based on >> >> what >> >> ever criteria it wants to use to select this thread ? >> >> >> >> Rich >> >> >> >> >> >> On 1/31/08 10:27 AM, "Richard Treumann" wrote: >> >> >> >>> How about:: >> >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the >> >>> application >> >>> must insure that only the main thread makes MPI calls. >> >>> >> >>> >> >>> Dick Treumann - MPI Team/TCEM >> >>> IBM Systems & Technology Group >> >>> Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 >> >>> Tele (845) 433-7846 Fax (845) 433-8363 >> >>> >> >>> >> >>> mpi-21-bounces_at_[hidden] wrote on 01/31/2008 09:44:08 AM: >> >>> >> >>>>> A simpler change which would seem to achieve the desired >> >>>>> clarification >> >>>>> would be :- >> >>>>> >> >>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but >> >>>>> only the >> >>>>> main >> >>>>> thread is allowed to make MPI calls. >> >>>>> >> >>>>> (and you could add >> >>>>> If other threads make MPI calls the behavior is undefined. >> >>>>> if you want to be verbose about it). >> >>>>> >> >>>>> -- Jim >> >>>>> >> >>>>> James Cownie >> >>>>> SSG/DPD/PAT >> >>>>> Tel: +44 117 9071438 >> >>>>> >> >>>>> >> >>>>> >> >>>>> >> >>>>>>> -----Original Message----- >> >>>>>>> From: mpi-21-bounces_at_[hidden] >[mailto:mpi-21-bounces_at_[hidden] >> >>>>>>> ] >> >>>>> On >> >>>>>>> Behalf Of Rolf Rabenseifner >> >>>>>>> Sent: 31 January 2008 14:31 >> >>>>>>> To: mpi-21_at_[hidden] >> >>>>>>> Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: >> >>>>>>> Attending, >> >>>>>>> topics, etc >> >>>>>>> >> >>>>>>> This is a proposal for MPI 2.1, Ballot 4. >> >>>>>>> >> >>>>>>> I'm asking especially >> >>>>>>> Greg Lindahl, >> >>>>>>> the participants of the email-discussion in 2007, to review this >> >>>>> proposal. 
>> >>>>>>> >> >>>>>>> This is a follow up to: >> >>>>>>> Which thread is the funneled thread? >> >>>>>>> in >http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- >> >>>>>>> errata/index.html >> >>>>>>> with mail discussion in >> >>>>>>> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- >> >>>>>>> errata/discuss/funneled/ >> >>>>>>> ___________________________________ >> >>>>>>> >> >>>>>>> Proposal: >> >>>>>>> MPI-2.0 Sect. 8.7.3, MPI_Init_thread, page 196, lines 25-26 >> >>>>>>> read: >> >>>>>>> >> >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only >> >>>>>>> the main thread will make MPI calls (all MPI calls are >> >>>>>>> "funneled" >> >>>>>>> to the main thread). >> >>>>>>> >> >>>>>>> but should read: >> >>>>>>> >> >>>>>>> MPI_THREAD_FUNNELED The process may be multi-threaded, but only >> >>>>>>> the main thread will make MPI calls (all MPI calls are >> >>>>>>> "funneled" >> >>>>>>> to the main thread, e.g., by using the OpenMP directive >> >>>>>>> "master" >> >>>>>>> in the application program). >> >>>>>>> ___________________________________ >> >>>>>>> Rationale for this clarification from the email from Greg >> >>>>>>> Lindahl: >> >>>>>>> The existing document doesn't make it clear that >> >>>>>>> the MPI user has to funnel the calls to the main thread; >> >>>>>>> it's not the job of the MPI library. I have seen multiple >> >>>>>>> MPI users confused by this issue, and when I first read >> >>>>>>> this section, I was confused by it, too. >> >>>>>>> ___________________________________ >> >>>>>>> >> >>>>>>> >> >>>>>>> Best regards >> >>>>>>> Rolf >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> Dr. Rolf Rabenseifner . . . . . . . . . .. email >rabenseifner_at_[hidden] >> >>>>>>> High Performance Computing Center (HLRS) . phone + >> >>>>>>> +49(0)711/685-65530 >> >>>>>>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / >> >>>>>>> 685-65832 >> >>>>>>> Head of Dpmt Parallel Computing . . . >www.hlrs.de/people/rabenseifner >> >>>>>>> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: >> >>>>>>> Allmandring 30) >> >>>>>>> _______________________________________________ >> >>>>>>> mpi-21 mailing list >> >>>>>>> mpi-21_at_[hidden] >> >>>>>>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 >> >>>>> >--------------------------------------------------------------------- >> >>>>> Intel Corporation (UK) Limited >> >>>>> Registered No. 1134945 (England) >> >>>>> Registered Office: Pipers Way, Swindon SN3 1RJ >> >>>>> VAT No: 860 2173 47 >> >>>>> >> >>>>> This e-mail and any attachments may contain confidential >> >>>>> material for >> >>>>> the sole use of the intended recipient(s). Any review or >> >>>>> distribution >> >>>>> by others is strictly prohibited. If you are not the intended >> >>>>> recipient, please contact the sender and delete all copies. >> >>>>> >> >>>>> >> >>>>> _______________________________________________ >> >>>>> mpi-21 mailing list >> >>>>> mpi-21_at_[hidden] >> >>>>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 >> >>> >> >>> >> >>> _______________________________________________ >> >>> mpi-21 mailing list >> >>> mpi-21_at_[hidden] >> >>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 >> >> >> >> >> > >> > >> > >> > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] >> > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 >> > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 >> > Head of Dpmt Parallel Computing . . . 
www.hlrs.de/people/rabenseifner >> > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) >> > _______________________________________________ >> > mpi-21 mailing list >> > mpi-21_at_[hidden] >> > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 >> >> [attachment "smime.p7s" deleted by Richard >> Treumann/Poughkeepsie/IBM] >_______________________________________________ >> mpi-21 mailing list >> mpi-21_at_[hidden] >> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From thakur at [hidden] Thu Jan 31 14:36:55 2008 From: thakur at [hidden] (Rajeev Thakur) Date: Thu, 31 Jan 2008 14:36:55 -0600 Subject: [mpi-21] Ballot 4 - RE: clarification text for MPI_Reduce_scatter In-Reply-To: Message-ID: <00db01c86449$04c0e600$860add8c@mcs.anl.gov> The only change needed here is that the following sentence on ln 19, pg 163, MPI 2.0 should be deleted: "Note that the area occupied by the input data may be either longer or shorter than the data filled by the output data." The sentence makes no sense because the input data can never be shorter than the output data. The output, determined by recvcounts[i], is a subset of the input. Rajeev > -----Original Message----- > From: Rolf Rabenseifner [mailto:rabenseifner_at_[hidden]] > Sent: Tuesday, January 29, 2008 4:44 AM > To: mpi-21_at_[hidden] > Cc: Rajeev Thakur > Subject: Ballot 4 - RE: clarification text for MPI_Reduce_scatter > > This is a follow-up for MPI 2.1, Ballot 4. > > I'm asking especially > Rajeev Thakur, > the participant of the email-discussion in 2002, to review > this proposal. > > I would close the following track without any proposed > clarification, because the text is already clear when the MPI-1.1 > part of MPI_Reduce_scatter is put in front of the MPI-2.0 part, > i.e., in the upcoming combined document MPI-2.1. > > This is a follow up to: > MPI_IN_PLACE for MPI_Reduce_scatter > in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/redscat/ > ___________________________________ > > The current text in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > should not be accepted, because it is a significant modification of > the MPI-2.0 standard and would break user code: > > Proposed text to replace lines 16-20, pg 163 > The "in place" option for intracommunicators is specified by > passing MPI_IN_PLACE in the sendbuf argument. In this case, on > each process, the input data is taken from recvbuf. Process i gets > the ith segment of the result, and it is stored at the location > corresponding to segment i in recvbuf. > ___________________________________ > > Best regards > Rolf > > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > >
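To make the quoted "in place" wording concrete, a minimal sketch (illustrative only, not ballot text), assuming one result element per process, i.e. recvcounts[i] = 1 for all i:

    /* Sketch of MPI_Reduce_scatter with MPI_IN_PLACE: the input is taken
     * from recvbuf, and process i receives segment i of the result at
     * the location of segment i in recvbuf. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int *buf = malloc(nprocs * sizeof(int));   /* full input vector */
        int *recvcounts = malloc(nprocs * sizeof(int));
        for (int i = 0; i < nprocs; i++) {
            buf[i] = rank + i;
            recvcounts[i] = 1;      /* segment i holds one element */
        }

        MPI_Reduce_scatter(MPI_IN_PLACE, buf, recvcounts, MPI_INT,
                           MPI_SUM, MPI_COMM_WORLD);

        /* segment 'rank' of the reduced vector is now at buf[rank] */
        printf("rank %d: result segment = %d\n", rank, buf[rank]);

        free(buf);
        free(recvcounts);
        MPI_Finalize();
        return 0;
    }

Here the input (nprocs elements) is necessarily at least as long as each process's output segment (one element), which is why the sentence Rajeev proposes to delete cannot describe a real case.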
From treumann at [hidden] Thu Jan 31 14:37:54 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 15:37:54 -0500 Subject: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement In-Reply-To: <00c901c86443$c72c8830$860add8c@mcs.anl.gov> Message-ID: I do like specify better - thx Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 02:59:24 PM: > Looks ok. Maybe use "specify" instead of "mention". > > When an info object that specifies a subset of valid hints > is passed to MPI_FILE_SET_VIEW or MPI_FILE_SET_INFO, there > will be no effect on previously set or defaulted hints that > the info does not specify. > > Rajeev > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > On Behalf Of Richard Treumann > Sent: Thursday, January 31, 2008 1:04 PM > To: Mailing list for discussion of MPI 2.1 > Subject: Re: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement > Rajeev > > I think you just agreed with the interpretation I advocated. Neither > the proposal nor the rationale made this at all clear to me. How about? > > Proposal: > Add in MPI-2.0 Sect. 9.2.8, File Info, page 218, after line 18 the > following sentences: > > When an info object that mentions a subset of valid hints > is passed to MPI_FILE_SET_VIEW or MPI_FILE_SET_INFO, there > will be no effect on previously set or defaulted hints that > the info does not mention. > > ___________________________________ > Rationale for this clarification: > This text was missing. It was not clear whether an info object > in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO was intended to > replace only the mentioned hints or was intended to substitute > a complete new set of hints for the prior set. > ___________________________________ > > Dick Treumann - MPI Team/TCEM > IBM Systems & Technology Group > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 12:29:41 PM: > > > The intent is that if the user calls MPI_File_set_info (or > > MPI_File_set_view) twice, the 2nd call will only update (if > > possible) the key-vals passed in the 2nd call; others are > > unmodified. If the 2nd call passes MPI_INFO_NULL, nothing will > > change -- it won't nullify previously passed hints. > > > > Rajeev > > > > > > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] > > On Behalf Of Richard Treumann > > Sent: Thursday, January 31, 2008 9:23 AM > > To: Mailing list for discussion of MPI 2.1 > > Subject: Re: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement > > I think we have an overall ambiguity about what the "current set of > > hints" is. This ambiguity is evident in the question about what > > MPI_FILE_INFO_GET returns and in this discussion too. If an > > implementation supports 5 file hints then it must select a value for > > each of these hints at MPI_FILE_OPEN. If there is an MPI_Info that > > stipulates 2 of the hints then how many hints are in the "current > > set of hints"? 2 or 5? I would say there are 5 and I think it makes > > sense for MPI_FILE_GET_INFO to return all 5 (key,value) pairs. 
> > > > Two more specific points - > > > > 1) I would expect that if at MPI_FILE_OPEN the implementation is > > given non-default hints ("A","yes") and ("B","no") and then at > > MPI_FILE_INFO_SET is given ("B","yes") the net effect is that hint > > "A" remains as set and hint "B" is altered (if possible). If there > > is a hint "C" which has never been mentioned it will have received a > > default value at MPI_FILE_OPEN and the MPI_FILE_INFO_SET which does > > not mention "C" will leave that default unchanged. > > > > Is the "clarification" saying hint "A" must return to default when > > MPI_FILE_INFO_SET fails to mention it? If that is the intent then I > > need to be convinced. If we decide this is to be blessed then we > > probably need to say that any use of MPI_FILE_SET_INFO must first > > call MPI_FILE_GET_INFO, tweak the INFO it gets back from > > MPI_FILE_GET_INFO and pass that to MPI_FILE_SET_INFO to avoid > > unexpected changes to the set of hints that is "in effect". > > > > 2) Since a hint is a hint, not a command, it can be rejected. It is > > possible that some hint can be honored at MPI_FILE_OPEN but once it > > has been honored, cannot be altered at reasonable cost. > > > > For example, maybe somebody's MPI_FILE_OPEN could accept a hint > > ("buffer_size", "dynamic-64MB") meaning "start with a 64MB buffer > > but be prepared to accept changes to buffer size". If the user has > > set hint ("buffer_size", "64MB") at FILE_OPEN, the implementation > > would omit whatever synchs are needed to preserve the ability to > > change on the fly. Passing ("buffer_size", "dynamic-16MB") to > > MPI_FILE_SET_INFO could be honored if the user had chosen "dynamic" > > at FILE_OPEN but would need to be ignored if he had not. > > > > For most implementations, a hint like "buffer_size" could not be > > honored at all after the first file read or write had been done. > > > > Dick Treumann - MPI Team/TCEM > > IBM Systems & Technology Group > > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > > Tele (845) 433-7846 Fax (845) 433-8363 > > > > > > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:24:51 AM: > > > > > This is a proposal for MPI 2.1, Ballot 4. > > > > > > I'm asking especially the implementors to check whether > > > this interpretation is implemented in their MPI implementations, > > > or does not contradict the existing implementation. > > > > > > This is a follow up to: > > > MPI_File_set_info > > > in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > > > errata/index.html > > > with mail discussion not yet existing > > > ___________________________________ > > > > > > Proposal: > > > Add in MPI-2.0 Sect. 9.2.8, File Info, page 218, after line 18 the > > > following sentences: > > > > > > With MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO the current setting > > > of all hints used by the system for this open file is updated by > > > the (key,value) pairs in the info argument. > > > ___________________________________ > > > Rationale for this clarification: > > > This text was missing. It was not clear whether the info handles > > > in MPI_FILE_SET_VIEW and MPI_FILE_SET_INFO update or replace > > > the current set of used hints. > > > The developers from ROMIO decided to update the current set of used hints. > > > Therefore, this behavior should be the expected behavior of a majority > > > of users. > > > ___________________________________ > > > > > > Best regards > > > Rolf > > > > > > > > > > > > > > > Dr. Rolf Rabenseifner . . . . . . . . . 
.. email rabenseifner_at_[hidden] > > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > > > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > > > _______________________________________________ > > > mpi-21 mailing list > > > mpi-21_at_[hidden] > > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > > _______________________________________________ > > mpi-21 mailing list > > mpi-21_at_[hidden] > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rabenseifner at [hidden] Thu Jan 31 14:35:14 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 21:35:14 +0100 Subject: [mpi-21] Ballot 4 - MPI_File_set_info update or replacement In-Reply-To: <00c901c86443$c72c8830$860add8c@mcs.anl.gov> Message-ID: Dick and Rajeev, that's fine. My word "updating" was not clear enough. But that is what I wanted to say. I'll put your text into Ballot 4. On Thu, 31 Jan 2008 13:59:24 -0600 "Rajeev Thakur" wrote: >Looks ok. Maybe use "specify" instead of "mention". > > When an info object that specifies a subset of valid hints > is passed to MPI_FILE_SET_VIEW or MPI_FILE_SET_INFO, there > will be no effect on previously set or defaulted hints that > the info does not specify. > >Rajeev > >[...]
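A minimal sketch of the agreed semantics (illustrative only, not ballot text; the hint keys are hypothetical, echoing the "A"/"B" example earlier in the thread):

    #include <mpi.h>

    /* assumes fh is an already open file handle */
    void update_hints(MPI_File fh)
    {
        MPI_Info info;

        /* first call: two hints are passed (honored if possible) */
        MPI_Info_create(&info);
        MPI_Info_set(info, "hint_a", "yes");
        MPI_Info_set(info, "hint_b", "no");
        MPI_File_set_info(fh, info);
        MPI_Info_free(&info);

        /* second call specifies only hint_b: hint_a keeps its previously
         * set value, and only hint_b is (possibly) updated */
        MPI_Info_create(&info);
        MPI_Info_set(info, "hint_b", "yes");
        MPI_File_set_info(fh, info);
        MPI_Info_free(&info);

        /* and MPI_INFO_NULL changes nothing -- it does not nullify
         * previously set or defaulted hints */
        MPI_File_set_info(fh, MPI_INFO_NULL);
    }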
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From rabenseifner at [hidden] Thu Jan 31 14:55:15 2008 From: rabenseifner at [hidden] (Rolf Rabenseifner) Date: Thu, 31 Jan 2008 21:55:15 +0100 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Do we mean "implementation defined" instead of "undefined"? On Thu, 31 Jan 2008 14:16:20 -0500 Richard Treumann wrote: >Jim > >The sense I get from the phrase "the behavior is undefined" is that >defining the behavior is beyond the scope of the standard. > >The MPI standard does have some predefined keys and requires an >implementation that supports any predefined key to support it as described >by the standard. That can lead to one place in the standard saying "the >behavior is undefined" and another saying "here is the definition of the >behavior". > >Maybe I am reading more than others into the phrase "the behavior is >undefined" but it does have this strong implication in my mind. > >How is this? > >An implementation must support info objects as caches for arbitrary >(key, value) pairs, regardless of whether it recognizes the key. >Each function that takes hints in the form of an MPI_Info must >be prepared to ignore any key it does not recognize. This description >of info objects does not attempt to define how a particular function >should react if it recognizes a key but not the associated value. > >note: >I also changed "MPI function" to simply "function" because with this free-form >approach to the info object, it seems to me a third-party library that is >intended to work as part of an MPI program may want to use MPI_Info objects >too. If someone authors a parallel math library and wants the >initialization routine to look like: > init_pmath( MPI_Info info) >why not? They should understand that init_pmath must ignore keys it does >not recognize even if it is not an MPI_ routine. > > > > Dick
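To illustrate such a layered-library use of info objects, a sketch only; init_pmath and the "pmath_" key prefix are hypothetical, as in Dick's example:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical third-party initialization routine: it scans the info
     * object for the keys it understands and silently ignores the rest. */
    void init_pmath(MPI_Info info)
    {
        int nkeys, flag;
        char key[MPI_MAX_INFO_KEY + 1], value[MPI_MAX_INFO_VAL + 1];

        if (info == MPI_INFO_NULL)
            return;
        MPI_Info_get_nkeys(info, &nkeys);
        for (int i = 0; i < nkeys; i++) {
            MPI_Info_get_nthkey(info, i, key);
            if (strncmp(key, "pmath_", 6) != 0)
                continue;                 /* not one of ours: ignore */
            MPI_Info_get(info, key, MPI_MAX_INFO_VAL, value, &flag);
            if (flag)
                printf("pmath hint %s = %s\n", key, value);
        }
    }

This only works if implementations retain (key,value) pairs they do not themselves recognize, which is exactly what the proposed advice to implementors requires.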
> >Dick Treumann - MPI Team/TCEM >IBM Systems & Technology Group >Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 >Tele (845) 433-7846 Fax (845) 433-8363 > > >mpi-21-bounces_at_[hidden] wrote on 01/31/2008 09:39:52 AM: > >> As you are saying, there are two different classes of errors here. >> >> 1) Keys which are not understood and need to be ignored by functions >> which don’t grok them (“JIMS_SECRET_TAG”,”99”) >> 2) Keys which are understood by a function, but with a value which >> is not (“buffer_size”, “Hello”) >> >> I think allowing the second type to have undefined behavior is the >> right thing to do, since it’s the most general. >> If your implementation wants to define the behavior of some out-of- >> range values, that’s fine and doesn’t make you non-conforming, it >> just means you defined the previously undefined behavior for some >> set of values. >> >> Having that undefined-ness explicit here (in one central place) >> seems to make sense (if only because it may be omitted in one of the >> other places where it should appear). >> >> My addition does not alter the existing change which guarantees case >> 1, it’s only concerned with case 2. >> -- Jim >> >> James Cownie >> SSG/DPD/PAT >> Tel: +44 117 9071438 >> >> From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] >> On Behalf Of Richard Treumann >> Sent: 31 January 2008 14:20 >> To: Mailing list for discussion of MPI 2.1 >> Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation >> >> Jim - >> >> I was taking the view that the description of what to do for a >> recognized key but dubious value belongs to the function that >> recognizes the specific key. For example if MPI_File_open accepts a >> "buffer_size" hint with range "32K" to "16M" we may want to define >> the behavior of hints that are out of range. >> >> Once we say an info can have arbitrary keys we need to state that >> every info consumer must be prepared to ignore keys it does not >> recognize because we have made unrecognizable keys legitimate. >> >> Dick >> Dick Treumann - MPI Team/TCEM >> IBM Systems & Technology Group >> Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 >> Tele (845) 433-7846 Fax (845) 433-8363 >> >> >> mpi-21-bounces_at_[hidden] wrote on 01/31/2008 08:43:04 AM: >> >> > However, you have apparently lost the liberty to have undefined >> > behavior which was there in the previous version. >> > >> > Maybe you should keep that, something like >> > An implementation must support info objects as caches for arbitrary >> > (key, value) pairs, regardless of whether it recognizes the keys. >> > Each MPI function which takes hints in the form of an MPI_Info must >> > be prepared to ignore any key it does not recognize. However if a >> > function recognizes a key but not the associated value, then the >> > behavior is undefined. >> > (Modifications in italics) >> > -- Jim >> > >> > James Cownie >> > SSG/DPD/PAT >> > Tel: +44 117 9071438 >> > >> > From: mpi-21-bounces_at_[hidden] [mailto:mpi-21-bounces_at_[hidden]] >> > On Behalf Of Richard Treumann >> > Sent: 31 January 2008 13:29 >> > To: Mailing list for discussion of MPI 2.1 >> > Subject: Re: [mpi-21] Ballot 4 - Re: Request for interpretation >> > >> > Your wording works for me Rolf. 
-- Thanks >> > >> > >> > Dick Treumann - MPI Team/TCEM >> > IBM Systems & Technology Group >> > Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 >> > Tele (845) 433-7846 Fax (845) 433-8363 >> > >> > >> > mpi-21-bounces_at_[hidden] wrote on 01/31/2008 05:25:46 AM: >> > >> > > I try to summarize all 3 replies in one proposal: >> > > >> > > ___________________________________ >> > > >> > > Proposal: >> > > MPI 2.0, Sect. 4.10 Info Objects, page 43, line 38-40 read: >> > > If a function does not recognize a key, >> > > it will ignore it, unless otherwise specified. >> > > If an implementation recognizes a key but does not recognize >> > > the format of the corresponding value, the result is undefined. >> > > but should read: >> > > An implementation must support info objects as caches for >arbitrary >> > > (key, value) pairs, regardless of whether it recognizes the pairs. >> > > Each MPI function which takes hints in the form of an MPI_Info >must >> > > be prepared to ignore any key it does not recognize. >> > > >> > > Add after MPI 2.0, Sect. 4.10 Info Objects, page 44, line 22 a new >> > > paragraph: >> > > Advice to implementors. >> > > Although in MPI functions that take hints in form of an MPI_Info >> > > (e.g., in process creation and management, one-sided >communication, >> > > or parallel file I/O), an implementation must be prepared to >ignore >> > > keys that it does not recognize, for the purpose of >> MPI_INFO_GET_NKEYS, >> > > MPI_INFO_GET_NTHKEY, MPI_INFO_GET_VALUELEN, and MPI_INFO_GET, the >> > > implementation must retain all (key,value) pairs so that layered >> > > functionality can also use the Info object. >> > > (End of advice to implementors.) >> > > _____________________________ >> > > Rationale for this clarification: >> > > >> > > The MPI-2.0 text allowed that also MPI_INFO_DELETE, MPI_INFO_SET, >> > > MPI_INFO_GET, and MPI_INFO_DUP could ignore (key,value) pairs >> > > that are not recognized in routines in other chapters that >> > > take hints with info arguments. >> > > The proposed clarification is necessary when we assume, that >> > > layered implementation of parts of the MPI-2 standard should >> > > be possible and may use the MPI_Info objects for their needs. >> > > This was a goal of the MPI-2 Forum and the MPI-2.0 specification. >> > > ___________________________________ >> > > >> > > Bronis, for me, your wording "an MPI implementation may restrict" was >> > > in conflict with the rest of the advice. I hope the formulation above >> > > is also okay. It is based on the new wording from you and Dick in >first >> > > part of the proposal. >> > > >> > > Best regards >> > > Rolf >> > > >> > > >> > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] >> > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 >> > > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 >> > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner >> > > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) >> > > _______________________________________________ >> > > mpi-21 mailing list >> > > mpi-21_at_[hidden] >> > > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 >> > --------------------------------------------------------------------- >> > Intel Corporation (UK) Limited >> > Registered No. 
1134945 (England) >> > Registered Office: Pipers Way, Swindon SN3 1RJ >> > VAT No: 860 2173 47 >> > >> > This e-mail and any attachments may contain confidential material for >> > the sole use of the intended recipient(s). Any review or distribution >> > by others is strictly prohibited. If you are not the intended >> > recipient, please contact the sender and delete all copies. >> > _______________________________________________ >> > mpi-21 mailing list >> > mpi-21_at_[hidden] >> > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 >> --------------------------------------------------------------------- >> Intel Corporation (UK) Limited >> Registered No. 1134945 (England) >> Registered Office: Pipers Way, Swindon SN3 1RJ >> VAT No: 860 2173 47 >> >> This e-mail and any attachments may contain confidential material for >> the sole use of the intended recipient(s). Any review or distribution >> by others is strictly prohibited. If you are not the intended >> recipient, please contact the sender and delete all copies. >> _______________________________________________ >> mpi-21 mailing list >> mpi-21_at_[hidden] >> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From thakur at [hidden] Thu Jan 31 15:06:50 2008 From: thakur at [hidden] (Rajeev Thakur) Date: Thu, 31 Jan 2008 15:06:50 -0600 Subject: [mpi-21] Ballot 4 - Re: MPI-2 thread safety and collectives In-Reply-To: Message-ID: <00ec01c8644d$32947a70$860add8c@mcs.anl.gov> Rolf, Karl's original mail in this thread asked a simpler question, namely, what counts as conflicting calls when using the same communicator. His scenario 1 is Thread 1: MPI_Allreduce(..., comm) Thread 2: MPI_File_open(..., comm, ...) His scenario 2 is Thread 1: MPI_Allreduce(..., MPI_SUM, comm) Thread 2: MPI_Allreduce(..., MPI_MAX, comm) I don't think there is any doubt about scenario 2 being conflicting. In my opinion, even scenario 1 is conflicting because they are 2 MPI collective calls explicitly on the same communicator (the file handle is not yet created). He is asking for some clarification on scenario 1. Your proposed advice to users is for collective calls on different objects (file handles, window objects) derived from the same communicator. (It is nonetheless useful to have it in addition.) Rajeev
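A sketch of scenario 1 made safe (illustrative only, not proposal text; the file name and the synchronization scheme are assumptions). The point is that both collectives must be issued in the same order on every process; a plain mutex would not be enough, since it could serialize the two calls in different orders on different processes:

    #include <mpi.h>
    #include <pthread.h>

    static int allreduce_done = 0;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

    static void *do_allreduce(void *arg)
    {
        int x = 1;
        MPI_Allreduce(MPI_IN_PLACE, &x, 1, MPI_INT, MPI_SUM,
                      MPI_COMM_WORLD);
        pthread_mutex_lock(&m);
        allreduce_done = 1;
        pthread_cond_signal(&c);
        pthread_mutex_unlock(&m);
        return NULL;
    }

    static void *do_file_open(void *arg)
    {
        MPI_File fh;
        /* wait until the MPI_Allreduce has been issued, so that the two
         * collectives on MPI_COMM_WORLD are ordered identically on all
         * processes */
        pthread_mutex_lock(&m);
        while (!allreduce_done)
            pthread_cond_wait(&c, &m);
        pthread_mutex_unlock(&m);
        MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_close(&fh);
        return NULL;
    }

    int main(int argc, char *argv[])
    {
        int provided;
        pthread_t ta, tb;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            MPI_Abort(MPI_COMM_WORLD, 1);
        pthread_create(&ta, NULL, do_allreduce, NULL);
        pthread_create(&tb, NULL, do_file_open, NULL);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        MPI_Finalize();
        return 0;
    }

Without the condition-variable ordering, the two calls would be exactly the conflicting scenario 1 above.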
> -----Original Message----- > From: owner-mpi-21_at_[hidden] > [mailto:owner-mpi-21_at_[hidden]] On Behalf Of Rolf Rabenseifner > Sent: Monday, January 21, 2008 11:26 AM > To: mpi-21_at_[hidden] > Cc: mpi-21_at_[hidden] > Subject: [mpi-21] Ballot 4 - Re: MPI-2 thread safety and collectives > > This is a proposal for MPI 2.1, Ballot 4. > > This is a follow up to: > Thread safety and collective communication > in > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/index.html > with mail discussion in > > http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi- > errata/discuss/thread-safety/index.htm > > After checking the e-mails and looking at > - MPI-2 8.7.2 page 195 lines 6-9 > Collective calls Matching of collective calls on a communicator, > window, or file handle is done according to the order in > which the > calls are issued at each process. If concurrent threads > issue such > calls on the same communicator, window or file handle, it > is up to > the user to make sure the calls are correctly ordered, using > interthread synchronization. > - MPI-2 6.2.1 Window Creation, page 110, lines 10-12: > The call returns an opaque object that represents the group of > processes that own and access the set of windows, and the > attributes > of each window, as specified by the initialization call. > - MPI-2 9.2. Opening a File, page 211, line 46 - page 212, line 2: > Note that the communicator comm is unaffected by MPI_FILE_OPEN > and continues to be usable in all MPI routines (e.g., MPI_SEND). > Furthermore, the use of comm will not interfere with I/O behavior. > it seems that the standard should be clarified. > > > Proposal for MPI 2.1, Ballot 4: > ------------------------------- > Add new paragraphs after MPI-2, 8.7.2 page 195 line 9 (the > end of the clarification on "Collective calls"): > > Advice to users. > With three concurrent threads in each MPI process of a > communicator comm, > it is allowed that thread A in each MPI process calls a collective > operation on comm, thread B calls a file operation on an existing > filehandle that was formerly opened on comm, and thread C > invokes one-sided > operations on an existing window handle that was also > formerly created > on comm. > (End of advice to users.) > > Rationale. > As already specified in MPI_FILE_OPEN and MPI_WIN_CREATE, a > file handle and > a window handle inherit only the group of processes of the > underlying > communicator, but not the communicator itself. Accesses to > communicators, > window handles and file handles cannot affect one another. > (End of rationale.) > > Advice to implementors. > If the implementation of file or window operations wants to > internally > use MPI communication then a duplicated communicator handle > may be cached > on the file or window handle. > (End of advice to implementors.) > ------------------------------- > > Reason: The emails have shown that the current MPI-2 text can > easily be misunderstood. > ------------------------------- > > Discussion should be done through the new mailing list > mpi-21_at_cs.uiuc.edu. > > I have sent out this mail with CC through the old general list > mpi-21_at_[hidden] > > Best regards > Rolf > > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) > > From treumann at [hidden] Thu Jan 31 15:41:09 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 16:41:09 -0500 Subject: [mpi-21] Ballot 4 - Re: Request for interpretation In-Reply-To: Message-ID: Not quite; for example the standard offers: { cb_buffer_size} (integer) {SAME}: This hint specifies the total buffer space that can be used for collective buffering on each target node, usually a multiple of cb_block_size. The standard says a program is "erroneous" if the value is not the same at all callers. That is not exactly the same as the standard defining behavior for MPI_FILE_OPEN with a valid key and garbage value but pretty close. If MPI 3 adds new defined keys for new specific functions it may or may not want to go into defining behavior for some kinds of bad input. Dick
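For reference, a small sketch of how such a predefined hint is supplied (illustrative only; cb_buffer_size is one of the reserved I/O hints in MPI-2, and its {SAME} semantics require every process to pass the same value, while the file name here is an assumption):

    #include <mpi.h>

    void open_with_hint(MPI_File *fh)
    {
        MPI_Info info;

        MPI_Info_create(&info);
        /* 16 MB collective-buffering space; must be identical on all
         * ranks, and the implementation is still free to ignore it */
        MPI_Info_set(info, "cb_buffer_size", "16777216");
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, fh);
        MPI_Info_free(&info);
    }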
Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 03:55:15 PM: > Do we mean "implementation defined" instead of "undefined"? > > [...]
* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lindahl at [hidden] Thu Jan 31 16:56:55 2008 From: lindahl at [hidden] (Greg Lindahl) Date: Thu, 31 Jan 2008 14:56:55 -0800 Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc In-Reply-To: Message-ID: <20080131225655.GC840@bx9.net> On Thu, Jan 31, 2008 at 09:26:07PM +0100, Rolf Rabenseifner wrote: > Dick, I fully agree. I withdraw my proposals with OpenMP and come > back to your text, adding the forward reference to MPI_IS_THREAD_MAIN: > > >> >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the > >> >>> application > >> >>> must insure that only the main thread makes MPI calls > (for the definition of main thread, see MPI_IS_THREAD_MAIN). > > By the way, this clarification is independent of any > additional level of thread support that may be specified in > further MPI versions. I also agree with this version of the clarification. BTW, it should be "ensure", not "insure". The current MPI 2.0 doc has "ensure" 28 times and "insure" 0 times. -- greg From treumann at [hidden] Thu Jan 31 18:19:21 2008 From: treumann at [hidden] (Richard Treumann) Date: Thu, 31 Jan 2008 19:19:21 -0500 Subject: [mpi-21] Ballot 4 - MPI_THREAD_FUNNELED - was Re: Attending, topics, etc In-Reply-To: <20080131225655.GC840@bx9.net> Message-ID: Greg - you are correct and I am a bit embarrassed by my word choice. To "insure" can mean "to join with others to distribute or smooth risk". To "ensure" is to "make certain it will be so". Thanks Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-21-bounces_at_[hidden] wrote on 01/31/2008 05:56:55 PM: > On Thu, Jan 31, 2008 at 09:26:07PM +0100, Rolf Rabenseifner wrote: > > > Dick, I fully agree. I withdraw my proposals with OpenMP and come > > back to your text, adding the forward reference to MPI_IS_THREAD_MAIN: > > > > >> >>> MPI_THREAD_FUNNELED The process may be multi-threaded, but the > > >> >>> application > > >> >>> must insure that only the main thread makes MPI calls > > (for the definition of main thread, see MPI_IS_THREAD_MAIN). > > > > By the way, this clarification is independent of any > > additional level of thread support that may be specified in > > further MPI versions. > > I also agree with this version of the clarification. > > BTW, it should be "ensure", not "insure". The current MPI 2.0 doc has > "ensure" 28 times and "insure" 0 times. > > -- greg > > _______________________________________________ > mpi-21 mailing list > mpi-21_at_[hidden] > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21 * -------------- next part -------------- An HTML attachment was scrubbed... URL: