From rabenseifner at [hidden] Sat Mar 1 09:53:54 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Sat, 01 Mar 2008 16:53:54 +0100
Subject: [Mpi-22] [mpi-forum] Error in RMA examples?
In-Reply-To:
Message-ID:

I'm sure that p is the number of processes in this example and therefore dividing by p is correct, but j always without +1 (because j is a process number) and k always with +1 (because indices start with 1 in Fortran).

According to the official PostScript version of the standard:
Page 117 lines 44-45: division by p instead of m is necessary. Line 44 without +1, line 45 with +1.
Page 118 line 45: with +1.
Page 120 line 33: with +1.
About page 117, lines 47-48, I'm not sure.

I propose to put this into MPI 2.2, because this example should be tested before correcting it.

Best regards
Rolf

On Fri, 8 Feb 2008 12:06:26 -0300 "Lisandro Dalcin" wrote:
>Can someone take a look at the second example in:
>
>http://www.mpi-forum.org/docs/mpi-20-html/node124.htm
>
>and also to the example in:
>
>http://www.mpi-forum.org/docs/mpi-20-html/node125.htm
>
>I believe in both cases the lines saying
>
> j = map(i)/p
> k = MOD(map(i),p)
>
>should read
>
> j = map(i)/m
> k = MOD(map(i),m)
>
>Additionally, in the first example again in:
>
>http://www.mpi-forum.org/docs/mpi-20-html/node124.htm
>
>in the part where origin and target indices are computed, there are
>two lines reading:
>
> oindex(total(j) + count(j)) = i
> tindex(total(j) + count(j)) = k
>
>Shouldn't it read like this (for indices being zero-based)?
> > oindex(total(j) + count(j)) = i - 1 > tindex(total(j) + count(j)) = k - 1 > > >Regards, > >-- >Lisandro Dalcín >--------------- >Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >PTLC - Güemes 3450, (3000) Santa Fe, Argentina >Tel/Fax: +54-(0)342-451.1594 > >_______________________________________________ >mpi-forum mailing list >mpi-forum_at_[hidden] >http://lists.cs.uiuc.edu/mailman/listinfo/mpi-forum Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden] High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30) From koziol at [hidden] Sat Mar 1 17:00:13 2008 From: koziol at [hidden] (Quincey Koziol) Date: Sat, 1 Mar 2008 17:00:13 -0600 Subject: [Mpi-22] Proposed amendment to application cleanup callback text Message-ID: <4CF1A84F-4B1B-49E9-BD9D-729A1F66BB76@hdfgroup.org> Hi all, I've got a problem which I think might be a candidate for a fix in the 2.2 standard: Is it legal to make MPI calls in the 'delete' callback for an attribute attached to MPI_COMM_SELF which is being freed during a call to MPI_Finalize? Section 4.8 in the 2.0 standard doesn't really say, I think. Here's the text: > 4.8 Allowing User Functions at Process Termination > > There are times in which it would be convenient to have actions > happen when an MPI process > finishes. For example, a routine may do initializations that are > useful until the MPI job (or > that part of the job that being terminated in the case of > dynamically created processes) is > finished. This can be accomplished in MPI-2 by attaching an > attribute to MPI_COMM_SELF > with a callback function. 
> When MPI_FINALIZE is called, it will first execute the equivalent
> of an MPI_COMM_FREE on MPI_COMM_SELF. This will cause the delete
> callback function to be executed on all keys associated with
> MPI_COMM_SELF, in an arbitrary order. If no key has been attached to
> MPI_COMM_SELF, then no callback is invoked. The “freeing” of
> MPI_COMM_SELF occurs before any other parts of MPI are affected.
> Thus, for example, calling MPI_FINALIZED will return false in any of
> these callback functions. Once done with MPI_COMM_SELF, the order and
> rest of the actions taken by MPI_FINALIZE is not specified.
>
> Advice to implementors. Since attributes can be added from any
> supported language, the MPI implementation needs to remember the
> creating language so the correct callback is made. (End of advice to
> implementors.)

Here's my use case: we cache modified file information in the HDF5 library and need to be able to flush that data to the file if the user calls MPI_Finalize without calling the HDF5 routines to close (or flush) that file first. It looks like adding an attribute onto MPI_COMM_SELF is the right thing to do, and the standard seems to imply that it's possible to make MPI calls from that callback, but I'm not certain of that. (Obviously, the HDF5 library needs to make I/O calls and possibly may need to exchange data or invoke a barrier, etc.)

Is it reasonable to add a sentence to the effect that "all MPI operations (except MPI_Finalize and MPI_Init) are possible from the attribute's 'delete' callback"? Here's a copy of section 4.8, with text to that effect worked in:

> 4.8 Allowing User Functions at Process Termination
>
> There are times in which it would be convenient to have actions
> happen when an MPI process finishes. For example, a routine may do
> initializations that are useful until the MPI job (or that part of
> the job that being terminated in the case of dynamically created
> processes) is finished.
> This can be accomplished in MPI-2 by attaching an attribute to
> MPI_COMM_SELF with a callback function. When MPI_FINALIZE is called,
> it will first execute the equivalent of an MPI_COMM_FREE on
> MPI_COMM_SELF. This will cause the delete callback function to be
> executed on all keys associated with MPI_COMM_SELF, in an arbitrary
> order. If no key has been attached to MPI_COMM_SELF, then no callback
> is invoked. The “freeing” of MPI_COMM_SELF occurs before any other
> parts of MPI are affected and all MPI operations (except MPI_Finalize
> and MPI_Init) are available to the attribute's 'delete' callback.
> Thus, for example, calling MPI_FINALIZED will return false in any of
> these callback functions. Once done with MPI_COMM_SELF, the order and
> rest of the actions taken by MPI_FINALIZE is not specified.
>
> Advice to implementors. Since attributes can be added from any
> supported language, the MPI implementation needs to remember the
> creating language so the correct callback is made. (End of advice to
> implementors.)

Thanks,
Quincey Koziol
The HDF Group

From Dries.Kimpe at [hidden] Sun Mar 2 06:01:16 2008
From: Dries.Kimpe at [hidden] (Dries Kimpe)
Date: Sun, 2 Mar 2008 13:01:16 +0100
Subject: [Mpi-22] determine if running in a heterogenous environment
Message-ID: <20080302120116.GA12616@mhdmobile>

Below: proposal for MPI-2.2 that adds support for querying whether a communicator is heterogeneous.

----------------------------------------------------------------------------
Proposal:
----------------------------------------------------------------------------

Provide some method to determine if a communicator is heterogeneous.

There are a number of different possibilities to provide this capability:

1) Provide a predefined integer-valued attribute which can be used to query a communicator.
   (for example MPI_HETEROGENOUS)

-- or --

2) Create a separate function: MPI_Comm_is_heterogenous(MPI_Comm comm, int *flag);

----------------------------------------------------------------------------
Rationale:
----------------------------------------------------------------------------

Currently, MPI-2.0 does not provide a portable way for an application to determine if it is running in a heterogeneous environment.

Some applications are not written with heterogeneous environments in mind; they do not (always) use correct datatype descriptions when sending or receiving data, but instead treat the data as an array of bytes, relying on all datatypes having the same memory representation on both sender and receiver. Most often, this is done to avoid the added complexity of creating the correct datatypes.

Although the pack/unpack functions provide an alternative, they do not have the same memory requirements (they need an extra buffer to receive the data) or performance characteristics (they need an extra copy).

Adding a way for an application to test if it is currently running in a heterogeneous environment enables it to take appropriate action (aborting, switching to type-safe functions, ...).

The MPI implementation -- if it supports heterogeneous environments -- already needs to determine this information, because it is responsible for performing type conversions in heterogeneous communicators.

Providing this information on a per-communicator basis instead of returning it for the whole MPI_UNIVERSE / MPI_COMM_WORLD enables both the MPI implementation and the user application to avoid overhead in the case of a homogeneous communicator that is a subset of an inhomogeneous MPI_COMM_WORLD.
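[Editorial illustration] The rationale above leans on a check an application could in principle perform itself: compare each rank's in-memory representation of the elementary types. The sketch below is illustrative Python, not MPI code; `type_fingerprint` and `is_byte_safe` are hypothetical names, and a plain list stands in for the allgather a real implementation would use to collect fingerprints over a communicator.

```python
import struct
import sys

def type_fingerprint():
    """Summarize this process's in-memory representation of a few
    elementary C types: native byte order plus native struct sizes.
    Equal fingerprints on two ranks suggest that raw byte transfers
    (MPI_BYTE) between them are safe."""
    sizes = tuple(struct.calcsize(fmt) for fmt in ("h", "i", "l", "q", "f", "d", "P"))
    return (sys.byteorder, sizes)

def is_byte_safe(fingerprints):
    """Given the fingerprints of every rank in a communicator (a list
    standing in for the result of an allgather), report whether the
    communicator is homogeneous in the byte-transfer-safe sense."""
    return len(set(fingerprints)) <= 1

# Four identical ranks: a homogeneous communicator.
fp = type_fingerprint()
print(is_byte_safe([fp, fp, fp, fp]))          # True

# One rank with flipped endianness (say, a front-end node of a
# different architecture) makes raw MPI_BYTE traffic unsafe.
flipped = ("big" if fp[0] == "little" else "little", fp[1])
print(is_byte_safe([fp, fp, fp, flipped]))     # False
```

A real implementation would gather the fingerprint once per communicator and cache the resulting boolean, which is essentially the per-communicator query (option 1 above) that the proposal asks the MPI library itself to provide.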
---------------------------------------------------------------------------- Alternative ways to get the same information without modifying the standard: ---------------------------------------------------------------------------- * Store information about the architecture when compiling the application; Compare this information at runtime with all other members of the communicator. * (not 100% correct): calculate at run time the size of a number of elementary datatypes, compare information with the other ranks in the communicator. * -------------- next part -------------- A non-text attachment was scrubbed... Name: 01-part Type: application/pgp-signature Size: 190 bytes Desc: not available URL: From jsquyres at [hidden] Mon Mar 3 10:09:13 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 3 Mar 2008 11:09:13 -0500 Subject: [Mpi-22] determine if running in a heterogenous environment In-Reply-To: <20080302120116.GA12616@mhdmobile> Message-ID: I think that coming up with a precise definition for "heterogeneous" could be problematic... On Mar 2, 2008, at 7:01 AM, Dries Kimpe wrote: > Below: proposal for MPI-2.2 that adds support for querying if a > communicator is heterogenous. > > ---------------------------------------------------------------------------- > Proposal: > ---------------------------------------------------------------------------- > > Provide some method to determine if a communicator is heterogenous. > > There are a number of different possibilities to provide this > capability: > > 1) Provide a predefined integer valued attribute which can be used > to query a > communicator. 
(for example MPI_HETEROGENOUS) > > -- or -- > > 2) Create a seperate function: MPI_Comm_is_heterogenous (MPI_Comm > comm, > int * flag); > > ---------------------------------------------------------------------------- > Rationale: > ---------------------------------------------------------------------------- > > Currently, MPI-2.0 does not provide a portable way for an > application to > determine if it is running in a heterogenous environment. > > Some applications are not written with heterogenous environments in > mind; > They do not (always) use correct datatype descriptions when sending or > receiving data, but instead treat the data as an array of bytes, > relying on all datatypes having the same memory representation on both > sender and receiver. Most often, this is done to avoid the added > complexity > of creating the correct datatypes. > > Although the pack/unpack functions provide an alternative, the do > not have > the same memory requirements (need an extra buffer to receive the > data) or > performance characteristics (need an extra copy). > > Adding a way for an application to test if it is currently running > in a > heterogenous environment enables it take appropriate action (aborting, > switching to type-safe functions, ...) > > The MPI implementation -- if it supports heterogenous environments -- > already needs to determine this information because it is > responsible for > performing type conversions in heterogenous communicators. > > Providing this information on a per-communicator base instead of > returning > it for the whole MPI_UNIVERSE / MPI_COMM_WORLD enables both the MPI > implementation and the user application to avoid overhead > in case of a homogenous communicator that is a subset of an > inhomogenous > MPI_COMM_WORLD. 
>
> ----------------------------------------------------------------------------
> Alternative ways to get the same information without modifying the
> standard:
> ----------------------------------------------------------------------------
>
> * Store information about the architecture when compiling the
> application; Compare this information at runtime with all other
> members of the communicator.
>
> * (not 100% correct): calculate at run time the size of a number of
> elementary datatypes, compare information with the other ranks in the
> communicator.
>
> _______________________________________________
> Mpi-22 mailing list
> Mpi-22_at_[hidden]
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22

-- 
Jeff Squyres
Cisco Systems

From treumann at [hidden] Mon Mar 3 11:01:12 2008
From: treumann at [hidden] (Richard Treumann)
Date: Mon, 3 Mar 2008 12:01:12 -0500
Subject: [Mpi-22] determine if running in a heterogenous environment
In-Reply-To:
Message-ID:

The real concern here is:

Does the MPI implementation need to provide data conversion services between any pair of tasks in the communicator? If I have some tasks on a slow node and other tasks on a fast one, we could debate whether that is heterogeneous.

I think the proposal has merit, but we need to be specific that only data representation conversion for data transfer routines is involved in the meaning of "heterogeneous".

Dick Treumann - MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

mpi-22-bounces_at_[hidden] wrote on 03/03/2008 11:09:13 AM:

> I think that coming up with a precise definition for "heterogeneous"
> could be problematic...
>
> On Mar 2, 2008, at 7:01 AM, Dries Kimpe wrote:
>
> > Below: proposal for MPI-2.2 that adds support for querying if a
> > communicator is heterogenous.
> > > > ---------------------------------------------------------------------------- > > Proposal: > > ---------------------------------------------------------------------------- > > > > Provide some method to determine if a communicator is heterogenous. > > > > There are a number of different possibilities to provide this > > capability: > > > > 1) Provide a predefined integer valued attribute which can be used > > to query a > > communicator. (for example MPI_HETEROGENOUS) > > > > -- or -- > > > > 2) Create a seperate function: MPI_Comm_is_heterogenous (MPI_Comm > > comm, > > int * flag); > > > > ---------------------------------------------------------------------------- > > Rationale: > > ---------------------------------------------------------------------------- > > > > Currently, MPI-2.0 does not provide a portable way for an > > application to > > determine if it is running in a heterogenous environment. > > > > Some applications are not written with heterogenous environments in > > mind; > > They do not (always) use correct datatype descriptions when sending or > > receiving data, but instead treat the data as an array of bytes, > > relying on all datatypes having the same memory representation on both > > sender and receiver. Most often, this is done to avoid the added > > complexity > > of creating the correct datatypes. > > > > Although the pack/unpack functions provide an alternative, the do > > not have > > the same memory requirements (need an extra buffer to receive the > > data) or > > performance characteristics (need an extra copy). > > > > Adding a way for an application to test if it is currently running > > in a > > heterogenous environment enables it take appropriate action (aborting, > > switching to type-safe functions, ...) 
> > > > The MPI implementation -- if it supports heterogenous environments -- > > already needs to determine this information because it is > > responsible for > > performing type conversions in heterogenous communicators. > > > > Providing this information on a per-communicator base instead of > > returning > > it for the whole MPI_UNIVERSE / MPI_COMM_WORLD enables both the MPI > > implementation and the user application to avoid overhead > > in case of a homogenous communicator that is a subset of an > > inhomogenous > > MPI_COMM_WORLD. > > > > ---------------------------------------------------------------------------- > > Alternative ways to get the same information without modifying the > > standard: > > ---------------------------------------------------------------------------- > > > > * Store information about the architecture when compiling the > > application; > > Compare this information at runtime with all other members of the > > communicator. > > > > * (not 100% correct): calculate at run time the size of a number of > > elementary datatypes, compare information with the other ranks in the > > communicator. > > > > _______________________________________________ > > Mpi-22 mailing list > > Mpi-22_at_[hidden] > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > > > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > Mpi-22 mailing list > Mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jsquyres at [hidden] Mon Mar 3 11:08:47 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 3 Mar 2008 12:08:47 -0500 Subject: [Mpi-22] determine if running in a heterogenous environment In-Reply-To: Message-ID: <7165BB7F-65D5-4576-B06A-7FA53FE0F447@cisco.com> On Mar 3, 2008, at 12:01 PM, Richard Treumann wrote: > The real concern here is: > > Does the MPI implementation need to provide data conversion services > between any pair of tasks in the communiator? If I have some tasks > on a slow node and other tasks on a fast one we could debate whether > that is heterogeneous. > > I think the proposal has merit but we need to be specific that only > data representation conversion for data transfer routines is > involved in the meaning of "heterogeneous". > Narrowing the scope to "whether translation functionality is required" is good. But note that whether the translation functionality is used may be specific to a given (communicator,peer,datatype) tuple. -- Jeff Squyres Cisco Systems From Dries.Kimpe at [hidden] Mon Mar 3 11:45:47 2008 From: Dries.Kimpe at [hidden] (Dries Kimpe) Date: Mon, 3 Mar 2008 18:45:47 +0100 Subject: [Mpi-22] determine if running in a heterogenous environment In-Reply-To: Message-ID: <20080303174547.GA26807@mhdmobile.wis.kuleuven.be> * Richard Treumann [2008-03-03 12:01:12]: > The real concern here is: > Does the MPI implementation need to provide data conversion services > between any pair of tasks in the communiator? If I have some tasks on a > slow node and other tasks on a fast one we could debate whether that is > heterogeneous. > I think the proposal has merit but we need to be specific that only data > representation conversion for data transfer routines is involved in the > meaning of "heterogeneous". This is what I meant: that's why I talked about not specifying the correct datatype, but instead using MPI_BYTE and relying on the fact that the in-memory representation on both ranks is the same. 
So, my definition of 'heterogenous': not having the same in-memory data representation. Greetings, Dries > > > Some applications are not written with heterogenous environments in > > > mind; > > > They do not (always) use correct datatype descriptions when sending or > > > receiving data, but instead treat the data as an array of bytes, > > > relying on all datatypes having the same memory representation on both > > > sender and receiver. Most often, this is done to avoid the added > > > complexity > > > of creating the correct datatypes. * -------------- next part -------------- A non-text attachment was scrubbed... Name: 01-part Type: application/pgp-signature Size: 190 bytes Desc: not available URL: From Dries.Kimpe at [hidden] Mon Mar 3 11:54:34 2008 From: Dries.Kimpe at [hidden] (Dries Kimpe) Date: Mon, 3 Mar 2008 18:54:34 +0100 Subject: [Mpi-22] determine if running in a heterogenous environment In-Reply-To: <7165BB7F-65D5-4576-B06A-7FA53FE0F447@cisco.com> Message-ID: <20080303175434.GB26807@mhdmobile.wis.kuleuven.be> * Jeff Squyres [2008-03-03 12:08:47]: > > I think the proposal has merit but we need to be specific that only > > data representation conversion for data transfer routines is > > involved in the meaning of "heterogeneous". > Narrowing the scope to "whether translation functionality is required" > is good. But note that whether the translation functionality is used > may be specific to a given (communicator,peer,datatype) tuple. You're right that this depends on the communicator and on the peer; It would not be uncommon to have one 'different (memory representation wise)' node in the communicator. This could be a login node, a visualisation node, ... So, from a performance point of view (and also for the mpi implementation internally), it's actually better to record this information for (communicator, source, dest). 
However, my main reason for this proposal was to enable an application that doesn't care about datatypes (might be more common than you'd think, looking at some of the mpi-subset ideas) to abort execution if it detects that data conversion is needed when communicating to its peers. Greetings, Dries * -------------- next part -------------- A non-text attachment was scrubbed... Name: 01-part Type: application/pgp-signature Size: 190 bytes Desc: not available URL: From jsquyres at [hidden] Mon Mar 3 12:24:11 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 3 Mar 2008 13:24:11 -0500 Subject: [Mpi-22] determine if running in a heterogenous environment In-Reply-To: <20080303175434.GB26807@mhdmobile.wis.kuleuven.be> Message-ID: <0557CD0A-2133-43A2-A954-0C295643C930@cisco.com> On Mar 3, 2008, at 12:54 PM, Dries Kimpe wrote: >> Narrowing the scope to "whether translation functionality is >> required" >> is good. But note that whether the translation functionality is used >> may be specific to a given (communicator,peer,datatype) tuple. > > You're right that this depends on the communicator and on the peer; ...and the datatype. > It would not be uncommon to have one 'different (memory representation > wise)' node in the communicator. This could be a login node, a > visualisation node, ... > > So, from a performance point of view (and also for the mpi > implementation > internally), it's actually better to record this information for > (communicator, source, dest). However, my main reason for this > proposal > was to enable an application that doesn't care about datatypes > (might be > more common than you'd think, looking at some of the mpi-subset > ideas) to > abort execution if it detects that data conversion is needed when > communicating to its peers. My point is that some datatypes may require translation while others may not. 
It's not always just endian differences -- sometimes there's floating point representation differences, or size differences (e.g., sizeof(int) != sizeof(int)). -- Jeff Squyres Cisco Systems From treumann at [hidden] Mon Mar 3 12:49:07 2008 From: treumann at [hidden] (Richard Treumann) Date: Mon, 3 Mar 2008 13:49:07 -0500 Subject: [Mpi-22] determine if running in a heterogenous environment In-Reply-To: <0557CD0A-2133-43A2-A954-0C295643C930@cisco.com> Message-ID: The simple question seems to be: Can MPI_BYTE be used safely on this communicator? If even one datatype requires translation then the answer is "no". How about just calling the per communicator attribute something like "MPI_BYTE_SAFE" and leave other debates about the meaning of heterogeneous aside? There is clearly room to add complexity but is there a real need? It seems to me that communicator attribute "MPI_BYTE_SAFE" provides 99% of the value and is easy to do. Dick Treumann - MPI Team/TCEM IBM Systems & Technology Group Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-22-bounces_at_[hidden] wrote on 03/03/2008 01:24:11 PM: > On Mar 3, 2008, at 12:54 PM, Dries Kimpe wrote: > > >> Narrowing the scope to "whether translation functionality is > >> required" > >> is good. But note that whether the translation functionality is used > >> may be specific to a given (communicator,peer,datatype) tuple. > > > > You're right that this depends on the communicator and on the peer; > > ...and the datatype. > > > It would not be uncommon to have one 'different (memory representation > > wise)' node in the communicator. This could be a login node, a > > visualisation node, ... > > > > So, from a performance point of view (and also for the mpi > > implementation > > internally), it's actually better to record this information for > > (communicator, source, dest). 
However, my main reason for this > > proposal > > was to enable an application that doesn't care about datatypes > > (might be > > more common than you'd think, looking at some of the mpi-subset > > ideas) to > > abort execution if it detects that data conversion is needed when > > communicating to its peers. > > > My point is that some datatypes may require translation while others > may not. It's not always just endian differences -- sometimes there's > floating point representation differences, or size differences (e.g., > sizeof(int) != sizeof(int)). > > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > Mpi-22 mailing list > Mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dries.Kimpe at [hidden] Mon Mar 3 12:59:17 2008 From: Dries.Kimpe at [hidden] (Dries Kimpe) Date: Mon, 3 Mar 2008 19:59:17 +0100 Subject: [Mpi-22] determine if running in a heterogenous environment In-Reply-To: <0557CD0A-2133-43A2-A954-0C295643C930@cisco.com> Message-ID: <20080303185917.GA2027@mhdmobile.lan> You have to draw the line somewhere ;-) Especially since -- assuming the user uses MPI_BYTE to describe everything -- the MPI implementation cannot know if any of those datatypes needing adjustment are used. Of course, the application could know this if detailed information about the So, as soon as one of the predefined MPI datatypes is different I would flag the peer (or communicator) as heterogen(e)ous. For example, X86 vs X86_64 and gcc: All C types, except for pointers and long integers are the same; Still, I would say they are heterogen(e)ous without giving more details. Dries * Jeff Squyres [2008-03-03 13:24:11]: > My point is that some datatypes may require translation while others > may not. 
It's not always just endian differences -- sometimes there's > floating point representation differences, or size differences (e.g., > sizeof(int) != sizeof(int)). * -------------- next part -------------- A non-text attachment was scrubbed... Name: 01-part Type: application/pgp-signature Size: 190 bytes Desc: not available URL: From jsquyres at [hidden] Wed Mar 19 09:10:41 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Wed, 19 Mar 2008 10:10:41 -0400 Subject: [Mpi-22] MPI 2.2 ballots Message-ID: <3E3BE220-77E3-431D-8AC3-296AFE1A521A@cisco.com> Bill -- Let us know when the first round of MPI 2.2 ballots will be coming up. Some of the C++ stuff (e.g., const) got moved to MPI 2.2. There was some confusion about it at the last Chicago meeting -- I couldn't remember enough context on the spot in Chicago to remember why "const" was Good for all C++ handles, for example. I'd like to bring that discussion forward again at the appropriate time. Thanks. -- Jeff Squyres Cisco Systems From jsquyres at [hidden] Mon Mar 31 07:04:36 2008 From: jsquyres at [hidden] (Jeff Squyres) Date: Mon, 31 Mar 2008 08:04:36 -0400 Subject: [Mpi-22] 2.2 at April meeting? Message-ID: <426BFCCF-2ACE-4278-A582-232D9A99C2F2@cisco.com> Bill -- Do we plan to have any time allocated in the April meeting to MPI 2.2, or are we still in a holding pattern waiting for 2.1? If we do discuss 2.2 in April, I'd like a few minutes to discuss the whole "const" issues surrounding the C++ bindings. A few people asked me last meeting to rehash the issues, etc. In short: my position is that *all* predefined handles in the C++ bindings should be "const" (reversing some of the [incorrect, I believe] errata) and fixing a few C++ API bindings to compensate. MPI::BOTTOM is the only exception, but we have discussed that one to death already. If we push all the 2.2 discussions out to the June meeting, no problem -- I just wanted to queue up the "const" discussion for whenever the 2.2 discussions occur. 
-- Jeff Squyres Cisco Systems