From traff at [hidden] Mon Jan 12 08:19:12 2009 From: traff at [hidden] (Jesper Larsson Traeff) Date: Mon, 12 Jan 2009 15:19:12 +0100 Subject: [Mpi-22] [MPI Forum] #96: MPI_OP_CREATE and associativity In-Reply-To: <042.73353ede2c581268cc00abe5d1a70740@lists.mpi-forum.org> Message-ID: <20090112141912.GA29253@fourier.it.neclab.eu> For non-associative operators, in order to get any meaningful result, all you can do is perform the operations in some predefined order (e.g. rank order) with a predefined "bracketing", (((0+1)+2)+3)+...+n. That seems to imply a linear-time algorithm? Thus, I think it is right that the standard requires the operators to be associative. Operators on FLOATs are usually treated as if they are associative (with some/many MPI libraries providing a possibility to switch to an algorithm with some predefined ordering and bracketing, for the cases where this is desired). I therefore think this extension requires a lot more thought for it to be "mathematically well-defined" and also otherwise make sense. Jesper On Fri, Jan 02, 2009 at 05:26:48PM -0000, MPI Forum wrote: > #96: MPI_OP_CREATE and associativity > -------------------------------------+-------------------------------------- > Reporter: htor | Owner: htor > Type: Enhancements to standard | Status: new > Priority: Forum feedback requested | Milestone: 2009/02/09 California > Version: MPI 2.2 | Keywords: > -------------------------------------+-------------------------------------- > Information about associativity cannot be attached to user-defined > operations. This prevents any dynamic reordering of trees, based on > different arrival patterns (strongly recommended by the Advice to > Implementors in Sec. 5.9.1). Architecture-dependent associativity can be > derived for predefined types (e.g., MPI_INT is associative while MPI_FLOAT > is not on most architectures). No such statements can be made about user- > defined operations. This seems to be mathematically incomplete. 
> > == History == > This has been the status quo since MPI-1. > > == Proposed Solution == > Add an additional (bool) in-argument to MPI_OP_CREATE that indicates if > the operation is associative or not. > > == Impact on Implementations == > low, nothing needs to be changed and the argument can be ignored. However, > it reveals rather interesting optimization possibilities at large scale. > > == Impact on Applications / Users == > low, an additional argument can be supplied, which could just be "false" to > retain the status quo. > > == Alternative Solutions == > none > > == Entry for the Change Log == > added associativity argument to MPI_OP_CREATE > > -- > Ticket URL: > MPI Forum > MPI Forum From htor at [hidden] Mon Jan 12 09:36:58 2009 From: htor at [hidden] (Torsten Hoefler) Date: Mon, 12 Jan 2009 10:36:58 -0500 Subject: [Mpi-22] [MPI Forum] #96: MPI_OP_CREATE and associativity In-Reply-To: <20090112141912.GA29253@fourier.it.neclab.eu> Message-ID: <20090112153657.GJ18142@benten.cs.indiana.edu> On Mon, Jan 12, 2009 at 03:19:12PM +0100, Jesper Larsson Traeff wrote: > > For non-associative operators, in order to get any meaningful result, > all you can do is perform the operations in some predefined order > (e.g. rank order) with a predefined "bracketing", (((0+1)+2)+3)+...+n. > That seems to imply a linear-time algorithm? yes, it does, but linear time in computation only, not in communication! I know, this sounds counter-intuitive, but I heard from other scientists that there are fields that need this functionality (i.e., they can't otherwise guarantee numerical stability). Examples are mainly financial markets, not so much simulations (most simulations are full of approximations anyway). > Thus, I think it is right that the standard requires the operators to > be associative. That's exactly what I would criticize. 
Don't get me wrong, I am not pushing hard, but the argument that, because there seems to be limited optimization potential, we should ignore mathematical properties seems weak. Also, the optimization potential is actually not that limited. The two implementations that I've seen just do a loop over all ranks: if(rank == 0) for(i=0; i<P; i++) { receive contribution from rank i; apply op }. To optimize this, and retain the order of summation, one could employ similar strategies as for gather to reduce the messaging complexity (e.g. from O(N) to O(log N)). > Operators on FLOATs are usually treated as if they are associative > (with some/many MPI libraries providing a possibility to switch to an > algorithm with some predefined ordering and bracketing, for the cases > where this is desired). yes, they can still be treated as associative (for all internal types), but the user should have a chance to define non-associative reductions. > I therefore think this extension requires a lot more thought for it to > be "mathematically well-defined" and also otherwise make sense why? Making it mathematically consistent is as simple as stated. Some optimization possibilities are shown above. What else is required? Don't get me wrong, I'm not pushing and I'm happy to withdraw the ticket if the general consensus is that it's nonsense. It just seems rather inconsistent to me that associativity is just half-assumed. It is stated on page 160:44: "The operation op is always assumed to be associative." but then on page 161:1, we admit that we actually know the mathematical properties of IEEE-754 and say: "This may change the result of the reduction for operations that are not strictly associative and commutative, such as floating point addition.". And the following advice to implementors makes the situation even worse (I think), because it says that there is only one wrong result that is right for MPI. This advice also disables optimizations that could be done based on arrival patterns (as Bronis mentioned before). This would be much clearer if we made the associativity explicit. But this might as well be an MPI-3 discussion. 
Best, Torsten -- bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ ----- Torsten Hoefler | Postdoctoral Researcher Open Systems Lab | Indiana University 150 S. Woodlawn Ave. | Bloomington, IN, 474045, USA Lindley Hall Room 135 | +01 (812) 855-3608 * -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From traff at [hidden] Mon Jan 12 10:11:24 2009 From: traff at [hidden] (Jesper Larsson Traeff) Date: Mon, 12 Jan 2009 17:11:24 +0100 Subject: [Mpi-22] [MPI Forum] #96: MPI_OP_CREATE and associativity In-Reply-To: <20090112153657.GJ18142@benten.cs.indiana.edu> Message-ID: <20090112161124.GA30522@fourier.it.neclab.eu> On Mon, Jan 12, 2009 at 10:36:58AM -0500, Torsten Hoefler wrote: > > To optimize this, and retain the order of summation, one could employ > similar strategies as for gather to reduce the messaging complexity > (e.g. from O(N) to O(log N)). Since you cannot do anything on intermediate nodes in your gather tree, the message complexity is O(Nm), m: size of vector... You only gain something for small problems by reducing the number of receives at the root. In the case where you use some dynamic algorithm where things may arrive in some unspecified order, you'd again have to buffer the contributions from the processes before you can do the reduction. > yes, they can still be treated as associative (for all internal types), > but the user should have a chance to define non-associative reductions. > In the apps you refer to, I guess they often want an MPI_SUM on an MPI_FLOAT done in some particular order with a particular bracketing. Thus, what you need is some ability to control each individual MPI_Reduce call? 
Jesper From erezh at [hidden] Mon Jan 12 13:02:02 2009 From: erezh at [hidden] (Erez Haba) Date: Mon, 12 Jan 2009 11:02:02 -0800 Subject: [Mpi-22] chapter authors 5, 6, 8, 10, 13, 15, 16 please review ticket #46 Message-ID: <6B68D01C00C9994A8E150183E62A119E7B6415F7BB@NA-EXMSG-C105.redmond.corp.microsoft.com> Add const Keyword to the C bindings https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/46 Chapter authors, please review this proposal to check that it identifies the right functions in your chapters. Chapters 3, 4, 7, 9, and 11 have already been reviewed. Thanks to Rich and Jesper. Still needing review: Chapter 5 Adam Moody, Collective Communication Chapter 6 Richard Treumann, Groups, Contexts, and Communicators Chapter 10 David Solt, Process Creation and Management Chapter 13 Rajeev Thakur, I/O Chapter 15 Rolf Rabenseifner, Deprecated Functions Chapter 16 Jeff Squyres, Language Bindings To make it easier for the chapter authors, the functions are listed in chapter order in the proposal. Thanks, .Erez * -------------- next part -------------- An HTML attachment was scrubbed... URL: From htor at [hidden] Mon Jan 12 13:05:47 2009 From: htor at [hidden] (Torsten Hoefler) Date: Mon, 12 Jan 2009 14:05:47 -0500 Subject: [Mpi-22] [MPI Forum] #96: MPI_OP_CREATE and associativity In-Reply-To: <20090112161124.GA30522@fourier.it.neclab.eu> Message-ID: <20090112190547.GK18142@benten.cs.indiana.edu> On Mon, Jan 12, 2009 at 05:11:24PM +0100, Jesper Larsson Traeff wrote: > On Mon, Jan 12, 2009 at 10:36:58AM -0500, Torsten Hoefler wrote: > > > > To optimize this, and retain the order of summation, one could employ > > similar strategies as for gather to reduce the messaging complexity > > (e.g. from O(N) to O(log N)). > Since you cannot do anything on intermediate nodes in your gather tree, > the message complexity is O(Nm), m: size of vector... yes - details actually depend on the model :). 
Let's use the well-known LogGP model, extended so that the reduction costs x microseconds per byte, as a base for discussion. I talked about a direct vs. tree algorithm (which of course only makes sense for small data!): I assume that all nodes start their participation in the call at the same time and the root is the one who finishes last. I don't model CPU overhead because this doesn't add more information. Direct: t = L + (P-2)g + (s-1)G + P*s*x Binomial tree: t = log_2(P)*L + (log_2(P)-1)*g + sum_{i=1}^{log_2(P)} i*(s-1)*G + P*s*x The direct receive is simple and needs no explanation. The longest path in the bintree has a length of log_2(P), and rank 0 receives from log_2(P) nodes. The messages grow along the critical path (thus the sum). > You only gain something for small problems by reducing the number of > receives at the root. yes, however, the definition of "small" changes with scale (see above models). Also, the transmission of mid-sized messages can be handled in the same model with double or fractional trees (see Karp's article "Optimal Broadcast and Summation in the LogP Model" from '93). Yes, and I know that this only affects data motion; the reduction has to be done in order on rank 0. However, it is often the case (especially for larger messages and modern architectures such as GPUs on chip/Larrabee) that the bandwidth of the processor is much higher than the bandwidth of the network. Large messages could use algorithms that control congestion (such as in large gathers), i.e., only a subset of the processes send at the same time. The data could also be reduced in order here. None of those optimizations are trivial, and they are very network-dependent. > In the case where you use some dynamic algorithm where things may arrive in > some unspecified order, you'd again have to buffer the contributions from > the processes before you can do the reduction. As I said, this depends. 
And you have the same problems with the current MPI reduction rules, i.e., you either have to buffer or to synchronize. Out of order doesn't exist. So in this implementation, you'll either buffer (small messages) or synchronize (large messages), as you do for the current reductions. > > yes, they can still be treated as associative (for all internal types), > > but the user should have a chance to define non-associative reductions. > > > In the apps you refer to, I guess they often want an MPI_SUM on an MPI_FLOAT > done in some particular order with a particular bracketing. Thus, what > you need is some ability to control each individual MPI_Reduce call? yes, but I guess controlling each call is not necessary. However, controlling the order of operations might be necessary to ensure correctness. But I don't know a particular app, so the whole argument is weak. I just want to discuss the issue and either understand why we don't have this feature or discuss its addition. My point is that there are (limited) optimization possibilities and that we limit ourselves enormously by enforcing the same reduction order for all float/double operations and all user-defined operations in MPI-2.1. If associativity were an explicit property of a user-defined operation, then we could apply dynamic optimizations based on arrival time, which we can't right now (side-note: we can do this for associative ops, such as all ops on non-fp types). Thanks & Best, Torsten -- bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ ----- Torsten Hoefler | Postdoctoral Researcher Open Systems Lab | Indiana University 150 S. Woodlawn Ave. | Bloomington, IN, 474045, USA Lindley Hall Room 135 | +01 (812) 855-3608 * -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From traff at [hidden] Tue Jan 13 02:23:20 2009 From: traff at [hidden] (Jesper Larsson Traeff) Date: Tue, 13 Jan 2009 09:23:20 +0100 Subject: [Mpi-22] [MPI Forum] #96: MPI_OP_CREATE and associativity In-Reply-To: <20090112190547.GK18142@benten.cs.indiana.edu> Message-ID: <20090113082320.GA8286@fourier.it.neclab.eu> On Mon, Jan 12, 2009 at 02:05:47PM -0500, Torsten Hoefler wrote: > > > > yes, they can still be treated as associative (for all internal types), > > > but the user should have a chance to define non-associative reductions. > > > > > In the apps you refer to, I guess they often want an MPI_SUM on an MPI_FLOAT > > done in some particular order with a particular bracketing. Thus, what > > you need is some ability to control each individual MPI_Reduce call? > yes, but I guess controlling each call is not necessary. However, > controlling the order of operations might be necessary to ensure > correctness. But I don't know a particular app, so the whole argument is > weak. I just want to discuss the issue and either understand why we > don't have this feature or discuss its addition. My point is that there > are (limited) optimization possibilities and that we limit ourselves > enormously by enforcing the same reduction order for all float/double > operations and all user-defined operations in MPI-2.1. If associativity > were an explicit property of a user-defined operation, then we could > apply dynamic optimizations based on arrival time, which we can't right > now (side-note: we can do this for associative ops, such as all ops on > non-fp types). > maybe I misunderstand you, but I don't see that allowing operators to be explicitly non-associative would give you more freedom for optimizations. Don't your arguments above show that there are rather fewer than for associative ops? But maybe it is better to discuss this at the next meeting? 
If your argument is: "there are natural (user-defined) operators in app A, B and C that are mathematically not associative, and users want to perform reductions with these" - then I agree that MPI is missing something. Maybe an "MPI_Op_create_nonassoc(...)"? (I also don't think an extra function for setting the associativity of an already defined function fits with MPI; although a function like MPI_Op_use_some_special_canonical_order(MPI_Op f), where f could be a user as well as a predefined MPI_Op, could perhaps make sense, and would allow controlling the evaluation order each time MPI_Reduce/MPI_Allreduce/... is called, but then such a function would either have to be collective or users would be required to use it consistently, ... that is, many complications) Note also that non-associativity does put some non-trivial burden on implementers: for non-associative (with or without hyphen?) operators, a special algorithm that does the reduction in the sequential order is needed. Jesper From htor at [hidden] Mon Jan 19 16:16:31 2009 From: htor at [hidden] (Torsten Hoefler) Date: Mon, 19 Jan 2009 17:16:31 -0500 Subject: [Mpi-22] [MPI Forum] #96: MPI_OP_CREATE and associativity In-Reply-To: <20090113082320.GA8286@fourier.it.neclab.eu> Message-ID: <20090119221631.GK21932@benten.cs.indiana.edu> On Tue, Jan 13, 2009 at 09:23:20AM +0100, Jesper Larsson Traeff wrote: > maybe I misunderstand you, but I don't see that allowing operators to > be explicitly non-associative would give you more freedom for optimizations. > Don't your arguments above show that there are rather fewer than for > associative ops? But maybe it is better to discuss this at the next meeting? just for the other readers: we clarified the discussion during the last Collectives Working Group teleconference and decided to update ticket #96 and merge ticket #95 into it. 
For details see: https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/96 > If your argument is: "there are natural (user-defined) operators in app > A, B and C that are mathematically not associative, and users want to perform > reductions with these" - then I agree that MPI is missing something. yes, exactly. Such operators are MPI_SUM of MPI_FLOAT and friends. > Maybe an "MPI_Op_create_nonassoc(...)"? (I also don't think an extra function > for setting the associativity of an already defined function fits with MPI; > although a function like MPI_Op_use_some_special_canonical_order(MPI_Op f), > where f could be a user as well as a predefined MPI_Op, > could perhaps make sense, and would allow to control the evaluation order > each time MPI_Reduce/MPI_Allreduce/... is called, but then such a function > would either have to be collective or users would be required to use it > consistently, ... that is, many complications) exactly, this is why the whole discussion was moved from the context of MPI-2.2 to MPI-3 into the collective working group. > Note also that non-associativity does put some non-trivial burden on > implementers: for non-associative (with or without hyphen?) operators, > a special algorithm that does the reduction in the sequential order is needed I don't see a big/non-trivial problem. I copied the collective working group list in order to take further discussion off the MPI-2.2 list. All the Best, Torsten -- bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ ----- Torsten Hoefler | Postdoctoral Researcher Open Systems Lab | Indiana University 150 S. Woodlawn Ave. | Bloomington, IN, 474045, USA Lindley Hall Room 135 | +01 (812) 855-3608 * -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From wgropp at [hidden] Tue Jan 27 16:35:16 2009 From: wgropp at [hidden] (William Gropp) Date: Tue, 27 Jan 2009 16:35:16 -0600 Subject: [Mpi-22] [MPI Forum] #55: MPI-2.1 Cross-language attribute example is wrong In-Reply-To: <055.519c6f24707e884e2af30cb1500332a9@lists.mpi-forum.org> Message-ID: <97E99BF3-F315-40B0-977C-E23F33F64132@illinois.edu> On the last point, the real answer is that since the attributes behave as if they were set by the now deprecated functions, they should themselves be deprecated, because we must not change their behavior. This is also the source of the int/MPI_Fint issue - there was no MPI_Fint in MPI-1. Bill On Jan 27, 2009, at 4:09 PM, MPI Forum wrote: > #55: MPI-2.1 Cross-language attribute example is wrong > ----------------------------------- > +---------------------------------------- > Reporter: jsquyres | Owner: jsquyres > Type: Correction to standard | Status: new > Priority: Waiting for reviews | Milestone: 2008/12/15 Menlo > Park > Version: MPI 2.2 | Resolution: > Keywords: | > ----------------------------------- > +---------------------------------------- > > Comment(by jsquyres): > > I am re-working this proposal and will post a new one soon. Some > things > that have been consistently missed by multiple reviewers: > > * The original example is '''definitely wrong'''. > MPI_COMM_PUT_ATTR does > not exist; the correct call is MPI_COMM_SET_ATTR. > * The original example is '''definitely wrong'''. Example B sets an > INTEGER(KIND=MPI_ADDRESS_KIND) value, and therefore the output C > type must > be MPI_Aint* (not (int*)) -- regardless of the behavior of predefined > attributes. > * If predefined attributes are supposed to behave as if they were > set via > the (deprecated!) Fortran MPI-1 function, then the value when > retrieved in > C (e.g., MPI_TAG_UB) should actually be of type MPI_Fint, not int. 
I > think that the text in MPI-2.1 p488:3 is wrong on this point (i.e., > s/int/MPI_Fint/). > > Per the last point, I think it is pretty good luck that existing MPI > apps > work when using (int*) instead of (MPI_Fint*) -- indeed, most > compilers > have the default sizeof(INTEGER) == sizeof(int). But that does not > have > to be the case; MPI_Fint is technically more correct. > > Also per the last point, why are we specifying that predefined > attributes > behave as if they were set by deprecated functions? That seems > strange. > > -- > Ticket URL: > > MPI Forum > MPI Forum William Gropp Deputy Director for Research Institute for Advanced Computing Applications and Technologies Paul and Cynthia Saylor Professor of Computer Science University of Illinois Urbana-Champaign From jsquyres at [hidden] Tue Jan 27 16:49:02 2009 From: jsquyres at [hidden] (Jeff Squyres) Date: Tue, 27 Jan 2009 17:49:02 -0500 Subject: [Mpi-22] [MPI Forum] #55: MPI-2.1 Cross-language attribute example is wrong In-Reply-To: <97E99BF3-F315-40B0-977C-E23F33F64132@illinois.edu> Message-ID: <4C59E10F-7282-493D-8386-D637EF6C3DDF@cisco.com> On Jan 27, 2009, at 5:35 PM, William Gropp wrote: > On the last point, the real answer is that since the attributes > behave as if they were set by the now deprecated functions, they > should themselves be deprecated, since we must not change their > behavior. This is also the source of the int/MPI_Fint issue - there > was no MPI_Fint in MPI-1. Wow, the scope of this ticket keeps expanding. :-) I see the following attributes in OMPI's mpi.h: /* MPI-1 */ MPI_TAG_UB, MPI_HOST, MPI_IO, MPI_WTIME_IS_GLOBAL, /* MPI-2 */ MPI_APPNUM, MPI_LASTUSEDCODE, MPI_UNIVERSE_SIZE, MPI_WIN_BASE, MPI_WIN_SIZE, MPI_WIN_DISP_UNIT, Excluding MPI_WIN_BASE (which, IIRC, is the only address-sized attribute in this list), if we deprecate these names, I assume we'll simply replace them with new names that behave as if they were set from Fortran MPI_COMM_SET_ATTR. 
Specifically, the sizes of the values will be such that you have to access them with INTEGER(KIND=MPI_ADDRESS_KIND) and (MPI_Aint*). Is that what you're thinking? If so, how about s/MPI_/MPI_ATTR_/ in all the names (perhaps also creating an alias for MPI_WIN_BASE -> MPI_ATTR_WIN_BASE just for symmetry)? I'm open to suggestions for new names. I'd almost prefer to make that a separate (but related) ticket -- the issues in #55 are already quite tangled... (i.e., leave out any mention of the predefined attributes from #55 and have a new ticket for those issues) Sound reasonable? -- Jeff Squyres Cisco Systems From wgropp at [hidden] Tue Jan 27 16:56:26 2009 From: wgropp at [hidden] (William Gropp) Date: Tue, 27 Jan 2009 16:56:26 -0600 Subject: [Mpi-22] [MPI Forum] #55: MPI-2.1 Cross-language attribute example is wrong In-Reply-To: <4C59E10F-7282-493D-8386-D637EF6C3DDF@cisco.com> Message-ID: <23A6823A-B7EB-42B1-BF67-D72B20C7B469@illinois.edu> Eek! Not for 2.2 ! Maybe a historical advice to users .... Bill On Jan 27, 2009, at 4:49 PM, Jeff Squyres wrote: > On Jan 27, 2009, at 5:35 PM, William Gropp wrote: > >> On the last point, the real answer is that since the attributes >> behave as if they were set by the now deprecated functions, they >> should themselves be deprecated, since we must not change their >> behavior. This is also the source of the int/MPI_Fint issue - there >> was no MPI_Fint in MPI-1. > > Wow, the scope of this ticket keeps expanding. :-) > > I see the following attributes in OMPI's mpi.h: > > /* MPI-1 */ > MPI_TAG_UB, > MPI_HOST, > MPI_IO, > MPI_WTIME_IS_GLOBAL, > > /* MPI-2 */ > MPI_APPNUM, > MPI_LASTUSEDCODE, > MPI_UNIVERSE_SIZE, > MPI_WIN_BASE, > MPI_WIN_SIZE, > MPI_WIN_DISP_UNIT, > > Excluding MPI_WIN_BASE (which, IIRC, is the only address-sized > attribute in this list), if we deprecate these names, I assume we'll > simply replace them with new names that behave as if they were set > from Fortran MPI_COMM_SET_ATTR. 
Specifically, the sizes of the values > will be such that you have to access them with > INTEGER(KIND=MPI_ADDRESS_KIND) and (MPI_Aint*). > > Is that what you're thinking? If so, how about s/MPI_/MPI_ATTR_/ in > all the names (perhaps also creating an alias for MPI_WIN_BASE -> > MPI_ATTR_WIN_BASE just for symmetry)? I'm open to suggestions for new > names. > > I'd almost prefer to make that a separate (but related) ticket -- the > issues in #55 are already quite tangled... (i.e., leave out any > mention of the predefined attributes from #55 and have a new ticket > for those issues) Sound reasonable? > > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 William Gropp Deputy Director for Research Institute for Advanced Computing Applications and Technologies Paul and Cynthia Saylor Professor of Computer Science University of Illinois Urbana-Champaign From jsquyres at [hidden] Tue Jan 27 17:08:43 2009 From: jsquyres at [hidden] (Jeff Squyres) Date: Tue, 27 Jan 2009 18:08:43 -0500 Subject: [Mpi-22] [MPI Forum] #55: MPI-2.1 Cross-language attribute example is wrong In-Reply-To: <23A6823A-B7EB-42B1-BF67-D72B20C7B469@illinois.edu> Message-ID: <2DD06E6C-654F-499D-948F-63B3FC0DB0CA@cisco.com> On Jan 27, 2009, at 5:56 PM, William Gropp wrote: > Eek! Not for 2.2 ! Maybe a historical advice to users .... There are two issues: 1. The current text about the C type for predefined attributes is erroneous. p487:46 through p488:3 says that predefined attributes were set by "a Fortran call" (it doesn't say which Fortran call), but then says the result in C is of type int. There is no Fortran attribute call that sets an attribute value that is exactly equivalent to a C int. That's why MPI-2 created MPI_Fint, right? So I think that we must do *something* about this issue. 2. Deprecating the old attribute names now (as you suggested) is not a Bad Thing. 
Right now, we have a discontinuity of deprecations: some functions are deprecated, but values that are supposedly set by those functions are *not* deprecated. If we deprecate those values now and provide replacements, it opens the door in MPI-3 (or later) to actually whack them from the spec and only use the new names. Specifically: deprecating the old names doesn't mean that we have to break user apps for MPI-2.2. -- Jeff Squyres Cisco Systems From wgropp at [hidden] Tue Jan 27 21:22:28 2009 From: wgropp at [hidden] (William Gropp) Date: Tue, 27 Jan 2009 21:22:28 -0600 Subject: [Mpi-22] [MPI Forum] #55: MPI-2.1 Cross-language attribute example is wrong In-Reply-To: <2DD06E6C-654F-499D-948F-63B3FC0DB0CA@cisco.com> Message-ID: <332F973B-6450-483B-8E88-2DB93DEA462D@illinois.edu> Probably the least damaging thing to do now is to make them MPI_Fints - this (probably) will have no effect in any current implementation, and is consistent with most of the text. I agree that it would be good to deprecate these values, even if we don't replace them in 2.2. Bill On Jan 27, 2009, at 5:08 PM, Jeff Squyres wrote: > On Jan 27, 2009, at 5:56 PM, William Gropp wrote: > >> Eek! Not for 2.2 ! Maybe a historical advice to users .... > > There are two issues: > > 1. The current text about the C type for predefined attributes is > erroneous. > > p487:46 through 488-3 says that predefined attributes were set by "a > Fortran call" (it doesn't say which Fortran call), but then says the > result in C is of type int. There is no Fortran attribute call that > sets an attribute value that is exactly equivalent to a C int. That's > why MPI-2 created MPI_Fint, right? > > So I think that we must do *something* about this issue. > > 2. Deprecating the old attribute names now (as you suggested) is not a > Bad Thing. Right now, we have a discontinuity of deprecations: some > functions are deprecated, but values that are supposedly set by those > functions are *not* deprecated. 
If we deprecate those values now and > provide replacements, it opens the door in MPI-3 (or later) to > actually whack them from the spec and only use the new names. > Specifically: deprecating the old names doesn't mean that we have to > break user apps for MPI-2.2. > > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 William Gropp Deputy Director for Research Institute for Advanced Computing Applications and Technologies Paul and Cynthia Saylor Professor of Computer Science University of Illinois Urbana-Champaign From jsquyres at [hidden] Thu Jan 29 11:57:39 2009 From: jsquyres at [hidden] (Jeff Squyres) Date: Thu, 29 Jan 2009 12:57:39 -0500 Subject: [Mpi-22] Please review tickets Message-ID: <6A2AA24B-0565-4011-9EEC-5D246B19B9EC@cisco.com> Hi all. Don't forget that we are about 1.5 weeks away from the San Jose meeting. If you are a reviewer, please review your MPI-2.2 tickets so that we can discuss them at the meeting. Thanks! -- Jeff Squyres Cisco Systems From treumann at [hidden] Thu Jan 29 13:33:06 2009 From: treumann at [hidden] (Richard Treumann) Date: Thu, 29 Jan 2009 14:33:06 -0500 Subject: [Mpi-22] Please review tickets In-Reply-To: <6A2AA24B-0565-4011-9EEC-5D246B19B9EC@cisco.com> Message-ID: Jeff - Do you know of a way for a Forum member to pull a list of tickets he is obliged to review? Dick Treumann - MPI Team IBM Systems & Technology Group Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 Tele (845) 433-7846 Fax (845) 433-8363 mpi-22-bounces_at_[hidden] wrote on 01/29/2009 12:57:39 PM: > [image removed] > > [Mpi-22] Please review tickets > > Jeff Squyres > > to: > > MPI 2.2 > > 01/29/2009 01:01 PM > > Sent by: > > mpi-22-bounces_at_[hidden] > > Please respond to "MPI 2.2" > > Hi all. Don't forget that we are about 1.5 weeks away from the San > Jose meeting. 
> > If you are a reviewer, please review your MPI-2.2 tickets so that we > can discuss them at the meeting. > > Thanks! > > -- > Jeff Squyres > Cisco Systems > > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsquyres at [hidden] Thu Jan 29 15:36:48 2009 From: jsquyres at [hidden] (Jeff Squyres) Date: Thu, 29 Jan 2009 16:36:48 -0500 Subject: [Mpi-22] Please review tickets In-Reply-To: Message-ID: <93F27B10-0680-47E6-B528-9EC2186C0CF5@cisco.com> I think the best way might be to search for your Trac ID and/or email address using the "search" box. On the result page of the search, if you turn up more info than you need, you can limit the search to just tickets. On Jan 29, 2009, at 2:33 PM, Richard Treumann wrote: > Jeff - Do you know of a way for a Forum member to pull a list of > tickets he is obliged to review? > > Dick Treumann - MPI Team > IBM Systems & Technology Group > Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 > > > mpi-22-bounces_at_[hidden] wrote on 01/29/2009 12:57:39 PM: > > > [image removed] > > > > [Mpi-22] Please review tickets > > > > Jeff Squyres > > > > to: > > > > MPI 2.2 > > > > 01/29/2009 01:01 PM > > > > Sent by: > > > > mpi-22-bounces_at_[hidden] > > > > Please respond to "MPI 2.2" > > > > Hi all. Don't forget that we are about 1.5 weeks away from the San > > Jose meeting. > > > > If you are a reviewer, please review your MPI-2.2 tickets so that we > > can discuss them at the meeting. > > > > Thanks! 
> > > > -- > > Jeff Squyres > > Cisco Systems > > > > _______________________________________________ > > mpi-22 mailing list > > mpi-22_at_[hidden] > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 > _______________________________________________ > mpi-22 mailing list > mpi-22_at_[hidden] > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22 -- Jeff Squyres Cisco Systems