From ritzdorf at [hidden]  Wed Feb  6 04:13:01 2008
From: ritzdorf at [hidden] (Hubert Ritzdorf)
Date: Wed, 06 Feb 2008 11:13:01 +0100
Subject: [mpi-22] FW: [mpi-21] Proposal: MPI_OFFSET built-in type
In-Reply-To: 
Message-ID: <47A9882D.3010006@it.neclab.eu>

And the length of these datatypes in the external32 representation has to
be defined if reading/writing these datatypes is to be allowed.

Hubert

Jeff Squyres wrote:
> Don't forget MPI::OFFSET (and MPI::AINT).
>
> And they should be const.  ;-)
>
> On Jan 24, 2008, at 12:55 PM, Rajeev Thakur wrote:
>
>> Is it an interface change? It would be an addition to the set of
>> predefined types, such as MPI_INT and MPI_CHAR, so it should just
>> work, I think.
>>
>>>> MPI Datatype: MPI_OFFSET
>>>> Corresponding C type: long long int
>>>> Corresponding Fortran type: INTEGER(KIND=MPI_OFFSET_KIND)
>>
>> The corresponding C type should be "implementation defined". It could
>> be int, for example, in an implementation that only supports 32-bit
>> file sizes.
>>
>> Rajeev
>>
>>> -----Original Message-----
>>> From: mpi-22-bounces_at_[hidden]
>>> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Richard Graham
>>> Sent: Thursday, January 24, 2008 11:48 AM
>>> To: mpi-22_at_[hidden]
>>> Subject: Re: [mpi-22] FW: [mpi-21] Proposal: MPI_OFFSET built-in type
>>>
>>> Do you think this should go into 2.2 or 3.0? I ask not because the
>>> change is large, but because it will change the signature of several
>>> interface functions (presumably marking the older functions as
>>> deprecated, for removal in X number of years). There is another
>>> change that I would like to see added to the standard: a way to
>>> convey information from MPI_Alloc_mem() to the functions using this
>>> memory, without forcing implementations to use all sorts of
>>> non-portable solutions to figure out whether this memory can be used
>>> "as-is" by the network layer, or whether it needs to be "prepared"
>>> (pinned, etc.).
>>>
>>> It seems prudent to combine such interface changes into the smallest
>>> number of changes (1 is preferred).
>>>
>>> What do you think?
>>> Rich
>>>
>>> On 1/24/08 12:18 PM, "Rajeev Thakur" wrote:
>>>
>>>> A similar one is needed for MPI_Aint.
>>>>
>>>> Rajeev
>>>>
>>>>> -----Original Message-----
>>>>> From: mpi-22-bounces_at_[hidden]
>>>>> [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Richard Graham
>>>>> Sent: Thursday, January 24, 2008 11:09 AM
>>>>> To: mpi-22_at_[hidden]
>>>>> Subject: [mpi-22] FW: [mpi-21] Proposal: MPI_OFFSET built-in type
>>>>>
>>>>> Moving this to the appropriate list.
>>>>>
>>>>> Rich
>>>>>
>>>>> ------ Forwarded Message
>>>>> From: Robert Latham
>>>>> Reply-To: "Mailing list for discussion of MPI 2.1"
>>>>> Date: Thu, 24 Jan 2008 10:41:52 -0600
>>>>> To: 
>>>>> Subject: [mpi-21] Proposal: MPI_OFFSET built-in type
>>>>>
>>>>> I hope this is less contentious than adding 'const' keywords...
>>>>>
>>>>> I would like to propose a new built-in type MPI_OFFSET, defined to
>>>>> be a type corresponding to INTEGER(KIND=MPI_OFFSET_KIND) or
>>>>> MPI_Offset.
>>>>>
>>>>> This is a minor addition to the standard, which would have no
>>>>> impact on existing code while serving to simplify code which
>>>>> exchanges file offsets among processes.
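To make the workaround discussed above concrete, here is a minimal C
sketch of exchanging MPI_Offset values with a MPI_BYTE-derived type, as
proposed in the mail above. The allgather use case, the communicator, and
all variable names (offtype, my_off, all_offs) are illustrative
assumptions, not part of the proposal:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Datatype offtype;
        MPI_Offset my_off, *all_offs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* The workaround: wrap sizeof(MPI_Offset) raw bytes in a type. */
        MPI_Type_contiguous(sizeof(MPI_Offset), MPI_BYTE, &offtype);
        MPI_Type_commit(&offtype);

        my_off = (MPI_Offset)rank * 1024;   /* e.g., this rank's offset  */
        all_offs = malloc(nprocs * sizeof(MPI_Offset));

        /* With the proposed built-in type this would simply be
         * MPI_Allgather(&my_off, 1, MPI_OFFSET, all_offs, 1,
         *               MPI_OFFSET, MPI_COMM_WORLD); */
        MPI_Allgather(&my_off, 1, offtype, all_offs, 1, offtype,
                      MPI_COMM_WORLD);

        MPI_Type_free(&offtype);
        free(all_offs);
        MPI_Finalize();
        return 0;
    }

Note that, unlike a true built-in type, the byte-based type performs no
data representation conversion (e.g., for external32 or heterogeneous
systems), which is part of what motivates a predefined MPI_OFFSET.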
>>>>>
>>>>> There is a workaround in the standard: a user can define a type
>>>>> from MPI_BYTE:
>>>>>
>>>>> MPI_Type_contiguous(sizeof(MPI_Offset), MPI_BYTE, &offtype);
>>>>>
>>>>> However, it would clearly be more convenient to operate on
>>>>> built-in types.
>>>>>
>>>>> MPI Datatype: MPI_OFFSET
>>>>> Corresponding C type: long long int
>>>>> Corresponding Fortran type: INTEGER(KIND=MPI_OFFSET_KIND)
>>>>>
>>>>> Thanks
>>>>> ==rob
>>>>>
>>>>> --
>>>>> Rob Latham
>>>>> Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
>>>>> Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
>>>>>
>>>>> _______________________________________________
>>>>> mpi-21 mailing list
>>>>> mpi-21_at_[hidden]
>>>>> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21
>>>>>
>>>>> ------ End of Forwarded Message

From traff at [hidden]  Fri Feb  8 03:58:23 2008
From: traff at [hidden] (Jesper Larsson Traeff)
Date: Fri, 8 Feb 2008 10:58:23 +0100
Subject: [mpi-22] Proposal: Correction to one-sided completion semantics, Section 6.7
Message-ID: <20080208095823.GA5798@fourier.it.neclab.eu>

Dear all,

This is a proposal for a correction to the semantics of one-sided
communication, Section 6.7 (p. 137ff), where there is (in my opinion) a
mistake in need of correction. I propose the corrections below for one of
the next ballots (probably for MPI 2.2).

best regards

Jesper

-------------
RATIONALE (not to be put in the standard): the example below - which to
my understanding should be correct - will fail under rules 5 and 6 as
they stand. To fix this, I propose splitting rule 6 into two parts, and
changing MPI_WIN_POST to MPI_WIN_WAIT in rule 5.
-------------

p. 138, line 21-28 - replace

5. A local update of a location in a private window copy in process
   memory becomes visible in the public window copy at the latest when
   an ensuing call to MPI_WIN_WAIT, MPI_WIN_FENCE, or MPI_WIN_UNLOCK is
   executed on that window by the window owner.

6. With active target communication, an update by a put or accumulate
   call to a public window copy becomes visible in the private copy in
   process memory at the latest when the exposure epoch ends, that is,
   when an ensuing call to either MPI_WIN_WAIT or MPI_WIN_FENCE is
   executed on that window by the window owner.

7. With passive target communication, an update by a put or accumulate
   call to a public window copy becomes visible in the private copy in
   process memory at the latest when the next exposure epoch is started,
   that is, when an ensuing call to either MPI_WIN_POST, MPI_WIN_FENCE,
   or MPI_WIN_LOCK is executed on that window by the window owner.
p. 138, after line 41 - add explanation/examples

The rules (5, 6, 7) make it possible to switch between synchronization
modes, as explained on p. 140, in particular between passive
synchronization and synchronization with post-wait. The following
example illustrates this:

[EXAMPLE]

Process A                            Process B
---------                            ---------
MPI_Win_post(A,B)                    MPI_Win_start(A)
MPI_Win_start(A)
local update of x in A               MPI_Put(A,y) /* to different location */
MPI_Win_complete                     MPI_Win_complete
MPI_Win_wait
local access to y

----------------------- MPI_Barrier -----------------------

                                     MPI_Win_lock(A)
                                     MPI_Get(A,x)
                                     MPI_Put(A,y) /* to different location */
                                     MPI_Win_unlock(A)

----------------------- MPI_Barrier -----------------------

MPI_Win_post(A) or MPI_Win_lock(A)
local access to y in A
MPI_Win_wait or MPI_Win_unlock(A)

[END OF EXAMPLE]

Since MPI_WIN_WAIT makes process A's local update to x visible in the
public window (rule 5), it can be accessed by process B after the first
barrier, in the access epoch started by MPI_WIN_LOCK. Process B's update
to the public window copy of y can be accessed by process A after the
second barrier (rule 7). The local access to y after MPI_WIN_WAIT,
before the first barrier, is guaranteed to access the value put by
process B (rule 6).

Note that in the following (erroneous) passive target communication
example, process A is *not* guaranteed to see the value put by process
B. In order for A to access the public copy of y, it must enter an
exposure epoch by calling either MPI_WIN_LOCK, MPI_WIN_POST, or
MPI_WIN_FENCE.

[EXAMPLE]

Process A                            Process B
---------                            ---------

----------------------- MPI_Barrier -----------------------

                                     MPI_Win_lock(A)
                                     MPI_Put(A,y)
                                     MPI_Win_unlock(A)

----------------------- MPI_Barrier -----------------------

local access to y in A  /* unsafe! */

[END OF EXAMPLE]
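For concreteness, here is one possible rendering of the first example as
compilable C for exactly two processes. The window layout (x and y as the
first two integers of A's window), the group construction, and all
variable names are assumptions made for illustration; the commented
visibility guarantees hold under the proposed rules 5-7, which is
precisely the point of the correction:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        int A[2] = {0, 0};              /* A[0] = x, A[1] = y; rank 0 is "A" */
        int ranks01[2] = {0, 1};
        MPI_Group world, grpA, grpAB;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_group(MPI_COMM_WORLD, &world);
        MPI_Group_incl(world, 1, ranks01, &grpA);    /* {A}    */
        MPI_Group_incl(world, 2, ranks01, &grpAB);   /* {A, B} */

        /* Only rank 0 (process A) exposes memory in the window. */
        MPI_Win_create(A, (MPI_Aint)(rank == 0 ? 2 * sizeof(int) : 0),
                       sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank == 0) {                  /* process A */
            MPI_Win_post(grpAB, 0, win);  /* MPI_Win_post(A,B)             */
            MPI_Win_start(grpA, 0, win);  /* MPI_Win_start(A)              */
            A[0] = 99;                    /* local update of x (rule 5)    */
            MPI_Win_complete(win);
            MPI_Win_wait(win);            /* B's put now visible (rule 6)  */
            printf("y = %d\n", A[1]);     /* local access to y             */
            MPI_Barrier(MPI_COMM_WORLD);
            MPI_Barrier(MPI_COMM_WORLD);  /* B's lock epoch lies between   */
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win); /* or MPI_Win_post */
            printf("y = %d\n", A[1]);     /* sees B's 2nd put (rule 7)     */
            MPI_Win_unlock(0, win);
        } else if (rank == 1) {           /* process B */
            int x = 0, y = 1;
            MPI_Win_start(grpA, 0, win);  /* MPI_Win_start(A)              */
            MPI_Put(&y, 1, MPI_INT, 0, 1, 1, MPI_INT, win);  /* put to y  */
            MPI_Win_complete(win);
            MPI_Barrier(MPI_COMM_WORLD);
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
            MPI_Get(&x, 1, MPI_INT, 0, 0, 1, MPI_INT, win);  /* get x     */
            y = 2;
            MPI_Put(&y, 1, MPI_INT, 0, 1, 1, MPI_INT, win);  /* put to y  */
            MPI_Win_unlock(0, win);
            MPI_Barrier(MPI_COMM_WORLD);
        }

        MPI_Win_free(&win);
        MPI_Group_free(&grpA);
        MPI_Group_free(&grpAB);
        MPI_Group_free(&world);
        MPI_Finalize();
        return 0;
    }

The barriers serialize B's passive-target epoch strictly between A's
MPI_Win_wait and A's final lock, mirroring the switch between post-wait
and lock-unlock synchronization that the proposed rules are meant to
permit.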