[Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements

Rolf Rabenseifner rabenseifner at hlrs.de
Thu Oct 24 01:19:16 CDT 2019


Dear Jeff and all,

> Does MPI support segmented address spaces or not?

If I understand correctly, MPI is, and always was, prepared for segmented address spaces, as long as all the integer types involved are signed integers and the segmentation is expressed through some higher-order bits.
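
To make that concrete, here is a minimal C sketch of the idea (purely illustrative:
the type name, the 8/56 bit split, and the helper are my own invention, not anything
defined by the standard). Within one segment, i.e., within the same sequential
storage, a plain signed minus still yields the correct displacement; across segments
the high-order segment bits leak into the result, which is exactly why such address
arithmetic has to be restricted:

  #include <stdint.h>
  #include <stdio.h>

  typedef int64_t my_aint;                  /* stand-in for a signed MPI_Aint   */

  #define SEG_SHIFT 56                      /* 8 high-order bits: segment id    */
  #define OFF_MASK  ((INT64_C(1) << SEG_SHIFT) - 1)  /* 56 low bits: offset     */

  /* Build an "absolute address": segment id in the high bits, offset below. */
  static my_aint make_addr(int segment, int64_t offset) {
      return ((my_aint)segment << SEG_SHIFT) | (offset & OFF_MASK);
  }

  int main(void) {
      /* Two addresses in the SAME segment: plain minus works as expected. */
      my_aint a = make_addr(3, 1000);
      my_aint b = make_addr(3, 4096);
      printf("within segment:  %lld\n", (long long)(b - a));   /* 3096 */

      /* Addresses in DIFFERENT segments: the segment bits distort the
         difference, so plain address arithmetic must be restricted. */
      my_aint c = make_addr(4, 0);
      printf("across segments: %lld\n", (long long)(c - a));
      return 0;
  }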

Best regards
Rolf


----- Rolf Rabenseifner via mpiwg-large-counts <mpiwg-large-counts at lists.mpi-forum.org> wrote:
> See below
> 
> ----- Original Message -----
> > From: "Jeff Squyres" <jsquyres at cisco.com>
> > To: "Rolf Rabenseifner" <rabenseifner at hlrs.de>
> > Cc: "mpiwg-large-counts" <mpiwg-large-counts at lists.mpi-forum.org>
> > Sent: Wednesday, October 23, 2019 5:12:33 PM
> > Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements
> 
> > On Oct 23, 2019, at 10:44 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
> >> 
> >>>   typedef struct { uint64_t context; uint64_t pointer; } MPI_Aint;
> >>> 
> >>> Is that incorrect?
> >> 
> >> I expect that it is not correct. My apologies for that.
> >> 
> >> In MPI-1.0 through MPI-2.1, all address calculation had to be done with the
> >> - and + operators.
> >> MPI_Aint_add and MPI_Aint_diff were added only for the case that somebody
> >> switches integer overflow detection on.
> > 
> > I'm confused with how to reconcile that with MPI-3.1 4.1.12 "Correct Use of
> > Addresses", p115:13-15:
> > 
> > "Also, in machines with a segmented address space, addresses are not unique and
> > address arithmetic has some peculiar properties. Thus, the use of addresses,
> > that is, displacements relative to the start address MPI_BOTTOM, has to be
> > restricted."
> > 
> > Doesn't this explicitly state that segmented address spaces are supported?
> 
> Yes, some higher-order bits may be used as a flag.
> If two absolute addresses are within the same array or structure (i.e., the same
> variable), that is, within the same sequential storage, then the minus operator
> must still work.
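
For illustration (my example, not text from the standard), a minimal C sketch of
the two ways to form such a displacement between two absolute addresses inside the
same array, using the MPI-3.1 routines MPI_Get_address and MPI_Aint_diff:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);

      double buf[100];
      MPI_Aint a0, a10;
      MPI_Get_address(&buf[0],  &a0);
      MPI_Get_address(&buf[10], &a10);

      /* Recommended (Section 4.1.5): overflow-safe address arithmetic. */
      MPI_Aint disp_safe  = MPI_Aint_diff(a10, a0);

      /* Still valid within the same variable (same sequential storage). */
      MPI_Aint disp_plain = a10 - a0;

      /* Both print 10*sizeof(double), i.e., 80 on typical platforms. */
      printf("disp_safe=%ld disp_plain=%ld\n", (long)disp_safe, (long)disp_plain);

      MPI_Finalize();
      return 0;
  }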
> 
> 
> >> We asked ourselves whether we want to deprecate the use of the + and - operators,
> >> i.e., to add this to Table 2.1 on page 18, and the forum decided "no" per straw
> >> vote.
> >> We definitely never removed the operators + and - from the list of
> >> valid operations for MPI_Aint.
> > 
> > I'm not sure how to reconcile your statements with MPI-3.1 2.5.6 "Absolute
> > Addresses and Relative Address Displacements" p16:39-43:
> > 
> > "For retrieving absolute addresses or any calculation with absolute addresses,
> > one should use the routines and functions provided in Section 4.1.5. Section
> > 4.1.12 provides additional rules for the correct use of absolute addresses. For
> > expressions with relative displacements or other usage without absolute
> > addresses, intrinsic operators (e.g., +, -, *) can be used."
> > 
> > If MPI_Aint is supposed to be used for absolute addresses, this tells me that
> > MPI_Aint_add/diff must be used for all mathematical operations.
> 
> The following definitions apply in understanding how to implement an 
> ISO International Standard and other normative ISO deliverables (TS, PAS, IWA).
> 
> - "shall" indicates a requirement.
> - "should" indicates a recommendation.
> 
> > ... any calculation with absolute addresses,
> > one ***should*** use the routines and functions provided in Section 4.1.5
> 
> ... indicates a recommendation, not a requirement.
> 
> > 
> >> MPI-3.1 on page 17, lines 16-18, clearly states that any int, MPI_Aint, and
> >> MPI_Offset value
> >> can be assigned to an MPI_Count variable, i.e., all four must be integers of some
> >> byte size.
> > 
> > I think it actually says something subtly different than that.  It says:
> > 
> > "The size of the MPI_Count type is determined by the MPI implementation with the
> > restriction that it must be minimally capable of encoding any value that may be
> > stored in a variable of type int, MPI_Aint, or MPI_Offset in C and of type
> > INTEGER, INTEGER (KIND=MPI_ADDRESS_KIND), or INTEGER (KIND=MPI_OFFSET_KIND) in
> > Fortran."
> > 
> > Meaning: an MPI_Count must be big enough to hold an MPI_Aint.  It does not say
> > that you can assign one to the other.  In my hypothetical "typedef struct"
> > example, this means that MPI_Count would need to be a 128-bit value.
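
A small C11 sketch of that size relationship (illustrative only; comparing sizeof
is just a proxy for "minimally capable of encoding any value", but it shows why a
two-member 128-bit struct MPI_Aint would force MPI_Count to 128 bits as well):

  #include <mpi.h>
  #include <assert.h>

  /* With the usual integer-typedef definitions of these types, the
     assertions hold; with a 128-bit struct MPI_Aint, MPI_Count would
     also have to grow to at least 128 bits for them to keep holding. */
  static_assert(sizeof(MPI_Count) >= sizeof(int),        "MPI_Count too small for int");
  static_assert(sizeof(MPI_Count) >= sizeof(MPI_Aint),   "MPI_Count too small for MPI_Aint");
  static_assert(sizeof(MPI_Count) >= sizeof(MPI_Offset), "MPI_Count too small for MPI_Offset");

  int main(void) { return 0; }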
> > 
> > ----
> > 
> > All this being said, what all this discussion probably means is that there are
> > discrepancies in the standard that should be fixed and/or clarified.  But I
> > think the discussion starts with a fundamental question:
> > 
> > Does MPI support segmented address spaces or not?
> > 
> > --
> > Jeff Squyres
> > jsquyres at cisco.com
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de .
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de .
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .

