[Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements

HOLMES Daniel d.holmes at epcc.ed.ac.uk
Thu Oct 24 11:41:34 CDT 2019


Hi Rolf & Jeff,

I think this wiki article is instructive on this topic also:
https://en.wikipedia.org/wiki/X86_memory_segmentation

This seems like a crazy memory-addressing system to me personally, but it is a (historical) example of a segmented addressing approach that MPI_Aint can support.

The “strange properties” for arithmetic are strange indeed, depending on what the MPI_Aint stores and how.

If MPI_Aint were 20 bits long and stored only the physical address, then it could not be used to determine uniquely which segment is being used or what the offset is within that segment (there are up to 4096 possible segment:offset pairs for each address). Does MPI need that more detailed information? Probably, because segments were a way of implementing memory protection: accessing a segment you did not have permission to access led to a “segmentation fault” error. I do not know enough about these old architectures to say whether an attempt to access the *same byte* using two different segment:offset pairs that resolve to the *same* physical address could result in different behaviour. That is, if I have access permission for segment 3 but not for segment 4, I can access {seg=3, offset=2^16-16}, but can I access {seg=4, offset=2^16-32}, which is the same byte? If not, then MPI needs to store both segment and offset inside MPI_Aint to be able to check permissions and to set registers correctly.

If MPI_Aint were 32 bits long and stored the segment in the upper 16 bits and the offset in the lower 16 bits, then the 20-bit physical address could be computed with a single simple instruction, and both segment and offset would be immediately retrievable. However, doing ordinary integer arithmetic on this bitwise representation is unwise because it is really a compound structure type. Suppose we subtract 1 from an MPI_Aint of this layout that stores offset 0 in some non-zero segment s. We get offset (2^16-1) in segment (s-1), which is not 1 byte before the previous MPI_Aint, because consecutive segments start only 16 bytes apart and therefore overlap. The same happens when adding: overflowing the offset portion changes the segment in an incorrect way, because segment++ moves the physical address forward only 16 bytes, not 2^16 bytes.

The wrap-around from the end of the address space back to the beginning is also a source of strange properties for arithmetic.

One of the key statements from that wiki page is this:

The root of the problem is that no appropriate address-arithmetic instructions suitable for flat addressing of the entire memory range are available. Flat addressing is possible by applying multiple instructions, which however leads to slower programs.

Cheers,
Dan.
—
Dr Daniel Holmes PhD
Architect (HPC Research)
d.holmes at epcc.ed.ac.uk
Phone: +44 (0) 131 651 3465
Mobile: +44 (0) 7940 524 088
Address: Room 2.09, Bayes Centre, 47 Potterrow, Central Area, Edinburgh, EH8 9BT
—
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
—

On 24 Oct 2019, at 17:27, Jeff Squyres (jsquyres) via mpiwg-large-counts <mpiwg-large-counts at lists.mpi-forum.org> wrote:

On Oct 24, 2019, at 11:15 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:

For me, it looked like there was some misunderstanding
of the concept that absolute addresses, relative addresses,
and numbers of bytes can all be stored in MPI_Aint.

...with the caveat that MPI_Aint -- as it is right now -- does not support modern segmented memory systems (i.e., where you need more than a small number of bits to indicate the segment where the memory lives).

I think that changes the conversation entirely, right?

--
Jeff Squyres
jsquyres at cisco.com

