<div dir="ltr">This sounds to me like it is creating again the same problem we have with MPI_Aint --- one type doing too many things. If MPI_Aint can't accommodate absolute addresses in the I/O interfaces, we should consider adding a new type like MPI_Faint (file address int) for this quantity and include accessor routines to ensure manipulations of file addresses respect the implementation defined meaning of the bits. Even in C, it is not portable to do arithmetic on intptr_t because the integer representation of an address is implementation defined. We were careful in the definition of MPI_Aint_add and diff to describe them in terms of casting the absolute address arguments back to pointers before performing arithmetic.<div><br></div><div> ~Jim.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Oct 30, 2019 at 5:18 AM Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de">rabenseifner@hlrs.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear all and Jim,<br>
<br>
Jim asked:<br>
> When you assign an MPI_Aint to an MPI_Count, there are two cases depending<br>
> on what the bits in the MPI_Aint represent: an absolute address or a relative<br>
> displacement. The case where you assign an address to a count doesn't<br>
> make sense to me. Why would one do this, and why should MPI support it?<br>
> The case where you assign a displacement to a count seems fine, you would<br>
> want sign extension to happen.<br>
<br>
The answer is very simple:<br>
All derived datatype routines serve to describe both memory **and** file space.<br>
<br>
Therefore, the large count working group should decide:<br>
- Should the new large count routines be prepared for files of more than 10 or 20<br>
exabytes, where we need 64/65- or 65/66-bit unsigned/signed integers for<br>
relative byte displacements or byte counts?<br>
If yes, then all MPI_Aint arguments must be substituted by MPI_Count.<br>
(In other words, do we want to be prepared for another 25 years of MPI? :-)<br>
- Should we allow these new routines to also be used for memory description,<br>
where we typically need only the large MPI_Count "count" arguments?<br>
(Or should we provide two different new routines for each routine that<br>
currently has int count/... and MPI_Aint disp/... arguments?)<br>
- Should we allow a mix of old and new routines, especially for memory-based<br>
usage, such that old-style MPI_Get_address is used to retrieve an absolute<br>
address and then, e.g., new-style MPI_Type_create_struct with<br>
MPI_Count blocklengths and displacements is used?<br>
- Do we want to require that this conversion of an MPI_Aint address into an<br>
MPI_Count can be done with a normal assignment, rather than with<br>
a special MPI function?<br>
<br>
If we answer all four questions with yes (and in my opinion, we must)<br>
then Jim's question<br>
"Why would one do this [assign an address to a Count] <br>
and why should MPI support it?"<br>
is answered with this set of reasons.<br>
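<br>
To make the fourth question concrete, here is a minimal C sketch (my own<br>
illustration; it assumes an implementation where MPI_Count is wider than<br>
MPI_Aint):<br>
<br>
  #include <mpi.h><br>
  void example(void)<br>
  {<br>
    int buf[4];<br>
    MPI_Aint addr, disp;<br>
    MPI_Get_address(&buf[0], &addr);    /* (1) absolute address           */<br>
    MPI_Get_address(&buf[2], &disp);<br>
    disp = MPI_Aint_diff(disp, addr);   /* (2) relative byte displacement */<br>
    MPI_Count cdisp = disp;  /* fine: sign extension preserves the value  */<br>
    MPI_Count caddr = addr;  /* the case in question: if addr carries     */<br>
                             /* "unsigned" address bits, sign extension   */<br>
                             /* into the wider MPI_Count changes the bits */<br>
    (void) cdisp; (void) caddr;<br>
  }<br>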
<br>
I would say that this is the most complex decision that the<br>
large count working group has to make.<br>
A wrong decision would be hard to fix in the future.<br>
<br>
Best regards<br>
Rolf<br>
<br>
----- Original Message -----<br>
> From: "Jim Dinan" <<a href="mailto:james.dinan@gmail.com" target="_blank">james.dinan@gmail.com</a>><br>
> To: "Rolf Rabenseifner" <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>><br>
> Cc: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
> Sent: Tuesday, October 29, 2019 10:28:46 PM<br>
> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements<br>
<br>
> If you do pointer arithmetic, the compiler will ensure that the result is<br>
> correct. If you convert a pointer into an integer and then do the<br>
> arithmetic, the compiler can't help you and the result is not portable.<br>
> This is why MPI_Aint_add describes what it does in terms of pointer<br>
> arithmetic. The confusing and frustrating thing about MPI_Aint is that<br>
> it's one type for two very different purposes. Allowing direct +/- on<br>
> MPI_Aint values that represent addresses is not portable and is a mistake<br>
> that we tried to correct with MPI_Aint_add/diff (I am happy to strengthen<br>
> should to must if needed). It's perfectly fine to do arithmetic on<br>
> MPI_Aint values that are displacements.<br>
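> <br>
> As a minimal sketch of the difference (names are illustrative; the portable<br>
> form mirrors how MPI-3.1 describes MPI_Aint_add, i.e., casting the absolute<br>
> address back to a pointer before the arithmetic):<br>
> <br>
>   #include <mpi.h><br>
>   void sketch(char *p)<br>
>   {<br>
>     MPI_Aint base, addr;<br>
>     MPI_Get_address(p, &base);<br>
>     /* portable: cast back to a pointer, do the arithmetic there */<br>
>     addr = (MPI_Aint) ((char *) base + 8);  /* ~ MPI_Aint_add(base, 8) */<br>
>     /* not portable: arithmetic on the integer view of an address */<br>
>     addr = base + 8;<br>
>     (void) addr;<br>
>   }<br>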
> <br>
> When you assign an MPI_Aint to an MPI_Count, there are two cases depending<br>
> on what the bits in the MPI_Aint represent: an absolute address or a relative<br>
> displacement. The case where you assign an address to a count doesn't<br>
> make sense to me. Why would one do this, and why should MPI support it?<br>
> The case where you assign a displacement to a count seems fine, you would<br>
> want sign extension to happen.<br>
> <br>
> ~Jim.<br>
> <br>
> On Tue, Oct 29, 2019 at 4:52 PM Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>><br>
> wrote:<br>
> <br>
>> Dear Jim,<br>
>><br>
>> > (a3) Section 4.1.5 of MPI 3.1 states "To ensure portability, arithmetic on<br>
>> > absolute addresses should not be performed with the intrinsic operators<br>
>> > '-' and '+'".<br>
>><br>
>> The major problem is that we decided "should" and not "must" or "shall",<br>
>> because there is so much existing MPI-1 ... MPI-3.0 code that must have<br>
>> used the + or - operators.<br>
>><br>
>> The only requirement that is true from the beginning is that MPI addresses<br>
>> must be retrieved with MPI_Get_address.<br>
>><br>
>> And the second major problem is the new assignment of an MPI_Aint value<br>
>> into an MPI_Count variable when MPI_Count is larger than MPI_Aint.<br>
>><br>
>> Therefore, I would prefer that we keep this "should" and, in the long term,<br>
>> design MPI_Get_address in a way that in principle MPI_Aint_diff and _add<br>
>> need not do anything other than the + or - operator.<br>
>><br>
>> And this depends on the meaning of the unsigned addresses, i.e.,<br>
>> what the sequence of addresses is (i.e., does it really go from<br>
>> 0 to FFFF...FFFF) and then mapping these addresses to the mathematical<br>
>> sequence of MPI_Aint, which starts at -2**(n-1) and ends at 2**(n-1)-1.<br>
>><br>
>> That's all. For the moment, as far as the web and some emails have told us,<br>
>> we are far away from this contiguous 64-bit address space (0 to<br>
>> FFFF...FFFF).<br>
>><br>
>> But we should be correctly prepared.<br>
>><br>
>> Or in other words:<br>
>> > (a2) Should be solved by MPI_Aint_add/diff.<br>
>> In my opinion no; it must be solved by MPI_Get_address,<br>
>> and MPI_Aint_add/diff can remain normal + or - operators.<br>
>><br>
>> I should also mention that, of course, all MPI routines that<br>
>> accept MPI_BOTTOM must reverse the work of MPI_Get_address<br>
>> to get back the real "unsigned" virtual addresses of the OS.<br>
>><br>
>> This is the same as what we already had if an implementation chose<br>
>> to use the address of an MPI common block as the base for MPI_BOTTOM.<br>
>> Here, the MPI library had the freedom to revert the mapping<br>
>> within MPI_Get_address or within all functions called with MPI_BOTTOM.<br>
>><br>
>> Best regards<br>
>> Rolf<br>
>><br>
>><br>
>><br>
>> ----- Original Message -----<br>
>> > From: "Jim Dinan" <<a href="mailto:james.dinan@gmail.com" target="_blank">james.dinan@gmail.com</a>><br>
>> > To: "Rolf Rabenseifner" <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>><br>
>> > Cc: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
>> > Sent: Tuesday, October 29, 2019 3:58:18 PM<br>
>> > Subject: Re: [Mpiwg-large-counts] Large Count - the principles for<br>
>> counts, sizes, and byte and nonbyte displacements<br>
>><br>
>> > Hi Rolf,<br>
>> ><br>
>> > (a1) seems to me like another artifact of storing an unsigned quantity in a<br>
>> > signed variable, i.e., the quantity in an MPI_Aint can be an unsigned<br>
>> > address or a signed displacement. Since we don't have an unsigned type for<br>
>> > addresses, the user can't portably fix this above MPI. We will need to add<br>
>> > functions to deal with combinations of MPI_Aint and MPI_Counts. This is<br>
>> > essentially why we needed MPI_Aint_add/diff. Or ... the golden (Au is<br>
>> > gold) int ... MPI_Auint.<br>
>> ><br>
>> > (a2) Should be solved by MPI_Aint_add/diff.<br>
>> ><br>
>> > (a3) Section 4.1.5 of MPI 3.1 states "To ensure portability, arithmetic on<br>
>> > absolute addresses should not be performed with the intrinsic operators<br>
>> > '-' and '+'". MPI_Aint_add was written carefully to indicate that the "base"<br>
>> > argument is treated as an unsigned address and the "disp" argument is<br>
>> > treated as a signed displacement.<br>
>> ><br>
>> > ~Jim.<br>
>> ><br>
>> > On Tue, Oct 29, 2019 at 5:19 AM Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>><br>
>> > wrote:<br>
>> ><br>
>> >> Dear Jim and all,<br>
>> >><br>
>> >> I'm not sure whether I'm really able to understand your email.<br>
>> >><br>
>> >> I take the MPI view:<br>
>> >><br>
>> >> (1) An absolute address can be stored in an MPI_Aint variable<br>
>> >> with, and only with, MPI_Get_address or MPI_Aint_add.<br>
>> >><br>
>> >> (2) A positive or negative number of bytes, or a relative address<br>
>> >> (which is by definition the number of bytes between two locations<br>
>> >> in an MPI "sequential storage", MPI-3.1 page 115),<br>
>> >> can be assigned with any method to an MPI_Aint variable<br>
>> >> as long as the original value fits into MPI_Aint.<br>
>> >> In both languages an automatic type conversion (i.e., sign extension)<br>
>> >> is done.<br>
>> >><br>
>> >> (3) If users misuse MPI_Aint by storing anything else into an MPI_Aint<br>
>> >> variable, then this is out of the scope of MPI.<br>
>> >> If such values are used in a minus operation then it is<br>
>> >> out of the scope of MPI whether this makes sense.<br>
>> >> If the user is sure that the new value falls into category (2)<br>
>> >> then all is fine as long as the user is correct.<br>
>> >><br>
>> >> I expect that your => is not a "greater than or equal";<br>
>> >> I expect that you meant assignments.<br>
>> >><br>
>> >> > intptr_t => MPI_Aint<br>
>> >> "intptr_t: integer type capable of holding a pointer."<br>
>> >><br>
>> >> > uintptr_t => ??? (Anyone remember the MPI_Auint "golden Aint" proposal?)<br>
>> >> "uintptr_t: unsigned integer type capable of holding a pointer."<br>
>> >><br>
>> >> may fall exactly into (3) when used for pointers.<br>
>> >><br>
>> >><br>
>> >> Especially on a 64-bit system, the user may in the future have exactly<br>
>> >> the problems (a), (a1), (a2) and (b) as described below.<br>
>> >> But here the user is responsible, for example, for implementing (a3),<br>
>> >> whereas for MPI_Get_address the implementors of the MPI library<br>
>> >> are responsible, and the MPI Forum may be responsible for giving<br>
>> >> the correct advice.<br>
>> >><br>
>> >> By the way, the golden MPI_Auint was never golden.<br>
>> >> Such need was "resolved" by introducing MPI_Aint_diff and MPI_Aint_add<br>
>> >> in MPI-3.1.<br>
>> >><br>
>> >><br>
>> >> > ptrdiff_t => MPI_Aint<br>
>> >> "std::ptrdiff_t is the signed integer type of the result of subtracting<br>
>> >> two pointers."<br>
>> >><br>
>> >> may fit (2) perfectly.<br>
>> >><br>
>> >> All of the following falls into category (2):<br>
>> >><br>
>> >> > size_t (sizeof) => MPI_Count, int<br>
>> >> "sizeof( type ) (1)<br>
>> >> sizeof expression (2)<br>
>> >> Both versions are constant expressions of type std::size_t."<br>
>> >><br>
>> >> > size_t (offsetof) => MPI_Aint, int<br>
>> >> "Defined in header <cstddef><br>
>> >> #define offsetof(type, member) /*implementation-defined*/<br>
>> >> The macro offsetof expands to an integral constant expression<br>
>> >> of type std::size_t, the value of which is the offset, in bytes,<br>
>> >> from the beginning of an object of specified type to its<br>
>> >> specified member, including padding if any."<br>
>> >><br>
>> >> Note that this offsetof has nothing to do with MPI_Offset.<br>
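>> >><br>
>> >> For example (my own sketch of the typical use):<br>
>> >><br>
>> >>   #include <stddef.h><br>
>> >>   #include <mpi.h><br>
>> >>   struct particle { double coords[3]; int id; };<br>
>> >>   /* a category-(2) relative displacement, safe to assign to MPI_Aint */<br>
>> >>   static const MPI_Aint id_disp =<br>
>> >>       (MPI_Aint) offsetof(struct particle, id);<br>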
>> >><br>
>> >> On a system with less than 2**31 bytes and 4-byte int, it is guaranteed<br>
>> >> that size_t => int works.<br>
>> >><br>
>> >> On a system with less than 2**63 bytes and 8-byte MPI_Aint, it is<br>
>> >> guaranteed that size_t => MPI_Aint works.<br>
>> >><br>
>> >> Problem: size_t is unsigned, while int and MPI_Aint are signed.<br>
>> >><br>
>> >> MPI_Count should be defined in such a way that on systems with more than<br>
>> >> 2**63 bytes of disk space, MPI_Count can hold such values,<br>
>> >> because<br>
>> >> int .LE. {MPI_Aint, MPI_Offset} .LE. MPI_Count<br>
>> >><br>
>> >> Therefore size_t => MPI_Count should always work.<br>
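>> >><br>
>> >> As a compile-time sanity check of this claim (my own sketch, using C11<br>
>> >> _Static_assert; it checks only widths, not signedness):<br>
>> >><br>
>> >>   #include <stddef.h>  /* size_t */<br>
>> >>   #include <mpi.h><br>
>> >>   _Static_assert(sizeof(MPI_Count) >= sizeof(size_t),<br>
>> >>                  "size_t values should fit into MPI_Count");<br>
>> >>   _Static_assert(sizeof(MPI_Count) >= sizeof(MPI_Aint),<br>
>> >>                  "int .LE. {MPI_Aint, MPI_Offset} .LE. MPI_Count");<br>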
>> >><br>
>> >> > ssize_t => Mostly for error handling. Out of scope for MPI?<br>
>> >> "In short, ssize_t is the same as size_t, but is a signed type -<br>
>> >> read ssize_t as “signed size_t”. ssize_t is able to represent<br>
>> >> the number -1, which is returned by several system calls<br>
>> >> and library functions as a way to indicate error.<br>
>> >> For example, the read and write system calls: ...<br>
>> >> ssize_t read(int fildes, void *buf, size_t nbyte); ..."<br>
>> >><br>
>> >> ssize_t therefore fits better to MPI_Aint, because both<br>
>> >> are signed types that can hold byte counts, but<br>
>> >> the value -1 in an MPI_Aint variable stands for a<br>
>> >> byte displacement of -1 bytes and not for an error code of -1.<br>
>> >><br>
>> >><br>
>> >> All uses of (2) are in principle no problem.<br>
>> >> ------------------------------------------<br>
>> >><br>
>> >> All the complex discussion of the last few days is about (1):<br>
>> >><br>
>> >> (1) An absolute address can be stored in an MPI_Aint variable<br>
>> >> with, and only with, MPI_Get_address or MPI_Aint_add.<br>
>> >><br>
>> >> In MPI-1 to MPI-3.0, and still in MPI-3.1 (here as possibly not portable),<br>
>> >> we also allow<br>
>> >> MPI_Aint variable := absolute address in MPI_Aint variable<br>
>> >> + or -<br>
>> >> a number of bytes (in any integer type).<br>
>> >><br>
>> >> The result is then still in category (1).<br>
>> >><br>
>> >><br>
>> >> For the difference of two absolute addresses,<br>
>> >> MPI_Aint_diff can be used. The result is then an MPI_Aint of category (2).<br>
>> >><br>
>> >> In MPI-1 to MPI-3.0, and still in MPI-3.1 (here as possibly not portable),<br>
>> >> we also allow<br>
>> >> MPI_Aint variable := absolute address in MPI_Aint variable<br>
>> >> - absolute address in MPI_Aint variable.<br>
>> >><br>
>> >> The result is then in category (2).<br>
>> >><br>
>> >><br>
>> >> The problems we have discussed in the last days concern systems<br>
>> >> that internally use unsigned addresses, where the MPI library stores<br>
>> >> these addresses into MPI_Aint variables, and<br>
>> >><br>
>> >> (a) a sequential storage can have some virtual addresses<br>
>> >> in the area with highest bit = 0 and other addresses<br>
>> >> in the same sequential storage (i.e., the same array or structure)<br>
>> >> with highest bit = 1.<br>
>> >><br>
>> >> or<br>
>> >> (b) some higher bits contain segment addresses.<br>
>> >><br>
>> >> (b) is not a problem as long as a sequential storage always resides<br>
>> >> within one segment.<br>
>> >><br>
>> >> Therefore, we only have to discuss (a).<br>
>> >><br>
>> >> The two problems that we have are:<br>
>> >> (a1) for the minus operation, an integer overflow will<br>
>> >> happen and must be ignored;<br>
>> >> (a2) if such addresses are widened to larger variables,<br>
>> >> e.g., an MPI_Count with more bits than MPI_Aint,<br>
>> >> sign extension will result in completely wrong results.<br>
>> >><br>
>> >> And here, the simplest trick is<br>
>> >> (a3) that MPI_Get_address really shall<br>
>> >> map the contiguous unsigned range from 0 to 2**64-1 to the<br>
>> >> signed (and also contiguous) range from -2**63 to 2**63-1<br>
>> >> by simply subtracting 2**63.<br>
>> >> With this simple trick in MPI_Get_address, problems<br>
>> >> (a1) and (a2) are resolved.<br>
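>> >><br>
>> >> In C, the (a3) mapping could look like this (an illustration only,<br>
>> >> assuming 64-bit addresses and two's complement; flipping the top bit<br>
>> >> equals subtracting 2**63 modulo 2**64):<br>
>> >><br>
>> >>   #include <stdint.h><br>
>> >>   /* map unsigned 0 .. 2**64-1 monotonically onto<br>
>> >>      signed -2**63 .. 2**63-1 */<br>
>> >>   int64_t aint_from_address(uint64_t a) {<br>
>> >>       return (int64_t) (a ^ UINT64_C(0x8000000000000000));<br>
>> >>   }<br>
>> >>   /* the inverse, e.g., for routines that accept MPI_BOTTOM */<br>
>> >>   uint64_t address_from_aint(int64_t v) {<br>
>> >>       return ((uint64_t) v) ^ UINT64_C(0x8000000000000000);<br>
>> >>   }<br>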
>> >><br>
>> >> It looks like (a), and therefore (a1) and (a2),<br>
>> >> may be far in the future.<br>
>> >> But they may be less far in the future if a system<br>
>> >> maps the whole application's cluster address space<br>
>> >> into virtual memory (not cache-coherent, but accessible).<br>
>> >><br>
>> >><br>
>> >> And all this is never, or only partially, written into the<br>
>> >> MPI Standard, although all of it is (well) known by the MPI Forum,<br>
>> >> with the following exceptions:<br>
>> >> - (a2) is new.<br>
>> >> - (a1) is solved in MPI-3.1 only for MPI_Aint_diff and<br>
>> >> MPI_Aint_add, but not for the operators - and +,<br>
>> >> if a user switches on integer overflow detection<br>
>> >> in the future when we have such large systems.<br>
>> >> - (a3) is new and in principle solves the problem also<br>
>> >> for the + and - operators.<br>
>> >><br>
>> >> At least (a1)+(a2) should be added as a rationale to MPI-4.0,<br>
>> >> and (a3) as advice to implementors within the framework<br>
>> >> of big count, because (a2) newly arises with big count.<br>
>> >><br>
>> >> I hope this helps a bit if you took the time to read<br>
>> >> this long email.<br>
>> >><br>
>> >> Best regards<br>
>> >> Rolf<br>
>> >><br>
>> >><br>
>> >><br>
>> >> ----- Original Message -----<br>
>> >> > From: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
>> >> > To: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
>> >> > Cc: "Jim Dinan" <<a href="mailto:james.dinan@gmail.com" target="_blank">james.dinan@gmail.com</a>>, "James Dinan" <<br>
>> >> <a href="mailto:james.dinan@intel.com" target="_blank">james.dinan@intel.com</a>><br>
>> >> > Sent: Monday, October 28, 2019 5:07:37 PM<br>
>> >> > Subject: Re: [Mpiwg-large-counts] Large Count - the principles for<br>
>> >> counts, sizes, and byte and nonbyte displacements<br>
>> >><br>
>> >> > Still not sure I see the issue. MPI's memory-related integers should map to<br>
>> >> > types that serve the same function in C. If the base language is broken for<br>
>> >> > segmented addressing, we won't be able to fix it in a library. Looking at the<br>
>> >> > mapping below, I don't see where we would have broken it:<br>
>> >> ><br>
>> >> > intptr_t => MPI_Aint<br>
>> >> > uintptr_t => ??? (Anyone remember the MPI_Auint "golden Aint" proposal?)<br>
>> >> > ptrdiff_t => MPI_Aint<br>
>> >> > size_t (sizeof) => MPI_Count, int<br>
>> >> > size_t (offsetof) => MPI_Aint, int<br>
>> >> > ssize_t => Mostly for error handling. Out of scope for MPI?<br>
>> >> ><br>
>> >> > It sounds like there are some places where we used MPI_Aint in place of size_t<br>
>> >> > for sizes. Not great, but MPI_Aint already needs to be at least as large as<br>
>> >> > size_t, so this seems benign.<br>
>> >> ><br>
>> >> > ~Jim.<br>
>> >> ><br>
>> >> > On Fri, Oct 25, 2019 at 8:25 PM Dinan, James via mpiwg-large-counts<br>
>> >> > <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:<br>
>> >> ><br>
>> >> > Jeff, thanks so much for opening up these old wounds. I’m not sure I have enough<br>
>> >> > context to contribute to the discussion. Where can I read up on the issue with<br>
>> >> > MPI_Aint?<br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> > I’m glad to hear that C signed integers will finally have a well-defined<br>
>> >> > representation.<br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> > ~Jim.<br>
>> >> ><br>
>> >> > From: Jeff Hammond <<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a>><br>
>> >> > Date: Thursday, October 24, 2019 at 7:03 PM<br>
>> >> > To: "Jeff Squyres (jsquyres)" <<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a>><br>
>> >> > Cc: MPI BigCount Working Group <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>>, "Dinan, James" <<a href="mailto:james.dinan@intel.com" target="_blank">james.dinan@intel.com</a>><br>
>> >> > Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts,<br>
>> >> > sizes, and byte and nonbyte displacements<br>
>> >> ><br>
>> >> > Jim (cc) suffered the most in MPI 3.0 days because of AINT_DIFF and AINT_SUM, so<br>
>> >> > maybe he wants to create this ticket.<br>
>> >> ><br>
>> >> > Jeff<br>
>> >> ><br>
>> >> > On Thu, Oct 24, 2019 at 2:41 PM Jeff Squyres (jsquyres)<br>
>> >> > <<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a>> wrote:<br>
>> >> ><br>
>> >> > Not opposed to ditching segmented addressing at all. We'd need a ticket for this<br>
>> >> > ASAP, though.<br>
>> >> ><br>
>> >> > This whole conversation is predicated on:<br>
>> >> ><br>
>> >> > - MPI supposedly supports segmented addressing<br>
>> >> ><br>
>> >> ><br>
>> >> > - MPI_Aint is not sufficient for modern segmented addressing (i.e., representing<br>
>> >> > an address that may not be in main RAM and is not mapped into the current<br>
>> >> > process' linear address space)<br>
>> >> ><br>
>> >> > If we no longer care about segmented addressing, that makes a whole bunch of<br>
>> >> > BigCount stuff a LOT easier. E.g., MPI_Aint can basically be a<br>
>> >> > non-segment-supporting address integer. AINT_DIFF and AINT_SUM can go away,<br>
>> >> > too.<br>
>> >> ><br>
>> >> > On Oct 24, 2019, at 5:35 PM, Jeff Hammond via mpiwg-large-counts<br>
>> >> > <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:<br>
>> >> ><br>
>> >> > Rolf:<br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> > Before anybody spends any time analyzing how we handle segmented addressing, I<br>
>> >> > want you to provide an example of a platform where this is relevant. What<br>
>> >> > system can you boot today that needs this, and what MPI libraries have expressed<br>
>> >> > an interest in supporting it?<br>
>> >> ><br>
>> >> > For anyone who didn't hear, ISO C and C++ have finally committed to<br>
>> >> > twos-complement integers (<a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html</a>,<br>
>> >> > <a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm</a>) because modern<br>
>> >> > programmers should not be limited by hardware designs from the 1960s. We should<br>
>> >> > similarly not waste our time on obsolete features like segmentation.<br>
>> >> ><br>
>> >> > Jeff<br>
>> >> ><br>
>> >> > On Thu, Oct 24, 2019 at 10:13 AM Rolf Rabenseifner via mpiwg-large-counts<br>
>> >> > <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:<br>
>> >> ><br>
>> >> >> I think that changes the conversation entirely, right?<br>
>> >> ><br>
>> >> > Not the first part, the state-of-current-MPI.<br>
>> >> ><br>
>> >> > It may change something for the future, or a new interface may be needed.<br>
>> >> ><br>
>> >> > Please, can you describe how MPI_Get_address can work with the<br>
>> >> > different variables from different memory segments?<br>
>> >> ><br>
>> >> > Or whether a completely new function or a set of functions is needed.<br>
>> >> ><br>
>> >> > If we can still express variables from all memory segments as<br>
>> >> > input to MPI_Get_address, there may still be a way to flatten<br>
>> >> > the result of some internal address inquiry into a<br>
>> >> > signed integer with the same behavior as MPI_Aint today.<br>
>> >> ><br>
>> >> > If this is impossible, then a new way of thinking and a new solution<br>
>> >> > may be needed.<br>
>> >> ><br>
>> >> > I really want to see examples for all current stuff as you<br>
>> >> > mentioned in your last email.<br>
>> >> ><br>
>> >> > Best regards<br>
>> >> > Rolf<br>
>> >> ><br>
>> >> > ----- Original Message -----<br>
>> >> >> From: "Jeff Squyres" < [ mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> |<br>
>> <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a><br>
>> >> ] ><br>
>> >> >> To: "Rolf Rabenseifner" < [ mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> |<br>
>> >> <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ]<br>
>> >> >> ><br>
>> >> >> Cc: "mpiwg-large-counts" < [ mailto:<br>
>> >> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> |<br>
>> >> >> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] ><br>
>> >> >> Sent: Thursday, October 24, 2019 5:27:31 PM<br>
>> >> >> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for<br>
>> >> counts,<br>
>> >> >> sizes, and byte and nonbyte displacements<br>
>> >> ><br>
>> >> >> On Oct 24, 2019, at 11:15 AM, Rolf Rabenseifner<br>
>> >> >> <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>> wrote:<br>
>> >> >><br>
>> >> >> For me, it looked like there was some misunderstanding<br>
>> >> >> of the concept that absolute and relative addresses<br>
>> >> >> and numbers of bytes can be stored in MPI_Aint.<br>
>> >> >><br>
>> >> >> ...with the caveat that MPI_Aint -- as it is right now -- does not support<br>
>> >> >> modern segmented memory systems (i.e., where you need more than a small<br>
>> >> >> number of bits to indicate the segment where the memory lives).<br>
>> >> >><br>
>> >> >> I think that changes the conversation entirely, right?<br>
>> >> >><br>
>> >> >> --<br>
>> >> >> Jeff Squyres<br>
>> >> >> <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a><br>
>> >> ><br>
>> >> > --<br>
>> >> > Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> .<br>
>> >> > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
>> >> > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
>> >> > Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">www.hlrs.de/people/rabenseifner</a> .<br>
>> >> > Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
>> >> > _______________________________________________<br>
>> >> > mpiwg-large-counts mailing list<br>
>> >> > <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a><br>
>> >> > <a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts</a><br>
>> >> ><br>
>> >> > --<br>
>> >> ><br>
>> >> ><br>
>> >> > Jeff Hammond<br>
>> >> > <a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br>
>> >> > <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a><br>
>> >> ><br>
>> >> ><br>
>> >> > _______________________________________________<br>
>> >> > mpiwg-large-counts mailing list<br>
>> >> > <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a><br>
>> >> > <a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts</a><br>
>> >> ><br>
>> >> > --<br>
>> >> > Jeff Squyres<br>
>> >> > <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a><br>
>> >> ><br>
>> >> > --<br>
>> >> ><br>
>> >> ><br>
>> >> > Jeff Hammond<br>
>> >> > <a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br>
>> >> > <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a><br>
>> >> > _______________________________________________<br>
>> >> > mpiwg-large-counts mailing list<br>
>> >> > <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a><br>
>> >> > <a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts</a><br>
>> >> ><br>
>> >> > _______________________________________________<br>
>> >> > mpiwg-large-counts mailing list<br>
>> >> > <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a><br>
>> >> > <a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts</a><br>
>> >><br>
>> >> --<br>
>> >> Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> .<br>
>> >> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
>> >> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
>> >> Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">www.hlrs.de/people/rabenseifner</a> .<br>
>> >> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
>><br>
>> --<br>
>> Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> .<br>
>> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
>> Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">www.hlrs.de/people/rabenseifner</a> .<br>
>> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
<br>
-- <br>
Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> .<br>
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">www.hlrs.de/people/rabenseifner</a> .<br>
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
</blockquote></div>