<div dir="ltr">Still not sure I see the issue. MPI's memory-related integers should map to types that serve the same function in C. If the base language is broken for segmented addressing, we won't be able to fix it in a library. Looking at the mapping below, I don't see where we would have broken it:<div><br></div><div>intptr_t => MPI_Aint</div><div>uintptr_t => ??? (Anyone remember the MPI_Auint "golden Aint" proposal?)</div><div>ptrdiff_t => MPI_Aint</div><div>size_t (sizeof) => MPI_Count, int</div><div>size_t (offsetof) => MPI_Aint, int</div><div>ssize_t => Mostly for error handling. Out of scope for MPI?</div><div><br></div><div>It sounds like there are some places where we used MPI_Aint in place of size_t for sizes. Not great, but MPI_Aint already needs to be at least as large as size_t, so this seems benign.</div><div><br></div><div> ~Jim.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Oct 25, 2019 at 8:25 PM Dinan, James via mpiwg-large-counts <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div lang="EN-US">
<div class="gmail-m_6621273115356735079WordSection1">
<p class="MsoNormal">Jeff, thanks so much for opening up these old wounds. I’m not sure I have enough context to contribute to the discussion. Where can I read up on the issue with MPI_Aint?</p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">I’m glad to hear that C signed integers will finally have a well-defined representation.</p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">~Jim.</p>
<p class="MsoNormal"><u></u> <u></u></p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(181,196,223);padding:3pt 0in 0in">
<p class="MsoNormal"><b><span style="font-size:12pt;color:black">From: </span></b><span style="font-size:12pt;color:black">Jeff Hammond <<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a>><br>
<b>Date: </b>Thursday, October 24, 2019 at 7:03 PM<br>
<b>To: </b>"Jeff Squyres (jsquyres)" <<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a>><br>
<b>Cc: </b>MPI BigCount Working Group <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>>, "Dinan, James" <<a href="mailto:james.dinan@intel.com" target="_blank">james.dinan@intel.com</a>><br>
<b>Subject: </b>Re: [Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements<u></u><u></u></span></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Jim (cc) suffered the most in MPI 3.0 days because of AINT_DIFF and AINT_SUM, so maybe he wants to create this ticket.
</p>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Jeff</p>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">On Thu, Oct 24, 2019 at 2:41 PM Jeff Squyres (jsquyres) <<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a>> wrote:</p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<div>
<p class="MsoNormal">Not opposed to ditching segmented addressing at all. We'd need a ticket for this ASAP, though.
</p>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">This whole conversation is predicated on:</p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">- MPI supposedly supports segmented addressing</p>
</div>
<div>
<p class="MsoNormal">- MPI_Aint is not sufficient for modern segmented addressing (i.e., representing an address that may not be in main RAM and is not mapped into the current process's linear address space)</p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">If we no longer care about segmented addressing, that makes a whole bunch of BigCount stuff a LOT easier. E.g., MPI_Aint can basically be a non-segment-supporting address integer. AINT_DIFF and AINT_SUM can go away, too.</p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<p class="MsoNormal" style="margin-bottom:12pt"><u></u> <u></u></p>
<blockquote style="margin-top:5pt;margin-bottom:5pt">
<div>
<p class="MsoNormal">On Oct 24, 2019, at 5:35 PM, Jeff Hammond via mpiwg-large-counts <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:</p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">Rolf: </p>
<div>
<p class="MsoNormal"><br>
Before anybody spends any time analyzing how we handle segmented addressing, I want you to provide an example of a platform where this is relevant. What system can you boot today that needs this and what MPI libraries have expressed an interest in supporting it?</p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">For anyone who didn't hear, ISO C and C++ have finally committed to twos-complement integers (<a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html" target="_blank">http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html</a>, <a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm" target="_blank">http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm</a>)
because modern programmers should not be limited by hardware designs from the 1960s. We should similarly not waste our time on obsolete features like segmentation.</p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Jeff</p>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">On Thu, Oct 24, 2019 at 10:13 AM Rolf Rabenseifner via mpiwg-large-counts <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:</p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<p class="MsoNormal">> I think that changes the conversation entirely, right?<br>
<br>
Not the first part, the state of current MPI.<br>
<br>
It may change something for the future, or a new interface may be needed.<br>
<br>
Please, can you describe how MPI_Get_address can work with<br>
variables from different memory segments?<br>
<br>
Or whether a completely new function or a set of functions is needed.<br>
<br>
If we can still express variables from all memory segments as<br>
input to MPI_Get_address, there may still be a way to flatten<br>
the result of some internal address inquiry into a<br>
signed integer with the same behavior as MPI_Aint today.<br>
<br>
If this is impossible, then a new way of thinking and a new<br>
solution may be needed.<br>
<br>
I really want to see examples for all the current cases you<br>
mentioned in your last email.<br>
<br>
Best regards<br>
Rolf<br>
<br>
----- Original Message -----<br>
> From: "Jeff Squyres" <<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a>><br>
> To: "Rolf Rabenseifner" <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>><br>
> Cc: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
> Sent: Thursday, October 24, 2019 5:27:31 PM<br>
> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements<br>
<br>
> On Oct 24, 2019, at 11:15 AM, Rolf Rabenseifner<br>
> <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a><mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>>> wrote:<br>
> <br>
> For me, it looked like there was some misunderstanding<br>
> of the concept that absolute and relative addresses<br>
> and numbers of bytes can be stored in MPI_Aint.<br>
> <br>
> ...with the caveat that MPI_Aint -- as it is right now -- does not support<br>
> modern segmented memory systems (i.e., where you need more than a small number<br>
> of bits to indicate the segment where the memory lives).<br>
> <br>
> I think that changes the conversation entirely, right?<br>
> <br>
> --<br>
> Jeff Squyres<br>
> <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a><mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a>><br>
<br>
-- <br>
Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">
rabenseifner@hlrs.de</a> .<br>
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" target="_blank">
www.hlrs.de/people/rabenseifner</a> .<br>
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
_______________________________________________<br>
mpiwg-large-counts mailing list<br>
<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a><br>
<a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts</a></p>
</blockquote>
</div>
<p class="MsoNormal"><br clear="all">
</p>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<p class="MsoNormal">-- </p>
<div>
<p class="MsoNormal">Jeff Hammond<br>
<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br>
<a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></p>
</div>
</div>
</blockquote>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<p class="MsoNormal"><br>
-- <br>
Jeff Squyres<br>
<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> </p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div>
</blockquote>
</div>
<p class="MsoNormal"><br clear="all">
</p>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<p class="MsoNormal">-- </p>
<div>
<p class="MsoNormal">Jeff Hammond<br>
<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br>
<a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></p>
</div>
</div>
</div>
</blockquote></div>