[Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements

Jeff Squyres (jsquyres) jsquyres at cisco.com
Thu Oct 24 16:41:32 CDT 2019


Not opposed to ditching segmented addressing at all.  We'd need a ticket for this ASAP, though.

This whole conversation is predicated on:

- MPI supposedly supports segmented addressing
- MPI_Aint is not sufficient for modern segmented addressing (i.e., representing an address that may not be in main RAM and is not mapped into the current process's linear address space)

If we no longer care about segmented addressing, that makes a whole bunch of BigCount stuff a LOT easier.  E.g., MPI_Aint can basically be a non-segment-supporting address integer.  AINT_DIFF and AINT_SUM can go away, too.



On Oct 24, 2019, at 5:35 PM, Jeff Hammond via mpiwg-large-counts <mpiwg-large-counts at lists.mpi-forum.org> wrote:

Rolf:

Before anybody spends any time analyzing how we handle segmented addressing, I want you to provide an example of a platform where this is relevant.  What system can you boot today that needs this and what MPI libraries have expressed an interest in supporting it?

For anyone who didn't hear, ISO C and C++ have finally committed to twos-complement integers (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html, http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm) because modern programmers should not be limited by hardware designs from the 1960s.  We should similarly not waste our time on obsolete features like segmentation.

Jeff

On Thu, Oct 24, 2019 at 10:13 AM Rolf Rabenseifner via mpiwg-large-counts <mpiwg-large-counts at lists.mpi-forum.org> wrote:
> I think that changes the conversation entirely, right?

Not the first part, i.e., the state of current MPI.

It may change something for the future, or a new interface may be needed.

Please, can you describe how MPI_Get_address can work with
variables from different memory segments, or whether a completely
new function or set of functions is needed.

If we can still express variables from all memory segments as
input to MPI_Get_address, there may still be a way to flatten
the result of some internal address inquiry into a
signed integer with the same behavior as MPI_Aint today.

If this is impossible, then a new way of thinking and a new
solution may be needed.

I really want to see examples for all the current cases you
mentioned in your last email.

Best regards
Rolf

----- Original Message -----
> From: "Jeff Squyres" <jsquyres at cisco.com>
> To: "Rolf Rabenseifner" <rabenseifner at hlrs.de>
> Cc: "mpiwg-large-counts" <mpiwg-large-counts at lists.mpi-forum.org>
> Sent: Thursday, October 24, 2019 5:27:31 PM
> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements

> On Oct 24, 2019, at 11:15 AM, Rolf Rabenseifner
> <rabenseifner at hlrs.de> wrote:
>
> For me, it looked like there was some misunderstanding
> of the concept that absolute addresses, relative addresses,
> and numbers of bytes can all be stored in MPI_Aint.
>
> ...with the caveat that MPI_Aint -- as it is right now -- does not support
> modern segmented memory systems (i.e., where you need more than a small number
> of bits to indicate the segment where the memory lives).
>
> I think that changes the conversation entirely, right?
>
> --
> Jeff Squyres
> jsquyres at cisco.com

--
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de .
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
_______________________________________________
mpiwg-large-counts mailing list
mpiwg-large-counts at lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts


--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/

--
Jeff Squyres
jsquyres at cisco.com

