[Mpi3-rma] Target displacement sign issue
jhammond at alcf.anl.gov
Tue Aug 28 14:02:09 CDT 2012
Breaking backwards compatibility is worse than whatever portability
issues you can come up with, especially with something as fundamental
On Tue, Aug 28, 2012 at 1:45 PM, Dries Kimpe <dkimpe at mcs.anl.gov> wrote:
> * Jeff Hammond <jhammond at alcf.anl.gov> [2012-08-28 13:22:47]:
>> I imagine that HPC-oriented operating systems do not provide virtual
>> addresses in the problematic range anyway. Maybe VM randomization
>> causes this, but that's not common in HPC.
>> In any case, MPI RMA requires MPI_Alloc_mem for portable performance.
>> MPI_Alloc_mem should be capable of returning only pointers in the
>> lower half of the address space, if by no other means than by using mmap.
>> I believe that this is only a problem on 32-bit systems where losing 2
>> of the 4 GB is a problem. On a 64-bit system, losing half of the 18
>> exabytes in the address space is unlikely to be a problem, given that
>> an exabyte of DRAM would require something like a white dwarf to
>> power, meaning that it is likely that the OS can find a way to VM map
>> the physical memory present into the lower half of the address range.
> On 32-bit, it is very likely that addresses of stack variables will
> have the highest bit set. This is even more true for 32-bit programs
> running under 64-bit kernels.
> On Linux, on 64-bit x86_64, this will _currently_ not be a problem: all
> user memory will be between 0000000000000000 - 00007fffffffffff.
> Also, remember that these are virtual addresses. If MPI_Alloc_mem is
> calling malloc/new (which can call mmap for large allocations), addresses
> will come from the high range, where the sign bit can be set.
> But all this is beside the point. The address map/choices are not fixed,
> can change at any time, and differ between operating systems,
> and MPI cannot change how the C library/runtime returns memory.
> (And then we're ignoring tools like valgrind, memory profilers, ...)
> Ignoring the sign issue will very likely cause portability issues.
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381