[Mpi3-rma] [EXTERNAL] Re: Available size and number of shared memory windows with MPI_WIN_ALLOCATE_SHARED

Jeff Hammond jhammond at alcf.anl.gov
Tue Jun 4 12:02:52 CDT 2013


> As I've learned from you both, I would write:
>
> --------------
> Caution:
>  On some systems
>  (e.g., when MPI shared memory support is based on POSIX shared memory)
>   - the number of shared memory windows, and
>   - the total size of shared memory windows
>  may be limited.
>  Some OS systems may provide Options, e.g.,
>   - at job launch, or
>   - MPI process start,
>  to enlarge restricting defaults.
> ---------------
>
> Would you modify this text, or do you have additional remarks?

I think what you've written is excellent.  It's both necessary and
sufficient.  It is also worth mentioning that the trivial case where
MPI_COMM_SPLIT_TYPE(type=MPI_COMM_TYPE_SHARED) returns MPI_COMM_SELF
is always going to work and should have no limitations due to the
implementation of shared memory in the OS, etc.
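For reference, here is a minimal sketch of the pattern we are talking
about, assuming an MPI-3 implementation; the sizes and variable names
are arbitrary placeholders, not anything from the proposed standard
text above.  It splits MPI_COMM_WORLD by shared-memory capability,
allocates a shared window on the resulting node communicator, and
queries a peer's segment.  With one process per node the node
communicator degenerates to the MPI_COMM_SELF-like case and the
allocation is purely local.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Communicator of processes that can share memory (typically one node). */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Allocate a shared-memory window; the size here is an arbitrary example. */
    MPI_Aint local_bytes = 1024 * sizeof(double);
    double *base = NULL;
    MPI_Win win;
    MPI_Win_allocate_shared(local_bytes, sizeof(double), MPI_INFO_NULL,
                            node_comm, &base, &win);

    /* Query another rank's segment; with a single process per node this is
       trivially the local allocation. */
    MPI_Aint size;
    int disp_unit;
    double *peer_base;
    MPI_Win_shared_query(win, (node_rank + 1) % node_size, &size,
                         &disp_unit, &peer_base);

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}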

The other thing is that it is entirely possible for an implementation
to have limitations on the number and size of windows due to how it
uses pinned buffers for RDMA, but I don't know of any cases where this
is actually a problem.  InfiniBand implementations don't register so
aggressively that window creation fails due to ibv_reg_mr failure, but
I know that ARMCI, which registers aggressively, hits OS/NIC limits
for a related usage.
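As an illustration of what a failed window creation would look like to
an application, here is a small sketch (the helper name and the use of
a byte displacement unit are my own invention, not anything from the
discussion above): switching the communicator's error handler to
MPI_ERRORS_RETURN turns an OS or registration limit into an error code
instead of an abort.

#include <mpi.h>
#include <stdio.h>

/* Hypothetical helper: try to allocate a shared window and report
   failure instead of aborting. */
int try_allocate_shared(MPI_Comm node_comm, MPI_Aint bytes,
                        void **base, MPI_Win *win)
{
    /* By default MPI aborts on error; ask for error codes instead. */
    MPI_Comm_set_errhandler(node_comm, MPI_ERRORS_RETURN);

    int rc = MPI_Win_allocate_shared(bytes, 1, MPI_INFO_NULL,
                                     node_comm, base, win);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Win_allocate_shared failed: %s\n", msg);
    }
    return rc;
}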

>> I would say that the standard is not the right place to document this
>> sort of thing, but hopefully vendors and computing facilities will
>> document it prominently because it's an essential consideration for
>> MPI+MPI. It would be useful if someone would set up a resource
>> documenting the hoops one has to jump through on all major platforms.
>
> That would be helpful.

This is the type of thing that I try to capture on my Wiki.  For
example, asynchronous progress is very platform-specific so I document
the details: https://wiki.alcf.anl.gov/parts/index.php/MPI#Asynchronous_Progress.
There are not yet enough MPI-3 implementations to bother with such
details, but I assume that by the end of the year there will be.
We don't currently use shared memory windows in ARMCI-MPI but if we
do, then I'll definitely have documentation on how to use them
properly on http://wiki.mpich.org/armci-mpi/index.php/Main_Page.

Jeff

-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
ALCF docs: http://www.alcf.anl.gov/user-guides


