[Mpi3-rma] [EXTERNAL] Re: Available size and number of shared memory windows with MPI_WIN_ALLOCATE_SHARED
Rolf Rabenseifner
rabenseifner at hlrs.de
Tue Jun 4 11:13:56 CDT 2013
Jeff, Brian, Jed, and all,
I'm currently preparing slides for our ISC13 tutorial
on SMP parallel programming with MPI and OpenMP.
We'll present the new MPI shared memory interface as
an alternative to OpenMP, i.e., MPI+MPI instead of MPI+OpenMP.
We cite
Torsten Hoefler, James Dinan, Darius Buntinas, Pavan Balaji, Brian Barrett,
Ron Brightwell, William Gropp, Vivek Kale, Rajeev Thakur:
MPI + MPI: a new hybrid approach to parallel programming with MPI plus shared memory.
http://link.springer.com/content/pdf/10.1007%2Fs00607-013-0324-2.pdf
But I also want to mention restrictions.
As I've learned from you both, I would write:
--------------
Caution:
On some systems
(e.g., when MPI shared memory support is based on POSIX shared memory)
- the number of shared memory windows, and
- the total size of shared memory windows
may be limited.
Some operating systems may provide options, e.g.,
- at job launch, or
- at MPI process start,
to enlarge these restrictive defaults.
---------------
Would you modify this text, or do you have additional remarks?
> I would say that the standard is not the right place to document this
> sort of thing, but hopefully vendors and computing facilities will
> document it prominently because it's an essential consideration for
> MPI+MPI. It would be useful if someone would set up a resource
> documenting the hoops one has to jump through on all major platforms.
That would be helpful.
I would like to include and cite it in our SC13 tutorial in November.
For ISC13 now, there may be too little time to gather the needed details.
> > In principle, to stay within one programming model is attractive.
> > Therefore, I want to learn what are the limits of
> > MPI_WIN_ALLOCATE_SHARED.
>
> As Jeff says, it's more about the operating system. The MPI standard
> does not specify whether environment steps will be necessary in order
> to use MPI_Win_allocate_shared.
Yes, this is not a discussion on the MPI Standard.
It is a discussion about the real implementations and the
implications for users who want to use these new features.
Best regards and many thanks for your helpful answers so far,
Rolf
----- Original Message -----
> From: "Brian W Barrett" <bwbarre at sandia.gov>
> To: "MPI 3.0 Remote Memory Access working group" <mpi3-rma at lists.mpi-forum.org>, "Rolf Rabenseifner"
> <rabenseifner at hlrs.de>
> Sent: Tuesday, June 4, 2013 5:15:50 PM
> Subject: Re: [EXTERNAL] Re: [Mpi3-rma] Available size and number of shared memory windows with MPI_WIN_ALLOCATE_SHARED
> On 6/4/13 4:18 AM, "Jed Brown" <jedbrown at mcs.anl.gov> wrote:
>
> Rolf Rabenseifner <rabenseifner at hlrs.de> writes:
>
> I would say that the standard is not the right place to document this
> sort of thing, but hopefully vendors and computing facilities will
> document it prominently because it's an essential consideration for
> MPI+MPI. It would be useful if someone would set up a resource
> documenting the hoops one has to jump through on all major platforms.
>
> I think we're still waiting to figure out what those hoops are. For
> example, on platforms with XPMEM (Cray XT/XE/XC), the answer is have
> as many windows that are as big as you can allocate memory for. On
> most of the other platforms with Open MPI, it'll be more along the
> lines of what you could allocate with POSIX shared memory interfaces,
> which is usually a high number of segments, but the total space is
> probably limited to a relatively small subset of the total memory
> (25%?).
>
> Brian
>
> --
> Brian W. Barrett
> Scalable System Software Group
> Sandia National Laboratories
----- Original Message -----
> From: "Jed Brown" <jedbrown at mcs.anl.gov>
> To: "Rolf Rabenseifner" <rabenseifner at hlrs.de>, "MPI 3.0 Remote Memory Access working group"
> <mpi3-rma at lists.mpi-forum.org>
> Sent: Tuesday, June 4, 2013 1:18:21 PM
> Subject: Re: [Mpi3-rma] Available size and number of shared memory windows with MPI_WIN_ALLOCATE_SHARED
> Rolf Rabenseifner <rabenseifner at hlrs.de> writes:
>
> > In principle, to stay within one programming model is attractive.
> > Therefore, I want to learn what are the limits of
> > MPI_WIN_ALLOCATE_SHARED.
>
> As Jeff says, it's more about the operating system. The MPI standard
> does not specify whether environment steps will be necessary in order
> to use MPI_Win_allocate_shared. On a system without virtual memory
> (like CNK on BG/Q), you have to reserve a chunk of address space when
> the node is booted. In these environments, you need contiguous
> physical memory in order to address it contiguously, as provided by
> MPI_Win_allocate_shared.
>
> I would say that the standard is not the right place to document this
> sort of thing, but hopefully vendors and computing facilities will
> document it prominently because it's an essential consideration for
> MPI+MPI. It would be useful if someone would set up a resource
> documenting the hoops one has to jump through on all major platforms.
--
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)