[Mpi3-rma] Available size and number of shared memory windows with MPI_WIN_ALLOCATE_SHARED

Rolf Rabenseifner rabenseifner at hlrs.de
Tue Jun 4 00:32:48 CDT 2013


Jeff and all,

I'm not familiar with the limits of POSIX shared memory.

Are there significant limits? 
Or is it possible to use most of the physical memory as shared memory?
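
For concreteness, what I have in mind is the plain POSIX path (shm_open /
ftruncate / mmap) that Jeff suggests assuming below; the object name and the
1 GiB size in this sketch are only placeholders, and the error checks mark the
places where I would expect a limit to show up. If, as I suspect, the object
ends up in a tmpfs such as /dev/shm on Linux, its size is bounded by that
mount (often about half of physical RAM by default), but I do not know whether
that is the relevant limit for the MPI implementations. (On older glibc the
sketch needs -lrt for shm_open.)

#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>   /* shm_open, mmap, munmap, shm_unlink */
#include <unistd.h>     /* ftruncate, close */

int main(void)
{
    const char  *name = "/shm_limit_test";   /* placeholder object name */
    const size_t size = (size_t)1 << 30;     /* 1 GiB, placeholder size  */

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return EXIT_FAILURE; }

    /* A size limit would show up here or at first touch: on Linux the
     * object typically lives in a tmpfs (/dev/shm), so ftruncate or mmap
     * fail, or the process gets SIGBUS on first touch, once that mount
     * is exhausted.                                                     */
    if (ftruncate(fd, (off_t)size) != 0) { perror("ftruncate"); return EXIT_FAILURE; }

    void *base = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    memset(base, 0, size);   /* touching the pages actually commits the memory */

    munmap(base, size);
    close(fd);
    shm_unlink(name);        /* remove the object again */
    return EXIT_SUCCESS;
}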

Best regards
Rolf

----- Original Message -----
> From: "Jeff Hammond" <jeff.science at gmail.com>
> To: "MPI 3.0 Remote Memory Access working group" <mpi3-rma at lists.mpi-forum.org>
> Sent: Monday, June 3, 2013 10:42:37 PM
> Subject: Re: [Mpi3-rma] Available size and number of shared memory windows with MPI_WIN_ALLOCATE_SHARED
> This is totally platform specific. MPICH on BGQ is different than on
> Linux, for example. This question is thus mostly unanswerable. Your
> best bet is to consider the POSIX shared memory spec and assume that
> MPI uses it.
> 
> Jeff
> 
> Sent from my iPhone
> 
> On Jun 3, 2013, at 3:14 PM, Rolf Rabenseifner <rabenseifner at hlrs.de>
> wrote:
> 
> > Dear implementers of MPI-3.0 MPI_WIN_ALLOCATE_SHARED,
> >
> > my question is not about the interface, but about the implementations:
> >
> > If a set of MPI processes within a shared-memory node and a
> > communicator wants to allocate a shared memory window:
> >
> > - Are there limits similar to those when a set of threads allocates
> >   memory and uses it as global memory shared by all threads, i.e.,
> >   is it possible to allocate most of the physical
> >   memory with MPI_WIN_ALLOCATE_SHARED?
> >
> > - Is there an additional restriction on the number of shared memory
> >   windows?
> >
> > - Is there an additional restriction if one process defines the
> >   whole size as the window size and all other processes within
> >   the SMP node use size=0? (See the sketch further below.)
> >
> > - Do you know of other restrictions if I want to replace
> >   OpenMP (not OpenMPI!) with shared-memory MPI programming,
> >   i.e., hybrid MPI+MPI instead of hybrid MPI+OpenMP?
> >
> > Is your answer generally valid for mpich and OpenMPI, or must
> > I expect additional restrictions in some vendors' MPIs?
> >
> > The question is about the real implementations, e.g. mpich and OpenMPI,
> > but I expect that our MPI-3 RMA working group knows the answer.
> >
> > Best regards
> > Rolf
> >
> >
> > --
> > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> > Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
> _______________________________________________
> mpi3-rma mailing list
> mpi3-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
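
For reference, the allocation pattern from my third question above (one
process providing the whole segment, all other processes on the node passing
size = 0, and everybody obtaining the base address with MPI_Win_shared_query)
is roughly the following sketch; the 1 GiB size is a placeholder and error
checking is omitted.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Split COMM_WORLD into communicators of processes that can
     * actually share memory (one per SMP node).                  */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);

    int noderank;
    MPI_Comm_rank(nodecomm, &noderank);

    /* Rank 0 on each node provides the whole segment; everyone
     * else passes size = 0.                                      */
    MPI_Aint  local_size = (noderank == 0) ? (MPI_Aint)1 << 30 : 0;
    double   *local_ptr  = NULL;
    MPI_Win   win;
    MPI_Win_allocate_shared(local_size, sizeof(double), MPI_INFO_NULL,
                            nodecomm, &local_ptr, &win);

    /* Every rank queries the base address of rank 0's contribution
     * and can then load/store into it directly.                   */
    MPI_Aint  qsize;
    int       qdisp;
    double   *shared = NULL;
    MPI_Win_shared_query(win, 0, &qsize, &qdisp, &shared);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);   /* passive-target epoch */
    if (noderank == 0)
        shared[0] = 42.0;                      /* direct store into the shared segment */
    MPI_Win_sync(win);
    MPI_Barrier(nodecomm);
    MPI_Win_sync(win);
    if (noderank == 1)
        printf("rank 1 on the node reads %f\n", shared[0]);
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}

As far as I understand, the Win_sync / Barrier / Win_sync sequence plays the
role of an OpenMP barrier with an implied flush in such a hybrid MPI+MPI
program, but that is exactly the kind of detail I would like the implementers
to confirm.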

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)


