[Mpi3-rma] Available size and number of shared memory windows with MPI_WIN_ALLOCATE_SHARED
maik peterson
maikpeterson at googlemail.com
Tue Jun 4 03:52:38 CDT 2013
People, shmem, POSIX or not, this is technology from yesterday! Wake up.
2013/6/3 Rolf Rabenseifner <rabenseifner at hlrs.de>
> Dear implementers of MPI-3.0 MPI_WIN_ALLOCATE_SHARED,
>
> my question is not about the interface, but about the implementations:
>
> If a set of MPI processes within a shared-memory node and a communicator
> wants to allocate a shared memory window:
>
> - Are there limits similar to those when a set of threads allocates
>   memory and uses it as global memory shared by all threads, i.e.,
>   is it possible to allocate most of the physical memory
>   with MPI_WIN_ALLOCATE_SHARED?
>
> - Is there an additional restriction on the number of shared memory
> windows?
>
> - Is there an additional restriction if one process defines the
> whole size as window size and all other processes within
> the SMP node use size=0?
>
> - Do you know of other restrictions if I want to replace
>   OpenMP (not OpenMPI!) with shared memory MPI programming,
>   i.e. hybrid MPI+MPI instead of hybrid MPI+OpenMP?
>
> Is your answer generally valid for MPICH and Open MPI, and must
> I expect additional restrictions in some vendors' MPIs?
>
> This question is about the real implementations, e.g. MPICH and Open MPI,
> but I expect that our MPI-3 RMA working group knows the answer.
>
> Best regards
> Rolf
>
>
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
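For concreteness, below is a minimal sketch of the allocation pattern asked about above, assuming an MPI-3.0 library: node rank 0 passes the whole size to MPI_Win_allocate_shared, all other on-node processes pass size = 0, and every process obtains the base address of the shared segment with MPI_Win_shared_query. The routines used (MPI_Comm_split_type, MPI_Win_allocate_shared, MPI_Win_shared_query) are standard MPI-3.0 calls; the array size and the read/write at the end are illustrative only and say nothing about the implementation limits in question.

/* Minimal sketch: one process per node allocates the whole shared
 * segment, the other on-node processes contribute size = 0, and all
 * of them access it by direct load/store.
 * Compile with an MPI-3 wrapper, e.g. "mpicc shared_win.c -o shared_win". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Communicator containing only the processes on this shared-memory node. */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);

    int noderank, nodesize;
    MPI_Comm_rank(nodecomm, &noderank);
    MPI_Comm_size(nodecomm, &nodesize);

    /* Node rank 0 allocates the whole segment; everyone else passes 0 bytes.
     * nelems is an illustrative size only. */
    const MPI_Aint nelems = 1024;
    MPI_Aint mysize = (noderank == 0) ? nelems * (MPI_Aint)sizeof(double) : 0;
    double *base = NULL;
    MPI_Win win;
    MPI_Win_allocate_shared(mysize, sizeof(double), MPI_INFO_NULL,
                            nodecomm, &base, &win);

    /* Every on-node process queries where rank 0's segment starts. */
    MPI_Aint qsize;
    int qdisp;
    double *shared = NULL;
    MPI_Win_shared_query(win, 0, &qsize, &qdisp, &shared);

    /* Direct load/store access, synchronized here with a simple fence epoch. */
    MPI_Win_fence(0, win);
    if (noderank == 0)
        for (MPI_Aint i = 0; i < nelems; i++)
            shared[i] = (double)i;
    MPI_Win_fence(0, win);

    if (noderank == nodesize - 1)
        printf("node rank %d reads shared[10] = %.1f\n", noderank, shared[10]);

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}

Whether most of the node's physical memory can be allocated this way, and how many such windows can coexist, depends on the mechanism the MPI library uses underneath (typically POSIX shared memory or SysV segments) and the corresponding OS limits, which is exactly the implementation question raised above.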