[mpiwg-rma] Single RMA synchronization for several window handles
Jeff Hammond
jeff.science at gmail.com
Fri Aug 8 12:53:04 CDT 2014
...because dynamic windows require non-scalable metadata and can
inhibit performance on networks that require memory registration.
Pavan can elaborate.
Jeff
On Fri, Aug 8, 2014 at 8:06 AM, William Gropp <wgropp at illinois.edu> wrote:
> Why not simply use MPI_Win_create_dynamic, attach the separate memory
> regions, and use that MPI_Win? Adding an option for a collective memory
> allocator would make this almost equivalent, and wouldn’t add additional
> overhead beyond what is already required.
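>
> (A minimal sketch of that approach, for illustration only; the buffer
> names and sizes below are made up:)
>
>     #include <mpi.h>
>     #include <stdlib.h>
>
>     int main(int argc, char **argv)
>     {
>         MPI_Init(&argc, &argv);
>
>         double *buf_a = malloc(1000 * sizeof(double));
>         double *buf_b = malloc(2000 * sizeof(double));
>
>         /* one dynamic window exposes both separately allocated buffers */
>         MPI_Win win;
>         MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>         MPI_Win_attach(win, buf_a, 1000 * sizeof(double));
>         MPI_Win_attach(win, buf_b, 2000 * sizeof(double));
>         /* remote ranks still need the base addresses, e.g. obtained
>            with MPI_Get_address plus an explicit exchange */
>
>         /* a single fence now covers accesses to all attached regions */
>         MPI_Win_fence(0, win);
>         /* ... MPI_Put / MPI_Get epoch ... */
>         MPI_Win_fence(0, win);
>
>         MPI_Win_detach(win, buf_a);
>         MPI_Win_detach(win, buf_b);
>         MPI_Win_free(&win);
>         free(buf_a); free(buf_b);
>         MPI_Finalize();
>         return 0;
>     }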
>
> Bill
>
> William Gropp
> Director, Parallel Computing Institute
> Thomas M. Siebel Chair in Computer Science
> University of Illinois Urbana-Champaign
>
>
>
>
>
> On Aug 8, 2014, at 9:18 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
>
> Jim,
>
> your topic "Reducing Synchronization Overhead Through Bundled
> Communication" may also benefit if we were able to
> combine several window handles into one superset window handle.
>
> If you have several windows for different buffers, but
> only one synchronization pattern, e.g. MPI_Win_fence,
> then currently you must call MPI_Win_fence separately
> for each window handle.
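>
> For illustration, what one has to write today looks roughly like
> this (variable names made up):
>
>     /* one MPI_Win_fence call per window handle */
>     MPI_Win wins[3];            /* e.g. one window per buffer,
>                                    created elsewhere */
>     for (int i = 0; i < 3; i++)
>         MPI_Win_fence(0, wins[i]);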
>
> I would propose:
>
> MPI_Win_combine(/*IN*/  int       count,
>                 /*IN*/  MPI_Win  *win,
>                 /*IN*/  MPI_Comm  comm,
>                 /*OUT*/ MPI_Win  *win_combined)
>
> The process group of comm must contain the process groups of all
> windows in win.
> The resulting window handle win_combined can be used only
> in RMA synchronization calls and other helper routines,
> but not for dynamic window allocation or for any
> RMA communication routine.
> Collective synchronization routines must be called by all processes
> of comm.
> The semantics of an RMA synchronization call using win_combined
> are defined as if the calls were issued separately for
> each window handle in the array win. If group handles
> are part of the argument list of the synchronization call,
> then the appropriate subset is used for each window handle in win.
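>
> With the proposed routine (which does not exist yet), the same
> synchronization could then be written roughly as:
>
>     MPI_Win win_combined;
>     MPI_Win_combine(3, wins, MPI_COMM_WORLD, &win_combined);
>
>     /* defined as if MPI_Win_fence were issued separately
>        on wins[0], wins[1] and wins[2] */
>     MPI_Win_fence(0, win_combined);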
>
> What do you think about this idea for MPI-4.0?
>
> Best regards
> Rolf
>
> ----- Original Message -----
>
> From: "Jim Dinan" <james.dinan at gmail.com>
> To: "MPI WG Remote Memory Access working group"
> <mpiwg-rma at lists.mpi-forum.org>
> Sent: Thursday, August 7, 2014 4:08:32 PM
> Subject: [mpiwg-rma] RMA Notification
>
>
>
> Hi All,
>
>
> I have added a new proposal for an RMA notification extension:
> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/439
>
>
> I would like to bring this forward for the RMA WG to consider as an
> MPI-4 extension.
>
>
> Cheers,
> ~Jim.
>
>
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
>
>
>
> _______________________________________________
> mpiwg-rma mailing list
> mpiwg-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/