[mpiwg-rma] Single RMA synchronization for several window handles

William Gropp wgropp at illinois.edu
Fri Aug 8 10:06:14 CDT 2014


Why not simply use MPI_Win_create_dynamic, attach the separate memory regions, and use that MPI_Win?  Adding an option for a collective memory allocator would make this almost equivalent, and wouldn’t add additional overhead beyond what is already required.
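
For illustration, a minimal sketch of that approach (a toy program; buffer names and sizes are made up):

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Win win;
      double *buf_a, *buf_b;

      MPI_Init(&argc, &argv);

      /* One dynamic window covers several separately allocated buffers,
         so a single MPI_Win_fence synchronizes accesses to all of them. */
      MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

      MPI_Alloc_mem(1000 * sizeof(double), MPI_INFO_NULL, &buf_a);
      MPI_Alloc_mem(2000 * sizeof(double), MPI_INFO_NULL, &buf_b);
      MPI_Win_attach(win, buf_a, 1000 * sizeof(double));
      MPI_Win_attach(win, buf_b, 2000 * sizeof(double));

      MPI_Win_fence(0, win);
      /* ... MPI_Put / MPI_Get targeting either region, using target
         displacements obtained via MPI_Get_address and exchanged
         beforehand ... */
      MPI_Win_fence(0, win);

      MPI_Win_detach(win, buf_a);
      MPI_Win_detach(win, buf_b);
      MPI_Win_free(&win);
      MPI_Free_mem(buf_a);
      MPI_Free_mem(buf_b);
      MPI_Finalize();
      return 0;
  }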

Bill

William Gropp
Director, Parallel Computing Institute
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign





On Aug 8, 2014, at 9:18 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:

> Jim,
> 
> your topic "Reducing Synchronization Overhead Through Bundled
> Communication" might also benefit if we were able to
> combine several window handles into one superset window handle.
> 
> If you have several windows for different buffers but
> only one synchronization pattern, e.g. MPI_Win_fence,
> then currently you must call MPI_Win_fence separately
> for each window handle.
> 
> I would propose:
> 
> MPI_Win_combine (/*IN*/  int count, 
>                 /*IN*/  MPI_Win *win,
>                 /*IN*/  MPI_Comm comm, 
>                 /*OUT*/ MPI_Win *win_combined)
> 
> The process group of comm must contain the process groups of all
> windows in win.
> The resulting window handle win_combined can be used only
> in RMA synchronization calls and other helper routines,
> but not for dynamic window allocation nor for any
> RMA communication routine.
> Collective synchronization routines must be called by all processes
> of comm.
> The semantics of an RMA synchronization call using win_combined
> are defined as if the call were issued separately for
> each window handle in the array win. If group handles
> are part of the argument list of the synchronization call,
> then the appropriate subset is used for each window handle in win.
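> 
> For illustration, usage might look like this (only a sketch, since
> this routine does not exist yet):
> 
>   MPI_Win wins[2], win_all;
>   /* ... create wins[0] and wins[1] over two different buffers ... */
> 
>   MPI_Win_combine(2, wins, MPI_COMM_WORLD, &win_all);
> 
>   MPI_Win_fence(0, win_all);  /* acts as a fence on wins[0] and wins[1] */
>   /* ... RMA communication still goes through wins[0] / wins[1] ... */
>   MPI_Win_fence(0, win_all);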
> 
> What do you think about this idea for MPI-4.0?
> 
> Best regards
> Rolf 
> 
> ----- Original Message -----
>> From: "Jim Dinan" <james.dinan at gmail.com>
>> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
>> Sent: Thursday, August 7, 2014 4:08:32 PM
>> Subject: [mpiwg-rma] RMA Notification
>> 
>> 
>> 
>> Hi All,
>> 
>> 
>> I have added a new proposal for an RMA notification extension: 
>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/439
>> 
>> 
>> I would like to bring this forward for the RMA WG to consider as an
>> MPI-4 extension.
>> 
>> 
>> Cheers,
>>  ~Jim.
>> _______________________________________________
>> mpiwg-rma mailing list
>> mpiwg-rma at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
> _______________________________________________
> mpiwg-rma mailing list
> mpiwg-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
