[mpiwg-rma] Single RMA synchronization for several window handles
Jeff Hammond
jeff.science at gmail.com
Fri Aug 8 13:00:59 CDT 2014
On Fri, Aug 8, 2014 at 7:18 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
> Jim,
>
> your topic "Reducing Synchronization Overhead Through Bundled
> Communication" might also benefit if we were able to
> combine several window handles into one superset window handle.
>
> If you have several windows for different buffers but
> only one synchronization pattern, e.g., MPI_Win_fence,
> then currently you must call MPI_Win_fence separately
> for each window handle.
>
> I would propose:
>
> MPI_Win_combine(/*IN*/  int count,
>                 /*IN*/  MPI_Win *win,
>                 /*IN*/  MPI_Comm comm,
>                 /*OUT*/ MPI_Win *win_combined)
>
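For concreteness, here is how I read the proposal being used (a sketch
only; MPI_Win_combine is the hypothetical routine quoted above, and the
buffers, counts, and targets are placeholders):

MPI_Win wins[3], win_all;
/* ... create three windows over different buffers, all on comm ... */
MPI_Win_combine(3, wins, comm, &win_all);

MPI_Win_fence(0, win_all);   /* one collective fence instead of three */
MPI_Put(buf1, n, MPI_DOUBLE, target, 0, n, MPI_DOUBLE, wins[0]);
MPI_Put(buf2, n, MPI_DOUBLE, target, 0, n, MPI_DOUBLE, wins[1]);
MPI_Win_fence(0, win_all);   /* completes the puts on all combined windows */

Note that communication still goes through the individual windows, since
win_combined is synchronization-only under the proposal.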
Let's consider whether the following makes sense:
MPI_Comm_combine(/*IN*/  int count,
                 /*IN*/  MPI_Comm comm[],
                 /*OUT*/ MPI_Comm *comm_combined);
This function would allow me to have an object to Barrier across
multiple communicators at once.
Make sense? I didn't think so...
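(For the record, the loop one writes today is trivial; sketch, with
comms[] and count as placeholders:

for (int i = 0; i < count; i++)
    MPI_Barrier(comms[i]);

so a combined object would at best replace count collectives with one.)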
I would much rather think about MPI_Win_flush(_all)(_local)v routines
that take a vector of windows.
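To be explicit about the shape I mean (hypothetical signatures; none of
these routines exist in the standard):

int MPI_Win_flushv(int count, const int ranks[], const MPI_Win wins[]);
int MPI_Win_flush_allv(int count, const MPI_Win wins[]);
int MPI_Win_flush_local_allv(int count, const MPI_Win wins[]);

One call could then complete pending operations on many windows at once.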
I don't see the need for MPI_Win_fencev. If the implementation can
sync all the windows on the first call, the subsequent calls should be
cheap.
...
MPI_Put(..., win1);
MPI_Put(..., win2);
MPI_Put(..., win3);
MPI_Win_fence(0, win1);   /* all three MPI_Put operations are completed here */
MPI_Win_fence(0, win2);   /* essentially a no-op */
MPI_Win_fence(0, win3);   /* essentially a no-op */
...
Maybe I am forgetting that MPI_Win_fence requires a Barrier...
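An easy way to check whether the trailing fences are actually cheap
(a sketch; wins[] holds the three windows from the example, and the
MPI_Wtime timings are only indicative):

double t0 = MPI_Wtime();
MPI_Win_fence(0, wins[0]);   /* pays for completion plus synchronization */
double t1 = MPI_Wtime();
MPI_Win_fence(0, wins[1]);   /* nothing left to complete... */
MPI_Win_fence(0, wins[2]);   /* ...but each may still cost a barrier */
double t2 = MPI_Wtime();

If (t2 - t1) looks like two barriers even with nothing pending, that is
the synchronization overhead a fencev (or Rolf's combined window) would
save.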
Jeff
> The process group of comm must contain the process groups of all
> windows in win. The resulting window handle win_combined can be used
> only in RMA synchronization calls and other helper routines,
> but not for dynamic window allocation or for any
> RMA communication routine.
> Collective synchronization routines must be called by all processes
> of comm.
> The semantics of an RMA synchronization call using win_combined
> are defined as if the calls were separately issued for
> each window handle in the array win. If group handles
> are part of the argument list of the synchronization call,
> then the appropriate subset is used for each window handle in win.
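Concretely, a call on the combined handle, e.g.

MPI_Win_fence(0, win_combined);

would under this definition behave as if one had written

for (int i = 0; i < count; i++)
    MPI_Win_fence(0, win[i]);

while leaving the implementation free to synchronize all the windows in
a single step.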
>
> What do you think about this idea for MPI-4.0?
>
> Best regards
> Rolf
>
> ----- Original Message -----
>> From: "Jim Dinan" <james.dinan at gmail.com>
>> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
>> Sent: Thursday, August 7, 2014 4:08:32 PM
>> Subject: [mpiwg-rma] RMA Notification
>>
>> Hi All,
>>
>> I have added a new proposal for an RMA notification extension:
>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/439
>>
>> I would like to bring this forward for the RMA WG to consider as an
>> MPI-4 extension.
>>
>> Cheers,
>> ~Jim.
>
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/