[mpiwg-rma] Synchronization on shared memory windows

Balaji, Pavan balaji at anl.gov
Tue Feb 4 11:01:54 CST 2014


Rolf,

You still need MPI_WIN_SYNC in UNIFIED to ensure memory access ordering.
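
A minimal sketch of the pattern in C (illustrative names; it assumes
"win" was created with MPI_WIN_ALLOCATE_SHARED and that "my_ptr" and
"nbr_ptr" were obtained via MPI_WIN_SHARED_QUERY):

  /* MPI_Win_sync may only be called inside a passive-target epoch,
     hence the lock_all. */
  MPI_Win_lock_all(MPI_MODE_NOCHECK, win);

  my_ptr[0] = halo_value;   /* direct store into the shared window */
  MPI_Win_sync(win);        /* memory barrier: make the store visible */
  MPI_Send(NULL, 0, MPI_BYTE, nbr, 0, comm);         /* "data ready" */

  MPI_Recv(NULL, 0, MPI_BYTE, nbr, 0, comm, MPI_STATUS_IGNORE);
  MPI_Win_sync(win);        /* memory barrier: observe the neighbor's store */
  x = nbr_ptr[0];           /* direct load from the shared window */

  MPI_Win_unlock_all(win);

The point-to-point messages provide the ordering; the MPI_WIN_SYNC
calls provide the memory consistency.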

  -- Pavan

On Feb 4, 2014, at 10:59 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:

> Jeff, 
> 
> thank you for the MPI_FREE_MEM hint. Yes, I'll fix it in my examples.
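> 
> (In code: if I understand the hint correctly, the fix is simply to
> drop the MPI_FREE_MEM call, because MPI_WIN_FREE releases the memory
> that MPI_WIN_ALLOCATE_SHARED allocated:)
> 
>   MPI_Win_free(&win);  /* also frees the allocated memory; no MPI_Free_mem */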
> 
> About the synchronization problems:
> If I use shared memory windows with direct remote load and store
> instead of the RMA functions PUT or GET, 
> is it then correct if I never use the MPI_WIN synchronization routines?
> 
> I would expect so, because in the unified RMA model
> the load and store accesses to the neighbor's memory are done
> directly and MPI is not involved.
> Because of the unified RMA model (which is mandatory for shared
> memory windows), there should be no need for cache flush routines.
> Correct?
> 
> Therefore, my halo_1sided_put_win_alloc_shared_othersync_20.f90
> should be correct?
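> 
> (Schematically in C, the pattern of that example is roughly the
> following; the names are illustrative and this is not the attached
> Fortran file:)
> 
>   double   *mine, *right;
>   MPI_Aint  qsize;
>   int       qdisp;
>   MPI_Win   win;
> 
>   /* shmcomm must come from MPI_Comm_split_type(..., MPI_COMM_TYPE_SHARED, ...) */
>   MPI_Win_allocate_shared(N * sizeof(double), sizeof(double),
>                           MPI_INFO_NULL, shmcomm, &mine, &win);
>   MPI_Win_shared_query(win, right_rank, &qsize, &qdisp, &right);
> 
>   right[0] = mine[N-2];  /* direct store into the right neighbor's segment */
>   /* ...synchronized only with Irecv/Send/Waitall, no MPI_WIN_* calls */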
> 
> Best regards
> Rolf
> 
> ----- Original Message -----
>> From: "Jeff Hammond" <jeff.science at gmail.com>
>> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
>> Cc: "Stefan Andersson" <stefan at cray.com>, "Bill Long" <longb at cray.com>
>> Sent: Tuesday, February 4, 2014 5:46:28 PM
>> Subject: Re: [mpiwg-rma] Synchronization on shared memory windows
>> 
>> Using 2-sided with window memory is no different from load-store.
>> You need to synchronize to maintain consistency if you are doing
>> both RMA and non-RMA accesses on the window.  If you merely use the
>> WIN calls to allocate memory and only access the private copy, you
>> shouldn't need any sync.
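>> 
>> For example (schematic, illustrative names): two-sided traffic alone
>> through window memory needs no window synchronization,
>> 
>>   MPI_Win_allocate(N * sizeof(double), sizeof(double), MPI_INFO_NULL,
>>                    comm, &buf, &win);
>>   /* plain two-sided halo exchange, window memory used as buffers */
>>   MPI_Sendrecv(&buf[1],   1, MPI_DOUBLE, left,  0,
>>                &buf[N-1], 1, MPI_DOUBLE, right, 0,
>>                comm, MPI_STATUS_IGNORE);
>> 
>> but as soon as you also do MPI_Put/MPI_Get on the same window, window
>> synchronization is needed to order the two kinds of access.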
>> 
>> I have not explored whether your use of active-target sync is valid,
>> because I think these modes are almost entirely useless and I pay
>> almost no attention to their semantics.  This is a matter of opinion
>> that no one else is required to share, of course.
>> 
>> Jeff
>> 
>> On Tue, Feb 4, 2014 at 10:03 AM, Rolf Rabenseifner
>> <rabenseifner at hlrs.de> wrote:
>>> Dear all,
>>> 
>>> I tested MPI-3.0 1-sided synchronization on shared memory windows.
>>> A lot of it did not work as I expected:
>>> 
>>> - assertions fail with MPI_Win_fence
>>>   (no such restriction is defined in MPI-3.0 p452:8-19;
>>>   see the fence sketch after this list),
>>> 
>>> - Post-Start-Complete-Wait fails completely
>>>   (compare MPI-3.0 p410:16-19, especially the reference to Sect. 11.5),
>>> 
>>> - and MPI_Free_mem also fails for the shared memory windows
>>>   (compare MPI-3.0 p409:23-24, especially that MPI_FREE_MEM
>>>   is mentioned there).
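>>> 
>>> (The fence sketch mentioned above; illustrative names, not the
>>> attached file:)
>>> 
>>>   MPI_Win_fence(MPI_MODE_NOPRECEDE, win);  /* open the only epoch  */
>>>   MPI_Put(&mine[N-2], 1, MPI_DOUBLE, right_rank,
>>>           0, 1, MPI_DOUBLE, win);          /* halo to the right    */
>>>   MPI_Win_fence(MPI_MODE_NOSUCCEED, win);  /* close the only epoch */
>>> 
>>> By my reading of p452:8-19, both assertions should be legal here,
>>> also for shared memory windows.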
>>> 
>>> Attached are some files:
>>> - halo_1sided_put_win_alloc_20.f90
>>> 
>>>   This is the basis and works.
>>>   It is with normal (distributed) windows.
>>> 
>>> - halo_1sided_put_win_alloc_shared_20.f90
>>> 
>>>   It is the first shared memory example.
>>>   It causes several errors in the test on our Cray system:
>>>    - The assertions on MPI_WIN_FENCE do not work
>>>    - The MPI_FREE_MEM does not work for the shared buffers
>>> 
>>>   Is my program wrong? It is a simple left-and-right
>>>   1-dim halo exchange.
>>> 
>>> - halo_1sided_put_win_alloc_shared_20_w-a-cray.f90
>>> 
>>>   This is a workaround that works on our Cray:
>>>    - assertions MPI_MODE_NOPRECEDE and MPI_MODE_NOSUCCEED removed
>>>    - MPI_FREE_MEM removed
>>> 
>>> - halo_1sided_put_win_alloc_shared_pscw_20.f90
>>> 
>>>   With Post-Start-Complete-Wait, nothing works!
>>>   No workaround found.
>>> 
>>> - halo_1sided_put_win_alloc_shared_othersync_20.f90
>>> 
>>>   In this example, I fully replaced the RMA synchronization
>>>   with point-to-point synchronization using Irecv, Send, and Waitall.
>>>   Is this allowed?
>>>   Was it intended?
>>>   Is there any wording about this in MPI-3.0?
>>>   Will we have any wording in MPI-next (3.1 or 4.0)?
>>> 
>>> I hope someone knows the answers.
>>> 
>>> Best regards
>>> Rolf
>>> 
>>> --
>>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
>>> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
>>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
>>> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
>>> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
>>> 
>> 
>> --
>> Jeff Hammond
>> jeff.science at gmail.com
>> 
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)