[mpiwg-rma] Synchronization on shared memory windows
Jeff Hammond
jeff.science at gmail.com
Tue Feb 4 10:10:45 CST 2014
I can only assume you tested a particular implementation of MPI-3,
since the MPI standard itself does not execute on any platform of
which I know.
Can I assume you have identified bugs or semantic misinterpretations
present in Cray MPI, which is a derivative of MPICH? If so, I
recommend taking them up with the Cray and MPICH developers rather
than with the MPI-3 RMA working group.
Best,
Jeff
On Tue, Feb 4, 2014 at 10:03 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
> Dear all,
>
> I tested MPI-3.0 1-sided synchronization on shared memory windows.
> A lot of it did not work as I expected:
>
> - assertions fail with MPI_Win_fence
> (there is no restriction defined in MPI-3.0 p452:8-19),
>
> - Post-Start-Complete-Wait fails completely
> (compare MPI-3.0 p410:16-19, especially the reference to Sect. 11.5),
>
> - and MPI_Free_mem also fails for the shared memory windows
> (compare MPI-3.0 p409:23-24, especially that MPI_FREE_MEM
> is mentioned here).
>
> Attached are some files:
> - halo_1sided_put_win_alloc_20.f90
>
> This is the baseline and it works.
> It uses normal (distributed) windows.
>
> - halo_1sided_put_win_alloc_shared_20.f90
>
> This is the first shared memory example.
> It causes several errors when tested on our Cray system:
> - the assertions on MPI_WIN_FENCE do not work,
> - MPI_FREE_MEM does not work for the shared buffers.
>
> Is my program wrong? It is a simple left and right
> 1-dim halo exchange.
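>
> In outline, the fence part of this example looks like the following
> (a rough sketch only, not the attached file; the names, the two-cell
> halo buffer, and the real(8) layout are merely illustrative):
>
>   program fence_shared_sketch
>     use mpi
>     use, intrinsic :: iso_c_binding, only: c_ptr, c_f_pointer
>     implicit none
>     integer :: comm_sm, rank, nprocs, win, ierr, left, right, disp_unit
>     integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
>     type(c_ptr) :: baseptr
>     real(8), pointer :: buf(:)
>     real(8) :: sndbuf
>
>     call MPI_Init(ierr)
>     ! shared memory windows need a communicator whose processes can
>     ! share memory, e.g. obtained with MPI_Comm_split_type
>     call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
>                              MPI_INFO_NULL, comm_sm, ierr)
>     call MPI_Comm_rank(comm_sm, rank, ierr)
>     call MPI_Comm_size(comm_sm, nprocs, ierr)
>     left  = mod(rank-1+nprocs, nprocs)
>     right = mod(rank+1, nprocs)
>
>     disp_unit = 8                 ! one real(8) halo cell
>     winsize   = 2*disp_unit       ! buf(1): from left, buf(2): from right
>     call MPI_Win_allocate_shared(winsize, disp_unit, MPI_INFO_NULL, &
>                                  comm_sm, baseptr, win, ierr)
>     call c_f_pointer(baseptr, buf, (/2/))
>
>     sndbuf = dble(rank)
>     ! the part in question: fence epochs with assertions on a shared window
>     call MPI_Win_fence(MPI_MODE_NOPRECEDE, win, ierr)
>     disp = 0
>     call MPI_Put(sndbuf, 1, MPI_DOUBLE_PRECISION, right, disp, 1, &
>                  MPI_DOUBLE_PRECISION, win, ierr)
>     disp = 1
>     call MPI_Put(sndbuf, 1, MPI_DOUBLE_PRECISION, left, disp, 1, &
>                  MPI_DOUBLE_PRECISION, win, ierr)
>     call MPI_Win_fence(MPI_MODE_NOSUCCEED, win, ierr)
>
>     call MPI_Win_free(win, ierr)
>     call MPI_Finalize(ierr)
>   end program fence_shared_sketch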
>
> - halo_1sided_put_win_alloc_shared_20_w-a-cray.f90
>
> This is a workaround that works on our Cray:
> - assertions MPI_MODE_NOPRECEDE and MPI_MODE_NOSUCCEED removed
> - MPI_FREE_MEM removed
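>
> Expressed as code, the workaround is simply the following two changes
> (sketch only, showing just the affected calls; everything else is as
> in the sketch further above):
>
>     ! fences called without assertions
>     call MPI_Win_fence(0, win, ierr)
>     ! ... puts ...
>     call MPI_Win_fence(0, win, ierr)
>
>     ! the shared buffers are released only through MPI_WIN_FREE;
>     ! the MPI_FREE_MEM call is removed
>     call MPI_Win_free(win, ierr)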
>
> - halo_1sided_put_win_alloc_shared_pscw_20.f90
>
> With Post-Start-Complete-Wait, nothing works!
> I have found no workaround.
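>
> The Post-Start-Complete-Wait version follows this pattern (again only
> a rough sketch, not the attached file; it assumes at least three
> processes so that the left and right neighbor are distinct, because
> MPI_GROUP_INCL requires distinct ranks):
>
>   program pscw_shared_sketch
>     use mpi
>     use, intrinsic :: iso_c_binding, only: c_ptr, c_f_pointer
>     implicit none
>     integer :: comm_sm, grp_comm, grp_nbrs, rank, nprocs, win, ierr
>     integer :: left, right, disp_unit
>     integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
>     type(c_ptr) :: baseptr
>     real(8), pointer :: buf(:)
>     real(8) :: sndbuf
>
>     call MPI_Init(ierr)
>     call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
>                              MPI_INFO_NULL, comm_sm, ierr)
>     call MPI_Comm_rank(comm_sm, rank, ierr)
>     call MPI_Comm_size(comm_sm, nprocs, ierr)
>     left  = mod(rank-1+nprocs, nprocs)   ! assumes nprocs >= 3,
>     right = mod(rank+1, nprocs)          ! so that left /= right
>
>     disp_unit = 8
>     winsize   = 2*disp_unit              ! buf(1): from left, buf(2): from right
>     call MPI_Win_allocate_shared(winsize, disp_unit, MPI_INFO_NULL, &
>                                  comm_sm, baseptr, win, ierr)
>     call c_f_pointer(baseptr, buf, (/2/))
>
>     ! origin group == target group == the two neighbors
>     call MPI_Comm_group(comm_sm, grp_comm, ierr)
>     call MPI_Group_incl(grp_comm, 2, (/left, right/), grp_nbrs, ierr)
>
>     sndbuf = dble(rank)
>     call MPI_Win_post (grp_nbrs, 0, win, ierr)   ! exposure epoch
>     call MPI_Win_start(grp_nbrs, 0, win, ierr)   ! access epoch
>     disp = 0
>     call MPI_Put(sndbuf, 1, MPI_DOUBLE_PRECISION, right, disp, 1, &
>                  MPI_DOUBLE_PRECISION, win, ierr)
>     disp = 1
>     call MPI_Put(sndbuf, 1, MPI_DOUBLE_PRECISION, left, disp, 1, &
>                  MPI_DOUBLE_PRECISION, win, ierr)
>     call MPI_Win_complete(win, ierr)
>     call MPI_Win_wait(win, ierr)
>
>     call MPI_Group_free(grp_nbrs, ierr)
>     call MPI_Group_free(grp_comm, ierr)
>     call MPI_Win_free(win, ierr)
>     call MPI_Finalize(ierr)
>   end program pscw_shared_sketch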
>
> - halo_1sided_put_win_alloc_shared_othersync_20.f90
>
> In this example, I fully substituted the RMA synchronization
> with point-to-point synchronization using MPI_Irecv, MPI_Send, and MPI_Waitall.
> Is this allowed?
> Was it intended?
> Is there any wording about this in MPI-3.0?
> Will we have any wording in MPI-next (3.1 or 4.0)?
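>
> For concreteness, one way such a substitution can look is sketched
> below (not the attached file; it assumes the data movement is done by
> direct stores through pointers obtained with MPI_WIN_SHARED_QUERY, and
> that the Irecv/Send/Waitall calls carry zero-byte messages used purely
> for synchronization; whether this is guaranteed to be sufficient,
> in particular with respect to memory consistency, is exactly what the
> questions above ask):
>
>   program othersync_shared_sketch
>     use mpi
>     use, intrinsic :: iso_c_binding, only: c_ptr, c_f_pointer
>     implicit none
>     integer :: comm_sm, rank, nprocs, win, ierr, left, right, disp_unit
>     integer(kind=MPI_ADDRESS_KIND) :: winsize, segsize
>     type(c_ptr) :: baseptr
>     real(8), pointer :: mybuf(:), leftbuf(:), rightbuf(:)
>     integer :: rq(2)
>
>     call MPI_Init(ierr)
>     call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
>                              MPI_INFO_NULL, comm_sm, ierr)
>     call MPI_Comm_rank(comm_sm, rank, ierr)
>     call MPI_Comm_size(comm_sm, nprocs, ierr)
>     left  = mod(rank-1+nprocs, nprocs)
>     right = mod(rank+1, nprocs)
>
>     disp_unit = 8
>     winsize   = 2*disp_unit           ! mybuf(1): from left, mybuf(2): from right
>     call MPI_Win_allocate_shared(winsize, disp_unit, MPI_INFO_NULL, &
>                                  comm_sm, baseptr, win, ierr)
>     call c_f_pointer(baseptr, mybuf, (/2/))
>
>     ! pointers to the neighbors' segments, for direct load/store
>     call MPI_Win_shared_query(win, left,  segsize, disp_unit, baseptr, ierr)
>     call c_f_pointer(baseptr, leftbuf, (/2/))
>     call MPI_Win_shared_query(win, right, segsize, disp_unit, baseptr, ierr)
>     call c_f_pointer(baseptr, rightbuf, (/2/))
>
>     ! zero-byte "ready" handshake before storing into the neighbors' memory
>     call MPI_Irecv(MPI_BOTTOM, 0, MPI_BYTE, left,  1, comm_sm, rq(1), ierr)
>     call MPI_Irecv(MPI_BOTTOM, 0, MPI_BYTE, right, 1, comm_sm, rq(2), ierr)
>     call MPI_Send (MPI_BOTTOM, 0, MPI_BYTE, left,  1, comm_sm, ierr)
>     call MPI_Send (MPI_BOTTOM, 0, MPI_BYTE, right, 1, comm_sm, ierr)
>     call MPI_Waitall(2, rq, MPI_STATUSES_IGNORE, ierr)
>
>     rightbuf(1) = dble(rank)   ! store my value into the right neighbor's halo
>     leftbuf(2)  = dble(rank)   ! and into the left neighbor's halo
>
>     ! zero-byte "done" handshake after the stores; no RMA synchronization
>     ! and no further memory synchronization is used here
>     call MPI_Irecv(MPI_BOTTOM, 0, MPI_BYTE, left,  2, comm_sm, rq(1), ierr)
>     call MPI_Irecv(MPI_BOTTOM, 0, MPI_BYTE, right, 2, comm_sm, rq(2), ierr)
>     call MPI_Send (MPI_BOTTOM, 0, MPI_BYTE, left,  2, comm_sm, ierr)
>     call MPI_Send (MPI_BOTTOM, 0, MPI_BYTE, right, 2, comm_sm, ierr)
>     call MPI_Waitall(2, rq, MPI_STATUSES_IGNORE, ierr)
>     ! after this point mybuf(1:2) should hold the neighbors' contributions
>
>     call MPI_Win_free(win, ierr)
>     call MPI_Finalize(ierr)
>   end program othersync_shared_sketch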
>
> I hope someone knows the answers.
>
> Best regards
> Rolf
>
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
>
> _______________________________________________
> mpiwg-rma mailing list
> mpiwg-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
--
Jeff Hammond
jeff.science at gmail.com