[mpiwg-rma] MPI_Lock... for MPI shared memory

William Gropp wgropp at illinois.edu
Tue Sep 13 08:17:22 CDT 2016


I agree with Jeff - this is a simple errata item to be fixed.

Bill

William Gropp
Acting Director and Chief Scientist, NCSA
Director, Parallel Computing Institute
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign





On Sep 13, 2016, at 8:09 AM, Jeff Hammond <jeff.science at gmail.com> wrote:

> This is a bug in the text that should be fixed by an erratum. 
> 
> Jeff 
> 
> Sent from my iPhone
> 
>> On Sep 13, 2016, at 5:36 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
>> 
>> Hello,
>> 
>> my colleague Joseph noticed that an MPI implementation is allowed
>> to disallow passive target communication/synchronization (i.e.,
>> all forms of MPI_Win_lock and MPI_Win_unlock) on MPI shared memory windows.
>> 
>> Was it intended, or is it only a historical review bug, that
>> MPI_WIN_ALLOCATE_SHARED was never added to the list containing
>> MPI_ALLOC_MEM (since MPI-2.0) and MPI_WIN_ALLOCATE &
>> MPI_WIN_ATTACH (since MPI-3.0) in the following text?
>> 
>> 
>> MPI-3.1 Sect. 11.5.3, page 448, lines 1-12, says
>> 
>> "Implementors may restrict the use of RMA communication 
>> that is synchronized by lock calls to windows in 
>> memory allocated by MPI_ALLOC_MEM (Section 8.2),
>> MPI_WIN_ALLOCATE (Section 11.2.2), or attached with 
>> MPI_WIN_ATTACH (Section 11.2.4). 
>> Locks can be used portably only in such memory.
>> 
>> Rationale. The implementation of passive target 
>> communication when memory is not shared may require 
>> an asynchronous software agent. Such an agent can be
>> implemented more easily, and can achieve better 
>> performance, if restricted to specially allocated memory. 
>> It can be avoided altogether if shared memory is used. 
>> It seems natural to impose restrictions that allows 
>> one to use shared memory for third party communication 
>> in shared memory machines.
>> (End of rationale.)"
>> 
>> 
>> The rationale does not really fit the current state,
>> in which MPI_WIN_ALLOCATE_SHARED is missing from the list.
>> It would fit if MPI_WIN_ALLOCATE_SHARED were added
>> to the list.
>> 
>> Best regards
>> Rolf
>> 
>> -- 
>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
>> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
>> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
>> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
>> _______________________________________________
>> mpiwg-rma mailing list
>> mpiwg-rma at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma

