[mpiwg-rma] Memory model and source code portability

Jeff Hammond jeff.science at gmail.com
Wed Oct 1 10:47:49 CDT 2014


This is the same Chicken Little story that McClaren told us a while back. No C compiler is going to do what you suggested; it is perverse and not within the realm of possibility.

An async thread isn't required for this theoretical problem. RDMA would do the same thing, as would interrupts on BG or anything else consistent with async progress or overlap of communication and computation, both of which the user wants.

Async communication isn't broken and we don't need to fix it. We have millions of lines of nonblocking MPI code running correctly every day on RDMA networks. The sky is not falling. 

Jeff

Sent from my iPhone

On Oct 1, 2014, at 1:23 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:

>> If you agree with Boehm’s article in the strongest form, then you can
>> maintain source code portability by avoiding the use of features
>> that are not well defined in the source language - i.e., the use of
>> shared memory.
> 
> Bill, please read my example. It is not using shared memory.
> If we agree with Boehm's article, then it has direct implications
> for normal nonblocking pt-to-pt as soon as the MPI library
> internally uses a progress thread. This should be allowed.
> In other words, Fortran is not the only special case.
> With C, we need not talk to the C standardization body:
> similar to the Fortran TS 29113 case, the C standardization body
> has already done what MPI needs.
> 
> As soon as we say that MPI has a C binding, and allow nonblocking
> pt-to-pt and parallel progress engines, we should say that this
> requires the C11 memory model, i.e., the user must not change
> compiler optimization in a way that the C11 memory model is no
> longer guaranteed.
> 
> MPI provides a C binding, C does not provide an MPI binding.
> Therefore, it is our job.
> Otherwise MPI does not provide source code portability.
> 
> Rolf
> 
> 
> 
> ----- Original Message -----
>> From: "William Gropp" <wgropp at illinois.edu>
>> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
>> Sent: Tuesday, September 30, 2014 6:43:39 PM
>> Subject: Re: [mpiwg-rma] Memory model and source code portability
>> 
>> 
>> Not at all.  It just isn’t our place to talk about other standards.
>>  Fortran was a slightly different case because of the discussions
>> between members of the MPI Forum and the Fortran standards
>> committee.  We don’t have that option with C, and I don’t think we
>> ever will, since MPI programs are not an important part of the use
>> of C.
>> 
>> 
>> If you agree with Boehm’s article in the strongest form, then you can
>> maintain source code portability by avoiding the use of features
>> that are not well defined in the source language - i.e., the use of
>> shared memory.  The current straw vote addresses something that (a) is
>> within the definitions under control of the MPI Forum and (b) allows
>> users that are willing to do what virtually everyone currently using
>> threads and shared memory does - rely on the compiler and processing
>> environment to implement something other than the standard language.
>>  But that latter case does not belong in a standard document.
>> 
>> 
>> Bill
>> 
>> 
>> 
>> On Sep 30, 2014, at 11:23 AM, Rolf Rabenseifner <
>> rabenseifner at hlrs.de > wrote:
>> 
>> 
>> To me it looks like you are giving up on the important goal
>> that the MPI standard should provide source code portability.
>> 
>> _______________________________________________
>> mpiwg-rma mailing list
>> mpiwg-rma at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)


