[mpiwg-rma] Memory model and source code portability

Rolf Rabenseifner rabenseifner at hlrs.de
Wed Oct 1 11:10:45 CDT 2014


> This is the same chicken little story that McClaren told us a while
> back. No C compiler is going to do what you suggested. It's perverse
> and not within the realm of possibility.

It was my message that compilers don't do this, because
they implement the C11 memory model.
In other words, Boehm's problems are historical for C.
If I understood Boehm correctly, it really was a problem at
the time, i.e., not "perverse" and well within "the realm of
possibility".
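
To make Boehm's point concrete -- this sketch is my own
illustration, not code from the standard or from any real
implementation:

   struct msg {
       char ready;    /* written by the application thread      */
       char payload;  /* written concurrently by an MPI progress
                         thread or by the RDMA hardware         */
   };

   void mark_ready(struct msg *m)
   {
       /* Before C11, a compiler could legally implement this
        * one-byte store as a wider read-modify-write of the
        * whole struct, writing back a stale copy of m->payload
        * and silently losing the concurrent update.  C11 makes
        * 'ready' and 'payload' distinct memory locations and
        * forbids such invented writes -- which is why the
        * problem is historical. */
       m->ready = 1;
   }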

> Async communication isn't broken and we don't need to fix it. We have
> millions of lines of nonblocking MPI code running correctly every
> day on RDMA networks. The sky is not falling.

It was Bill's idea that we should check Boehm's paper,
and it was a good idea. The answer is: resolved by C11,
and by the real compilers even earlier, because they had
to avoid conflicts with Pthreads.
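
And the constructive side, as a minimal sketch with
hypothetical names (this is not the progress engine of any
real MPI library):

   #include <stdatomic.h>

   int buf[4];       /* message data, filled by the progress thread */
   atomic_int done;  /* C11 atomic completion flag                  */

   /* library side, running on the internal progress thread */
   void progress_complete(void)
   {
       /* ... buf has been filled from the network ... */
       atomic_store_explicit(&done, 1, memory_order_release);
   }

   /* application side, e.g. inside MPI_Wait */
   void wait_for_completion(void)
   {
       while (!atomic_load_explicit(&done, memory_order_acquire))
           ;  /* spin */
       /* The acquire load synchronizes with the release store,
        * so every later read of buf sees the completed data.
        * Pre-C11 C gave a library no portable way to state this;
        * Pthreads-aware compilers had to promise it anyway. */
   }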

Rolf

----- Original Message -----
> From: "Jeff Hammond" <jeff.science at gmail.com>
> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
> Sent: Wednesday, October 1, 2014 5:47:49 PM
> Subject: Re: [mpiwg-rma] Memory model and source code portability
> 
> This is the same chicken little story that McClaren told us a while
> back. No C compiler is going to do what you suggested. It's perverse
> and not within the realm of possibility.
> 
> And an async thread isn't required for this theoretical problem.
> RDMA would do the same thing, and so would interrupts on BG or
> anything else consistent with async progress or overlap of
> communication, both of which the user wants.
> 
> Async communication isn't broken and we don't need to fix it. We have
> millions of lines of nonblocking MPI code running correctly every
> day on RDMA networks. The sky is not falling.
> 
> Jeff
> 
> Sent from my iPhone
> 
> On Oct 1, 2014, at 1:23 AM, Rolf Rabenseifner <rabenseifner at hlrs.de>
> wrote:
> 
> >> If you agree with Boehm’s article in the strongest form, then you
> >> can maintain source code portability by avoiding the use of
> >> features that are not well defined in the source language - i.e.,
> >> the use of shared memory.
> > 
> > Bill, please read my example. It is not using shared memory.
> > If we agree with Boehm's article, then it has direct implications
> > for normal nonblocking pt-to-pt as soon as the MPI library
> > internally uses a progress thread. This should be allowed.
> > In other words, Fortran is not the only special case.
> > With C, we need not talk to the C standardization body.
> > Similar to Fortran TS29113, the C standardization body
> > has already done what MPI needs.
> > 
> > As soon as we say that MPI has a C binding, and allow nonblocking
> > pt-to-pt and parallel progress engines, we should say that this
> > requires the C11 memory model, i.e., the user must not change the
> > compiler's optimization settings in a way that the C11 memory
> > model is no longer guaranteed.
> > 
> > MPI provides a C binding; C does not provide an MPI binding.
> > Therefore, it is our job.
> > Otherwise, MPI does not provide source code portability.
> > 
> > Rolf
> > 
> > 
> > 
> > ----- Original Message -----
> >> From: "William Gropp" <wgropp at illinois.edu>
> >> To: "MPI WG Remote Memory Access working group"
> >> <mpiwg-rma at lists.mpi-forum.org>
> >> Sent: Tuesday, September 30, 2014 6:43:39 PM
> >> Subject: Re: [mpiwg-rma] Memory model and source code portability
> >> 
> >> 
> >> Not at all.  It just isn’t our place to talk about other
> >> standards.  Fortran was a slightly different case because of the
> >> discussions between members of the MPI Forum and the Fortran
> >> standards committee.  We don’t have that option with C, and I
> >> don’t think we ever will, since MPI programs are not an important
> >> part of the use of C.
> >> 
> >> If you agree with Boehm’s article in the strongest form, then you
> >> can maintain source code portability by avoiding the use of
> >> features that are not well defined in the source language - i.e.,
> >> the use of shared memory.  The current straw vote addresses
> >> something that (a) is within the definitions under control of the
> >> MPI Forum and (b) allows users that are willing to do what
> >> virtually everyone currently using threads and shared memory does
> >> - rely on the compiler and processing environment to implement
> >> something other than the standard language.  But that latter case
> >> does not belong in a standard document.
> >> 
> >> 
> >> Bill
> >> 
> >> 
> >> 
> >> On Sep 30, 2014, at 11:23 AM, Rolf Rabenseifner
> >> <rabenseifner at hlrs.de> wrote:
> >> 
> >> For me it looks like you are giving up on the important goal
> >> that the MPI standard should provide source code portability.
> >> 

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)


