[MPI3 Fortran] MPI non-blocking transfer
Craig Rasmussen
crasmussen at lanl.gov
Wed Feb 11 11:24:33 CST 2009
Sure. I'm attaching 08-185r1.txt. Please note that this document is a
bit dated: there is now more support from the J3 and WG5 committees
for doing something beyond asking programmers to put the VOLATILE
attribute on variables (you've seen email from Nick, for example).
Also, the TYPE(*) syntax is moving forward.
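
Roughly, the two directions look like this (just a sketch, with my own
declarations and names rather than text from 08-185r1). The VOLATILE
route puts the burden on the user:

    use mpi
    real, volatile :: buf(1000)   ! VOLATILE: the compiler may not keep
                                  ! buf in registers across the calls
    integer :: request, ierror, status(MPI_STATUS_SIZE)
    ...
    call MPI_Irecv(buf, 1000, MPI_REAL, src, tag, comm, request, ierror)
    ! ... unrelated computation; buf is not touched here ...
    call MPI_Wait(request, status, ierror)
    x = buf(1)                    ! safe: buf must be re-read from memory

while TYPE(*) would let the MPI bindings themselves declare the choice
buffers in an explicit interface, something like:

    interface
       subroutine MPI_Irecv(buf, count, datatype, source, tag, comm, &
                            request, ierror)
          type(*), dimension(*) :: buf   ! assumed-type "choice" buffer
          integer :: count, datatype, source, tag, comm, request, ierror
       end subroutine MPI_Irecv
    end interface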
Cheers,
Craig
-------------- next part --------------
Attachment: 08-185r1.txt
URL: <http://lists.mpi-forum.org/pipermail/mpiwg-fortran/attachments/20090211/360dabc5/attachment-0001.txt>
On Feb 10, 2009, at 6:42 PM, Rolf Rabenseifner wrote:
> Craig,
> It would be nice if you could send it to me again.
>
> Thank you
> Rolf
>
> On Tue, 10 Feb 2009 15:40:32 -0700
> Craig Rasmussen <crasmussen at lanl.gov> wrote:
>> On Feb 9, 2009, at 4:06 PM, Aleksandar Donev wrote:
>>> On Monday 09 February 2009 14:57, Rolf Rabenseifner wrote:
>>>> 2.3 The existing solution must continue to work, i.e.,
>>>> all existing and correct MPI applications must continue
>>>> to work.
>>> There is no "existing solution", unless you count VOLATILE, which is
>>> only standard as of Fortran 2003. As you say,
>> I would like to reiterate that the MPI Forum has already received
>> guidance from the J3 Fortran committee regarding this issue. The
>> guidance is to use the VOLATILE attribute. I can dig up and
>> resend the official J3 document if you want. I'll also discuss it
>> further this afternoon and tomorrow, but I'm not sure the
>> guidance will change. Somehow the user must give the compiler
>> specific instructions about the usage of the buffer within the
>> Fortran language.
>> You don't really want Fortran to treat buffers the way C does, as
>> that would slow down all of your Fortran programs.
>> Cheers,
>> Craig
>>>
>>>> 2.4 It is not my goal to automatically correct existing wrong
>>>> MPI applications, i.e., applications without the DD trick
>>> The DD thing is a trick (I'd call it a hack), and it actually does
>>> not solve the full problem. For example, Nick has pointed out that
>>> problems can occur at the site of the call to MPI_Isend, *not* just
>>> at the site of the wait (though the cases where it fails are
>>> marginal and likely do not happen often enough to notice).
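>>>
>>> (For concreteness, a minimal sketch of the DD trick; the routine and
>>> buffer names are mine and purely illustrative:
>>>
>>>    subroutine dd(buf)     ! external "do-nothing" routine, compiled
>>>       real :: buf(*)      ! separately, so the compiler must assume
>>>    end subroutine dd      ! it may read or write buf
>>>
>>>    call MPI_Irecv(buf, n, MPI_REAL, src, tag, comm, request, ierror)
>>>    ! ... unrelated computation; buf is not touched here ...
>>>    call MPI_Wait(request, status, ierror)
>>>    call dd(buf)           ! later reads of buf cannot reuse register
>>>                           ! copies made before the wait
>>>
>>> A similar call would be needed at the MPI_Isend site to cover the
>>> failure Nick describes, which is exactly where it is usually omitted.)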
>>> Best,
>>> Aleks
>>> --
>>> Aleksandar Donev, Ph.D.
>>> Lawrence Postdoctoral Fellow @ Lawrence Livermore National
>>> Laboratory
>>> High Performance Computational Materials Science and Chemistry
>>> E-mail: donev1 at llnl.gov
>>> Phone: (925) 424-6816 Fax: (925) 423-0785
>>> Address: P.O.Box 808, L-367, Livermore, CA 94551-9900
>>> Web: http://cherrypit.princeton.edu/donev
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)