[MPI3 Fortran] Results from the Fortran plenary session yesterday

Rolf Rabenseifner rabenseifner at hlrs.de
Tue Oct 12 16:28:34 CDT 2010


Nick,

thank you for your detailed answers.

Sorry that I forgot to mention that the tickets are not yet updated.
The most up-to-date text is in the slides from my last e-mail.

Further comments are below with your text.
I'm looking forward to your answers to my many questions and comments.

----- Original Message -----
> From: "N.M. Maclaren" <nmm1 at cam.ac.uk>
> To: "MPI-3 Fortran working group" <mpi3-fortran at lists.mpi-forum.org>
> Cc: "Rolf Rabenseifner" <rabenseifner at hlrs.de>
> Sent: Tuesday, October 12, 2010 8:20:47 PM
> Subject: Re: [MPI3 Fortran] Results from the Fortran plenary session yesterday
>
> Thank you for posting this. Here are some comments on the substantive
> issues, referring to the tickets as needed. Please note that they are
> my personal ones, and I am not speaking for WG5 or anyone else.
> 
> 
> Page 4. As you know, some of us are working on this in WG5 and J3.

Yes, this slide was to inform the whole MPI-3 Forum about this ongoing issue.


Ticket #232-D and slide 12:
> #232. This is a mistake. Some compilers do not have such hacks, and
> it isn't desirable to forbid them to support an MPI module, especially
> if they do at present. The words "if possible" should be added.

Yes, you are right. On slide 12, I already had:
"Only if the compiler has no such method, then implicit interface is still okay".


Slide 14 and ticket #234-F:
> Page 14. A Technical Report is, indeed, 'official', but it is also
> optional on vendors. Most have stated that they will support at least
> the parts that MPI needs. The current TR is definitely unstable,
> though TYPE(*) and DIMENSION(..) are among the stable parts.

Have we understood correctly that in June 2011, when this TR is
approved by the world Fortran standardization body,
it will be part of the official Fortran standard?


Slide 16 and ticket #236-H: 
> #236. The Fortran term is 'array sections'.

I will use this term when updating the ticket.

> There is no realistic chance of vector subscripts being permitted for
> use as buffers in non-blocking calls, so it's a permanent restriction.

Yes, and we (the MPI Forum) also do not expect changes in the Fortran
standard, i.e., we will describe this as a permanent restriction.
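
For illustration, a minimal sketch (the names and sizes are mine) of
the case that stays forbidden:

  REAL :: a(100)
  INTEGER :: idx(3)
  idx = (/ 5, 50, 95 /)
  ! a(idx) is an array section with a vector subscript; the compiler
  ! passes a contiguous temporary copy, and this copy may already be
  ! released on return from MPI_ISEND, before MPI_WAIT completes
  ! the transfer:
  CALL MPI_ISEND(a(idx), 3, MPI_REAL, dest, tag, comm, request, ierror)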


Slide 17 and ticket #237-I:
> #237. Please be careful. It would be terribly easy to specify more
> than is guaranteed or reliable.

When people are using sequence derived types, is there any risk that
the displacements (memory addresses relative to the beginning of the
derived type) may be modified at compile time, or that the elements are
not stored in the sequence specified in the derived type definition, or ...?

I.e., is my wording on slide 17 correct?
My wording is:
   "Reality is that MPI datatypes work correctly with Fortran 
   sequence derived types, and are not guaranteed to work for 
   Fortran non-sequence derived types."
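
To make this concrete, a minimal sketch (the type and all names are
only illustrative) of what I expect to work with sequence derived types:

   TYPE particle
      SEQUENCE                       ! storage order = declared order
      DOUBLE PRECISION :: x, y, z
      INTEGER :: kind
   END TYPE particle

   TYPE(particle) :: p
   INTEGER :: blocklens(2), types(2), newtype, ierror
   INTEGER(KIND=MPI_ADDRESS_KIND) :: base, disps(2)

   CALL MPI_GET_ADDRESS(p%x, base, ierror)
   CALL MPI_GET_ADDRESS(p%x, disps(1), ierror)
   CALL MPI_GET_ADDRESS(p%kind, disps(2), ierror)
   disps = disps - base              ! displacements relative to the start
   blocklens = (/ 3, 1 /)
   types = (/ MPI_DOUBLE_PRECISION, MPI_INTEGER /)
   CALL MPI_TYPE_CREATE_STRUCT(2, blocklens, disps, types, newtype, ierror)
   CALL MPI_TYPE_COMMIT(newtype, ierror)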


Slide 18 and ticket #238-K:
> #238. The word 'register' should be removed, effectively throughout
> 16.2, as the majority of problems have nothing to do with registers.
> It is misleading.

How should I describe these problems?
  "Statements accessing a buffer, or parts of such a statement,
  may be moved across an MPI call if this buffer is not an actual
  argument of this MPI call."
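
To illustrate the problem I mean (a sketch; all names are illustrative):

  CALL MPI_IRECV(buf, n, MPI_REAL, src, tag, comm, request, ierror)
  CALL MPI_WAIT(request, status, ierror)
  ! buf is not an actual argument of MPI_WAIT; without further
  ! information the compiler may move this load of buf(1) to before
  ! MPI_WAIT, i.e., before the receive is completed:
  x = buf(1)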

> I can find no reference to TARGET in ticket #238.

Yes, because the slides are newer than the ticket.
 
> As I believe that you
> may intend to say, the advice on using one-sided communication should
> be that the TARGET attribute on the window is adequate for active
> one-sided communication, but VOLATILE is needed for passive.

No, I never wanted to say that VOLATILE is needed.
With passive target communication, the target process can access the window
safely only within a pair of MPI_WIN_LOCK/MPI_WIN_UNLOCK calls. And here,
TARGET is also enough.
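
A sketch of what I mean (all names are illustrative); the window memory
carries the TARGET attribute, and the target process accesses its own
window only inside a lock/unlock epoch on its own rank:

  DOUBLE PRECISION, TARGET :: winbuf(100)   ! exposed via MPI_WIN_CREATE
  ...
  ! on the target process:
  CALL MPI_WIN_LOCK(MPI_LOCK_EXCLUSIVE, myrank, 0, win, ierror)
  winbuf(1) = winbuf(1) + 1.0D0
  CALL MPI_WIN_UNLOCK(myrank, win, ierror)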

> Actually, I believe that ASYNCHRONOUS would do as well, but should
> prefer to run that one past WG5.

As long as ASYNCHRONOUS is defined only on the basis of asynchronous
Fortran I/O, it helps in a guaranteed way only as long as there is a
clear "path" that maps each MPI function to functions that internally
use Fortran asynchronous I/O.
Especially with the nonblocking routines, this is easy:
MPI_Isend/Irecv may internally start an asynchronous Fortran I/O
operation, and MPI_Wait finishes this I/O.
With the usage of MPI_BOTTOM in MPI_Send, there is no such analogy,
i.e., the buffer argument is never visible.
Therefore, the Fortran compiler may not implement the needed guarantees.
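
A sketch of this analogy (illustrative names):

  REAL, ASYNCHRONOUS :: buf(n)
  ! as with Fortran asynchronous I/O, the attribute tells the compiler
  ! that buf may be accessed asynchronously between the starting call
  ! and the completing call:
  CALL MPI_IRECV(buf, n, MPI_REAL, src, tag, comm, request, ierror)
  ! ... computation that does not touch buf ...
  CALL MPI_WAIT(request, status, ierror)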

> The proposed solution is incorrect. Even if the variables are stored
> in modules or COMMON blocks, the VOLATILE or TARGET attribute is still
> needed. The data may not go away, but a compiler is allowed to make
> several assumptions that are incompatible with one-sided communication.
> VOLATILE or TARGET is both necessary and sufficient.

I do not understand these arguments.
 
> On this topic, it is NOT possible for MPI implementations to avoid
> this for conforming C programs, with a strict interpretation of the C
> standard, but the C standard is such a mess that there is no point in
> trying to say anything else. And, yes, it really does cause trouble.
> If anyone is interested, I can describe some of the issues, but I
> suggest just ignoring the matter.
> 
> I suggest playing down that DD() trick, as it is VERY deceptive. In
> particular, as soon as a user enables significant optimisation, it is
> likely to trigger global optimisation and inlining, which will stop
> the trick from working! The ways of preventing that are so repulsive
> that you don't want to describe them - and are unreliable anyway.
> 
> MPI_F_SYNC_REG/MPI_SYNC_REG/MPI_FLUSH_REG is a really bad idea. It has
> been tried many times in many interfaces, and has never worked reliably.
> That is both because flushing registers is a compiler and not a library
> issue and because the problems often occur in a calling procedure. For
> a specification that attempts to take this approach and get it right,
> look at the IA64 architecture, especially with regard to unwind
> sections. My executive summary is "Don't go there".

I do not understand why DD or MPI_F_SYNC_REG should not work to
prevent the moving of buffer accesses across an MPI call that does not
have this buffer on its actual argument list, e.g., MPI_Wait
or MPI_Send(MPI_BOTTOM,...).

(Sorry about the different names MPI_F_SYNC_REG/MPI_SYNC_REG/MPI_FLUSH_REG:
 They all mean the same routine.)
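
What I mean is a usage like this sketch (MPI_F_SYNC_REG as proposed in
the ticket; all other names are illustrative):

  CALL MPI_IRECV(buf, n, MPI_REAL, src, tag, comm, request, ierror)
  CALL MPI_WAIT(request, status, ierror)
  ! buf is now an actual argument of an external call, so the compiler
  ! must assume it may be read or written here and cannot move accesses
  ! to buf across this point:
  CALL MPI_F_SYNC_REG(buf)
  x = buf(1)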


Slide 22 and Ticket #242-N:
> #242. Unless I have missed something more than usually subtle, this
> ticket misunderstands the meaning of INTENT(IN). There is no problem
> in specifying it for ALL input-only arguments, as it would not stop
> MPI 'constants' from being passed. It constrains the use of the
> argument and not the actual argument.
> 
> Not specifying INTENT(IN) has a lot of harmful effects, including
> degraded optimisation.
> 
> Why do people believe that there is a problem?

I have only a problem with specifying OUT or INOUT if it is allowed
to call the routine with an actual argument that is one of the 
constants MPI_BOTTOM, MPI_IN_PLACE, MPI_STATUS(ES)_IGNORE, 
MPI_ERRCODES_IGNORE, and MPI_UNWEIGHTED.

Am I correct in expecting that it is harmless to omit INTENT(INOUT)?

If I understand correctly, these special constants are
special common-block variables (i.e., fixed addresses),
and therefore I can be convinced that we can use INTENT
without regard to these constants.

This would allow us to specify INTENT(IN/OUT/INOUT)
also for the buffers.
Would there be any problem in conjunction with TYPE(*), DIMENSION(..)?

All other IN arguments are already specified with INTENT(IN). 
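
To make my question concrete, a sketch (only illustrative; the handle
types and argument names are not yet decided) of a buffer declaration
with both attributes:

  INTERFACE
    SUBROUTINE MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
      TYPE(*), DIMENSION(..), INTENT(IN) :: buf
      INTEGER, INTENT(IN) :: count, datatype, dest, tag, comm
      INTEGER, INTENT(OUT) :: ierror
    END SUBROUTINE MPI_Send
  END INTERFACE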

Slide 26 and Ticket #246-R:
> #246. This is a really good idea because, inter alia, 'Cray pointers'
> are seriously non-portable even to compilers that claim to support
> them.
> However, it's more than a description - it's a change of
> specification.
> 
> Do you need help with the specification of the new interface?

Yes, please.
We want to have two routines:
 - One should be nearly identical to the existing one, i.e., as
   on slide 26.
   Here I need Fortran application code that shows how to use such
   a routine instead of using Cray pointers (a sketch follows below).
 - The second should be usable for allocatable arrays.
   Here, I need the routine's specification and how to use this routine.
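
For the first routine, one possible direction (only a sketch of an
assumption, not a settled specification) would be to return a
TYPE(C_PTR) and let the user associate a Fortran pointer with it via
C_F_POINTER, instead of using Cray pointers:

  USE, INTRINSIC :: ISO_C_BINDING
  TYPE(C_PTR) :: cptr
  REAL, POINTER :: a(:)
  INTEGER(KIND=MPI_ADDRESS_KIND) :: nbytes
  INTEGER :: ierror

  nbytes = 1000 * 4                       ! assuming 4-byte REAL; illustrative
  CALL MPI_ALLOC_MEM(nbytes, MPI_INFO_NULL, cptr, ierror)
  CALL C_F_POINTER(cptr, a, (/ 1000 /))   ! a(1:1000) now usable as a buffer
  ! ... use a in communication calls ...
  CALL MPI_FREE_MEM(a, ierror)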

> Regards,
> Nick Maclaren.

Thank you very much for your detailed comments.
Best regards
Rolf

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)


