[MPI3 Fortran] Results from the Fortran plenary session yesterday

N.M. Maclaren nmm1 at cam.ac.uk
Wed Oct 13 07:09:07 CDT 2010


On Oct 12 2010, Rolf Rabenseifner wrote:
>
>thank you for your detailed answers.
>
>Sorry that I forgot to mention that the tickets are not yet updated.
>The most up-to-date text is in the slides in my last e-mail.

Thanks.

In the following, I am pretty sure of my facts, but it is easy to miss
something in documents the size of the Fortran and MPI standards.  As
you point out, I have already done so :-) Malcolm Cohen knows the
Fortran rules better than I do.

I have marked some points that I regard as important with ====.


>Slide 17 and ticket #237-I:
>> #237. Please be careful. It would be terribly easy to specify more
>> than is guaranteed or reliable.
>
>When people are using sequence derived types, is there any risk that
>the displacements (memory addresses relative to the beginning of the 
>derived type) may be modified at compile time or are not in
>the sequence of the elements as specified in the derived type or ...?

"Or", mainly.

The displacements and order are specified by the standard, and so are
solid.  What is NOT the case is that the alignments will be the same as
C's (insofar as C's is defined).  No change there.  But, because of
that, it is forbidden to specify both SEQUENCE and BIND(C) [C1501]!

The new Fortran 2008 function C_SIZEOF [15.2.3.7] is more-or-less
guaranteed to work for sequence derived types, but it is only slightly
less safe for a much wider class of derived types.

One thing that sequence types do specify, and other derived types
don't, is that explicit-shape array components are required to be
stored inline, whereas they aren't otherwise.
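As a sketch of the distinction (the type names are hypothetical, and
the kinds in the second type come from ISO_C_BINDING):

    ! Sequence type: component order and displacements are fixed by
    ! the Fortran standard, and x is stored inline, but the alignment
    ! need not match C's.
    TYPE seq_t
        SEQUENCE
        INTEGER :: n
        DOUBLE PRECISION :: x(3)
    END TYPE seq_t

    ! Interoperable type: layout matches the companion C compiler.
    ! Adding SEQUENCE to this one would violate C1501.
    TYPE, BIND(C) :: c_t
        INTEGER(C_INT) :: n
        REAL(C_DOUBLE) :: x(3)
    END TYPE c_t

Per the above, C_SIZEOF can be applied to an object of either type to
get its size in bytes, more-or-less safely.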

====
Something that would help would be a clear statement from the MPI forum
that there is a critical need to be able to pass non-interoperable types
as anonymous storage (i.e. with just an address and length).
====

>I.e., is my wording on slide 17 correct?
>My wording is:
>   "Reality is that MPI datatypes work correctly with Fortran 
>   sequence derived types, and are not guaranteed to work for 
>   Fortran non-sequence derived types."

Not really, but it's hard to do better until the TR stabilises.


>Slide 18 and ticket #238-K:
>> #238. The word 'register' should be removed, effectively throughout
>> 16.2, as the majority of problems have nothing to do with registers.
>> It is misleading.
>
>How should I describe these problems?
>  "Statements accessing a buffer or parts of such a statement 
>  may be moved across an MPI call if this buffer is not an actual 
>  argument on this MPI call."

You need to add "and if the buffer does not have the TARGET,
ASYNCHRONOUS or VOLATILE attributes."

>> I can find no reference to TARGET in ticket #238.
>
>Yes, because the slides are newer than the ticket.

Ah.  Sorry.

>> As I believe that you
>> may intend to say, the advice on using one-sided communication should
>> be that the TARGET attribute on the window is adequate for active
>> one-sided communication, but VOLATILE is needed for passive.
>
> No, I never wanted to say that VOLATILE is needed. With passive target 
> communication, the target process can access the window safely only 
> within a pair MPI_WIN_LOCK/UNLOCK. And here, TARGET also is enough.

====
Oh, no, it isn't!  The MPI_WIN_LOCK/UNLOCK are executed by another
process, so the compiler is entitled to assume that the window doesn't
change unless it does something on that process that might change it.
You need VOLATILE for passive windows, but TARGET is correct for
active-only ones.
====
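In declaration terms, a minimal sketch (the array name and size are
hypothetical):

    ! Active-target window: this process executes MPI_WIN_FENCE or
    ! MPI_WIN_POST/MPI_WIN_WAIT itself, so TARGET is enough.
    DOUBLE PRECISION, TARGET :: win_active(1000)

    ! Passive-target window: MPI_WIN_LOCK/UNLOCK are executed by
    ! ANOTHER process, invisible to this compiler, so VOLATILE is
    ! needed.
    DOUBLE PRECISION, VOLATILE :: win_passive(1000)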

>> Actually, I believe that ASYNCHRONOUS would do as well, but I should
>> prefer to run that one past WG5.
>
>With usage of MPI_BOTTOM in MPI_Send, there is no such analogy,
>i.e., the buffer argument is never visible.
>Therefore, the Fortran compiler may not implement the needed guarantees.

Oops.  You're right.  MPI_BOTTOM is a semantic horror, isn't it?

>> The proposed solution is incorrect. Even if the variables are stored
>> in modules or COMMON blocks, the VOLATILE or TARGET attribute is still
>> needed. The data may not go away, but a compiler is allowed to make
>> several assumptions that are incompatible with one-sided
>> communication.
>> VOLATILE or TARGET is both necessary and sufficient.
>
>I do not understand these arguments.

The easier bit is that TARGET or VOLATILE (for active and passive
windows) is sufficient.

Incidentally, they must be specified for ALL procedures that declare the
buffer, up to the common parent of the start and finish - for example,
for MPI_Isend/MPI_Wait:

Fred -> Joe -> Bert -> Bill -> MPI_Isend
                    <-      <-
                    -> Pete -> Nick
            <-      <-      <-
            -> Alf  -> MPI_Wait

If the buffer is declared in any of those procedures (even if it is
never used) except Fred, it must have the relevant attribute.  This is
definitely true if it is passed as an argument, and may be true if it
is included using USE.  Malcolm might know, but I am not sure.  In any
case, problems are only likely to arise when it is used as an argument.

The harder bit is that putting it in a module or COMMON isn't enough.

COMMON has a nasty gotcha, and SAVE is needed as well - 16.6.6
para. 1(3)(a) was originally there to support overlays, but is relevant
to DLLs as well.  But the real problem is when they are passed as
arguments, and you REALLY don't want to try to describe the rules
on how to do it safely.  They are all sane, but are very complicated.

So I recommend just recommending TARGET or VOLATILE, as appropriate,
saying that the buffer must have a lifetime between start and finish and
have the attribute on ALL declarations during that time, and leaving it
at that.
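In terms of the call tree above, a sketch for the MPI_Isend/MPI_Wait
case (the interfaces are illustrative, not real ones):

    SUBROUTINE Joe(buf)            ! common parent of start and finish
        DOUBLE PRECISION, ASYNCHRONOUS :: buf(*)
        CALL Bert(buf)             ! ... -> Bill -> MPI_Isend
        CALL Alf(buf)              ! -> MPI_Wait
    END SUBROUTINE Joe

    SUBROUTINE Bill(buf)           ! starts the transfer
        DOUBLE PRECISION, ASYNCHRONOUS :: buf(*)
        ! CALL MPI_Isend(buf, ...)
    END SUBROUTINE Bill

    SUBROUTINE Alf(buf)            ! finishes it
        DOUBLE PRECISION, ASYNCHRONOUS :: buf(*)
        ! CALL MPI_Wait(...)
    END SUBROUTINE Alf

Bert needs the attribute as well if it declares the buffer; only Fred
is exempt.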

>I do not understand why DD or MPI_F_SYNC_REG should not work
>to prevent buffer accesses being moved across an MPI call that does not
>have this buffer argument on its actual argument list, e.g., MPI_Wait,
>or MPI_Send(MPI_BOTTOM,...).
>
>(Sorry about the different names MPI_F_SYNC_REG/MPI_SYNC_REG/MPI_FLUSH_REG:
> They all mean the same routine.)

Yes, I deduced that.

Let's deal with DD.  That relies on the compiler assuming that DD
might modify the buffer.

Well, firstly, the compiler may be able to see the code of DD and work
out that this can't happen, which is increasingly common with modern
compilers and high levels of optimisation (the usual buzzword is IPO).
And you REALLY don't want to document how to fool the compiler!

Secondly, it doesn't help, anyway.  If all of the other arguments to
MPI_Recv are local or immutable in some way, then it can move the
statements, because MPI_Recv doesn't use or change buf.  For example:

    val_old = buf
    CALL DD(buf)
    CALL MPI_Recv(MPI_BOTTOM,...)
    CALL DD(buf)
    val_new = buf

can be changed to:

    val_old = buf
    CALL DD(buf)
    CALL DD(buf)
    val_new = buf
    CALL MPI_Recv(MPI_BOTTOM,...)

The same applies to any MPI_SYNC_REG function, UNLESS the compiler
recognises it.  If the compiler does recognise it, you still have
the IA64 unwind section problem that I referred to.

But there's more.  Without ASYNCHRONOUS, TARGET or VOLATILE, a compiler
may copy an object for any reason or none, and some do just that.
Copy-in/copy-out can improve efficiency considerably, which is why it
became popular, but there are other ways in which it can be used.

====
So, please stick to the simple rules:

    ASYNCHRONOUS for non-blocking buffers, from the transfer to the
wait, back to a common parent of them, and at least for every place that
they occur as an argument.
    TARGET for windows that are used for active transfers only, during
an exposure epoch, with the same rules.
    VOLATILE for other one-sided windows, unfortunately for the WHOLE
life of the buffer, with the same rules.
====
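For the first rule, a sketch (assuming the mpi module; the count and
datatype are illustrative):

    DOUBLE PRECISION, ASYNCHRONOUS :: buf(100)
    INTEGER :: request, ierror

    CALL MPI_Irecv(buf, 100, MPI_DOUBLE_PRECISION, source, tag, &
                   comm, request, ierror)
    ! buf must not be referenced or redefined here, and every
    ! declaration of it between these two calls needs ASYNCHRONOUS
    CALL MPI_Wait(request, MPI_STATUS_IGNORE, ierror)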


>Slide 22 and Ticket #242-N:
>> #242. Unless I have missed something more than usually subtle, this
>> ticket misunderstands the meaning of INTENT(IN). There is no problem
>> in
>> specifying it for ALL input-only arguments, as it would not stop MPI
>> 'constants' from being passed. It constrains the use of the argument
>> and not the actual argument.
>> 
>> Not specifying INTENT(IN) has a lot of harmful effects, including
>> degraded optimisation.
>> 
>> Why do people believe that there is a problem?
>
>I have only a problem with specifying OUT or INOUT if it is allowed
>to call the routine with an actual argument that is one of the 
>constants MPI_BOTTOM, MPI_IN_PLACE, MPI_STATUS(ES)_IGNORE, 
>MPI_ERRCODES_IGNORE, and MPI_UNWEIGHTED.

====
Right.  That's correct.  In those cases, I regret that the only
two legal options are INTENT(IN) and no INTENT.
====

>Do I understand correctly that it is harmless to omit INTENT(INOUT)?

It reduces checking, so it's better not to.

>If I understand correctly, these special constants are
>special common block variables (i.e., fixed addresses),
>and therefore I can be convinced that we use INTENT
>without looking at these constants.

INTENT(IN), anyway.  As you say, don't use INTENT(OUT) or
INTENT(INOUT) if those constants can be passed.

>This would allow us to specify INTENT(IN/OUT/INOUT)
>also for the buffers.

====
INTENT(IN) is always safe for read-only arguments, and the others
should be used if those horrible constants are NOT allowed.
====
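To illustrate the consequence (this is a sketch, not the actual
binding text):

    ! count etc. are read-only and never one of the special constants,
    ! so INTENT(IN) is safe; buf and status must have no INTENT
    ! because MPI_BOTTOM and MPI_STATUS_IGNORE may be passed for them.
    SUBROUTINE MPI_Recv(buf, count, datatype, source, tag, comm, &
                        status, ierror)
        TYPE(*), DIMENSION(..) :: buf           ! no INTENT
        INTEGER, INTENT(IN) :: count, datatype, source, tag, comm
        INTEGER :: status(MPI_STATUS_SIZE)      ! no INTENT
        INTEGER, INTENT(OUT) :: ierror
    END SUBROUTINE MPI_Recv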

>Would there be any problem in conjunction with TYPE(*),DIMENSION(..) ?

No.  INTENT is entirely separate.

>All other IN arguments are already specified with INTENT(IN). 

Yes, which is good.


>Slide 26 and Ticket #246-R
>> #246. This is a really good idea because, inter alia, 'Cray pointers'
>> are seriously non-portable even to compilers that claim to support
>> them.
>> However, it's more than a description - it's a change of
>> specification.
>> 
>> Do you need help with the specification of the new interface?
>
>Yes, please.
>
>We want to have two routines:
> - One should be nearly identical to the existing, i.e., as 
>   on slide 26.
>   Here I need a Fortran application code to show how to use such
>   a routine instead of using the Cray Pointers.
> - The second should be usable for allocatable arrays.
>   Here, I need the routine's specification and how to use this routine.

I will draft one and post.
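In the meantime, one plausible shape for the second routine, assuming
the base address comes back as a TYPE(C_PTR) (a sketch only, not the
draft):

    USE, INTRINSIC :: ISO_C_BINDING
    TYPE(C_PTR) :: baseptr
    DOUBLE PRECISION, POINTER :: a(:)
    INTEGER(KIND=MPI_ADDRESS_KIND) :: nbytes
    INTEGER :: ierror

    nbytes = 1000 * 8
    CALL MPI_Alloc_mem(nbytes, MPI_INFO_NULL, baseptr, ierror)
    CALL C_F_POINTER(baseptr, a, [1000])   ! a now maps the allocation
    ! ... use a(1:1000) as an ordinary pointer array ...
    CALL MPI_Free_mem(a, ierror)

C_F_POINTER is standard Fortran 2003, so this avoids Cray pointers
entirely.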


Regards,
Nick.



