[MPI3 Fortran] [Fwd: Library-based ASYNCHRONOUS I/O and SYNC MEMORY]

Dan Nagle dannagle at verizon.net
Tue Sep 9 18:15:16 CDT 2008


Hi,

Let's suppose, for simplicity, that the call to the routine
that starts the transfer and the call to wait are in the
same scoping unit (say, a subroutine).
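
For concreteness, a minimal sketch of that pattern, using
the mpif.h-era bindings (the names exchange, n, dest, and
tag are placeholders):

subroutine exchange( buf, n, dest, tag)
    include 'mpif.h'
    integer, intent( in) :: n, dest, tag
    real, intent( in out) :: buf( n)
    integer :: request, status( mpi_status_size), ierror

    ! the transfer starts here and proceeds asynchronously
    call mpi_isend( buf, n, mpi_real, dest, tag, &
                    mpi_comm_world, request, ierror)
    ! the library may still be reading buf anywhere in here
    call mpi_wait( request, status, ierror)
    ! from here on, buf is ordinary data again
end subroutine exchange

Note that buf appears on the initiating call, but only
request appears on the wait.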

Putting an attribute on the name of the buffer in that scope
is wrong, because an attribute applies throughout the scoping
unit, so it would also apply before the initiating call and
after the wait, where the buffer is ordinary data.

Using volatile for the buffer argument in the interface
of the initiating call is wrong because that tells the compiler
to reload the buffer from memory at every reference, and
the last thing anyone wants is to reload the buffer
before the wait call.
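
For reference, that rejected spelling would put a volatile
dummy argument in the interface block, something like:

interface

subroutine mpi_isend( buffer, ... , id, ...)
    <type>, ..., volatile :: buffer
    ...
end subroutine mpi_isend

end interface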

Using volatile on the wait call is not an option
because the buffer name isn't on the wait call.

Trying to apply volatile to every variable on the initial call
is wrong, because most of them aren't volatile.

Using asynchronous is wrong because asynchronous already
has a meaning within the Fortran i/o subsystem, and
conflating that usage with external libraries seems
to be A Bad Thing (to me at least).
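
For comparison, here's the existing Fortran 2003 i/o meaning,
where the wait statement is connected to the pending transfer
by an id= value:

real, asynchronous :: buf( 1000)
integer :: xfer

open( unit=10, file='data.bin', form='unformatted', &
      access='stream', asynchronous='yes')
write( unit=10, asynchronous='yes', id=xfer) buf
! buf must not be redefined while the transfer is pending
wait( unit=10, id=xfer)

Note that the connection runs through the id, not through
the buffer name, which is the same shape of problem as
the question below.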

The question is:  How to *connect* the initial call
with the wait call when the buffer variable doesn't appear
on the wait call?

I think one must use some kind of argument intent,
since we're describing the use to which the buffer variable
will be put.  But I'm flexible on spelling.

I think this is a difficult question, and I really don't
want to be rushed into an answer, given that Fortran
and MPI have coexisted, if in a somewhat ad hoc fashion,
for some time.

If rushed to publish something (due to the emergency :-) ),
it would look something like this:

interface

subroutine mpi_isend( buffer, ... , id, ...)
    <type>, ..., intent( in asynch) :: buffer
    ...
    integer, intent( out wait( buffer) ) :: id
    ...
end subroutine mpi_isend

subroutine mpi_irecv( buffer, ..., id, ...)
    <type>, ..., intent( out asynch) :: buffer
    ...
    integer, intent( out wait( buffer) ) :: id
    ...
end subroutine mpi_irecv

subroutine mpi_wait( ..., id, ...)
    ...
    integer, intent( in wait) :: id
    ...
end subroutine mpi_wait

end interface

The rules are that a dummy argument subject to asynchronous
processing *after the return of the call* has the asynch
intent.  A wait intent with the out specifier declares
which dummy argument is awaited.  A wait intent with an in specifier
declares that the routine waits for the event the actual argument
signals.
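
Under that (strictly hypothetical) spelling, a caller
would look something like:

real :: buf( n)
integer :: id

call mpi_isend( buf, ..., id, ...)
! buf has asynch status here; the compiler must not move
! references to buf past the wait below, because id was
! declared wait( buffer) on the initiating call
call mpi_wait( ..., id, ...)
! the event id signals has completed; buf is ordinary again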

This is not a finished work, just a thought.
I'm not at all sure I like the spelling.  This is a hack.
And I'm certainly not ready to write this into a paper!

On Sep 9, 2008, at 3:02 PM, Iain Bason wrote:

>
> On Sep 9, 2008, at 1:12 PM, Dan Nagle wrote:
>
>> Hi,
>>
>> On Sep 9, 2008, at 12:46 PM, Iain Bason wrote:
>>>
>>> Well, my opinion at present is that we should allow the VOLATILE  
>>> attribute to be specified for procedures.  Any volatile procedure  
>>> would be presumed able to modify any variable.  Any procedure that  
>>> calls a volatile procedure must itself have the VOLATILE attribute.
>>
>> Volatile has already been proposed for procedures,
>> with other semantics.
>
> So use a different spelling.
>
>> Volatile is far too big a hammer, IMHO.
>
> Do you mean that preventing any code motion across a call to  
> MPI_WAIT is too big a hammer, or is this just a reaction to the word  
> "volatile"?
>
> If the former, it would be a good idea to figure out what the hammer  
> is squashing that you don't want squashed.  It's easy to say it is  
> too big a hammer.  It is harder to show it.
>
>>> That's just a sketch, of course, but making mpi_wait volatile  
>>> would inhibit any code motion across a call to it.
>>
>> The only code motion needing inhibition is code motion
>> involving actual arguments.
>
> I don't disagree.  However, I question the need to specify that to  
> the compiler.  I suspect that such specification will be difficult  
> to design, and won't gain you any noticeable performance.
>
>>> The question is how much does the compiler have to know?  I  
>>> contend it needs to know a lot less than the programmer.
>>
>> For the present, that may be true.  As memory hierarchies
>> become more complex, I'm not so sure.
>
> I'm having trouble seeing how complex memory hierarchies relate to  
> asynchronous procedures.  For HPF it was kind of a nightmare, but  
> isn't MPI by its nature putting that burden on the programmer's  
> shoulders?
>
> Iain
>

-- 
Cheers!

Dan Nagle






