[mpiwg-hybridpm] First draft of Continuations section

Joachim Protze protze at itc.rwth-aachen.de
Tue Aug 17 16:20:30 CDT 2021


Hi Joseph, all,

I played around with getting this running in Fortran a while ago. Here
is the code I wrote based on my initial interface:

https://github.com/jprotze/fortran-detach/blob/main/detach-mpi/test-mpi-detach-loop.f90

I ran into the issue that one of the few implementations supporting
detached tasks (Cray) did not allow me to take the address of the OpenMP
function - this was not a Fortran-specific issue. GCC 11 works both
ways. Interestingly, ifort still doesn't seem to support detached tasks,
although the feature was originally an Intel proposal.
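
To make this concrete, here is a minimal, self-contained sketch of the
pattern (OpenMP 5.0 detached tasks). The bind(C) wrapper is only needed
where c_funloc cannot be applied to omp_fulfill_event directly, and it
assumes that omp_event_handle_kind is a C-interoperable integer kind:

  module fulfill_mod
    use, intrinsic :: iso_c_binding
    use omp_lib
    implicit none
  contains
    ! bind(C) wrapper so its address matches a C callback slot; assumes
    ! omp_event_handle_kind is a C-interoperable integer kind
    subroutine fulfill_cb(ev) bind(C)
      integer(omp_event_handle_kind), value :: ev
      call omp_fulfill_event(ev)
    end subroutine fulfill_cb
  end module fulfill_mod

  program detach_demo
    use, intrinsic :: iso_c_binding
    use omp_lib
    use fulfill_mod
    implicit none
    integer(omp_event_handle_kind) :: ev
    type(c_funptr) :: cb

    cb = c_funloc(fulfill_cb)           ! the wrapper works everywhere
    ! cb = c_funloc(omp_fulfill_event)  ! rejected by Cray, fine with GCC

    !$omp parallel
    !$omp single
    !$omp task detach(ev)
    print *, 'detached task body'
    !$omp end task
    ! a real code would hand cb (and ev) to MPI as a completion callback;
    ! for this demo, fulfill the event directly so the task can complete:
    call omp_fulfill_event(ev)
    !$omp end single
    !$omp end parallel
  end program detach_demo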

Best
Joachim

Am 04.08.21 um 20:02 schrieb Joseph Schuchart via mpiwg-hybridpm:
> Thanks Jeff! I am less worried about the out-of-band update of the
> status objects than about the whole business of callbacks and function
> pointers. The only place in the MPI standard that uses callbacks right
> now is the tools chapter, and that is explicitly C-only. Looking at
> https://scicomp.stackexchange.com/a/286 it seems to be possible to
> pass function pointers around in Fortran, but the comments suggest
> that it's not compatible with C?
> 
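Regarding the C compatibility: since Fortran 2003, a procedure declared
bind(C) is interoperable with C, and c_funloc yields a C-compatible
function pointer for it. Here is a minimal sketch of that route (all
names are invented for illustration):

  module cb_example
    use, intrinsic :: iso_c_binding
    implicit none
  contains
    ! matches a C callback of type "void (*)(void *)"
    subroutine my_cb(user_data) bind(C)
      type(c_ptr), value :: user_data
      integer, pointer :: n
      call c_f_pointer(user_data, n)
      print *, 'callback saw', n
    end subroutine my_cb
  end module cb_example

  program funptr_demo
    use, intrinsic :: iso_c_binding
    use cb_example
    implicit none
    type(c_funptr) :: fptr
    integer, target :: n = 42
    procedure(my_cb), pointer :: p

    fptr = c_funloc(my_cb)   ! C-compatible function pointer
    ! fptr and c_loc(n) could now be handed to a C API; to show the
    ! round trip, convert the pointer back and invoke it from Fortran:
    call c_f_procpointer(fptr, p)
    call p(c_loc(n))
  end program funptr_demo
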
> Thanks
> Joseph
> 
> On 8/4/21 1:00 PM, Jeff Hammond wrote:
>> Regarding Fortran…
>>
>> In theory, you need ASYNCHRONOUS/VOLATILE attributes on visible state
>> that’s updated out of band.
>>
>> In practice, MPI implementations have done this with nonblocking
>> send-recv for 25 years and it hasn’t been an issue.
>>
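For completeness, the "in theory" version for a plain nonblocking
receive looks roughly like this; the ASYNCHRONOUS attribute tells the
compiler that the buffers may change outside of Fortran's view between
the start call and the completing wait (a minimal self-send sketch):

  program async_attr
    use mpi_f08
    implicit none
    real, asynchronous :: buf(4)             ! updated out of band
    real, asynchronous :: src(4) = [1., 2., 3., 4.]
    type(MPI_Request) :: reqs(2)
    integer :: rank

    call MPI_Init()
    call MPI_Comm_rank(MPI_COMM_WORLD, rank)
    call MPI_Irecv(buf, 4, MPI_REAL, rank, 0, MPI_COMM_WORLD, reqs(1))
    call MPI_Isend(src, 4, MPI_REAL, rank, 0, MPI_COMM_WORLD, reqs(2))
    ! between Irecv and Waitall, buf changes outside the compiler's
    ! view; ASYNCHRONOUS keeps it from caching buf across the calls
    call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE)
    print *, rank, buf
    call MPI_Finalize()
  end program async_attr
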
>>> On Aug 4, 2021, at 7:36 PM, Joseph Schuchart via mpiwg-hybridpm
>>> <mpiwg-hybridpm at lists.mpi-forum.org> wrote:
>>>
>>> Dear all,
>>>
>>> I have written and attached a first draft of the continuations
>>> section that I would like to discuss in one of the upcoming meetings.
>>> Right now it is embedded at the end of Section 3 (Point-to-Point,
>>> following test and wait, etc.), where I felt it would fit well; I'm
>>> open to other suggestions, though.
>>>
>>> There are still some open TODOs that I'd like to discuss. In
>>> particular, I'm grappling with the handling of statuses. As suggested
>>> during previous discussions, I removed the flag argument from
>>> MPI_Continue so that the MPI implementation is allowed to invoke the
>>> continuations directly (though this can be controlled through an info
>>> key). That means that after the call to MPI_Continue there is no good
>>> reason to inspect the statuses anymore. However, we may still want
>>> the user to provide the status object(s) in order to avoid exposing
>>> implementation-internal memory in the callback. But then I am worried
>>> that this becomes a source of errors, with users providing pointers
>>> to stack memory that goes out of scope before the continuation is
>>> invoked.
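
The stack-lifetime concern can be made concrete with a small
hypothetical sketch; the registration routine below is invented purely
for illustration and is not the draft's interface:

  module cont_hazard
    use, intrinsic :: iso_c_binding
    use mpi_f08
    implicit none
    ! stands in for the MPI library remembering the user's status pointer
    type(c_ptr) :: saved_status = c_null_ptr
  contains
    ! hypothetical registration call, invented for this sketch
    subroutine register_continuation(status)
      type(MPI_Status), target, intent(inout) :: status
      saved_status = c_loc(status)   ! to be written at completion time
    end subroutine register_continuation

    subroutine start_op()
      type(MPI_Status), target :: status   ! stack storage!
      call register_continuation(status)
      ! start_op returns here: status goes out of scope, but
      ! saved_status still points at it, so a continuation invoked
      ! later would write to dead stack memory
    end subroutine start_op
  end module cont_hazard

  program demo
    use cont_hazard
    implicit none
    call start_op()   ! after this returns, saved_status dangles
  end program demo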
>>>
>>> Also, I am unsure whether and how this whole mechanism works with
>>> Fortran. The Python generator happily generates the interface, but I
>>> have no idea whether that is correct...
>>>
>>> Looking forward to discussing this on the call!
>>>
>>> Cheers
>>> Joseph
>>> <mpi40-report-continuations.pdf>
> 


-- 
Dipl.-Inf. Joachim Protze

IT Center
Group: High Performance Computing
Division: Computational Science and Engineering
RWTH Aachen University
Seffenter Weg 23
D-52074 Aachen (Germany)
Tel: +49 241 80-24765
Fax: +49 241 80-624765
protze at itc.rwth-aachen.de
www.itc.rwth-aachen.de
