[Mpi-forum] Big Fortran hole in MPI-4.0 Embiggening

Wesley Bland work at wesbland.com
Mon Jan 11 09:21:59 CST 2021


I agree with Jeff that this is a non-issue. Bill’s assertion that adding Fortran large-count support is straightforward once you have C support has held true in MPICH development, at least, and no vendor would drop anything just to put MPI 4.0 compatibility on a marketing slide. They’d just put it on the slide anyway. Every implementation out there has some sort of exception, intentional or otherwise, that makes it technically non-compliant. In the end, their users determine 1) whether they care enough to request the feature and 2) whether they care enough to switch to a competing library.
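
To put Bill’s point in concrete terms: the mpi_f08 embiggening boils down to adding, for each routine, a second specific procedure overloaded on the kind of the count argument. A rough sketch of the pattern as it would sit inside the mpi_f08 module (the specific procedure names here are illustrative, not taken from any real implementation):

  interface MPI_Send
    subroutine MPI_Send_default(buf, count, datatype, dest, tag, comm, ierror)
      import :: MPI_Datatype, MPI_Comm
      type(*), dimension(..), intent(in) :: buf
      integer, intent(in) :: count, dest, tag           ! default-kind count
      type(MPI_Datatype), intent(in) :: datatype
      type(MPI_Comm), intent(in) :: comm
      integer, optional, intent(out) :: ierror
    end subroutine MPI_Send_default
    subroutine MPI_Send_large(buf, count, datatype, dest, tag, comm, ierror)
      import :: MPI_Datatype, MPI_Comm, MPI_COUNT_KIND
      type(*), dimension(..), intent(in) :: buf
      integer(MPI_COUNT_KIND), intent(in) :: count      ! embiggened count
      integer, intent(in) :: dest, tag
      type(MPI_Datatype), intent(in) :: datatype
      type(MPI_Comm), intent(in) :: comm
      integer, optional, intent(out) :: ierror
    end subroutine MPI_Send_large
  end interface MPI_Send

A caller opts into large counts simply by declaring the count with INTEGER(KIND=MPI_COUNT_KIND); generic resolution picks the right specific.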

I think the best way forward is to do something like what was proposed when the embiggening/pythonization effort started: separate the bindings from the specification and just say that an MPI implementation has MPI 4.0 C bindings, MPI 3.1 F08 bindings, and MPI 5.0 Python bindings. That way each binding can be updated in whatever way makes sense for it. For 4.0, I don’t think this is an issue that needs to be resolved. Our telling the implementations what they have to do will have essentially no impact anyway.

Thanks,
Wes

> On Jan 11, 2021, at 6:26 AM, Rolf Rabenseifner via mpi-forum <mpi-forum at lists.mpi-forum.org> wrote:
> 
> Dear Jeff and all,
> 
>> I think their users would be very unhappy.
> 
> Yes, users can be unhappy, and compute centers may well recommend 
> another MPI library - at least for any software development.
> 
> I want to recall that MPI-3.1 requires the following for the mpi module:
> 
> "Provide explicit interfaces according to the Fortran routine interface specications.
> This module therefore guarantees compile-time argument checking and allows positional
> and keyword-based argument lists. If an implementation is paired with a
> compiler that either does not support TYPE(*), DIMENSION(..) from TS 29113, or
> is otherwise unable to ignore the types of choice buffers, then the implementation must
> provide explicit interfaces only for MPI routines with no choice buer arguments. See
> Section 17.1.6 for more details."
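> 
> To make concrete what that guarantee buys users, a minimal example (hedged:
> it compiles only against a library that really provides the explicit
> interfaces in the mpi module):
> 
>   program keyword_demo
>     use mpi                  ! explicit interfaces => compile-time checking
>     implicit none
>     integer :: ierr, rank
>     real :: a(100)
>     call MPI_Init(ierr)
>     call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
>     a = real(rank)
>     ! A keyword-based argument list is only legal with an explicit
>     ! interface; the keywords are the MPI-3.1 Fortran dummy names.
>     call MPI_Bcast(buffer=a, count=100, datatype=MPI_REAL, &
>                    root=0, comm=MPI_COMM_WORLD, ierror=ierr)
>     call MPI_Finalize(ierr)
>   end program keyword_demo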
> 
> Although there are no longer any compilers that provide neither of the two methods,
> more than 5 years after MPI-3.1 there are still MPI libraries that do not
> provide keyword-based argument lists with the mpi module,
> even though those same libraries provide such support for the mpi_f08 module.
> This means they demonstrate that they are not MPI-3.1 compliant :-)
> 
> 
> Sometimes implementors ignore the goals and the wording of the MPI standard. 
> 
> 
> And there is also no indication that users want a lower-quality
> mpi module definition.
> 
> Best regards
> Rolf
> 
> 
> ----- Original Message -----
>> From: "Main MPI Forum mailing list" <mpi-forum at lists.mpi-forum.org>
>> To: "Main MPI Forum mailing list" <mpi-forum at lists.mpi-forum.org>
>> Cc: "Jeff Squyres" <jsquyres at cisco.com>
>> Sent: Sunday, January 10, 2021 5:09:55 PM
>> Subject: Re: [Mpi-forum] Big Fortran hole in MPI-4.0 Embiggening
> 
>> I can't imagine a vendor would:
>> 
>> - support mpi_f08
>> - then stop supporting mpi_f08 just so that they can get an "MPI-4.0" release
>> out
>> - then support mpi_f08 again
>> 
>> I think their users would be very unhappy.
>> 
>> 
>> On Jan 10, 2021, at 10:55 AM, William Gropp via mpi-forum <mpi-forum at lists.mpi-forum.org> wrote:
>> 
>> I agree with Dan that this is a big change from the RCM. Further, the approach
>> in MPI has always been to encourage the users to make it clear to the vendors
>> what is acceptable in implementations, especially implementation schedules.
>> Nothing in the standard prohibits implementors from continuing to provide MPI
>> 3.x implementations while they work to provide a full MPI 4.0 implementation.
>> The MPI forum has no enforcement power on the implementors, and I believe this
>> text is unnecessary and will not provide the guarantee that Rolf wants.
>> Further, frankly, once the embiggened C interface is implemented, creating the
>> mpi_f08 version is relatively straightforward.
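>> 
>> To make "relatively straightforward" concrete: once the C library exports the
>> MPI-4.0 large-count entry points (MPI_Send_c and friends), each mpi_f08
>> large-count specific can be a thin shim over the C symbol. A hedged sketch,
>> simplified to assumed-size buffers to sidestep descriptor handling, and
>> assuming MPI_Count is 64 bits and handles are plain C ints:
>> 
>>   subroutine demo_send_big(buf, count, datatype, dest, tag, comm, ierror)
>>     use mpi_f08, only: MPI_Datatype, MPI_Comm, MPI_COUNT_KIND
>>     use iso_c_binding, only: c_int, c_int64_t
>>     implicit none
>>     type(*), dimension(*), intent(in) :: buf  ! spec says DIMENSION(..); simplified
>>     integer(MPI_COUNT_KIND), intent(in) :: count
>>     type(MPI_Datatype), intent(in) :: datatype
>>     integer, intent(in) :: dest, tag
>>     type(MPI_Comm), intent(in) :: comm
>>     integer, optional, intent(out) :: ierror
>>     interface
>>       ! C side: int MPI_Send_c(const void *buf, MPI_Count count, ...)
>>       function c_send(buf, count, datatype, dest, tag, comm) &
>>           bind(C, name="MPI_Send_c") result(ierr)
>>         use iso_c_binding, only: c_int, c_int64_t
>>         type(*), dimension(*), intent(in) :: buf         ! maps to const void *
>>         integer(c_int64_t), value :: count               ! assumes MPI_Count == int64_t
>>         integer(c_int), value :: datatype, dest, tag, comm  ! assumes int handles
>>         integer(c_int) :: ierr
>>       end function c_send
>>     end interface
>>     integer(c_int) :: cerr
>>     cerr = c_send(buf, int(count, c_int64_t), int(datatype%MPI_VAL, c_int), &
>>                   int(dest, c_int), int(tag, c_int), int(comm%MPI_VAL, c_int))
>>     if (present(ierror)) ierror = int(cerr)
>>   end subroutine demo_send_big
>> 
>> The real effort is in descriptor handling for noncontiguous choice buffers and
>> the nonblocking cases, not in any new algorithmic code.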
>> 
>> Bill
>> 
>> William Gropp
>> Director, NCSA
>> Thomas M. Siebel Chair in Computer Science
>> University of Illinois Urbana-Champaign
>> IEEE-CS President-Elect
>> 
>> 
>> On Jan 10, 2021, at 7:44 AM, HOLMES Daniel via mpi-forum <mpi-forum at lists.mpi-forum.org> wrote:
>> 
>> Hi Rolf,
>> 
>> This is a (somewhat contrived, arguably) reason for taking another tiny step
>> towards removing the “mpif.h” method of Fortran support and pushing users and
>> implementations towards preferring the Fortran 2008 interface, which is a
>> direction of travel that I fully support.
>> 
>> I think this might be seen as quite a big change for implementers, especially if
>> it were to occur between the RCM and FRM.
>> 
>> Cheers,
>> Dan.
>> Dr Daniel Holmes PhD
>> Architect (HPC Research)
>> d.holmes at epcc.ed.ac.uk
>> Phone: +44 (0) 131 651 3465
>> Mobile: +44 (0) 7940 524 088
>> Address: Room 2.09, Bayes Centre, 47 Potterrow, Central Area, Edinburgh, EH8 9BT
>> The University of Edinburgh is a charitable body, registered in Scotland, with
>> registration number SC005336.
>> 
>> On 10 Jan 2021, at 12:22, Rolf Rabenseifner via mpi-forum <mpi-forum at lists.mpi-forum.org> wrote:
>> 
>> Dear MPI-Forum members,
>> 
>> MPI-3.1 and MPI-4.0 includes the following rule on top of the 2nd page
>> of the Language Binding chapter:
>> 
>> | MPI implementations providing a Fortran interface must provide
>> | one or both of the following:
>> | - The USE mpi_f08 Fortran support method.
>> | - The USE mpi and INCLUDE 'mpif.h' Fortran support methods.
>> 
>> The embiggening was included only in the C and mpi_f08 bindings.
>> 
>> Most implementors nowadays provide all three MPI Fortran support methods,
>> i.e., the mpi_f08 module, the mpi module, and mpif.h.
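>> 
>> In practice this means that only the first of these three methods can express
>> large counts directly. A small, hedged illustration (the names are mine):
>> 
>>   subroutine bcast_big(a, n)
>>     use mpi_f08
>>     implicit none
>>     real, intent(inout) :: a(*)
>>     integer(MPI_COUNT_KIND), intent(in) :: n
>>     ! With mpi_f08 under MPI-4.0 the generic MPI_Bcast resolves to the
>>     ! large-count specific, because n has kind MPI_COUNT_KIND.
>>     call MPI_Bcast(a, n, MPI_REAL, 0, MPI_COMM_WORLD)
>>     ! With USE mpi or INCLUDE 'mpif.h' the count is default INTEGER, so
>>     ! anything beyond huge(0) (about 2.1e9 elements) needs a workaround,
>>     ! e.g. a contiguous derived datatype.
>>   end subroutine bcast_big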
>> 
>> For all MPI-3.1 libraries that provide all three MPI Fortran support methods,
>> the easiest and fastest way to provide MPI-4.0 for C and Fortran is
>> - to implement the embiggening for C,
>> - to remove the mpi_f08 module,
>> - and maybe months or years later, to provide mpi_f08 again, now embiggened.
>> 
>> This implementation path (removing mpi_f08 from MPI-3.1 and calling the
>> result MPI-4.0, without Fortran embiggening) was of course never intended
>> when we decided to require the embiggened routines only for mpi_f08.
>> The goals were
>> - to not require additional work from the implementors for the old mpi module,
>> - and to convince the users that it is a good idea to make a transition
>> to mpi_f08.
>> 
>> The simplest way to resolve this problem would be to require
>> mpi_f08 for MPI-4.0, i.e., to change the text to
>> 
>> | MPI implementations providing a Fortran interface
>> | - must provide the USE mpi_f08 Fortran support method,
>> | - and additionally may provide both,
>> | the USE mpi and INCLUDE 'mpif.h' Fortran support methods.
>> 
>> What is your opinion?
>> 
>> I expect that we should discuss this next Wednesday at our MPI Forum telcon.
>> 
>> Best regards
>> Rolf
>> 
>> --
>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de .
>> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
>> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
>> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de .
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
> _______________________________________________
> mpi-forum mailing list
> mpi-forum at lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum