[Mpi-forum] Giving up on C11 _Generic

Joseph Schuchart schuchart at hlrs.de
Thu Aug 15 03:44:08 CDT 2019


Jed,

On 8/13/19 5:54 AM, Jed Brown via mpi-forum wrote:
> "Jeff Squyres \(jsquyres\) via mpi-forum" <mpi-forum at lists.mpi-forum.org> writes:
> 
>> Let me ask a simple question: how will users write portable MPI programs in C with large count values?
>>
>> Answer: they will explicitly call MPI_Send_x(), and not rely on C11 _Generic.
> 
> Few packages will accept a hard dependency on MPI-4 for at least 10
> years.  MS-MPI still doesn't fully support MPI-2.1, for example, and
> PETSc only recently began requiring MPI-2.0.

That is a sad truth: I just had a colleague ask me about a >6-year-old 
Open MPI release...

>
> I'm not taking a position on C11 _Generic in the standard, but it would
> significantly reduce the configure complexity for apps to upgrade to
> MPI_Count without dropping support for previous standards.

I don't see how C11 _Generic will make life easier. If an application 
expects to transfer >2G elements, it will eventually transition to 
MPI_Count. With C11 _Generic (as originally proposed), checks are 
required to ensure that both the compiler and the MPI implementation 
support it, otherwise counts can be silently corrupted. Without 
_Generic, the compiler plays no role: compilation simply fails if the 
MPI implementation does not offer the MPI_*_x calls, so the error is 
obvious. Either way, the application needs configure checks and 
wrapper calls to properly support large transfers. And if large 
transfers are not needed, there is no reason to use the MPI_*_x calls 
in the first place.
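
To make that concrete, here is a rough, purely illustrative sketch of 
the two directions (this is not the proposed standard text; the guard 
macros USE_C11_GENERIC and HAVE_MPI_LARGE_COUNT and the wrapper 
app_send are made-up names for this example, and MPI_Send_x is the 
name used in the proposal under discussion):

    #include <limits.h>
    #include <mpi.h>

    /* Roughly what a _Generic overload in mpi.h amounts to: dispatch on
     * the static type of the count argument. If this macro is not in
     * effect (pre-C11 compiler, or an MPI without it), the very same
     * call still compiles against the int-count MPI_Send, and an
     * MPI_Count count is silently truncated; that is the corruption
     * case mentioned above. */
    #ifdef USE_C11_GENERIC
    #define MPI_Send(buf, count, dtype, dest, tag, comm)              \
        _Generic((count),                                             \
                 MPI_Count: MPI_Send_x,                               \
                 default:   MPI_Send)(buf, count, dtype, dest, tag, comm)
    #endif

    /* The configure-check-plus-wrapper alternative on the application
     * side; the HAVE_MPI_LARGE_COUNT check (set by the application's
     * configure script) keeps a direct MPI_Send_x call out of builds
     * where it would not even compile: */
    static inline int app_send(const void *buf, MPI_Count count,
                               MPI_Datatype dtype, int dest, int tag,
                               MPI_Comm comm)
    {
    #ifdef HAVE_MPI_LARGE_COUNT
        return MPI_Send_x(buf, count, dtype, dest, tag, comm);
    #else
        if (count > INT_MAX)             /* refuse rather than truncate */
            MPI_Abort(comm, 1);
        return MPI_Send(buf, (int)count, dtype, dest, tag, comm);
    #endif
    }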

The point Jeff is making below is totally valid.

Joseph

> 
>> Which then raises the question: what's the point of using C11
>> _Generic?  Its main functionality can lead to [potentially] silent
>> run-time errors, and/or require additional error checking in every
>> single code path by the implementation.  That just seems like bad
>> design, IMNSHO.  That's why the WG decided to bring this to the Forum
>> list (especially given the compressed timeframe for MPI-4).
> 