[Mpi-forum] Giving up on C11 _Generic

Jed Brown jed at jedbrown.org
Fri Aug 16 19:41:07 CDT 2019

Joseph Schuchart via mpi-forum <mpi-forum at lists.mpi-forum.org> writes:

>> I'm not taking a position on C11 _Generic in the standard, but it would
>> significantly reduce the configure complexity for apps to upgrade to
>> MPI_Count without dropping support for previous standards.
> I don't see how C11 _Generic will make life easier. If an application 
> expects to transfer >2G elements, it will eventually transition to 
> MPI_Count. With C11 _Generic (as originally proposed), checks are 
> required to ensure that both the compiler and the MPI implementation 
> support it, to avoid silent corruption. 

Most apps today do not have code to detect when message sizes exceed int
and work around the limit or fail gracefully.  Instead, they just crash
unpredictably when those limits are exceeded.  I don't see that changing
for the majority of applications, but I do see them switching types to
MPI_Count so that they get correct behavior when C11 generics are
present and the MPI implementation supports them.  They might even build
using -Wconversion (with a C11 compiler) as a diagnostic to check that
they have used MPI_Count everywhere it is needed.  But the fallback
will just rely on silent (default -Wno-conversion) conversion to int,
thereby preserving today's behavior.

It isn't robust, but it is *pragmatic* and will work when the
environment allows.  I see the question as whether the Forum wishes to
implicitly endorse this shortcut by including the C11 generics, or to
leave it to apps to learn about the technique and carry their own
internal header with the _Generic macros.

> Without _Generic, the compiler has no role in this, and compilation
> will fail if the MPI implementation does not offer the MPI_*_x
> calls. The error is thus obvious. The application therefore has to
> have configure checks and wrapper calls to properly support large
> transfers. And if those are not needed, then there is no reason to
> use the MPI_*_x calls in the first place.
