[Mpi-forum] Discussion points from the MPI-<next> discussion today

N.M. Maclaren nmm1 at cam.ac.uk
Fri Sep 21 13:19:18 CDT 2012


On Sep 21 2012, Jeff Hammond wrote:
>
>Most of the criticism here is spurious and would have sunk either
>MPI_Irecv (which is just MPI_Irecv_reduce with MPI_REPLACE - yes I
>know this is for MPI_Accumulate only) or MPI_Ireduce.  I believe all
>these comments apply to MPI_Ireduce with a user-defined reduction and
>that ship has sailed.

No, they most definitely do NOT apply to MPI_Irecv, because it does not
call user code.  I accept that they apply to MPI_Ireduce, though one
could argue that they need not, because many optimisations of
MPI_Ireduce avoid calling the user-defined operator until the wait.
And MPI could fix its current incompatibility with the language
standards by specifying exactly that.
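
To make that concrete, here is a minimal sketch in C (the buffers,
'count' and an operator my_op created with MPI_Op_create are my own
assumptions for illustration, not anything from the proposal); an
implementation is free to defer every invocation of my_op to the wait,
and specifying that would keep MPI_Ireduce within the language rules:

    /* Assumed context: sendbuf/recvbuf of 'count' doubles, and my_op
       previously created with MPI_Op_create. */
    MPI_Request req;
    MPI_Ireduce(sendbuf, recvbuf, count, MPI_DOUBLE, my_op, 0,
                MPI_COMM_WORLD, &req);
    /* ... unrelated work; nothing obliges the operator to have run ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* the operator may run only
                                           here, on the calling thread */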

But, even if they do, does making an error once justify making it again?

>For ANY nonblocking operation, nothing is guaranteed to happen until
>the MPI_Wait (or equivalent) is called.  The MPI standard NEVER
>specifies when any background activity has to occur.  This is entirely
>implementation defined.  It is sufficient to have everything happen
>during the synchronization call (e.g. MPI_Wait).

That doesn't help, for the reason given above.

MPI had no option but to go outside the Fortran and C standards for
choice arguments (in the former) and for non-blocking transfer (in
both), but both standards have since been updated to let those aspects
of MPI operate cleanly (though, in the case of C, the fix lies in the
interpretation rather than the wording).  But it is very disturbing
that MPI has started to specify facilities that are seriously
incompatible with both language standards and with many implementations
of them.

I will not follow up on this aspect after this message.



On Sep 21 2012, Jed Brown wrote:
>
>> On the contrary - that is essential for any kind of sane specification
>> or implementation of MPI_Irecv_reduce, just as it is for one-sided.
>> Sorry, but that's needed to get even plausible conformance with any of
>> the languages MPI is likely to be used from.  MPI_Recv_reduce doesn't
>> have the same problems.
>
>> The point is that none of them allow more-or-less arbitrary functions
>> to be called asynchronously, and that has been horribly sick in every
>> modern system that I have looked into in any depth.  It used to work
>> on some mainframes, but hasn't worked reliably since.  That is precisely
>> why POSIX has deprecated calling signal handlers asynchronously.  Please
>> don't perpetrate another feature like passive one-sided!
>
>This is totally different than passive one-sided because it has a request
>and isn't guaranteed to make progress when not in the MPI stack. An
>implementation using comm threads also need not use interrupts.

Er, no.  That's true in theory, but it is NOT true in practice :-(

If you think that the reduction can be done in another thread, then I am
afraid that you are mistaken.  It can't be, any more than passive
one-sided can be (though I agree that the problems are VASTLY less nasty
for MPI_Irecv_reduce).  It probably could be if you required the program
to be run with multiple threads and to dedicate one thread for the use
of MPI_Irecv_reduce, but that's a non-starter for anything even
approaching standards conformance or portability.

I would rather not get started on the utter ghastliness of the multiple
and incompatible threading specifications, but Fortran doesn't support
threading at all.  Even worse, all of the other languages use an
explicit threading model and require both sides to synchronise even
when one side only ever reads an object.  So the user-defined operator
has to know that it may be run in another thread and has to synchronise
appropriately, even when it is itself entirely pure.  And the program
has to know that MPI is using a thread (and which one).
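
As a hedged illustration (the operator and the shared 'scale' variable
are invented for the example, not taken from any proposal): an operator
that looks harmless can read state that the application thread is
updating, and if MPI invokes it from an internal progress thread the
languages give no guarantees without explicit synchronisation on both
sides.

    static double scale;   /* set by the application thread */

    static void scaled_sum(void *in, void *inout, int *len,
                           MPI_Datatype *dt)
    {
        double *a = in, *b = inout;
        /* If this runs on an MPI-internal thread while the application
           thread writes 'scale', the read below is a data race unless
           both sides synchronise; and even a genuinely pure operator
           needs that handshake for the contents of its input buffers
           to be guaranteed visible. */
        for (int i = 0; i < *len; i++)
            b[i] += scale * a[i];
    }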

>A lot of the controversy seems to come down to not trusting the user to be
>able to write a pure function that is actually pure.

It has nothing whatsoever to do with trusting the user; it is about
what the languages specify and therefore what is likely to be
implemented with any degree of reliability.  Fortran has the concept of
a PURE function, but it does NOT specify that one can be called in the
way this needs.  Even worse, almost no other currently relevant
language does; in particular, C and C++ most assuredly do not.

>You should realize
>that in many cases (including the most important ones to me), the MPI_Op is
>just SUM or MAX, but applied to datatypes that MPI does not cover (e.g.
>quad precision).  ...

Well, obviously.  But that doesn't help.
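
For concreteness only, the sort of operator Jed means looks roughly
like this (a sketch; the names, and the use of GCC's __float128, are my
assumptions for illustration).  However simple it is, the issue above
is about who may call it and on which thread, not about how it is
written:

    #include <mpi.h>

    static void qsum(void *in, void *inout, int *len, MPI_Datatype *dt)
    {
        __float128 *a = in, *b = inout;
        for (int i = 0; i < *len; i++)
            b[i] += a[i];          /* a plain SUM, just not an MPI type */
    }

    static void make_quad_op(MPI_Datatype *qtype, MPI_Op *qop)
    {
        MPI_Type_contiguous(sizeof(__float128), MPI_BYTE, qtype);
        MPI_Type_commit(qtype);
        MPI_Op_create(qsum, 1 /* commutative */, qop);
    }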


Regards,
Nick Maclaren.



