[Mpi-forum] Discussion points from the MPI-<next> discussion today
jhammond at alcf.anl.gov
Sat Sep 22 19:28:29 CDT 2012
On Sat, Sep 22, 2012 at 10:17 AM, N.M. Maclaren <nmm1 at cam.ac.uk> wrote:
> How would you explain non-deterministic effects in quantum mechanics
> to someone who was not prepared to move beyond a 19th century mindset?
I'm sorry. I did not realize that you were the Einstein of parallel
computing. Forgive me for not being sufficiently reverent.
> You have been given clear references to the sequence point rules, which
> make it clear that there must be a sequence point between updates.
I'm not going to read the entire C standard in response to your
Chicken Little proclamations about the complete invalidity of MPI-RMA.
Try to make an actual argument using actual code if you want to be
taken seriously.
> There is and can be no such sequence point when a location is updated
> by passive one-sided communication and is later used in the process
> that owns the data. Muttering about fences is irrelevant, because the
> same applies even when you have them. The other MPI facilities were
> carefully designed to ensure that there IS such a sequence point.
Clearly, you are right and everyone else is wrong, including everyone
who develops Linux. Memory fences (and related operations) are
sufficient to implement consistent shared memory parallel programs.
Deal with it.
Please tell me why MPI_WIN_LOCK and MPI_WIN_UNLOCK do not sufficiently
define a synchronization model that allows for valid use of RMA.
> You are clearly thinking entirely in terms of trivial problems, where
> failure is reliable and deterministic. Well, I have news for you.
Nope. Sorry. Not thinking about trivial problems. Focused on
scientific applications totaling 5M lines of code on the biggest
systems in the world.
Unfortunately, despite my repeated prodding, you do not have news for
me because you continue to not provide a meaningful level of detail in
your "the sky is falling!" proclamations about MPI.
> Shared memory parallelism is not like that, and we didn't put all that
> effort into the wording of the various standards for no reason (and I
> have been involved in all of C, C++ and Fortran in this area). You
I'm involved in the UPC spec effort, but that doesn't mean I understand
UPC inside and out. If nothing else, your involvement in all three
language efforts suggests you can't possibly understand any of them
sufficiently, since I believe that no one on earth can keep the
thousands of pages of Fortran, C, and C++ specification documents
straight in their head.
"Jack of all trades, master of none" comes to mind here. When was the
last time you even came to the MPI Forum anyway? Cambridge hasn't
voted in the last three years that I know of.
> won't believe me, but you will see similar remarks here:
This is some dude's homepage. What did you expect me to find here? I
did like the hiking pictures.
> As I said, I have exposed those issues on many systems, including
> the SGI Origin, SunFire F15K and others - and they were not bugs in
> their implementations. Could I guarantee to repeat them on other
Did you read the source of each implementation from beginning to end?
If not, I call bollocks on your claim of correctness. SGI MPI-RMA is
far from a beacon of standard-compliance even today.
> systems? Probably, but I would need to be hands-on to a suitably
> large system and experiment with hammering it to hell and back until
> I found a partially reproducible one. Even with access, I have much
> better things to do. Can I give you a simple, repeatable one? Don't
You have nothing better to do than troll this list with
unsubstantiated rantings. The least you could do is waste some of
your time coming up with evidence.
> be silly. The issue you have missed is that this happens with low
You are the silly person for repeatedly failing to deliver on your
grandiose proclamations with logical arguments and code.
> probability in real programs, but most people never track it down.
> Enough is enough.
What is enough? Unless you're the Almighty God of Heaven, you have to
back up your statements with logic and/or evidence. I've seen nothing
of the sort so far.
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381