[Mpi-forum] Good luck with MPI 3.0

Richard Treumann treumann at us.ibm.com
Mon Dec 6 08:55:42 CST 2010

To MPI Forum members

I have decided to retire from IBM at the end of 2010 so I will no longer 
be involved in the Forum.

I have been involved with MPI and with the MPI Forum since the days when 
we were debating whether an MPI 2.0 effort was really needed. I attended 
almost every MPI 2.0 meeting as IBM's representative from development, 
working with Marc Snir and people from IBM Research that he brought in. In 
the MPI 2.1 and 2.2 effort, I have been part of a broader group of IBM 
representatives and only attended a few meetings.

I am proud of what the Forum accomplished over almost two decades and of my 
small part in that. I have met a lot of amazingly creative people. I will 
miss looking at the ideas people bring forward, thinking through the 
implications and helping to refine them to enough clarity to be either 
adopted or tabled. 

I wish all of you well as you try to work out how the MPI model should 
adapt to scales of parallelism that were barely imaginable in the early 
days.

Dealing with a million or more hardware threads without requiring every 
thread to maintain local state about all million of them is really hard. 

With such huge numbers of hardware threads, the failure of a thread 
(processor, node, link) in the lifetime of an application run becomes 
common and looking for ways to pick up and keep going becomes 
irresistible.  The trouble is, it is really, really hard and is at odds 
with the original MPI philosophy, which was built on the idea that 
applications should be designed around algorithms with the freedom to 
assume reliability.  (If your algorithm was good, you could assume 
hardware failures were too rare to matter.)

Good wishes to MPI as a standard and to all of you personally.


Dick Treumann  -  MPI Team 
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846         Fax (845) 433-8363