<div style="font-family: Helvetica; font-size: 13px; ">I think you're right. We just need to make sure we're careful. In that case, I withdraw my suggestion.<br></div>
<p style="color: #A0A0A8;">On Thursday, August 1, 2013 at 5:09 AM, George Bosilca wrote:</p>
<blockquote type="cite" style="border-left-style:solid;border-width:1px;margin-left:0px;padding-left:10px;">
> On Wed, Jul 31, 2013 at 7:28 PM, Pavan Balaji <balaji@mcs.anl.gov> wrote:
>
>> George,
>>
>> I'm not talking about RMA here. I'm just talking about send/recv. I'm also not talking about simultaneous receives posted on the same buffer; they are separated by failure notifications, COMM_AGREE, or whatever else you need. I think Sayantan got the point Wesley mentioned, and his solution is correct (assuming enough support from the network to do so), though I'm not convinced it doesn't add overhead. Sayantan should comment on this.
>
> If you're not talking about RMA, then Sayantan is absolutely right: this is a non-issue, as any message from a process considered dead should be discarded. In fact, the current ULFM implementation already takes such cases into account, as they can arise in any multi-rail situation.
>
>   George.
>
>> With respect to the K Computer, I don't have a link. This was my understanding from what the Fujitsu folks mentioned (and was the reason why they didn't want to release their low-level API: since the hardware has no protection, anyone could write to anyone's memory). And I'm not trying to prove that the K computer is not good enough in any way; that was just an example. My point was only that you should consider that not all networks have such protection capabilities.
>>
>>  -- Pavan
>>
>> On 07/31/2013 11:50 AM, George Bosilca wrote:
>>
>>> On Tue, Jul 30, 2013 at 10:47 PM, Pavan Balaji <balaji@mcs.anl.gov> wrote:
>>>
>>>> Hmm. Yes, you are right. Generating different per-process rkeys is an option on IB, though that's obviously less scalable than a single rkey and hurts performance too, because a new rkey has to be generated for each process. Even more of a reason for FT to have a requested/provided option like threads.
>>>
>>> I fail to understand the scenario that allows you to reach such a conclusion. You want to have an MPI_RECV with a buffer where multiple senders can do RMA operations. The only way this can be done in the context of the MPI standard is if each of the receives on this particular buffer uses non-contiguous datatypes. Thus, unlike what you suggest above, this does not hurt performance, as you are already in a niche mode (I'm not even talking about the fact that non-contiguous datatypes usually conflict with RMA operations). Moreover, you suppose that the detection of a dead process and the re-posting of the receive buffer can happen faster than an RMA message crosses the network. The only case in which such a scenario can happen is when multiple paths exist between the source and the destination, and the failure detection happens on one path while the RMA message takes another. This is highly improbable in most cases.
>>>
>>> There are too many ifs in this scenario to make it plausible. Even if we suppose that all those ifs hold, as Rich said, this is a [quality of] implementation issue, not an MPI standard issue. A high-quality MPI implementation will delay reporting the process-failure error on that particular MPI_RECV until all possible RMA from the dead process has either been discarded by the network or written to memory.
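(For concreteness, here is a minimal sketch of the delayed-reporting idea George describes: hold back the process-failure error on a pending receive until the network layer can no longer deliver anything from the failed sender. The types, fields, and helper names below are invented for illustration; they are not ULFM or Open MPI internals, and the network timeout bound is assumed to be known to the implementation.)

    /* Sketch only (not ULFM/Open MPI internals): delay surfacing the
     * process-failure error on a pending receive until the network layer can
     * no longer deliver anything from the failed sender.  All names here are
     * hypothetical placeholders. */
    #include <stdbool.h>
    #include <time.h>

    typedef struct pending_recv {
        int    src_rank;         /* sender that has been reported dead         */
        void  *buffer;           /* user buffer currently handed to the NIC    */
        bool   failure_observed; /* failure notice already seen for src_rank   */
        double error_deadline;   /* when it becomes safe to report the failure */
    } pending_recv;

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (double)ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    /* Called when the runtime learns that req->src_rank died.  net_max_timeout
     * is an upper bound on how long the fabric can keep a packet in flight. */
    void on_failure_notice(pending_recv *req, double net_max_timeout)
    {
        req->failure_observed = true;
        req->error_deadline   = now_sec() + net_max_timeout;
    }

    /* Called from the progress engine.  Returns nonzero only once it is safe
     * to report the failure (e.g. as MPI_ERR_PROC_FAILED) on this receive:
     * any straggler from the dead sender has by now been dropped by the
     * network or has already landed in the buffer. */
    int recv_failure_reportable(const pending_recv *req)
    {
        if (!req->failure_observed)
            return 0;                  /* no failure pending                    */
        if (now_sec() < req->error_deadline)
            return 0;                  /* too early: stragglers still possible  */
        return 1;
    }

The essential property is that the error is surfaced only after the longest possible in-flight delivery window has closed, so nothing can land in the buffer after the application has been told it may reuse it.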
>>>> However, please also think about this problem for other networks that might not have such hardware protection capabilities (the K Computer comes to mind).
>>>
>>> The K computer? My understanding is that there are such capabilities in the TOFU network. I might be wrong, though, in which case I would definitely appreciate a pointer to a link or documentation that proves your point.
>>>
>>>> Maybe they cannot provide MPI-specified FT, and that would be fine.
>>>
>>> Not really: FT can be supported without overhead for the normal execution even for the types of network you mention. The solution I presented above uses the timeouts of the network layer to ensure that no delivery can occur after the error is reported, by delaying the error report until all timeouts have expired. Trivial to implement, and with no impact on the normal execution path.
>>>
>>>   Thanks,
>>>     George.
>>>
>>>>  -- Pavan
>>>>
>>>> On 07/30/2013 02:59 PM, Sur, Sayantan wrote:
>>>>
>>>>> Hi Wesley,
>>>>>
>>>>> Looks like your attachment didn't make it through. Using IB, one can generate rkeys for each sender and just invalidate the key for the observed failed process. The HW can then simply drop the "slow" message when it arrives. I'm assuming that generating keys should be fast in the future, given that recently announced HW/firmware has support for on-demand registration. In any case, it is not a restriction of IB per se.
>>>>>
>>>>> Thanks,
>>>>> Sayantan
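(A minimal sketch of the per-sender-rkey idea Sayantan describes, assuming plain ibverbs: the same receive buffer is registered once per potential sender, so each sender is handed its own rkey and a failed sender's registration can be torn down independently. The wrapper struct and function names are made up for illustration.)

    /* Sketch only: register the same receive buffer once per potential sender
     * so that every sender gets its own rkey.  On a failure notice, deregister
     * just that sender's region; a late RDMA write carrying the invalidated
     * rkey is then rejected by the HCA instead of landing in the buffer. */
    #include <infiniband/verbs.h>
    #include <stddef.h>

    struct per_sender_reg {
        struct ibv_mr *mr;   /* one memory region (and one rkey) per sender */
    };

    /* Register buf once per sender; sender i is later told regs[i].mr->rkey. */
    int register_per_sender(struct ibv_pd *pd, void *buf, size_t len,
                            struct per_sender_reg *regs, int nsenders)
    {
        for (int i = 0; i < nsenders; i++) {
            regs[i].mr = ibv_reg_mr(pd, buf, len,
                                    IBV_ACCESS_LOCAL_WRITE |
                                    IBV_ACCESS_REMOTE_WRITE);
            if (!regs[i].mr)
                return -1;   /* registration failed; caller cleans up */
        }
        return 0;
    }

    /* Failure notice for sender `rank`: invalidate only that sender's rkey. */
    void invalidate_sender(struct per_sender_reg *regs, int rank)
    {
        if (regs[rank].mr) {
            ibv_dereg_mr(regs[rank].mr);   /* further writes with this rkey fail */
            regs[rank].mr = NULL;
        }
    }

Whether creating that many overlapping registrations stays cheap enough is exactly the on-demand-registration question Sayantan raises.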
>>>>> From: mpi3-ft-bounces@lists.mpi-forum.org [mailto:mpi3-ft-bounces@lists.mpi-forum.org] On Behalf Of Wesley Bland
>>>>> Sent: Tuesday, July 30, 2013 11:04 AM
>>>>> To: MPI3-FT Working Group
>>>>> Subject: [Mpi3-ft] Problem with reusing rendezvous memory buffers
>>>>>
>>>>> Pavan pointed out a problem to me yesterday related to memory buffers used with rendezvous protocols. If a process passes a piece of memory to the library in an MPI_RECV and the library gives that memory to the hardware, where it is pinned, we can get into trouble if one of the processes that could write into that memory fails. The problem comes from a process sending a slow message and then dying. It is possible that the other processes could detect and handle the failure before the slow message arrives. Then, when the message does arrive, it could corrupt the memory without the application having any way to handle it. My whiteboard example is attached as an image.
>>>>>
>>>>> We can't just unmap the memory from the NIC when a failure occurs, because that memory is still being used by another process's message. Some hardware supports unmapping memory for specific senders, which would solve this issue, but some, such as InfiniBand, doesn't: there the memory region just has a single key, and unmapping it removes it for all senders.
>>>>>
>>>>> I haven't come up with a good solution to this problem, but I do have a solution. We would need to introduce another error code (something like MPI_ERR_BUFFER_UNUSABLE) that tells the application that the buffer the library was using is no longer usable because it might be corrupted. For some hardware this error would never have to be returned, but where per-sender invalidation isn't possible, the library could return it to the application to say that it needs a new buffer in order to complete the operation. On the sender side, the operation would probably complete successfully, since from its point of view the memory was still available. That means some rollback will be necessary, but that's up to the application to figure out.
>>>>>
>>>>> I know this is an expensive and painful solution, but it's all I've come up with so far. Thoughts from the group?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Wesley
>>
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
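(To make Wesley's proposal at the bottom of the thread concrete, a rough sketch of what the application side might look like, assuming the communicator's error handler is MPI_ERRORS_RETURN and that the proposed MPI_ERR_BUFFER_UNUSABLE error class existed; it is defined as a placeholder below and is not part of any MPI standard or implementation.)

    /* Sketch only: application-side handling of the *proposed*
     * MPI_ERR_BUFFER_UNUSABLE error class.  The constant is a placeholder. */
    #include <mpi.h>
    #include <stdlib.h>

    #ifndef MPI_ERR_BUFFER_UNUSABLE
    #define MPI_ERR_BUFFER_UNUSABLE (MPI_ERR_LASTCODE + 1)  /* hypothetical */
    #endif

    /* Post a receive; if the library reports that the buffer may have been
     * corrupted by a straggler from a dead process, abandon it and retry with
     * a fresh buffer.  *buf_ptr is updated to the buffer actually used. */
    int recv_with_buffer_recovery(void **buf_ptr, int count, MPI_Datatype type,
                                  int src, int tag, MPI_Comm comm)
    {
        for (;;) {
            MPI_Request req;
            MPI_Status  status;
            int rc = MPI_Irecv(*buf_ptr, count, type, src, tag, comm, &req);
            if (rc == MPI_SUCCESS)
                rc = MPI_Wait(&req, &status);
            if (rc == MPI_SUCCESS)
                return MPI_SUCCESS;

            int err_class;
            MPI_Error_class(rc, &err_class);
            if (err_class != MPI_ERR_BUFFER_UNUSABLE)
                return rc;              /* some other error: give up here */

            /* The old buffer stays allocated (it may still be pinned and may
             * be scribbled on later); get a fresh one and re-post the receive. */
            int type_size;
            MPI_Type_size(type, &type_size);
            *buf_ptr = malloc((size_t)count * (size_t)type_size);
            if (*buf_ptr == NULL)
                return MPI_ERR_NO_MEM;
        }
    }

Any rollback of computation that already consumed data from the abandoned buffer would still be the application's problem, as noted above.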
href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft</a></div></div></div></span>
</blockquote>
<div>
<br>
</div>