I'll reiterate my comments and suggestions regarding the tool daemon launch extension.

1. A number of debuggers and tools use the MPI runtime to launch their server processes rather than implementing a second parallel launch mechanism. In the absence of standard infrastructure for launching tools, it makes sense to use the MPI runtime where possible. This approach is currently supported by Open MPI, MPICH2, PE, and probably others as well.

2. The current extension relies on reading/writing memory in the starter process. This is adequate (although complicated) for debuggers, but does not work for other sorts of tools. To address this, I would like to see these features available from the command line as well, and would suggest a requirement that the tool daemon variables can also be specified on the command line of the starter process.

3. The extension provides no control over where or how many server processes are launched. I presume the intention is one server process per MPI process, but this is not specified, and it is probably not desirable on shared memory and other types of architectures. At a minimum, a "one server process per node" variable is desirable, such as "MPIR_one_server_per_node" or something similar (see the first sketch after this list). However, it may be desirable to make this completely general by allowing tools to use the same process allocation mechanism the MPI implementation provides.

4. The extension does not provide any mechanism for server processes to identify themselves or the MPI process(es) they are interested in. I would suggest a requirement that each server's command line or environment be supplied with (a) the MPI rank(s) of the MPI process(es) associated with that server, and (b) the PIDs of the MPI process(es) if they were launched by the MPI runtime (see the second sketch below).

5. Some tools would prefer to start the MPI processes themselves rather than attach to existing processes (which implies a ptrace requirement). I would suggest specifying an optional variable, "MPIR_external_spawn", that indicates to the MPI runtime that the processes will be spawned externally. For this mode of launch, the identification information described in (4) would be used by the MPI implementation to complete the MPI initialization, as well as by the tool. This variable would be available only on MPI implementations that support this startup mechanism.
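To make (3) and (5) concrete, here is a rough sketch of how the suggested variables might sit alongside the existing tool daemon launch symbols in the starter process. MPIR_executable_path and MPIR_server_arguments are the names I believe the current draft uses; the array sizes are arbitrary, and the two int variables are only my suggestions, not part of the document:

    /* Illustrative only. The first two symbols are from the current
     * draft (as I read it): the tool writes them into the starter
     * process before releasing it. */
    char MPIR_executable_path[256];   /* path of the tool daemon to launch */
    char MPIR_server_arguments[1024]; /* '\0'-separated daemon arguments   */

    /* Suggested in (3): non-zero asks the runtime to launch one daemon
     * per node rather than one per MPI process. */
    int MPIR_one_server_per_node = 0;

    /* Suggested in (5): non-zero tells the runtime that the tool, not
     * the starter, will spawn the MPI processes; the runtime then uses
     * the identification information from (4) to complete MPI
     * initialization. */
    int MPIR_external_spawn = 0;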
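And here is a rough sketch of the daemon side of (4), assuming purely for illustration that the runtime passes the information in environment variables; the MPIR_RANKS and MPIR_PIDS names are invented here, not proposed names:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Hypothetical: comma-separated lists such as
         * MPIR_RANKS="4,5,6,7" and MPIR_PIDS="9001,9002,9003,9004". */
        const char *ranks = getenv("MPIR_RANKS");
        const char *pids  = getenv("MPIR_PIDS");

        if (ranks == NULL || pids == NULL) {
            fprintf(stderr, "daemon: no target process information supplied\n");
            return 1;
        }

        /* Walk the rank list; a real daemon would pair each rank with
         * its PID and then attach to it, or spawn it under (5). */
        char *copy = strdup(ranks);
        for (char *tok = strtok(copy, ","); tok != NULL; tok = strtok(NULL, ","))
            printf("daemon responsible for MPI rank %s\n", tok);
        free(copy);
        return 0;
    }

Whether the information arrives via the command line or the environment matters less than that it arrives in some standard, documented form.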
As mentioned, a number of MPI implementations already provide most of these features, so the burden of adding this support should not be great. Providing a standard mechanism for tool daemon launch would go a long way toward addressing some of the tool infrastructure problems that affect most systems today.

The current tool daemon launch extension is clearly targeted at TotalView and is not designed to be flexible enough for other tools and debuggers. If the document goes ahead as-is, it can hardly be said to be a "gold standard" for tool writers.

Regards,
Greg


On Jun 14, 2010, at 12:27 PM, Martin Schulz wrote:

> Hi all,
>
> Attached is the latest and updated version of the MPIR document, which John
> DelSignore put together. The intent is still to publish this through the MPI forum
> as an official document. The details for this are still TBD, and Jeff will lead a
> discussion on this topic during the forum this week.
>
> We don't have a tools WG meeting scheduled for this meeting, but if you have
> any comments or feedback (on the document or how we should publish it),
> please post them to the list. If necessary or useful, we can also dedicate one
> of the upcoming tools telecons to this.
>
> Thanks!
>
> Martin
>
> PS: Feel free to distribute the document further, in particular to tool and
> MPI developers.
>
> <MPIR Process Acquisition Interface 2010-06-11.pdf>
>
> ________________________________________________________________________
> Martin Schulz, schulzm@llnl.gov, http://people.llnl.gov/schulzm
> CASC @ Lawrence Livermore National Laboratory, Livermore, USA
>
> _______________________________________________
> Mpi3-tools mailing list
> Mpi3-tools@lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-tools