[Mpi3-tools] DRAFT of the MPIR Process Acquisition Interface document
Greg Watson
g.watson at computer.org
Sat May 15 10:06:19 CDT 2010
Some comments/suggestions regarding the tool daemon launch extension below. I'm using "server process" to refer to the tool daemon server process and "MPI process" to refer to the MPI application process.
1. A number of debuggers and tools use the MPI runtime to launch their server processes rather than having to implement a second parallel launch mechanism. In lieu of standard infrastructure for launching tools, it makes sense to use the MPI runtime if possible. This approach is currently supported by Open MPI, MPICH2, PE and probably others as well.
2. The current extension relies on reading/writing memory in the starter process. This is adequate (although complicated) for debuggers, but does not work for other sorts of tools. To address this, I would like these features to be available from the command line as well, and would suggest a requirement that the tool daemon variables can also be specified on the command line of the starter process.
3. The extension provides no control over where or how many server processes are launched. I presume the intention is one server process per MPI process, but this is not specified, and is probably not desirable on shared memory and other types of architectures. At a minimum, a "one server process per node" variable is desirable, such as "MPIR_one_server_per_node" or something similar. It may even be preferable to make this completely general by allowing the use of the same process allocation mechanism provided by the MPI implementation.
4. The extension does not provide any mechanism for server processes to identify themselves or the MPI process(es) they are interested in. I would suggest a requirement that each server's command line or environment be supplied with (a) the MPI rank(s) of the MPI process(es) associated with that server; and (b) the PIDs of the MPI process(es) if launched by the MPI runtime.
5. Some tools would prefer to start the MPI processes themselves rather than attach to existing processes (which implies a ptrace requirement). I would suggest an optional variable, "MPIR_external_spawn", that indicates to the MPI runtime that the processes will be spawned externally. For this mode of launch, the identification information described in (4) would be used by the MPI implementation to complete MPI initialization, as well as by the tool. This variable would be available only on MPI implementations that support this startup mechanism.
As mentioned, a number of MPI implementations already provide most of these features, so the burden of adding this support should not be great. Providing a standard mechanism for tool daemon launch would go a long way to addressing some of the tool infrastructure problems that affect most systems today.
Regards,
Greg
On May 13, 2010, at 12:42 PM, John DelSignore wrote:
> Hi all,
>
> Attached is the second DRAFT of the MPIR Process Acquisition Interface document for discussion during the 5/17/10 concall.
>
> I also included a copy with change tracking turned on so that you can see the changes. I incorporated most of the comments I received from Bill Gropp and Jeff Squyres on the first draft.
>
> TO DO items:
> * Review this draft to see if people like it any better than the first draft.
> * The document master is still in Word, and should get converted to LaTeX. I'd rather someone else do that, not me.
> * This document describes the MPIR Process Acquisition Interface, but has a placeholder for the MPIR Message Queue Display Interface. I think all of the MQD stuff should be contained in a separate document, and this document should be restructured accordingly.
> * The MQD stuff needs to get written up.
>
> Cheers, John D.
>
> <MPIR Process Acquisition Interface 2010-05-13.pdf><MPIR Process Acquisition Interface 2010-05-13 diffs.pdf>
> _______________________________________________
> Mpi3-tools mailing list
> Mpi3-tools at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-tools