[Mpi3-hybridpm] endpoints -- initialize & finalize

Marc Snir snir at mcs.anl.gov
Mon Feb 13 10:31:04 CST 2012


I did not get to write a detailed proposal, so I am sending a few points for discussion today.

Let's assume first that the user decides which thread is attached to which endpoint. I propose that we leave INIT and FINALIZE as they are, and add an MPI_ATTACH(int) that can be called before MPI is initialized, or after, for a thread to join an endpoint (an MPI_PROCESS). We can decide later whether this is a one-time deal, or whether a thread can move from one endpoint to another.
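To make the discussion concrete, here is a rough sketch of what this might look like in C. The binding MPI_Attach(int) is just a placeholder for the proposed MPI_ATTACH -- nothing of the sort exists in the standard today.

    #include <mpi.h>

    int MPI_Attach(int endpoint);    /* hypothetical binding of the proposed MPI_ATTACH */

    int main(int argc, char **argv)
    {
        MPI_Attach(0);               /* a thread may attach before MPI is initialized ... */
        MPI_Init(&argc, &argv);      /* ... or attach after initialization; either order */

        /* ... MPI calls made by this thread go through endpoint 0 ... */

        MPI_Finalize();
        return 0;
    }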

We have three cases:

1. Set of threads used by a program in an address space is fixed and set before the user code starts executing.
This is the case for UPC, or PGAS languages in general, and for OpenMP when dyn_var=false. A programmer can enforce that OpenMP uses a fixed number of threads by calling omp_set_dynamic(0).

This is an easy case -- no matter how we define the behavior of MPI_INIT and MPI_FINALIZE: we use MPI_ATTACH to attach each thread to an endpoint, and then have one thread per endpoint call INIT. Life would be easier if replicated initializations were allowed, but that's second order. Finalization replicates the initialization process.
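A minimal sketch of case 1 with OpenMP, under the same assumption that MPI_Attach(int) is the binding of the proposed call, and assuming threads map one-to-one onto endpoints. Note that the per-thread MPI_Init below follows the proposed per-endpoint semantics; today's standard allows only one initialization per process.

    #include <mpi.h>
    #include <omp.h>

    int MPI_Attach(int endpoint);    /* hypothetical binding of the proposed MPI_ATTACH */

    int main(void)
    {
        omp_set_dynamic(0);          /* fix the thread set (dyn_var = false) */

        #pragma omp parallel
        {
            int ep = omp_get_thread_num();

            MPI_Attach(ep);          /* attach each thread to its endpoint ... */
            MPI_Init(NULL, NULL);    /* ... then one thread per endpoint initializes it */

            /* ... communication on behalf of endpoint ep ... */

            MPI_Finalize();          /* finalization replicates the initialization */
        }
        return 0;
    }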

2. Set of threads used by a program changes, under user control. This will be the case for programs written using the pthread library. Typically, such a program will start single-threaded. The user will need to attach newly created threads to the proper endpoint. Allowing only one initialization per address space -- and also one finalization -- makes life easier. But the difference is not great, since each thread needs to be explicitly attached to an endpoint anyway. Either the user "knows" whether the endpoint was already initialized and writes code accordingly; or a thread first attaches to an endpoint, then invokes MPI_INITIALIZED to decide whether the endpoint needs to be initialized.
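A sketch of case 2 with pthreads, again with the hypothetical MPI_Attach binding, and assuming MPI_INITIALIZED would report the state of the calling thread's endpoint under this proposal. The endpoint numbers are made up for illustration.

    #include <mpi.h>
    #include <pthread.h>
    #include <stddef.h>

    int MPI_Attach(int endpoint);    /* hypothetical binding of the proposed MPI_ATTACH */

    static void *worker(void *arg)
    {
        int ep = *(int *)arg;
        int flag;

        MPI_Attach(ep);              /* newly created thread joins endpoint ep */
        MPI_Initialized(&flag);      /* assumed here to refer to this endpoint */
        if (!flag)
            MPI_Init(NULL, NULL);    /* initialize the endpoint if nobody has yet */

        /* ... MPI calls on behalf of endpoint ep ... */
        return NULL;
    }

    int main(int argc, char **argv)
    {
        pthread_t t;
        int ep = 1;

        MPI_Attach(0);               /* the program starts single-threaded ... */
        MPI_Init(&argc, &argv);      /* ... on endpoint 0 */

        pthread_create(&t, NULL, worker, &ep);
        pthread_join(t, NULL);

        MPI_Finalize();
        return 0;
    }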

3. Set of threads used by a program changes, and the changes are not under user control. This is the case for OpenMP with dyn_var=true. I see no convenient way of handling this case: at any point where the set of threads may change (in OpenMP, at any point where a parallel construct starts or ends), one would need (a) to make sure that all MPI calls are complete, and (b) to reattach threads to endpoints. I suggest not handling this case.

The alternative is that which thread is attached to which endpoint is not under user control, but is decided automatically by the system. This is the current situation when we have only one endpoint, and we should preserve this option for compatibility: no call to MPI_ATTACH is needed in such a case. In that case, it would be convenient to allow only one call to MPI_INIT and one call to MPI_FINALIZE per program (address space). But I am not sure I want to spend time arguing about this model, where the runtime dynamically attaches threads to endpoints.
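For reference, the current single-endpoint situation, which must keep working unchanged: no attach call, one INIT and one FINALIZE per address space, and the runtime implicitly associates every thread with the single endpoint. This sketch is plain MPI-2 + OpenMP as it stands today.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            /* every thread communicates through the one endpoint (the MPI process) */
            printf("rank %d, thread %d\n", rank, omp_get_thread_num());
        }

        MPI_Finalize();
        return 0;
    }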




