[mpiwg-hybridpm] Hybrid telecon fiasco

Jeff Squyres (jsquyres) jsquyres at cisco.com
Mon Apr 7 12:20:51 CDT 2014


On Apr 7, 2014, at 1:09 PM, Jim Dinan <james.dinan at gmail.com> wrote:

> ** Interface for freeing endpoints communicators:
> 
> Currently, we use MPI_Comm_free to free endpoints communicators, and we extend the MPI_Comm_free semantics to allow one thread to call this routine for each endpoint.  If we didn't have this extension to Comm_free, you could create endpoints, e.g. in MPI_THREAD_SERIALIZED mode, but you couldn't free them.  The Forum dislikes this new semantic and has suggested that we remove it.
> 
> Dan suggested a third approach where we leave the Comm_free semantic unchanged and add a new function, MPI_Comm_free_endpoints(MPI_Comm comm_handles[], int my_num_ep), that has symmetry with Comm_create_endpoints.  An endpoints communicator can be freed with either Comm_free or Comm_free_endpoints.  They differ in their concurrency requirements.
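
For concreteness, here's a minimal sketch of the two freeing styles as I understand them.  Big caveat: the MPI_Comm_create_endpoints signature below is my guess from the proposal drafts, and MPI_Comm_free_endpoints is just Dan's suggestion above.  Neither routine is standard MPI.

    #include <mpi.h>
    #include <omp.h>

    #define MY_NUM_EP 4

    /* Style 1 (current proposal): each thread frees its own handle,
       which needs the extended MPI_Comm_free semantics that the Forum
       objected to. */
    void free_individually(MPI_Comm parent)
    {
        MPI_Comm ep[MY_NUM_EP];
        /* Signature assumed from the endpoints proposal drafts. */
        MPI_Comm_create_endpoints(parent, MY_NUM_EP, MPI_INFO_NULL, ep);

    #pragma omp parallel num_threads(MY_NUM_EP)
        MPI_Comm_free(&ep[omp_get_thread_num()]);
    }

    /* Style 2 (Dan's suggestion): one thread frees all the local
       handles at once; MPI_Comm_free's semantics stay unchanged. */
    void free_collectively(MPI_Comm parent)
    {
        MPI_Comm ep[MY_NUM_EP];
        MPI_Comm_create_endpoints(parent, MY_NUM_EP, MPI_INFO_NULL, ep);
        MPI_Comm_free_endpoints(ep, MY_NUM_EP);
    }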

I'm not sure I understand:

- MPI_COMM_FREE_ENDPOINTS: does the array have to contain *all* the communicator handles for an endpoint communicator?  Or is the requirement that all communicator handles must *eventually* be freed (via free or free_endpoints)?  Is FREE_ENDPOINTS collective?

My $0.02: I'm not a big fan of giving comm handles to threads but then requiring that there be a single collection point / destruction point for all those handles.  Why can't the threads destroy them individually, if they want to?

- MPI_COMM_FREE: so you're saying that COMM_FREE will stay exactly as it is.

What happens in this scenario:

- CREATE_ENDPOINTS creates 4 local communicator handles: c0 through c3.
- c0 and c1 are put into foo[], and a single thread calls MPI_COMM_FREE_ENDPOINTS(foo, 2)
- the threads holding c2 and c3 each call MPI_COMM_FREE(&my_comm)

Is this legal?  I.e., can COMM_FREE_ENDPOINTS "match" COMM_FREE in the same process?
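
In code, the scenario I have in mind is roughly this (same assumed signatures and headers as the sketch above):

    void mixed_free(MPI_Comm parent)
    {
        MPI_Comm c[4];   /* the four local handles, c0 through c3 */
        MPI_Comm_create_endpoints(parent, 4, MPI_INFO_NULL, c);

    #pragma omp parallel num_threads(4)
        {
            int t = omp_get_thread_num();
            if (t == 0) {
                /* one thread frees c0 and c1 together */
                MPI_Comm foo[2] = { c[0], c[1] };
                MPI_Comm_free_endpoints(foo, 2);
            } else if (t >= 2) {
                /* the threads holding c2 and c3 free theirs
                   individually */
                MPI_Comm_free(&c[t]);
            }
        }
    }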

-- 
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/



