[Mpi3-hybridpm] Endpoints Proposal

Sur, Sayantan sayantan.sur at intel.com
Tue Mar 19 12:56:38 CDT 2013


> ii. There was consensus among the architects that the Endpoint comm
> created once and freed n times is not a preferred way to go. I think I have
> already made this point many times during the plenary :-)
> 

I forgot to add the style we would prefer if we do go through with this flavor of the API. MPI_Comm_create_endpoints would be given the maximum number of threads that could join the endpoint communicator, but attach/detach would allocate only as many resources as are actually required. This does raise the question of whether MPI_Comm_attach_endpoint() is a local operation or not; we would need to discuss that further. For example:

#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int world_rank, tl;
    MPI_Comm omp_comm;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &tl);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Sized for the maximum number of threads that could join. */
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, omp_get_max_threads(),
                              MPI_INFO_NULL, &omp_comm);

#pragma omp parallel
    {
        MPI_Comm ep_comm;

        /* Each thread attaches an endpoint; only as many resources as
           there are attached threads need to be allocated. */
        MPI_Comm_attach_endpoint(..., &ep_comm);

#pragma omp for
        for (...) {
            ... /* use ep_comm */
        }

        MPI_Comm_detach_endpoint(&ep_comm);
    }

    MPI_Comm_free(&omp_comm);

    MPI_Finalize();
    return 0;
}



> Another comment on the UPC example from my side:
> 
> iii. The UPCMPI_World_comm_query(&upc_comm) call demonstrates that
> you need to interact with the UPC runtime so that it does the "right thing"
> based on whether it launched UPC in threaded or process mode. If that is the
> case, what is preventing the UPC community from just coming up with a
> UPCMPI_Allreduce() that would adapt to either the threaded or the process case? In the
> threaded case, it could provide MPI with a datatype that points to the memory bits
> being used for the reduction. In the process case, it could pass the data straight
> into MPI. What do we gain by making the UPC program call MPI directly?
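
(Purely for illustration, here is the kind of adapter I have in mind. upcr_is_threaded() and upcr_local_reduce_type() are invented placeholders for whatever query and datatype-construction hooks the UPC runtime would actually provide; this is a sketch, not a worked-out design.)

#include <mpi.h>

/* Hypothetical UPCMPI_Allreduce for doubles: the thread/process
   distinction is hidden inside the UPC runtime layer instead of the
   UPC program calling MPI directly. */
int UPCMPI_Allreduce_double(const double *sendbuf, double *recvbuf,
                            int count, MPI_Op op, MPI_Comm comm)
{
    if (upcr_is_threaded()) {
        /* Threaded mode: describe the memory contributed by the local
           UPC threads with a derived datatype and hand that to MPI. */
        MPI_Datatype contrib;
        upcr_local_reduce_type(sendbuf, count, &contrib);  /* placeholder */
        MPI_Type_commit(&contrib);
        int rc = MPI_Allreduce(sendbuf, recvbuf, 1, contrib, op, comm);
        MPI_Type_free(&contrib);
        return rc;
    }

    /* Process mode: pass the data straight into MPI. */
    return MPI_Allreduce(sendbuf, recvbuf, count, MPI_DOUBLE, op, comm);
}
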
> 
> Thanks,
> Sayantan
> 
> 
> > -----Original Message-----
> > From: mpi3-hybridpm-bounces at lists.mpi-forum.org
> > [mailto:mpi3-hybridpm-bounces at lists.mpi-forum.org] On Behalf Of Jim Dinan
> > Sent: Thursday, March 14, 2013 7:30 AM
> > To: mpi3-hybridpm at lists.mpi-forum.org
> > Subject: [Mpi3-hybridpm] Endpoints Proposal
> >
> > Hi All,
> >
> > I've attached the slides from the endpoints presentation yesterday.  I
> > updated the slides with corrections, additions, and suggestions
> > gathered during the presentation.
> >
> > We received a lot of feedback, and a lot of support from the Forum.  A
> > refinement to the interface that eliminates MPI_Comm_attach was
> > suggested:
> >
> > int MPI_Comm_create_endpoints(MPI_Comm parent_comm, int my_num_ep,
> >                               MPI_Info info, MPI_Comm output_comms[]);
> >
> > This function would be collective over parent_comm, and produce an
> > array of communicator handles, one per endpoint.  Threads pick up the
> > endpoint they wish to use, and start using it; there would be no need
> > for attach/detach.
> >
> > This interface addresses two concerns about the interface originally
> > presented that were raised by the Forum.  (1) The suggested interface
> > does not require THREAD_MULTIPLE -- the original attach function could
> > always require multiple.  (2) It places fewer dependencies on the
> > threading model.
> > In particular, stashing all relevant state in the MPI_Comm object
> > removes a dependence on thread-local storage.
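
To make the suggested interface concrete, here is a rough sketch (my own, not from Jim's slides) of how the array-returning variant might be used with OpenMP: one endpoint per thread, with each thread picking up its own communicator from the returned array. Freeing each output communicator with MPI_Comm_free at the end is an assumption on my part.

#include <mpi.h>
#include <omp.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int tl, num_ep;
    MPI_Comm *ep_comms;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &tl);

    num_ep = omp_get_max_threads();
    ep_comms = malloc(num_ep * sizeof(MPI_Comm));

    /* Collective over the parent communicator; returns one
       communicator handle per endpoint -- no attach/detach needed. */
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, num_ep,
                              MPI_INFO_NULL, ep_comms);

#pragma omp parallel
    {
        /* Each thread picks up the endpoint it wishes to use. */
        MPI_Comm my_comm = ep_comms[omp_get_thread_num()];
        /* ... communicate on my_comm ... */
    }

    /* Assumed cleanup: free each endpoint communicator. */
    for (int i = 0; i < num_ep; i++)
        MPI_Comm_free(&ep_comms[i]);
    free(ep_comms);

    MPI_Finalize();
    return 0;
}

Since all of the relevant state lives in the communicator handles themselves, nothing here relies on thread-local storage, which matches point (2) above.
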
> >
> > Thanks to everyone for your help and feedback.  Let's have some
> > discussion about the suggested interface online, and follow up in a
> > couple weeks with a WG meeting.
> >
> > Cheers,
> >   ~Jim.
> 
> _______________________________________________
> Mpi3-hybridpm mailing list
> Mpi3-hybridpm at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm



