[Mpi3-hybridpm] Endpoints Proposal
Sur, Sayantan
sayantan.sur at intel.com
Wed Mar 20 10:11:00 CDT 2013
Hi Pavan,
I am interested in understanding the problem the WG is trying to solve, as distinct from the proposed solution. That is why I'm asking for the motivations to be listed clearly.
> The motivation is not to
> have multiple threads to drive the network. You are right; this can also be
> done with processes (probably not as effectively, but that's besides the
> point).
This seems to be Jeff's primary motivation based on his responses to the list.
> There are several motivations for endpoints. One of them is to share
> resources between "MPI ranks". The MPI implementation can already do
> this for memory resources (through MPI-3 shared memory or through
> internal shared memory). However, endpoints provides a more flexible
> model for sharing even more resources (such as TLB space).
>
This is again mixing optimization with functionality. It is possible to share the TLB with MPI_THREAD_MULTIPLE. I agree that network performance with MPI_THREAD_MULTIPLE may not be stellar, since shared communication state gets touched, but looking at the current proposal, I am not sure it makes this any better.
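For reference, here is a minimal sketch of what I mean by threads sharing one process's resources under MPI_THREAD_MULTIPLE. This is my own illustration (the thread count, tag scheme, and worker function are made up, not from the proposal): all threads live in one address space, so page-table/TLB entries are shared, and every thread issues MPI calls on the single shared rank.

/* Minimal sketch: threads sharing one MPI process under MPI_THREAD_MULTIPLE.
 * All threads run in a single address space; each issues MPI calls on the
 * shared rank, using the tag to disambiguate per-thread traffic. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static int world_rank, world_size;

static void *worker(void *arg)
{
    int tid = (int)(long)arg;
    int peer = (world_rank + 1) % world_size;
    int msg = world_rank * NTHREADS + tid, recvd;

    /* Each thread communicates on the shared rank; tag = thread id. */
    MPI_Sendrecv(&msg, 1, MPI_INT, peer, tid,
                 &recvd, 1, MPI_INT, MPI_ANY_SOURCE, tid,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d thread %d received %d\n", world_rank, tid, recvd);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    pthread_t th[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&th[t], NULL, worker, (void *)t);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(th[t], NULL);

    MPI_Finalize();
    return 0;
}

The contention point in this model is the shared per-rank communication state, which is the performance concern I mentioned above, not an inability to share resources.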
Slide 7 says that threads make progress on their "process" and on attached endpoints. Am I reading that right?
> Another motivation (probably a bigger one) is to interact with models that
> use threads internally. The UPC example was shown in the Forum.
> For a UPC implementation that uses threads internally (or a hybrid of
> processes and threads), currently there's no way for those applications to
> use MPI, since they cannot portably know whether UPC is using processes or
> threads internally. The example we showed demonstrates how the UPC
> implementation can allow applications to work correctly.
>
Correct. I agree that the EP approach provides this interoperability. However, I am not convinced it is the only possible approach. I will think about it more and let you know if I find an alternative.
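To make sure we are discussing the same interface, here is a rough sketch of how I read the proposed endpoints calls being used by a thread-based runtime such as UPC. The function name and signature (MPI_Comm_create_endpoints) are taken from the slides and may not match the final text; it does not exist in MPI-3, so the prototype is declared locally purely for illustration, and the runtime hooks are hypothetical names of mine.

#include <mpi.h>

/* Proposed interface from the endpoints slides; NOT part of MPI-3.
 * Declared here only so the sketch is self-contained; the exact name
 * and signature may differ in the final proposal. */
int MPI_Comm_create_endpoints(MPI_Comm parent_comm, int my_num_ep,
                              MPI_Info info, MPI_Comm out_comm_hdls[]);

#define UPC_THREADS_PER_PROC 4   /* runtime "threads" hosted in one OS process */

static MPI_Comm ep_comm[UPC_THREADS_PER_PROC];

/* Hypothetical hook: called once per OS process by the runtime at startup. */
void runtime_setup_mpi_interop(void)
{
    /* Create UPC_THREADS_PER_PROC new ranks for this process in an
     * endpoints communicator; one handle is returned per new rank. */
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, UPC_THREADS_PER_PROC,
                              MPI_INFO_NULL, ep_comm);
}

/* Hypothetical hook: called by runtime thread 'tid'.  The thread
 * communicates through its own handle, so it looks like a distinct MPI
 * rank whether the runtime maps it to a thread or a process. */
void runtime_thread_mpi_work(int tid)
{
    int my_rank;
    MPI_Comm_rank(ep_comm[tid], &my_rank);
    /* ... application MPI calls on ep_comm[tid] ... */
}

This is the interoperability I agree the EP approach provides; my question is whether a different interface could give the same guarantee.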
Thanks,
Sayantan.
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji