[Mpi3-hybridpm] Endpoints Proposal

Jeff Hammond jhammond at alcf.anl.gov
Wed Mar 20 15:10:05 CDT 2013


Yes, I am thinking about endpoints generally, not about the specific
proposal on the table, which I see as one good, but not necessarily
exclusive, solution to the MPI-UPC issues, as others have clearly
noted.

I hope that a second proposal will be able to address the concurrency
issues that I have with MPI, but I recognize that this is more
controversial.  I think I have a more general solution to this
problem, but it will take me some time to document it properly.

Best,

Jeff

On Wed, Mar 20, 2013 at 10:17 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>
> On 03/20/2013 10:11 AM US Central Time, Sur, Sayantan wrote:
>>> The motivation is not to have multiple threads drive the
>>> network.  You are right; this can also be done with processes
>>> (probably not as effectively, but that's beside the point).
>>
>> This seems to be Jeff's primary motivation based on his responses to
>> the list.
>
> I think Jeff is pointing out what he'd like to see, not what's there in
> the proposal.  Please don't confuse the two.
>
>>> There are several motivations for endpoints.  One of them is to
>>> share resources between "MPI ranks".  The MPI implementation can
>>> already do this for memory resources (through MPI-3 shared memory
>>> or through internal shared memory).  However, endpoints provides a
>>> more flexible model for sharing even more resources (such as TLB
>>> space).
>>
>> This is again mixing optimization with functionality. It is possible
>> to share the TLB with MPI_THREAD_MULTIPLE. I agree that network
>> performance with MPI_THREAD_MULTIPLE might not be stellar, since
>> shared state gets touched. But looking at the current proposal, I am
>> not sure this is made any better.
>
> You can share TLB space with current MPI+threads, of course.  The point
> is to share it across "MPI processes" (or MPI ranks).  That way, in many
> environments (such as UPC), applications or libraries can continue
> functioning as before without much disruption.
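
To make the sharing concrete, here is a minimal sketch of the kind of
creation call the proposal envisions.  The name
MPI_Comm_create_endpoints, its signature, and the count of four are
illustrative assumptions for this sketch, not text quoted from the
proposal:

    #include <mpi.h>

    /* Illustrative only: a creation call of roughly this shape is what
     * the proposal envisions; the exact name and signature may differ. */
    int MPI_Comm_create_endpoints(MPI_Comm parent_comm, int my_num_ep,
                                  MPI_Info info, MPI_Comm out_comm_hdls[]);

    void create_extra_ranks(void)
    {
        MPI_Comm ep_comm[4];

        /* Each handle in ep_comm is a distinct rank of the new
         * communicator, yet all four live in this one OS process and
         * therefore share its address space, page tables, TLB entries,
         * and network state. */
        MPI_Comm_create_endpoints(MPI_COMM_WORLD, 4, MPI_INFO_NULL,
                                  ep_comm);
    }
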
>
>> It is listed on Slide 7 that threads make progress on their "process"
>> and on attached endpoints. Am I reading that right?
>
> Put another way -- each thread makes progress on all its ranks.
>
>>> Another motivation (probably a bigger one) is to interact with
>>> models that use threads internally.  The UPC example was shown in
>>> the Forum.  For a UPC implementation that uses threads internally
>>> (or a hybrid of processes and threads), there is currently no way
>>> for those applications to use MPI, since they cannot portably know
>>> whether UPC is using processes or threads internally.  The example
>>> we showed demonstrates how the UPC implementation can allow such
>>> applications to work correctly.
>>
>> Correct. I agree that the EP approach provides this interop.
>> However, I am not convinced that it is the only possible approach. I
>> will think about it more and let you know if I find another one.
>
> If there are other ways to get this functionality, we could certainly
> pursue them.  The simpler the solution, the better.
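
For readers who want the interop spelled out, here is a minimal sketch
of how a runtime that implements its "processes" as threads (as some
UPC compilers do) might hand each internal thread its own MPI rank.
The creation call is the same illustrative assumption as in the sketch
above; the rest is ordinary MPI and pthreads:

    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4

    /* Illustrative prototype only; the proposal's actual name and
     * signature may differ. */
    int MPI_Comm_create_endpoints(MPI_Comm parent_comm, int my_num_ep,
                                  MPI_Info info, MPI_Comm out_comm_hdls[]);

    static void *thread_main(void *arg)
    {
        MPI_Comm my_comm = *(MPI_Comm *)arg;  /* this thread's own rank */
        int rank;

        /* To a library written against one-rank-per-"process"
         * semantics, this thread now looks like an ordinary MPI
         * process. */
        MPI_Comm_rank(my_comm, &rank);
        printf("thread acting as rank %d\n", rank);
        MPI_Barrier(my_comm);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, i;
        pthread_t threads[NUM_THREADS];
        MPI_Comm ep_comm[NUM_THREADS];

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        /* One endpoint rank per internal thread of the runtime. */
        MPI_Comm_create_endpoints(MPI_COMM_WORLD, NUM_THREADS,
                                  MPI_INFO_NULL, ep_comm);

        for (i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, thread_main, &ep_comm[i]);
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);

        for (i = 0; i < NUM_THREADS; i++)
            MPI_Comm_free(&ep_comm[i]);
        MPI_Finalize();
        return 0;
    }
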
>
>  -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji



-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond


