[mpiwg-rma] schedule telecon

Pavan Balaji balaji at mcs.anl.gov
Sun Nov 17 17:20:28 CST 2013


And one more item —

We need an MPI_IN_PLACE capability for MPI_GET_ACCUMULATE, i.e., a way to let the origin and result buffers be the same.
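
Something like the following: the first call is what is required today, and the commented-out second call is one hypothetical way an in-place form could look (the exact form is obviously up for discussion).

    #include <mpi.h>

    /* Caller is assumed to hold an access epoch on "win", e.g. between
       MPI_Win_lock and MPI_Win_unlock on target_rank. */
    void fetch_and_add_one(MPI_Win win, int target_rank, MPI_Aint disp)
    {
        /* Today: the fetched old value must land in a separate result buffer. */
        double contribution = 1.0, old;
        MPI_Get_accumulate(&contribution, 1, MPI_DOUBLE,
                           &old, 1, MPI_DOUBLE,
                           target_rank, disp, 1, MPI_DOUBLE,
                           MPI_SUM, win);
        MPI_Win_flush(target_rank, win);  /* "old" now holds the previous value */

        /* Proposed (hypothetical syntax): let the origin buffer double as
           the result buffer, so the fetched value overwrites the input.
        double inout = 1.0;
        MPI_Get_accumulate(&inout, 1, MPI_DOUBLE,
                           MPI_IN_PLACE, 1, MPI_DOUBLE,
                           target_rank, disp, 1, MPI_DOUBLE,
                           MPI_SUM, win);
        */
    }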

  — Pavan

On Nov 16, 2013, at 1:28 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:

> 
> Sorry to keep adding items, but one more thing that needs discussion is the statement in the MPI-3 standard that could potentially disallow loads and PUTs to nonoverlapping locations in SEPARATE windows.  Specifically, see rule #3 on page 455.
> 
> “A put or accumulate must not access a target window once a load/store update or a put or accumulate update to another (overlapping) target window has started on a location in the target window, until the update becomes visible in the public copy of the window."
> 
> The first line should either say “load/store access” or “store updates”.  “load/store updates” doesn’t make sense.
> 
> Now it is unclear which of the two is meant.  Are loads + PUTs to nonoverlapping locations valid?  That would be fine for hardware-managed caches (even noncoherent ones), but not for software-managed caches.
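> 
> To make the questionable pattern concrete, here is roughly what I mean (purely illustrative; "win" is a window under the SEPARATE model, "base" is rank 0's local pointer into its window memory, and "rank" is the calling process):
> 
>     /* No synchronization between the two ranks here; the question is
>        whether this combination is valid when the locations do not overlap. */
>     if (rank == 0) {
>         double x = base[0];              /* local load from window offset 0 */
>         (void) x;
>     } else if (rank == 1) {
>         double y = 42.0;
>         MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
>         MPI_Put(&y, 1, MPI_DOUBLE, 0, /* target_disp = */ 1,
>                 1, MPI_DOUBLE, win);     /* PUT to window offset 1 */
>         MPI_Win_unlock(0, win);
>     }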
> 
> One way to untangle this is to add more memory models: one that allows this pattern and another that disallows it.  IMO, there should be four different memory models to cover all the cases:
> 
> 1. MPI_WIN_SUCKY_MEMORY — for systems with software-managed caches.  Even loads + PUTs to nonoverlapping memory regions are not valid.
> 
> 2. MPI_WIN_SEPARATE — the current SEPARATE memory model, but loads + PUTs are allowed.  This works as long as the cache is hardware-managed.
> 
> 3. MPI_WIN_UNIFIED — this is what we decided on for our UNIFIED model.  WIN_SYNC is required for overlapping memory regions.  The window is not always and immediately unified; it eventually becomes unified even without MPI calls.  (A sketch of the WIN_SYNC usage follows this list.)
> 
> 4. MPI_WIN_SUPERUNIFIED — no WIN_SYNC required.  The application can assume that the windows are truly unified.  This works on x86-type architectures.
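> 
> For (3), the WIN_SYNC requirement at the target looks roughly like this (illustrative; "win" is a UNIFIED-model window, "flag" points into the local window memory, and a remote rank sets the flag with a PUT after its data PUT):
> 
>     /* Poll a flag that a remote rank sets with a PUT into our window.
>        MPI_Win_sync keeps the private and public copies of the window
>        in sync while we poll with plain loads. */
>     void wait_for_flag(MPI_Win win, int my_rank, volatile int *flag)
>     {
>         MPI_Win_lock(MPI_LOCK_SHARED, my_rank, 0, win);
>         while (*flag == 0)
>             MPI_Win_sync(win);
>         MPI_Win_unlock(my_rank, win);
>         /* In model (4), the polling loop alone would be enough; no WIN_SYNC. */
>     }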
> 
> Of course, all this will be heavily debated.  I’m just queuing up the discussion item.  :-)
> 
>  — Pavan
> 
> On Nov 4, 2013, at 1:55 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
> 
>> 369 seems trivial.  Can we just enumerate the changes on the ticket
>> and vote yes/no on moving forward with a reading in December on the
>> telecon?
>> 
>> Jeff
>> 
>> On Mon, Nov 4, 2013 at 12:51 PM, Jim Dinan <james.dinan at gmail.com> wrote:
>>> We also need to make sure that #349 is ready for a reading in December:
>>> 
>>> MPI_Aint addressing arithmetic:
>>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/349
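>>> 
>>> For context, this is roughly the kind of dynamic-window arithmetic involved (illustrative sketch using the MPI_Aint_add helper from the ticket; it assumes "win" was created with MPI_Win_create_dynamic, the origin holds an access epoch, and "i", "value", and "target_rank" are placeholders):
>>> 
>>>     /* Target side: attach a buffer and publish its address. */
>>>     double buf[100];
>>>     MPI_Aint addr;
>>>     MPI_Win_attach(win, buf, sizeof(buf));
>>>     MPI_Get_address(buf, &addr);
>>>     /* ... send addr to the origin, e.g. with MPI_Send or MPI_Bcast ... */
>>> 
>>>     /* Origin side: displacement for element i of the remote buffer,
>>>        without doing plain integer arithmetic on MPI_Aint values. */
>>>     MPI_Aint disp = MPI_Aint_add(addr, (MPI_Aint)(i * sizeof(double)));
>>>     MPI_Put(&value, 1, MPI_DOUBLE, target_rank, disp, 1, MPI_DOUBLE, win);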
>>> 
>>> I think the main todo here is to update the dynamic window examples.  I also
>>> have another open ticket that we should move forward:
>>> 
>>> RMA needs same_disp info key:
>>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/369
>>> 
>>> ~Jim.
>>> 
>>> 
>>> On Sat, Nov 2, 2013 at 8:40 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>>> 
>>>> Yeah, I'll set up a Doodle poll, but I wanted to confirm the scope first.  Since
>>>> Pavan greatly expanded the possible scope, I will propose a poll
>>>> aimed at finding more than one time slot, since I
>>>> suspect we will need that.
>>>> 
>>>> Jeff
>>>> 
>>>> On Sat, Nov 2, 2013 at 5:04 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>>>>> OK, when?  Do you want to set up a Doodle poll or something?
>>>>> 
>>>>> Rajeev
>>>>> 
>>>>> On Nov 2, 2013, at 4:12 PM, Jeff Hammond wrote:
>>>>> 
>>>>>> I would like to schedule a telecon for the RMA WG to discuss the
>>>>>> following three tickets.  We've discussed them all over email but I
>>>>>> want to get more conclusive feedback before the Chicago meeting.
>>>>>> 
>>>>>> - extend the use of MPI_WIN_SHARED_QUERY to all windows
>>>>>> (https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/397)
>>>>>> - request-based remote completion for RMA
>>>>>> (https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/398)
>>>>>> - generalize same_op_no_op and allow user to specify all ops to be
>>>>>> used (https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/399)
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>> Jeff
>>>>>> 
>>>>>> --
>>>>>> Jeff Hammond
>>>>>> jeff.science at gmail.com
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Jeff Hammond
>>>> jeff.science at gmail.com
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> -- 
>> Jeff Hammond
>> jeff.science at gmail.com
> 
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
> 

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji



