[mpiwg-rma] iflush ticket needed

Jeff Hammond jeff.science at gmail.com
Fri Dec 12 18:57:04 CST 2014


if put 2 is allowed to overtake put 1, then that basically says that
only the last iflush in a series has an effect, which makes this
feature less useful.

what you seem to be saying is that you want this:

put 1
iflush -> r1
put 2
iflush -> r2
wait(r1,r2)

to be no different than this:

put 1
put 2
iflush -> r
wait(r)

because that's what you'd get if you allow the iflush calls to reorder.
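for reference, here is the blocking analogue of the two sequences above, written with MPI-3 calls that exist today (MPI_Put and MPI_Win_flush).  this is just a sketch, not from the proposal: MPI_Win_iflush is still hypothetical, so it only appears in a comment.

```c
/* Sketch: the blocking-flush analogue of the sequences above, using
 * MPI-3 routines that exist today.  MPI_Win_iflush is hypothetical. */
#include <mpi.h>

void flush_per_put(double *buf, MPI_Win win, int target)
{
    /* put 1, then flush: put 1 is remotely complete before put 2 is issued */
    MPI_Put(&buf[0], 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_flush(target, win);

    /* put 2, then flush: put 2 is remotely complete on return */
    MPI_Put(&buf[1], 1, MPI_DOUBLE, target, 1, 1, MPI_DOUBLE, win);
    MPI_Win_flush(target, win);
}

/* With a hypothetical MPI_Win_iflush(target, win, &req), the question in
 * this thread is whether waiting on r1 and r2 separately may differ from
 * issuing a single iflush after both puts -- i.e., whether two iflush
 * completions are allowed to reorder. */
```

the point of the sketch is that with blocking flush the ordering question cannot arise, because each flush returns only after the preceding put is complete; iflush creates the ambiguity.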

jeff

On Fri, Dec 12, 2014 at 4:50 PM, Balaji, Pavan <balaji at anl.gov> wrote:
>
> "Not allowed to overtake" is hard to guarantee, depending on how the MPI implementation is written.  This is even more painful if the test uses multiple threads.
>
> Anyway, I'll make slides.
>
>   -- Pavan
>
>> On Dec 12, 2014, at 4:54 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>
>>> 1. What happens when one thread is waiting on an iflush request while another thread keeps posting more operations.
>>
>> My understanding of Bill's comments, with which I agree, is that
>> waiting on the iflush request is not a synchronization point with
>> respect to any operations issued after the iflush was called.
>>
>>> 2. What are the ordering semantics for two consecutive nonblocking iflush operations?  Can they finish out of order?  If they finish out of order, what are the semantics of accumulate operations with these epochs?  Are they ordered?
>>
>> They should not be allowed to finish out of order.
>>
>>> 3. What are the semantics of put operations in two nonblocking epochs?  Can they overwrite each other?
>>
>> You mean this?
>>
>> put 1
>> iflush -> r1
>> put 2
>> iflush -> r2
>> wait(r1,r2)
>>
>> The only reasonable way to define this is that put 1 is flushed before
>> put 2.  Obviously, with put-flush-put-flush today, the second put can
>> overwrite the first.  However, it should not be permitted to overtake
>> it.
>>
>> Jeff
>>
>>
>> On Fri, Dec 12, 2014 at 2:40 PM, Balaji, Pavan <balaji at anl.gov> wrote:
>>>
>>> I can make slides to describe the options.  I don't want to spend time making a standard-level document, since we are inevitably going to argue about and change it.
>>>
>>> FWIW, the below definition is what I'd expect, but it still doesn't address the other questions in my list.
>>>
>>>  -- Pavan
>>>
>>>> On Dec 12, 2014, at 3:40 PM, William Gropp <wgropp at illinois.edu> wrote:
>>>>
>>>> I’d like to see a document that describes the options, then arrange a telecon with the interested folks (I do find a well-prepared telecon useful).  The telecon need not involve the entire WG.
>>>>
>>>> As a start, I’d define completing an “iflush” as completing RMA operations initiated before the iflush was called.  It would say nothing (either way) about operations initiated after the iflush.  Most of the answers should be derivable from this definition, which matches the other uses of nonblocking in MPI.
>>>>
>>>> Bill
>>>>
>>>> On Dec 11, 2014, at 11:19 PM, Balaji, Pavan <balaji at anl.gov> wrote:
>>>>
>>>>>
>>>>> Can we discuss this on the WG telecon (which the chair needs to restart)?
>>>>>
>>>>> I'd like to first put together rough thoughts on the semantics in slides before writing up text.  Specific issues to clarify:
>>>>>
>>>>> 1. What happens when one thread is waiting on an iflush request while another thread keeps posting more operations.
>>>>>
>>>>> 2. What are the ordering semantics for two consecutive nonblocking iflush operations?  Can they finish out of order?  If they finish out of order, what are the semantics of accumulate operations with these epochs?  Are they ordered?
>>>>>
>>>>> 3. What are the semantics of put operations in two nonblocking epochs?  Can they overwrite each other?
>>>>>
>>>>> There might be more issues that need clarification too.  I'm happy to make slides discussing a first draft of these (and more perhaps), but I don't want the discussion to be over email.
>>>>>
>>>>> -- Pavan
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>>> On Dec 10, 2014, at 2:57 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>>>>>
>>>>>> We really need a ticket, i.e., a substantive written proposal for
>>>>>> iflush, etc., in order to make progress on the umbrella topic that
>>>>>> includes it and #459.
>>>>>>
>>>>>> Pavan: Please let me know when you will have something for this.
>>>>>>
>>>>>> Jeff
>>>>>>
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: MPI Forum <mpi-forum at lists.mpi-forum.org>
>>>>>> Date: Wed, Dec 10, 2014 at 11:11 AM
>>>>>> Subject: Re: [MPI Forum] #459: RMA sync ops with vector of windows
>>>>>> To:
>>>>>>
>>>>>>
>>>>>> #459: RMA sync ops with vector of windows
>>>>>> -------------------------------------+-------------------------------------
>>>>>>  Reporter:  jhammond                 |                  Owner:  jhammond
>>>>>>      Type:  New routine(s)           |                 Status:  assigned
>>>>>>  Priority:  Not ready / author       |              Milestone:  2014/12/08
>>>>>>             rework                   |              California, USA
>>>>>>   Version:  MPI 4.0                  |             Resolution:
>>>>>>  Keywords:  RMA                      |  Implementation status:  Waiting
>>>>>> -------------------------------------+-------------------------------------
>>>>>>
>>>>>> Comment (by gropp):
>>>>>>
>>>>>> The WG found this interesting, but notes that there are alternatives that
>>>>>> may provide the same capability.  These include nonblocking flush.  In a
>>>>>> straw vote,  iflush received 11 votes and nflush received 3; in contrast,
>>>>>> nsync received 9 and isync received 4.
>>>>>>
>>>>>> --
>>>>>> Ticket URL: <https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/459#comment:3>
>>>>>> MPI Forum <https://svn.mpi-forum.org/>
>>>>>> MPI Forum
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Jeff Hammond
>>>>>> jeff.science at gmail.com
>>>>>> http://jeffhammond.github.io/
>>>>>> _______________________________________________
>>>>>> mpiwg-rma mailing list
>>>>>> mpiwg-rma at lists.mpi-forum.org
>>>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
>>>>
>>>
>>> --
>>> Pavan Balaji
>>> http://www.mcs.anl.gov/~balaji
>>>
>>
>>
>>
>
>



-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/
