[mpiwg-rma] need MPI_MODE_NOSTORE (and MPI_MODE_NOLOAD) for passive-target

Jeff Hammond jeff.science at gmail.com
Tue Oct 22 10:21:01 CDT 2013


https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/396 created for
tracking purposes.  Please comment on the ticket as appropriate.

Jeff

On Mon, Oct 21, 2013 at 12:13 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
> MPI_MODE_NOFLUSHALL might allow an implementation to avoid keeping
> state to track the ranks targeted during an epoch.  For example, to
> optimize the sparse-rank-list case of FLUSH_ALL, I might track all
> the ranks that I target so that I can flush just those rather than
> flushing every rank in the window (which is sufficient but not
> necessary).  If the user does that tracking above MPI, or otherwise
> does not need FLUSH_ALL, then the implementation can skip this state
> and simply flush the remote rank in FLUSH.  I recognize that an
> implementation might also use that active-target list to turn FLUSH
> into a no-op for untargeted ranks, but that seems like a much less
> important optimization, since it is reasonable to assume that users
> call FLUSH only where it is actually needed.
>
> Jeff
>
> On Mon, Oct 21, 2013 at 12:06 PM, Jim Dinan <james.dinan at gmail.com> wrote:
>> Sounds reasonable to me.  Are there other assertions that would be
>> useful to consider?  For example, MPI_MODE_NOFLUSH could be given in
>> conventional lock/unlock epochs, though I can't think of a use case.
>>
>>  ~Jim.
>>
>>
>> On Mon, Oct 21, 2013 at 12:30 PM, Jeff Hammond <jeff.science at gmail.com>
>> wrote:
>>>
>>> Now that we allow both load-store and Put/Get within an epoch in the
>>> UNIFIED model, we should enable MPI_MODE_NOSTORE, as well as a new
>>> assertion MPI_MODE_NOLOAD, for MPI_WIN_LOCK(_ALL) in MPI 3.0
>>> Section 11.5.5.
>>>
>>> This suggestion was inspired by Brian's comment about NIC-cached
>>> atomics.  I presume that implementations can relax the consistency
>>> they enforce if these assertions are used.
>>>
>>> This is not necessarily a complete statement of the issues at hand,
>>> but I presume we can get there rather quickly if people think about
>>> this.
>>>
>>> Jeff
>>>
>>> --
>>> Jeff Hammond
>>> jeff.science at gmail.com
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com



-- 
Jeff Hammond
jeff.science at gmail.com


