[mpiwg-rma] RMA WG Telecon
Jeff Hammond
jeff.science at gmail.com
Wed Sep 12 08:12:03 CDT 2018
On the topic of accumulate info keys,
https://github.com/mpi-forum/mpi-issues/issues/36 is the necessary first
step to make the default sufficiently general for RMA to support all usage
models.
https://github.com/mpi-forum/mpi-issues/issues/46 was my idea for the next
step. It is not very specific, but the intent was to allow the user to
describe their use of RMA in detail. This could go all the way to a
database of (op, basic_datatype, complete_datatype, max_count) tuples.
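As a purely hypothetical illustration of that level of detail, such a
database could be flattened into a single info value. The key name
which_accumulate_ops and the value syntax below are made up for
illustration and appear neither in the standard nor in the issue:

  MPI_Info info;
  MPI_Info_create(&info);
  /* Hypothetical: the application promises to use only these
     (op, basic_datatype, complete_datatype, max_count) combinations. */
  MPI_Info_set(info, "which_accumulate_ops",
               "MPI_SUM:MPI_DOUBLE:MPI_DOUBLE:1;"
               "MPI_REPLACE:MPI_INT:MPI_INT:1024");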
I suspect a more pragmatic approach is to define additional keys that we
think implementations will actually use (a usage sketch follows the list),
such as:
* no_noncontiguous_datatypes - excludes noncontiguous datatypes, e.g.
MPI_Type_vector with stride != 1, and anything else that might lead to
pack/unpack.
* max_count (or should it be max_target_count?) - specifies an upper bound
on the count used in any one RMA operation, i.e. on the message size.
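To make this concrete, here is a minimal sketch of how an application
might pass these hints at window creation. The accumulate_ops key already
exists in MPI-3; no_noncontiguous_datatypes and max_count are only the
keys proposed in this mail, so their names and accepted values are
assumptions:

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      MPI_Info info;
      MPI_Info_create(&info);

      /* Standard MPI-3 key: a single op (plus MPI_NO_OP) will be used. */
      MPI_Info_set(info, "accumulate_ops", "same_op_no_op");

      /* Proposed keys from this mail (hypothetical, not in the standard):
         only contiguous datatypes, and a per-operation count limit. */
      MPI_Info_set(info, "no_noncontiguous_datatypes", "true");
      MPI_Info_set(info, "max_count", "1024");

      double *base;
      MPI_Win win;
      MPI_Win_allocate(1024 * sizeof(double), sizeof(double), info,
                       MPI_COMM_WORLD, &base, &win);
      MPI_Info_free(&info);

      /* ... epochs of MPI_Accumulate / MPI_Get_accumulate that honor the
         hints given above ... */

      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
  }

An implementation that does not recognize the new keys would simply ignore
them, which is the usual semantics for info hints.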
Jeff
On Wed, Sep 12, 2018 at 1:26 AM, Joseph Schuchart via mpiwg-rma <
mpiwg-rma at lists.mpi-forum.org> wrote:
> Pavan,
>
> I listened in on the MPI-RMA WG telecon yesterday and found some
> interesting points in it. In particular, I am interested in the
> discussion on atomics and same_op. I support the notion that MPI should
> choose a conservative default, so that users do not run into surprising
> undefined behavior because the implementation expects or supports only
> same_op, something they may not be aware of.
>
> As a developer of a framework built on top of MPI RMA, I would also be
> interested in getting information from the MPI implementation about which
> atomic operations are actually supported in hardware on the current
> system. That would allow me to pick different implementations of a
> specific feature in order to fully exploit the available hardware
> capabilities (similar to C++ `std::atomic_is_lock_free`). Are there any
> plans to provide such an interface? It could be combined with an info key
> (say `native_op`) through which the user promises to use only a mix of
> operations that are supported in hardware, which would avoid the
> fall-back to active messages discussed yesterday.
>
> Last but not least, I am also interested in the shared-memory
> optimization for collectives (IIRC, MPI_DISCARD?). I couldn't find an
> issue about this on GitHub. Is there any publicly available information
> you can share?
>
> Any input would be much appreciated.
>
> Many thanks in advance,
> Joseph
>
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/