I do like #397. The info key is something that could go into MPI 3.1,
and it helps an existing usage pattern (e.g., in OSHMPI or ARMCI-MPI).
Ticket #397 looks like something that we will have to target at MPI 4.

IIUC, MPICH has started detecting this automatically, without the info
key, so this ticket may be moot. Hopefully an MPICH developer familiar
with this optimization can comment.

~Jim.

On Tue, Oct 14, 2014 at 10:58 AM, Jeff Hammond <jeff.science@gmail.com> wrote:

I contend that https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/397
is the right way to solve this problem. Using overlapping windows here
is unscalable: with one win_create for every process on the node on
top of one win_allocate_shared per node, you end up with ppn+1 windows
to approximate the effect of the single win_allocate you would get if
my ticket passes. That is pretty significant overhead if one wants to
run OpenSHMEM on an Intel Xeon Phi...
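
Concretely, the overlapping-windows pattern in question looks roughly
like this (a minimal two-window sketch; the buffer size is illustrative
and error checking is omitted):

    MPI_Comm shm_comm;
    MPI_Win  shm_win, world_win;
    double  *buf;

    /* One shared-memory window per node: split off the ranks that can
       share memory, then allocate the buffer through that window. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shm_comm);
    MPI_Win_allocate_shared(1024 * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, shm_comm, &buf, &shm_win);

    /* Overlapping window: expose the same buffer through a world
       window, so it is reachable by load/store within the node and by
       RMA across nodes. */
    MPI_Win_create(buf, 1024 * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &world_win);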

And note that the time to set up all these windows properly is
nontrivial; it has been measured in the context of another project.
I can't share the details, since the work is unpublished and I'm not
in a position to decide to share it at this point.

Jeff

On Tue, Oct 14, 2014 at 7:42 AM, Jim Dinan <james.dinan@gmail.com> wrote:
> This is an info key that, e.g., OSHMPI could use to tell MPI that the
> buffer being passed to MPI_Win_create was allocated via
> MPI_Win_allocate_shared. If this is already covered by another ticket,
> let me know the existing Trac number and I will close this one. If we
> want to pursue this as a 3.0 erratum, we could use this new ticket.
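>
> For concreteness, usage would look something like the sketch below;
> the key name "shm_allocated" is only a placeholder here, since the
> actual name is whatever the ticket settles on:
>
>     /* buf points into the segment returned earlier by
>        MPI_Win_allocate_shared; size and disp_unit match that call. */
>     MPI_Info info;
>     MPI_Win  win;
>     MPI_Info_create(&info);
>     /* Placeholder key: promise that buf lives in a shared-memory
>        segment, so the implementation can use direct load/store. */
>     MPI_Info_set(info, "shm_allocated", "true");
>     MPI_Win_create(buf, size, disp_unit, info, MPI_COMM_WORLD, &win);
>     MPI_Info_free(&info);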
>
> ~Jim.
>
> On Tue, Oct 14, 2014 at 10:38 AM, Jeff Hammond <jeff.science@gmail.com>
> wrote:
>>
>> I don't fully understand. Is this designed to make it easier to use
>> the overlapping-windows trick, because we can get at shm inside other
>> window types? I have a ticket that solves that issue holistically
>> already...
>>
>> Sent from my iPhone
>>
>> On Oct 14, 2014, at 7:17 AM, Jim Dinan <james.dinan@gmail.com> wrote:
>>
>> Hi All,
>>
>> Please see the ticket
>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/460
>> for a new info key that was proposed by Mikhail Brinsky of Intel.
>>
>> Thanks,
>> ~Jim.
--
Jeff Hammond
jeff.science@gmail.com
http://jeffhammond.github.io/

_______________________________________________
mpiwg-rma mailing list
mpiwg-rma@lists.mpi-forum.org
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma