[mpiwg-rma] Proposed Info Key

Jim Dinan james.dinan at gmail.com
Tue Oct 14 10:43:25 CDT 2014


I do like #397.  The info key is something that could go into 3.1, and it
helps an existing usage pattern (e.g., in OSHMPI or ARMCI-MPI).  Ticket #397
looks like something that we will have to target for MPI 4.

IIUC, MPICH has started detecting this automatically without the info key,
so this ticket may be moot.  Hopefully an MPICH developer familiar with
this optimization can comment.
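
For reference, here is a minimal sketch of the usage pattern in question.
The info key name "win_shared_buffer" below is made up purely for
illustration; the actual key name and semantics are whatever ticket #460
defines.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm node_comm;
        MPI_Win  shm_win, win;
        MPI_Info info;
        void    *base;
        MPI_Aint size = 1 << 20;   /* 1 MiB per process, for illustration */

        MPI_Init(&argc, &argv);

        /* One shared-memory window per node. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Win_allocate_shared(size, 1, MPI_INFO_NULL, node_comm,
                                &base, &shm_win);

        /* A world-wide window created over the same buffer.  The info key
         * (hypothetical name) tells the implementation that the buffer came
         * from MPI_Win_allocate_shared, so it can keep using the
         * shared-memory path instead of treating it as ordinary user
         * memory. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "win_shared_buffer", "true");
        MPI_Win_create(base, size, 1, info, MPI_COMM_WORLD, &win);
        MPI_Info_free(&info);

        /* ... RMA through win, plus load/store within the node ... */

        MPI_Win_free(&win);
        MPI_Win_free(&shm_win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }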

 ~Jim.

On Tue, Oct 14, 2014 at 10:58 AM, Jeff Hammond <jeff.science at gmail.com>
wrote:

> I contend that https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/397
> is the right way to solve this problem.  Using overlapping windows
> here is not scalable: with one win_create for every process in the
> node on top of one win_allocate_shared per node, you end up with
> ppn+1 windows to hack the effect of the single win_allocate that
> would suffice if my ticket is passed.  This is pretty significant
> overhead if one wants to run OpenSHMEM on an Intel Xeon Phi...
>
> And note that the time to set up all these windows properly is
> nontrivial.  It has been measured in the context of another project.
> I can't share the details since it's not published and I'm not in a
> position to decide to share it at this point.
>
> Jeff
>
> On Tue, Oct 14, 2014 at 7:42 AM, Jim Dinan <james.dinan at gmail.com> wrote:
> > This is an info key that e.g. OSHMPI could use to tell MPI that the buffer
> > being passed to MPI_Win_create was allocated via MPI_Win_allocate_shared.
> > If this is already covered by another ticket, let me know the existing trac
> > number and I will close this one.  If we want to pursue this as a 3.0
> > erratum, we could utilize this new ticket.
> >
> >  ~Jim.
> >
> > On Tue, Oct 14, 2014 at 10:38 AM, Jeff Hammond <jeff.science at gmail.com>
> > wrote:
> >>
> >> I don't fully understand. Is this designed to make it easier to use the
> >> overlapping windows trick because we can get at shm inside of other
> >> window types? I have a ticket that solves that issue holistically already...
> >>
> >> Sent from my iPhone
> >>
> >> On Oct 14, 2014, at 7:17 AM, Jim Dinan <james.dinan at gmail.com> wrote:
> >>
> >> Hi All,
> >>
> >> Please see the ticket
> >> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/460
> >> for a new info key that was proposed by Mikhail Brinsky of Intel.
> >>
> >> Thanks,
> >>  ~Jim.
> >>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/