[mpiwg-p2p] Meeting Today?

Jeff Hammond jeff.science at gmail.com
Mon Mar 27 17:32:44 CDT 2017


On Mon, Mar 27, 2017 at 3:10 PM, Dan Holmes <d.holmes at epcc.ed.ac.uk> wrote:

> Hi Jeff,
>
> I wrote some notes about our discussion.
> https://github.com/mpiwg-p2p/p2p-issues/wiki/notes-2017-03-27
>
>
Indeed, as we discussed in San Jose last year, I don't think we need to
make all 36 A-F pairs.

Fsend doesn't block on anything, so it is effectively
Ifsend(req=REQUEST_NULL).  Synchronous freeing send (Fssend or Sfsend?) is
valid and arguably useful for all of the reasons that Issend is.

Ready freeing send (same naming quandary) is valid, but since I've heard lots
of criticism of ready send before, I won't try to defend it in this context.

Arecv and Iarecv both make sense.  And they take a "void**" (declared as
void* in the C binding, of course) just like MPI_Alloc_mem (and not like
MPI_Recv, which fills caller-supplied storage).


> In short, the memory must be returned to the original owner for
> de-allocation.
>

MPI owns the memory.  No stack or user heap allocators allowed.  Nothing
else is the slightest bit practical.


> We mentioned an attach/detach method for using user memory.
>

There is no value in this.  MPI can't turn arbitrary user memory into
shared memory in general, so attach/detach defeats the primary purpose.


> We also discussed using Fsend/Arecv to/from MPI_COMM_NULL to transfer
> ownership without needing another message.
>

This makes no sense.  Please elaborate.


> This needs careful thought - hence the suggestion for sequence diagrams.
>
>
I drew most of them at some point.  Easy to do again.

Jeff


> Cheers,
> Dan.
>
> On 27 Mar 2017, at 22:42, Jeff Hammond <jeff.science at gmail.com> wrote:
>
> Actually, it has to be required in the literal sense.
>
> Fsend frees memory. It has to know which allocator was used. MPI_Alloc_mem is the
> only one MPI knows about.
>
> Same in reverse for Arecv and MPI_Free_mem.
>
> How else can you do it? We could register callbacks for the memory allocator,
> but that would almost certainly prevent every useful optimization.
>
> Jeff
>
> Sent from my iPhone
>
> On Mar 27, 2017, at 8:57 AM, Dan Holmes <d.holmes at epcc.ed.ac.uk> wrote:
>
> “required” is possibly too strong. “advisable” would be closer to my
> expectation. “needed in order for MPI to enable all possible optimisations”
> is more wordy/precise.
>
> Cheers,
> Dan.
>
> On 27 Mar 2017, at 16:55, Jeff Hammond <jeff.science at gmail.com> wrote:
>
> No. MPI_Alloc_mem and MPI_Free_mem were going to be required. At least
> that was my plan.
>
> Sent from my iPhone
>
> On Mar 27, 2017, at 8:51 AM, Jim Dinan <james.dinan at gmail.com> wrote:
>
> Did we previously look at using MPI_Buffer_attach as a way to support
> allocate-and-recv?
>
> On Mon, Mar 27, 2017 at 11:38 AM, Jeff Hammond <jeff.science at gmail.com>
> wrote:
>
>> I'm on vacation but I endorse other people doing stuff with Fsend-Arecv,
>> since the slacker who owned it for the past two years hasn't made any
>> progress. I think I copied all the relevant content to GitHub already.
>>
>> Jeff
>>
>> Sent from my iPhone
>>
>> > On Mar 27, 2017, at 8:16 AM, Dan Holmes <d.holmes at epcc.ed.ac.uk> wrote:
>> >
>> > Hi Jim, et al,
>> >
>> > I was hoping to move the WG on to talking about Freeing-Send and
>> Allocating-Receive (Fsend & Arecv).
>> >
>> > I’d like to re-boot and refresh caches on that, with a goal of
>> presenting something (probably informally) at the next face-to-face meeting
>> in June.
>> >
>> > I’ll start the call and see how many turn up.
>> >
>> > Cheers,
>> > Dan.
>> >
>> >> On 27 Mar 2017, at 15:28, Jim Dinan <james.dinan at gmail.com> wrote:
>> >>
>> >> Hi All,
>> >>
>> >> The info query proposal seems to be converging.  I don't think there's
>> anything to discuss on this topic this week.
>> >>
>> >> Current status is that Pavan will check with vendors to make sure they
>> are ok with updating MPICH to report the user's info key value instead of
>> the effective value being used by the implementation.  Assuming this is ok,
>> it sounds like the proposal that was read at the last meeting is ready to
>> move forward into voting.
>> >>
>> >> Are there any other topics for discussion?  If not, I think we can
>> cancel today's meeting.
>> >>
>> >> Cheers,
>> >> ~Jim.
>> >> _______________________________________________
>> >> mpiwg-p2p mailing list
>> >> mpiwg-p2p at lists.mpi-forum.org
>> >> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-p2p
>> >
>> >
>> > --
>> > The University of Edinburgh is a charitable body, registered in
>> > Scotland, with registration number SC005336.
>> >
>>
>
>



-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/