[Mpi-22] please review - Send Buffer Access (ticket #45)
Erez Haba
erezh at [hidden]
Wed Dec 10 14:16:54 CST 2008
Thanks Dick,
I think you have a somewhat mistaken perception of the C const keyword. The only promise the C language makes about passing a parameter as 'const' is that the function will not change the contents of ANY memory location through THAT pointer. As long as you are not casting away constness in your function, you are following the const rules. You are still allowed to change the contents of that same memory through a different pointer that is not const.
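A minimal sketch of this point (the function and variable names are illustrative only):

    #include <stdio.h>

    /* 'reader' never writes through its const pointer 'buf', but the same
     * memory may legally change through the non-const alias 'p'. */
    static int reader(const int *buf, int *p)
    {
        int before = buf[0];
        *p = before + 1;         /* modifies buf[0] via a different, non-const pointer */
        return buf[0] - before;  /* returns 1: the data behind the const pointer changed */
    }

    int main(void)
    {
        int data = 41;
        printf("%d\n", reader(&data, &data));  /* prints 1 */
        return 0;
    }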
All the examples that you have below are 100% valid with the const keyword.
Thanks,
.Erez
From: mpi-22-bounces_at_[hidden] [mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Richard Treumann
Sent: Wednesday, December 10, 2008 10:45 AM
To: MPI 2.2
Subject: Re: [Mpi-22] please review - Send Buffer Access (ticket #45)
Brian has stated the issue related to the send buffer access proposal very well.
We have seen some arguments that focus on how hard the rule is for users, how counterintuitive it is, and how often it is violated either naively or knowingly. These are important issues but not the only issues.
We have seen arguments about the ways optimization options could be lost if the rule is changed. These are important issues but not the only issues.
As Brian states: there is a cost benefit analysis here, not a perfect or even obvious right answer.
Early in the debate I raised the question of whether we could be risking optimizations and reminded people that it really does matter. That was at a stage when I perceived a lot of passion based on user-centered arguments and almost no discussion of the other side. The tone I perceived was basically "the rationale in the original standard seems to be moot, so all that remains is the user-centered argument". I felt the counter argument at least needed to be examined. That has happened.
I think we have now reached a point where both sides of the debate have been aired. People on each side have heard and considered the counter arguments. I think most forum members would agree that we cannot be absolutely certain whether keeping the send buffer restriction would ever prove to be valuable. I still think there is some risk in removing the restriction but I also see substantial value in removing it.
My take is that as a cost/benefit decision we should remove the restriction on send buffer access.
I also think putting the "const" qualifier on a send buffer parameter should be voted down.
The formal buffer argument to a send, as seen by the compiler, may or may not correspond to the data being sent; the datatype offsets play a role too. The most obvious case is when MPI_BOTTOM is the send argument, but there are other examples.
MPI_Send(&(array[0]), ...)
MPI_Send(array, ...)
MPI_Send(&var, ...)
MPI_Send(MPI_BOTTOM, ...)
are all valid ways of sending the content of array[10] when combined with a suitable datatype.
(for the "var" example we would need a datatype that set MPI_LB at addr(var) and used ( addr(array[10]-addr(var) ) as a displacement. Weird but valid.)
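A minimal sketch of that "var" case (the rank, tag, and communicator are placeholders, and MPI_Type_create_hindexed is used here instead of MPI_LB only to keep the example short):

    #include <mpi.h>

    /* Sends array[10] even though &var is the buffer argument: the datatype's
     * displacement is measured from addr(var) to addr(array[10]). */
    void send_array10_via_var(int *array, int *var, int dest, MPI_Comm comm)
    {
        MPI_Aint a_elem, a_var, disp;
        MPI_Get_address(&array[10], &a_elem);
        MPI_Get_address(var, &a_var);
        disp = a_elem - a_var;            /* displacement from the buffer argument */

        int blocklen = 1;
        MPI_Datatype dtype;
        MPI_Type_create_hindexed(1, &blocklen, &disp, MPI_INT, &dtype);
        MPI_Type_commit(&dtype);

        MPI_Send(var, 1, dtype, dest, 0, comm);  /* actually reads array[10] */
        MPI_Type_free(&dtype);
    }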
The compiler optimizations that can come from adding "const" are probably small. The const qualifier is semantically inaccurate if we consider MPI_BOTTOM to represent the entire memory array; every subroutine call alters some portions of memory. I presume the compiler just recognizes that it has no idea what range MPI_BOTTOM represents and ignores the "const".
Dick
Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
mpi-22-bounces_at_[hidden] wrote on 12/10/2008 12:36:05 AM:
> Subject: Re: [Mpi-22] please review - Send Buffer Access (ticket #45)
> From: Barrett, Brian W
> To: MPI 2.2
> Date: 12/10/2008 12:45 AM
> Sent by: mpi-22-bounces_at_[hidden]
> Please respond to "MPI 2.2"
>
> Ok, now that I've gotten my user's point of view off my chest, back to my
> view as an MPI implementer. I've tried to respond to all Alexander's
> objections. If I missed one, I apologize.
>
> > 1) The memory remapping scenario IO brought up a couple of days ago
>
> Unless someone's done something fun with operating system and architecture
> design I'm not aware of, remapping has one problem that always must be
> considered... Send buffers are not required to be (and frequently aren't)
> page aligned or a multiple of page size. Therefore, completely removing the
> send buffer from the user's process has the problem of also taking legal
> addresses with it (which would violate the standard). IBM's solution is
> elegant in that it allows remapping without removing the buffer from the
> sender's process space. Sandia has a solution called SMARTMAP that is not
> patented and that allows single-copy transfers in shared memory environments.
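>
> To make the alignment point concrete, here is a small sketch (the page size
> and buffer layout are assumed, not taken from any particular system): the
> smallest page-aligned region covering a typical send buffer extends past the
> buffer itself, so unmapping whole pages would also remove legal neighboring
> addresses.
>
>     #include <stdint.h>
>     #include <stdio.h>
>
>     int main(void)
>     {
>         enum { PAGE = 4096 };            /* assumed page size */
>         static char heap[3 * PAGE];
>         char *sendbuf = heap + 100;      /* send buffer, not page aligned */
>         size_t len = 5000;               /* not a multiple of the page size */
>
>         /* Round the buffer's bounds out to whole pages. */
>         uintptr_t lo = (uintptr_t)sendbuf & ~(uintptr_t)(PAGE - 1);
>         uintptr_t hi = ((uintptr_t)sendbuf + len + PAGE - 1) & ~(uintptr_t)(PAGE - 1);
>
>         printf("buffer: %zu bytes; covering pages: %zu bytes\n",
>                len, (size_t)(hi - lo));
>         return 0;
>     }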
>
> Point number 2 was a procedural argument. I believe others are in a better
> position than I to comment on this. My understanding, however, is that a
> technical objection can cause the vote to fail, but is not grounds for
> preventing a vote (particularly a second vote). If it were, we'd never get
> anything done.
>
> > 3) Imagine send buffers have to be pinned in memory. To avoid doing this
> > too often, these registrations will normally be cached. If more than one
> > send can be used for a buffer or, for that matter, overlapping portions of
> > the same buffer, say by different threads, access to the lookup-and-pin
> > will have to be made atomic. This will further complicate implementation
> > and introduce a potentially costly mutual exclusion primitive into the
> > critical path.
>
> The caching problem already exists. Consider a case where a large send
> completes, and then multiple small sends occur within that base and bound.
> This situation is perfectly legal, happens in codes in the wild, and must be
> dealt with by MPI implementations. If that's not enough, consider a case
> where the buffer is part of an active Window (which is legal, as long as the
> buffers in use for communication don't overlap). All these cases certainly
> should be handled by an MPI today.
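>
> As a concrete sketch of that pattern (ranks, tags, and sizes are
> placeholders): a large send over the whole buffer completes, and smaller
> sends then reuse overlapping sub-ranges of the same buffer, so a
> registration cache already has to cope with overlapping lookups today.
>
>     #include <mpi.h>
>
>     void overlapping_registrations(double *buf, int n, int dest, MPI_Comm comm)
>     {
>         /* One large send over the whole buffer. */
>         MPI_Send(buf, n, MPI_DOUBLE, dest, 0, comm);
>
>         /* After it completes, smaller sends within the same base and bound;
>          * each may hit the same cached registration as the large send. */
>         MPI_Send(buf,         n / 4, MPI_DOUBLE, dest, 1, comm);
>         MPI_Send(buf + n / 2, n / 4, MPI_DOUBLE, dest, 2, comm);
>     }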
>
> > 4) I wonder what a const modifier will do for a buffer identified by
> > MPI_BOTTOM and/or a derived data type, possibly with holes in it. How will
> > this square up with the C language sequence association rules?
>
> This sounds like an issue for the const proposal, which is different from
> the send buffer access proposal. I'm not sure I have enough data to form an
> opinion on the const proposal, but I'm fairly sure we can discuss the send
> buffer access proposal without considering this issue.
>
> > 5) Note also that if both #45 and #46 are introduced, there will be no way
> > to retract this, even with the help of MPI_INIT_ASSERTED, should we later
> > decide to introduce an assertion like MPI_NO_SEND_BUFFER_READ_ACCESS. The
> > const modifier from #46 will make that syntactically useless.
>
> If both are passed, that might be true. It could be argued the const
> proposal depends on the access proposal. However, it can not be rationally
> argued that the access proposal in any way depends upon the const proposal.
>
> The send buffer access proposal can certainly be passed and an assert added
> later (at whatever point the init_assert proposal is integrated into the
> standard) that allows MPI implementations to modify the send buffer.
>
> You raise a good point about the const proposal. But it has absolutely no
> bearing on the send buffer access proposal.
>
> > 6) Finally, what will happen in the Fortran interface, with the
> > copy-in/copy-out possibly happening at the MPI subroutine boundary for
> > array sections? If more than one send is allowed, the application can
> > pretty easily exhaust any virtual memory with a couple of long enough
> > vectors.
>
> How does that change from today? Today users send multiple buffers at the
> same time, and seem to cope with memory exhaustion issues just fine. Soon
> they might be able to remove the data copy they've had to make at the user
> level to work around the MPI access restriction, so there's actually less
> virtual memory in use. Seems like a win to me.
>
> > 7) In-place compression and/or encryption of the messages. Compression in
> > particular can work wonders on monotonous messages, and cost less time in
> > total than the transmission of so many giga-zeroes, for example. Again,
> > having send buffer access allowed and a const modifier attached will kill
> > this huge optimization opportunity. Too bad.
>
> While I hope you're joking about the giga-zeros, you do raise a valid
> concern, in that there are a number of optimizations regarding compression,
> encryption, and endian-swapping that may be eliminated by this proposal. On
> the flip side, as I argued in a previous e-mail, the user gains quite a bit
> in usability. We have to balance these two factors. Since users know where
> my office is, I tend to lean towards making their lives easier, particularly
> when it doesn't cause extra work for me. But I already sent an e-mail on
> that point...
>
> Our experience with Open MPI was that the potential for performance in other
> parts of the MPI (collectives, etc.) far outweighed any send-side tricks we
> could think of (and you haven't brought up any we didn't think of). So if
> we wanted to do compression or encryption, it would be done with send-side
> bounce buffers. Since a software pipeline would practically be required to
> get good performance, the bounce buffer would not have to scale with the
> size of the communication buffer but instead with the properties of the
> network pipeline. Of course, my opinion would be that it would be much
> simpler and much higher performance to support compression or encryption as
> part of the NIC as the data is streamed to the network. Otherwise, you're
> burning memory bandwidth doing the extra copy (even in the modify the send
> buffer case), and memory bandwidth is a precious resource for HPC
> applications.
>
> One other point to consider. If I were a user, I'd expect that my one-sided
> traffic also be compressed, encrypted, or endian-swapped. The standard
> already requires that multiple accesses be legal for one-sided communication.
> So you're going to have a situation where some communication can use a
> send-modify implementation and some cannot. I'm not familiar with how
> Intel's MPI is architected, but Open MPI is architected such that decisions
> such as compression, encryption, and endian-swapping would be made at a low
> enough level that the code path is the same whether the message is a
> point-to-point send or a one-sided put. Since that's some of the most
> complicated code in Open MPI, I can't foresee adding a second code path just
> to get a (dubious) performance benefit.
>
>
> Brian
>
> --
> Brian W. Barrett
> Dept. 1422: Scalable Computer Architectures
> Sandia National Laboratories
>