[Mpi-22] please review - Send Buffer Access (ticket #45)

Richard Treumann treumann at [hidden]
Wed Dec 10 12:44:47 CST 2008

Brian has stated the issue related to the send buffer access proposal very well.

We have seen some arguments that focus on how hard the rule is for users,
how counterintuitive it is, and how often it is violated either naively or
knowingly. These are important issues but not the only issues.

We have seen arguments about the ways optimization options could be lost if
the rule is changed.  These are important issues but not the only issues.

As Brian states: there is a cost benefit analysis here, not a perfect or
even obvious right answer.

Early in the debate I raised the question about whether we could be risking
optimizations and reminded people that it really does matter. That was at a
stage when I perceived a lot of passion based on user centered arguments
and almost no discussion of the other side. The tone I perceived was
basically "the rationale in the original standard seems to be moot so all
that remains is the user centered argument".  I felt the counter argument
at least needed to be examined.  That has happened.

I think we have now reached a point where both sides of the debate have
been aired.  People on each side have heard and considered the counter
arguments. I think most forum members would agree that we cannot be
absolutely certain whether keeping the send buffer restriction would ever
prove to be valuable. I still think there is some risk in removing the
restriction but I also see substantial value in removing it.

My take is that as a cost/benefit decision we should remove the restriction
on send buffer access.

I also think putting the compiler attribute "const" on a send buffer
parameter should be voted down.

The formal argument to a send, as seen by the compiler, may or may not
correspond to the buffer.  The datatype offsets play a role too.  The most
obvious case is when MPI_BOTTOM is the send argument, but there are others:

MPI_Send(&(array[0]), ...)
MPI_Send(&var, ...)
MPI_Send(MPI_BOTTOM, ...)

are all valid ways of sending the content of array[10] when combined with a
suitable datatype.  (For the "var" example we would need a datatype that set
MPI_LB at addr(var) and used (addr(array[10]) - addr(var)) as a
displacement.  Weird, but legal.)

The compiler optimizations that can come from adding "const" are probably
small.  The const attribute is semantically inaccurate if we consider
MPI_BOTTOM to represent the entire memory array. Every subroutine call
alters some portions of memory. I presume the compiler just recognizes that
it has no idea what range MPI_BOTTOM represents and ignores the "const".


Dick Treumann  -  MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846         Fax (845) 433-8363

mpi-22-bounces_at_[hidden] wrote on 12/10/2008 12:36:05 AM:

> Re: [Mpi-22] please review - Send Buffer Access (ticket #45)
> Barrett, Brian W
> to:
> MPI 2.2,
> 12/10/2008 12:45 AM
> Sent by:
> mpi-22-bounces_at_[hidden]
> Please respond to "MPI 2.2"
> Ok, now that I've gotten my user's point of view off my chest, back to my
> view as an MPI implementer.  I've tried to respond to all Alexander's
> objections.  If I missed one, I apologize.
> > 1) The memory remapping scenario IO brought up a couple of days ago
> Unless someone's done something fun with operating system design that I'm
> not aware of, remapping has one problem that always must be considered...
> Send buffers are not required to be (and frequently are not) page aligned
> or a multiple of page size.  Therefore, completely removing the send
> buffer from the user's process has the problem of also taking legal
> addresses with it (which would violate the standard).  IBM's solution is
> elegant in that it allows remapping without removing from the sender's
> process space.  Sandia has a solution called SMARTMAP that is both not
> patented and allows single copy transfers in shared memory environments.
> Point number 2 was a procedural argument.  I believe others are in a
> better position than I to comment on this.  My understanding, however, is
> that a technical objection can cause the vote to fail, but is not grounds
> for preventing a vote (particularly a second vote).  If it were, we'd
> never get anything done.
> > 3) Imagine send buffers have to be pinned in memory. To avoid doing
> > this too often, these registrations will normally be cached. If more
> > than one send can be used for a buffer or, for that matter, overlapping
> > portions of the buffer, say by different threads, access to the
> > lookup-and-pin will have to be made atomic. This will further
> > complicate implementation and introduce a potentially costly mutual
> > exclusion primitive into the critical path.
> The caching problem already exists.  Consider a case where a large send is
> completed, then multiple small sends occur within that base and bound
> before the first is completed.  This situation is perfectly legal, happens
> in codes in the wild, and must be dealt with by MPI implementations.  If
> that's not enough, consider a case where the buffer is part of an active
> Window (which is legal, as long as the buffers in use for communication
> don't overlap).  All these cases certainly should be handled by an MPI
> today.
> > 4) I wonder what a const modifier will do for a buffer identified by
> > MPI_BOTTOM and/or a derived data type, possibly with holes in it. How
> > does this square up with the C language sequence association rules?
> This sounds like an issue for the const proposal, which is different from
> the send buffer access proposal.  I'm not sure I have enough data to form
> an opinion on the const proposal, but I'm fairly sure we can discuss the
> send buffer access proposal without considering this issue.
> > 5) Note also if both #45 and #46 will be introduced, there will be no
> > way to retract this, even with the help of MPI_INIT_ASSERTED, should we
> > decide to introduce an assertion like MPI_NO_SEND_BUFFER_READ_ACCESS.
> > The modifier from #46 will make that syntactically useless.
> If both are passed, that might be true.  It could be argued that the const
> proposal depends on the access proposal.  However, it can not be argued
> that the access proposal in any way depends upon the const proposal.  The
> send buffer access proposal can certainly be passed and an assert added
> later (at whatever point the init_assert proposal is integrated into the
> standard) that allows MPI implementations to modify the send buffer.  You
> raise a good point about the const proposal.  But it has absolutely no
> bearing on the send buffer access proposal.
> > 6) Finally, what will happen in the Fortran interface? With the
> > copy-in/copy-out possibly happening on the MPI subroutine boundary for
> > array sections? If more than one send is allowed, the application can
> > pretty easily exhaust any virtual memory with a couple of long enough
> > vectors.
> How does that change from today?  Today users send multiple buffers at
> the same time, and seem to cope with memory exhaustion issues just fine.
> So soon they might be able to remove the data copy they've had to make at
> the user level to work around the MPI access restriction, so there's
> actually less virtual memory in use.  Seems like a win to me.
> > 7) In-place compression and/or encryption of the messages. Compression
> > in particular can work wonders on monotonous messages, and cost less
> > time in total than the transmission of so many giga-zeroes, for
> > example. Again, having send buffer access allowed and a const modifier
> > attached will kill this optimization opportunity. Too bad.
> While I hope you're joking about the giga-zeroes, you do raise a valid
> concern, in that there are a number of optimizations regarding
> compression, encryption, and endian-swapping that may be eliminated by
> this proposal.  On the flip side, as I argued in a previous e-mail, the
> user gains quite a bit in usability.  We have to balance these two
> factors.  Since users know where my office is, I tend to lean towards
> making their lives easier, especially when it doesn't cause extra work
> for me.  But I already sent an e-mail on that point...
> Our experience with Open MPI was that the potential for performance in
> other parts of the MPI (collectives, etc.) far outweighed any send-side
> tricks we could think of (and you haven't brought up any we didn't think
> of).  So if we wanted to do compression or encryption, it would be done
> with bounce buffers.  Since a software pipeline would practically be
> required to get good performance, the bounce buffer would not have to
> scale with the size of the communication buffer but instead with the
> properties of the network pipeline.  Of course, my opinion would be that
> it would be much simpler and much higher performance to support
> compression or encryption as part of the NIC as the data is streamed to
> the network.  Otherwise, you're burning memory bandwidth doing the extra
> copy (even in the modify-the-buffer case), and memory bandwidth is a
> precious resource for HPC applications.
> One other point to consider.  If I were a user, I'd expect that my
> one-sided traffic also be compressed, encrypted, or endian-swapped.  The
> standard already requires multiple accesses be legal for one-sided
> communication.  So you're going to have a situation where some
> communication can use a send-modify implementation and some can not.  I'm
> not familiar with how Intel's MPI is architected, but Open MPI is
> architected such that transformations such as compression, encryption,
> and endian-swapping would be made at a low enough level that the code
> path is the same whether the message is a point-to-point send or a
> one-sided put.  Since that's some of the most complicated code in Open
> MPI, I can't foresee adding a second code path just to get a (dubious)
> performance benefit.
> Brian
> --
>    Brian W. Barrett
>    Dept. 1422: Scalable Computer Architectures
>    Sandia National Laboratories
> _______________________________________________
> mpi-22 mailing list
> mpi-22_at_[hidden]
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22
