<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><a href="https://github.com/mpi-forum/mpi-issues/issues/667">https://github.com/mpi-forum/mpi-issues/issues/667</a><div><br></div><div><a href="https://github.com/mpi-forum/mpi-issues/issues/664">https://github.com/mpi-forum/mpi-issues/issues/664</a></div><div><br></div><div>You should create a new issue for session attributes. </div><div><br></div><div>We can merge into a single meta issue later if appropriate. <br><br><div dir="ltr">Sent from my iPhone</div><div dir="ltr"><br><blockquote type="cite">On 18. Jan 2023, at 19.17, Koziol, Quincey via mpi-forum <mpi-forum@lists.mpi-forum.org> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr"><span>“Third” that attributes are necessary for MPI.  HDF5 uses them to make certain that cached file data gets written to the file (and the file is closed properly) before MPI_Finalize() in the world model.  Frankly, I wasn’t paying enough attention to the sessions work ten years ago and didn’t realize that they aren’t available as a mechanism for getting this same action when a session is terminated.  This is a critical need to avoid corrupting user data.</span><br><span></span><br><span>Jeff - please add me to your work on adding attributes to requests and ops, and I’ll write text for adding attributes to sessions.</span><br><span></span><br><span>    Quincey</span><br><span></span><br><span></span><br><blockquote type="cite"><span>On Jan 16, 2023, at 2:10 PM, Jed Brown via mpi-forum <mpi-forum@lists.mpi-forum.org> wrote:</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Second that MPI attributes do not suck. PETSc uses communicator attributes heavily to avoid lots of confusing or wasteful behavior when users pass communicators between libraries, and similar comments would apply if other MPI objects were passed between libraries in that way.</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>It was before my time, but I think PETSc's use of attributes predates MPI-1.0, and MPI's early and pervasive support for attributes is one of the things I celebrate when discussing software engineering of libraries intended for use by other libraries versus those made for use by applications. Please don't dismiss attributes even if you don't enjoy them.</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Jeff Hammond via mpi-forum <mpi-forum@lists.mpi-forum.org> writes:</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><blockquote type="cite"><span>The API is annoying, but it really only gets used in library middleware by people like us who can figure out the void* casting nonsense and use it correctly.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Casper critically depends on window attributes.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Request attributes are the least intrusive way to allow libraries to do
completion callbacks. They give users a way to do this that adds zero instructions to the critical path and is completely invisible unless actually required.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Attributes do not suck, and people should stop preventing those of us who write libraries to make the MPI ecosystem better from doing our jobs because they want to whine about problems they’re too lazy to solve.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>I guess I’ll propose request and op attributes, because I need them, and people can either solve those problems in better ways or get out of the way.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Jeff</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Sent from my iPhone</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>On 16.
Jan 2023, at 20.27, Holmes, Daniel John <daniel.john.holmes@intel.com> wrote:</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Hi Jeff,</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>When adding session as an object to MPI, a deliberate choice was made not to support attributes for session objects because “attributes in MPI suck”.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>This decision was made despite the usage (by some tools) of “at exit” attribute callbacks fired by the destruction of MPI_COMM_SELF during MPI_FINALIZE in the world model and the consequent obvious omission of a similar hook during MPI_SESSION_FINALIZE in the session model (there is also no MPI_COMM_SELF in the session model, so this is not a simple subject).</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Removal of attributes entirely – blocked by back-compat because usage is known to exist.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Expansion of attributes orthogonally 
– blocked by “attributes in MPI suck” accusations.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Result – inconsistency in the interface that no-one wants to tackle.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Best wishes,</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Dan.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>From: mpi-forum <mpi-forum-bounces@lists.mpi-forum.org> On Behalf Of Jeff Hammond via mpi-forum</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Sent: 16 January 2023 14:40</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>To: MPI Forum <mpi-forum@lists.mpi-forum.org></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Cc: Jeff Hammond <jeff.science@gmail.com></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Subject: [Mpi-forum] why do we only support caching on win/comm/datatype?</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote 
type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>I am curious if there is a good reason from the past as to why we only support caching on win, comm and datatype, and no other handles?</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>I have a good use case for request attributes and have found that the implementation overhead in MPICH appears to be zero.  The implementation in MPICH requires adding a single pointer to an internal struct.  This struct member will never be accessed except when the user needs it, and it can be placed at the end of the struct so that it doesn't even pollute the cache.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>I wondered if callbacks were a hidden overhead, but they are only called explicitly and synchronously, so they would not interfere with critical-path uses of requests.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>https://github.com/mpi-forum/mpi-issues/issues/664 has some details, but since I do not understand how MPICH generates the MPI bindings, I only implemented the back-end MPIR code.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote
type="cite"><span>It would make MPI more consistent if all opaque handles supported attributes.  In particular, I'd love to have a built-in MPI_Op attribute for the function pointer the user provided (which is similar to how one can query input args associated with MPI_Win) because that appears to be the only way I can implement certain corner cases of MPI F08.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Thanks,</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Jeff</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>--</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Jeff Hammond</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>jeff.science@gmail.com</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>http://jeffhammond.github.io/</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>_______________________________________________</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>mpi-forum mailing list</span><br></blockquote></blockquote><blockquote type="cite"><blockquote 
type="cite"><span>mpi-forum@lists.mpi-forum.org</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>https://lists.mpi-forum.org/mailman/listinfo/mpi-forum</span><br></blockquote></blockquote></div></blockquote></div></body></html>