<div dir="ltr">MPI_Start_and_wait makes only slightly more sense than MPI_Init_and_finalize. Can someone please show me data that justifies breaking the orthogonality of these functions that has existed in MPI for many years? How many cycles are saved by implementing this function instead of Dan's following proposal?<div><br><div><div>int MPI_Start(…) { // no op }</div><div>int MPI_Wait(…) { MPIX_Start_and_wait (…); }</div><div><br></div><div>Jeff</div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, May 17, 2017 at 3:35 PM, Dan Holmes <span dir="ltr"><<a href="mailto:d.holmes@epcc.ed.ac.uk" target="_blank">d.holmes@epcc.ed.ac.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word"><div>Hi Akhil,</div><div><br></div><div>A legal implementation of MPIX_Start_and_wait would be (pseudo-code):</div><div>int MPIX_Start_and_wait(…) { MPI_Start(…); MPI_Wait(…); }</div><div>Adding the interface is not sufficient to force a good implementation of that interface.<div><blockquote type="cite"></blockquote></div></div><div><br></div>On the other hand, a legal implementation of MPI_Start -> MPI_Wait would be (pseudo-code):<div>int MPI_Start(…) { // no op }</div><div>int MPI_Wait(…) { MPIX_Start_and_wait (…); }</div><div>If a good implementation of the new interface existed (better than nonblocking start), then it could be used to implement the existing API and there would be (almost) zero performance gain from using the new API.</div><div><br></div><div>It could be argued that the additional API change is neither necessary nor sufficient for the performance improvement. Justification for this new extension would have to rely on semantics - is there something that can be done with the new interface that cannot be done with the old one?</div><div><br></div><div>Cheers,</div><div>Dan.</div><div><br></div><div><div><blockquote type="cite"><div>On 17 May 2017, at 23:07, Anthony Skjellum <<a href="mailto:skjellum@auburn.edu" target="_blank">skjellum@auburn.edu</a>> wrote:</div><br class="gmail-m_6693414476417009611Apple-interchange-newline"><div><div style="margin-top:0px;margin-bottom:0px;font-family:calibri,arial,helvetica,sans-serif;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255)">We succeeded with 15-20 year old cores :-) in overlapping :-)<br></div><div style="margin-top:0px;margin-bottom:0px;font-family:calibri,arial,helvetica,sans-serif;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255)"><br></div><div style="margin-top:0px;margin-bottom:0px;font-family:calibri,arial,helvetica,sans-serif;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255)">We will share the paper when done.<br></div><span class="gmail-"><div 
style="margin-top:0px;margin-bottom:0px;font-family:calibri,arial,helvetica,sans-serif;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255)"><br></div><div style="margin-top:0px;margin-bottom:0px;font-family:calibri,arial,helvetica,sans-serif;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255)"><br></div><div id="gmail-m_6693414476417009611Signature" style="font-family:calibri,arial,helvetica,sans-serif;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255)"><div name="divtagdefaultwrapper" style="font-family:calibri,arial,helvetica,sans-serif;margin:0px"><div><div><div><div><div><div><div><div><div style="margin-top:0px;margin-bottom:0px"><font face="Times New Roman, Times, serif" size="2">Anthony Skjellum, PhD</font></div><div id="gmail-m_6693414476417009611Signature"><div style="margin:0px"><font face="Times New Roman, Times, serif" size="2"><div style="margin:0px;background-color:rgb(255,255,255)">Professor of Computer Science and Software Engineering and</div><div style="margin:0px;background-color:rgb(255,255,255)"> Charles D. McCrary Eminent Scholar Endowed Chair</div><div style="margin:0px;background-color:rgb(255,255,255)">Director of the Charles D. McCrary Institute</div><div style="margin:0px;background-color:rgb(255,255,255)">Samuel Ginn College of Engineering</div></font></div></div><div style="margin-top:0px;margin-bottom:0px;background-color:rgb(255,255,255)"><font face="Times New Roman, Times, serif" size="2">Auburn University<br>e-mail:<span class="gmail-m_6693414476417009611Apple-converted-space"> </span><a href="mailto:skjellum@auburn.edu" target="_blank">skjellum@auburn.edu</a><span class="gmail-m_6693414476417009611Apple-converted-space"> </span>or<span class="gmail-m_6693414476417009611Apple-converted-space"><wbr> </span><a href="mailto:skjellum@gmail.com" target="_blank">skjellum@gmail.com</a></font></div><div style="margin-top:0px;margin-bottom:0px;background-color:rgb(255,255,255)"><span style="font-family:"times new roman",times,serif;font-size:small">web sites:<span class="gmail-m_6693414476417009611Apple-converted-space"> </span></span><a href="http://cyber.auburn.edu/" id="gmail-m_6693414476417009611NoLP" target="_blank">http://cyber.auburn.edu</a><wbr> <a href="http://mccrary.auburn.edu/" id="gmail-m_6693414476417009611NoLP" target="_blank">http://mccrary.auburn.edu</a> </div><div style="margin-top:0px;margin-bottom:0px;background-color:rgb(255,255,255)"><font face="Times New Roman, Times, serif" size="2">cell: +1-205-807-4968 ; office: +1-334-844-6360</font></div><div style="margin-top:0px;margin-bottom:0px;font-family:tahoma"><font size="2"><br></font></div><div style="margin-top:0px;margin-bottom:0px;font-family:tahoma"><font size="2">CONFIDENTIALITY: This e-mail and any attachments are confidential and <br>may be privileged. 

________________________________
From: Langer, Akhil <akhil.langer@intel.com>
Sent: Wednesday, May 17, 2017 5:06 PM
To: Anthony Skjellum; Dan Holmes; mpiwg-coll@lists.mpi-forum.org
Cc: mpiwg-persistence@lists.mpi-forum.org; htor@inf.ethz.ch; Balaji, Pavan
Subject: Re: persistent blocking collectives

Hi Tony,

I agree that a non-blocking MPI_Start is required. If possible, can you please point me to the paper?
On many-core architectures with slower cores, the difference between blocking and non-blocking send/recv calls can be more tangible than on architectures with faster cores.

Thanks,
Akhil

________________________________
From: Anthony Skjellum <skjellum@auburn.edu>
Date: Wednesday, May 17, 2017 at 4:40 PM
To: Akhil Langer <akhil.langer@intel.com>; Dan Holmes <d.holmes@epcc.ed.ac.uk>; mpiwg-coll@lists.mpi-forum.org
Cc: mpiwg-persistence@lists.mpi-forum.org; htor@inf.ethz.ch; Balaji, Pavan <balaji@anl.gov>
Subject: Re: persistent blocking collectives

We have data associated with our first persistent collectives paper that show no significant advantage for blocking collectives over nonblocking or persistent ones, even though we haven't optimized the persistent path a lot yet.

MPIs with strong progress can give you more benefit for long transfers, provided there is a good implementation and sufficient memory bandwidth, and you have something to do between Start and Wait... we had success with point-to-point-based strong progress and overlap over 15 years ago...
Only for really short-message applications did we want polling progress, or progress only at wait...

Tony

________________________________
From: Langer, Akhil <akhil.langer@intel.com>
Sent: Wednesday, May 17, 2017 4:29 PM
To: Dan Holmes; mpiwg-coll@lists.mpi-forum.org
Cc: mpiwg-persistence@lists.mpi-forum.org; Anthony Skjellum; htor@inf.ethz.ch; Balaji, Pavan
Subject: Re: persistent blocking collectives

Hi Dan,

Thanks a lot for your reply. As you suggested, we could add an MPI_Start_and_wait() call that is a blocking version of MPI_Start. It could be used for both pt2pt and collective operations, without any additional changes.

I have noticed a tangible difference in broadcast collective performance between the two implementations that I provided in my original email. Most real HPC applications still use only blocking collectives, so having a blocking MPI_Start (that is, MPI_Start_and_wait) call for collectives is natural. The user can simply replace the blocking collective call with an MPI_Start_and_wait call, as in the sketch below.
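
For example, a minimal sketch (MPI_Bcast_init is the initialisation call from the persistent collectives proposal and MPIX_Start_and_wait is the call proposed here, so both names and their exact signatures are illustrative rather than standardised):

#include <mpi.h>

/* Today: a blocking broadcast inside an iteration loop. */
void bcast_loop_blocking(void *buf, int count, int niter, MPI_Comm comm)
{
    for (int i = 0; i < niter; i++) {
        MPI_Bcast(buf, count, MPI_BYTE, 0, comm);
        /* ... work on buf ... */
    }
}

/* With the proposal: plan the broadcast once, then start-and-wait each
 * iteration. MPI_Bcast_init and MPIX_Start_and_wait are proposed names,
 * not calls in the current MPI standard. */
void bcast_loop_persistent(void *buf, int count, int niter, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Bcast_init(buf, count, MPI_BYTE, 0, comm, MPI_INFO_NULL, &req);
    for (int i = 0; i < niter; i++) {
        MPIX_Start_and_wait(&req, MPI_STATUS_IGNORE);
        /* ... work on buf ... */
    }
    MPI_Request_free(&req);
}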

We have also seen that blocking sends/recvs are faster than the corresponding non-blocking calls.

Please let me know what kind of information would be useful to make this succeed. I can work on this.

Thanks,
Akhil

________________________________
From: Dan Holmes <d.holmes@epcc.ed.ac.uk>
Date: Wednesday, May 17, 2017 at 5:10 AM
To: Akhil Langer <akhil.langer@intel.com>
Cc: mpiwg-persistence@lists.mpi-forum.org; Anthony Skjellum <skjellum@auburn.edu>
Subject: Re: persistent blocking collectives

Hi Akhil,

Thank you for your suggestion. This is an interesting area of API design for MPI. Let me jot down some notes in response to your points.

The MPI_Start function is used by both our proposed persistent collective communications and the existing persistent point-to-point communications. For consistency in the MPI Standard, any change to MPI_Start must be applied to point-to-point as well.

Our implementation work for persistent collective communication currently leverages point-to-point communication in a similar manner to your description of the tree broadcast. However, this is not required by the MPI Standard and is known to be a sub-optimal implementation choice. The interface design should not be determined by the needs of a poor implementation method.

All schedules for persistent collective communication operations involve multiple "rounds". Each round concludes with a dependency on one or more remote MPI processes, i.e. a "wait". This is not the case with point-to-point, where lower latency can be achieved with a fire-and-forget approach in some situations (ready mode or small eager-protocol messages). Even for small buffer sizes, there is no ready mode or eager protocol for collective communications.

There is ongoing debate about the best method for implementing "wait", e.g. active polling (spin wait) or signals (idle wait). For collective operations, the inter-round "wait" could be avoided in many cases by using triggered operations: an incoming network packet is processed by the network hardware and triggers one or more response packets. Your "wait for receive, send to children" steps would then become "trigger store-and-forward on receive", programmed into the NIC itself. Having the CPU blocked would be a waste of resources for this implementation.
This strongly argues that nonblocking should exist in the API, even if blocking is also added. Nonblocking already exists: MPI_Start.

With regard to interface naming, I would suggest MPI_Start_and_wait and MPI_Start_and_test. You would also need to consider MPI_Startall_and_waitall and MPI_Startall_and_testall. I would avoid adding additional variants based on MPI_[Wait|Test][any|some].

There has been a lengthy debate about whether the persistent collective initialisation functions could/should be blocking or nonblocking. This issue is similar. One could envisage:

// fully non-blocking route - maximum opportunity for overlap - assumes a normally slow network
MPI_Ireduce_init  // begin optimisation of a reduction
MPI_Test          // repeatedly test for completion of the optimisation of the reduction
<loop begin>
MPI_Istart        // begin the reduction communication
MPI_Test          // repeatedly test for completion of the reduction communication
<loop end>
MPI_Request_free  // recover resources

// fully blocking route - minimum opportunity for overlap - assumes an infinitely fast network
MPI_Reduce_init   // optimise a reduction, blocking
<loop begin>
MPI_Start         // do the reduction communication, blocking
<loop end>
MPI_Request_free  // recover resources
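
For concreteness, a compilable sketch in the spirit of the first route as it would combine with the existing completion calls (a blocking MPI_Wait stands in for the repeated MPI_Test, and MPIX_Reduce_init is an illustrative name and binding for the proposed initialisation call, not an existing function):

#include <mpi.h>

/* Sketch of the proposed workflow: initialise -> (start -> complete)* -> free. */
void reduce_loop(const double *in, double *out, int count, int niter, MPI_Comm comm)
{
    MPI_Request req;
    /* Plan the reduction once; this is where a potentially expensive
     * optimisation step would happen. Illustrative binding only. */
    MPIX_Reduce_init(in, out, count, MPI_DOUBLE, MPI_SUM, 0, comm, MPI_INFO_NULL, &req);
    for (int i = 0; i < niter; i++) {
        MPI_Start(&req);                   /* nonblocking: kick off this round */
        /* ... independent work can overlap with the communication here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE); /* complete this round */
    }
    MPI_Request_free(&req);                /* recover resources */
}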

Some proposed optimisations take a long time and require collective communication, so we have chosen nonblocking initialisation. The current persistent communication workflow is initialise -> (start -> complete)* -> free, so we are not proposing to have the first MPI_Test in the example above. The existing MPI_Start is nonblocking, so our proposal is basically the first of the examples above. It is a minimal change to the MPI Standard that achieves our main goal, i.e. it permits a planning step for collective communications. It does not exclude or prevent additional proposals that extend the API in the manner you suggest. However, such an extension would need a strong justification to succeed.

Cheers,
Dan.

On 16 May 2017, at 22:33, Langer, Akhil <akhil.langer@intel.com> wrote:

Hello,

I want to propose an extension to the persistent API to allow a blocking MPI_Start call. Currently, MPI_Start calls are non-blocking, so the proposal is something like MPI_Start (for blocking) and MPI_Istart (for non-blocking). Of course, to maintain backward compatibility we may have to think of an alternative API; I am not proposing the exact API here.

The motivation behind the proposal is that knowing whether the corresponding MPI call is blocking or not can give better performance. For example, MPI_Isend followed by MPI_Wait is slower than MPI_Send because internally MPI_Isend -> MPI_Wait has to allocate additional data structures (for example, a request pointer) and do more work. Similarly, let's look at an example of a bcast collective operation.

A tree-based broadcast can be implemented in two ways (both are sketched in C below):

1. MPI_Recv (recv data from parent) -> FOREACHCHILD: MPI_Send (send data to children)
2. MPI_Irecv (recv data from parent) -> MPI_Wait (wait for recv to complete) -> FOREACHCHILD: MPI_Isend (send data to children) -> MPI_Waitall (wait for sends to complete)

Having only a non-blocking MPI_Start call forces implementation 2, because implementation 1 uses blocking MPI calls. However, implementation 1 can be significantly faster than implementation 2 for small message sizes.
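
As a minimal C sketch of both variants (the parent/children tree-layout arguments are illustrative helpers for this example, not part of any MPI API):

#include <mpi.h>

#define MAX_CHILDREN 64  /* assumed upper bound for this sketch */

/* Implementation 1: blocking receive from the parent, then blocking sends. */
void tree_bcast_blocking(void *buf, int count, int parent,
                         const int *children, int nchildren, MPI_Comm comm)
{
    if (parent >= 0)  /* the root has no parent */
        MPI_Recv(buf, count, MPI_BYTE, parent, 0, comm, MPI_STATUS_IGNORE);
    for (int c = 0; c < nchildren; c++)
        MPI_Send(buf, count, MPI_BYTE, children[c], 0, comm);
}

/* Implementation 2: the same step with nonblocking calls plus explicit waits. */
void tree_bcast_nonblocking(void *buf, int count, int parent,
                            const int *children, int nchildren, MPI_Comm comm)
{
    MPI_Request reqs[MAX_CHILDREN];
    if (parent >= 0) {
        MPI_Request rreq;
        MPI_Irecv(buf, count, MPI_BYTE, parent, 0, comm, &rreq);
        MPI_Wait(&rreq, MPI_STATUS_IGNORE);
    }
    for (int c = 0; c < nchildren; c++)
        MPI_Isend(buf, count, MPI_BYTE, children[c], 0, comm, &reqs[c]);
    MPI_Waitall(nchildren, reqs, MPI_STATUSES_IGNORE);
}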

Looking forward to hearing your feedback.

Thanks,
Akhil
<a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/<wbr>mailman/listinfo/mpiwg-coll</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>