<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Hi Jim,<br>
    <br>
    I like that change. On a side note: are we really still calling them
    "info hints" in this context rather than "info assertions"?<br>
    <br>
    Cheers,<br>
    Dan.<br>
    <br>
    <div class="moz-cite-prefix">On 09/02/2016 15:51, Jim Dinan wrote:<br>
    </div>
    <blockquote
cite="mid:CAOoEU4FiXKpMhSG9g_Wm47uH-QXvy_wXA72VOtkfOBLL3X50fQ@mail.gmail.com"
      type="cite">
      <div dir="ltr">Hi All,
        <div><br>
        </div>
        <div>I'm preparing the updated draft of info assertions for a
          reading in March.  Where did we land on advice regarding
          tools?  Do we want advice (1) to users, that info keys
          may impact tools, and/or (2) to tools, that they should check
          info?</div>
        <div><br>
        </div>
        <div>For instance, we could extend the current advice with the
          following sentence:</div>
        <div><span class=""><br>
          </span></div>
        <div><span class="">Setting info hints on the predefined
            communicators \const{MPI\_COMM\_WORLD} and
            \const{MPI\_COMM\_SELF} may have unintended
            effects, as changes to these </span>global objects may
          affect all components of the application, including
          libraries.  <span style="background-color:rgb(255,255,0)">The
            usage of info hints may also impact the effectiveness of
            tools.</span></div>
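A minimal sketch of the situation this advice warns about, assuming the proposed assertion-style info keys (the key name "mpi_assert_no_any_source" is taken from the draft and may differ in the final text): setting a hint on \const{MPI\_COMM\_WORLD} binds every component that communicates on it, including linked libraries and PMPI-based tools.

```c
/* Hedged sketch: setting a proposed assertion key on MPI_COMM_WORLD.
 * Whether an implementation interprets this key, and how, is
 * implementation-dependent; the key name is from the draft proposal. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Info info;
    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    /* Assert that no receive on this communicator uses MPI_ANY_SOURCE.
     * Because MPI_COMM_WORLD is shared with all libraries in the
     * application (and with tools intercepting via PMPI), the
     * assertion constrains their communication as well.             */
    MPI_Info_set(info, "mpi_assert_no_any_source", "true");
    MPI_Comm_set_info(MPI_COMM_WORLD, info);
    MPI_Info_free(&info);

    MPI_Finalize();
    return 0;
}
```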
        <div><span style="background-color:rgb(255,255,0)"><br>
          </span></div>
        <div><span style="background-color:rgb(255,255,255)"> ~Jim.</span></div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Fri, Dec 18, 2015 at 5:24 AM,
          Marc-Andre Hermanns <span dir="ltr"><<a
              moz-do-not-send="true"
              href="mailto:hermanns@jara.rwth-aachen.de" target="_blank">hermanns@jara.rwth-aachen.de</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex"><span
              class="">Hi Jeff,<br>
              <br>
              >     at the moment we don't handle MPI_THREAD_MULTIPLE
              at all. But we want<br>
              >     to get there ;-)<br>
              ><br>
              ><br>
              > You should vote for endpoints, as this may help you
              out here,<br>
              > particularly if users start mapping endpoints 1:1 w/
              threads.<br>
              <br>
            </span>That would certainly ease things for us in these
            situations.<br>
            Unfortunately, endpoints would force us to adapt other
            infrastructure in our<br>
            measurement system.<br>
            <span class=""><br>
              >     b) Creating a derived datatype on the fly to add
              tool-level data to<br>
              >     the original payload may induce a large overhead
              in practically<br>
              >     _every_ send & receive operation and perturb
              the measurement.<br>
              ><br>
              ><br>
              > You should evaluate this experimentally.  I wrote a
              simple test<br>
              > (<a moz-do-not-send="true"
href="https://github.com/jeffhammond/BigMPI/blob/master/test/perf/typepiggy.c"
                rel="noreferrer" target="_blank">https://github.com/jeffhammond/BigMPI/blob/master/test/perf/typepiggy.c</a>)<br>
              > and measured 1.5 us per call of overhead to create a
              datatype.  That<br>
              > is not significant except for very small messages.<br>
              <br>
            </span>Thanks for the pointer. You are right; I should
            evaluate this further.<br>
            1.5 us does indeed seem tolerable. I do wonder, though,
            what influence the derived datatype has on overall
            messaging performance.<br>
            <br>
            This is also something I should evaluate in the process.<br>
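For reference, the on-the-fly datatype scheme under discussion can be sketched roughly as follows (function and field names are illustrative, not from any measurement system; the real MPI calls used are MPI_Get_address, MPI_Type_create_struct, and MPI_Type_commit):

```c
/* Hedged sketch of piggybacking tool-level data onto the original
 * payload with a struct datatype built per operation. The commit is
 * the ~1.5 us/call overhead measured in Jeff's typepiggy.c test.    */
#include <mpi.h>

/* Build a datatype covering `payload` (count doubles) plus one
 * tool-side timestamp, using absolute addresses so the two buffers
 * need not be contiguous.                                           */
static MPI_Datatype piggyback_type(const double *payload, int count,
                                   const long long *stamp)
{
    MPI_Datatype newtype;
    int          blocklens[2] = { count, 1 };
    MPI_Aint     displs[2];
    MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_LONG_LONG };

    MPI_Get_address(payload, &displs[0]);
    MPI_Get_address(stamp,   &displs[1]);

    MPI_Type_create_struct(2, blocklens, displs, types, &newtype);
    MPI_Type_commit(&newtype);
    return newtype;
}

/* Usage: send with MPI_Send(MPI_BOTTOM, 1, type, dest, tag, comm)
 * because the displacements are absolute; free with
 * MPI_Type_free(&type) after the operation completes.               */
```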
            <span class=""><br>
              Cheers,<br>
              Marc-Andre<br>
              <br>
              --<br>
              Marc-Andre Hermanns<br>
              Jülich Aachen Research Alliance,<br>
              High Performance Computing (JARA-HPC)<br>
              Jülich Supercomputing Centre (JSC)<br>
              <br>
              Schinkelstrasse 2<br>
              52062 Aachen<br>
              Germany<br>
              <br>
              Phone: <a moz-do-not-send="true"
                href="tel:%2B49%202461%2061%202509"
                value="+492461612509">+49 2461 61 2509</a> | <a
                moz-do-not-send="true"
                href="tel:%2B49%20241%2080%2024381"
                value="+492418024381">+49 241 80 24381</a><br>
              Fax: <a moz-do-not-send="true"
                href="tel:%2B49%202461%2080%206%2099753"
                value="+49246180699753">+49 2461 80 6 99753</a><br>
              <a moz-do-not-send="true"
                href="http://www.jara.org/jara-hpc" rel="noreferrer"
                target="_blank">www.jara.org/jara-hpc</a><br>
            </span>email: <a moz-do-not-send="true"
              href="mailto:hermanns@jara.rwth-aachen.de">hermanns@jara.rwth-aachen.de</a><br>
            <br>
            <br>
            _______________________________________________<br>
            mpiwg-p2p mailing list<br>
            <a moz-do-not-send="true"
              href="mailto:mpiwg-p2p@lists.mpi-forum.org">mpiwg-p2p@lists.mpi-forum.org</a><br>
            <a moz-do-not-send="true"
              href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-p2p"
              rel="noreferrer" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-p2p</a><br>
          </blockquote>
        </div>
        <br>
      </div>
      <br>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
Dan Holmes
Applications Consultant in HPC Research
EPCC, The University of Edinburgh
James Clerk Maxwell Building
The Kings Buildings
Peter Guthrie Tait Road 
Edinburgh
EH9 3FD
T: +44(0)131 651 3465
E: <a class="moz-txt-link-abbreviated" href="mailto:dholmes@epcc.ed.ac.uk">dholmes@epcc.ed.ac.uk</a>

*Please consider the environment before printing this email.*</pre>
  </body>
</html>