<div dir="ltr">Hi Rolf,<div><br></div><div>(a1) seems to me like another artifact of storing an unsigned quantity in a signed variable, i.e., the quantity in an MPI_Aint can be an unsigned address or a signed displacement. Since we don't have an unsigned type for addresses, the user can't portably fix this above MPI. We will need to add functions to deal with combinations of MPI_Aint and MPI_Counts. This is essentially why we needed MPI_Aint_add/diff. Or ... the golden (Au is gold) int ... MPI_Auint.<br></div><div><br></div><div>(a2) Should be solved by MPI_Aint_add/diff.</div><div><br></div><div>(a3) Section 4.1.5 of MPI 3.1 states "To ensure portability, arithmetic on absolute addresses should not be performed with the intrinsic operators \-" and \+". MPI_Aint_add was written carefully to indicate that the "base" argument is treated as an unsigned address and the "disp" argument is treated as a signed displacement.</div><div><br></div><div> ~Jim.</div><div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Oct 29, 2019 at 5:19 AM Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de">rabenseifner@hlrs.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear Jim and all,<br>
<br>
I'm not sure whether I'm really able to understand your email.<br>
<br>
I take the MPI view:<br>
<br>
(1) An absolute address can be stored in an MPI_Aint variable<br>
with, and only with, MPI_Get_address or MPI_Aint_add.<br>
<br>
(2) A positive or negative number of bytes, or a relative address,<br>
which is by definition the number of bytes between two locations<br>
in an MPI "sequential storage" (MPI-3.1 page 115),<br>
can be assigned with any method to an MPI_Aint variable<br>
as long as the original value fits into MPI_Aint.<br>
In both languages, automatic type conversion (i.e., sign extension)<br>
is done.<br>
<br>
(3) If users misuse MPI_Aint by storing anything else into an MPI_Aint<br>
variable, then this is out of the scope of MPI.<br>
If such values are used in a minus operation, then it is<br>
out of the scope of MPI whether this makes sense.<br>
If the user is sure that the new value falls into category (2),<br>
then all is fine as long as the user is correct.<br>
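<br>
For illustration, a minimal C sketch of categories (1) and (2)<br>
(the variable names are only examples; assumes <mpi.h> and an<br>
initialized MPI library):<br>
<br>
  double a[100];<br>
  MPI_Aint addr_lo, addr_hi, addr_10, disp;<br>
<br>
  MPI_Get_address(&a[0],  &addr_lo);   /* category (1): absolute address */<br>
  MPI_Get_address(&a[99], &addr_hi);   /* category (1): absolute address */<br>
<br>
  /* category (2): relative address, here 99*sizeof(double) bytes */<br>
  disp = MPI_Aint_diff(addr_hi, addr_lo);<br>
<br>
  /* category (1) again: absolute address of a[10] */<br>
  addr_10 = MPI_Aint_add(addr_lo, (MPI_Aint)(10 * sizeof(double)));<br>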
<br>
I expect that your => is not a "greater than or equal to";<br>
I expect that you meant assignments.<br>
<br>
> intptr_t => MPI_Aint<br>
"intptr_t: integer type capable of holding a pointer."<br>
<br>
> uintptr_t => ??? (Anyone remember the MPI_Auint "golden Aint" proposal?)<br>
"uintptr_t: unsigned integer type capable of holding a pointer."<br>
<br>
may fall exactly into (3) when used for pointers.<br>
<br>
<br>
Especially on a 64-bit system, the user may in the future run into exactly<br>
the problems (a), (a1), (a2) and (b) as described below.<br>
But here the user is responsible, for example, for implementing (a3),<br>
whereas for MPI_Get_address, the implementors of the MPI library<br>
are responsible, and the MPI Forum may be responsible for giving<br>
the correct advice.<br>
<br>
By the way, the golden MPI_Auint was never golden.<br>
That need was "resolved" by introducing MPI_Aint_diff and MPI_Aint_add<br>
in MPI-3.1.<br>
<br>
<br>
> ptrdiff_t => MPI_Aint<br>
"std::ptrdiff_t is the signed integer type of the result of subtracting two pointers."<br>
<br>
may perfectly fit to (2).<br>
<br>
All of the following falls into category (2):<br>
<br>
> size_t (sizeof) => MPI_Count, int<br>
"sizeof( type ) (1) <br>
sizeof expression (2) <br>
Both versions are constant expressions of type std::size_t."<br>
<br>
> size_t (offsetof) => MPI_Aint, int<br>
"Defined in header <cstddef> <br>
#define offsetof(type, member) /*implementation-defined*/ <br>
The macro offsetof expands to an integral constant expression <br>
of type std::size_t, the value of which is the offset, in bytes,<br>
from the beginning of an object of specified type to its<br>
specified member, including padding if any."<br>
<br>
Note that this offsetof has nothing to do with MPI_Offset.<br>
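<br>
As an example of the size_t (offsetof) => MPI_Aint use, here is a small<br>
sketch (the struct and its fields are only examples; assumes <mpi.h>,<br>
<stddef.h>, and an initialized MPI library):<br>
<br>
  struct particle { double x, y, z; int id; };<br>
<br>
  int          blocklens[2] = { 3, 1 };<br>
  MPI_Aint     disps[2]     = { (MPI_Aint) offsetof(struct particle, x),<br>
                                (MPI_Aint) offsetof(struct particle, id) };<br>
  MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_INT };<br>
  MPI_Datatype particle_type;<br>
<br>
  /* size_t values from offsetof become MPI_Aint byte displacements */<br>
  MPI_Type_create_struct(2, blocklens, disps, types, &particle_type);<br>
  MPI_Type_commit(&particle_type);<br>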
<br>
On a system with less than 2**31 bytes and a 4-byte int, it is guaranteed<br>
that size_t => int works.<br>
<br>
On a system with less than 2**63 bytes and an 8-byte MPI_Aint, it is guaranteed<br>
that size_t => MPI_Aint works.<br>
<br>
Problem: size_t is unsigned, int and MPI_Aint are signed.<br>
<br>
MPI_Count should be defined in such a way that, on systems with more than<br>
2**63 bytes of disk space, MPI_Count can hold such values,<br>
because<br>
int .LE. {MPI_Aint, MPI_Offset} .LE. MPI_Count.<br>
<br>
Therefore size_t => MPI_Count should always work.<br>
<br>
> ssize_t => Mostly for error handling. Out of scope for MPI?<br>
"In short, ssize_t is the same as size_t, but is a signed type - <br>
read ssize_t as “signed size_t”. ssize_t is able to represent <br>
the number -1, which is returned by several system calls <br>
and library functions as a way to indicate error. <br>
For example, the read and write system calls: ...<br>
ssize_t read(int fildes, void *buf, size_t nbyte); ..."<br>
<br>
ssize_t therefore fits better to MPI_Aint, because both<br>
are signed types that can hold byte counts; but<br>
the value -1 in an MPI_Aint variable stands for a<br>
byte displacement of -1 bytes and not for the error code -1.<br>
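<br>
A small sketch of that distinction (fd is only an example file descriptor;<br>
assumes <unistd.h> and <mpi.h>):<br>
<br>
  char buf[4096];<br>
  ssize_t nread = read(fd, buf, sizeof(buf));  /* -1 means error, not -1 bytes */<br>
  if (nread >= 0) {<br>
      MPI_Aint nbytes = (MPI_Aint) nread;      /* category (2): a byte count   */<br>
      /* ... use nbytes as a displacement or count ... */<br>
  }<br>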
<br>
<br>
All uses of (2) are in principle no problem.<br>
------------------------------------------<br>
<br>
All the complex discussion of the last days is about (1):<br>
<br>
(1) An absolute address can be stored in an MPI_Aint variable<br>
with, and only with, MPI_Get_address or MPI_Aint_add.<br>
<br>
In MPI-1 to MPI-3.0, and still in MPI-3.1 (there possibly flagged as not portable),<br>
we also allow<br>
MPI_Aint variable := absolute address in MPI_Aint variable<br>
+ or -<br>
a number of bytes (in any integer type).<br>
<br>
The result is then still in category (1).<br>
<br>
<br>
For the difference of two absolute addresses, <br>
MPI_Aint_diff can be used. The result is then an MPI_Aint of category (2).<br>
<br>
In MPI-1 to MPI-3.0, and still in MPI-3.1 (there possibly flagged as not portable),<br>
we also allow<br>
MPI_Aint variable := absolute address in MPI_Aint variable<br>
- absolute address in MPI_Aint variable.<br>
<br>
The result is then in category (2).<br>
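<br>
In code, the two styles look like this (a sketch; addr_a and addr_b are<br>
assumed to hold absolute addresses obtained with MPI_Get_address):<br>
<br>
  MPI_Aint disp, addr_c;<br>
<br>
  /* MPI-1 style; may be non-portable where addresses do not behave<br>
     like signed integers: */<br>
  disp   = addr_b - addr_a;                /* result in category (2) */<br>
  addr_c = addr_a + 16;                    /* result in category (1) */<br>
<br>
  /* MPI-3.1 style: */<br>
  disp   = MPI_Aint_diff(addr_b, addr_a);  /* result in category (2) */<br>
  addr_c = MPI_Aint_add(addr_a, 16);       /* result in category (1) */<br>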
<br>
<br>
The problems we have discussed in the last days are about systems<br>
that internally use unsigned addresses, where the MPI library stores<br>
these addresses into MPI_Aint variables, and<br>
<br>
(a) a sequential storage can have some virtual addresses<br>
in the area with the highest bit = 0 and other addresses<br>
in the same sequential storage (i.e., same array or structure)<br>
with the highest bit = 1,<br>
<br>
or <br>
(b) some higher bits contain segment addresses.<br>
<br>
(b) is not a problem as long as a sequential storage always<br>
resides within one segment.<br>
<br>
Therefore, we only have to discuss (a).<br>
<br>
The two problems that we have are<br>
(a1) that for the minus operations an integer overflow will<br>
happen and must be ignored, and<br>
(a2) that if such addresses are widened to larger variables,<br>
e.g., MPI_Count with more bits in MPI_Count than in MPI_Aint,<br>
sign extension will result in completely wrong results.<br>
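<br>
The sign-extension effect in (a2) can be demonstrated with narrower<br>
stand-in types (int32_t for MPI_Aint, int64_t for a wider MPI_Count);<br>
this is only an illustration:<br>
<br>
  #include <stdint.h><br>
  #include <stdio.h><br>
<br>
  int main(void) {<br>
      uint32_t uaddr = 0x80001000u;        /* "address" with the highest bit set   */<br>
      /* store it in a signed variable (implementation-defined before C23):        */<br>
      int32_t  aint  = (int32_t) uaddr;    /* appears negative                     */<br>
      int64_t  count = aint;               /* widening to the larger type sign-extends */<br>
      printf("%lld\n", (long long) count); /* prints -2147479552, not 2147487744   */<br>
      return 0;<br>
  }<br>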
<br>
And here, the simplest trick is<br>
(a3) that MPI_Get_address really shall<br>
map the contiguous unsigned range from 0 to 2**64-1 to the<br>
signed (and also contiguous) range from -2**63 to 2**63-1<br>
by simply subtracting 2**63.<br>
With this simple trick in MPI_Get_address, problems<br>
(a1) and (a2) are resolved.<br>
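<br>
Inside MPI_Get_address the trick could look as follows (a sketch only,<br>
assuming <mpi.h>, <stdint.h>, and a 64-bit two's-complement MPI_Aint;<br>
the helper name is made up):<br>
<br>
  MPI_Aint flatten_address(const void *location)<br>
  {<br>
      uint64_t u    = (uint64_t) (uintptr_t) location;<br>
      uint64_t half = (uint64_t) 1 << 63;             /* 2**63 */<br>
      /* unsigned subtraction wraps around; converting back to signed     */<br>
      /* maps 0 .. 2**64-1 onto -2**63 .. 2**63-1, preserving the order   */<br>
      return (MPI_Aint) (u - half);<br>
  }<br>
<br>
Because this only shifts all addresses by the same constant, differences<br>
of two such values within one sequential storage stay correct, and<br>
widening them to a larger MPI_Count preserves their order.<br>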
<br>
It looks like (a), and therefore (a1) and (a2),<br>
may be far in the future.<br>
But they may be less far in the future if a system<br>
maps the whole application's cluster address space<br>
into virtual memory (not cache coherent, but accessible).<br>
<br>
<br>
And all this is never, or only partially, written into the<br>
MPI Standard, although all of it is (well) known by the MPI Forum,<br>
with the following exceptions:<br>
- (a2) is new.<br>
- (a1) is solved in MPI-3.1 only for MPI_Aint_diff and<br>
MPI_Aint_add, but not for the operators - and +,<br>
if a user switches on integer overflow detection<br>
in the future when we have such large systems.<br>
- (a3) is new and in principle solves the problem also<br>
for the + and - operators.<br>
<br>
At least (a1)+(a2) should be added as a rationale to MPI-4.0,<br>
and (a3) as an advice to implementors within the framework<br>
of big count, because (a2) newly arises with big count.<br>
<br>
I hope this helps a bit if you took the time to read<br>
this long email. <br>
<br>
Best regards<br>
Rolf<br>
<br>
<br>
<br>
----- Original Message -----<br>
> From: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
> To: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
> Cc: "Jim Dinan" <<a href="mailto:james.dinan@gmail.com" target="_blank">james.dinan@gmail.com</a>>, "James Dinan" <<a href="mailto:james.dinan@intel.com" target="_blank">james.dinan@intel.com</a>><br>
> Sent: Monday, October 28, 2019 5:07:37 PM<br>
> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements<br>
<br>
> Still not sure I see the issue. MPI's memory-related integers should map to<br>
> types that serve the same function in C. If the base language is broken for<br>
> segmented addressing, we won't be able to fix it in a library. Looking at the<br>
> mapping below, I don't see where we would have broken it:<br>
> <br>
> intptr_t => MPI_Aint<br>
> uintptr_t => ??? (Anyone remember the MPI_Auint "golden Aint" proposal?)<br>
> ptrdiff_t => MPI_Aint<br>
> size_t (sizeof) => MPI_Count, int<br>
> size_t (offsetof) => MPI_Aint, int<br>
> ssize_t => Mostly for error handling. Out of scope for MPI?<br>
> <br>
> It sounds like there are some places where we used MPI_Aint in place of size_t<br>
> for sizes. Not great, but MPI_Aint already needs to be at least as large as<br>
> size_t, so this seems benign.<br>
> <br>
> ~Jim.<br>
> <br>
> On Fri, Oct 25, 2019 at 8:25 PM Dinan, James via mpiwg-large-counts < [<br>
> mailto:<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> |<br>
> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] > wrote:<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> Jeff, thanks so much for opening up these old wounds. I’m not sure I have enough<br>
> context to contribute to the discussion. Where can I read up on the issue with<br>
> MPI_Aint?<br>
> <br>
> <br>
> <br>
> I’m glad to hear that C signed integers will finally have a well-defined<br>
> representation.<br>
> <br>
> <br>
> <br>
> ~Jim.<br>
> <br>
> <br>
> <br>
> <br>
> From: Jeff Hammond < [ mailto:<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> | <a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> ]<br>
> ><br>
> Date: Thursday, October 24, 2019 at 7:03 PM<br>
> To: "Jeff Squyres (jsquyres)" < [ mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a><br>
> ] ><br>
> Cc: MPI BigCount Working Group < [ mailto:<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a><br>
> | <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] >, "Dinan, James" < [<br>
> mailto:<a href="mailto:james.dinan@intel.com" target="_blank">james.dinan@intel.com</a> | <a href="mailto:james.dinan@intel.com" target="_blank">james.dinan@intel.com</a> ] ><br>
> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts,<br>
> sizes, and byte and nonbyte displacements<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> Jim (cc) suffered the most in MPI 3.0 days because of AINT_DIFF and AINT_SUM, so<br>
> maybe he wants to create this ticket.<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> Jeff<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> On Thu, Oct 24, 2019 at 2:41 PM Jeff Squyres (jsquyres) < [<br>
> mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ] > wrote:<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> Not opposed to ditching segmented addressing at all. We'd need a ticket for this<br>
> ASAP, though.<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> This whole conversation is predicated on:<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> - MPI supposedly supports segmented addressing<br>
> <br>
> <br>
> - MPI_Aint is not sufficient for modern segmented addressing (i.e., representing<br>
> an address that may not be in main RAM and is not mapped in to the current<br>
> process' linear address space)<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> If we no longer care about segmented addressing, that makes a whole bunch of<br>
> BigCount stuff a LOT easier. E.g., MPI_Aint can basically be a<br>
> non-segment-supporting address integer. AINT_DIFF and AINT_SUM can go away,<br>
> too.<br>
> <br>
> On Oct 24, 2019, at 5:35 PM, Jeff Hammond via mpiwg-large-counts < [<br>
> mailto:<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> |<br>
> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] > wrote:<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> Rolf:<br>
> <br>
> <br>
> <br>
> Before anybody spends any time analyzing how we handle segmented addressing, I<br>
> want you to provide an example of a platform where this is relevant. What<br>
> system can you boot today that needs this and what MPI libraries have expressed<br>
> an interest in supporting it?<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> For anyone who didn't hear, ISO C and C++ have finally committed to<br>
> twos-complement integers ( [<br>
> <a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html</a> |<br>
> <a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html</a> ] , [<br>
> <a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm</a> |<br>
> <a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm</a> ] ) because modern<br>
> programmers should not be limited by hardware designs from the 1960s. We should<br>
> similarly not waste our time on obsolete features like segmentation.<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> Jeff<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> On Thu, Oct 24, 2019 at 10:13 AM Rolf Rabenseifner via mpiwg-large-counts < [<br>
> mailto:<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> |<br>
> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] > wrote:<br>
> <br>
> <br>
> <br>
> <br>
>> I think that changes the conversation entirely, right?<br>
> <br>
> Not the first part, the state-of-current-MPI.<br>
> <br>
> It may change something for the future, or a new interface may be needed.<br>
> <br>
> Please, can you describe how MPI_Get_address can work with the<br>
> different variables from different memory segments.<br>
> <br>
> Or whether a completely new function or a set of functions is needed.<br>
> <br>
> If we can still express variables from all memory segments as<br>
> input to MPI_Get_address, there may be still a way to flatten<br>
> the result of some internal address-inquiry into a flattened<br>
> signed integer with the same behavior as MPI_Aint today.<br>
> <br>
> If this is impossible, then new way of thinking and solution<br>
> may be needed.<br>
> <br>
> I really want to see examples for all current stuff as you<br>
> mentioned in your last email.<br>
> <br>
> Best regards<br>
> Rolf<br>
> <br>
> ----- Original Message -----<br>
>> From: "Jeff Squyres" < [ mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ] ><br>
>> To: "Rolf Rabenseifner" < [ mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> | <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ]<br>
>> ><br>
>> Cc: "mpiwg-large-counts" < [ mailto:<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> |<br>
>> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] ><br>
>> Sent: Thursday, October 24, 2019 5:27:31 PM<br>
>> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts,<br>
>> sizes, and byte and nonbyte displacements<br>
> <br>
>> On Oct 24, 2019, at 11:15 AM, Rolf Rabenseifner<br>
>> < [ mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> | <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ] <mailto: [<br>
>> mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> | <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ] >> wrote:<br>
>> <br>
>> For me, it looked like that there was some misunderstanding<br>
>> of the concept that absolute and relative addresses<br>
>> and number of bytes that can be stored in MPI_Aint.<br>
>> <br>
>> ...with the caveat that MPI_Aint -- as it is right now -- does not support<br>
>> modern segmented memory systems (i.e., where you need more than a small number<br>
>> of bits to indicate the segment where the memory lives).<br>
>> <br>
>> I think that changes the conversation entirely, right?<br>
>> <br>
>> --<br>
>> Jeff Squyres<br>
>> [ mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ] <mailto: [<br>
>> mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ] ><br>
> <br>
> --<br>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email [ mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> |<br>
> <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ] .<br>
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
> Head of Dpmt Parallel Computing . . . [ <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">http://www.hlrs.de/people/rabenseifner</a> |<br>
> <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">www.hlrs.de/people/rabenseifner</a> ] .<br>
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
> <br>
> --<br>
> <br>
> <br>
> Jeff Hammond<br>
> [ mailto:<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> | <a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> ]<br>
> [ <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a> | <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a> ]<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> --<br>
> Jeff Squyres<br>
> [ mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ]<br>
> <br>
> --<br>
> <br>
> <br>
> Jeff Hammond<br>
> [ mailto:<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> | <a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> ]<br>
> [ <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a> | <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a> ]<br>
> <br>
> _______________________________________________<br>
> mpiwg-large-counts mailing list<br>
> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a><br>
> <a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts</a><br>
<br>
-- <br>
Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> .<br>
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">www.hlrs.de/people/rabenseifner</a> .<br>
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
</blockquote></div>