<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Oct 25, 2019 at 5:20 AM Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de">rabenseifner@hlrs.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear Jeff,<br>
<br>
> If we no longer care about segmented addressing, that makes a whole bunch of<br>
> BigCount stuff a LOT easier. E.g., MPI_Aint can basically be a<br>
> non-segment-supporting address integer. <br>
<br>
> AINT_DIFF and AINT_SUM can go away, too.<br>
<br>
Both statements are -- in my opinion -- incorrect.<br>
And the real problem is really ugly, see below.<br>
<br>
After we seem to agree that MPI_Aint is used as it is currently used, i.e.,<br>
to store<br>
- absolute addresses (which means the bits of a 64-bit unsigned address <br>
  interpreted as a signed two's-complement 64-bit integer, <br>
  i.e., values between -2**63 and +2**63-1; <br>
  only here is there a discussion about whether some higher bits may <br>
  be used to address segments)<br>
- relative addresses between -2**63 and +2**63-1<br></blockquote><div><br></div><div>C and C++ do not allow one to do pointer arithmetic outside of a single array, so I'm not sure how one would generate relative addresses this large, particularly on an x86_64 machine where the underlying memory addresses are 48 or 57 bits.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
- byte counts between 0 and 2**63-1<br></blockquote><div><br></div><div>All such uses should use functions with MPI_Count, not MPI_Aint, once those functions are defined in MPI 4.0.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
And that for two absolute addresses within the same "sequential storage"<br>
(defined in MPI-3.1 Sect. 4.1.12 page 115 lines 17-19), it is allowed<br>
to use the minus operator (as long as integer overflow detection is <br>
switched off) or MPI_Aint_diff.<br></blockquote><div><br></div><div>Again, pointer arithmetic has to be within a single array. It's pretty hard to generate an overflow in this context.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
In principle, the MPI standard is not fully consistent with that:<br>
<br>
MPI-3.1, page 102, lines 45-46, states:<br>
"To ensure portability, arithmetic on MPI addresses <br>
must <br>
be performed using the MPI_AINT_ADD and MPI_AINT_DIFF functions."<br>
and <br></blockquote><div><br></div><div>Yes, it also says that, for portability, RMA needs to use MPI_Alloc_mem rather than malloc or similar. That never matters in practice.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> > ... MPI-3.1 2.5.6 "Absolute<br>
> > Addresses and Relative Address Displacements" p16:39-43:<br>
> > <br>
> > "For retrieving absolute addresses or any calculation with absolute addresses, one<br>
> > should<br>
> > use the routines and functions provided in Section 4.1.5. Section<br>
> > 4.1.12 provides additional rules for the correct use of absolute addresses. For<br>
> > expressions with relative displacements or other usage without absolute<br>
> > addresses, intrinsic operators (e.g., +, -, *) can be used."<br>
<br>
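A minimal C sketch of these two rules (MPI-3.1 or later for MPI_Aint_diff; the array a and the 10-element displacement are only for illustration):<br>
<br>
#include <mpi.h><br>
#include <stdio.h><br>
<br>
int main(int argc, char **argv) {<br>
  MPI_Init(&argc, &argv);<br>
  double a[100];<br>
  MPI_Aint base, second;<br>
  MPI_Get_address(&a[0], &base);    /* absolute address */<br>
  MPI_Get_address(&a[10], &second); /* absolute address */<br>
  /* portable: difference of two absolute addresses within the<br>
     same sequential storage via MPI_Aint_diff (Section 4.1.5) */<br>
  MPI_Aint disp = MPI_Aint_diff(second, base);<br>
  /* relative displacement, no absolute addresses involved:<br>
     intrinsic operators are fine */<br>
  MPI_Aint stride = 10 * (MPI_Aint) sizeof(double);<br>
  printf("disp = %lld, stride = %lld\n", (long long) disp, (long long) stride);<br>
  MPI_Finalize();<br>
  return 0;<br>
}<br>
<br>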
And now about large counts, especially if we want to extend routines<br>
that currently use MPI_Aint to something larger, i.e., MPI_Aint_x or MPI_Count.<br></blockquote><div><br></div><div>MPI_Aint_x will not exist. MPI_Count is already a thing and is at least as large as MPI_Aint and MPI_Offset.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Here, the major problem is the implicit conversion within an assignment:<br>
<br>
MPI_Aint addr;<br>
MPI_Aint_x (or MPI_Count) addr_x;<br>
<br>
MPI_Get_address(...., &addr);<br>
addr_x = addr; // ***this statement is the problem***<br>
<br></blockquote><div><br></div><div>No. This statement is not a problem. MPI_Count is required to hold the full range of MPI_Aint. You can assign MPI_Aint to MPI_Count without truncation.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
let's take my example from a previous email (using an 8-bit MPI_Aint)<br>
<br>
addr1 01111111 = 127 (signed int) = 127 (unsigned int) <br>
addr2 10000001 = -127 (signed int) = 129 (unsigned int)<br>
<br>
Internally, the addresses are viewed by the hardware and OS as unsigned.<br>
MPI_Aint interprets the same bits as a signed int.<br>
<br>
addr2 - addr1 = 129 - 127 = 2 (as unsigned int)<br>
but in real application code with the "-" operator:<br>
= -127 - 127 = -254<br>
--> signed int overflow, because 8 bits can express only -128 .. +127<br>
--> either detected, or automatically wrapped around with +256 --> -254+256 = 2 <br>
<br>
And now with a 12-bit MPI_Aint_x:<br>
<br>
addr1_x := addr1 results in (by sign extension)<br>
addr1_x = 000001111111 = 127 (signed int) = 127 (unsigned int) <br>
<br>
addr2_x := addr2 results in (by sign extension) <br>
addr2_x = 111110000001 = -127 (signed int) = 3969 (unsigned int), no longer 129<br>
<br>
and then<br>
addr2_x - addr1_x = -127 - 127 = -254, <br>
which is a representable integer within 12 bits, <br>
and therefore ***NO*** overflow wraparound happens! <br>
<br>
And therefore a completely ***wrong*** result.<br>
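<br>
A small C sketch of this effect (int8_t and int16_t are only hypothetical<br>
stand-ins for an 8-bit MPI_Aint and a wider MPI_Aint_x/MPI_Count; the<br>
conversion of 129 to int8_t assumes two's complement):<br>
<br>
#include <stdint.h><br>
#include <stdio.h><br>
<br>
int main(void) {<br>
  int8_t  addr1 = 127;          /* bits 01111111                */<br>
  int8_t  addr2 = (int8_t)129;  /* bits 10000001 = -127 signed  */<br>
<br>
  /* difference in the narrow type: wraps modulo 2**8 --> 2     */<br>
  int8_t  d8  = (int8_t)(addr2 - addr1);<br>
<br>
  int16_t addr1_x = addr1;      /* sign-extended:  127          */<br>
  int16_t addr2_x = addr2;      /* sign-extended: -127          */<br>
  /* difference in the wider type: fits, no wraparound --> -254 */<br>
  int16_t d16 = addr2_x - addr1_x;<br>
<br>
  printf("narrow diff = %d, widened diff = %d\n", d8, d16);<br>
  return 0;<br>
}<br>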
<br>
Using two different types for absolute addresses seems to be a <br>
real problem in my opinion.<br>
<br>
<br>
And of course a signed 64-bit MPI_Aint allows one to specify only up to<br>
2**63-1 bytes, which is 8*1024**6 bytes,<br>
i.e., only 8 exbibytes (about 9.2 exabytes).<br>
<br>
On systems with less than 8 exbibytes of memory per MPI process, this is not<br>
a problem for message passing, but it is a problem for I/O,<br></blockquote><div><br></div><div>That is why MPI_Count is potentially larger than MPI_Aint, because it also has to hold MPI_Offset for IO purposes.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
and therefore for derived datatypes.<br>
And derived datatypes use MPI_Aint in several places,<br>
and some of them with the possibility of providing absolute addresses. <br></blockquote><div><br></div><div>You are welcome to create a ticket for large-count datatype functions that use MPI_Count if one does not already exist.</div><div><br></div><div>Jeff</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
A solution to this problem does not seem to be trivial -- or is there one?<br>
<br>
And always making MPI_Aint larger than 8 bytes is not an option either, <br>
based on the ABI discussion, and it would also waste memory.<br>
<br>
<br>
Best regards<br>
Rolf<br>
<br>
<br>
----- Original Message -----<br>
> From: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
> To: "Jeff Squyres" <<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a>><br>
> Cc: "Jeff Hammond" <<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a>>, "James Dinan" <<a href="mailto:james.dinan@intel.com" target="_blank">james.dinan@intel.com</a>>, "mpiwg-large-counts"<br>
> <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
> Sent: Friday, October 25, 2019 1:02:35 AM<br>
> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements<br>
<br>
> Jim (cc) suffered the most in MPI 3.0 days because of AINT_DIFF and AINT_SUM, so<br>
> maybe he wants to create this ticket.<br>
> <br>
> Jeff<br>
<br>
<br>
> On Thu, Oct 24, 2019 at 2:41 PM Jeff Squyres (jsquyres) < [<br>
> mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ] > wrote:<br>
> <br>
> <br>
> Not opposed to ditching segmented addressing at all. We'd need a ticket for this<br>
> ASAP, though.<br>
> <br>
> This whole conversation is predicated on:<br>
> <br>
> - MPI supposedly supports segmented addressing<br>
> - MPI_Aint is not sufficient for modern segmented addressing (i.e., representing<br>
> an address that may not be in main RAM and is not mapped in to the current<br>
> process' linear address space)<br>
> <br>
> If we no longer care about segmented addressing, that makes a whole bunch of<br>
> BigCount stuff a LOT easier. E.g., MPI_Aint can basically be a<br>
> non-segment-supporting address integer. AINT_DIFF and AINT_SUM can go away,<br>
> too.<br>
<br>
<br>
> On Oct 24, 2019, at 5:35 PM, Jeff Hammond via mpiwg-large-counts < [<br>
> mailto:<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> |<br>
> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] > wrote:<br>
> <br>
> Rolf:<br>
> <br>
> Before anybody spends any time analyzing how we handle segmented addressing, I<br>
> want you to provide an example of a platform where this is relevant. What<br>
> system can you boot today that needs this and what MPI libraries have expressed<br>
> an interest in supporting it?<br>
> <br>
> For anyone who didn't hear, ISO C and C++ have finally committed to<br>
> twos-complement integers ( [<br>
> <a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html</a> |<br>
> <a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r1.html</a> ] , [<br>
> <a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm</a> |<br>
> <a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm" rel="noreferrer" target="_blank">http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm</a> ] ) because modern<br>
> programmers should not be limited by hardware designs from the 1960s. We should<br>
> similarly not waste our time on obsolete features like segmentation.<br>
> <br>
> Jeff<br>
> <br>
> On Thu, Oct 24, 2019 at 10:13 AM Rolf Rabenseifner via mpiwg-large-counts < [<br>
> mailto:<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> |<br>
> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] > wrote:<br>
> <br>
> <br>
>> I think that changes the conversation entirely, right?<br>
> <br>
> Not the first part, the state-of-current-MPI.<br>
> <br>
> It may change something for the future, or a new interface may be needed.<br>
> <br>
> Please, can you describe how MPI_Get_address can work with<br>
> variables from different memory segments?<br>
> <br>
> Or whether a completely new function or a set of functions is needed.<br>
> <br>
> If we can still pass variables from all memory segments as<br>
> input to MPI_Get_address, there may still be a way to flatten<br>
> the result of some internal address inquiry into a<br>
> signed integer with the same behavior as MPI_Aint today.<br>
> <br>
> If this is impossible, then a new way of thinking and a new solution<br>
> may be needed.<br>
> <br>
> I really would like to see examples for all the current cases, as you<br>
> mentioned in your last email.<br>
> <br>
> Best regards<br>
> Rolf<br>
<br>
<br>
----- Original Message -----<br>
> From: "HOLMES Daniel" <<a href="mailto:d.holmes@epcc.ed.ac.uk" target="_blank">d.holmes@epcc.ed.ac.uk</a>><br>
> To: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a>><br>
> Cc: "Rolf Rabenseifner" <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>>, "Jeff Squyres" <<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a>><br>
> Sent: Thursday, October 24, 2019 6:41:34 PM<br>
> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts, sizes, and byte and nonbyte displacements<br>
<br>
> Hi Rolf & Jeff,<br>
> <br>
> I think this wiki article is instructive on this topic also:<br>
> <a href="https://en.wikipedia.org/wiki/X86_memory_segmentation" rel="noreferrer" target="_blank">https://en.wikipedia.org/wiki/X86_memory_segmentation</a><br>
> <br>
> This seems like a crazy memory addressing system to me personally, but it is a<br>
> (historic) example of a segmented addressing approach that MPI_Aint can<br>
> support.<br>
> <br>
> The “strange properties” for arithmetic are strange indeed, depending on what<br>
> the MPI_Aint stores and how.<br>
> <br>
> If MPI_Aint was 20 bits long and stores only the address, then it cannot be used<br>
> to determine uniquely which segment is being used or what the offset is within<br>
> that segment (there are 4096 possible answers). Does MPI need that more<br>
> detailed information? Probably - because segments were a way of implementing<br>
> memory protection, i.e. accessing a segment you did not have permission to<br>
> access led to a “segmentation fault” error. I do not know enough about these<br>
> old architectures to say whether an attempt to access the *same byte* using two<br>
> different segment:offset pairs that produce the *same* address could result in<br>
> different behaviour. That is, if I have access permissions for segment 3 but<br>
> not for segment 4, I can access {seg=3,offset=2^16-16} but can I access<br>
> {segment=4,offset=2^16-32}, which is the same byte? If not, then MPI needs to<br>
> store segment and offset inside MPI_Aint to be able to check and to set<br>
> registers correctly.<br>
> <br>
> If MPI_Aint is 32 bits long and stores the segment in the first 16 bits and the<br>
> offset in the last 16 bits, then the 20 bit address can be computed in a single<br>
> simple instruction and both segment and offset are immediately retrievable.<br>
> However, doing ordinary arithmetic with this bitwise representation is unwise<br>
> because it is a compound structure type. Let us subtract 1 from an MPI_Aint of<br>
> this layout which stores offset of 0 and some non-zero segment. We get offset<br>
> (2^16-1) in segment (s-1), which is not 1 byte before the previous MPI_Aint<br>
> because segments overlap. The same happens when adding and overflowing the<br>
> offset portion - it changes the segment in an incorrect way. Segment++ moves<br>
> the address forward only 16 bytes, not 2^16 bytes.<br>
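> <br>
> A tiny C sketch of that pitfall (purely hypothetical packing of a 32-bit<br>
> MPI_Aint with the segment in the high 16 bits and the offset in the low<br>
> 16 bits, using the real-mode rule linear = segment*16 + offset):<br>
> <br>
> #include <stdint.h><br>
> #include <stdio.h><br>
> <br>
> /* 20-bit linear address from a hypothetical packed segment:offset */<br>
> static uint32_t linear(uint32_t aint) {<br>
>   uint16_t seg = (uint16_t)(aint >> 16);<br>
>   uint16_t off = (uint16_t)(aint & 0xFFFF);<br>
>   return ((uint32_t)seg << 4) + off;<br>
> }<br>
> <br>
> int main(void) {<br>
>   uint32_t a = ((uint32_t)3 << 16) | 0; /* seg=3, off=0      -> linear 0x30    */<br>
>   uint32_t b = a - 1;                   /* seg=2, off=0xFFFF -> linear 0x1001F */<br>
>   /* subtracting 1 from the packed value moves the linear address forward,<br>
>      not back by 1 byte */<br>
>   printf("0x%X -> 0x%X\n", (unsigned) linear(a), (unsigned) linear(b));<br>
>   return 0;<br>
> }<br>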
> <br>
> The wrap-around from the end of the address space back to the beginning is also<br>
> a source of strange properties for arithmetic.<br>
> <br>
> One of the key statements from that wiki page is this:<br>
> <br>
> The root of the problem is that no appropriate address-arithmetic instructions<br>
> suitable for flat addressing of the entire memory range are available.[citation<br>
> needed] Flat addressing is possible by applying multiple instructions, which<br>
> however leads to slower programs.<br>
> <br>
> Cheers,<br>
> Dan.<br>
> —<br>
> Dr Daniel Holmes PhD<br>
> Architect (HPC Research)<br>
> <a href="mailto:d.holmes@epcc.ed.ac.uk" target="_blank">d.holmes@epcc.ed.ac.uk</a><mailto:<a href="mailto:d.holmes@epcc.ed.ac.uk" target="_blank">d.holmes@epcc.ed.ac.uk</a>><br>
> Phone: +44 (0) 131 651 3465<br>
> Mobile: +44 (0) 7940 524 088<br>
> Address: Room 2.09, Bayes Centre, 47 Potterrow, Central Area, Edinburgh, EH8 9BT<br>
> —<br>
> The University of Edinburgh is a charitable body, registered in Scotland, with<br>
> registration number SC005336.<br>
> —<br>
<br>
<br>
> ----- Original Message -----<br>
>> From: "Jeff Squyres" < [ mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ] ><br>
>> To: "Rolf Rabenseifner" < [ mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> | <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ]<br>
>> ><br>
>> Cc: "mpiwg-large-counts" < [ mailto:<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> |<br>
>> <a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" target="_blank">mpiwg-large-counts@lists.mpi-forum.org</a> ] ><br>
>> Sent: Thursday, October 24, 2019 5:27:31 PM<br>
>> Subject: Re: [Mpiwg-large-counts] Large Count - the principles for counts,<br>
>> sizes, and byte and nonbyte displacements<br>
> <br>
>> On Oct 24, 2019, at 11:15 AM, Rolf Rabenseifner<br>
>> < [ mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> | <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ] <mailto: [<br>
>> mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> | <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ] >> wrote:<br>
>> <br>
>> For me, it looked like that there was some misunderstanding<br>
>> of the concept that absolute and relative addresses<br>
>> and number of bytes that can be stored in MPI_Aint.<br>
>> <br>
>> ...with the caveat that MPI_Aint -- as it is right now -- does not support<br>
>> modern segmented memory systems (i.e., where you need more than a small number<br>
>> of bits to indicate the segment where the memory lives).<br>
>> <br>
>> I think that changes the conversation entirely, right?<br>
>> <br>
>> --<br>
>> Jeff Squyres<br>
>> [ mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ] <mailto: [<br>
>> mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ] ><br>
> <br>
> --<br>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email [ mailto:<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> |<br>
> <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> ] .<br>
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
> Head of Dpmt Parallel Computing . . . [ <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">http://www.hlrs.de/people/rabenseifner</a> |<br>
> <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">www.hlrs.de/people/rabenseifner</a> ] .<br>
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
> <br>
> <br>
> --<br>
> Jeff Hammond<br>
> [ mailto:<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> | <a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> ]<br>
> [ <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a> | <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a> ]<br>
> <br>
> <br>
> --<br>
> Jeff Squyres<br>
> [ mailto:<a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> | <a href="mailto:jsquyres@cisco.com" target="_blank">jsquyres@cisco.com</a> ]<br>
> <br>
> <br>
> <br>
> --<br>
> Jeff Hammond<br>
> [ mailto:<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> | <a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a> ]<br>
> [ <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a> | <a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a> ]<br>
> <br>
<br>
-- <br>
Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> .<br>
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br>
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br>
Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" rel="noreferrer" target="_blank">www.hlrs.de/people/rabenseifner</a> .<br>
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div></div>