[MPI3 Fortran] Argument data sizes
Jeff Squyres
jsquyres at cisco.com
Thu Sep 18 11:58:20 CDT 2008
Bill: given your silence, I feel that I should clarify that I was not
trying to be snide -- although it probably came across that way. You
replied to the middle of my mail, and it wasn't clear from your
context that you had read / grokked the rest of my mail. My big
point, probably poorly worded, was that MPI datatypes are composable,
so it's actually quite easy (and common) to smash the 2B C int element
limit.
Also, FWIW, if Cray cares about 8-byte counts (etc.) for MPI, I
suggest that you should raise this through Cray's MPI Forum
representative (I see that Cray has attended 4 out of the last 5 Forum
meetings). This is exactly the time to indicate which items Cray
cares about for inclusion in MPI-3.
On Sep 18, 2008, at 12:19 PM, Jeff Squyres wrote:
> On Sep 18, 2008, at 12:06 PM, Bill Long wrote:
>
>>> I don't follow Cray hardware -- do you actually have machines that
>>> have more than 32GB per core?  If my math is right, 2^32 * 8 =
>>> 32GB would be the actual size of a 4B element array of doubles.
>>
>> Yes, but you don't really need that. It is not uncommon to use
>> OpenMP within a local SMP domain and use MPI between such domains.
>> Local SMP domains of more than 32GB are not hard to find. Only
>> 16GB is needed for an array with 4-byte elements. PCs have that
>> much memory these days. If the intention is to make a standard
>> relevant going forward, the target should be clear.
>
> Did you read the rest of my mail?
>
>>> Remember three key facts:
>>>
>>> - MPI's model is effectively distributed memory
>>> - MPI's counts are *elements*, not *bytes*
>>> - MPI datatypes are composable
>>>
>>> Most MPI apps that I have seen with huge arrays like this are
>>> actually representing multiple dimensions -- so they usually make
>>> a datatype for one row and use that as a unit to compose larger
>>> datatypes (e.g., a plane, a 3D space, ...etc.). So most apps send
>>> "1 array" (or "N rows" or ...), not 2B (or 4B or ...) individual
>>> elements.
>>>
>>> This is at least part of the discussion in the Forum about this
>>> issue (I mentioned this in a prior mail as well): since MPI
>>> datatypes are composable, applications can easily smash the 2B int
>>> count limitation anyway. As Alexander S. noted, however, there
>>> are other issues as well.
>
> --
> Jeff Squyres
> Cisco Systems
>
> _______________________________________________
> mpi3-fortran mailing list
> mpi3-fortran at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-fortran
--
Jeff Squyres
Cisco Systems