[MPI3 Fortran] Argument data sizes

Jeff Squyres jsquyres at cisco.com
Thu Sep 18 12:13:43 CDT 2008


On Sep 18, 2008, at 12:54 PM, Bill Long wrote:

>> Did you read the rest of my mail?
>
> Yes.

Hah -- sorry; my previous mail just passed this one in the ether.  :-)

>>>> - MPI's model is effectively distributed memory
>>>> - MPI's counts are *elements*, not *bytes*
>>>> - MPI datatypes are composable
>>>>
>>>> Most MPI apps that I have seen with huge arrays like this are  
>>>> actually representing multiple dimensions -- so they usually make  
>>>> a datatype for one row and use that as a unit to compose larger  
>>>> datatypes (e.g., a plane, a 3D space, ...etc.).  So most apps  
>>>> send "1 array" (or "N rows" or ...), not 2B (or 4B or ...)  
>>>> individual elements.
>
> This might occur to a C programmer, in that C does not really have  
> arrays.  However, such contortions would be very unnatural for a  
> Fortran programmer, and would only be considered if there were some  
> artificial constraint on buffer sizes.

Why?  How else would you send non-contiguous data efficiently without a copy (which would be quite Bad for multiple reasons, especially for very large datasets)?  Such issues are certainly not unique to C: they come up with very large Fortran arrays too, particularly if you want to send just a plane from the middle of a 3D array, or somesuch.  As another example of where datatypes are useful: with the Fortran equivalent of C structs (derived types), you need to build up a composed datatype to describe the message that you want to send/receive.
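
To make that concrete, here's a rough, untested sketch (the array sizes, plane index, and 2-rank setup are all made up) of sending one k-plane out of a 3D Fortran array with MPI_TYPE_CREATE_SUBARRAY -- the count is 1 composed element, and the application never packs the plane into a contiguous buffer:

program plane_send_sketch
  use mpi
  implicit none
  integer, parameter :: nx = 64, ny = 64, nz = 64
  integer, parameter :: kplane = 32      ! which k-plane to send (1-based)
  double precision :: a(nx, ny, nz)
  integer :: sizes(3), subsizes(3), starts(3)
  integer :: plane_type, rank, ierr

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! run with at least 2 ranks

  ! Describe one interior k-plane of the 3D array as a single datatype.
  ! The subarray is 1 element thick in the third dimension; the starts
  ! are 0-based, so plane kplane lives at offset kplane-1.
  sizes    = (/ nx, ny, nz /)
  subsizes = (/ nx, ny, 1  /)
  starts   = (/ 0,  0,  kplane - 1 /)
  call MPI_Type_create_subarray(3, sizes, subsizes, starts, &
                                MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, &
                                plane_type, ierr)
  call MPI_Type_commit(plane_type, ierr)

  a = dble(rank)
  ! The count is "1 plane", not nx*ny individual elements, and the
  ! non-contiguous plane goes straight from the array -- no user-level
  ! packing copy.
  if (rank == 0) then
     call MPI_Send(a, 1, plane_type, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_Recv(a, 1, plane_type, 0, 0, MPI_COMM_WORLD, &
                   MPI_STATUS_IGNORE, ierr)
  end if

  call MPI_Type_free(plane_type, ierr)
  call MPI_Finalize(ierr)
end program plane_send_sketch

(For the derived-type case, the analogous tool is MPI_TYPE_CREATE_STRUCT.)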

Let's also acknowledge that a significant "artificial constraint" is  
backwards compatibility for existing applications.  We can't just  
change all the C bindings and break the millions of lines of MPI code  
already out there.

>>>> This is at least part of the discussion in the Forum about this  
>>>> issue (I mentioned this in a prior mail as well): since MPI  
>>>> datatypes are composable, applications can easily smash the 2B  
>>>> int count limitation anyway.  As Alexander S. noted, however,  
>>>> there are other issues as well.
>
> I'm not sure about "easily".  Certainly very unnaturally.  MPI is  
> hard to use in the first place. Asking programmers to play these  
> sorts of games makes it even worse.
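
To be concrete about what I meant by composing datatypes to get past the 2B limit: here's a rough, untested sketch (the chunk size and chunk count are invented, and the allocation is enormous) that moves more than 2**31 REALs with today's bindings.  It wraps 2**20 elements into one MPI_TYPE_CONTIGUOUS unit and sends a couple thousand of those units, so every count argument MPI sees fits in a default INTEGER:

program big_count_sketch
  use mpi
  implicit none
  integer, parameter :: i8 = selected_int_kind(18)
  integer, parameter :: chunk   = 2**20    ! elements per composed unit
  integer, parameter :: nchunks = 2200     ! 2200 * 2**20 > 2**31 elements
  real, allocatable :: buf(:)
  integer :: bigtype, rank, ierr

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! run with at least 2 ranks

  ! One composed unit covers 2**20 contiguous REALs; sending nchunks of
  ! them moves ~2.3 billion elements, yet every count argument handed
  ! to MPI stays comfortably inside a default INTEGER.
  call MPI_Type_contiguous(chunk, MPI_REAL, bigtype, ierr)
  call MPI_Type_commit(bigtype, ierr)

  ! Roughly 9 GB of data -- only try this on a node that has the memory.
  allocate(buf(int(chunk, i8) * int(nchunks, i8)))
  buf = 0.0

  if (rank == 0) then
     call MPI_Send(buf, nchunks, bigtype, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_Recv(buf, nchunks, bigtype, 0, 0, MPI_COMM_WORLD, &
                   MPI_STATUS_IGNORE, ierr)
  end if

  call MPI_Type_free(bigtype, ierr)
  call MPI_Finalize(ierr)
end program big_count_sketch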

Let's be fair -- *parallel programming* is hard.  MPI fits some kinds  
of parallel programming models very well and is a bad match for  
others; the same can be said for every parallel programming system.  
I think the last 20+ years have shown that there is no  
one-size-fits-all approach to parallel programming.

Note that many of these kinds of issues are part of the ongoing MPI  
Forum discussion about 8-byte counts (etc.).  If you want to  
participate in those discussions, please do so.  I think I'm done  
discussing them on the Fortran list, though.  Can we get back  
on-topic?  :-)

-- 
Jeff Squyres
Cisco Systems



