[Mpi-forum] MPI-IO for mesh based simulations beyond 2 billion elements

Rajeev Thakur thakur at mcs.anl.gov
Mon Feb 20 14:13:13 CST 2012


> Rajeev points out that if it's a 1d array just use
> MPI_File_read_at_all or MPI_File_write_at_all.  The offsets in those
> calls take MPI_Offset types.  

That is, without creating a subarray type.

Rajeev
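
A minimal sketch of the 1-D approach described above, assuming a contiguous array of doubles divided evenly across ranks; the file name, element count, and even-division assumption are illustrative only, not from the thread:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* More than 2 billion elements in total.  Each rank's piece must
     * still fit in an int count, but the byte offset is computed in
     * MPI_Offset arithmetic, so the file itself can exceed 2 GiB. */
    MPI_Offset nglobal = 3000000000LL;
    MPI_Offset nlocal  = nglobal / nprocs;   /* assumed to divide evenly */
    MPI_Offset offset  = rank * nlocal * (MPI_Offset)sizeof(double);

    double *buf = malloc((size_t)nlocal * sizeof(double));
    /* ... fill buf with this rank's mesh data ... */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "mesh.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* No file view or subarray type needed: each rank names its own
     * byte offset, and the collective _all variant still lets the
     * implementation aggregate and align the requests. */
    MPI_File_write_at_all(fh, offset, buf, (int)nlocal, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}

Using the collective _all variant also leaves room for the aggregation and alignment optimizations Rob describes below.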


On Feb 20, 2012, at 2:09 PM, Rob Latham wrote:

> On Mon, Feb 20, 2012 at 09:26:28AM -0600, Rob Latham wrote:
>> On Sun, Feb 19, 2012 at 10:51:42AM -0600, Rajeev Thakur wrote:
>>> Harald,
>>> If all you have is a one-dimensional array, and each process reads or writes a large contiguous chunk of it, you don't need Type_create_subarray or even collective I/O. You can just have each process call MPI_File_read_at or write_at with the right offset. The subarray type is more useful when you have a multidimensional array and need to read a subsection that is not laid out contiguously in the file.
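
A sketch of the multidimensional case mentioned above: a 2-D array of doubles stored row-major in the file, decomposed over an assumed 4 x 4 process grid, so each rank's block is not contiguous in the file. The array extents, grid shape, and file name are illustrative assumptions:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int prows = 4, pcols = 4;                 /* assumed grid; run with 16 ranks */
    int prow = rank / pcols, pcol = rank % pcols;

    int gsizes[2] = {8192, 8192};             /* global array extents            */
    int lsizes[2] = {gsizes[0] / prows, gsizes[1] / pcols};
    int starts[2] = {prow * lsizes[0], pcol * lsizes[1]};

    /* Describe this rank's block of the global array once, as a
     * subarray filetype, instead of issuing many small reads. */
    MPI_Datatype filetype;
    MPI_Type_create_subarray(2, gsizes, lsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    double *buf = malloc((size_t)lsizes[0] * lsizes[1] * sizeof(double));

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "array.dat", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);

    /* The file view maps the non-contiguous block to this rank, and
     * the collective read fetches exactly that block. */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);
    MPI_File_read_all(fh, buf, lsizes[0] * lsizes[1], MPI_DOUBLE,
                      MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Type_free(&filetype);
    free(buf);
    MPI_Finalize();
    return 0;
}
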
>> 
>> Hi Rajeev: that's not entirely true any longer.  On BlueGene,
>> collective I/O will carry out aggregation and alignment to block
>> boundaries.  On Lustre, collective I/O can conjure up a workload such
>> that all operations from a particular client go to one server.  
>> 
>> So we should try to encourage anything that allows collective I/O,
>> even for large requests.
>> 
>> 32-bit MPI_Aint on 32-bit platforms isn't perfect (I think there's
>> some problem in the Fortran cases?), but it's allowed us to actually
>> get work done on BlueGene.
> 
> oh, gosh, right, 64-bit MPI_Aint on a platform with 32-bit integers.
> 
> Rajeev points out that if it's a 1d array just use
> MPI_File_read_at_all or MPI_File_write_at_all.  The offsets in those
> calls take MPI_Offset types.  
> 
> ==rob
> 
> -- 
> Rob Latham
> Mathematics and Computer Science Division
> Argonne National Lab, IL USA
> _______________________________________________
> mpi-forum mailing list
> mpi-forum at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum




