[Mpi-forum] MPI_Count

Rajeev Thakur thakur at mcs.anl.gov
Sun Jan 24 16:40:29 CST 2010


Still seems a little odd to have a solution for the I/O functions but not the communication functions (the original ticket, https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/117, was about communication). If more than 2 GB can be written from a single process in a single call, the same data can also be communicated. And if it is tedious to break a large write into smaller writes, the same tedium applies to communication.
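
To make the tedium concrete: the usual workaround for sending a buffer larger than 2^31-1 elements looks something like the sketch below. The send_large helper and the 1 GiB block size are purely illustrative, not anything from the ticket.

#include <mpi.h>

/* Illustrative helper: send n chars, where n may exceed 2^31-1, by
 * describing most of the buffer as blocks of a contiguous derived
 * type so that every count argument still fits in an int. */
static void send_large(char *buf, long long n, int dest, int tag,
                       MPI_Comm comm)
{
    const long long block = 1LL << 30;   /* 1 GiB per block (arbitrary) */
    long long nblocks = n / block;
    long long rem     = n % block;
    MPI_Datatype blocktype;

    MPI_Type_contiguous((int)block, MPI_CHAR, &blocktype);
    MPI_Type_commit(&blocktype);

    /* One send of nblocks derived-type elements, one for the tail.
     * nblocks itself must still fit in an int. */
    MPI_Send(buf, (int)nblocks, blocktype, dest, tag, comm);
    MPI_Send(buf + nblocks * block, (int)rem, MPI_CHAR, dest, tag, comm);

    MPI_Type_free(&blocktype);
}

The receiver has to mirror the same two-message decomposition, which is exactly the bookkeeping the ticket wants to get rid of.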

Also, this doesn't address the other real problem in I/O on 32-bit systems: the mismatch between MPI_Aint and MPI_Offset. Datatypes are used to describe the file layout, and they use MPI_Aint. If you have a distributed array of global size 1K x 1K x 1K and try to use the subarray or darray datatype, its extent becomes negative on 32-bit systems. Similarly, you can't use a hindexed type as a filetype if you want to index beyond 2 GB.
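
For the subarray case the overflow is easy to see. A minimal sketch, assuming a 32-bit build where MPI_Aint is 4 bytes (the local sizes and starts are arbitrary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int gsizes[3] = {1024, 1024, 1024};   /* global 1K x 1K x 1K array */
    int lsizes[3] = {256, 256, 256};      /* this process's piece (arbitrary) */
    int starts[3] = {0, 0, 0};
    MPI_Datatype subarray;
    MPI_Aint lb, extent;

    MPI_Init(&argc, &argv);
    MPI_Type_create_subarray(3, gsizes, lsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &subarray);
    MPI_Type_commit(&subarray);

    /* The extent of a subarray type is that of the full global array:
     * 1024^3 * 8 = 8589934592 bytes.  That doesn't fit in a signed
     * 32-bit MPI_Aint, so the value reported here is wrapped garbage. */
    MPI_Type_get_extent(subarray, &lb, &extent);
    printf("extent = %ld (true extent is 8589934592)\n", (long)extent);

    MPI_Type_free(&subarray);
    MPI_Finalize();
    return 0;
}

The same limit bites MPI_Type_create_hindexed, whose displacements are MPI_Aint: a filetype built with it can't address file offsets past 2 GB on such a system.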

Rajeev

 

> -----Original Message-----
> From: mpi-forum-bounces at lists.mpi-forum.org 
> [mailto:mpi-forum-bounces at lists.mpi-forum.org] On Behalf Of 
> Jeff Squyres
> Sent: Friday, January 22, 2010 2:07 PM
> To: MPI Forum list; mpi3 at lists.mpi-forum.org
> Subject: [Mpi-forum] MPI_Count
> 
> Please note that there was a bunch of discussion about 
> MPI_Count and other compatibility issues at the meeting in 
> Atlanta this week.  I posted a summary of takeaways from the 
> discussion on the bwcompat WG mailing list and wiki:
> 
>     https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/BackCompatMeetings
>     http://lists.mpi-forum.org/mpi3-bwcompat/2010/01/0024.php
> 
> Although there are still some decisions to be made (e.g., 
> about Fortran), a surprising amount of consensus emerged.  
> Please read up on the notes to see what was discussed -- 
> please chime in ASAP if you have dissenting views.
> 
> Thanks!
> 
> --
> Jeff Squyres
> jsquyres at cisco.com
> 



