[Mpi-forum] MPI_Count

Graham, Richard L. rlgraham at ornl.gov
Sun Jan 24 18:18:37 CST 2010

I tend to agree with Rajeev, and have passed this on to our scientific applications group to see what their opinion is.


On 1/24/10 6:51 PM, "Rajeev Thakur" <thakur at mcs.anl.gov> wrote:

I don't know who wants to communicate >2GB at a time, but HP seemed to have enough users wanting it to cause Kannan to create and
pursue ticket #117 until he moved to a different project.


> -----Original Message-----
> From: mpi-forum-bounces at lists.mpi-forum.org
> [mailto:mpi-forum-bounces at lists.mpi-forum.org] On Behalf Of
> Underwood, Keith D
> Sent: Sunday, January 24, 2010 5:20 PM
> To: Main MPI Forum mailing list; mpi3 at lists.mpi-forum.org
> Subject: Re: [Mpi-forum] MPI_Count
> > Still seems a little odd to have a solution for I/O functions and not
> > for communication functions (the original ticket was for communication:
> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/117).
> > If > 2GB can be written from a single process in a single call, it can
> > also be communicated. If it is tedious to break up the write into
> > smaller writes, the same applies to communication.
> "can be" and "will be" are very different phrases.  The
> feeling within the group that was there Thursday morning was
> that the "way 64 bit is done" (TM) is to replicate all of the
> function calls.  The group felt like a compelling case had
> been made for I/O, because you would want to write all of
> your memory to file (e.g. checkpointing) and memory per node
> is growing beyond 2 GB.  However, it was much harder to make
> the case that you really wanted to send that much data from a
> contiguous location to any other node.  In fact, someone made
> the statement "all of the application users understand that
> you can't do that and still scale on any network in the
> foreseeable future".
> Your interoperability comments may be a more motivating
> example, but as of Thursday morning, it was clear that there
> was not consensus among the forum members who were -
> especially given the pain that is going to be involved in
> fixing this for the general case.
> Keith
> _______________________________________________
> mpi-forum mailing list
> mpi-forum at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum
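
For context, the "break the transfer into smaller calls" workaround that the
quoted exchange refers to looks roughly like the sketch below. It is only
illustrative: the helper name send_large, the use of MPI_BYTE, and the chunk
size are assumptions for the example, not anything proposed in the thread. It
simply shows how a buffer larger than INT_MAX bytes can be pushed through the
existing int-count MPI_Send interface.

    /* Minimal sketch: send a buffer bigger than INT_MAX bytes by
     * splitting it into chunks whose counts fit in an int. */
    #include <limits.h>
    #include <stddef.h>
    #include <mpi.h>

    static void send_large(const char *buf, size_t nbytes,
                           int dest, int tag, MPI_Comm comm)
    {
        const size_t max_chunk = (size_t) INT_MAX;  /* largest legal int count */
        size_t offset = 0;

        while (offset < nbytes) {
            size_t remaining = nbytes - offset;
            int count = (int) (remaining < max_chunk ? remaining : max_chunk);

            /* Cast keeps older, non-const MPI_Send prototypes happy. */
            MPI_Send((void *) (buf + offset), count, MPI_BYTE, dest, tag, comm);
            offset += (size_t) count;
        }
    }

The receiver has to post a matching loop of MPI_Recv calls (or know the total
size in advance), which is exactly the kind of tedium the ticket #117
discussion is about. An alternative that also stays within the existing
interface is to build a large contiguous derived datatype with
MPI_Type_contiguous so that the int count passed to the send stays small.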
