[Mpi-forum] MPI_Count
Rajeev Thakur
thakur at mcs.anl.gov
Mon Jan 25 09:17:16 CST 2010
> IIRC, HP cited a small number of customers who wanted to do
> this. I know that send/receive were specifically mentioned,
> but I don't know if it was a side effect of actually wanting
> to fix the MPI_File stuff or not (i.e., was the real desire
> to fix the MPI_File stuff and all the rest was done for consistency?).
Ticket 117 is the opposite of the current proposal. It doesn't address I/O functions at all, only communication functions.
Rajeev
> I do know that #117 is woefully out of date; it has many
> holes -- some of which Dave has partially addressed.
>
>
>
> On Jan 24, 2010, at 6:51 PM, Rajeev Thakur wrote:
>
> > I don't know who wants to communicate >2GB at a time, but HP
> > seemed to have enough users wanting it to cause Kannan to
> > create and pursue ticket #117 until he moved to a different
> > project.
> >
> > Rajeev
> >
> >
> > > -----Original Message-----
> > > From: mpi-forum-bounces at lists.mpi-forum.org
> > > [mailto:mpi-forum-bounces at lists.mpi-forum.org] On Behalf Of
> > > Underwood, Keith D
> > > Sent: Sunday, January 24, 2010 5:20 PM
> > > To: Main MPI Forum mailing list; mpi3 at lists.mpi-forum.org
> > > Subject: Re: [Mpi-forum] MPI_Count
> > >
> > > > Still seems a little odd to have a solution for I/O
> > > > functions and not for communication functions (the
> > > > original ticket was for communication:
> > > > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/117).
> > > > If >2GB can be written from a single process in a single
> > > > call, it can also be communicated. If it is tedious to
> > > > break up the write into smaller writes, the same applies
> > > > to communication.
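For concreteness, the chunking being called tedious here looks
roughly like the minimal C sketch below. This is an illustration,
not code from the ticket: the helper name is invented, the buffer
is assumed contiguous, and error handling and the matching
receive-side loop are omitted.

    /* Send a buffer longer than INT_MAX elements by splitting
     * it into chunks that fit the int count argument. */
    #include <limits.h>
    #include <mpi.h>

    void send_big(const char *buf, size_t len, int dest, int tag,
                  MPI_Comm comm)
    {
        size_t off = 0;
        while (off < len) {
            size_t n = len - off;
            if (n > (size_t) INT_MAX)
                n = (size_t) INT_MAX;   /* clamp to int count */
            MPI_Send((void *) (buf + off), (int) n, MPI_CHAR,
                     dest, tag, comm);
            off += n;
        }
    }

The receiver has to mirror the same loop, which is part of what
makes breaking up the transfer by hand tedious.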
> > >
> > > "can be" and "will be" are very different phrases. The feeling
> > > within the group that was there Thursday morning was that
> the "way
> > > 64 bit is done" (TM) is to replicate all of the function
> calls. The
> > > group felt like a compelling case had been made for I/O,
> because you
> > > would want to write all of your memory to file (e.g.
> checkpointing)
> > > and memory per node is growing beyond 2 GB. However, it was much
> > > harder to make the case that you really wanted to send that much
> > > data from a contiguous location to any other node. In
> fact, someone
> > > made the statement "all of the application users
> understand that you
> > > can't do that and still scale on any network in the foreseeable
> > > future".
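For reference, the usual workaround in the compelling I/O case is
a derived datatype, and the same trick applies to point-to-point.
A minimal sketch, with an assumed chunk size and an invented
helper name; for brevity, len must be a multiple of CHUNK and
error checking is omitted:

    /* Move more than INT_MAX chars in one call by counting in
     * units of a large contiguous derived type. The same type
     * can be passed to MPI_File_write for the checkpoint case. */
    #include <mpi.h>

    #define CHUNK (1 << 30)   /* 2^30 chars per type instance */

    void send_big_typed(char *buf, size_t len, int dest, int tag,
                        MPI_Comm comm)
    {
        MPI_Datatype chunk_t;
        MPI_Type_contiguous(CHUNK, MPI_CHAR, &chunk_t);
        MPI_Type_commit(&chunk_t);
        MPI_Send(buf, (int) (len / CHUNK), chunk_t,
                 dest, tag, comm);
        MPI_Type_free(&chunk_t);
    }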
> > >
> > > Your interoperability comments may be a more motivating
> > > example, but as of Thursday morning, it was clear that
> > > there was not consensus among the forum members who were
> > > there - especially given the pain that is going to be
> > > involved in fixing this for the general case.
> > >
> > > Keith
> > >
> >
>
>
> --
> Jeff Squyres
> jsquyres at cisco.com
>
>