[Mpi-forum] MPI_Count

Solt, David George david.solt at hp.com
Mon Jan 25 15:17:32 CST 2010


HP does not care anymore.

I've had some informal chats with one person from Platform who claims it is still important to them, but I also believe their current workaround (-I8 support for Fortran and non-standardized C APIs that take long counts) is getting them by.  The person I talked to thought it was ridiculous to assume that message sizes will not continue to increase, since users are likely to combine core-to-core traffic into large node-to-node messages.
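
For context, one portable workaround applications sometimes use today (as opposed to Platform's non-standard long-count APIs) is to wrap the buffer in a contiguous derived datatype, so the int count passed to MPI_Send/MPI_Recv stays small even though the payload exceeds 2 GB.  A minimal sketch in C; the 1 GiB block size and 3-block total are illustrative values only, and it needs at least two ranks to run:

/* Sketch only: move ~3 GiB between ranks 0 and 1 without a 64-bit
 * count, by describing the data as NCHUNKS blocks of a derived
 * datatype.  Block size and chunk count are illustrative values. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int CHUNK   = 1 << 30;   /* 1 GiB of chars per block         */
    const int NCHUNKS = 3;         /* 3 GiB total, yet count is only 3 */
    char *buf = malloc((size_t)CHUNK * NCHUNKS);

    /* Describe one 1 GiB block as a single datatype... */
    MPI_Datatype block;
    MPI_Type_contiguous(CHUNK, MPI_CHAR, &block);
    MPI_Type_commit(&block);

    /* ...then move NCHUNKS of them; the int count never overflows. */
    if (rank == 0)
        MPI_Send(buf, NCHUNKS, block, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, NCHUNKS, block, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    MPI_Type_free(&block);
    free(buf);
    MPI_Finalize();
    return 0;
}

The bookkeeping gets uglier when the total size is not a nice multiple of the block size, which is the kind of tedium referred to further down in this thread.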

The initial motivation for this was communication-based.  I believe there were two customers who both wanted to use ScaLAPACK in a way that resulted in very large messages.

Yes, I have not updated #117 in a while, since there has not been much consensus.

Dave

-----Original Message-----
From: Jeff Squyres [mailto:jsquyres at cisco.com] 
Sent: Monday, January 25, 2010 9:10 AM
To: Main MPI Forum mailing list
Cc: Solt, David George
Subject: Re: [Mpi-forum] MPI_Count

FWIW, HP doesn't seem to care any more -- Dave, can you verify/disagree?  We haven't seen anyone from Platform comment on this.  Dave -- have you pinged Platform to see if they care?

IIRC, HP cited a small number of customers who wanted to do this.  I know that send/receive were specifically mentioned, but I don't know whether that was just a side effect of wanting to fix the MPI_File stuff (i.e., was the real desire to fix the MPI_File stuff, with all the rest done for consistency?).

I do know that #117 is woefully out of date; it has many holes -- some of which Dave has partially filled in.



On Jan 24, 2010, at 6:51 PM, Rajeev Thakur wrote:

> I don't know who wants to communicate >2GB at a time, but HP seemed to have enough users wanting it to cause Kannan to create and
> pursue ticket #117 until he moved to a different project.
> 
> Rajeev
> 
> 
> > -----Original Message-----
> > From: mpi-forum-bounces at lists.mpi-forum.org
> > [mailto:mpi-forum-bounces at lists.mpi-forum.org] On Behalf Of
> > Underwood, Keith D
> > Sent: Sunday, January 24, 2010 5:20 PM
> > To: Main MPI Forum mailing list; mpi3 at lists.mpi-forum.org
> > Subject: Re: [Mpi-forum] MPI_Count
> >
> > > Still seems a little odd to have a solution for I/O functions
> > > and not for communication functions (the original ticket was for
> > > communication https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/117).
> > > If > 2GB can be written from a single process in a single call,
> > > they can also be communicated. If it is tedious to break up the
> > > write into smaller writes, the same applies to communication.
> >
> > "can be" and "will be" are very different phrases.  The
> > feeling within the group that was there Thursday morning was
> > that the "way 64 bit is done" (TM) is to replicate all of the
> > function calls.  The group felt like a compelling case had
> > been made for I/O, because you would want to write all of
> > your memory to file (e.g. checkpointing) and memory per node
> > is growing beyond 2 GB.  However, it was much harder to make
> > the case that you really wanted to send that much data from a
> > contiguous location to any other node.  In fact, someone made
> > the statement "all of the application users understand that
> > you can't do that and still scale on any network in the
> > foreseeable future".
> >
> > Your interoperability comments may be a more motivating
> > example, but as of Thursday morning, it was clear that there
> > was not consensus among the forum members who were there -
> > especially given the pain that is going to be involved in
> > fixing this for the general case.
> >
> > Keith
> >


-- 
Jeff Squyres
jsquyres at cisco.com




