[Mpi-forum] MPI user survey

Underwood, Keith D keith.d.underwood at intel.com
Sat Nov 14 20:48:09 CST 2009


Yes, possibly. An arbitrarily complex interface will not support performance that is "close to the metal". Brian Barrett and I accepted the challenge that was issued to bring back data on this.

Keith


________________________________
From: mpi-forum-bounces at lists.mpi-forum.org <mpi-forum-bounces at lists.mpi-forum.org>
To: mpi-forum at lists.mpi-forum.org <mpi-forum at lists.mpi-forum.org>
Sent: Sat Nov 14 18:39:42 2009
Subject: Re: [Mpi-forum] MPI user survey

Hello,

I have a question about this questionnaire:
MPI one-sided communication performance is more important to me than supporting a rich remote memory access (RMA) feature set.
Why are the users being asked to choose one or the other? Are we suggesting that we cannot give the user community both performance and a rich feature set (via our interface and their implementations)?

Thanks,
Vinod.
--
Vinod Tipparaju ^ http://ft.ornl.gov/~vinod ^ 1-865-241-1802



> From: keith.d.underwood at intel.com
> To: mpi-forum at lists.mpi-forum.org
> Date: Sat, 14 Nov 2009 18:54:53 -0700
> Subject: Re: [Mpi-forum] MPI user survey
>
> Processes per node seems like a poor question without context. Do they have a 4-socket, 24-core node, or an older 1-socket, 2-core node?
>
> I think the 2^31 question needs more explanation. "2 billion data items" should be characterized as, say, 8 GB of floats or 16 GB of doubles, just as an example to help the user understand. I agree that the user won't grok the current wording.
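>
> For illustration only (a minimal sketch, not survey text; the buffer, chunk size, and second rank are made up), the issue is that the count argument to MPI_Send is a plain int, so 2^31 floats (8 GB) cannot be described in one call today and must be split across calls or wrapped in a larger derived datatype:
>
>   #include <mpi.h>
>   #include <stdlib.h>
>
>   int main(int argc, char **argv)
>   {
>       MPI_Init(&argc, &argv);
>       int rank;
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>       size_t nitems = (size_t)1 << 31;      /* 2^31 floats = 8 GB  */
>       float *buf = malloc(nitems * sizeof(float));
>       int chunk = 1 << 20;                  /* items per MPI call  */
>
>       /* The count parameter is an int, so the transfer is chunked. */
>       for (size_t off = 0; off < nitems; off += (size_t)chunk) {
>           if (rank == 0)
>               MPI_Send(buf + off, chunk, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
>           else if (rank == 1)
>               MPI_Recv(buf + off, chunk, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
>                        MPI_STATUS_IGNORE);
>       }
>
>       free(buf);
>       MPI_Finalize();
>       return 0;
>   }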
>
> I don't think the RMA question will be interpreted correctly as is. The real core of it is more like "how much performance would you be willing to sacrifice to get derived datatypes, communicators, etc.?" I doubt you can phrase the question that way, but as written it won't yield enough insight into that trade-off.
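>
> As a rough illustration of what "rich" means here (a minimal sketch, not survey text; the matrix size and target rank are made up): the first MPI_Put below is a plain contiguous transfer that maps directly onto the hardware, while the second pushes a strided column through a derived datatype, which is convenient for the user but harder for an implementation to keep fast:
>
>   #include <mpi.h>
>
>   #define N 1024
>   static double local[N * N], remote[N * N];
>
>   /* Collective over comm: every rank creates the window and fences. */
>   void rma_example(MPI_Comm comm, int target)
>   {
>       MPI_Win win;
>       MPI_Win_create(remote, N * N * sizeof(double), sizeof(double),
>                      MPI_INFO_NULL, comm, &win);
>       MPI_Win_fence(0, win);
>
>       /* "Lean" use: one contiguous row, trivial to map to the network. */
>       MPI_Put(local, N, MPI_DOUBLE, target, 0, N, MPI_DOUBLE, win);
>
>       /* "Rich" use: a strided column described by a derived datatype.  */
>       MPI_Datatype column;
>       MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
>       MPI_Type_commit(&column);
>       MPI_Put(local, 1, column, target, 0, 1, column, win);
>
>       MPI_Win_fence(0, win);
>       MPI_Type_free(&column);
>       MPI_Win_free(&win);
>   }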
>
> Keith
>
> > -----Original Message-----
> > From: mpi-forum-bounces at lists.mpi-forum.org [mailto:mpi-forum-
> > bounces at lists.mpi-forum.org] On Behalf Of Jeff Squyres
> > Sent: Saturday, November 14, 2009 8:11 PM
> > To: Main MPI Forum mailing list
> > Subject: Re: [Mpi-forum] MPI user survey
> >
> > Actually, when we put the questions in the survey web site, they came
> > out slightly differently. Have a look here:
> >
> > http://mpi-forum.questionpro.com/
> >
> > *** DO NOT GIVE THIS URL OUT TO USERS YET! ***
> >
> > Feel free to fill out the survey; we'll be clearing all the data on
> > Monday evening so that it can "go live".
> >
> >
> >
> >
> > On Nov 14, 2009, at 4:08 PM, Jeff Squyres (jsquyres) wrote:
> >
> > > Forum -- here's the questions that I took down on Friday morning.
> > > Josh Hursey and I cleaned them up quite a bit, and we grabbed Bill
> > > Gropp for 5 minutes on Saturday to give us a bit of spot feedback.
> > > Here's the results.
> > >
> > > *** Please send comments by Monday evening so that we can get these
> > > posted on a web site. Thanks.
> > >
> > > ------------------------
> > > x. Which of the following best describes you?
> > > - User of MPI applications
> > > - MPI application developer
> > > - MPI implementer
> > > - Academic educator, student, or researcher
> > > - Program / project management
> > > - Other ________________
> > >
> > > x. How familiar are you with the MPI standard?
> > > - 1/not familiar at all ... 5/extremely familiar
> > >
> > > x. Think of an MPI application that you run frequently. What is the
> > > typical number of MPI processes per job that you run? (select all
> > > that apply)
> > > - 1-16 MPI processes
> > > - 17-64 MPI processes
> > > - 65-512 MPI processes
> > > - 513-2048 MPI processes
> > > - 2049 MPI processes or more
> > > - I don't know
> > >
> > > x. Using the same MPI application from #3, what is the typical number
> > > of MPI processes that you run per node? (select all that apply)
> > > - 1 MPI process
> > > - 2-3 MPI processes
> > > - 4-7 MPI processes
> > > - 8-15 MPI processes
> > > - 16 MPI processes or more
> > > - I don't know
> > >
> > > x. Using the same application from #3, is it a 32 or 64 bit
> > > application?
> > > (select all that apply)
> > > - 32 bit
> > > - 64 bit
> > > - I don't know
> > > - Other: _________________
> > >
> > > x. Which of the following do *any* of your MPI applications use?
> > > (select all that apply)
> > > - Threads
> > > - OpenMP
> > > - Shmem
> > > - Global Arrays
> > > - Co-processors / accelerators
> > > - PGAS languages
> > > - I don't know
> > > - Other: ______________
> > >
> > > x. How important is each of the following sets of MPI functionality
> > > to *any* of your MPI applications?
> > > 1/not important at all ... 5/very important
> > > - Point-to-point communications
> > > - Collective communications
> > > - Derived / complex datatypes
> > > - Communicators other than MPI_COMM_WORLD
> > > - Graph or Cartesian process topologies
> > > - Error handles / error checking
> > > - Dynamic MPI processes (spawn, connect/accept, join)
> > > - One-sided communication
> > > - Generalized requests
> > > - Parallel I/O
> > > - "PMPI" profiling interface
> > > - Multi-threaded applications (for example, MPI_THREAD_MULTIPLE)
> > > - Other: ______________
> > > If you marked any set with 1 or 2, please explain why.
> > > __________
> > >
> > > x. Are any of your MPI applications written to use the MPI C++
> > > bindings?
> > > - Yes
> > > - No
> > > - I don't know
> > >
> > > x. I expect to be able to upgrade to an MPI-3 implementation and
> > still
> > > be able to run my legacy MPI applications *without recompiling*.
> > > Strongly agree/1 ...... Strongly disagree/5
> > > Open comment: _________________________
> > >
> > > x. I expect to be able to upgrade to an MPI-3 implementation and only
> > > need to recompile my legacy MPI applications *with no source
> > code
> > > changes*.
> > > Strongly agree/1 ....... Strongly disagree/5
> > > Open comment: _________________________
> > >
> > > x. My MPI application would benefit from being able to reference more
> > > than 2^31 data items in a single MPI function invocation.
> > > Strongly agree/1 ....... Strongly disagree/5
> > > Open comment: _________________________
> > >
> > > x. MPI one-sided communication performance is more important to me
> > > than supporting a rich remote memory access (RMA) feature set.
> > > Strongly agree/1 ....... Strongly disagree/5
> > > Open comment: _________________________
> > >
> > > x. The following is a list of topics that the MPI Forum is
> > > considering for MPI-3. Rank them in order of importance to your
> > > MPI applications:
> > > - Non-blocking collective communications
> > > - Revamped one-sided communications (compared to MPI-2.2)
> > > - MPI application control of fault tolerance
> > > - New Fortran bindings (type safety, etc.)
> > > - "Hybrid" programming (MPI in conjunction with threads,
> > > OpenMP, ..)
> > > - Standardized third-party MPI tool support
> > > - Other: ______________
> > >
> > > x. What *ONE THING* would you like to see added or improved in the
> > MPI
> > > standard?
> > > _____________
> > >
> > > x. Rank the following in order of importance to your MPI
> > applications:
> > > - Performance
> > > - Feature-rich API
> > > - Run-time reliability
> > > - Scalability to large numbers of MPI processes
> > > - Integration with other communication protocols /
> > >
> > > x. Did you attend the MPI Forum BOF at SC09?
> > > - Yes
> > > - No
> > >
> > > x. Use the space below to provide any other information, suggestions,
> > > or comments to the MPI Forum.
> > > ________________________
> > >
> > >
> > > --
> > > Jeff Squyres
> > > jsquyres at cisco.com
> > >
> > >
> >
> >
> > --
> > Jeff Squyres
> > jsquyres at cisco.com
> >
>
> _______________________________________________
> mpi-forum mailing list
> mpi-forum at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum