[Mpi-forum] MPI user survey
Jeff Squyres
jsquyres at cisco.com
Sat Nov 14 18:08:51 CST 2009
Forum -- here are the questions that I took down on Friday morning.
Josh Hursey and I cleaned them up quite a bit, and we grabbed Bill
Gropp for 5 minutes on Saturday to give us a bit of spot feedback.
Here are the results.
*** Please send comments by Monday evening so that we can get these
posted on a web site. Thanks.
------------------------
x. Which of the following best describes you?
- User of MPI applications
- MPI application developer
- MPI implementer
- Academic educator, student, or researcher
- Program / project management
- Other ________________
x. Rate your familiarity with the MPI standard:
- 1/not familiar at all ... 5/extremely familiar
x. Think of an MPI application that you run frequently. What is the
typical number of MPI processes per job that you run? (select all
that apply)
- 1-16 MPI processes
- 17-64 MPI processes
- 65-512 MPI processes
- 513-2048 MPI processes
- 2049 MPI processes or more
- I don't know
x. Using the same MPI application from #3, what is the typical number
of MPI processes that you run per node? (select all that apply)
- 1 MPI process
- 2-3 MPI processes
- 4-7 MPI processes
- 8-15 MPI processes
- 16 MPI processes or more
- I don't know
x. Using the same application from #3, is it a 32- or 64-bit application?
(select all that apply)
- 32 bit
- 64 bit
- I don't know
- Other: _________________
x. Which of the following do *any* of your MPI applications use?
(select all that apply)
- Threads
- OpenMP
- SHMEM
- Global Arrays
- Co-processors / accelerators
- PGAS languages
- I don't know
- Other: ______________
x. How important is each of the following sets of MPI functionality
to *any* of your MPI applications?
1/not important at all ... 5/very important
- Point-to-point communications
- Collective communications
- Derived / complex datatypes
- Communicators other than MPI_COMM_WORLD
- Graph or Cartesian process topologies
- Error handlers / error checking
- Dynamic MPI processes (spawn, connect/accept, join)
- One-sided communication
- Generalized requests
- Parallel I/O
- "PMPI" profiling interface
- Multi-threaded applications (for example, MPI_THREAD_MULTIPLE)
- Other: ______________
If you marked any set with 1 or 2, please explain why.
__________
x. Are any of your MPI applications written to use the MPI C++
bindings?
- Yes
- No
- I don't know
x. I expect to be able to upgrade to an MPI-3 implementation and still
be able to run my legacy MPI applications *without recompiling*.
Strongly agree/1 ....... Strongly disagree/5
Open comment: _________________________
x. I expect to be able to upgrade to an MPI-3 implementation and only
need to recompile my legacy MPI applications *with no source code
changes*.
Strongly agree/1 ....... Strongly disagree/5
Open comment: _________________________
x. My MPI application would benefit from being able to reference more
than 2^31 data items in a single MPI function invocation.
Strongly agree/1 ....... Strongly disagree/5
Open comment: _________________________
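For context on the 2^31 question: in the C bindings, every count argument is
a plain int, so a single call cannot describe more than 2^31-1 elements
directly. The usual workaround today is to wrap the buffer in a derived
datatype. Here's a rough, untested sketch of that workaround (buffer size is
just illustrative; run with at least two processes):

#include <mpi.h>
#include <stdlib.h>

/* Describe ~3 GB (3 * 2^30 chars, i.e. more than 2^31 data items) as
 * 3 elements of a 2^30-char contiguous datatype, since 3 * 2^30 does
 * not fit in the int count parameter of MPI_Send/MPI_Recv. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    size_t nbytes = 3UL * (1UL << 30);
    char *buf = malloc(nbytes);

    MPI_Datatype chunk;
    MPI_Type_contiguous(1 << 30, MPI_CHAR, &chunk);  /* 2^30 chars per element */
    MPI_Type_commit(&chunk);

    if (rank == 0)
        MPI_Send(buf, 3, chunk, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, 3, chunk, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&chunk);
    free(buf);
    MPI_Finalize();
    return 0;
}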
x. MPI one-sided communication performance is more important to me
than support for a rich remote memory access (RMA) feature set.
Strongly agree/1 ....... Strongly disagree/5
Open comment: _________________________
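For context on the one-sided question: the simplest use of the MPI-2 RMA
interface (fence synchronization with MPI_Put) looks roughly like the
untested sketch below; the "rich feature set" refers to the rest of the
interface (lock/unlock, accumulate, post/start/complete/wait epochs, etc.).

#include <mpi.h>

/* Rank 0 puts a value into rank 1's window; both ranks use fence
 * synchronization.  Run with at least two processes. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, value = 0, payload = 42;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win win;
    MPI_Win_create(&value, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)
        MPI_Put(&payload, 1, MPI_INT,
                1 /* target rank */, 0 /* target displacement */,
                1, MPI_INT, win);
    MPI_Win_fence(0, win);      /* rank 1 now sees value == 42 */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}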
x. The following is a list of topics that the MPI Forum is
considering for MPI-3. Rank them in order of importance to your
MPI applications:
- Non-blocking collective communications
- Revamped one-sided communications (compared to MPI-2.2)
- MPI application control of fault tolerance
- New Fortran bindings (type safety, etc.)
- "Hybrid" programming (MPI in conjunction with threads, OpenMP, ..)
- Standardized third-party MPI tool support
- Other: ______________
x. What *ONE THING* would you like to see added or improved in the MPI
standard?
_____________
x. Rank the following in order of importance to your MPI applications:
- Performance
- Feature-rich API
- Run-time reliability
- Scalability to large numbers of MPI processes
- Integration with other communication protocols
x. Did you attend the MPI Forum BOF at SC09?
- Yes
- No
x. Use the space below to provide any other information, suggestions,
or comments to the MPI Forum.
________________________
--
Jeff Squyres
jsquyres at cisco.com