[mpi3-coll] July/August telecon
Torsten Hoefler
htor at cs.indiana.edu
Mon Jul 28 09:13:49 CDT 2008
Hello Collectives-WG,
we propose
July 31, 12:00pm EDT
as the time/date for our monthly collectives workgroup teleconference.
Please let us know if there are any strong objections.
We will mainly be talking about the feedback we got from the whole Forum
on our proposals. My notes are at:
https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/forum063008
In particular, we have to decide (those points also serve as agenda
items):
1) "One call fits all" vs. "Calls for everything:
- we should decide on a model so that we can flesh out the semantic
  details of the operations
2) Do we want/need all combinations that are semantically possible?
- the Forum accepted the usefulness of non-blocking collectives as
  proven (see the first sketch after the agenda)
- the Forum wants to see more research/use cases for sparse collectives
- the Forum wants to see more research/use cases for persistent collectives
- who wants to invest time into this?
3) Updates on topological collectives
4) Updates on MPI Plans (I have a slightly different
proposal/implementation than Christian for the same thing; a sketch of
the intended usage pattern follows the agenda)
5) Variable size collectives (does anyone pick this topic up?)
6) MPI-2.2 issues (I will add the WG's proposals to the MPI-2.2 wiki so
that this can serve as a base for discussion)
a) fix the non-scalable graph interface (obvious; see the
   distributed-graph sketch after the agenda)
b) local reduction operation (needed by libraries, e.g., LibNBC; see
   the sketch after the agenda)
c) local progress function (caused heavy discussions in Forum)
d) request completion callbacks (for better progression, e.g., LibNBC)
e) partial pack/unpack (needed by libraries, e.g., LibNBC)
7) other items
Please post any additional proposals or items to discuss to the list!
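
To make 1) and 2) concrete, here is a minimal sketch of a non-blocking
broadcast in the "Calls for everything" style; the MPI_Ibcast signature
shown is illustrative of the proposed interface, not final, and
do_independent_work() is a placeholder for user computation:

  #include <mpi.h>

  extern void do_independent_work(void);  /* placeholder computation */

  /* Overlap a broadcast with computation that does not touch buf. */
  void bcast_overlap(double *buf, int n, MPI_Comm comm)
  {
      MPI_Request req;

      /* start the non-blocking broadcast from rank 0 ... */
      MPI_Ibcast(buf, n, MPI_DOUBLE, 0, comm, &req);

      /* ... compute while the broadcast progresses ... */
      do_independent_work();

      /* ... and complete it like any other request */
      MPI_Wait(&req, MPI_STATUS_IGNORE);
  }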
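For item 4), a plan sets a collective up once and starts it many times;
here is a minimal sketch of the init/start/wait pattern we have in mind
(the MPI_Bcast_init name and signature are illustrative, the exact
interface is exactly what we need to discuss):

  #include <mpi.h>

  /* Broadcast the same buffer in every iteration of a solver loop. */
  void iterate(double *buf, int n, int iters, MPI_Comm comm)
  {
      MPI_Request req;

      /* plan the broadcast once; arguments are fixed at init time */
      MPI_Bcast_init(buf, n, MPI_DOUBLE, 0, comm, MPI_INFO_NULL, &req);

      for (int i = 0; i < iters; i++) {
          MPI_Start(&req);                   /* launch this iteration */
          MPI_Wait(&req, MPI_STATUS_IGNORE); /* complete it */
      }

      MPI_Request_free(&req);                /* release the plan */
  }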
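For 6a), the idea is a distributed graph constructor where every
process specifies only its own neighbors, so memory use stays
proportional to the local degree instead of the full graph as with
MPI_Graph_create; a sketch for a simple ring, with
MPI_Dist_graph_create_adjacent as one possible shape of the fix:

  #include <mpi.h>

  /* Each rank lists only its left/right neighbor: O(degree) memory. */
  MPI_Comm make_ring(MPI_Comm comm)
  {
      int rank, size;
      MPI_Comm ring;

      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &size);

      int nbrs[2] = { (rank - 1 + size) % size, (rank + 1) % size };

      MPI_Dist_graph_create_adjacent(comm,
              2, nbrs, MPI_UNWEIGHTED,    /* my sources      */
              2, nbrs, MPI_UNWEIGHTED,    /* my destinations */
              MPI_INFO_NULL, 0 /* no reorder */, &ring);
      return ring;
  }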
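For 6b), the local reduction applies an MPI_Op to two local buffers
without any communication, which is what a library like LibNBC needs
to combine partial results inside its own collective schedules; a
minimal sketch (the MPI_Reduce_local name/signature is illustrative):

  #include <mpi.h>

  /* acc[i] = partial[i] op acc[i], applied by the MPI library so that
     user-defined MPI_Ops work as well */
  void combine(double *partial, double *acc, int n)
  {
      MPI_Reduce_local(partial, acc, n, MPI_DOUBLE, MPI_SUM);
  }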
Thanks & Best,
Andrew Lumsdaine and Torsten Hoefler
--
bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
Indiana University | http://www.indiana.edu
Open Systems Lab | http://osl.iu.edu/
150 S. Woodlawn Ave. | Bloomington, IN, 47405-7104 | USA
Lindley Hall Room 135 | +01 (812) 855-3608