[Mpi3-hybridpm] First cut at slides

Jeff Hammond jhammond at alcf.anl.gov
Tue Jun 18 09:02:38 CDT 2013

I agree that this solves most of my problems.  I still need
MPI_QUERY_THREAD to be multi-thread-safe so that I can distinguish between
SERIALIZED and MULTIPLE inside of library calls, though.  If you want me to
instead call INIT_THREAD(..), then your ref count will probably need to be a
64-bit integer, because I very well could do so in every library call that
calls MPI, in which case it could happen more than 2^31 times if I've got
>>10 threads per process making these library calls.  I'm not saying that
this is going to be common, but 1000 threads times 24 hours times 30 calls
per second is about 2.6 billion calls, which overflows a signed 32-bit
counter.


On Tue, Jun 18, 2013 at 7:47 AM, Jeff Squyres (jsquyres) <jsquyres at cisco.com> wrote:

> For comment...
> Here's a first cut at slides that we discussed yesterday.  It's meant to
> be a rollup of all the issues surrounding:
> - thread safe INIT / FINALIZE
> - reference-counting INIT / FINALIZE
> - nesting of INIT / FINALIZE
> I discovered a new issue while making these slides: with ref-counted
> INIT/FINALIZE, since INIT/FINALIZE are collective, the ref counts are the
> same in all procs in MPI_COMM_WORLD.  But the ref counts may be different
> in connected processes outside COMM_WORLD.  What happens when they try to
> finalize over all connected processes?  See slide 12 for an example.  I'm
> not sure what the right answer is yet.
> --
> Jeff Squyres
> jsquyres at cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/

Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
ALCF docs: http://www.alcf.anl.gov/user-guides