[Mpi3-hybridpm] First cut at slides

Jeff Squyres (jsquyres) jsquyres at cisco.com
Tue Jun 18 09:49:29 CDT 2013


Yes, good point.  I'll add the thread safety of MPI_QUERY_THREAD to the slides.


On Jun 18, 2013, at 10:02 AM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:

> I agree that this solves most of my problems.  I still need MPI_QUERY_THREAD to be multi-thread-safe, though, so I can distinguish between SERIALIZED and MULTIPLE inside library calls.  If you want me to instead call INIT_THREAD(..), then your ref count will probably need to be a 64-bit integer, because I very well could do 
> 
> MPI_Query_thread(&provided);
> if (provided == MPI_THREAD_SERIALIZED) {
>     thread_lock();
>     MPI_Foo(...);
>     thread_unlock();
> } else {
>     MPI_Foo(...);
> }
> 
> in every library call that calls MPI, in which case it could happen more than 2^31 times if I've got many more than 10 threads per process making these library calls.  I'm not saying that this is going to be common, but 1000 threads times 24 hours times 30 calls per second = 32-bit overflow.
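[Editorial aside: Jeff's overflow arithmetic can be checked directly. The thread count, duration, and call rate below are his illustrative figures, not measurements.]

```python
# Check that the hypothetical workload overflows a signed 32-bit ref count.
threads = 1000             # threads per process (illustrative)
seconds = 24 * 60 * 60     # one day
calls_per_second = 30      # MPI-calling library calls per thread per second

total_calls = threads * seconds * calls_per_second
print(total_calls)               # 2592000000
print(total_calls > 2**31 - 1)   # True: exceeds INT32_MAX, hence the 64-bit counter
```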
> 
> Jeff
> 
> On Tue, Jun 18, 2013 at 7:47 AM, Jeff Squyres (jsquyres) <jsquyres at cisco.com> wrote:
> For comment...
> 
> Here's a first cut at slides that we discussed yesterday.  It's meant to be a rollup of all the issues surrounding:
> 
> - thread safe INIT / FINALIZE
> - reference-counting INIT / FINALIZE
> - nesting of INIT / FINALIZE
> 
> I discovered a new issue while making up these slides: with ref-counted INIT/FINALIZE, since INIT/FINALIZE are collective, we have the same ref counts in all procs in MPI_COMM_WORLD.  But the ref counts may differ in connected processes outside MPI_COMM_WORLD.  What happens when they try to finalize over all connected processes?  See slide 12 for an example.  I'm not sure what the right answer is yet.
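[Editorial aside: the ref-count divergence can be modeled with a toy simulation. All names here are hypothetical illustrations of the problem, not proposed MPI semantics.]

```python
# Toy model of reference-counted INIT/FINALIZE, one counter per process.
# MPI would actually shut down only when a process's count returns to zero.

class Proc:
    def __init__(self):
        self.refs = 0

    def init(self):
        self.refs += 1

    def finalize(self):
        self.refs -= 1
        return self.refs == 0   # True means "really finalize now"

# Two connected processes from different MPI_COMM_WORLDs (e.g. joined via
# MPI_Comm_connect).  A's libraries called INIT twice; B's called it once.
a, b = Proc(), Proc()
a.init(); a.init()
b.init()

# Both now call FINALIZE once, collectively over all connected processes:
print(a.finalize())  # False -- A still holds a reference
print(b.finalize())  # True  -- B thinks it is time to shut down

# The connected processes disagree on whether this collective FINALIZE is
# the final one -- the open question from slide 12.
```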
> 
> --
> Jeff Squyres
> jsquyres at cisco.com
> For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> _______________________________________________
> Mpi3-hybridpm mailing list
> Mpi3-hybridpm at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm
> 
> 
> 
> -- 
> Jeff Hammond
> Argonne Leadership Computing Facility
> University of Chicago Computation Institute
> jhammond at alcf.anl.gov / (630) 252-5381
> http://www.linkedin.com/in/jeffhammond
> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> ALCF docs: http://www.alcf.anl.gov/user-guides


-- 
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/




