[Mpi3-hybridpm] First cut at slides
Pavan Balaji
balaji at mcs.anl.gov
Sat Jun 22 08:03:38 CDT 2013
Hi Jeff,
Thanks. The slides seem to be conflating collective with synchronizing.
I think you mean to say that INIT/FINALIZE are always collective, and
they *might* be synchronizing (though we expect most implementations
not to synchronize when they are not actually initializing or
finalizing).
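
To make that concrete, here is a rough sketch of those semantics
(hypothetical wrapper names, just for illustration; a real
implementation would do this inside the library and protect the
counter with a lock):

#include <mpi.h>

/* Hypothetical ref-counted INIT/FINALIZE wrappers, for illustration only */
static int init_refcount = 0;

void refcounted_init(int *argc, char ***argv)
{
    if (init_refcount++ == 0) {
        /* Only the outermost call does the real (and possibly
         * synchronizing) initialization. */
        MPI_Init(argc, argv);
    }
    /* Nested calls are still collective, but there is nothing to
     * synchronize on -- they just bump the count. */
}

void refcounted_finalize(void)
{
    if (--init_refcount == 0) {
        /* Only the last call actually finalizes; MPI stays alive
         * until every outstanding INIT has been matched. */
        MPI_Finalize();
    }
}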
Also, doesn't IS_THREAD_MAIN need to be thread-safe as well, like
QUERY_THREAD_LEVEL?  For example:
#pragma omp parallel for
for (i = 0; i < gazillion; i++) {
    if (IS_THREAD_MAIN()) {
        MPI_SUPER_CALL(...);
    }
}
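
For reference, with the actual C bindings the same pattern would look
roughly like the sketch below.  Assuming IS_THREAD_MAIN really is
callable from any thread (which is exactly the question), requesting
MPI_THREAD_FUNNELED should be enough here, since only the main thread
makes the "real" MPI call in the loop body; the MPI_Barrier is just a
stand-in for MPI_SUPER_CALL:

#include <mpi.h>

int main(int argc, char **argv)
{
    const int gazillion = 1000000;  /* stand-in for "gazillion" */
    int provided, i;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

#pragma omp parallel for
    for (i = 0; i < gazillion; i++) {
        int flag;
        MPI_Is_thread_main(&flag);       /* hit concurrently by every thread */
        if (flag) {
            MPI_Barrier(MPI_COMM_SELF);  /* stand-in for the real MPI call */
        }
    }

    MPI_Finalize();
    return 0;
}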
I agree with JeffH that CONNECT/DISCONNECT is not a problem. MPI cannot
actually finalize in the first call to MPI_FINALIZE in your example,
since the ref-count didn't reach zero.
-- Pavan
On 06/18/2013 07:47 AM, Jeff Squyres (jsquyres) wrote:
> For comment...
>
> Here's a first cut at slides that we discussed yesterday. It's meant to be a rollup of all the issues surrounding:
>
> - thread safe INIT / FINALIZE
> - reference-counting INIT / FINALIZE
> - nesting of INIT / FINALIZE
>
> I discovered a new issue when making up these slides: with ref-counting INIT/FINALIZE, since INIT/FINALIZE are collective, we have the same ref counts in all procs in MPI_COMM_WORLD. But the ref counts may be different in non-COMM_WORLD connected processes. What happens when they try to finalize over all connected processes? See slide 12 for an example. I'm not sure what the right answer is yet.
>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji