[Mpi3-hybridpm] First cut at slides

Pavan Balaji balaji at mcs.anl.gov
Sat Jun 22 08:35:49 CDT 2013


One more issue --

Consider a library libfoo.la, which provides:

foo_init()
{
	int provided;
	MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);
}

foo_finalize()
{
	MPI_Finalize();
}

int main()
{
	int provided, level;
	MPI_Init_thread(NULL, NULL, MPI_THREAD_FUNNELED, &provided);

	foo_init();
	MPI_Query_thread(&level);
	foo_finalize();

	if (level == MPI_THREAD_MULTIPLE) {
		/* do some THREAD_MULTIPLE stuff */
	}

	return 0;
}

I guess my questions are:

1. can any library function call increase my thread level?

I guess yes.

2. can any library function call decrease my thread level?

My guess is yes, but then things get messy: in the example above, main() queries the level before foo_finalize() and acts on it afterwards, so a cached MULTIPLE may be stale by the time it is used.

  -- Pavan

On 06/22/2013 08:03 AM, Pavan Balaji wrote:
> Hi Jeff,
>
> Thanks.  The slides seem to be confusing collective vs. synchronizing. I
> think you mean to say that INIT/FINALIZE are always collective, and they
> *might* be synchronizing (though we expect most implementations to not
> be synchronizing when they are not actually initializing or finalizing).
>
> Also, doesn't IS_THREAD_MAIN need to be thread safe as well, like
> QUERY_THREAD_LEVEL?
>
> #pragma omp parallel for
> for (i = 0; i < gazillion; i++) {
>      int flag;
>      MPI_Is_thread_main(&flag);
>      if (flag) {
>          MPI_SUPER_CALL(...);
>      }
> }
>
> I agree with JeffH that CONNECT/DISCONNECT is not a problem.  MPI cannot
> actually finalize in the first call to MPI_FINALIZE in your example,
> since the ref-count didn't reach zero.
>
>   -- Pavan
>
> On 06/18/2013 07:47 AM, Jeff Squyres (jsquyres) wrote:
>> For comment...
>>
>> Here's a first cut at slides that we discussed yesterday.  It's meant
>> to be a rollup of all the issues surrounding:
>>
>> - thread safe INIT / FINALIZE
>> - reference-counting INIT / FINALIZE
>> - nesting of INIT / FINALIZE
>>
>> I discovered a new issue when making up these slides: with
>> ref-counting INIT/FINALIZE, since INIT/FINALIZE are collective, we
>> have the same ref counts in all procs in MPI_COMM_WORLD.  But the ref
>> counts may be different in non-COMM_WORLD connected processes.  What
>> happens when they try to finalize over all connected processes?  See
>> slide 12 for an example.  I'm not sure what the right answer is yet.
>>
>>
>>
>> _______________________________________________
>> Mpi3-hybridpm mailing list
>> Mpi3-hybridpm at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm
>>
>

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji


