[Mpi3-hybridpm] First cut at slides

Jeff Squyres (jsquyres) jsquyres at cisco.com
Sat Jun 22 08:27:59 CDT 2013


On Jun 22, 2013, at 9:03 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:

> Thanks.  The slides seem to be confusing collective vs. synchronizing. I think you mean to say that INIT/FINALIZE are always collective, and they *might* be synchronizing (though we expect most implementations to not be synchronizing when they are not actually initializing or finalizing).

How so?  I thought slide 13 was fairly explicit about that:

- INIT and FINALIZE still collective
   - Continue to not specify if they synchronize or not

(although to be fair, I'm not sure if that is a local edit or not -- I can't remember offhand what was in the last version of the PPTX I sent out, so I've attached a current copy of the slides; see slide 13)
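For what it's worth, here's a minimal sketch (mine, not from the slides) of what portable code should do if it actually needs a synchronization point at shutdown, given that FINALIZE only promises to be collective:

    #include <mpi.h>

    /* Sketch only (not from the slides): if the application needs all
     * ranks to reach a common point before shutting down, it should say
     * so explicitly.  FINALIZE is collective, but whether it also
     * synchronizes is left unspecified. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* ... application work ... */

        /* Explicit synchronization point before shutdown. */
        MPI_Barrier(MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }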

> Also, doesn't IS_THREAD_MAIN need to be thread safe as well, like QUERY_THREAD_LEVEL?

Yes, I missed that -- fixed.
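For reference, here's a minimal sketch of the kind of usage that makes the thread-safety requirement necessary; the thread count and pthread scaffolding are just for illustration, and it assumes the program asked for MPI_THREAD_MULTIPLE:

    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    /* Sketch only: several threads may ask "am I the main thread?"
     * concurrently, which is why IS_THREAD_MAIN has to be thread safe.
     * The thread count is arbitrary. */
    static void *worker(void *arg)
    {
        int is_main;
        MPI_Is_thread_main(&is_main);   /* callable from any thread */
        printf("thread %ld: is_main = %d\n", (long)(size_t)arg, is_main);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided;
        pthread_t threads[4];

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        /* A real program would check 'provided' here. */

        for (size_t i = 0; i < 4; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);
        for (size_t i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);

        MPI_Finalize();
        return 0;
    }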

> I agree with JeffH that CONNECT/DISCONNECT is not a problem.  MPI cannot actually finalize in the first call to MPI_FINALIZE in your example, since the ref-count didn't reach zero.

I guess my point is that process A must wait for the *actual* finalization in process B (which is the second one), and if Finalize synchronizes (which most, but not all, MPI implementations do), then A will block until B calls the second Finalize.

But I think I'm coming to the conclusion that if you don't want to be affected by Finalize's possibly-blocking semantics, you should just call MPI_COMM_DISCONNECT first.
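Here's a minimal sketch of that "disconnect first" approach on the client side; getting the port name from argv[1] is just a placeholder for however the server actually publishes it:

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch only: a client that connected to another job severs the
     * connection explicitly before finalizing, so its FINALIZE cannot
     * end up waiting on the other side's (possibly much later) actual
     * finalization. */
    int main(int argc, char **argv)
    {
        MPI_Comm server;
        char port[MPI_MAX_PORT_NAME];

        MPI_Init(&argc, &argv);

        /* Placeholder: assume the server's port name arrives via argv[1]. */
        snprintf(port, sizeof(port), "%s", argv[1]);
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);

        /* ... talk to the server over the intercommunicator ... */

        MPI_Comm_disconnect(&server);   /* no longer "connected" to the server */

        MPI_Finalize();                 /* now independent of the server's Finalize */
        return 0;
    }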

-- 
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
Attachment: init-finalize-issues.pptx (application/vnd.openxmlformats-officedocument.presentationml.presentation, 64646 bytes)
URL: <http://lists.mpi-forum.org/pipermail/mpiwg-hybridpm/attachments/20130622/3ef26e1a/attachment-0001.pptx>

