[Mpi3-ft] New revision of RTS proposal
Josh Hursey
jjhursey at open-mpi.org
Tue Dec 20 14:00:40 CST 2011
Updated document now available on the website.
I'm planning on sending this around to the forum at 5 pm Eastern.
-- Josh
On Mon, Dec 19, 2011 at 1:44 PM, Josh Hursey <jjhursey at open-mpi.org> wrote:
> A new version of the document is available on the following wiki page:
> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/ft/rts_proposal_main
> I think this one is ready to go. If you have time, take a look over it
> and send notes to the list. I am planning on sending this out to the
> MPI Forum mailing list (and attaching it to the ticket) tonight or
> tomorrow morning.
>
> I'm working on the slides for the meeting. I hope to have a version to
> circulate tomorrow afternoon.
>
> Thanks,
> Josh
> Change Log:
> -----------
> * Minor wording touch-up clarifications
> * 17.6: Added clarification to MPI_Comm_reenable_any_source() that it
> is OK to pass MPI_GROUP_IGNORE (a short sketch follows below).
> * 17.6: Added clarification that MPI_Comm_drain will match across
> process failure notification, similar to MPI_Comm_validate.
> * 17.7.1: Added the matching clarification to MPI_Comm_validate.
> * 17.10: Clarify that MPI_Comm_disconnect can be passed a collectively
> inactive communicator.
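>
> For quick reference, here is roughly the usage that the
> MPI_GROUP_IGNORE clarification is aimed at. This is pseudo-C: the
> argument list is simplified (I am treating MPI_GROUP_IGNORE like
> MPI_STATUS_IGNORE is treated for status arguments), so please read
> 17.6 in the document for the exact binding:
>
> /* Sketch only, not the normative binding.  After a process failure
>  * disables MPI_ANY_SOURCE receives on 'comm', re-enable them without
>  * retrieving the group of failed processes. */
> void reenable_wildcards(MPI_Comm comm)
> {
>     int rc = MPI_Comm_reenable_any_source(comm, MPI_GROUP_IGNORE);
>     if (rc == MPI_SUCCESS) {
>         /* MPI_ANY_SOURCE receives on 'comm' are usable again */
>     }
> }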
>
> On Fri, Dec 16, 2011 at 3:36 PM, Josh Hursey <jjhursey at open-mpi.org> wrote:
>> A new version of the document is available on the following wiki page:
>> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/ft/rts_proposal_main
>>
>> We need to have this document finalized by COB Monday if at all
>> possible, so please take a look at it and send feedback to the list.
>>
>> -- Josh
>>
>> Change Log:
>> -----------
>> * Fix a few typos
>> * 17.5.1: Update the Process Failure handler section per discussions
>> * 17.6: Touch up the MPI_ANY_SOURCE wording
>> * 17.6: Clarify that if MPI_Comm_reenable_any_source is called with
>> an intercommunicator, then it returns the group of failed processes
>> from the remote group.
>> * 17.6: Added advice to implementors that MPI_Comm_drain may be a
>> locally collective operation.
>> * 17.6: Extended the advice to users to note that
>> MPI_FAILHANDLER_MODE_ALL does not guarantee that the failure handler
>> will be called the same number of times at every process unless they
>> call a validation operation to synchronize the handlers, so they need
>> to take care when using MPI_Comm_drain() in this operating mode (see
>> the first sketch after this change log).
>> * 17.6.1: Added a sentence allowing MPI_ERR_IN_STATUS to be returned
>> from test and completion operations even if it is just the
>> MPI_ERR_ANY_SOURCE_DISABLED warning (second sketch below).
>> * 17.8.2: Communicator creation must have a collectively active input
>> communicator and must return uniformly at all processes.
>> * 17.8.2: Communicator construction operations will match across
>> process failure, so they match similarly to MPI_Comm_validate() and
>> not like other collectives.
>> * 17.8.3: Inter-communicator creation operations have the same
>> constraints as communicator creation (previous two points).
>> * 17.8.4: Added an example section with the communicator creation
>> loop example (third sketch below).
>> * 17.9: Topology creation operations match semantics of communicator creation.
>> * 17.10: Dynamic creation operations (spawn and friends) match
>> semantics of communicator creation.
>> * 17.11: Window creation operations match semantics of communicator creation.
>> * 17.12.2: File_open/close match semantics of communicator creation.
>> * A.1.1: Added the MPI_FAILHANDLER_MODE_* constants to the appendix
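>>
>> To make the MPI_FAILHANDLER_MODE_ALL caution concrete, the pattern I
>> would suggest is below. This is pseudo-C: the argument lists for
>> MPI_Comm_validate and MPI_Comm_drain are simplified here, so see 17.6
>> and 17.7.1 for the exact bindings:
>>
>> /* Sketch only (simplified bindings).  In MPI_FAILHANDLER_MODE_ALL
>>  * the failure handlers may have fired a different number of times at
>>  * different processes, so collectively synchronize the view of
>>  * failures before draining. */
>> void sync_then_drain(MPI_Comm comm)
>> {
>>     MPI_Group failed;
>>     MPI_Comm_validate(comm, &failed);   /* collective: agree on failures */
>>     if (failed != MPI_GROUP_EMPTY)
>>         MPI_Group_free(&failed);
>>     MPI_Comm_drain(comm);               /* drain with a consistent view */
>> }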
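>>
>> For the 17.6.1 change, the situation it covers looks roughly like the
>> following. Everything here is standard MPI except
>> MPI_ERR_ANY_SOURCE_DISABLED, which is the error class from the
>> proposal, so treat the snippet as illustrative rather than as text
>> from the document:
>>
>> #include <mpi.h>
>>
>> /* Illustrative sketch; assumes MPI_ERRORS_RETURN is set on 'comm'.
>>  * A completion operation may return MPI_ERR_IN_STATUS even when the
>>  * only "error" is the warning that wildcard receives were disabled
>>  * by a process failure. */
>> void check_wildcard_recvs(MPI_Comm comm, int *buf_a, int *buf_b)
>> {
>>     MPI_Request reqs[2];
>>     MPI_Status  stats[2];
>>     int rc, i;
>>
>>     MPI_Irecv(buf_a, 1, MPI_INT, 0,              0, comm, &reqs[0]);
>>     MPI_Irecv(buf_b, 1, MPI_INT, MPI_ANY_SOURCE, 0, comm, &reqs[1]);
>>
>>     rc = MPI_Waitall(2, reqs, stats);
>>     if (rc == MPI_ERR_IN_STATUS) {
>>         for (i = 0; i < 2; i++) {
>>             if (stats[i].MPI_ERROR == MPI_ERR_ANY_SOURCE_DISABLED) {
>>                 /* only the warning: decide whether to re-enable
>>                  * MPI_ANY_SOURCE, drain, or cancel the request */
>>             }
>>         }
>>     }
>> }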
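>>
>> And for 17.8.4, the creation loop example is along these lines (again
>> pseudo-C: the MPI_Comm_validate argument list is simplified, see
>> 17.7.1 for the exact binding):
>>
>> /* Sketch of the creation loop; assumes MPI_ERRORS_RETURN on 'comm'.
>>  * Because creation operations return uniformly at all processes,
>>  * every process agrees on whether the dup succeeded, validates, and
>>  * retries together. */
>> int dup_until_success(MPI_Comm comm, MPI_Comm *newcomm)
>> {
>>     int rc;
>>     do {
>>         rc = MPI_Comm_dup(comm, newcomm);
>>         if (rc != MPI_SUCCESS) {
>>             MPI_Group failed;
>>             /* collectively acknowledge the failure(s) so 'comm' is
>>              * collectively active again, then retry */
>>             MPI_Comm_validate(comm, &failed);  /* simplified binding */
>>             if (failed != MPI_GROUP_EMPTY)
>>                 MPI_Group_free(&failed);
>>         }
>>     } while (rc != MPI_SUCCESS);
>>     return rc;
>> }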
>>
>> Open Discussion Items:
>> ----------------------
>> * 17.6: Rename MPI_Comm_reenable_any_source to *_validate_* (?)
>> * 3.10 & 17.6.2 : Do these sections conflict? Should the status only
>> be associated with the 'source' since MPI_Recv would have returned the
>> status value if the operations were called separately?
>>
>>
>> --
>> Joshua Hursey
>> Postdoctoral Research Associate
>> Oak Ridge National Laboratory
>> http://users.nccs.gov/~jjhursey
>
>
>
> --
> Joshua Hursey
> Postdoctoral Research Associate
> Oak Ridge National Laboratory
> http://users.nccs.gov/~jjhursey
--
Joshua Hursey
Postdoctoral Research Associate
Oak Ridge National Laboratory
http://users.nccs.gov/~jjhursey