[mpiwg-tools] Meeting reminder - performance tools

Marc-Andre Hermanns hermanns at jara.rwth-aachen.de
Thu Sep 1 05:26:12 CDT 2016


Hi all,

This is your friendly reminder of our regularly
scheduled Tools WG call *today* at 11 a.m. EDT.

This week's focus is performance tools.

The link is here:
https://cisco.webex.com/ciscosales/j.php?MTID=m0fb3e4a68162bff5874849e4700b806a

The Agenda for today:
- Checklist for face-to-face in Edinburgh
- MPI_T Events (current version of Header attached)

Also, please send me an email if you still need access to the
repository for the MPI_T event header file, and I will send you an invitation.

My plan is to discuss which text passages we need in order to bring the
current header into standard text form that we can discuss at the
face-to-face.

Furthermore, I'd like to discuss whether we want to get the
synchronous parts in first and then tackle the asynchronous parts, or
whether this should go in as a single big update to the MPI_T header.

Looking forward to talking to you.

Cheers,
Marc-Andre
-- 
Marc-Andre Hermanns
Jülich Aachen Research Alliance,
High Performance Computing (JARA-HPC)
Jülich Supercomputing Centre (JSC)

Schinkelstrasse 2
52062 Aachen
Germany

Phone: +49 2461 61 2509 | +49 241 80 24381
Fax: +49 2461 80 6 99753
www.jara.org/jara-hpc
email: hermanns at jara.rwth-aachen.de




-------------- next part --------------
/// @brief Event handler prototype to be implemented by the tool.
///
/// @param[in] event
///     Event handle of triggered event
/// @param[in,out] user_data
///     User data provided on event handle creation
///
typedef void (*MPI_T_event_handler_func)(MPI_T_event event,
                                         void*       user_data);

/// @brief Registration flags
///
/// @note Asynchronous mode is not yet handled by this API.
///
typedef enum {
    MPI_T_EVENT_MODE_SYNCHRONOUS  = 0, ///< Event is triggered as it occurs
    MPI_T_EVENT_MODE_ASYNCHRONOUS = 1, ///< Event is accumulated in a buffer
} MPI_T_Event_mode;

/// @brief Get number of supported events
/// 
/// @param[out] num_events
///     Number of events defined
///
int
MPI_T_event_get_num(int* num_events);

/// @brief Get event information for an event at a given index
///
/// @param[in] index
///     Event index
/// @param[out] name
///     Unique name of the event
/// @param[out] name_len
///     Length of the name
/// @param[out] desc
///     Description of the event
/// @param[out] desc_len
///     Length of the string/buffer %desc
/// @param[out] verbosity
///     Verbosity level of the event
/// @param[in,out] list_of_datatypes
///     List of datatypes used to encode event data
/// @param[in,out] num_elements
///     Size of the allocated array (in) and number of elements
///     written to the array by the runtime (out)
/// @param[out] bind
///     Object type this event is bound to
/// @param[out] mode
///     Flag field indicating event characteristics, e.g., synchronicity
///
/// @return
///     MPI_T return code; an error code is returned if list_of_datatypes
///     is too small.
///
int
MPI_T_event_get_info(int           index,
                     char*         name,
                     int*          name_len,
                     char*         desc,
                     int*          desc_len,
                     int*          verbosity,
                     MPI_Datatype* list_of_datatypes,
                     int*          num_elements,
                     int*          bind,
                     int*          mode);
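
/// Example (sketch, not part of the proposed API): enumerate all events
/// and query their metadata. Buffer sizes, the in/out use of the length
/// arguments, and the printf output (stdio.h) are illustrative
/// assumptions.
///
/// @code
/// void list_events(void) {
///     int num_events = 0;
///     MPI_T_event_get_num(&num_events);
///     for (int i = 0; i < num_events; ++i) {
///         char         name[256], desc[1024];
///         int          name_len = sizeof(name);
///         int          desc_len = sizeof(desc);
///         MPI_Datatype datatypes[16];
///         int          num_elements = 16; /* in: array size; out: elements written */
///         int          verbosity, bind, mode;
///         MPI_T_event_get_info(i, name, &name_len, desc, &desc_len,
///                              &verbosity, datatypes, &num_elements,
///                              &bind, &mode);
///         printf("event %d: %s (%d data elements)\n", i, name, num_elements);
///     }
/// }
/// @endcode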


/// @brief Get event index for a given name
///
/// @param[in] name
///     NULL-terminated string specifying the event name
/// @param[out] index
///     Event index
///
/// @return
///     MPI_T return code
///
int
MPI_T_event_get_index(char* name, int* index);
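
/// Example (sketch): look up an event by its unique name before
/// allocating a handle for it. The event name shown is hypothetical.
///
/// @code
/// int index = -1;
/// MPI_T_event_get_index("mpi.p2p.message_received", &index);
/// @endcode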

/// @brief Register an event and allocate handle
///
/// @param[in] index
///     Event index
/// @param[in] obj_handle
///     Concrete MPI handle (e.g., MPI_Comm, MPI_File, ...), not a pointer to a handle.
/// @param[in] user_data
///     Pointer to user data for this registration
/// @param[in] event_handler
///     Callback invoked by the runtime when the event triggers
/// @param[out] event
///     New event handle
///
/// @return
///     MPI_T return code
///
/// @note This call does not block until the event is registered. The runtime
///     will register the event at the earliest possible time, which may be
///     after this call has returned.
///
int
MPI_T_event_handle_alloc(int                      index,
                         void*                    obj_handle,
                         void*                    user_data,
                         MPI_T_event_handler_func event_handler,
                         MPI_T_event*             event);
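
/// Example (sketch): register the handler from the MPI_T_event_read
/// example below for an event bound to a communicator. The index is
/// assumed to come from MPI_T_event_get_index; per the note on
/// obj_handle, the MPI_Comm is passed as the concrete handle, not a
/// pointer to it.
///
/// @code
/// MPI_T_event event;
/// void* user_data = malloc(sizeof(struct my_data)); /* my_data: tool-defined */
/// MPI_T_event_handle_alloc(index, (void*)MPI_COMM_WORLD, user_data,
///                          my_event_handler, &event);
/// @endcode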

/// @brief Unregister and deallocate an event handle
///
/// @param[in,out] event
///     Event handle to unregister
///
/// @return
///     MPI_T return code
///
/// @note When is this unregistered? When would it be safe to deallocate
///     user_data? Could unregister block until 'done'?
/// 
int
MPI_T_event_unregister(MPI_T_event* event);
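
/// Example (sketch): tear down the registration from the previous
/// example. Whether user_data may be freed immediately depends on the
/// open question above; this assumes unregistration has completed when
/// the call returns.
///
/// @code
/// MPI_T_event_unregister(&event);
/// free(user_data); /* safe only if unregister blocks until 'done' */
/// @endcode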

/// @brief Read next value from event handle
///
/// @param[in] event_handle
///     Event handle
/// @param[in] datatype
///     Datatype to be read from event data
/// @param[in,out] buffer
///     User-allocated buffer into which the data is copied
///
/// @return
///     MPI_T return code
///
/// @note If NULL is passed as the buffer argument, the 
///     element is skipped.
///
/// @note Should we provide a means to explicitly end
///     the processing of this event?
///
/// @code
/// /* mystruct: tool-defined storage for the event's data (assumed
///  * declared elsewhere); the datatype constants are the predefined
///  * MPI datatypes matching the event's list_of_datatypes. */
/// void
/// my_event_handler(MPI_T_event event, void* data) {
///     MPI_T_event_read(event, MPI_DOUBLE, &mystruct.time);
///     MPI_T_event_read(event, MPI_INT, &mystruct.rank);
///     MPI_T_event_read(event, MPI_INT, NULL); /* skip this element */
///     MPI_T_event_read(event, MPI_UNSIGNED_LONG, &mystruct.id);
/// }
/// @endcode
/// 
int
MPI_T_event_read(MPI_T_event  event,
                 MPI_Datatype datatype,
                 void*        buffer);
