[Mpi3-tools] One more suggested change to MPI_T / moving count from get_info to alloc_handle

Kathryn Mohror kathryn at llnl.gov
Thu Jun 30 16:04:59 CDT 2011


Hi all,

I spoke with Adam Moody and Christof about this issue a bit ago.  We all 
like the proposed change -- moving the count to the allocate_handle 
function.

Kathryn

On 6/30/2011 12:10 PM, Martin Schulz wrote:
> Hi all,
>
> Christof has found a potential problem in the API design that
> I think is worth discussing and (at least IMHO) worth changing:
>
> The current design returns the "count" of a variable, i.e., how many
> data elements are used to return or set the data, during the
> Get_info calls. For some variables, though, the data may span
> multiple elements, e.g., one element per rank. In that case the
> number of elements depends on the MPI object the variable is
> bound to and hence is not known until we allocate the matching
> handle.
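>
> For illustration, the info query in the current draft looks roughly
> like this (names and argument lists are only a sketch, not the
> actual draft text):
>
>    /* Sketch of the current design: the count is part of the static
>       variable description, queried once per variable index and
>       before any handle exists. */
>    int MPIT_Pvar_get_info(int var_index,
>                           char *name, int *name_len,
>                           MPI_Datatype *datatype,
>                           int *count,   /* number of data elements */
>                           int *bind);   /* object type it binds to */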
>
> Examples of this kind of variable are eager limits that can be
> set for individual rank pairs, or the number of packets received
> or sent, again per communication pair. In both cases the
> variable would be bound to / associated with a Comm object.
>
> The only way to support this in the current interface proposal
> is to return the largest count that could ever be needed. In the
> examples above that would be the size of MPI_COMM_WORLD, which
> wouldn't scale.
>
> One (easy) way to solve this (and the credit for this goes to
> Christof) is to move returning the count from the Get_info
> calls to the Handle_allocate calls. This way, the count could
> be set based on the MPI object we are associating the
> variable with.
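>
> Roughly, the change would look like this (again just a sketch with
> made-up argument lists, not proposed standard text):
>
>    /* Count is no longer part of the static description ... */
>    int MPIT_Pvar_get_info(int var_index,
>                           char *name, int *name_len,
>                           MPI_Datatype *datatype,
>                           int *bind);
>
>    /* ... and is instead returned when the handle is allocated, so
>       it can depend on the object the variable is bound to, e.g.
>       the size of the communicator that is passed in. */
>    int MPIT_Pvar_handle_allocate(int var_index,
>                                  void *mpi_object,   /* e.g. a Comm */
>                                  MPIT_Pvar_handle *handle,
>                                  int *count);
>
> A tool would then size its buffers per handle rather than per
> variable, i.e., allocate count elements right after the
> handle_allocate call and pass that buffer to the read call.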
>
> Does anybody see any problems with this change? Note that
> implementations that only use scalar variables remain valid:
> they simply return a count of one for each allocated handle.
>
> Comments?
>
> Thanks,
>
> Martin
>
>
> ________________________________________________________________________
> Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulzm
> CASC @ Lawrence Livermore National Laboratory, Livermore, USA
>
>
>
> _______________________________________________
> Mpi3-tools mailing list
> Mpi3-tools at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-tools


