[Mpi3-rma] non-contiguous support in RMA & one-sided pack/unpack (?)

Jeff Hammond jeff.science at gmail.com
Wed Sep 16 08:37:04 CDT 2009


Could there be two "xfer" calls - one for contiguous and one for
non-contiguous - instead of going the assertion route?  Is there
something under the hood that I do not appreciate that requires that
MPI know at the outset and for all time whether or not non-contiguous
operations will occur to optimize fully for the contiguous ones?
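
To make that concrete, here is a rough sketch of the two-call split;
every MPIX_* name and signature below is invented for illustration:

    #include <mpi.h>

    /* Fast path: count elements of a predefined datatype, contiguous
       at both origin and target, so no datatype interrogation is
       needed and the call can map directly onto a native put/get. */
    int MPIX_Xfer_contig(void *origin_addr, int count,
                         MPI_Datatype predefined_type, int target_rank,
                         MPI_Aint target_disp, MPI_Win win);

    /* General path: arbitrary, possibly derived, datatypes on each
       side, with whatever packing the implementation requires. */
    int MPIX_Xfer_general(void *origin_addr, int origin_count,
                          MPI_Datatype origin_type, int target_rank,
                          MPI_Aint target_disp, int target_count,
                          MPI_Datatype target_type, MPI_Win win);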

Perhaps "xfer" could roll up the various possibilities - contiguous,
vector, indexed, strided, and completely general non-contiguous - as a
function argument rather than hiding these details inside of datatype,
which would preclude certain optimizations?  Contiguous one-sided
transfers could be a macro when the underlying communication layer
supports them natively (e.g. DCMF), bypassing all the datatype
interrogation that would occur in the general case.
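
Concretely, the transfer class could be an explicit argument, with the
geometry for the non-contiguous classes carried alongside it rather
than inside a derived datatype (all names below are invented):

    typedef enum {
        MPIX_XFER_CONTIG,   /* predefined type, unit stride       */
        MPIX_XFER_VECTOR,   /* fixed blocklength and stride       */
        MPIX_XFER_INDEXED,  /* blocklengths plus displacements    */
        MPIX_XFER_STRIDED,  /* multi-dimensional strided section  */
        MPIX_XFER_GENERAL   /* arbitrary derived datatype         */
    } MPIX_Xfer_class;

    /* Transfer geometry; pass NULL for MPIX_XFER_CONTIG. */
    typedef struct {
        MPI_Aint stride;      /* in bytes; VECTOR and STRIDED */
        int      blocklength; /* VECTOR and INDEXED           */
    } MPIX_Xfer_geom;

    int MPIX_Xfer(MPIX_Xfer_class kind, void *origin_addr, int count,
                  MPI_Datatype type, int target_rank,
                  MPI_Aint target_disp, const MPIX_Xfer_geom *geom,
                  MPI_Win win);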

As an application developer, I would like to have both available,
with some way to distinguish between the variants, since in every
case I can think of it is known at compile time whether or not the
data is contiguous.
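
For example, in a halo exchange on a C-ordered nx-by-ny array of
doubles, rows are contiguous and columns are strided, and both facts
are known when the code is written.  Continuing the invented API
above (nx, ny, row_buf, col_buf, north, east, and win are assumed to
be set up elsewhere):

    /* Row halo: contiguous, so the fast path is named explicitly. */
    MPIX_Xfer(MPIX_XFER_CONTIG, row_buf, ny, MPI_DOUBLE,
              north, 0, NULL, win);

    /* Column halo: strided, with geometry known at compile time. */
    MPIX_Xfer_geom col = { ny * sizeof(double), 1 };
    MPIX_Xfer(MPIX_XFER_VECTOR, col_buf, nx, MPI_DOUBLE,
              east, 0, &col, win);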

Thanks,

Jeff

On Wed, Sep 16, 2009 at 8:13 AM, Richard Treumann <treumann at us.ibm.com> wrote:
> While it is not the Forum's responsibility to mandate performance for a
> particular implementation, it is certainly within the Forum's purview to
> press for models that can be implemented with high performance and deprecate
> models that are destined for poor performance on many or most architectures.
>
> Broad support for non-contiguous and user defined datatypes could easily be
> a deal breaker for a high performance implementation.
>
> This is another case where MPI_Init time assertions could be useful. If an
> application could declare in MPI_Init_asserted that it will use only
> predefined, contiguous datatypes in RMA operations, then an implementation
> would be able to exploit any optimizations that depend on contiguous
> datatypes and raise an error if the application tried to use a
> non-contiguous or user defined datatype.
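>
> For illustration only - the MPI_Init_asserted name comes from the idea
> above, but the signature and the assertion constant are made up - the
> call might look like:
>
>     /* Hypothetical: promise at startup that all RMA operations will
>        use only predefined, contiguous datatypes. */
>     MPI_Init_asserted(&argc, &argv, MPI_ASSERT_RMA_CONTIG_ONLY);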
>
> The assertion would not relieve the implementation of the need to include
> support for non-contiguous or user defined datatypes, because any application
> that did not assert it would use ONLY contiguous & predefined types would
> have a right to expect the broader support.
>
> If generalized readiness for RMA calls that just might use an arbitrary
> datatype means all RMA calls in the application pay a performance penalty,
> then the application author will need to decide whether he really needs
> non-contiguous RMAs badly enough to pay the penalty. I suspect many or even
> most applications can do what they need to do without resorting to
> non-contiguous RMA.
>
> Recall that an assertion by an application does not require an
> implementation to do anything different. If some MPI implementation can
> provide high performance RMA for contiguous & predefined datatypes and
> still have robust support for generalized datatypes, it can simply ignore
> the assertion.
>
> Dick
>
> Dick Treumann - MPI Team
> IBM Systems & Technology Group
> Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
> Tele (845) 433-7846 Fax (845) 433-8363
>
>
> mpi3-rma-bounces at lists.mpi-forum.org wrote on 09/15/2009 09:18:42 PM:
>
>> Re: [Mpi3-rma] non-contiguous support in RMA & one-sided pack/unpack (?)
>> From: Jeff Hammond
>> To: MPI 3.0 Remote Memory Access working group
>> Date: 09/15/2009 09:19 PM
>> Sent by: mpi3-rma-bounces at lists.mpi-forum.org
>> Please respond to "MPI 3.0 Remote Memory Access working group"
>>
>> I must be blind for missing that.  Sorry.
>>
>> I understand that it is not the MPI Forum's responsibility to ensure
>> efficient implementations of the standard, but I am still concerned
>> about the performance of even simple non-contiguous operations based
>> upon what I see with ARMCI.  I guess I'll have to wait and see what
>> the various groups/vendors produce.
>>
>> Thanks,
>>
>> jeff
>>
>>
> _______________________________________________
> mpi3-rma mailing list
> mpi3-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
>
>



-- 
Jeff Hammond
Argonne Leadership Computing Facility
jhammond at mcs.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
http://home.uchicago.edu/~jhammond/



