[mpiwg-rma] Use of MPI_SHORT_INT, etc with MPI_Accumulate

Jim Dinan james.dinan at gmail.com
Fri May 13 09:12:51 CDT 2016


Some masochist actually wrote a unit test for this, not too long ago.  :)

http://git.mpich.org/mpich.git/blob/HEAD:/test/mpi/rma/acc-loc.c
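
For anyone who doesn't want to click through, the shape of it is roughly the
following (a minimal sketch from memory, not the actual MPICH source; it
assumes MPI_SHORT_INT matches the usual C struct of a short followed by an
int):

    #include <mpi.h>
    #include <stdio.h>

    /* MPI_SHORT_INT pairs a short value with an int index (MINLOC/MAXLOC). */
    typedef struct { short val; int loc; } short_int;

    int main(int argc, char **argv)
    {
        short_int *base, mine;
        MPI_Win win;
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        MPI_Win_allocate(sizeof(short_int), sizeof(short_int),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
        base->val = -1;
        base->loc = -1;
        MPI_Win_fence(0, win);

        /* Every rank contributes (rank + 1, rank) to rank 0's window;
         * MPI_MAXLOC should leave (nprocs, nprocs - 1) behind. */
        mine.val = (short)(rank + 1);
        mine.loc = rank;
        MPI_Accumulate(&mine, 1, MPI_SHORT_INT, 0, 0, 1, MPI_SHORT_INT,
                       MPI_MAXLOC, win);
        MPI_Win_fence(0, win);

        if (rank == 0)
            printf("max %d at rank %d\n", (int)base->val, base->loc);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }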

 ~Jim.

On Thu, May 12, 2016 at 3:50 PM, Jeff Hammond <jeff.science at gmail.com>
wrote:

>
>
> On Thu, May 12, 2016 at 1:38 PM, Nathan Hjelm <hjelmn at mac.com> wrote:
> >
> > Ok, so in this case “predefined type” means any type defined by MPI.
> That is not clear from the text and it is reasonable to assume that
> non-homogeneous (and possibly non-contiguous) datatypes are not allowed.
>
> It is absolutely clear from page 12 of MPI 3.1:
>
> A predefined datatype is a datatype with a predefined (constant) name
> (such as MPI_INT, MPI_FLOAT_INT, or MPI_PACKED) or a datatype constructed
> with MPI_TYPE_CREATE_F90_INTEGER, MPI_TYPE_CREATE_F90_REAL, or
> MPI_TYPE_CREATE_F90_COMPLEX. The former are named whereas the latter are
> unnamed.
>
> > MPI_MINLOC and MPI_MAXLOC could be assumed to be allowed only for
> MPI_2INT, MPI_2REAL, etc., for example.
>
> Except for MPI_2INT, the homogeneous pair types are for Fortran.  If we
> added this restriction, we might as well remove MAXLOC/MINLOC from RMA
> altogether.
>
> > Allowing MPI_SHORT_INT, MPI_DOUBLE_INT, etc. requires MPI implementations
> to write special cases to handle these potentially non-contiguous datatypes.
>
> MPI RMA permits the use of user-defined non-contiguous datatypes where all
> basic components are of the same predefined datatype.  The only difference
> here is that the atomicity covers the whole pair type.  However, since
> accumulate atomicity is only assured when the same datatype is used, you
> can implement e.g. MPI_LONG_DOUBLE_INT using locking and use atomic
> instructions for other types where it is easier.
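>
> For what it's worth, the awkwardness comes from the layout.  MPI_SHORT_INT
> describes something like the struct below, which on common ABIs has padding
> between the members, so no single hardware atomic spans the whole pair (a
> sketch of the typical layout, not normative text):
>
>     #include <stddef.h>
>
>     /* Typical layout behind MPI_SHORT_INT; padding is ABI-dependent. */
>     struct short_int {
>         short val;  /* value compared by MINLOC/MAXLOC */
>         int   loc;  /* index carried along with it     */
>     };
>     /* On common ABIs: sizeof(short) == 2 but
>      * offsetof(struct short_int, loc) == 4, so bytes 2-3 are padding --
>      * the significant bytes are non-contiguous, and a lock-based update
>      * path is the natural fallback. */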
>
> > A lot of extra code for what is likely no benefit to end users.
>
> How is it *a lot* of extra code?  Do you not already have a "stupid" RMA
> code path for at least some of MPI_C(XX)_LONG_DOUBLE,
> MPI_C(XX)_DOUBLE_COMPLEX, MPI_C(XX)_LONG_DOUBLE_COMPLEX and/or MPI_PROD?
> Remember that the accumulate atomicity rules are very strict, which permits
> a fast (potentially hardware) implementation for some (op,type) pairs and a
> slow but functional implementation for others.
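>
> Concretely, the dispatch can be as simple as the following (hypothetical
> helper names, not any particular implementation's internals):
>
>     /* Per-(op, datatype) dispatch: hardware atomics where available,
>      * otherwise a mutex-protected software path. */
>     if (op_type_has_hw_atomic(op, datatype)) {
>         hw_atomic_accumulate(win, target_rank, op, datatype, origin);
>     } else {
>         win_mutex_lock(win, target_rank);   /* slow but correct */
>         software_accumulate(win, target_rank, op, datatype, origin);
>         win_mutex_unlock(win, target_rank);
>     }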
>
> >  Does anyone know of a code that makes use of this feature?
>
> I know users that depend upon the symmetry of MPI_Accumulate and
> MPI_Reduce w.r.t. MAXLOC and MINLOC.  I don't know what state the code is
> in, but PETSc developers have said they want to use this.
>
> Jeff
>
> > -Nathan
> >
> >
> > > On May 11, 2016, at 8:35 AM, William Gropp <wgropp at illinois.edu>
> wrote:
> > >
> > > Read this as (predefined datatype) or (derived datatype where all
> basic components are of the same predefined datatype).
> > >
> > > MPI_SHORT_INT is a predefined datatype, and so is valid for accumulate.
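> > >
> > > For example, both of these are legal (my illustration, assuming buf,
> > > ibuf, and win are already set up):
> > >
> > >     /* Case 1: a predefined pair type. */
> > >     MPI_Accumulate(buf, 1, MPI_SHORT_INT, 0, 0, 1, MPI_SHORT_INT,
> > >                    MPI_MAXLOC, win);
> > >
> > >     /* Case 2: a derived type whose basic components are all the same
> > >      * predefined type (here, MPI_INT). */
> > >     MPI_Datatype vec;
> > >     MPI_Type_vector(4, 1, 2, MPI_INT, &vec);
> > >     MPI_Type_commit(&vec);
> > >     MPI_Accumulate(ibuf, 1, vec, 0, 0, 1, vec, MPI_SUM, win);
> > >     MPI_Type_free(&vec);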
> > >
> > > Bill
> > >
> > > William Gropp
> > > Director, Parallel Computing Institute
> > > Thomas M. Siebel Chair in Computer Science
> > > Chief Scientist, NCSA
> > > University of Illinois Urbana-Champaign
> > >
> > >
> > >
> > >
> > >
> > > On May 11, 2016, at 9:02 AM, Nathan Hjelm <hjelmn at mac.com> wrote:
> > >
> > >> I have a user who is trying to use MPI_SHORT_INT as the origin and
> target datatype arguments of MPI_Accumulate. My interpretation of the
> standard does not allow this type. I am justifying this because of MPI 3.1
> § 11.3.4, p. 425, lines 2-8:
> > >>
> > >> Each datatype argument must be a predefined datatype or a derived
> datatype, where
> > >> all basic components are of the same predefined datatype. Both
> datatype arguments must
> > >> be constructed from the same predefined datatype.
> > >>
> > >> MPI_SHORT_INT, MPI_LONG_DOUBLE_INT, etc. are not in any of the A.1
> tables labeled as “predefined datatypes”. They show up in a separate list
> of datatypes for reduction functions. Since these datatypes are not
> “predefined datatypes” and are not composite datatypes of the “same
> predefined datatype”, they are not valid for MPI_Accumulate,
> MPI_Get_accumulate, etc. Am I wrong in my interpretation?
> > >>
> > >> -Nathan
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>