[Mpi-forum] large count support not as easy as people seem to have thought

Jeff Hammond jeff.science at gmail.com
Tue May 6 15:06:20 CDT 2014


On Tue, May 6, 2014 at 2:50 PM, Rob Latham <robl at mcs.anl.gov> wrote:
>
>
> On 05/06/2014 12:19 PM, Jeff Hammond wrote:
>>
>> Issue #2: cannot use built-in reduce ops.
>>
>> Once we rule out using built-in ops with our large-count datatypes, we
>> must reimplement all of the reduction operations required.  I find
>> this to be nontrivial.  I have not yet figured out how to get at the
>> underlying datatype info in a simple manner.  It appears that
>> MPI_Type_get_envelope exists for this purpose, but it's a huge pain to
>> have to call this function when all I need to know is the number of
>> built-in datatypes so that I can apply my clever trick and use
>> MPI_Reduce_local inside my user-defined operation.
>
>
> To determine the number of built-in datatypes, yes, one must recursively
> call MPI_Type_get_envelope and MPI_Type_get_contents.  See the ROMIO
> datatype flattening code for just how much of a pain MPI_Type_get_contents
> and MPI_Type_get_envelope are.

Indeed, I finished debugging my implementation of that for BigMPI a
few minutes ago
(https://github.com/jeffhammond/BigMPI/blob/master/src/type_contiguous_x.c).
 It was obnoxious just to do it for the single case where I am parsing
my own datatype and thus know exactly what should be in it.  It takes
6 MPI calls to convert from the contiguous bigtype to a
(bigcount,basictype) pair, just as it takes 6 MPI calls to go the
other way.

> You might find Rob Ross's libmpitypes approach a bit better, but that code
> is designed for applying functions to the data of a datatype, deliberately
> bypassing the get_envelope/get_contents information.

I just need the decoder ring so I can write my own reduction
operations using MPI_Reduce_local, so I think I'll avoid that
dependency.  I'd rather be implementation-agnostic anyway, and I guess
libmpitypes is MPICH-oriented.

Best,

Jeff


> --
> Rob Latham
> Mathematics and Computer Science Division
> Argonne National Lab, IL USA



-- 
Jeff Hammond
jeff.science at gmail.com


