[MPIWG Fortran] Data type of F08 subarray

Junchao Zhang jczhang at mcs.anl.gov
Tue May 13 12:44:46 CDT 2014


On Tue, May 13, 2014 at 11:56 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:

> > REAL s(100)
> > MPI_SEND(s(1:100:5), 3, dtype, ...)
> > dtype can only be MPI_REAL. In other words, dtype is kind of
> > redundant here since the type map is actually specified by the
> > subarray.
>
> No, if dtype is, e.g., a vector datatype, then it is applied to a virtual
> contiguous array that consists of s(1), s(6), s(11) ...
>
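
So, if I understand it, the semantics are as if the strided elements were
first gathered into a contiguous temporary and the datatype were then
applied to that temporary. A minimal Fortran sketch of my reading (the
vector type here is just an assumed example):

  USE mpi_f08
  REAL :: s(100), t(20)
  TYPE(MPI_Datatype) :: dtype
  INTEGER :: ierror

  ! An assumed example datatype: 2 blocks of 2 REALs with stride 3
  CALL MPI_Type_vector(2, 2, 3, MPI_REAL, dtype, ierror)
  CALL MPI_Type_commit(dtype, ierror)

  ! MPI_SEND(s(1:100:5), 3, dtype, ...) then behaves as if:
  t = s(1:100:5)   ! the virtual contiguous array: s(1), s(6), s(11), ..., s(96)
  CALL MPI_Send(t, 3, dtype, 0, 0, MPI_COMM_WORLD, ierror)   ! dtype applies to t, not to s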

That is nasty. Then I will have two datatypes, and I cannot assume any
relationship between them. I would have to allocate a scratch buffer for
the virtual contiguous array in MPI_ISEND etc., do the memory copying, and
then free the buffer in MPI_WAIT. I'm not sure that can be implemented
efficiently.
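
Roughly, the implementation I am worried about would look like the
following sketch (in reality this bookkeeping would live inside the MPI
library, and the count and vector datatype are only assumed examples):

  USE mpi_f08
  REAL :: s(100)
  REAL, ALLOCATABLE :: tmp(:)
  TYPE(MPI_Datatype) :: dtype
  TYPE(MPI_Request) :: rq
  INTEGER :: ierror

  CALL MPI_Type_vector(2, 2, 3, MPI_REAL, dtype, ierror)   ! assumed derived type of REALs
  CALL MPI_Type_commit(dtype, ierror)

  ! MPI_ISEND(s(1:100:5), 3, dtype, ...) would behave as if:
  ALLOCATE(tmp(20))
  tmp = s(1:100:5)   ! copy the strided subarray into a contiguous scratch buffer
  CALL MPI_Isend(tmp, 3, dtype, 0, 0, MPI_COMM_WORLD, rq, ierror)
  CALL MPI_Wait(rq, MPI_STATUS_IGNORE, ierror)
  DEALLOCATE(tmp)    ! the scratch buffer can be freed only after the wait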

>
> ----- Original Message -----
> > From: "Junchao Zhang" <jczhang at mcs.anl.gov>
> > To: "MPI-WG Fortran working group" <mpiwg-fortran at lists.mpi-forum.org>
> > Sent: Tuesday, May 13, 2014 6:23:08 PM
> > Subject: Re: [MPIWG Fortran] Data type of F08 subarray
> >
> > Thanks, Rolf. I feel there is a jump from the contiguous subarray case
> > to the non-contiguous subarray case.
> >
> > For contiguous subarray, such as
> >
> > REAL s(100)
> >
> > MPI_SEND(s(2:50), 3, dtype, ...)
> >
> > Here, the buffer argument only gives the start address, and dtype can be
> > anything, e.g., either a basic type or a derived type created by
> > MPI_Type_vector() etc.
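
For instance, presumably something like the following is legal (a sketch;
the vector type is only an assumed example):

  USE mpi_f08
  REAL :: s(100)
  TYPE(MPI_Datatype) :: dtype
  INTEGER :: ierror

  CALL MPI_Type_vector(3, 2, 5, MPI_REAL, dtype, ierror)   ! an assumed derived type of REALs
  CALL MPI_Type_commit(dtype, ierror)
  ! s(2:50) is contiguous, so it effectively just provides the start address s(2):
  CALL MPI_Send(s(2:50), 3, dtype, 0, 0, MPI_COMM_WORLD, ierror)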
> >
> > For non-contiguous subarray, such as
> >
> > REAL s(100)
> > MPI_SEND(s(1:100:5), 3, dtype, ...)
> > dtype can only be MPI_REAL. In other words, dtype is kind of
> > redundant here since the type map is actually specified by the
> > subarray.
> >
> > --Junchao Zhang
> >
> > On Tue, May 13, 2014 at 10:20 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
> >
> > Dear Junchao,
> >
> > MPI-3.0 p25:7-8 describes only communication in which the language type
> > of the buffer argument matches the MPI datatype used
> > in the datatype argument.
> > The same holds on p83:36-37.
> >
> > Therefore, the answer is no, and the compiler cannot detect
> > a mismatch between the language buffer specification and the
> > MPI datatype specification.
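
For example, if I read the rule correctly, a call like the following is
erroneous (a sketch; the count and slice are arbitrary):

  USE mpi_f08
  INTEGER :: v(100), ierror
  ! Erroneous: the language type is INTEGER, but the datatype argument says MPI_REAL
  CALL MPI_Send(v(1:9:2), 5, MPI_REAL, 0, 0, MPI_COMM_WORLD, ierror)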
> >
> > I hope my answer helps.
> >
> > Best regards
> > Rolf
> >
> > ----- Original Message -----
> > > From: "Junchao Zhang" <jczhang at mcs.anl.gov>
> > > To: "MPI-WG Fortran working group" <mpiwg-fortran at lists.mpi-forum.org>
> > > Sent: Tuesday, May 13, 2014 5:08:30 PM
> > > Subject: [MPIWG Fortran] Data type of F08 subarray
> > >
> > > p626 of MPI-3.0 gives the following example:
> > >
> > > REAL s(100), r(100)
> > > CALL MPI_Isend(s(1:100:5), 3, MPI_REAL, ..., rq, ierror)
> > >
> > > All nonblocking MPI functions behave as if the user-specified
> > > elements of choice buffers are copied to a contiguous scratch
> > > buffer
> > > in the MPI runtime environment. All datatype descriptions (in the
> > > example above, “3, MPI_REAL”) read and store data from and to this
> > > virtual contiguous scratch buffer ...
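
In other words, as I understand it, the "3, MPI_REAL" description reads the
first three elements of that virtual buffer, i.e., s(1), s(6), s(11). A
sketch of the as-if behavior:

  REAL :: s(100), scratch(20)

  scratch = s(1:100:5)   ! as-if copy: scratch(1)=s(1), scratch(2)=s(6), scratch(3)=s(11), ...
  ! the description "3, MPI_REAL" then reads scratch(1:3)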
> > >
> > > Here, the data type of s(100) matches MPI_REAL, so everything is
> > > fine. But I want to know whether MPI permits mismatched types; for
> > > example, can s(100) be an integer array? If the answer is no, then
> > > compilers cannot detect this error; if yes, then it is hard to
> > > implement. To avoid copying to a scratch buffer, I want to use MPI
> > > datatypes. But if I have two types, one given by the choice buffer
> > > itself and the other given by the MPI_Datatype argument, how could
> > > I do that?
> > >
> > > Any thoughts?
> > >
> > > Thanks
> > >
> > > --Junchao Zhang
> >
>
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)