[MPIWG Fortran] Data type of F08 subarray

Jeff Squyres (jsquyres) jsquyres at cisco.com
Wed May 14 12:50:40 CDT 2014


Additionally, when we talked about the implementation possibilities, we figured that an implementation would choose at run time whichever of the following was least expensive (a sketch of the first option follows the list):

- copy to a contiguous buffer, apply the datatype
- combine the subarray datatype and MPI datatype and use that
- ...something else
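
For concreteness, here is a minimal mpi_f08 sketch of the first option, assuming the section is s(1:100:5) and the supplied datatype is plain MPI_REAL (counts, ranks, and tags are illustrative only, not taken from the thread). The point is that the contiguous scratch copy has to outlive MPI_Isend and can only be released after the wait completes:

  ! Hedged sketch of option 1: gather the strided section into a
  ! contiguous scratch buffer, apply the datatype to that buffer, and
  ! keep the buffer alive until the nonblocking operation completes.
  ! (Self send/receive on one rank, purely for illustration.)
  program scratch_copy_sketch
    use mpi_f08
    implicit none
    real :: s(100), r(3)
    real, allocatable :: scratch(:)
    type(MPI_Request) :: reqs(2)
    integer :: ierror, i

    call MPI_Init(ierror)
    s = [(real(i), i = 1, 100)]
    scratch = s(1:100:5)            ! contiguous copy of the section
    call MPI_Irecv(r, 3, MPI_REAL, 0, 0, MPI_COMM_WORLD, reqs(1), ierror)
    call MPI_Isend(scratch, 3, MPI_REAL, 0, 0, MPI_COMM_WORLD, reqs(2), ierror)
    call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE, ierror)
    deallocate(scratch)             ! only now is it safe to free the copy
    call MPI_Finalize(ierror)
  end program scratch_copy_sketch

That is exactly the bookkeeping Junchao objects to below: the implementation has to remember the scratch buffer in the request and free it at MPI_WAIT time.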


On May 14, 2014, at 3:14 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:

> I want to comment on
>>>>> That is nasty. Then I will have two datatypes, and I cannot
>>>>> even assume any relationship between them. I have to
>>>>> allocate a scratch buffer for the virtual contiguous array in
>>>>> MPI_ISEND etc., do the memory copy, and then free the buffer in
>>>>> MPI_WAIT. I'm not sure one can implement it efficiently.
> 
> The reason for that interface is very simple:
> for blocking calls, the combination of strided arrays
> and complicated derived datatypes (e.g. produced with type_vector)
> has always been allowed. Therefore, the extension to
> nonblocking calls is defined with exactly the same meaning
> as for blocking calls.
> You may call this nasty. Sure. But it would have been
> nastier if we had defined that the meaning of datatype
> handles differs between blocking and nonblocking calls.
> 
> Rolf
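
As a reminder of what Rolf is referring to, here is a minimal mpi_f08 sketch of the combination that has long been legal in blocking calls: a strided section together with a derived datatype. The vector type is applied to the (virtually contiguous) sequence s(1), s(6), s(11), ..., so it selects s(1) and s(16). The counts and strides are illustrative, not taken from the thread:

  program blocking_combination_sketch
    use mpi_f08
    implicit none
    real :: s(100), r(2)
    type(MPI_Datatype) :: vec
    integer :: ierror, i

    call MPI_Init(ierror)
    s = [(real(i), i = 1, 100)]
    call MPI_Type_vector(2, 1, 3, MPI_REAL, vec, ierror)
    call MPI_Type_commit(vec, ierror)
    ! Blocking self send/receive; Sendrecv does both sides, so no deadlock.
    call MPI_Sendrecv(s(1:100:5), 1, vec, 0, 0, &
                      r, 2, MPI_REAL, 0, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierror)
    print *, r   ! expected: 1.0 16.0, i.e. s(1) and s(16)
    call MPI_Type_free(vec, ierror)
    call MPI_Finalize(ierror)
  end program blocking_combination_sketch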
> 
> 
> ----- Original Message -----
>> From: "Junchao Zhang" <jczhang at mcs.anl.gov>
>> To: "MPI-WG Fortran working group" <mpiwg-fortran at lists.mpi-forum.org>
>> Sent: Wednesday, May 14, 2014 12:11:30 AM
>> Subject: Re: [MPIWG Fortran] Data type of F08 subarray
>> 
>> On Tue, May 13, 2014 at 3:21 PM, William Gropp <wgropp at illinois.edu> wrote:
>> 
>> 
>> 
>> You can always create a new MPI datatype that is the composition of
>> the array section and the MPI datatype. For a vector datatype applied
>> to a simple (strided) section, for example, the new datatype simply
>> has the product of the two strides. Other cases are more complex but
>> always possible.
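
A minimal sketch of that composition, reusing the illustrative section and vector type from the blocking example above: the section s(1:100:5) has stride 5, and the user type MPI_Type_vector(2, 1, 3, MPI_REAL) has stride 3 over the virtual contiguous buffer, so the composed type over the real array s is a vector with the product stride 5 * 3 = 15, and nothing needs to be copied:

  program composed_type_sketch
    use mpi_f08
    implicit none
    real :: s(100), r(2)
    type(MPI_Datatype) :: composed
    type(MPI_Request)  :: reqs(2)
    integer :: ierror, i

    call MPI_Init(ierror)
    s = [(real(i), i = 1, 100)]
    ! Picks s(1) and s(16): the same type map that "section + vector"
    ! describes, but expressed directly over the base array s.
    call MPI_Type_vector(2, 1, 15, MPI_REAL, composed, ierror)
    call MPI_Type_commit(composed, ierror)
    call MPI_Irecv(r, 2, MPI_REAL, 0, 0, MPI_COMM_WORLD, reqs(1), ierror)
    call MPI_Isend(s, 1, composed, 0, 0, MPI_COMM_WORLD, reqs(2), ierror)
    call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE, ierror)
    call MPI_Type_free(composed, ierror)
    call MPI_Finalize(ierror)
  end program composed_type_sketch

This is only the easy case; as noted above, composing a general user datatype with a general section is more involved, but always expressible as another derived datatype.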
>> 
>> 
>> OK. If an MPI datatype is represented as a hierarchical tree, then
>> one needs to combine two datatype trees, which is complicated in
>> general.
>> In my view, if a user wants a complex derived datatype, he
>> should create it explicitly with MPI datatype calls instead of
>> implicitly through "subarray x datatype", since that makes the
>> code hard to understand. The MPI standard would be better off not
>> supporting that.
>> 
>> Bill
>> 
>> William Gropp
>> Director, Parallel Computing Institute
>> Thomas M. Siebel Chair in Computer Science
>> 
>> 
>> University of Illinois Urbana-Champaign
>> 
>> On May 13, 2014, at 3:02 PM, Junchao Zhang wrote:
>> 
>> On Tue, May 13, 2014 at 2:56 PM, Bill Long < longb at cray.com > wrote:
>> 
>> On May 13, 2014, at 2:48 PM, Junchao Zhang <jczhang at mcs.anl.gov> wrote:
>> 
>>> 
>>> On Tue, May 13, 2014 at 2:37 PM, Bill Long <longb at cray.com> wrote:
>>> 
>>> On May 13, 2014, at 2:19 PM, Junchao Zhang <jczhang at mcs.anl.gov> wrote:
>>> 
>>>> 
>>>> On Tue, May 13, 2014 at 2:00 PM, Bill Long <longb at cray.com> wrote:
>>>> 
>>>> On May 13, 2014, at 12:44 PM, Junchao Zhang <jczhang at mcs.anl.gov> wrote:
>>>> 
>>>>> 
>>>>> On Tue, May 13, 2014 at 11:56 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
>>>>>> REAL s(100)
>>>>>> MPI_SEND(s(1:100:5), 3, dtype, ...)
>>>>>> dtype can only be MPI_REAL. In other words, dtype is kind of
>>>>>> redundant here since the type map is actually specified by
>>>>>> the
>>>>>> subarray.
>>>> 
>>>> Right. The descriptor for the first argument has a member whose
>>>> value is a type code. In principle the library could verify that
>>>> this is compatible with the datatype handle supplied as the third
>>>> argument, and issue an error if not. Perhaps in a "debug" mode.
>>>> 
>>>>> 
>>>>> No, if dtype is a vector then it is applied to a virtual
>>>>> contiguous array that consists of s(1), s(6), s(11) …
>>>> 
>>>> dtype is not a vector, is it? That argument is a scalar of type
>>>> TYPE(MPI_DATATYPE). At least that is what the interface says.
>>>> 
>>>> Rolf meant that dtype is an MPI datatype created by MPI_Type_vector.
>>>> In that case, I will have two datatypes: one from the
>>>> MPI_Datatype argument, the other from the choice buffer itself.
>>>> That is hard to implement, and perhaps useless, since it
>>>> obscures the program.
>>> 
>>> OK. But one of the virtues of the new interface for users is that
>>> you do not have to create such data types anymore for array
>>> sections. Even if someone did do this, you can detect that the
>>> incoming data type is user-created, and in that case ignore the
>>> type code in the descriptor. If the program is valid at all, the
>>> element length, strides, and extents in the descriptor should be
>>> correct.
>>> 
>>> Yes, I can do that. The hard part is when the subarray is
>>> non-contiguous and the call is non-blocking. I need to allocate
>>> a scratch buffer and pack the subarray, but since the call is
>>> non-blocking, I cannot free the buffer right away.
>> 
>> Can you create, locally, a datatype that describes the layout of the
>> array section, and then call MPI_Isend again with that data type?
>> That avoids the contiguous local buffer and the problem of when to
>> free it.
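
A minimal sketch of that suggestion, assuming the user passed s(1:100:5) with count 3 and MPI_REAL (so only the section layout has to be described, not a composition of two types): build a vector type for the section, hand MPI_Isend the base of the array, and free the type immediately, since the standard keeps a datatype alive until the pending communication that uses it completes. It also relies on exactly the assumption Junchao mentions next, namely that the MPI_Datatype argument describes the subarray elements:

  program section_layout_sketch
    use mpi_f08
    implicit none
    real :: s(100), r(3)
    type(MPI_Datatype) :: section_type
    type(MPI_Request)  :: reqs(2)
    integer :: ierror, i

    call MPI_Init(ierror)
    s = [(real(i), i = 1, 100)]
    ! 3 elements of s taken with stride 5, i.e. s(1), s(6), s(11)
    call MPI_Type_vector(3, 1, 5, MPI_REAL, section_type, ierror)
    call MPI_Type_commit(section_type, ierror)
    call MPI_Irecv(r, 3, MPI_REAL, 0, 0, MPI_COMM_WORLD, reqs(1), ierror)
    call MPI_Isend(s, 1, section_type, 0, 0, MPI_COMM_WORLD, reqs(2), ierror)
    call MPI_Type_free(section_type, ierror)  ! only marked for deallocation
    call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE, ierror)
    call MPI_Finalize(ierror)
  end program section_layout_sketch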
>> 
>> 
>> 
>> That was my first thought. But then I realized I would have to assume
>> the MPI_Datatype argument applies to the subarray elements.
>> 
>> 
>> 
>> Cheers,
>> Bill
>> 
>> 
>>> 
>>> 
>>> Cheers,
>>> Bill
>>> 
>>>> 
>>>> Cheers,
>>>> Bill
>>>> 
>>>> 
>>>>> 
>>>>> That is nasty. Then I will have two datatypes, and I cannot
>>>>> even assume any relationship between them. I have to
>>>>> allocate a scratch buffer for the virtual contiguous array in
>>>>> MPI_ISEND etc., do the memory copy, and then free the buffer in
>>>>> MPI_WAIT. I'm not sure one can implement it efficiently.
>>>>> 
>>>>> 
>>>>> 
>>>>> ----- Original Message -----
>>>>>> From: "Junchao Zhang" < jczhang at mcs.anl.gov >
>>>>>> To: "MPI-WG Fortran working group" <mpiwg-fortran at lists.mpi-forum.org>
>>>>>> Sent: Tuesday, May 13, 2014 6:23:08 PM
>>>>>> Subject: Re: [MPIWG Fortran] Data type of F08 subarray
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Thanks, Rolf. I feel there is a jump from the contiguous-subarray
>>>>>> case to the non-contiguous one.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> For a contiguous subarray, such as
>>>>>> 
>>>>>> REAL s(100)
>>>>>> MPI_SEND(s(2:50), 3, dtype, ...)
>>>>>> 
>>>>>> the buffer argument only supplies the start address, so dtype can
>>>>>> be anything, e.g. a basic type or a derived type created by
>>>>>> MPI_Type_vector() etc.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> For a non-contiguous subarray, such as
>>>>>> 
>>>>>> REAL s(100)
>>>>>> MPI_SEND(s(1:100:5), 3, dtype, ...)
>>>>>> 
>>>>>> dtype can only be MPI_REAL. In other words, dtype is somewhat
>>>>>> redundant here, since the type map is actually specified by the
>>>>>> subarray.
>>>>>> 
>>>>>> --Junchao Zhang
>>>>>> 
>>>>>> 
>>>>>> On Tue, May 13, 2014 at 10:20 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
>>>>>> 
>>>>>> 
>>>>>> Dear Junchao,
>>>>>> 
>>>>>> MPI-3.0 p25:7-8 describes only communication in which the
>>>>>> language type of the buffer argument matches the MPI datatype
>>>>>> used in the datatype argument.
>>>>>> The same holds for p83:36-37.
>>>>>> 
>>>>>> Therefore, the answer is no, and the compiler cannot detect
>>>>>> a mismatch between the language buffer specification and the
>>>>>> MPI datatype specification.
>>>>>> 
>>>>>> I hope this answer helps.
>>>>>> 
>>>>>> Best regards
>>>>>> Rolf
>>>>>> 
>>>>>> ----- Original Message -----
>>>>>>> From: "Junchao Zhang" < jczhang at mcs.anl.gov >
>>>>>>> To: "MPI-WG Fortran working group" <mpiwg-fortran at lists.mpi-forum.org>
>>>>>>> Sent: Tuesday, May 13, 2014 5:08:30 PM
>>>>>>> Subject: [MPIWG Fortran] Data type of F08 subarray
>>>>>>> 
>>>>>>> Page 626 of MPI-3.0 gives the following example:
>>>>>>> 
>>>>>>> 
>>>>>>> REAL s(100), r(100)
>>>>>>> CALL MPI_Isend(s(1:100:5), 3, MPI_REAL, ..., rq, ierror)
>>>>>>> 
>>>>>>> All nonblocking MPI functions behave as if the user-specified
>>>>>>> elements of choice buffers are copied to a contiguous scratch
>>>>>>> buffer in the MPI runtime environment. All datatype descriptions
>>>>>>> (in the example above, “3, MPI_REAL”) read and store data from
>>>>>>> and to this virtual contiguous scratch buffer ...
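
Read literally, that "as if" rule makes the example above transfer s(1), s(6), and s(11): the count/datatype pair "3, MPI_REAL" is applied to the virtual contiguous copy of the section. A minimal self-contained illustration, assuming an MPI library whose mpi_f08 module actually supports non-contiguous choice buffers in nonblocking calls (which is what this thread is about):

  program virtual_buffer_sketch
    use mpi_f08
    implicit none
    real :: s(100), r(3)
    type(MPI_Request) :: reqs(2)
    integer :: ierror, i

    call MPI_Init(ierror)
    s = [(real(i), i = 1, 100)]
    call MPI_Irecv(r, 3, MPI_REAL, 0, 0, MPI_COMM_WORLD, reqs(1), ierror)
    ! "3, MPI_REAL" describes the virtual contiguous copy of s(1:100:5),
    ! so the elements actually sent are s(1), s(6), s(11).
    call MPI_Isend(s(1:100:5), 3, MPI_REAL, 0, 0, MPI_COMM_WORLD, reqs(2), ierror)
    call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE, ierror)
    print *, r   ! expected: 1.0 6.0 11.0
    call MPI_Finalize(ierror)
  end program virtual_buffer_sketch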
>>>>>>> 
>>>>>>> Here, the data type of s(100) matches MPI_REAL, so everything is
>>>>>>> fine. But I want to know whether MPI permits mismatched types;
>>>>>>> for example, can s(100) be an integer array? If the answer is no,
>>>>>>> then compilers cannot detect this error; if yes, then it is hard
>>>>>>> to implement.
>>>>>>> To avoid copying to a scratch buffer, I want to use MPI
>>>>>>> datatypes. But if I have two types, one given by the choice
>>>>>>> buffer itself and the other by the MPI_Datatype argument, how
>>>>>>> can I do that?
>>>>>>> 
>>>>>>> Any thoughts?
>>>>>>> 
>>>>>>> Thanks
>>>>>>> 
>>>>>>> --Junchao Zhang
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> Bill Long longb at cray.com
>> Fortran Technical Support & voice: 651-605-9024
>> Bioinformatics Software Development fax: 651-605-9142
>> Cray Inc./ Cray Plaza, Suite 210/ 380 Jackson St./ St. Paul, MN 55101
>> 
>> 
>> 
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
> _______________________________________________
> mpiwg-fortran mailing list
> mpiwg-fortran at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-fortran


-- 
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/



