[MPIWG Fortran] Questions on the F08 subarray format

Balaji, Pavan balaji at anl.gov
Sun Mar 2 16:32:08 CST 2014


Hi Rolf,

WIN_CREATE was just an example.  There are several functions that take a void* but have no datatype associated with it: for example, COMM_CREATE_KEYVAL and WIN_ATTACH (which doesn't have the wording you are pointing to).

Furthermore, the problem with MPI_PUT doesn't go away.  You can create a window over noncontiguous memory regions using MPI_WIN_CREATE_DYNAMIC.  But my point is different: the "offset" given to MPI_PUT can actually be a void* that has simply been typecast to an MPI_AINT (e.g., with WIN_CREATE_DYNAMIC, offsets are interpreted against MPI_BOTTOM).  So my question is: isn't this losing information?  The answer might be a simple "yes, and we don't care", but I want the Fortran WG to confirm that.
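
To make this concrete, here is a minimal sketch of the pattern I mean (the ranks, counts, and lock/unlock synchronization are illustrative, not prescribed):

  program dynamic_put_sketch
    use mpi_f08
    implicit none
    integer, parameter :: n = 8
    integer :: me
    integer :: buf(n), vals(n)
    integer(kind=MPI_ADDRESS_KIND) :: disp, winsize
    type(MPI_Win) :: win

    call MPI_Init()
    call MPI_Comm_rank(MPI_COMM_WORLD, me)
    call MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, win)

    if (me == 1) then
       buf = 0
       winsize = size(buf, kind=MPI_ADDRESS_KIND) * (storage_size(buf) / 8)
       call MPI_Win_attach(win, buf, winsize)
       ! The "offset" handed to the origin is an absolute address
       ! converted to an MPI_AINT:
       call MPI_Get_address(buf, disp)
       call MPI_Send(disp, 1, MPI_AINT, 0, 0, MPI_COMM_WORLD)
    else if (me == 0) then
       vals = 42
       call MPI_Recv(disp, 1, MPI_AINT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE)
       call MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win)
       ! target_disp is interpreted against MPI_BOTTOM, i.e., it is a
       ! typecast address rather than an offset into a typed buffer:
       call MPI_Put(vals, n, MPI_INTEGER, 1, disp, n, MPI_INTEGER, win)
       call MPI_Win_unlock(1, win)
    end if

    call MPI_Win_free(win)
    call MPI_Finalize()
  end program dynamic_put_sketch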

As for my remaining questions, I don't think those have been answered.  See the example about MPI_GATHER below, for instance.

Thanks,

  — Pavan

On Mar 2, 2014, at 4:18 PM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:

> Pavan,
> 
> I'm currently traveling to teach a 3-day course,
> but here is a short answer:
> 
> All blocking routines, and all nonblocking routines with a blocking
> counterpart, already have well-defined behavior in MPI-2.2 for
> strided subarray buffers.
> Exactly the same behavior is expected with the new syntax.
> This behavior can be implemented exactly the same way
> as compilers did with blocking routines in MPI-2.0:
> - copy the strided subarray into a contiguous scratch buffer,
> - call the C routine with this contiguous scratch buffer,
> - and
>    -- in the case of blocking routines: deallocate the scratch buffer;
>    -- in the case of nonblocking routines: mark the scratch buffer for
>       deallocation when the operation is completed, i.e., in the Wait or Test.
> 
> If you want to apply optimizations, then they must behave as if
> the routine were coded in the way described above.
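> 
> For example (a user-side sketch; dest, tag, and the REAL buffer are
> placeholders), the nonblocking case behaves as if:
> 
>    use mpi_f08
>    real :: a(100)
>    integer :: dest = 1, tag = 0
>    type(MPI_Request) :: req
> 
>    ! As if: the strided subarray a(1:100:2) is packed into a
>    ! contiguous scratch buffer here, and that buffer is passed
>    ! to the C routine ...
>    call MPI_Isend(a(1:100:2), 50, MPI_REAL, dest, tag, &
>                   MPI_COMM_WORLD, req)
> 
>    ! ... and as if the scratch buffer is deallocated here, when
>    ! the operation completes.
>    call MPI_Wait(req, MPI_STATUS_IGNORE)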
> 
> The only interesting cases seem to be:
> - MPI_Win_create - I would forbid strided windows,
> - and the remote part of MPI_Put/..., but this is solved
>   if strided windows are not allowed.
> 
> For MPI_Win_create, the problem is already solved as I wished above:
> MPI-3.0 p405:24-26:
>  "In Fortran, one can pass the first element of a memory region 
>   or a whole array, which must be 'simply contiguous' (for
>   'simply contiguous', see also Section 17.1.12 on page 626)." 
> 
> For 'simply contiguous', see p628:14-27.
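> 
> For example (a sketch; the REAL array and the window are placeholders):
> 
>    use mpi_f08
>    real :: a(100,100)
>    type(MPI_Win) :: win
>    integer(kind=MPI_ADDRESS_KIND) :: sz
> 
>    sz = size(a, kind=MPI_ADDRESS_KIND) * (storage_size(a) / 8)
>    ! Whole array: 'simply contiguous', therefore allowed:
>    call MPI_Win_create(a, sz, storage_size(a) / 8, MPI_INFO_NULL, &
>                        MPI_COMM_WORLD, win)
>    ! A strided section such as a(1:50,:) is not 'simply contiguous'
>    ! and therefore must not be passed to MPI_Win_create.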
> 
> Therefore, I expect that your questions are fully answered
> and there is no need for additional clarifications.
> 
> Or have I missed some additional aspects of your questions?
> 
> Best regards
> Rolf
> 
> ----- Original Message -----
>> From: "Pavan Balaji" <balaji at anl.gov>
>> To: "MPI-WG Fortran working group" <mpiwg-fortran at lists.mpi-forum.org>
>> Sent: Sunday, March 2, 2014 10:47:14 PM
>> Subject: Re: [MPIWG Fortran] Questions on the F08 subarray format
>> 
>> 
>> Some more questions:
>> 
>> 4. What happens when the void* corresponds to multiple counts of a
>> single datatype from different processes (e.g., GATHER or ALLTOALL)?
>> In the case of GATHER, suppose I'm gathering two INTEGERs from each
>> process: can my receive buffer now be contiguous for some processes
>> and noncontiguous for others?
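>> 
>> A sketch of what I mean (4 processes; the receive buffer is the
>> section big(1:3,1:3) of a 4x3 INTEGER array, so memory has a
>> one-element gap after every third element of the section):
>> 
>>    use mpi_f08
>>    integer :: pair(2), big(4,3), me
>> 
>>    call MPI_Comm_rank(MPI_COMM_WORLD, me)
>>    pair = me
>>    call MPI_Gather(pair, 2, MPI_INTEGER, big(1:3,:), 2, MPI_INTEGER, &
>>                    0, MPI_COMM_WORLD)
>>    ! At the root, rank 0's pair lands in big(1,1),big(2,1):
>>    ! contiguous.  Rank 1's pair lands in big(3,1),big(1,2): split
>>    ! by the unused element big(4,1), hence noncontiguous.  Rank 2's
>>    ! pair is contiguous again.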
>> 
>> 5. For some functions, the standard has wording saying that the
>> MPI_DATATYPE argument can only be a predefined datatype.  What
>> happens when the user passes a predefined datatype but describes a
>> subarray in the void* argument?
>> 
>> I’m assuming these are just holes that were not intended, but the
>> standard doesn’t seem to clearly state this.
>> 
>> It would be great if someone could clarify the intention of the
>> working group for each of the five cases I mentioned.  Rolf already
>> mentioned that the intention was to not allow noncontiguous buffers
>> for WIN_CREATE.  What about the other functions that take a void*,
>> e.g., WIN_ATTACH/DETACH, or other non-RMA functions that take a
>> void* but don't provide a datatype?
>> 
>> Thanks,
>> 
>>  — Pavan
>> 
>> On Mar 1, 2014, at 5:29 PM, Rolf Rabenseifner <rabenseifner at hlrs.de>
>> wrote:
>> 
>>> I expect (writing after midnight) that clarifications are needed.
>>> On 1. MPI_WIN_CREATE: the goal should be a contiguous window,
>>> i.e., no strided subarrays.
>>> 
>>> Rolf
>>> 
>>> ----- Original Message -----
>>>> From: "Pavan Balaji" <balaji at anl.gov>
>>>> To: mpiwg-fortran at lists.mpi-forum.org
>>>> Sent: Saturday, March 1, 2014 11:29:59 PM
>>>> Subject: [MPIWG Fortran] Questions on the F08 subarray format
>>>> 
>>>> Folks,
>>>> 
>>>> I had a few questions on the MPI-3 F08 bindings that I couldn't
>>>> find answers to in the standard.  Can someone point me to the
>>>> place where these are defined?
>>>> 
>>>> 1. How does the subarray format work for functions that have a
>>>> void* argument but no datatype to describe it (e.g.,
>>>> MPI_WIN_CREATE)?  In this case, what C function will
>>>> MPI_WIN_CREATE_F08TS call?  Do we need to create a separate
>>>> internal MPIR_WIN_CREATE_NEW function in our implementation that
>>>> takes a datatype argument?  Does this mean that MPI_WIN_CREATE can
>>>> now allow noncontiguous buffers on each process?
>>>> 
>>>> 2. How does the subarray format work for functions that have a
>>>> datatype argument but no void* corresponding to that datatype?
>>>> For example, the target buffer in MPI_PUT is described using an
>>>> MPI_AINT (offset) rather than a void*.
>>>> 
>>>> 3. How does the subarray format work for functions that have two
>>>> void* arguments corresponding to the same datatype (e.g.,
>>>> MPI_REDUCE)?
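>>>> 
>>>> For example (a sketch): one (count, datatype) pair, but the two
>>>> choice buffers have different layouts, since the send buffer is
>>>> strided while the receive buffer is contiguous:
>>>> 
>>>>    use mpi_f08
>>>>    real :: a(100), b(50)
>>>> 
>>>>    call MPI_Reduce(a(1:100:2), b, 50, MPI_REAL, MPI_SUM, 0, &
>>>>                    MPI_COMM_WORLD)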
>>>> 
>>>> Thanks,
>>>> 
>>>> — Pavan
>>>> 
>>> 
>> 
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)



