<div dir="ltr">Here is my understanding (inlined)<div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
WIN_CREATE was just an example. There are several functions that have a void*, but don’t have a datatype associated with them. For example, COMM_CREATE_KEYVAL, WIN_ATTACH (which doesn’t have the wording you are pointing out).<br>
</blockquote><div>In MPI_WIN_ATTACH / MPI_WIN_DETACH, if the base buffer is non-contiguous, we can split it into multiple contiguous ATTACH / DETACH calls.</div>
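<div><br></div><div>Here is a minimal C sketch of that splitting idea, assuming a rank-1 choice buffer described by a TS 29113 CFI_cdesc_t descriptor; the wrapper name is hypothetical:</div>
<pre>
#include <mpi.h>
#include <ISO_Fortran_binding.h>

/* Hypothetical F08TS wrapper for MPI_WIN_ATTACH that attaches each
 * contiguous block of a strided rank-1 buffer separately.  A matching
 * DETACH wrapper would detach the same addresses.  Rank-1 only, for
 * brevity; "size" is honored only in the contiguous case. */
int example_win_attach_f08ts(MPI_Win win, CFI_cdesc_t *base, MPI_Aint size)
{
    if (base->rank == 0 || CFI_is_contiguous(base))
        return MPI_Win_attach(win, base->base_addr, size);

    int err = MPI_SUCCESS;
    for (CFI_index_t i = 0; i < base->dim[0].extent && err == MPI_SUCCESS; i++)
        err = MPI_Win_attach(win,
                             (char *) base->base_addr + i * base->dim[0].sm,
                             (MPI_Aint) base->elem_len);
    return err;
}
</pre>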
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Furthermore, the problem with MPI_PUT doesn’t go away. You can create a window with noncontiguous memory regions using MPI_WIN_CREATE_DYNAMIC. But my point is different — the “offset” given to MPI_PUT can actually be a void* that is just typecast to MPI_AINT (e.g., in WIN_CREATE_DYNAMIC, offsets are against MPI_BOTTOM). So my question is: isn’t this losing information? The answer might be a simple, “yes, and we don’t care”, but I want the Fortran WG to confirm that.<br>
</blockquote><div><br></div><div>In MPI_PUT, target_disp is of type INTEGER(KIND=MPI_ADDRESS_KIND); in other words, it is not a choice buffer, so the F08 subarray mechanism does not apply to it and this is not an issue for the standard.</div>
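<div><br></div><div>For reference, a minimal C sketch (variable names are illustrative) of how target_disp is normally formed for a dynamic window: the target publishes the address obtained with MPI_GET_ADDRESS, and the origin passes that integer straight through as the displacement, so no type information is expected to travel with it:</div>
<pre>
#include <mpi.h>
#include <stdio.h>

/* Run with exactly 2 processes: rank 1 attaches memory and publishes
 * its address, rank 0 uses that address as target_disp for MPI_Put. */
int main(int argc, char **argv)
{
    int rank, buf[4] = {0}, value = 42;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 1) {                /* target: attach and publish address */
        MPI_Aint addr;
        MPI_Win_attach(win, buf, sizeof(buf));
        MPI_Get_address(buf, &addr);
        MPI_Send(&addr, 1, MPI_AINT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {         /* origin: the address is the displacement */
        MPI_Aint disp;
        MPI_Recv(&disp, 1, MPI_AINT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Put(&value, 1, MPI_INT, 1, disp, 1, MPI_INT, win);
        MPI_Win_unlock(1, win);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 1) {
        MPI_Win_detach(win, buf);
        printf("buf[0] = %d\n", buf[0]);   /* prints 42 */
    }
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
</pre>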
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
As per my remaining questions, I don’t think those are answered. See the example about MPI_GATHER below, for instance.<br></blockquote><div> </div><div>For MPI_GATHER, it is indeed possible for the receive buffer to be contiguous on some processes and noncontiguous on others. A choice buffer defines a virtually contiguous memory region (which may or may not be physically contiguous), the datatype defines the type map on that memory, and the count gives the number of elements of that datatype. With these, users can do what they want; for example, one can gather MPI_2INT data from all processes into a row of a matrix in Fortran.</div>
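<div><br></div><div>A minimal C sketch of that "row of a matrix" case, using an equivalent strided receive datatype (NROWS and the variable names are illustrative):</div>
<pre>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NROWS 4   /* rows of the column-major matrix */

/* Each rank contributes 2 ints; the root places them along row 0 of a
 * column-major NROWS x (2*size) matrix, i.e., a strided receive layout. */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendbuf[2] = {2 * rank, 2 * rank + 1};
    int *matrix = NULL;
    MPI_Datatype pair, rowpair;

    /* Two ints, one matrix column apart (column-major => same row). */
    MPI_Type_vector(2, 1, NROWS, MPI_INT, &pair);
    /* Resize so consecutive contributions land in the next two columns. */
    MPI_Type_create_resized(pair, 0, (MPI_Aint)(2 * NROWS * sizeof(int)), &rowpair);
    MPI_Type_commit(&rowpair);

    if (rank == 0)
        matrix = calloc((size_t)NROWS * 2 * size, sizeof(int));

    /* recvbuf points at row 0; rowpair makes the placement noncontiguous. */
    MPI_Gather(sendbuf, 2, MPI_INT, matrix, 1, rowpair, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int c = 0; c < 2 * size; c++)
            printf("%d ", matrix[c * NROWS]);   /* row 0, all columns */
        printf("\n");
        free(matrix);
    }
    MPI_Type_free(&rowpair);
    MPI_Type_free(&pair);
    MPI_Finalize();
    return 0;
}
</pre>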
<div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
Thanks,<br>
<br>
— Pavan<br>
<div><div><br>
On Mar 2, 2014, at 4:18 PM, Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>> wrote:<br>
<br>
> Pavan,<br>
><br>
> I'm currently on travel for teaching a 3 day course,<br>
> but a short answer:<br>
><br>
> All blocking routines, and all nonblocking routines with a blocking<br>
> counterpart, already have well-defined behavior in MPI-2.2 for<br>
> strided subarray buffers.<br>
> Exactly the same behavior is expected with the new syntax.<br>
> This behavior can be implemented exactly the same way<br>
> as the compiler did in MPI-2.0 with blocking routines:<br>
> - copying the strided subarray into a contiguous scratch buffer<br>
> - calling the C routine with this contiguous scratch buffer<br>
> - and<br>
> -- in the case of blocking routines: dealloc the scratch buffer<br>
> -- in the case of nonblocking routines: mark for dealloc<br>
> when the operation is completed, i.e., in the Wait or Test.<br>
><br>
> If you want to do some optimizations, then they must behave as if<br>
> it were coded in the way described.<br>
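<br>
A minimal C sketch of the copy-in scheme Rolf describes above, restricted to a blocking call and a rank-1 choice buffer (the wrapper name and the rank-1 restriction are illustrative; a nonblocking wrapper would keep the scratch buffer until the Wait/Test):<br>
<pre>
#include <mpi.h>
#include <ISO_Fortran_binding.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical F08TS wrapper body for a blocking send. */
int example_send_f08ts(CFI_cdesc_t *buf, int count, MPI_Datatype dt,
                       int dest, int tag, MPI_Comm comm)
{
    if (buf->rank == 0 || CFI_is_contiguous(buf))
        return MPI_Send(buf->base_addr, count, dt, dest, tag, comm);

    /* Copy the strided rank-1 subarray into a contiguous scratch buffer. */
    size_t elem = buf->elem_len;
    CFI_index_t n = buf->dim[0].extent;
    char *scratch = malloc((size_t)n * elem);
    for (CFI_index_t i = 0; i < n; i++)
        memcpy(scratch + (size_t)i * elem,
               (char *)buf->base_addr + i * buf->dim[0].sm, elem);

    int err = MPI_Send(scratch, count, dt, dest, tag, comm);
    free(scratch);            /* blocking routine: free immediately */
    return err;
}
</pre>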
><br>
> The only interesting cases seem to be:<br>
> - MPI_Win_create - I would forbid strided windows,<br>
> - and the remote part of MPI_Put/..., but this is solved<br>
> if strided windows are not allowed.<br>
><br>
> For MPI_Win_create, the problem is already solved as I wished above:<br>
> MPI-3.0 p405:24-26:<br>
> "In Fortran, one can pass the first element of a memory region<br>
> or a whole array, which must be 'simply contiguous' (for<br>
> 'simply contiguous', see also Section 17.1.12 on page 626)."<br>
><br>
> For 'simply contiguous', see p628:14-27.<br>
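<br>
To make that concrete, a minimal C sketch (the wrapper name is hypothetical) of an F08TS MPI_WIN_CREATE wrapper that rejects a noncontiguous actual argument instead of copying it:<br>
<pre>
#include <mpi.h>
#include <ISO_Fortran_binding.h>

/* Hypothetical F08TS wrapper for MPI_WIN_CREATE: window memory must be
 * contiguous, so a noncontiguous descriptor is rejected at run time. */
int example_win_create_f08ts(CFI_cdesc_t *base, MPI_Aint size, int disp_unit,
                             MPI_Info info, MPI_Comm comm, MPI_Win *win)
{
    if (base->rank > 0 && !CFI_is_contiguous(base))
        return MPI_ERR_BUFFER;   /* or invoke the communicator's error handler */
    return MPI_Win_create(base->base_addr, size, disp_unit, info, comm, win);
}
</pre>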
><br>
> Therefore, I expect that your questions are fully answered<br>
> and there is no need for additional clarifications.<br>
><br>
> I may have missed some additional aspects in your questions?<br>
><br>
> Best regards,<br>
> Rolf<br>
><br>
> ----- Original Message -----<br>
>> From: "Pavan Balaji" <<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>><br>
>> To: "MPI-WG Fortran working group" <<a href="mailto:mpiwg-fortran@lists.mpi-forum.org" target="_blank">mpiwg-fortran@lists.mpi-forum.org</a>><br>
>> Sent: Sunday, March 2, 2014 10:47:14 PM<br>
>> Subject: Re: [MPIWG Fortran] Questions on the F08 subarray format<br>
>><br>
>><br>
>> Some more questions:<br>
>><br>
>> 4. What happens when the void* corresponds to multiple counts of a<br>
>> single datatype from different processes (e.g., GATHER or ALLTOALL)?<br>
>> In the case of GATHER, suppose I’m gathering 2 INTEGERs from each<br>
>> process; can my receive buffer now be contiguous for some<br>
>> processes and noncontiguous for others?<br>
>><br>
>> 5. For some functions, the standard has wording that the MPI_DATATYPE for<br>
>> a particular function can only be a predefined datatype. What<br>
>> happens when the user uses a predefined datatype, but describes a<br>
>> subarray in the void* argument?<br>
>><br>
>> I’m assuming these are just holes that were not intended, but the<br>
>> standard doesn’t seem to clearly state this.<br>
>><br>
>> It’ll be great if someone can clarify the intention of the working<br>
>> group for each of the five cases I mentioned. Rolf already<br>
>> mentioned that the intention was to not allow noncontiguous buffers<br>
>> for WIN_CREATE. What about other functions that take void*? E.g.,<br>
>> WIN_ATTACH/DETACH or other non-RMA functions that take void* but<br>
>> don’t provide a datatype?<br>
>><br>
>> Thanks,<br>
>><br>
>> — Pavan<br>
>><br>
>> On Mar 1, 2014, at 5:29 PM, Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a>><br>
>> wrote:<br>
>><br>
>>> I expect (after midnight) that clarifications are needed.<br>
>>> On 1. MPI_WIN_CREATE: the goal should be a contiguous window,<br>
>>> i.e., no strided subarrays.<br>
>>><br>
>>> Rolf<br>
>>><br>
>>> ----- Original Message -----<br>
>>>> From: "Pavan Balaji" <<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>><br>
>>>> To: <a href="mailto:mpiwg-fortran@lists.mpi-forum.org" target="_blank">mpiwg-fortran@lists.mpi-forum.org</a><br>
>>>> Sent: Saturday, March 1, 2014 11:29:59 PM<br>
>>>> Subject: [MPIWG Fortran] Questions on the F08 subarray format<br>
>>>><br>
>>>> Folks,<br>
>>>><br>
>>>> I had a few questions on the MPI-3 F08 bindings, which I couldn’t<br>
>>>> find answers to in the standard. Can someone point me to the<br>
>>>> place<br>
>>>> where these are defined?<br>
>>>><br>
>>>> 1. How does the subarray format work for functions that have a<br>
>>>> void*<br>
>>>> argument, but no datatype to describe them (e.g., MPI_WIN_CREATE)?<br>
>>>> In this case, what C function will MPI_WIN_CREATE_F08TS call? Do<br>
>>>> we need to create a separate internal MPIR_WIN_CREATE_NEW<br>
>>>> function<br>
>>>> in our implementation that takes a datatype argument? Does this<br>
>>>> mean that now MPI_WIN_CREATE can allow for noncontiguous buffers<br>
>>>> on<br>
>>>> each process?<br>
>>>><br>
>>>> 2. How does the subarray format work for functions that have a<br>
>>>> datatype argument, but no void* corresponding to that datatype?<br>
>>>> For<br>
>>>> example, the target buffer in MPI_PUT is described using an<br>
>>>> MPI_AINT<br>
>>>> (offset), rather than a void*.<br>
>>>><br>
>>>> 3. How does the subarray format work for functions that have two<br>
>>>> void* arguments corresponding to the same datatype (e.g.,<br>
>>>> MPI_REDUCE)?<br>
>>>><br>
>>>> Thanks,<br>
>>>><br>
>>>> — Pavan<br>
>>>><br>
>>><br>
>>> --<br>
>>> Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a><br>
>>> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530<br>
>>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832<br>
>>> Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" target="_blank">www.hlrs.de/people/rabenseifner</a><br>
>>> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)<br>
>><br>
><br>
> --<br>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a><br>
> High Performance Computing Center (HLRS) . phone <a href="tel:%2B%2B49%280%29711%2F685-65530" value="+4971168565530" target="_blank">++49(0)711/685-65530</a><br>
> University of Stuttgart . . . . . . . . .. fax <a href="tel:%2B%2B49%280%29711%20%2F%20685-65832" value="+4971168565832" target="_blank">++49(0)711 / 685-65832</a><br>
> Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" target="_blank">www.hlrs.de/people/rabenseifner</a><br>
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)<br>
<br>
_______________________________________________<br>
mpiwg-fortran mailing list<br>
<a href="mailto:mpiwg-fortran@lists.mpi-forum.org" target="_blank">mpiwg-fortran@lists.mpi-forum.org</a><br>
<a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-fortran" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-fortran</a><br>
</div></div></blockquote></div><br></div></div>