[MPIWG Fortran] Questions on the F08 subarray format

Balaji, Pavan balaji at anl.gov
Sun Mar 2 20:30:47 CST 2014


On Mar 2, 2014, at 7:42 PM, Junchao Zhang <jczhang at mcs.anl.gov> wrote:

> Here is my understanding (inlined)
> 
>> WIN_CREATE was just an example.  There are several functions that have a void*, but don’t have a datatype associated with them.  For example, COMM_CREATE_KEYVAL, WIN_ATTACH (which doesn’t have the wording you are pointing out).
> In WIN_ATTACH / DETACH, if <base> is noncontiguous, we can split it into multiple contiguous ATTACH / DETACH calls.
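
(For concreteness, here is roughly what that splitting would look like with the F08 bindings.  This sketch is mine, not from any proposal: the stride-2 array, the per-element block size, and all names are illustrative.)

    program attach_split
       use mpi_f08
       implicit none
       integer, parameter :: n = 8
       real               :: a(n)
       type(MPI_Win)      :: win
       integer(kind=MPI_ADDRESS_KIND) :: sz
       integer            :: i

       call MPI_Init()
       call MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, win)

       ! "Split" the noncontiguous region a(1:n:2) into one contiguous
       ! ATTACH per element-sized block.
       sz = storage_size(a) / 8             ! bytes per element
       do i = 1, n, 2
          call MPI_Win_attach(win, a(i:i), sz)
       end do

       ! ... RMA epoch elided ...

       do i = 1, n, 2                       ! matching per-block DETACH
          call MPI_Win_detach(win, a(i:i))
       end do

       call MPI_Win_free(win)
       call MPI_Finalize()
    end program attach_split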

I’m not asking what the implementation can do.  For example, an implementation could provide a new internal function, say MPIR_WIN_ATTACH_NONCONTIG, that deals with noncontiguous buffers.  I’m asking what the standard intended.

Besides, that doesn’t make sense for the other example I gave above (CREATE_KEYVAL) and is not a generic solution for all routines that take a void* argument.

FWIW, for your approach to work, the user would need to pass the same arguments to both ATTACH and DETACH.  Just giving a buffer address to DETACH is no longer sufficient, since the address alone loses the information about the associated noncontiguous segments.
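
(To make that loss concrete, again as my own sketch, reusing the declarations from the fragment above:)

    ! Under the split-inside-the-library reading, this single
    ! choice-buffer call would be recorded internally as four
    ! contiguous segments (and it is not even clear whether sz should
    ! be the packed size or the full span):
    call MPI_Win_attach(win, a(1:n:2), sz)

    ! A bare address can no longer name those segments; only
    ! re-passing the identical subarray lets the library find them:
    call MPI_Win_detach(win, a(1:n:2))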
  
>> As for my remaining questions, I don’t think those are answered.  See the example about MPI_GATHER below, for instance.
>  
> For MPI_GATHER, it is possible for the receive buffer to be contiguous on some processes and noncontiguous on others.  A choice buffer defines a virtually contiguous memory region (which may or may not actually be contiguous), the datatype defines the type map on that memory, and the count defines the number of elements of that datatype.  With these, users can do what they want.  For example, one can gather MPI_2INT data from processes into a row of a matrix in Fortran.
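
(For reference, I read the matrix-row example as something like the following.  The shapes, values, and the use of plain MPI_INTEGER instead of MPI_2INT are my own choices.)

    program gather_row
       use mpi_f08
       implicit none
       integer :: mat(4, 32)   ! column-major, so the row mat(1,:) is strided
       integer :: mine, rank, nprocs

       call MPI_Init()
       call MPI_Comm_rank(MPI_COMM_WORLD, rank)
       call MPI_Comm_size(MPI_COMM_WORLD, nprocs)   ! assumed <= 32 here
       mine = 10 * rank

       ! The noncontiguous row section is a legal F08 choice buffer at
       ! the root, even though count/datatype describe a plain stream
       ! of integers.
       call MPI_Gather(mine, 1, MPI_INTEGER, &
                       mat(1, 1:nprocs), 1, MPI_INTEGER, &
                       0, MPI_COMM_WORLD)

       call MPI_Finalize()
    end program gather_row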

Is that the WG’s interpretation, or are you guessing here?  If that was the intention of the Fortran WG, I’d strongly object to it.  That would be a major change in the semantics of MPI_REDUCE if my receive buffers can be contiguous on some processes and noncontiguous on others.

  — Pavan

