I found another complication with non-contiguous subarrays.
Let's use MPI_Isend(buf, count, datatype, ...) as an example and assume a simple case: datatype is the type of the elements in buf.
Let bufsize be the number of elements in buf.

* If count > bufsize, it is a program error.
* If count == bufsize, it is perfect: we create a new datatype for buf and do MPI_Isend(base_addr, 1, newtype, ...).
* If count < bufsize (the MPI Standard has such examples), then we must be careful and create a datatype that describes at most count elements. Here is an example:

REAL a(10, 10)
MPI_Isend(a(1:5, 1:2), 10, MPI_REAL, ...)  ! Perfect match
MPI_Isend(a(1:5, 1:2), 5, MPI_REAL, ...)   ! The subarray is non-contiguous, but we only send a contiguous part.
MPI_Isend(a(1:5, 1:2), 6, MPI_REAL, ...)   ! A bad match; we have to create an MPI indexed datatype (see the sketch below).
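
For concreteness, here is a minimal sketch (my own illustration, not from the Standard) of the indexed datatype one could build for the third call. a(1:5, 1:2) of a 10x10 REAL array is two 5-element columns whose starts are 10 REALs apart, so sending 6 elements means one block of 5 plus one block of 1. The names six_elems, blocklens, and displs are hypothetical, purely for illustration.

! A minimal sketch, assuming the mpi_f08 module.
program indexed_sketch
  use mpi_f08
  implicit none
  real :: a(10, 10)
  type(MPI_Datatype) :: six_elems
  integer :: blocklens(2), displs(2), ierr

  call MPI_Init(ierr)

  blocklens = [5, 1]   ! 5 REALs from a(1:5,1), then 1 REAL from a(1,2)
  displs    = [0, 10]  ! displacements in units of MPI_REAL; the column stride of a is 10
  call MPI_Type_indexed(2, blocklens, displs, MPI_REAL, six_elems, ierr)
  call MPI_Type_commit(six_elems, ierr)

  ! Describes the same 6 elements as MPI_Isend(a(1:5,1:2), 6, MPI_REAL, ...),
  ! but explicitly from the base address:
  ! call MPI_Isend(a, 1, six_elems, dest, tag, MPI_COMM_WORLD, req, ierr)

  call MPI_Type_free(six_elems, ierr)
  call MPI_Finalize(ierr)
end program indexed_sketch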

If the datatype argument is a complex derived type, creating the datatype for the subarray is even more complex (a toy composition example follows below).
The root cause is that the subarray and the count/datatype arguments carry the same kind of information, and MPI allows them to mismatch.
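
As a toy illustration of that composition (my own example, following the "virtual contiguous buffer" semantics of the Standard): if the section has stride 5 and the user passes a blocklength-1 vector type with stride 2, the composed layout is again a vector whose stride is the product, 10. The names vec2 and composed are hypothetical. Blocklength-1 vectors compose this simply; general derived types do not, which is exactly the difficulty above.

! A toy sketch, assuming the mpi_f08 module.
program compose_sketch
  use mpi_f08
  implicit none
  real :: s(100)
  type(MPI_Datatype) :: vec2, composed
  integer :: ierr

  call MPI_Init(ierr)

  ! User-supplied datatype: every 2nd REAL of the virtual contiguous buffer.
  call MPI_Type_vector(3, 1, 2, MPI_REAL, vec2, ierr)
  call MPI_Type_commit(vec2, ierr)

  ! MPI_Isend(s(1:100:5), 1, vec2, ...) selects s(1), s(11), s(21):
  ! section stride 5 times datatype stride 2 gives stride 10 in s.
  call MPI_Type_vector(3, 1, 10, MPI_REAL, composed, ierr)
  call MPI_Type_commit(composed, ierr)
  ! call MPI_Isend(s, 1, composed, dest, tag, MPI_COMM_WORLD, req, ierr)

  call MPI_Type_free(vec2, ierr)
  call MPI_Type_free(composed, ierr)
  call MPI_Finalize(ierr)
end program compose_sketch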

--Junchao Zhang

On Wed, May 14, 2014 at 2:14 AM, Rolf Rabenseifner <rabenseifner@hlrs.de> wrote:

I want to comment on
> > > > That is nasty. Then I will have two data types. I even can not
> > > > assume any relationship between the two types. I have to
> > > > allocate a scratch buffer for the virtual contiguous array in
> > > > MPI_ISEND etc, do memory copying and then free the buffer in
> > > > MPI_WAIT. I'm not sure one can implement it efficiently.

The reason for that interface is very simple:
the combination of strided arrays and complicated derived datatypes
(e.g. produced with MPI_Type_vector) has always been allowed for
blocking calls. Therefore, the extension to nonblocking calls is
defined with exactly the same meaning as for blocking calls.
You may call this nasty. Sure. But it would have been
nastier if we had defined the meaning of datatype handles
to be different for blocking and nonblocking calls.

Rolf

----- Original Message -----
> From: "Junchao Zhang" <jczhang@mcs.anl.gov>
> To: "MPI-WG Fortran working group" <mpiwg-fortran@lists.mpi-forum.org>
> Sent: Wednesday, May 14, 2014 12:11:30 AM
> Subject: Re: [MPIWG Fortran] Data type of F08 subarray
>
> On Tue, May 13, 2014 at 3:21 PM, William Gropp <wgropp@illinois.edu> wrote:
>
> You can always create a new MPI datatype that is the composition of
> the array section and the MPI datatype. For a vector of a simple
> (vector) section, for example, the new datatype simply has the
> product of the strides. Other types are more complex but always
> possible.
>
>
> OK. If an MPI datatype is represented as a hierarchical tree, then one
> needs to combine two MPI datatype trees, which is complicated in
> general.
> In my view, if a user wants a complex derived datatype, he
> should create it explicitly with MPI datatype calls, instead of
> doing it implicitly with "subarray X datatype", since that makes
> code hard to understand. It would be better if the MPI standard did
> not support that.
>
> Bill
>
> William Gropp
> Director, Parallel Computing Institute
> Thomas M. Siebel Chair in Computer Science
> University of Illinois Urbana-Champaign
>
> On May 13, 2014, at 3:02 PM, Junchao Zhang wrote:
>
> On Tue, May 13, 2014 at 2:56 PM, Bill Long <longb@cray.com> wrote:
>
> On May 13, 2014, at 2:48 PM, Junchao Zhang <jczhang@mcs.anl.gov> wrote:
>
> >
> > On Tue, May 13, 2014 at 2:37 PM, Bill Long <longb@cray.com> wrote:
> >
> > On May 13, 2014, at 2:19 PM, Junchao Zhang <jczhang@mcs.anl.gov> wrote:
> >
> > > On Tue, May 13, 2014 at 2:00 PM, Bill Long <longb@cray.com> wrote:
> > >
> > > On May 13, 2014, at 12:44 PM, Junchao Zhang <jczhang@mcs.anl.gov> wrote:
> > >
> > > > On Tue, May 13, 2014 at 11:56 AM, Rolf Rabenseifner <rabenseifner@hlrs.de> wrote:
> > > > > REAL s(100)
> > > > > MPI_SEND(s(1:100:5), 3, dtype, ...)
> > > > > dtype can only be MPI_REAL. In other words, dtype is kind of
> > > > > redundant here since the type map is actually specified by the
> > > > > subarray.
> > >
> > > Right. The descriptor for the first argument has a member whose
> > > value is a type code. In principle the library could verify this
> > > is compatible with the data type handle supplied as the third
> > > argument, and issue an error if not. Perhaps in a “debug” mode.
> > >
> > > >
> > > > No, if dtype is a vector then it is applied to a virtual
> > > > contiguous array that consists of s(1), s(6), s(11) …
> > >
> > > dtype is not a vector, is it? That argument is a scalar of type
> > > TYPE(MPI_DATATYPE). At least that is what the interface says.
> > >
> > > Rolf meant dtype is an MPI datatype created by MPI_Type_vector.
> > > For this case, I will have two datatypes, one from the
> > > MPI_Datatype argument, the other from the choice buffer itself.
> > > It is hard to implement that. Perhaps it is useless since it
> > > obscures the program.
> >
> > OK. But one of the virtues of the new interface for users is that
> > you do not have to create such data types anymore for array
> > sections. Even if someone did do this, you can detect that the
> > incoming data type is user-created, and in that case ignore the
> > type code in the descriptor. If the program is valid at all, the
> > element length, strides, and extents in the descriptor should be
> > correct.
> >
> > Yes, I can do that. The hard part is when the subarray is
> > non-contiguous, and it is a non-blocking call. I need to allocate
> > a scratch buffer and pack the subarray. Since it is non-blocking,
> > I can not free the buffer.
>
> Can you create, locally, a datatype that describes the layout of the
> array section, and then call MPI_Isend again with that data type?
> That avoids the contiguous local buffer and the problem of when to
> free it.
>
> That is my first thought. But then I realized I have to assume the
> MPI_Datatype argument is for subarray elements.
>
> Cheers,
> Bill
>
> >
> > Cheers,
> > Bill
> >
> > >
> > > Cheers,
> > > Bill
> > >
> > > >
> > > > That is nasty. Then I will have two data types. I even can not
> > > > assume any relationship between the two types. I have to
> > > > allocate a scratch buffer for the virtual contiguous array in
> > > > MPI_ISEND etc, do memory copying and then free the buffer in
> > > > MPI_WAIT. I'm not sure one can implement it efficiently.
> > > >
> > > > ----- Original Message -----
> > > > > From: "Junchao Zhang" <jczhang@mcs.anl.gov>
> > > > > To: "MPI-WG Fortran working group" <mpiwg-fortran@lists.mpi-forum.org>
> > > > > Sent: Tuesday, May 13, 2014 6:23:08 PM
> > > > > Subject: Re: [MPIWG Fortran] Data type of F08 subarray
> > > > >
> > > > >
> > > > > Thanks, Rolf. And I feel there is a jump from contiguous
> > > > > subarrays to non-contiguous subarrays.
> > > > >
> > > > > For a contiguous subarray, such as
> > > > >
> > > > > REAL s(100)
> > > > > MPI_SEND(s(2:50), 3, dtype, ...)
> > > > >
> > > > > s only gives the start address.
> > > > > dtype can be anything, e.g., either a basic type or a derived
> > > > > type created by MPI_Type_vector() etc.
> > > > >
> > > > > For a non-contiguous subarray, such as
> > > > >
> > > > > REAL s(100)
> > > > > MPI_SEND(s(1:100:5), 3, dtype, ...)
> > > > >
> > > > > dtype can only be MPI_REAL. In other words, dtype is kind of
> > > > > redundant here since the type map is actually specified by the
> > > > > subarray.
> > > > >
> > > > > --Junchao Zhang
> > > > >
> > > > ><br>
> > > > > On Tue, May 13, 2014 at 10:20 AM, Rolf Rabenseifner <<br>
> > > > > <a href="mailto:rabenseifner@hlrs.de" target="_blank">rabenseifner@hlrs.de</a> > wrote:<br>
> > > > ><br>
> > > > ><br>
> > > > > Dear Junchao,<br>
> > > > ><br>
> > > > > MPI-3.0 p25:7-8 describes only communication with language<br>
> > > > > type<br>
> > > > > of the buffer argument matches to the MPI datatype used<br>
> > > > > in the datatype argument.<br>
> > > > > Same p83:36-37.<br>
> > > > ><br>
> > > > > Therefore, the answer is no and the compiler cannot detect<br>
> > > > > a mismatch beteen language buffer specification and<br>
> > > > > MPI datatype specification.<br>
> > > > ><br>
> > > > > I hope my answer could help.<br>
> > > > ><br>
> > > > > Best regards<br>
> > > > > Rolf<br>
> > > > ><br>
> > > > ><br>
> > > > ><br>
> > > > ><br>
> > > > > ----- Original Message -----
> > > > > > From: "Junchao Zhang" <jczhang@mcs.anl.gov>
> > > > > > To: "MPI-WG Fortran working group" <mpiwg-fortran@lists.mpi-forum.org>
> > > > > > Sent: Tuesday, May 13, 2014 5:08:30 PM
> > > > > > Subject: [MPIWG Fortran] Data type of F08 subarray
> > > > > >
> > > > > > p626 of MPI-3.0 gives such an example:
> > > > > >
> > > > > > REAL s(100), r(100)
> > > > > > CALL MPI_Isend(s(1:100:5), 3, MPI_REAL, ..., rq, ierror)
> > > > > >
> > > > > > All nonblocking MPI functions behave as if the user-specified
> > > > > > elements of choice buffers are copied to a contiguous scratch
> > > > > > buffer in the MPI runtime environment. All datatype descriptions
> > > > > > (in the example above, “3, MPI_REAL”) read and store data from
> > > > > > and to this virtual contiguous scratch buffer ...
> > > > > >
> > > > > > Here, the data type of s(100) matches MPI_REAL, so everything is
> > > > > > fine. But I want to know if MPI permits mismatched types. For
> > > > > > example, can s(100) be an integer array? If the answer is no,
> > > > > > then compilers cannot detect this error; if yes, then it is hard
> > > > > > to implement. To avoid memory copying to a scratch buffer, I want
> > > > > > to use MPI datatypes. But if I have two types, one given by the
> > > > > > choice buffer itself and the other given by the MPI_Datatype
> > > > > > argument, how could I do that?
> > > > > >
> > > > > > Any thoughts?
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > --Junchao Zhang
>
> Bill Long                                longb@cray.com
> Fortran Technical Support &              voice: 651-605-9024
> Bioinformatics Software Development      fax:   651-605-9142
> Cray Inc./ Cray Plaza, Suite 210/ 380 Jackson St./ St. Paul, MN 55101
>

--
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner@hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
_______________________________________________
mpiwg-fortran mailing list
mpiwg-fortran@lists.mpi-forum.org
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-fortran