[Mpi3-tools] Next telecon tomorrow 4/18

Rolf Rabenseifner rabenseifner at hlrs.de
Fri Apr 19 04:01:26 CDT 2013


Dear Kathryn and all,

I will try to answer the concerns mentioned at the telecon on April 18, 2013,
and in your notes about this telecon:
( https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MPI3Tools/notes-2013-04-18 )

> - Concerns raised: 
>   - This complicated scheme means that implementors and tool developers 
>     need to become Fortran experts. We would need a standardized 
>     wrapper generator to help with complexity. 

In principle, whether there are only two linker names, e.g.,
 - MPI_Isend (for the C MPI routine), and
 - mpi_isend__ (for the Fortran MPI routine, mapped from
                Fortran MPI_ISEND to this linker name
                through the mpi module and mpif.h
                together with the Fortran compiler's
                compiler-dependent linker name mangling),
or whether there are a few more linker names with marginal
changes in the argument list, e.g.,
 - mpi_isend__     uses INTEGER handles,
 - mpi_isend_f08__ uses the TYPE(MPI_Comm) handles, etc.,
should not be a big deal.
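As a purely illustrative sketch (argument lists are elided, and `internal_isend`
is an invented stand-in for the library internals), from the tool's point of
view the Fortran-facing linker names are just symbols that reach the same
implementation:

```c
/* Hypothetical sketch: the real routines take the usual MPI_Isend
   argument lists, elided here; "internal_isend" is an invented
   stand-in for the MPI library's actual implementation. */

int internal_isend_calls = 0;

static void internal_isend(void) {   /* stand-in for the real work */
    internal_isend_calls++;
}

/* Linker name that a typical Fortran compiler ("lowercase plus
   double underscore" mangling) produces for MPI_ISEND when using
   the mpi module or mpif.h: */
void mpi_isend__(void) { internal_isend(); }

/* Linker name for the mpi_f08 variant with TYPE(MPI_Comm) handles: */
void mpi_isend_f08__(void) { internal_isend(); }
```

A tool only needs to know which of these symbols exist in order to
intercept them; the mapping from the Fortran call site to the symbol
is entirely the compiler's and the module's business.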
With the new MPI-3.0, a profiling provider mainly has to
compile the same set of profiling wrappers several times
with small modifications.
These modifications are fully systematic and apply to:
 - which linker name must be used;
 - how handle arguments are passed (this is only a theoretical
   aspect, because old-style INTEGER handles and new-style
   TYPE(MPI_Comm) ... handles are internally the same: INTEGER);
 - how the buffer arguments are passed:
   this is the only really new part;
 - whether the ierror argument is optional or not;
 - how Fortran callback procedures are passed (this is the
   same as in MPI-2).
Therefore it looks complicated, but it is not.
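To sketch how systematic these variations are (all names here, such as
DEFINE_WRAPPER, mpi_fint, mpi_comm_f08, and pmpi_isend_stub, are invented
for illustration), one wrapper body can be instantiated twice with only
the linker name and the handle type changed:

```c
int wrapped_calls = 0;

/* Invented stand-in for the name-shifted PMPI_Isend entry point: */
static void pmpi_isend_stub(void) { wrapped_calls++; }

/* One wrapper "template"; every variant shares the same body. */
#define DEFINE_WRAPPER(linker_name, handle_t)              \
    void linker_name(handle_t *comm) {                     \
        (void)comm;  /* tool instrumentation goes here */  \
        pmpi_isend_stub();                                 \
    }

typedef int mpi_fint;                      /* old-style INTEGER handle   */
typedef struct { int val; } mpi_comm_f08;  /* TYPE(MPI_Comm): internally
                                              also just an INTEGER       */

DEFINE_WRAPPER(mpi_isend__,     mpi_fint)      /* mpi / mpif.h variant */
DEFINE_WRAPPER(mpi_isend_f08__, mpi_comm_f08)  /* mpi_f08 variant      */
```

The real wrappers additionally vary the buffer-argument passing and the
optional ierror, but the pattern is the same: one template, several
mechanical instantiations.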

I do not plan to write an additional advice to users
for those users who are tools providers.
I hope this email is enough to resolve this concern.
    
>   - What if application developers choose to call the various 
>     linker names directly instead of the high level MPI call? 

I'll add an advice to users on page 6, line 40, in
binding-2_Sec.17.1.5_2013-04-18.pdf (see previous email):

Advice to users.
The additional C routines MPI_..._cdesc, MPI_..._f08cb, and 
MPI_..._fcb for the implementation schemes B2 and B3 
should not be called by application programs, 
i.e., an application program that uses these {\em internal}
routines does not conform to this \MPI/ standard.
These routine names and their interfaces are standardized only 
for profiling purposes,
i.e., so that the routines can be intercepted with a profiling
wrapper that internally calls the corresponding 
PMPI_..._cdesc, PMPI_..._f08cb, and PMPI_..._fcb routines.
These additional MPI_... and PMPI_... C routines
are not or only partially available if B2 or B3 is 
not or only partially used in the implementation of 
an \MPI/ library. This is described with the macros as follows.
End of advice to users.
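The interception chain described in this advice might be sketched as
follows; the `_sketch` suffix and the empty argument lists are deliberate,
because the real MPI_Isend_cdesc / PMPI_Isend_cdesc routines take a CFI
buffer descriptor plus the usual MPI_Isend arguments:

```c
/* Hypothetical sketch of the B2/B3 profiling interception. */

int pmpi_calls = 0;    /* calls reaching the MPI library      */
int tool_events = 0;   /* events recorded by the tool wrapper */

/* Name-shifted entry point provided by the MPI library: */
void PMPI_Isend_cdesc_sketch(void) { pmpi_calls++; }

/* Wrapper provided by the tool: it intercepts the standardized
   internal name and forwards to the PMPI version. Application
   code must never call this name directly. */
void MPI_Isend_cdesc_sketch(void) {
    tool_events++;                 /* profiling happens here      */
    PMPI_Isend_cdesc_sketch();     /* forward to the real routine */
}
```

The application calls the ordinary Fortran MPI_Isend; the mpi_f08 module
routes it to the internal name, the tool intercepts it, and the PMPI
version does the actual work.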

I hope that this is clear enough, and that this advice 
resolves the second concern.

Best regards
Rolf 


----- Original Message -----
> From: "Kathryn Mohror" <kathryn at llnl.gov>
> To: mpi3-tools at lists.mpi-forum.org
> Sent: Thursday, April 18, 2013 10:31:07 PM
> Subject: Re: [Mpi3-tools] Next telecon tomorrow 4/18
> Hi Rolf,
> 
> 
> Thanks for the corrections and additions. I updated the wiki
> accordingly.
> 
> 
> Kathryn
> 
> On Apr 18, 2013, at 12:31 PM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
> 
> 
> Dear Kathryn,
> 
> Good catch, here only a few small corrections and additions:
> 
> 1)
> Now, in Fortran if you declare the buffer in a particular way, the
> compiler produces a descriptor instead of using a pointer, where a
> descriptor is a pointer plus some description of the buffer.
> -->
> Now, in Fortran if the MPI library (in the mpi_f08 or mpi module or
> mpif.h) declares the buffer ....
> 
> 2)
> different linker names for routines in MPI applications.
> -->
> different linker names for the same MPI operation.
> The appropriate mapping of the MPI call is done through
> the specifications within the mpi_f08 or mpi module or mpif.h
> 
> 3)
> After "Some routines will be able to be intercepted ...":
> - Special macros in mpi.h report to the tools, which linker names are
> provided
>  and whether interception through the C interface is available.
>  Based on these macros, all needed tool wrappers can be generated.
> 
> Best regards
> Rolf
> 
> ----- Original Message -----
> 
> 
> From: "Kathryn Mohror" <kathryn at llnl.gov>
> To: mpi3-tools at lists.mpi-forum.org
> Sent: Thursday, April 18, 2013 9:14:06 PM
> Subject: Re: [Mpi3-tools] Next telecon tomorrow 4/18
> Hi all,
> 
> 
> For anyone who missed the call, I put some notes on the wiki.
> Hopefully, I got the Fortran details right. Anyone with more expertise
> in Fortran than I have is welcome to correct me!
> 
> 
> Kathryn
> 
> On Apr 17, 2013, at 10:13 AM, Kathryn Mohror <kathryn at llnl.gov> wrote:
> 
> Hi all,
> 
> 
> This is a reminder of our next telecon tomorrow 4/18 at 8:00 AM PDT,
> 11:00 AM EDT. The Webex information is posted on the wiki:
> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MPI3Tools
> 
> 
> I expect the main topic of discussion will be the issues dealing with
> the new Fortran interface in MPI-3 as they relate to tools. Rolf and
> Craig will join the call to fill us in on the details.
> 
> 
> If there is time, other topics may include:
> - MPI_Pcontrol Fortran problem
> - Updates to the MQS document
> - Possible change in telecon time to accommodate changed schedules
> - What is the openMP tools WG doing for DLL loading?
> - Status of ticket #357, threading problems in MPI-3
> 
> 
> Kathryn
> 
> ______________________________________________________________
> Kathryn Mohror,  kathryn at llnl.gov ,  http://people.llnl.gov/mohror1
> CASC @ Lawrence Livermore National Laboratory, Livermore, CA, USA
> 
> _______________________________________________
> Mpi3-tools mailing list
> Mpi3-tools at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-tools
> 

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)


