[Mpi3-tools] Questions on the F2008 profiling interface issues

Jeff Squyres (jsquyres) jsquyres at cisco.com
Fri Aug 2 12:39:01 CDT 2013


Indiana U, which is the email hosting provider for the MPI Forum mailing lists, is experiencing some difficulties with its email system today.

For example, the mail below is from Oct 3, 2011.  I think it's safe to ignore it at this point.  :-)

On Oct 3, 2011, at 6:18 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:

> Dear all,
> 
> according to the page:line references in
> https://svn.mpi-forum.org/trac/mpi-forum-web/attachment/ticket/229/mpi-report-F2008-2011-09-08-changeonlyplustickets.pdf
> and the decision of the Santorini 2011 meeting,
> we should add the following text to accommodate the tools implementors.
> 
> P559:20-21 reads
>  equals .FALSE.. If
> but should read
>  equals .FALSE., and  
> 
> After p559 the following text should be added:
> 
> ----------------------------
> To simplify the development of profiling libraries, the MPI routines
> are grouped together, and the following is required:
> if the peer routine of a group is available within an MPI library
> under one of its possible linker names, then all of the routines
> in this group must also be provided according to the same linker
> name scheme; and if the peer routine is not available under
> a linker name scheme, then the other routines of the group must
> not be available through that scheme either.
> 
> Peer routines and their groups:
> - MPI_ALLOC_MEM
>     MPI_ALLOC_MEM and MPI_WIN_ALLOCATE.
> - MPI_FREE_MEM
>     Only this routine is in this group.
> - MPI_GET_ADDRESS
>     MPI_GET_ADDRESS and MPI_ADDRESS.
> - MPI_SEND
>     All routines with choice buffer arguments that
>     are not declared as ASYNCHRONOUS within the mpi_f08 module
>     and exist already in MPI-2.2.
> - MPI_NEIGHBOR_ALLTOALL
>     All routines with choice buffer arguments that
>     are not declared as ASYNCHRONOUS within the mpi_f08 module
>     and are new in MPI-3.0.
> - MPI_ISEND
>     All routines with choice buffer arguments that
>     are declared as ASYNCHRONOUS within the mpi_f08 module
>     and exist already in MPI-2.2.
> - MPI_IBCAST
>     All routines with choice buffer arguments that
>     are declared as ASYNCHRONOUS within the mpi_f08 module
>     and are new in MPI-3.0.
> - MPI_OP_CREATE
>     Only this routine is in this group.
> - MPI_REGISTER_DATAREP
>     Only this routine is in this group.
> - MPI_COMM_KEYVAL_CREATE
>     All other routines with callback function arguments.
> - MPI_COMM_DUP_FN
>     All predefined callback routines.
> - MPI_COMM_RANK
>     All other MPI routines that exist already in MPI-2.2.
> - MPI_IBARRIER
>     All other MPI routines that are new in MPI-3.0.
> 
> Advice to implementors.
> If all the following conditions are fulfilled 
> (which is the case for most compilers)
> - the handles in the mpi_f08 module occupy one Fortran 
>   numerical storage unit (same as an INTEGER handle), and
> - the internal argument passing used to pass an actual ierror
>   argument to a non-optional ierror dummy argument is binary
>   compatible with passing an actual ierror argument to an ierror
>   dummy argument that is declared as OPTIONAL, and
> - the internal argument passing for ASYNCHRONOUS and 
>   non-ASYNCHRONOUS arguments is the same, and
> - the internal routine call mechanism is the same for 
>   the Fortran and the C compiler, and
> - the compiler does not provide TR 29113,
> then for most groups, the implementor may use the same
> internal routine implementations for all Fortran support 
> methods with only different linker names.
> For TR 29113 quality, new routines are needed only for
> the routine groups of MPI_ISEND and MPI_IBCAST. 
> End of advice to implementors.
> ----------------------------
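>
> For illustration (not part of the proposed text): under the conditions
> listed in the advice above, an implementor could export one internal
> routine under several linker names. A minimal sketch in C using the
> GCC alias extension; the exact linker names shown are hypothetical:
>
>   #include <mpi.h>
>
>   /* Single internal implementation shared by all Fortran variants. */
>   void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierror)
>   {
>       int crank;
>       *ierror = (MPI_Fint) PMPI_Comm_rank(MPI_Comm_f2c(*comm), &crank);
>       *rank   = (MPI_Fint) crank;
>   }
>
>   /* The same code exported under the other linker-name schemes. */
>   void mpi_comm_rank__(MPI_Fint *, MPI_Fint *, MPI_Fint *)
>        __attribute__((alias("mpi_comm_rank_")));
>   void MPI_COMM_RANK(MPI_Fint *, MPI_Fint *, MPI_Fint *)
>        __attribute__((alias("mpi_comm_rank_")));
>   void MPI_Comm_rank_f08(MPI_Fint *, MPI_Fint *, MPI_Fint *)
>        __attribute__((alias("mpi_comm_rank_")));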
> 
> Not directly relevant for the tools people, but to explain the reason
> for differentiating the several choice-buffer routine groups,
> the following changes are also made:
> 
> P552:14-18 read
>  - Set the INTEGER compile-time constant MPI_SUBARRAYS_SUPPORTED to
>    .TRUE. and declare choice buffers using the Fortran 2008 TR 29113 
>    feature assumed-type and assumed-rank, i.e., TYPE(*), DIMENSION(..), 
>    if the underlying Fortran compiler supports it. With this, 
>    non-contiguous sub-arrays can be used as buffers in nonblocking routines.
> but should read
>  - Set the INTEGER compile-time constant MPI_SUBARRAYS_SUPPORTED to
>    .TRUE. and declare choice buffers using the Fortran 2008 TR 29113 
>    feature assumed-type and assumed-rank, i.e., TYPE(*), DIMENSION(..)
>    in all nonblocking, split collective and persistent communication
>    routines, 
>    if the underlying Fortran compiler supports it. With this, 
>    non-contiguous sub-arrays can be used as buffers in nonblocking routines.
> 
>    Rationale. In all blocking routines, i.e., if the choice buffer
>    is not declared as ASYNCHRONOUS, the TR 29113 feature is not needed
>    for the support of non-contiguous buffers, because the compiler
>    can pass the buffer by copy-in/copy-out through a contiguous scratch
>    array. End of rationale.
> 
> P555:7-10 read
>  - Set the INTEGER compile-time constant MPI_SUBARRAYS_SUPPORTED to 
>    .TRUE. if all choice buffer arguments 
>    are declared with TYPE(*), DIMENSION(..), otherwise set it to 
>    .FALSE.. With MPI_SUBARRAYS_SUPPORTED==.TRUE., non-contiguous 
>    subarrays can be used as buffers in nonblocking routines.
> but should read
>  - Set the INTEGER compile-time constant MPI_SUBARRAYS_SUPPORTED to 
>    .TRUE. if all choice buffer arguments 
>    in all nonblocking, split collective and persistent communication
>    routines
>    are declared with TYPE(*), DIMENSION(..), otherwise set it to 
>    .FALSE.. With MPI_SUBARRAYS_SUPPORTED==.TRUE., non-contiguous 
>    subarrays can be used as buffers in nonblocking routines.
> 
> I'll try to be at the tel-con today.
> 
> Best regards
> Rolf
> 
> 
> ----- Original Message -----
>> From: "Martin Schulz" <schulzm at llnl.gov>
>> To: "Marc-Andre Hermanns" <m.a.hermanns at grs-sim.de>
>> Cc: "MPI3 Tools" <mpi3-tools at lists.mpi-forum.org>, "Martin Schulz" <schulz6 at llnl.gov>, "Craig E Rasmussen"
>> <rasmussn at lanl.gov>, "Rolf Rabenseifner" <rabenseifner at hlrs.de>, "Jeff Squyres" <jsquyres at cisco.com>, "Andreas
>> Knüpfer" <andreas.knuepfer at tu-dresden.de>, "Todd Gamblin" <tgamblin at llnl.gov>, "Tobias Hilbrich"
>> <tobias.hilbrich at tu-dresden.de>, "MPI-3 Fortran working group" <mpi3-fortran at lists.mpi-forum.org>
>> Sent: Sunday, October 2, 2011 6:13:46 AM
>> Subject: Re: [Mpi3-tools] Questions on the F2008 profiling interface issues
>> Hi Marc-Andre,
>> 
>> On Sep 30, 2011, at 6:27 AM, Marc-Andre Hermanns wrote:
>> 
>>> Martin,
>>> 
>>>> I was in Dresden the last few days (visiting the Vampir group, in
>>>> particular Andreas Knuepfer and Tobias Hilbrich who are responsible
>>>> for
>>>> their MPI wrappers) and we sat down to talk about the issues around
>>>> the
>>>> profiling interface in ticket #229.
>>>> 
>>>> Here is a quick summary with a few questions that came up:
>>>> 
>>>> * With the new Fortran symbol naming variants, there are 6, 8, or 10
>>>> versions of every MPI subroutine. Tools could intercept all versions
>>>> by providing their own symbols for all of them.
>>> 
>>> If it is clear which names the symbols have, additional symbol names
>>> should be easy to generate, so at this stage I think it doesn't matter
>>> much whether we have 6, 8, or 10 different schemes to cover. Right?
>> 
>> I agree - as long as we can generate a full list of names, we can
>> intercept them all. Adding wrappers shouldn't be a problem.
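>>
>> For example (a sketch in C; the set of manglings shown is hypothetical
>> and implementation dependent), each wrapper variant would forward to
>> the PMPI symbol with the same mangling:
>>
>>   #include <mpi.h>
>>
>>   /* PMPI entry points, one per name-mangling scheme. */
>>   extern void pmpi_barrier_(MPI_Fint *comm, MPI_Fint *ierror);
>>   extern void pmpi_barrier__(MPI_Fint *comm, MPI_Fint *ierror);
>>   extern void PMPI_BARRIER(MPI_Fint *comm, MPI_Fint *ierror);
>>
>>   static void record_event(const char *name)
>>   {
>>       (void) name;  /* measurement code goes here */
>>   }
>>
>>   void mpi_barrier_(MPI_Fint *comm, MPI_Fint *ierror)
>>   { record_event("MPI_Barrier"); pmpi_barrier_(comm, ierror); }
>>
>>   void mpi_barrier__(MPI_Fint *comm, MPI_Fint *ierror)
>>   { record_event("MPI_Barrier"); pmpi_barrier__(comm, ierror); }
>>
>>   void MPI_BARRIER(MPI_Fint *comm, MPI_Fint *ierror)
>>   { record_event("MPI_Barrier"); PMPI_BARRIER(comm, ierror); }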
>> 
>>> 
>>>> Now the problem: within the wrapper function, a tool needs to call
>>>> the correct PMPI routine, which usually has the same suffix ('_f08',
>>>> '_f', '__', etc.) as the MPI call being wrapped. A mapping to matching
>>>> C functions (as most tools do it now) is not possible due to problems
>>>> with calling conventions for callback routines.
>>> 
>>> As far as I know, our wrappers call the C routines, because we jump
>>> through all sorts of hoops to get a proper Fortran-to-C conversion and
>>> back. So in this regard, we need to fix this. Interestingly, this has
>>> not popped up yet.
>> 
>> I agree - this is remarkable.
>> 
>>> 
>>> If we write Fortran wrappers in Fortran, how would a Fortran wrapper
>>> compiled with Fortran 77 have to call the C measurement code inside the
>>> wrapper? If the Fortran compiler implicitly adds underscores to the
>>> calls, do I then have to provide a special Fortran interface to my
>>> measurement system?
>> 
>> That is a question for the Fortran experts.
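>>
>> One approach that might work (a sketch in C; the symbol names and the
>> set of manglings are hypothetical): export the measurement entry point
>> under the manglings a Fortran compiler may generate, so that no
>> separate Fortran interface is needed:
>>
>>   /* The tool's internal measurement routine. */
>>   static void tool_enter_impl(int region_id)
>>   {
>>       (void) region_id;  /* record the region entry here */
>>   }
>>
>>   /* Fortran passes arguments by reference; cover the common
>>      manglings of "CALL TOOL_ENTER(id)". */
>>   void tool_enter(int *id)   { tool_enter_impl(*id); }
>>   void tool_enter_(int *id)  { tool_enter_impl(*id); }
>>   void tool_enter__(int *id) { tool_enter_impl(*id); }
>>   void TOOL_ENTER(int *id)   { tool_enter_impl(*id); }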
>> 
>>> 
>>>> * Would the following solution work? All wrappers for Fortran are
>>>> written in Fortran instead of C.
>>>> [...]
>>>> For every MPI function, there are 6 or 8 variants, and all of them
>>>> need to be provided by a tool. This would probably mean that we have
>>>> to compile the wrappers with all three calling conventions (mpif.h,
>>>> use mpi, and use mpi_f08) and then link them together?
>>> 
>>> Don't we need to compile with all three calling conventions on all
>>> possible compilers on a given system?
>> 
>> Also, here, I think the Fortran experts need to weigh in.
>> 
>>> 
>>> We currently provide all 4 name-mangling schemes with every install of
>>> Scalasca, because we had some issues with applications using two
>>> different Fortran compilers for different sub-modules. (This came up in
>>> a project some years back, and ever since then we provide all schemes
>>> in the same library, and it seems to work.)
>>> 
>>>> * As discussed at the forum, if we want to keep our tools in C (which
>>>> is probably still the easier and better option), we need to know the
>>>> symbols on the "p" side, and we need a portable way to figure this
>>>> out. We talked about the solution of two groups of routines - one for
>>>> which we have _f08 symbols and one for which we don't. Rolf, how will
>>>> you add this to the standard, and in particular how will you form
>>>> those groups?
>>> 
>>> I have started to have some doubts about this approach. One of the
>>> issues was that some choice-buffer routines had names longer than the
>>> proposed limit, so they need to be mangled differently; in particular,
>>> their symbols need to be shortened. And as I understood it, the current
>>> idea is to leave it to the MPI implementation to define the shortened
>>> names.
>> 
>> I thought that was just for the argument checking in mpif.h, and that
>> we decided not to support this. Are there more locations where the
>> names are too long?
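>>
>> Coming back to the probing question: something like the following
>> could serve as a portable check (a sketch in C using POSIX dlsym; the
>> candidate names are hypothetical). With the proposed grouping, probing
>> the peer routine of a group decides availability for the whole group:
>>
>>   #include <stdio.h>
>>   #include <dlfcn.h>   /* POSIX; link with -ldl */
>>
>>   int main(void)
>>   {
>>       void *self = dlopen(NULL, RTLD_LAZY);  /* the running program */
>>       const char *candidates[] =
>>           { "PMPI_Isend_f08", "pmpi_isend_", "pmpi_isend__" };
>>       for (int i = 0; i < 3; i++)
>>           printf("%-16s %s\n", candidates[i],
>>                  dlsym(self, candidates[i]) ? "present" : "absent");
>>       dlclose(self);
>>       return 0;
>>   }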
>> 
>>> 
>>> Finding out which suffix is used is one thing. Figuring out how
>>> function names are mangled, without any restriction to a set of
>>> possible schemes, seems to be another ball game.
>>> 
>>> Am I missing something?
>> 
>> Yes, if we do have to shorten names, I agree. I think, though, that we
>> got around the issue - or did we not?
>> 
>>> 
>>> I think, as was pointed out in Santorini for the Fortran bindings in
>>> general, we (or at least I) don't have enough Fortran expertise to
>>> think this through to the end and consider all the implications. I
>>> think it would be a great benefit to have people from the Fortran WG
>>> join the call on Monday. I hope this is possible.
>> 
>> Yes, that would be great - all are cc-ed.
>> 
>> Martin
>> 
>> 
>>> 
>>> Cheers,
>>> Marc-Andre
>>> --
>>> Marc-Andre Hermanns
>>> German Research School for
>>> Simulation Sciences GmbH
>>> c/o Laboratory for Parallel Programming
>>> 52056 Aachen | Germany
>>> 
>>> Tel +49 241 80 99753
>>> Fax +49 241 80 6 99753
>>> Web www.grs-sim.de
>>> 
>>> Members: Forschungszentrum Jülich GmbH | RWTH Aachen University
>>> Registered in the commercial register of the local court of
>>> Düren (Amtsgericht Düren) under registration number HRB 5268
>>> Registered office: Jülich
>>> Executive board: Prof. Marek Behr Ph.D. | Dr. Norbert Drewes
>>> 
>> 
>> ________________________________________________________________________
>> Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulzm
>> CASC @ Lawrence Livermore National Laboratory, Livermore, USA
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> 
> _______________________________________________
> Mpi3-tools mailing list
> Mpi3-tools at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-tools


-- 
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/




