[Mpiwg-large-counts] A perfect trick in C with an API similar to Fortran overloading - instead of the many new _l functions in C
Rolf Rabenseifner
rabenseifner at hlrs.de
Sun Oct 20 13:25:36 CDT 2019
Dear all,
a participant in my latest course asked me whether we should not simply
substitute MPI_Count for int in all "int xxxxx" input arguments.
The compiler would then automatically cast int to MPI_Count, which may be
a fast instruction. We would need the new _x or _l functions only for those
rare functions with MPI_Count as an output or array argument.
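To make this concrete, a minimal sketch (send_sketch and the Count
typedef are hypothetical stand-ins, assuming MPI_Count is a 64-bit
integer type; this is not the real binding):

#include <stdint.h>
#include <stdio.h>

typedef int64_t Count;   /* hypothetical stand-in for MPI_Count */

/* Old binding:  int send_sketch(const void *buf, int count);
   New binding, with the count parameter widened in place: */
static int send_sketch(const void *buf, Count count)
{
    (void)buf;
    printf("count = %lld\n", (long long)count);
    return 0;
}

int main(void)
{
    char buf[8];
    int n = 42;           /* existing user code still passes an int */
    send_sketch(buf, n);  /* implicit int -> Count conversion; on most
                             targets a single sign-extension instruction */
    return 0;
}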
Was this ever discussed?
Second question:
Does the large count working group plan to use MPI_Count only for data,
i.e., only in communication, I/O, and derived datatype routines?
Is it correct that the following will stay int:
- the number of request handles in request arrays,
- ranks and the size in/of communicator and group handles?
How do we handle disp in 1-sided communication?
With this approach, it seems that an explicit MPI_Count version of
a routine is needed only for the following (see the sketch after the list):
MPI_[I][NEIGHBOR_]{ALLTOALL|ALLGATHER}{V|W}
MPI_[I]{GATHER|SCATTER}V
MPI_[I]REDUCE_SCATTER
(18 routines)
MPI_WIN_SHARED_QUERY --> *disp
(1 routine)
MPI_[UN]PACK
MPI_PACK_SIZE (the _x/_l version should use MPI_Aint, same as for MPI_...PACK..._external)
(3 routines)
MPI_GET_COUNT
MPI_GET_ELEMENTS (already has an _X version)
MPI_TYPE_SIZE (already has an _X version)
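What the routines above have in common is that the count is not a plain
by-value input, so the implicit cast cannot help. A sketch (hypothetical
signatures loosely modeled on MPI_ALLTOALLV and MPI_GET_COUNT, not the
real bindings):

#include <stdint.h>

typedef int64_t Count;   /* hypothetical stand-in for MPI_Count */

/* Array argument, as in the V/W collectives: an int[] cannot be passed
   where a Count[] is expected; there is no implicit element-wise
   conversion between arrays of different element types. */
static void alltoallv_sketch(const Count counts[], int n)
{
    (void)counts; (void)n;
}

/* Output argument, as in MPI_GET_COUNT: an int* cannot be passed where
   a Count* is expected; the callee would write 8 bytes through a
   pointer to a 4-byte object. */
static void get_count_sketch(Count *count)
{
    *count = (Count)1 << 40;   /* a value that does not fit in an int */
}

int main(void)
{
    Count big_counts[4] = {1, 2, 3, 4};
    alltoallv_sketch(big_counts, 4);       /* fine: types match */
    /* int small_counts[4] = {1, 2, 3, 4};
       alltoallv_sketch(small_counts, 4);     error: int* is not Count* */

    Count c;
    get_count_sketch(&c);
    /* int small;
       get_count_sketch(&small);              error: int* is not Count* */
    return 0;
}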
What solution exists for MPI_CONVERSION_FN_NULL?
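For context: here the count is baked into a user-supplied callback type,
so no implicit cast at any call site can help; widening it changes the
function-pointer type itself and would break every existing callback
(and the matching MPI_CONVERSION_FN_NULL constant). The MPI-3.1 callback
type looks like this (shown under a sketch_ name so it does not clash
with the typedef in mpi.h):

#include <mpi.h>

/* The "int count" is part of the user function's own signature: */
typedef int sketch_datarep_conversion_fn(
    void *userbuf, MPI_Datatype datatype, int count,
    void *filebuf, MPI_Offset position, void *extra_state);

/* A callback taking "MPI_Count count" would have a different,
   incompatible function-pointer type, so a parallel callback typedef
   (and a matching null constant) would presumably be needed. */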
All other ~100 routines with "int count" or "int size" arguments seem
to be solved by directly changing these into "MPI_Count count" and "MPI_Count size".
Best regards
Rolf
----- Original Message -----
> From: "mpiwg-large-counts" <mpiwg-large-counts at lists.mpi-forum.org>
> To: "mpiwg-large-counts" <mpiwg-large-counts at lists.mpi-forum.org>
> Cc: "Jeff Squyres" <jsquyres at cisco.com>
> Sent: Saturday, October 19, 2019 10:26:23 PM
> Subject: [Mpiwg-large-counts] Pythonizing instructions
> As discussed in our last meeting, I've written a first take at instructions for
> Chapter Authors for how to write Pythonized functions.
>
> These instructions are not yet intended for a wide audience -- they are intended
> solely for the working group. Let's iterate on them here in the WG and get
> them to a good enough state before sharing them widely:
>
> https://github.com/mpiwg-large-count/large-count-issues/wiki/pythonizing
>
> Puri/Tony/Dan: please try to use these instructions to Pythonize a few bindings.
> The PR for the Pythonization is
> https://github.com/mpiwg-large-count/mpi-standard/pull/2. All even-numbered
> chapters are done, and a small number of odd-numbered chapters. Grep for
> `funcdef` in chapter *.tex files to find a chapter that hasn't been Pythonized
> yet (funcdef is the LaTeX macro used for the hard-coded bindings).
>
> Send your feedback here and/or we'll discuss on Wednesday.
>
> --
> Jeff Squyres
> jsquyres at cisco.com
>
> _______________________________________________
> mpiwg-large-counts mailing list
> mpiwg-large-counts at lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-large-counts
--
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de .
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .