<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
I agree with Hammy.<br class="">
<br class="">
I think we all agree that adding a bunch of new _x and/or _l symbols sucks. But it's unfortunately the least evil solution. :-\<br class="">
<br class="">
C11 was our best hope to avoid that (from the user perspective at least), but that didn't work out.<br class="">
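A minimal sketch of that C11 idea, for the record: _Generic selection on the static type of the count argument. The MPI_Send_int / MPI_Send_l back-end names here are hypothetical, not from any draft:<br class="">
<br class="">
#include &lt;mpi.h&gt;<br class="">
<br class="">
/* Hypothetical back-end entry points, one per count type. */<br class="">
int MPI_Send_int(const void *buf, int count, MPI_Datatype type,<br class="">
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; int dest, int tag, MPI_Comm comm);<br class="">
int MPI_Send_l(const void *buf, MPI_Count count, MPI_Datatype type,<br class="">
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; int dest, int tag, MPI_Comm comm);<br class="">
<br class="">
/* C11 _Generic picks the entry point from the type of count. */<br class="">
#define MPI_Send(buf, count, type, dest, tag, comm) \<br class="">
&nbsp;&nbsp;&nbsp; _Generic((count), MPI_Count: MPI_Send_l, \<br class="">
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; default: MPI_Send_int) \<br class="">
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; (buf, count, type, dest, tag, comm)<br class="">
<br class="">
One visible weakness of the sketch itself: _Generic matches exact static types, so a count declared as, e.g., size_t or long falls into the default int path and silently truncates.<br class="">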
<br class="">
<img src="https://i.kym-cdn.com/photos/images/newsfeed/000/063/388/sadpandaneeds128614788256345848.jpg" alt="Image result for sad panda needs a hug gif" class=""><br class="">
<br class="">
<br class="">
<blockquote type="cite" class="">On Oct 21, 2019, at 2:43 PM, Jeff Hammond <<a href="mailto:jeff.science@gmail.com" class="">jeff.science@gmail.com</a>> wrote:<br class="">
<br class="">
If users want such a preprocessor solution, they should get it from a third-party implementation. I do not think it belongs in the MPI standard. <br class="">
<br class="">
Jeff<br class="">
<br class="">
<blockquote type="cite" class="">On Oct 21, 2019, at 11:40 AM, Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de" class="">rabenseifner@hlrs.de</a>> wrote:<br class="">
<br class="">
Dear all,<br class="">
<br class="">
Yes, I understand. <br class="">
<br class="">
And providing the new API in a new header mpi_x.h, but with the same function names<br class="">
MPI_Send ... is then also an evil.<br class="">
Especially because the two versions, MPI_Send(.., int count, ..) from mpi.h<br class="">
and MPI_Send(.., MPI_Count count, ..) from mpi_x.h, cannot coexist<br class="">
within the same library, because C does not allow defining an ABI function name<br class="">
that differs from the API function name.<br class="">
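<br class="">
In other words, since C has no overloading, both prototypes denote the same linker symbol. A sketch of the conflict (signatures abbreviated, stand-in typedef instead of mpi.h):<br class="">
<br class="">
typedef long long MPI_Count;&nbsp; /* stand-in; really defined in mpi.h */<br class="">
<br class="">
int MPI_Send(const void *buf, int count);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* "mpi.h" variant&nbsp;&nbsp; */<br class="">
int MPI_Send(const void *buf, MPI_Count count);&nbsp; /* "mpi_x.h" variant */<br class="">
/* The second declaration is rejected ("conflicting types for 'MPI_Send'"),<br class="">
&nbsp;&nbsp; and even in separate translation units both would have to link against<br class="">
&nbsp;&nbsp; the single symbol MPI_Send in the library. */<br class="">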
<br class="">
Therefore next proposal:<br class="">
<br class="">
We do what we planned, but can additionally provide an mpi_l.h header file<br class="">
containing only a set of aliases:<br class="">
#define MPI_Send(...) MPI_Send_l(__VA_ARGS__)<br class="">
...<br class="">
<br class="">
Users who have somehow changed all their 4-byte integers into 8-byte integers<br class="">
can then directly use the new routines without changing all their MPI function calls.<br class="">
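<br class="">
A minimal sketch of such an mpi_l.h under this proposal (the set of _l names is assumed, not final):<br class="">
<br class="">
/* mpi_l.h -- opt-in aliases from the classic names to the _l bindings.<br class="">
&nbsp;&nbsp; Needs C99 variadic macros; __VA_ARGS__ forwards all arguments. */<br class="">
#ifndef MPI_L_H<br class="">
#define MPI_L_H<br class="">
#include &lt;mpi.h&gt;<br class="">
<br class="">
#define MPI_Send(...) MPI_Send_l(__VA_ARGS__)<br class="">
#define MPI_Recv(...) MPI_Recv_l(__VA_ARGS__)<br class="">
/* ... one alias per large-count binding ... */<br class="">
<br class="">
#endif /* MPI_L_H */<br class="">
<br class="">
One limitation: a function-like macro rewrites only call sites, so taking the address of MPI_Send, or calling through a function pointer, still reaches the old int-count symbol.<br class="">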
<br class="">
As far as I understand, this add-on proposal would not fall into the evil category,<br class="">
or would it?<br class="">
<br class="">
Best regards<br class="">
Rolf<br class="">
<br class="">
<br class="">
----- Original Message -----<br class="">
<blockquote type="cite" class="">From: "Jeff Squyres" <<a href="mailto:jsquyres@cisco.com" class="">jsquyres@cisco.com</a>><br class="">
To: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" class="">mpiwg-large-counts@lists.mpi-forum.org</a>><br class="">
Cc: "Rolf Rabenseifner" <<a href="mailto:rabenseifner@hlrs.de" class="">rabenseifner@hlrs.de</a>>, "Jeff Hammond" <<a href="mailto:jeff.science@gmail.com" class="">jeff.science@gmail.com</a>><br class="">
Sent: Monday, October 21, 2019 3:41:53 PM<br class="">
Subject: Re: [Mpiwg-large-counts] A perfect trick in C with similar API as with Fortran overloading - instead of the<br class="">
many new _l functions in C<br class="">
</blockquote>
<br class="">
<blockquote type="cite" class="">I agree: breaking ABI in this way is truly evil.<br class="">
<br class="">
Specifically: compile your app with implementation version X. Then run your app<br class="">
with same implementation, but version Y (this is not uncommon for commercial<br class="">
ISV MPI apps). Things fail in weird, non-obvious ways (or worse, they *don't*<br class="">
fail, but silently give you wrong results).<br class="">
<br class="">
Speaking as someone who writes code for a living, changing the signature of an<br class="">
existing, well-established function is evil. Changing it in ways that a<br class="">
compiler may not even warn you about (in forward- and backward-compatibility<br class="">
scenarios) is TRULY evil.<br class="">
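<br class="">
A self-contained sketch of that silent failure mode, with a stand-in typedef so it stays runnable without MPI:<br class="">
<br class="">
#include &lt;stdio.h&gt;<br class="">
<br class="">
typedef long long MPI_Count_demo;&nbsp; /* stand-in for MPI_Count */<br class="">
<br class="">
/* "Version X" of the library still takes an int count. */<br class="">
static void send_v1(int count) { printf("library saw count = %d\n", count); }<br class="">
<br class="">
int main(void) {<br class="">
&nbsp;&nbsp;&nbsp; /* The app was written against a large-count prototype ... */<br class="">
&nbsp;&nbsp;&nbsp; MPI_Count_demo big = ((MPI_Count_demo)1 &lt;&lt; 32) + 7;<br class="">
&nbsp;&nbsp;&nbsp; /* ... but implicit narrowing means the library sees 7, with no<br class="">
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; diagnostic by default -- exactly the silent wrong answer. */<br class="">
&nbsp;&nbsp;&nbsp; send_v1(big);<br class="">
&nbsp;&nbsp;&nbsp; return 0;<br class="">
}<br class="">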
<br class="">
<br class="">
<br class="">
<br class="">
<blockquote type="cite" class="">On Oct 21, 2019, at 9:35 AM, Jeff Hammond via mpiwg-large-counts<br class="">
<<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" class="">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:<br class="">
<br class="">
ABI is very much a user problem. We are not breaking ABI for this.<br class="">
<br class="">
What is being done now is five years in the making. I’m not sure why you didn’t<br class="">
comment until recently.<br class="">
<br class="">
Jeff<br class="">
<br class="">
<br class="">
<blockquote type="cite" class="">On Oct 20, 2019, at 10:49 PM, Rolf Rabenseifner via mpiwg-large-counts<br class="">
<<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" class="">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:<br class="">
<br class="">
Dear Tony and all,<br class="">
<br class="">
<blockquote type="cite" class="">the group has labored over each area... 1 sided has had the least attention so<br class="">
far... All of MPI is to be made 64 bit clean not parts.<br class="">
</blockquote>
<br class="">
My proposal is for the whole of MPI.<br class="">
Only for a few routines do we need some additional _x versions.<br class="">
<br class="">
<blockquote type="cite" class="">Partial answer to your earliest question in the note: Changing the meaning of<br class="">
existing APIs was disallowed some time ago. Certainly, for in arguments of<br class="">
count , replacing int with MPI_Count is a partial solution. But it changes the<br class="">
APIs ...<br class="">
</blockquote>
<br class="">
The goal is backward compatibility.<br class="">
And this is provided.<br class="">
<br class="">
<blockquote type="cite" class="">In C, a program written for the assumption of big count and compiled<br class="">
accidentally with an MPI-3 compliant MPI will silently build and fail at<br class="">
runtime ... rounding of integers ...<br class="">
</blockquote>
<br class="">
Let's recommend to users of the large count feature that they add a<br class="">
check of the MPI version. In C, it can be tested at compile time with cpp.<br class="">
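<br class="">
For example, using the standard MPI_VERSION macro from mpi.h (the assumption that the large count bindings arrive in MPI 4.0 is mine):<br class="">
<br class="">
#include &lt;mpi.h&gt;<br class="">
<br class="">
/* Fail the build early if the library predates large counts. */<br class="">
#if MPI_VERSION &lt; 4<br class="">
#error "This code requires the large-count (MPI_Count) bindings."<br class="">
#endif<br class="">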
<br class="">
<blockquote type="cite" class="">and changing the AbI violates the ABI for tools and such — two possible<br class="">
reasons why not chosen ... but we will have to go refresh on all the reasons .<br class="">
</blockquote>
<br class="">
Not a user problem ...<br class="">
<br class="">
<blockquote type="cite" class="">I am sure you will get extensive feedback on these questions from the whole<br class="">
group? :-)<br class="">
</blockquote>
<br class="">
And also from the plenary.<br class="">
<br class="">
Best regards<br class="">
Rolf<br class="">
<br class="">
----- Tony Skjellum <<a href="mailto:skjellum@gmail.com" class="">skjellum@gmail.com</a>> wrote:<br class="">
<blockquote type="cite" class="">Rolf, the group has labored over each area... 1 sided has had the least<br class="">
attention so far... All of MPI is to be made 64 bit clean not parts.<br class="">
<br class="">
Partial answer to your earliest question in the note: Changing the meaning of<br class="">
existing APIs was disallowed some time ago. Certainly, for IN arguments of<br class="">
count, replacing int with MPI_Count is a partial solution. But it changes the<br class="">
APIs ...<br class="">
<br class="">
In C, a program written for the assumption of big count and compiled<br class="">
accidentally with an MPI-3 compliant MPI will silently build and fail at<br class="">
runtime ... rounding of integers ... and changing the ABI violates the ABI for<br class="">
tools and such — two possible reasons why not chosen ... but we will have to<br class="">
go refresh on all the reasons.<br class="">
<br class="">
I am sure you will get extensive feedback on these questions from the whole<br class="">
group? :-)<br class="">
<br class="">
Tony<br class="">
<br class="">
Anthony Skjellum, PhD<br class="">
205-807-4968<br class="">
<br class="">
<br class="">
<blockquote type="cite" class="">On Oct 20, 2019, at 2:25 PM, Rolf Rabenseifner via mpiwg-large-counts<br class="">
<<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" class="">mpiwg-large-counts@lists.mpi-forum.org</a>> wrote:<br class="">
<br class="">
Dear all,<br class="">
<br class="">
a participant in my latest course asked me why we are not<br class="">
substituting MPI_Count for int in all "int xxxxx" input arguments.<br class="">
The compiler would automatically cast to MPI_Count, which may be a fast<br class="">
instruction. Then we would need the new _x or _l functions only for those<br class="">
rare functions with MPI_Count as an output or array argument.<br class="">
<br class="">
Was this ever discussed?<br class="">
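<br class="">
A runnable sketch of the implicit widening this relies on, with a stand-in typedef instead of a real MPI signature change:<br class="">
<br class="">
#include &lt;stdio.h&gt;<br class="">
<br class="">
typedef long long MPI_Count_demo;&nbsp; /* stand-in for MPI_Count */<br class="">
<br class="">
/* Hypothetical "new" signature: the count widened from int. */<br class="">
static void send_demo(MPI_Count_demo count) { printf("count = %lld\n", count); }<br class="">
<br class="">
int main(void) {<br class="">
&nbsp;&nbsp;&nbsp; int legacy_count = 1000;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* old 4-byte count */<br class="">
&nbsp;&nbsp;&nbsp; MPI_Count_demo big = (MPI_Count_demo)1 &lt;&lt; 33;&nbsp; /* &gt; INT_MAX */<br class="">
&nbsp;&nbsp;&nbsp; send_demo(legacy_count);&nbsp; /* int is widened implicitly; call unchanged */<br class="">
&nbsp;&nbsp;&nbsp; send_demo(big);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* large counts work under the same name */<br class="">
&nbsp;&nbsp;&nbsp; return 0;<br class="">
}<br class="">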
<br class="">
Second question:<br class="">
Does the large count working group plan to use MPI_Count only for data,<br class="">
i.e., only in communication, I/O, and derived datatype routines?<br class="">
<br class="">
Is it correct that the following will stay int:<br class="">
- the number of request handles in request arrays,<br class="">
- ranks and sizes in/of communicator and group handles?<br class="">
<br class="">
How do we handle disp in 1-sided communication?<br class="">
<br class="">
With this approach, it seems that explicit MPI_Count versions of<br class="">
a subroutine are needed only for the following (a sketch of why<br class="">
output arguments force new symbols follows the list):<br class="">
MPI_[I][NEIGHBOR_]{ALLTOALL|ALLGATHER}{V|W}<br class="">
MPI_[I]{GATHER|SCATTER}V<br class="">
MPI_[I]REDUCE_SCATTER<br class="">
(18 routines)<br class="">
MPI_WIN_SHARED_QUERY --> *disp<br class="">
(1 routine)<br class="">
MPI_[UN]PACK<br class="">
MPI_PACK_SIZE (the _x/_l versions should use MPI_Aint, same as for<br class="">
MPI_...PACK..._external)<br class="">
(3 routines)<br class="">
MPI_GET_COUNT<br class="">
MPI_GET_ELEMENTS (has already an _X version)<br class="">
MPI_TYPE_SIZE (has already an _X version)<br class="">
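<br class="">
The output-argument case in a runnable sketch (the names and the _l suffix are hypothetical; stand-in types instead of MPI):<br class="">
<br class="">
#include &lt;stdio.h&gt;<br class="">
<br class="">
typedef long long MPI_Count_demo;&nbsp; /* stand-in for MPI_Count */<br class="">
<br class="">
/* An input int widens implicitly, but an int* output cannot hold an<br class="">
&nbsp;&nbsp; 8-byte result, so a second entry point is unavoidable here. */<br class="">
static void get_count(int *count)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; { *count = 42; }<br class="">
static void get_count_l(MPI_Count_demo *count) { *count = (MPI_Count_demo)1 &lt;&lt; 33; }<br class="">
<br class="">
int main(void) {<br class="">
&nbsp;&nbsp;&nbsp; int small;<br class="">
&nbsp;&nbsp;&nbsp; MPI_Count_demo big;<br class="">
&nbsp;&nbsp;&nbsp; get_count(&amp;small);&nbsp;&nbsp; /* existing binding: limited to int range */<br class="">
&nbsp;&nbsp;&nbsp; get_count_l(&amp;big);&nbsp;&nbsp; /* new _l binding: full 64-bit result&nbsp;&nbsp;&nbsp;&nbsp; */<br class="">
&nbsp;&nbsp;&nbsp; printf("%d %lld\n", small, big);<br class="">
&nbsp;&nbsp;&nbsp; return 0;<br class="">
}<br class="">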
<br class="">
Which solution exists for MPI_CONVERSION_FN_NULL?<br class="">
<br class="">
All other 100 routines with "int count" and "int size" seem<br class="">
to be solvable by directly changing these into "MPI_Count count" and "MPI_Count<br class="">
size".<br class="">
<br class="">
Best regards<br class="">
Rolf<br class="">
<br class="">
----- Original Message -----<br class="">
<blockquote type="cite" class="">From: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" class="">mpiwg-large-counts@lists.mpi-forum.org</a>><br class="">
To: "mpiwg-large-counts" <<a href="mailto:mpiwg-large-counts@lists.mpi-forum.org" class="">mpiwg-large-counts@lists.mpi-forum.org</a>><br class="">
Cc: "Jeff Squyres" <<a href="mailto:jsquyres@cisco.com" class="">jsquyres@cisco.com</a>><br class="">
Sent: Saturday, October 19, 2019 10:26:23 PM<br class="">
Subject: [Mpiwg-large-counts] Pythonizing instructions<br class="">
</blockquote>
<br class="">
<blockquote type="cite" class="">As discussed in our last meeting, I've written a first take at instructions for<br class="">
Chapter Authors for how to write Pythonized functions.<br class="">
<br class="">
These instructions are not yet intended for a wide audience -- they are intended<br class="">
solely for the working group. Let's iterate on them here in the WG and get<br class="">
them to a good enough state before sharing them widely:<br class="">
<br class="">
<a href="https://github.com/mpiwg-large-count/large-count-issues/wiki/pythonizing" class="">https://github.com/mpiwg-large-count/large-count-issues/wiki/pythonizing</a><br class="">
<br class="">
Puri/Tony/Dan: please try to use these instructions to Pythonize a few bindings.<br class="">
The PR for the Pythonization is<br class="">
https://github.com/mpiwg-large-count/mpi-standard/pull/2. All even-numbered<br class="">
chapters are done, and a small number of odd-numbered chapters. Grep for<br class="">
`funcdef` in chapter *.tex files to find a chapter that hasn't been Pythonized<br class="">
yet (funcdef is the LaTeX macro used for the hard-coded bindings).<br class="">
<br class="">
Send your feedback here and/or we'll discuss on Wednesday.<br class="">
<br class="">
</blockquote>
<br class="">
</blockquote>
</blockquote>
<br class="">
</blockquote>
</blockquote>
<br class="">
<br class="">
</blockquote>
<br class="">
-- <br class="">
Dr. Rolf Rabenseifner . . . . . . . . . .. <a href="mailto:rabenseifner@hlrs.de" class="">
email rabenseifner@hlrs.de</a> .<br class="">
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .<br class="">
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .<br class="">
Head of Dpmt Parallel Computing . . . <a href="http://www.hlrs.de/people/rabenseifner" class="">
www.hlrs.de/people/rabenseifner</a> .<br class="">
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .<br class="">
</blockquote>
</blockquote>
<br class="">
<div class=""><br class="">
-- <br class="">
Jeff Squyres<br class="">
<a href="mailto:jsquyres@cisco.com" class="">jsquyres@cisco.com</a><br class="">
</div>
<br class="">
</body>
</html>