[mpiwg-abi] MPI ABI WG meeting to discuss constants
Koziol, Quincey
qkoziol at amazon.com
Sat Apr 15 15:59:54 CDT 2023
Jeff - All your changes make reasonable sense to me.
Quincey
On Apr 12, 2023, at 2:30 AM, Jeff Hammond <jeff.science at gmail.com> wrote:
Here is a summary of the meeting and related Slack discussion.
During the meeting, I presented my proposal for integer and handle constants. However, in response to feedback, I made significant changes to it, which I will describe below.
There was some debate about using the Huffman code directly versus a table. Joseph wants a table, and one that is smaller than 1024 entries. I have changed the Huffman code so that the maximum table size is not large, and is furthermore amenable to a two-level table that is even smaller, if someone wants to trade time for space.
We also discussed integer constants, particularly the ones that have interesting positive values, e.g. MPI_BSEND_OVERHEAD. Dan and I agree that we should have a way to query the exact value, which is less than the ABI value, which needs to be an upper bound on all implementations. Dan and I disagree on how to do this. I am going to propose attributes similar to MPI_TAG_UB. Dan or others may propose info or something else. I encourage everyone to look at https://github.com/mpiwg-abi/specification-text-draft/blob/main/IntegerConstants.md and complain about things they don't like.
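To make the upper-bound contract concrete, here is a minimal sketch (Python; the query mechanism and the value 96 are placeholders of mine, not part of the proposal):

```python
# ABI-mandated constant: an upper bound across all implementations.
MPI_BSEND_OVERHEAD = 512

def query_exact_bsend_overhead():
    """Placeholder for a runtime query (e.g. an attribute akin to
    MPI_TAG_UB) returning the loaded implementation's exact overhead."""
    return 96  # illustrative value only

exact = query_exact_bsend_overhead()
# A correct implementation never needs more than the ABI constant,
# so buffers sized with MPI_BSEND_OVERHEAD always suffice.
assert exact <= MPI_BSEND_OVERHEAD
```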
After some pondering, I concluded that we should not encode the size of types into handles if the types are not strictly fixed-size by the language. Consider MPI_LONG, for example. Once we fix the platform ABI, e.g. LP64, we know sizeof(long) and thus could encode it in MPI_LONG. However, this would cause two problems: first, the value of MPI_LONG would not be a strict constant, and second, third-party languages would need to know the behavior of C to compute the value of MPI_LONG. Lisandro made the point that encoding sizes would also create problems for heterogeneous MPI (i.e. ILP32 on one end and LP64 on the other) and related activities, and I do not want to break that. Thus, implementations will need to figure out the size of MPI_LONG and related types themselves. Fortunately, message-rate benchmarks almost always use MPI_CHAR, which is a fixed-size type.
The latest version of the Huffman code is in https://github.com/mpiwg-abi/specification-text-draft/pull/6/files, and it has the following features:
- I made minor changes to the MPI_Op path, because I want to be consistent that MPI_*_NULL is always the zero bit pattern other than the handle type part.
- Fixed-size datatypes are slightly changed, and exist in the range [576,747].
- Other datatypes changed a lot, because many C/C++ types are not actually fixed-size. These types are also more table-friendly, as they only span the range [512,572].
There are a few different implementations that make sense here:
1. All handles in a single table of 1024 entries (where the values are whatever the implementation uses internally: MPICH -> int; OMPI -> pointer), which is ~90% sparse. The worst-case storage requirement is 8 KiB, which is not onerous.
2. Datatypes fit into a table of 236 entries, which can be compressed by using an 8-bit hash (internal = table[indirection[handle & 0b11111111]]) into a table of ~71 entries.
3. Datatypes use a mostly dense table of 60 entries for non-fixed-size types and something else for the fixed-size types.
4. The Huffman code is used directly, which requires a handful of register bit-mask/shift ops to look up internal handle values.
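Option 2's two-level lookup can be sketched as follows (Python; the internal values are made up, and I am assuming datatype handles in a range like [512,747] whose low 8 bits are collision-free, as the ranges above suggest):

```python
# Illustrative mapping from a few ABI handle values to internal values.
abi_to_internal = {512: 1001, 513: 1002, 576: 2001, 747: 2172}

# Level 1: an indirection array indexed by the low 8 bits of the handle,
# mapping the sparse handle space onto a dense table of internal values.
indirection = [0] * 256        # 0 = "unused slot"
table = [None]                 # dense table; entry 0 is a sentinel
for handle, internal in sorted(abi_to_internal.items()):
    indirection[handle & 0xFF] = len(table)
    table.append(internal)

def lookup(handle):
    # internal = table[indirection[handle & 0b11111111]]
    return table[indirection[handle & 0xFF]]

assert lookup(576) == 2001
```

This trades one extra memory access for a much smaller dense table, which is the time-for-space trade mentioned above.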
There is an open question about whether we should make the buffer address constants equivalent to zero when they have the same effect as C NULL, or whether we should give them distinct values for debugging purposes.
I am inclined to make all of the following zero except IN_PLACE, UNWEIGHTED and WEIGHTS_EMPTY, but I am happy to hear feedback to the contrary.
MPI_STATUS_IGNORE = 0
MPI_STATUSES_IGNORE = 0
MPI_ERRCODES_IGNORE = 0
MPI_ARGV_NULL = 0
MPI_ARGVS_NULL = 0
MPI_IN_PLACE != MPI_BOTTOM = 0
MPI_WEIGHTS_EMPTY != MPI_UNWEIGHTED != 0
Jeff
On Tue, Apr 4, 2023 at 6:05 PM Jeff Hammond <jehammond at nvidia.com> wrote:
I apologize for my incompetence with meeting scheduling and attendance this week. Hopefully this time I am able to get the timing correct and also attend.
The topics / decisions of interest are:
A. Integer constants
We have constants that must be powers of two (the mode constants), constants that must satisfy a < ordering (error codes and thread levels), constants that should be negative to avoid colliding with valid values such as ranks, and constants that can be anything.
I have a proposal for constants here: https://github.com/mpiwg-abi/specification-text-draft/blob/main/IntegerConstants.md. I have not yet implemented Dan's suggestion to make constants unique (presumably, for the aforementioned categories that allow it).
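The invariants behind those categories can be sketched as quick checks (Python; the concrete values are illustrative, not the proposal's):

```python
# Mode constants: distinct powers of two, so they can be OR-ed together.
modes = {"MPI_MODE_CREATE": 1, "MPI_MODE_APPEND": 2, "MPI_MODE_RDONLY": 4}
assert all(v > 0 and v & (v - 1) == 0 for v in modes.values())
assert len(set(modes.values())) == len(modes)

# Thread levels: the < ordering must hold.
SINGLE, FUNNELED, SERIALIZED, MULTIPLE = 0, 1, 2, 3
assert SINGLE < FUNNELED < SERIALIZED < MULTIPLE

# Sentinels like MPI_ANY_SOURCE: negative, so they can never collide
# with a valid rank (ranks are always >= 0).
MPI_ANY_SOURCE = -2  # illustrative value
assert MPI_ANY_SOURCE < 0
```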
I would like feedback on the following:
1. Are the values of MPI_MAX_*** acceptable? In all cases, I chose the greater of MPICH and OMPI. For MPI_BSEND_OVERHEAD, I used 512 to be safe. Is that too large? Is 128 better?
2. Are there any opponents to Dan's suggestion that all the constants are unique, within reason? If the consensus favors this, I'll redo all the constants accordingly.
3. Are there any other integer constant values that bother people? I have very little attachment to any of them, and it is trivial to change them now.
B. Handle constants
I implemented a Huffman code for these (https://github.com/mpiwg-abi/specification-text-draft/blob/main/HandleConstants.md). I wrote a Python script (https://github.com/mpiwg-abi/specification-text-draft/blob/main/print-handle-constants.py) that implements it; the program dumps all the values and can easily be modified to generate mpi_abi.h.
Does anyone oppose the idea of a Huffman code? I know Hui is indifferent, which is obviously fine. One can ignore the Huffman code and just view the results as some random values I chose :-)
If you like Huffman codes, but dislike mine, then please complain soon. There are some parts that I do not love. For example, fixed-size types are handled consistently and encode their size (as the log_2 of bytes) in bits 3:5, while language-default types are on a different branch and encode their size in bits 8:10. I can make those consistent, but it means the code branches aren't sequential in the bit indices. I think that's fine, but I am new to Huffman codes.
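The two size encodings amount to a shift-and-mask in each branch (Python; the sample handle values are hypothetical, only the bit positions 3:5 and 8:10 come from the description above):

```python
def size_bytes_fixed(handle):
    # Fixed-size branch: log2(size in bytes) lives in bits 3:5.
    return 1 << ((handle >> 3) & 0b111)

def size_bytes_default(handle):
    # Language-default branch: log2(size in bytes) lives in bits 8:10.
    return 1 << ((handle >> 8) & 0b111)

# A hypothetical fixed-size handle with log2(size) = 2 in bits 3:5
# decodes to a 4-byte type.
assert size_bytes_fixed(2 << 3) == 4
```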
An alternative to the above is to say that, in mpi_abi.h, MPI_INT, MPI_LONG and MPI_LONG_LONG are aliased to MPI_INTn_T according to the ABI definition, and do not exist on their own. This has some appeal, but it would change the results of MPI_Type_get_name. What do people think about this?
The good news is there is lots of free space in the Huffman code for new handle types and new constants. I am not worried about running out of space. Already, I have reserved space for a bunch of types that are likely to exist in C and C++ in the near future, so those will be trivial to add later.
Finally, as noted on Slack, we have to figure out whether we reserve space for or standardize some types that OMPI defines today in the MPI_ namespace. This is the lowest priority for me right now, so if we don't address it this week, that is fine.
Thanks,
Jeff
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/
--
mpiwg-abi mailing list
mpiwg-abi at lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-abi