[mpiwg-abi] asynchronous voting to determine what requires more discussion (ie a meeting)
Raffenetti, Ken
raffenet at anl.gov
Mon Apr 24 09:58:35 CDT 2023
We have an item on our ABI wishlist to extend MPI_MAX_INFO_VAL once we have the chance. I believe the motivator was some RMA info extensions Min was working on years ago. So something > 1024 sounds good to me. Nothing else on our wishlist has any impact at this stage, I don't think. For reference: https://github.com/pmodels/mpich/wiki/ABI-Change-Wishlist
Ken
-----Original Message-----
From: mpiwg-abi <mpiwg-abi-bounces at lists.mpi-forum.org> on behalf of "Holmes, Daniel John" <daniel.john.holmes at intel.com>
Date: Monday, April 24, 2023 at 9:47 AM
To: Christoph Niethammer <niethammer at hlrs.de>, Jeff Hammond <jeff.science at gmail.com>
Cc: mpiwg-abi <mpiwg-abi at lists.mpi-forum.org>
Subject: Re: [mpiwg-abi] asynchronous voting to determine what requires more discussion (ie a meeting)
Or a new set of INFO keys of the form "io_node_list_part_N" where N is 1,2,3,... for as many parts as are needed to express the whole value -- the user is invited to concatenate the INFO string values together to obtain the whole node list. PRO: arbitrarily extendable; CON: faff.
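The part-key scheme could be sketched as follows, simulating an MPI_Info object with a plain dict. The helper names and splitting logic here are illustrative only, not an existing or proposed API; only the `io_node_list_part_N` key naming comes from the suggestion above.

```python
# Sketch of the "io_node_list_part_N" scheme: a value too long for
# one info entry is split across numbered part keys, and the reader
# concatenates parts 1, 2, 3, ... until one is missing.
MPI_MAX_INFO_VAL = 1024  # the per-value limit under discussion

def split_into_parts(key, value, limit=MPI_MAX_INFO_VAL):
    """Split a long value across key_part_1, key_part_2, ..."""
    return {
        f"{key}_part_{n + 1}": value[i:i + limit]
        for n, i in enumerate(range(0, len(value), limit))
    }

def join_parts(info, key):
    """Concatenate key_part_1, key_part_2, ... until a part is missing."""
    out, n = [], 1
    while f"{key}_part_{n}" in info:
        out.append(info[f"{key}_part_{n}"])
        n += 1
    return "".join(out)

# A node list far too long for a single 1024-byte value round-trips.
node_list = ",".join(f"10.0.{i // 256}.{i % 256}" for i in range(2000))
parts = split_into_parts("io_node_list", node_list)
assert join_parts(parts, "io_node_list") == node_list
```

Each part stays within MPI_MAX_INFO_VAL, which is what makes the scheme arbitrarily extendable; the cost is the bookkeeping ("faff") of probing for successive keys.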
Best wishes,
Dan.
-----Original Message-----
From: mpiwg-abi <mpiwg-abi-bounces at lists.mpi-forum.org> On Behalf Of Christoph Niethammer
Sent: 24 April 2023 15:42
To: Jeff Hammond <jeff.science at gmail.com>
Cc: mpiwg-abi <mpiwg-abi at lists.mpi-forum.org>
Subject: Re: [mpiwg-abi] asynchronous voting to determine what requires more discussion (ie a meeting)
For io_node_list, one solution I could think of is a new info key giving the path to a file containing a node list, like the PBS/Slurm node-list files. However, this might be tricky again if no shared file system is on hand.
Looking forward to Quincy's expertise in the field.
Best
Christoph
----- Original Message -----
From: "Jeff Hammond" <jeff.science at gmail.com>
To: "Christoph Niethammer" <niethammer at hlrs.de>
Cc: "mpiwg-abi" <mpiwg-abi at lists.mpi-forum.org>
Sent: Monday, 24 April, 2023 16:28:03
Subject: Re: [mpiwg-abi] asynchronous voting to determine what requires more discussion (ie a meeting)
>
> 4. Looking at the string constants I am a bit concerned about the
> choice of only 1024 for the MPI_MAX_INFO_VAL limit.
> We have, e.g., the info keys "io_node_list" or "filename" where I could imagine that we could run into problems.
> The "io_node_list" key could be a problem on larger systems that come with more than 100 cabinets - assuming hostnames of typically around 9 characters, or 15-byte IP strings, plus 1 comma for separation.
> The "filename" key could be a problem if we interpret the description in the standard - "This hint specifies the file name used when the file was opened." - to include the path.
> This is unlimited in length for nearly all common file systems (think ext4, Lustre, ...). However, Linux will limit the combined path length to 4096 characters for us.
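The size concern in the quoted point can be checked with quick arithmetic. The node and cabinet counts below are illustrative, not from any real system; only the 15-byte-IP-plus-comma estimate comes from the message above.

```python
# Rough size estimate for an "io_node_list" info value: a 15-byte
# IP string plus a 1-byte comma separator is 16 bytes per node.
bytes_per_entry = 15 + 1
MPI_MAX_INFO_VAL = 1024          # the limit under discussion

# 1024 bytes holds only 64 such entries...
max_entries = MPI_MAX_INFO_VAL // bytes_per_entry
print(max_entries)               # 64

# ...while a (hypothetical) system with 100 cabinets of 128 nodes
# each would need 200x that much room.
nodes = 100 * 128
needed = nodes * bytes_per_entry
print(needed)                    # 204800 bytes
```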
Interesting. I have merely worked on the assumption that MPICH and OMPI already thought about this enough and that the larger of their choices was sufficient. It is possible that is not the case.
For filename, it seems that the problem appears only in MPI_FILE_GET_INFO, because I don't see a limit on the input to MPI_FILE_OPEN. I'm not sure that making the max info value larger is the right solution. I'd be more inclined to ask the IO WG to add an MPI_FILE_GET_NAME that is designed for accessing this, without a hard-coded limit.
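One way such a routine could avoid a hard-coded limit is the usual two-call length-query pattern. The routine and signature below are purely hypothetical (no MPI_FILE_GET_NAME exists in the standard today), and a dict stands in for the file handle:

```python
def mpi_file_get_name(fh, bufsize):
    # Hypothetical routine: return the full length needed, plus at
    # most bufsize characters of the name, mimicking C routines that
    # truncate into a caller-sized buffer.
    name = fh["filename"]
    return len(name), name[:bufsize]

# A path far longer than any fixed constant like MPI_MAX_INFO_VAL.
fh = {"filename": "/scratch/project/" + "d" * 4000 + "/out.dat"}

needed, _ = mpi_file_get_name(fh, 0)      # first call: learn the size
_, full = mpi_file_get_name(fh, needed)   # second call: fetch it all
assert full == fh["filename"]
```

The caller sizes its buffer from the first call, so no constant in the standard ever bounds the name length.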
I wonder what Quincy thinks about io_node_list, since he may work for the company with the largest ensemble of distributed storage on the planet.
Jeff
--
mpiwg-abi mailing list
mpiwg-abi at lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-abi