[mpiwg-sessions] Meet today

Dan Holmes danholmes at chi.scot
Mon Jan 17 07:40:32 CST 2022


Hi Ralph,

Thanks for this informative peek into the possibilities of PMIx support for these new features in MPI. I’ve finally had a chance to sit and read it properly and inwardly digest it.

This is actually more than I was hoping for, in terms of existing work to support dynamic sessions!
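
If I've understood correctly, the application-initiated path you describe below maps onto PMIx_Allocation_request. Here is a minimal sketch of how I imagine an MPI library calling it, assuming OpenPMIx v4 and a process that is already connected to a PMIx server (e.g. via PMIx_Init); the choice of directive and info key is my guess:

    #include <pmix.h>

    /* Ask the RM to extend the current allocation by 'nnodes' nodes.
     * Assumes the process is already connected to a PMIx server.
     * Returns 0 on success. */
    static int request_more_nodes(uint64_t nnodes)
    {
        pmix_info_t info;
        pmix_info_t *results = NULL;
        size_t nresults = 0;
        pmix_status_t rc;

        PMIX_INFO_LOAD(&info, PMIX_ALLOC_NUM_NODES, &nnodes, PMIX_UINT64);

        /* Blocking form, for clarity; a real implementation would use
         * PMIx_Allocation_request_nb and a callback instead. */
        rc = PMIx_Allocation_request(PMIX_ALLOC_EXTEND, &info, 1,
                                     &results, &nresults);

        PMIX_INFO_DESTRUCT(&info);
        if (NULL != results) {
            PMIX_INFO_FREE(results, nresults);
        }
        return (PMIX_SUCCESS == rc) ? 0 : -1;
    }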

I was fearing that no HPC scheduler would support anything like dynamic allocations, because that has been my experience so far on every single supercomputer I've had an opportunity to use. In general, the scheduler refuses to implement even the dynamic model that is already in MPI: MPI_COMM_SPAWN[_MULTIPLE]. In many cases, I've seen critical bugs in the MPI_COMM_CONNECT/MPI_COMM_ACCEPT mechanism that prevent its use. As for MPI_COMM_JOIN: forget it entirely!
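
For concreteness, the existing model I mean is just this (a minimal sketch; ./worker is a placeholder binary name):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm, merged;

        MPI_Init(&argc, &argv);
        /* Ask the runtime for 4 new processes running ./worker --
         * this is the call that schedulers typically refuse to honour. */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);
        /* Merge parents and children into one intra-communicator. */
        MPI_Intercomm_merge(intercomm, 0, &merged);
        /* ... use 'merged' like any other communicator ... */
        MPI_Finalize();
        return 0;
    }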

My plan for a first-cut proof-of-concept implementation was to enqueue a new job in the batch queue, detect when that completely separate job starts running, and respond to the down-calls from the processes in that new job with information about the processes in the existing job that asked for additional processes. This will have ridiculously bad latency (the time from requesting more resources until the new resources are available to use), but it seems a viable implementation route for demonstrating the functionality.

APP 1: “am I first?” <- MPI 1: “yes, because envar peer_app is not set”

APP 1: “can I have 8 more processes?” <- MPI 1: “let me check”
MPI 1: “enqueue batch job (set envar peer_app=APP1)” -> scheduler: “enqueue successful”

APP 2: “am I first?” <- MPI 2: “no, because envar peer_app=APP1”
APP 2: “list pset names” <- MPI 2: “mpi://world, mpi://self, app1://world, app1://self” <- MPI 1: “mpi://world, mpi://self”

At that point, both applications can be notified of a resource change and can (hopefully) use each other's resources.
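
In code, I imagine the two sides of that exchange looking roughly like the sketch below, using the MPI-4 Sessions interface. The peer_app environment variable is my invention, and the call that asks for 8 more processes is exactly the functionality we are designing, so it appears only as a comment:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Session session;
        int i, npsets;

        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

        if (getenv("peer_app") == NULL) {
            /* APP 1: first in. Here it would ask for 8 more processes;
             * in the proof-of-concept, that enqueues a batch job whose
             * environment sets peer_app=APP1. */
        } else {
            /* APP 2: started on behalf of APP 1. List the visible
             * psets, which should include the app1:// names as well
             * as our own mpi:// names. */
            MPI_Session_get_num_psets(session, MPI_INFO_NULL, &npsets);
            for (i = 0; i < npsets; i++) {
                int len = 0;
                char *name;
                MPI_Session_get_nth_pset(session, MPI_INFO_NULL, i,
                                         &len, NULL);   /* query length */
                name = malloc(len);
                MPI_Session_get_nth_pset(session, MPI_INFO_NULL, i,
                                         &len, name);
                printf("pset %d: %s\n", i, name);
                free(name);
            }
        }

        MPI_Session_finalize(&session);
        return 0;
    }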

In this way, I think the scheduler does not *have* to be aware of what is going on. It might react faster or more favourably if it were aware.

Does that sketch have any obvious fatal flaws?

Cheers,
Dan.
—
Dr Daniel Holmes PhD
Executive Director
Chief Technology Officer
CHI Ltd
danholmes at chi.scot



> On 3 Jan 2022, at 19:44, Ralph Castain via mpiwg-sessions <mpiwg-sessions at lists.mpi-forum.org> wrote:
> 
> Hello folks
> 
> I had a chance over the holidays to catch up on your docs regarding dynamic sessions - quite interesting. I believe there is support in PMIx for pretty much everything I saw being discussed. We have APIs by which an application can directly request allocation changes from the RM, and events by which the RM can notify (and subsequently negotiate with) an application regarding changes to its allocation. So each side has the ability to initiate the process, and then both sides negotiate to a common conclusion. We also have an API by which an application can "register" its willingness to accept RM-initiated "preemption" requests, so the RM can incorporate that willingness in its pricing and planning procedures.
> 
> Unfortunately, while we have that infrastructure defined in the PMIx Standard and implemented in OpenPMIx, we have not yet seen the required backend support implemented in an RM. I have started working with some folks on integrating support into Slurm, but I do not know the timetable for public release of that work. SchedMD has been ambivalent towards accepting pull requests that extend its PMIx support, so this may well have to be released as a side-project.
> 
> I have previously approached Altair about adding support to PBS - nothing has happened yet. I suspect they are waiting for customer demand. I have no knowledge of any other RMs looking into it. As a gap-filling measure, I am adding simulated support in PRRTE so that anyone wanting to develop dynamic resource code can at least have a place where they can develop it and do a little testing. PRRTE doesn't include a scheduler, but I can simulate it by retaining some of the RM-allocated resources as part of a PRRTE-managed "pool".
> 
> Meantime, I have started a little personal project to add PMIx support to Kubernetes, hopefully giving it more capability to support HPC applications. The Kubeflow community has a degree of PMIx support, but I want to integrate it directly into Kubernetes itself, including the dynamic resource elements described above. I have no timetable for completing that work - as many of you may know, I am retired, so this is something to do in my spare time. If anyone is interested in tracking progress on this, please let me know.
> 
> Thus, I would encourage you to start prodding your favorite RM vendors, as this may prove to be the critical path in making dynamic sessions a reality!
> 
> Also, if you identify any "gaps" in the PMIx support, please do let me know - I'd be happy to work with you to fill them. The current definitions were developed primarily to support workflow operations and the needs of the dynamic programming-model communities (e.g., TensorFlow and data analytics). I think those are very similar to what you are identifying, but they may perhaps need some tweaking.
> 
> Ralph
> 
> 
>> On Jan 3, 2022, at 9:28 AM, Pritchard Jr., Howard via mpiwg-sessions <mpiwg-sessions at lists.mpi-forum.org> wrote:
>> 
>> Hello All,
>>  
>> Happy New Year!
>>  
>> Let’s try to meet today.    Items on the agenda:
>>  
>> PR #629 (issue #511) - https://github.com/mpi-forum/mpi-issues/issues/511
>>  
>> Pick up where we were on discussion of dynamic sessions requirements, see:
>> https://miro.com/app/board/o9J_l_Rxe9Q=/
>> https://docs.google.com/document/d/1l7LQ8eeVOUW69TDVG9LjKJUuerfE3S3teaMFG5DOudM/edit#heading=h.voobxhw94rt3
>>  
>> If my calendar calculation is right, we will be meeting with the FT WG today
>>  
>> Thanks,
>>  
>> Howard
>> 
>> Howard Pritchard
>> Research Scientist
>> HPC-ENV
>>  
>> Los Alamos National Laboratory
>> howardp at lanl.gov
>>  
