[mpiwg-sessions] Meet today

Dan Holmes danholmes at chi.scot
Mon Jan 17 09:29:31 CST 2022


Hi Ralph,

I agree with your suggestion to keep the prototype/temporary code encapsulated behind an interface that will not need to change as functionality is added to the components where it should have been in the first place. My APP/MPI separation hides this intent — there are several components to “MPI”, including PMIx, which I completely elided from my sketch.

Ultimately, I see the scheduler as a PMIx/PRRTE thing that doesn’t just reserve a few extra processes to use as a dynamic pool, but goes further and reserves the entire machine to use as a dynamic pool. This is kind of what it does already (in concept, if not in practice), IMHO.

I see a scheduler as an application that doesn’t do useful work with the processes it is allocated — its goal is to give those processes away to other applications, whilst obeying restrictions like priority and batch queue ordering. All the processes it currently owns are, conceptually, in the dynamic pool.

So far, I’ve sketched growing an application, but that growth comes at the expense of the dynamic pool or scheduler. The other side of that coin is a shrink of the dynamic pool or scheduler. We will need a mechanism to give back a proper subset (of any size) of the processes currently allocated/accessible. The scheduler will reserve all 1024 processes, then give 128 to one application and 256 to another application, etc.

Seeing both sides of this during an application growth transition encourages us to see both sides during an application shrink transition, i.e. to realise that it also is a scheduler growth transition. This is just double-entry bookkeeping, to use an accounting concept.
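To make the double-entry analogy concrete, here is a toy ledger (hypothetical Python, purely illustrative — `ProcessLedger` is my invention, not a PMIx or PRRTE API): every growth of an application is recorded as an equal shrink of the scheduler’s pool, and vice versa, so the machine total is invariant.

```python
# Toy ledger illustrating the double-entry view of process accounting.
# The "scheduler" account owns the whole machine; every application grow
# is a matching scheduler shrink, so the machine total never changes.

class ProcessLedger:
    def __init__(self, total_procs):
        self.accounts = {"scheduler": total_procs}

    def transfer(self, src, dst, nprocs):
        """Move nprocs between accounts: a debit in src, a credit in dst."""
        if self.accounts.get(src, 0) < nprocs:
            raise ValueError(f"{src} holds fewer than {nprocs} processes")
        self.accounts[src] -= nprocs
        self.accounts[dst] = self.accounts.get(dst, 0) + nprocs

    def total(self):
        return sum(self.accounts.values())

ledger = ProcessLedger(1024)                # reserve the entire machine
ledger.transfer("scheduler", "app1", 128)   # app1 grows = scheduler shrinks
ledger.transfer("scheduler", "app2", 256)
ledger.transfer("app1", "scheduler", 64)    # app1 shrinks = scheduler grows
assert ledger.total() == 1024               # double-entry invariant holds
```

Every transition is one transfer seen from two sides, which is exactly the bookkeeping symmetry described above.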

The PMIx/PRRTE code you are planning to write, which manages a dynamic pool, is a scheduler — it is the first of a class of schedulers that can support dynamic allocations properly. Personally, I would separate the “manage a pool” portion (scheduler application, uses PMIx to interact with other/user applications) from the “give some of mine/take some of theirs” portion (infrastructure/messaging/interaction functionality, belongs inside PMIx). Envision activating your dynamic pool/scheduler application at machine boot time with the instruction to reserve the entire machine and run until the machine is shut down.
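A minimal sketch of that separation (illustrative Python; `transfer` and `FcfsScheduler` are hypothetical names of mine, not PMIx code): the give/take mechanism is one small primitive, and the scheduler is just an application applying a first-come-first-served policy on top of it, having reserved the whole machine at boot.

```python
# Illustrative split: "give some of mine/take some of theirs" as a bare
# mechanism, and the pool-managing scheduler as a policy layered on top.

def transfer(pools, src, dst, n):
    """Mechanism: move n processes from one owner to another.
    (Stands in for infrastructure that would live inside PMIx.)"""
    if pools.get(src, 0) < n:
        return False
    pools[src] -= n
    pools[dst] = pools.get(dst, 0) + n
    return True

class FcfsScheduler:
    """Policy: an application whose only job is to give its processes away,
    first come, first served."""
    def __init__(self, machine_size):
        self.pools = {"scheduler": machine_size}  # reserve whole machine

    def request(self, app, n):
        return transfer(self.pools, "scheduler", app, n)

    def release(self, app, n):
        return transfer(self.pools, app, "scheduler", n)

sched = FcfsScheduler(1024)
assert sched.request("app1", 128)
assert sched.request("app2", 256)
assert not sched.request("app3", 1024)  # only 640 left in the pool
```

Swapping `FcfsScheduler` for a priority or batch-queue policy would not touch `transfer` at all, which is the point of keeping the mechanism inside PMIx.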

I look forward to seeing the outcome!

Cheers,
Dan.
—
Dr Daniel Holmes PhD
Executive Director
Chief Technology Officer
CHI Ltd
danholmes at chi.scot



> On 17 Jan 2022, at 14:50, Ralph Castain via mpiwg-sessions <mpiwg-sessions at lists.mpi-forum.org> wrote:
> 
> Hi Dan
> 
> Lack of support for the MPI dynamic operations has indeed been problematic. For the last few years, people have been getting around that by using PRRTE as a "shim" to their native scheduler since it fully supports such operations. Caveat is that you still have to request an initial allocation that is as large as you expect to eventually need - but at least you can utilize the dynamic functions, validate the value they provide, etc. Hope is that the community can use those results to apply pressure to the HPC scheduler community to adapt their systems. If they don't....well, some of us are working from the other end (starting with flexible schedulers like Kubernetes) to teach those systems how to support HPC, so maybe we'll meet in the middle :-)
> 
> As for this project, I'd recommend going the PRRTE route until we get dynamic scheduling support in the main system. Reasoning is that:
> 
> (a) we can "hide" the mechanics for getting more resources from a particular scheduler in the shim, thus allowing the result to be somewhat more portable.
> 
> (b) the code in PMIx/PRRTE for getting the resources can remain in the library as the host environment adapts, so the app/library doesn't have to change once the environments do start to provide dynamic support
> 
> (c) there is a near-term need to support dynamic programming models (workflows, ML, etc) on HPC systems, and many of those people are using PRRTE as a shim so they can utilize the dynamic APIs in their respective models
> 
> As I mentioned in my prior note, I am already working on adding this "shim" support to PRRTE. I had to complete a prior commitment, but that is done now and I can get back to this effort. My hope is that I'll have something ready in the next few weeks. First stage is to have PRRTE "reserve" some of the original allocation for a "dynamic pool" that it will manage to meet resource requests from the apps (not really trying to "schedule" anything - just a "first come, first served" method). This will simply be a means of testing/demonstrating the functionality, but doesn't provide a truly dynamic environment.
> 
> My longer-term plan is to have PRRTE start with some initial allocation, and then as apps adjust their resource needs via PMIx calls, PRRTE will request new allocations and stitch them together transparently to the applications (or offer them as disjoint sets of resources, depending upon the request), return allocations that are no longer required, etc. I think that should be available by summer, at least in one or two environments.
> 
> Hopefully, getting real dynamic schedulers is only a few years away - but this should help bridge the gap.
> 
> HTH
> Ralph
> 
> 
>> On Jan 17, 2022, at 5:40 AM, Dan Holmes <danholmes at chi.scot> wrote:
>> 
>> Hi Ralph,
>> 
>> Thanks for this informative peek into the possibilities of PMIx support for these new features in MPI. I’ve finally had a chance to sit and read it properly and inwardly digest it.
>> 
>> This is actually more than I was hoping for, in terms of existing work to support dynamic sessions!
>> 
>> I was fearing that no HPC scheduler would support anything like dynamic allocations because that has been my experience so far on every single supercomputer I’ve had an opportunity to use. In general, the scheduler refuses to implement even the dynamic model that is already in MPI — MPI_COMM_SPAWN[_MULTIPLE]. In many cases, I’ve seen critical bug(s) in the MPI_COMM_CONNECT/MPI_COMM_ACCEPT mechanism that prevents its use. As for MPI_COMM_JOIN: forget it entirely!
>> 
>> My plan for a first-cut proof-of-concept implementation was to add something to the batch queue, test for when that completely separate job runs, and respond to the down-calls from the processes in that new job with information about the processes in the existing job that asked for additional processes. This will have ridiculously bad latency (time from request for more resources until new resources are available to use), but it would seem to be a viable implementation route to demonstrate the functionality.
>> 
>> APP 1: “am I first?” <- MPI 1: “yes, because envar peer_app is not set”
>> 
>> APP 1: “can I have 8 more processes?” <- MPI 1: "let me check”
>> MPI 1: “enqueue batch job (set envar peer_app=APP1)” -> scheduler: “enqueue successful”
>> 
>> APP 2: “am I first?” <- MPI 2: “no, because envar peer_app=APP1”
>> APP 2: “list pset names” <- MPI 2: “mpi://world, mpi://self, app1://world, app1://self” <- MPI 1: “mpi://world, mpi://self”
>> 
>> At that point, both applications can be notified of a resource change and can (hopefully) use each other’s resources.
>> 
>> In this way, I think the scheduler does not *have* to be aware of what is going on. It might react faster/more favourably if it was aware.
>> 
>> Does that sketch have any obvious fatal flaws?
>> 
>> Cheers,
>> Dan.
>> —
>> Dr Daniel Holmes PhD
>> Executive Director
>> Chief Technology Officer
>> CHI Ltd
>> danholmes at chi.scot
>> 
>> 
>> 
>>> On 3 Jan 2022, at 19:44, Ralph Castain via mpiwg-sessions <mpiwg-sessions at lists.mpi-forum.org> wrote:
>>> 
>>> Hello folks
>>> 
>>> I had a chance over the holidays to catch up on your docs regarding dynamic sessions - quite interesting. I believe there is support in PMIx for pretty much everything I saw being discussed. We have APIs by which an application can directly request allocation changes from the RM, and events by which the RM can notify (and subsequently negotiate) an application regarding changes to its allocation. So each side has the ability to initiate the process, and then both sides negotiate to a common conclusion. We also have an API by which an application can "register" its willingness to accept RM-initiated "preemption" requests so the RM can incorporate that willingness in its pricing and planning procedures.
>>> 
>>> Unfortunately, while we have that infrastructure defined in the PMIx Standard and implemented in OpenPMIx, we have not yet seen the required backend support implemented in an RM. I have started working with some folks on integrating support into Slurm, but I do not know the timetable for public release of that work. SchedMD has been ambivalent towards accepting pull requests that extend its PMIx support, so this may well have to be released as a side-project.
>>> 
>>> I have previously approached Altair about adding support to PBS - nothing has happened yet. I suspect they are waiting for customer demand. I have no knowledge of any other RMs looking into it. As a gap-filling measure, I am adding simulated support in PRRTE so that anyone wanting to develop dynamic resource code can at least have a place where they can develop it and do a little testing. PRRTE doesn't include a scheduler, but I can simulate it by retaining some of the RM-allocated resources as part of a PRRTE-managed "pool".
>>> 
>>> Meantime, I have started a little personal project to add PMIx support to Kubernetes, hopefully giving it more capability to support HPC applications. The Kubeflow community has a degree of PMIx support, but I want to directly integrate it to Kubernetes itself, including the dynamic resource elements described above. I have no timetable for completing that work - as many of you may know, I am retired and so this is something to do in my spare time. If anyone is interested on tracking progress on this, please let me know.
>>> 
>>> Thus, I would encourage you to start prodding your favorite RM vendors as this may prove the critical timeline in making dynamic sessions a reality!
>>> 
>>> Also, if you identify any "gaps" in the PMIx support, please do let me know - I'd be happy to work with you to fill them. The current definitions were developed primarily to support workflow operations and the needs of the dynamic programming model communities (e.g., TensorFlow and Data Analytics). I think those are very similar to what you are identifying, but may perhaps need some tweaking.
>>> 
>>> Ralph
>>> 
>>> 
>>>> On Jan 3, 2022, at 9:28 AM, Pritchard Jr., Howard via mpiwg-sessions <mpiwg-sessions at lists.mpi-forum.org> wrote:
>>>> 
>>>> Hello All,
>>>>  
>>>> Happy New Year!
>>>>  
>>>> Let’s try to meet today.    Items on the agenda:
>>>>  
>>>> PR #629 (issue #511) - https://github.com/mpi-forum/mpi-issues/issues/511
>>>>  
>>>> Pick up where we were on discussion of dynamic sessions requirements, see:
>>>> https://miro.com/app/board/o9J_l_Rxe9Q=/
>>>> https://docs.google.com/document/d/1l7LQ8eeVOUW69TDVG9LjKJUuerfE3S3teaMFG5DOudM/edit#heading=h.voobxhw94rt3
>>>>  
>>>> If my calendar calculation is right, we will be meeting with the FT WG today
>>>>  
>>>> Thanks,
>>>>  
>>>> Howard
>>>> 
>>>> Howard Pritchard
>>>> Research Scientist
>>>> HPC-ENV
>>>>  
>>>> Los Alamos National Laboratory
>>>> howardp at lanl.gov
>>>>  
>>>>  
>>>>  
>>>> _______________________________________________
>>>> mpiwg-sessions mailing list
>>>> mpiwg-sessions at lists.mpi-forum.org
>>>> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-sessions
>> 
> 
