KEP-5732: Topology-aware workload scheduling #5733
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: 44past4.
	// Level specifies the key of the node label representing the topology domain.
	// All pods within the PodGroup must be colocated within the same domain instance.
	// Examples: "topology.kubernetes.io/rack"
	Level string
Let's specify what happens if a PodGroup is replicated (there are multiple PodGroup instances/replicas).
I'm assuming that for each of those replicas a domain at that level is chosen, but there is no coordination between them (i.e. different PodGroup instances may be scheduled in different domains, but they may also share one).
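To make that reading concrete, here is a minimal sketch (the example variable and the per-replica note are illustrations, not part of the proposal) of a rack-level constraint and how each replica would resolve its own domain:

// TopologyConstraint as quoted above; only the Level field is shown.
type TopologyConstraint struct {
	// Level is the key of the node label representing the topology domain.
	Level string
}

// Example: all pods of one PodGroup replica must land within a single rack.
var sameRack = TopologyConstraint{Level: "topology.kubernetes.io/rack"}

// Under the reading discussed above, each PodGroup replica independently picks
// one rack; two replicas may share a rack or use different ones, since no
// cross-replica coordination is specified.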
type DRAConstraint struct {
	// ResourceClaimName specifies the name of a specific ResourceClaim
	// within the PodGroup's pods that this constraint applies to.
	ResourceClaimName *string
What does ResourceClaimName mean if a given PodGroup is replicated (there are multiple PodGroup instances/replicas)?
This would effectively mean sharing the same RC across multiple instances, which in many cases would be highly misleading.
However, there can arguably be use cases for it too, but then the algorithm would effectively have to consider all PodGroup instances in a single round, and for that we don't even know how many groups we have.
@macsko - FYI (as this slightly collides with the KEP-4671 update)
So, thinking about this more, I'm wondering whether we can introduce that without further enhancing the API now (i.e. without adding a replicas field to PodGroup).
Another alternative would be to very explicitly split the per-pod-group-replica constraints from the constraints that span all pod-group replicas and (at least for Alpha) focus only on the former.
So something more like (exact names and structures to be refined):
type PodGroupAllReplicasSchedulingConstraints struct {
	ResourceClaimName *string // Supported only if Replicas=1.
}

type PodGroupReplicaSchedulingConstraints struct {
	ResourceClaimTemplateName *string // A separate RC is created from this template for every replica.
}
If the PodGroup is replicated, the meaning of ResourceClaimName will depend on whether we schedule those replicas together or not. If they are scheduled separately, scheduling the first replica will lock the referenced ResourceClaim and the subsequent replicas will have no freedom in its allocation - there will be only one possible placement for them. When scheduling multiple replicas at once, we can try to choose a DRA allocation that allows us to schedule the highest number of replicas (assuming that we do not provide all-or-nothing semantics across replicas).
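As a rough illustration of the "schedule replicas together" variant, this sketch picks the allocation that maximizes the number of schedulable replicas; all types and helper names below are hypothetical, not part of the proposal:

// candidateAllocation is an opaque handle for one feasible allocation of the
// shared ResourceClaim (details elided).
type candidateAllocation struct {
	ID string
}

// feasibleReplicas would report how many PodGroup replicas could still be
// placed if the shared claim were allocated as described by alloc.
func feasibleReplicas(alloc candidateAllocation, replicas int) int {
	// Per-replica placement feasibility checks would go here.
	return 0
}

// pickAllocation chooses the candidate that maximizes the number of
// schedulable replicas, matching the idea described above.
func pickAllocation(candidates []candidateAllocation, replicas int) (candidateAllocation, int) {
	var best candidateAllocation
	placed := 0
	for _, c := range candidates {
		if n := feasibleReplicas(c, replicas); n > placed {
			best, placed = c, n
		}
	}
	return best, placed
}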
I wasn't asking about the implementation aspect.
I wanted to take a step back, understand what actual use case we're trying to address, and figure out if/how we should represent it so that it's intuitive to users who have a replicated PodGroup. I feel that the API as currently described can be pretty confusing in this case.
	// ResourceClaimTemplateName specifies the name of a ResourceClaimTemplate.
	// This applies to all ResourceClaim instances generated from this template.
	ResourceClaimTemplateName *string
Who creates and manages the lifecycle of the RC created from that template?
Here we are assuming that the lifecycle of the RC is managed outside of kube-scheduler. One option is to have it managed by a specific workload controller, for instance LeaderWorkerSet, which could create an RC when creating a new replica. That would be very inconvenient, so we should probably have a single controller which does this just by watching Workload objects. We had a discussion with @johnbelamaric about this. Either way, this should be outside the scope of this KEP.
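For illustration only, a very rough sketch of such a controller's core step; the function name, naming scheme, and the resource.k8s.io API version shown are assumptions, and ownership, garbage collection, and the Workload watch are elided:

package sketch

import (
	"context"
	"fmt"

	resourceapi "k8s.io/api/resource/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClaimForReplica stamps out a ResourceClaim for one PodGroup replica
// from the referenced ResourceClaimTemplate, so the claim already exists when
// the scheduler looks at the replica's pods.
func ensureClaimForReplica(ctx context.Context, cs kubernetes.Interface,
	ns, workload, podGroup string, replica int,
	tmpl *resourceapi.ResourceClaimTemplate) error {

	claim := &resourceapi.ResourceClaim{
		ObjectMeta: metav1.ObjectMeta{
			// Hypothetical naming scheme: one claim per replica.
			Name:   fmt.Sprintf("%s-%s-%d", workload, podGroup, replica),
			Labels: tmpl.Spec.ObjectMeta.Labels,
		},
		// The template embeds the claim spec to copy into each new claim.
		Spec: tmpl.Spec.Spec,
	}
	_, err := cs.ResourceV1beta1().ResourceClaims(ns).Create(ctx, claim, metav1.CreateOptions{})
	return err
}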
The lifecycle is what I plan to address in #5729.
OK - so this matches my thinking.
But the primary question now is - why do we need it then?
If we have some external entity (whether a dedicated controller or e.g. the LWS controller) that creates the RC whenever it is needed (and it should create it before we actually do the scheduling), then what the scheduler really needs to be aware of - its actual input - is that RC (the one it will be finding the best allocation for), not the template itself. It doesn't care about the template.
So I think we're aligned on the intention, but I don't really understand how this field would be used.
// PodGroupInfo holds information about a specific PodGroup within a Workload,
// including a reference to the Workload, the PodGroup's name, and its replica index.
// This struct is designed to be extensible with more fields in the future.
type PodGroupInfo struct {
PodGroupInfo was already introduced in the scheduler as part of the initial gang-scheduling implementation. However, that one is focused on the pods and their state:
https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/backend/workloadmanager/podgroupinfo.go#L52
Do you suggest reusing that structure, or creating a second one for this?
@macsko
The existing PodGroupInfo is much closer to the PodSetInfo proposed below, so if we consider reusing it we should probably rename the proposed PodGroupInfo to something like PodGroupMetadata or WorkloadPodGroupReference.
It's closer in terms of what kind of information it keeps, but the granularity is different (it may contain pods of different signatures).
So my point is - we need to align that. Having two different things with the same name would be pretty misleading.
// PodSetInfo holds information about a specific PodSet within a PodGroup,
// primarily the list of Pods.
// This struct is designed to be extensible with more fields in the future.
type PodSetInfo struct {
Should PodGroupInfo keep a list of its PodSetInfos?
In my mind, PodGroupInfo should only provide basic information about the PodGroup configuration, not the actual state of the PodGroup or the pods that are part of it, so I do not see the need for a reference from PodGroupInfo to PodSetInfos.
Should PodSetInfo then have a link to PodGroupInfo?
I think we will need a way to iterate over PodSets within a PodGroup, and this seems like a good place to allow for it.
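A minimal sketch of the back-reference being asked about; the field name is an assumption, not the proposed API:

// Sketch of the back-reference discussed above: PodSetInfo keeps a pointer to
// its owning PodGroupInfo (other proposed fields, such as the Pod list, are
// elided here).
type PodSetInfo struct {
	// PodGroup points back to the PodGroupInfo this PodSet belongs to and can
	// be used to reach group-level configuration from a PodSet.
	PodGroup *PodGroupInfo

	// -- other fields as proposed above --
}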
// PodSetAssignment represents the assignment of pods to nodes within a PodSet for a specific Placement.
type PodSetAssignment struct {
	// PodToNodeMap maps a Pod name (string) to a Node name (string).
	PodToNodeMap map[string]string
Do we need DRA assignments too?
This is a good question. We might need them for the PodGroup pod binding phase, which comes after the placement for a PodGroup has been selected. So, provided that we can capture those while checking placement feasibility, then yes, we should have DRA assignments here as well.
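For concreteness, a sketch of what carrying the DRA results here could look like; the DRAAllocations field name, its value type, and the resource API version shown are assumptions for illustration:

package sketch

import resourceapi "k8s.io/api/resource/v1beta1"

// PodSetAssignment extended with the DRA allocations captured while the
// placement's feasibility was being checked, so the binding phase can reuse
// them instead of recomputing the allocation.
type PodSetAssignment struct {
	// PodToNodeMap maps a Pod name to a Node name.
	PodToNodeMap map[string]string

	// DRAAllocations maps a ResourceClaim name to the allocation chosen for it
	// under this placement.
	DRAAllocations map[string]*resourceapi.AllocationResult
}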
	// DRA's AllocationResult from DRAAllocations.
	// All pods within the PodSet, when being evaluated against this Placement,
	// are restricted to the nodes matching this NodeAffinity.
	NodeAffinity *corev1.NodeAffinity
Implementation detail - given a NodeAffinity, finding the nodes that match it is an O(N) operation, with N being the number of nodes in the cluster. Together with the NodeAffinity, we should probably also store the exact list of matching nodes here to avoid recomputing it over and over again.
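A small sketch of that caching idea; the type and field names are illustrative only:

package sketch

import corev1 "k8s.io/api/core/v1"

// placementNodes keeps the node list resolved from the NodeAffinity next to
// the affinity itself, so the O(N) match over all cluster nodes is done once
// per placement rather than once per pod evaluation.
type placementNodes struct {
	// NodeAffinity derived from DRA's AllocationResult, as proposed above.
	NodeAffinity *corev1.NodeAffinity

	// MatchedNodes caches the names of the nodes that satisfied NodeAffinity
	// when the placement was generated.
	MatchedNodes []string
}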
with fallbacks (e.g., prefer Rack, fallback to Block). This would introduce
a Rank field to the Placement struct.

2. **Optional/Preferred Scheduling Constraints:** Constraints that serve purely
I would make this out of scope for this KEP, even for further releases - we can do it in a follow-up KEP if needed.
2. **Optional/Preferred Scheduling Constraints:** Constraints that serve purely
as scoring mechanisms without hard requirements.

3. **Multi-level Scheduling Constraints:** Handling nested constraints (e.g.,
Same here - let's land TAS in a reasonably small scope first and iterate on it in follow-ups.
I guess, rephrasing these two comments - the extensions generally make sense to me, but I would claim that we should make them explicitly non-goals for this KEP and mark them as future follow-up extensions.
That makes sense. I will update the description to state that these are potential extensions of this feature which can be implemented after Alpha, but they will require separate KEPs.
Block -> Rack). This would involve iterative placement generation and a
Parent field in the Placement struct.

4. **Pod Group Replicas Support:** Optimizing scheduling for identical
The way it's described, this isn't really pod group replicas support - we need to support pod group replicas from the very beginning.
What you're saying here is that we can optimize the scheduling latency for them, right?
Yes, this is correct. Replicas will be supported, but they will be scheduled one by one. With some changes to the algorithm we could try to optimize this process, but that is out of scope for this KEP. I will rephrase it to make this clear.
- **Selection:** Select the Placement with the highest score.

- **Binding:** Proceed to bind pods to the assigned nodes and resources.
We might need to move the pods once again through their pod-by-pod cycles; they have to go through Reserve, Permit, etc. to reach binding successfully. Will the expected placement be expressed using NominatedNodeNames, or differently?
	// -- Add other fields below for future extensions --
}

// PodSetInfo holds information about a specific PodSet within a PodGroup,
Can you define in the KEP what a PodSet is, and add any information about homogeneity?
	// ResourceClaimName specifies the name of a specific ResourceClaim
	// within the PodGroup's pods that this constraint applies to.
	ResourceClaimName *string

	// ResourceClaimTemplateName specifies the name of a ResourceClaimTemplate.
	// This applies to all ResourceClaim instances generated from this template.
	ResourceClaimTemplateName *string
How do these fields relate to the ResourceClaim references that Pods already have? What happens if the sets of claims referenced by a Workload and its Pods are different?
+1 to this question, it needs to be answered here
type PodGroupSchedulingConstraints struct {
	// TopologyConstraints specifies desired topological placements for all pods
	// within this PodGroup.
	TopologyConstraints []TopologyConstraint
Do multiple topology constraints actually make sense here? What would be the use case for them?
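For reference, the questioned shape would allow lists such as the following, where it is unclear what the second entry adds if racks nest inside blocks (the block-level label key is hypothetical):

// Illustration of a multi-entry TopologyConstraints list, as questioned above;
// TopologyConstraint is the type quoted earlier in the review.
var constraints = []TopologyConstraint{
	{Level: "topology.kubernetes.io/rack"},
	{Level: "topology.example.com/block"}, // hypothetical block-level label key
}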
	// DRAConstraints specifies constraints on how Dynamic Resources are allocated
	// across the PodGroup.
	DRAConstraints []DRAConstraint
Continuing my thoughts from the other comments here.
The primary goals we wanted to ensure with this KEP are:
- building the foundations for TAS and having the first version of the algorithm
- proving that the algorithm is compatible with both DRA and topology-based requirements
I think this KEP achieves them.
However, the more I think about it, the more concerns I have about this kind of API. Up until now I thought we could decouple and postpone the discussion of the lifecycle of pod-group-owned (or workload-owned) RCs until later, but some of my comments above already suggest it's not that clear and may influence the API.
So I actually started wondering whether (for the sake of faster and incremental progress) we shouldn't slightly revise the scope and goals of this KEP, in particular:
- remove the "DRAConstraints" from the scope (and couple it with the PodGroup/RC lifecycle discussion we'll have in DRA: ResourceClaim Support for Workloads #5729 - @nojnhuh)
- ensure that the proposal is compatible with DRA-based constraints at a lower level;
namely, the scheduler should not really manage the lifecycle of RCs, and those RCs should just be an input to the scheduler (whether at the PodGroup level, the Workload level, or some to-be-introduced level).
So what if, instead, we proved that it works by simply:
- ensuring that some internal interface in the scheduler (or maybe a scheduler-framework-level one?) can accept RCs as an additional constraint for the WorkloadCycle (see the sketch below)
- adding a test at that level showing that scheduling works if we pass topology constraints as RCs
That would allow us to decouple the core of the changes in this KEP from all the discussions about how to represent it in the API, how it is coupled with lifecycle, etc., and hopefully unblock this KEP much faster while still proving the core of what we need.
@johnbelamaric @erictune @44past4 @dom4ha @sanposhiho @macsko - for your thoughts too
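To make the suggested lower-level integration concrete, here is a very rough sketch; every name below is hypothetical, and the resource API version is only an example. The scheduler consumes already-created ResourceClaims as a constraint for the workload cycle without knowing how they were created:

package sketch

import resourceapi "k8s.io/api/resource/v1beta1"

// workloadCycleInput is what a workload scheduling cycle could consume: the
// pod groups to place plus any ResourceClaims whose allocation constrains
// that placement. The scheduler only reads the claims; their lifecycle is
// owned elsewhere (see the #5729 discussion referenced above).
type workloadCycleInput struct {
	// PodGroups to place in this cycle (PodGroupInfo as discussed above).
	PodGroups []*PodGroupInfo

	// Claims are pre-existing ResourceClaims that act as an additional
	// constraint on the placement, regardless of how they were created.
	Claims []*resourceapi.ResourceClaim
}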
One-line PR description: Initial version of Topology-aware workload scheduling KEP
Issue link: Topology-aware workload scheduling #5732
/sig scheduling