
Commit d5d098d

fix based on suggestion
1 parent 15728ec commit d5d098d

File tree

1 file changed: +71 -20 lines

  • keps/sig-scheduling/3633-matchlabelselectors-to-podaffinity

keps/sig-scheduling/3633-matchlabelselectors-to-podaffinity/README.md (+71 -20)

@@ -274,28 +274,28 @@ metadata:
 #### Story 2
 
 Let's say all Pods on each tenant get `tenant` label via a controller or a manifest management tool like Helm.
-And, the cluster admin now wants to achieve exclusive 1:1 tenant to domain placement.
+Although the value of `tenant` label is unknown when composing the workload's manifest, the cluster admin still wants to achieve exclusive 1:1 tenant to domain placement.
 
 By applying the following affinity globally using a mutating webhook, the cluster admin can ensure that the Pods from the same tenant will land on the same domain exclusively, meaning Pods from other `tenants` won't land on the same domain.
 
 ```yaml
 affinity:
   podAffinity: # ensures the pods of this tenant land on the same node pool
     requiredDuringSchedulingIgnoredDuringExecution:
-    - matchLabelSelectors:
-      - key: tenant
-        operator: In
-      topologyKey: node-pool
+    - matchLabelSelectors:
+      - key: tenant
+        operator: In
+      topologyKey: node-pool
   podAntiAffinity: # ensures only Pods from this tenant lands on the same node pool
     requiredDuringSchedulingIgnoredDuringExecution:
-    - matchLabelSelectors:
-      - key: tenant
-        operator: NotIn
-    - labelSelector:
-        matchExpressions:
-        - key: tenant
-          operator: Exists
-      topologyKey: node-pool
+    - matchLabelSelectors:
+      - key: tenant
+        operator: NotIn
+      labelSelector:
+        matchExpressions:
+        - key: tenant
+          operator: Exists
+        topologyKey: node-pool
 ```
 
 ### Notes/Constraints/Caveats (Optional)
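
The Story 2 snippet above is meant to be stamped onto every incoming Pod by a mutating admission webhook. As a rough, hypothetical sketch (not part of the KEP), the Go program below assembles the JSON patch such a webhook could return for `/spec/affinity`. Because `matchLabelSelectors` is only the API proposed by this KEP and does not exist in current client-go types, the patch is built from plain maps; the `affinityPatch` helper name is made up for this illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// affinityPatch builds a JSON patch that adds the Story 2 affinity to a Pod's
// spec. The matchLabelSelectors field is the API proposed in this KEP, so the
// patch is assembled from plain maps rather than typed client-go structs.
func affinityPatch() ([]byte, error) {
	tenantSelector := func(op string) []map[string]any {
		return []map[string]any{{"key": "tenant", "operator": op}}
	}

	affinity := map[string]any{
		// Pods of the same tenant must share a node pool.
		"podAffinity": map[string]any{
			"requiredDuringSchedulingIgnoredDuringExecution": []map[string]any{{
				"matchLabelSelectors": tenantSelector("In"),
				"topologyKey":         "node-pool",
			}},
		},
		// Pods of other tenants must not land on that node pool.
		"podAntiAffinity": map[string]any{
			"requiredDuringSchedulingIgnoredDuringExecution": []map[string]any{{
				"matchLabelSelectors": tenantSelector("NotIn"),
				"labelSelector": map[string]any{
					"matchExpressions": []map[string]any{{"key": "tenant", "operator": "Exists"}},
				},
				"topologyKey": "node-pool",
			}},
		},
	}

	// A single JSON-patch "add" operation, as a webhook would return in its
	// AdmissionReview response.
	patch := []map[string]any{{
		"op":    "add",
		"path":  "/spec/affinity",
		"value": affinity,
	}}
	return json.MarshalIndent(patch, "", "  ")
}

func main() {
	p, err := affinityPatch()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(p))
}
```
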
@@ -360,16 +360,16 @@ type MatchLabelSelector struct {
 }
 
 type PodAffinityTerm struct {
-  LabelSelector *metav1.LabelSelector
-  Namespaces []string
-  TopologyKey string
-  NamespaceSelector *metav1.LabelSelector
+  LabelSelector *metav1.LabelSelector
+  Namespaces []string
+  TopologyKey string
+  NamespaceSelector *metav1.LabelSelector
 
-  // MatchLabelSelectors is a set of pod label keys to select the group of existing pods
+  // MatchLabelSelectors is a set of pod label keys to select the group of existing pods
   // which pods will be taken into consideration for the incoming pod's pod (anti) affinity.
   // The default value is empty.
-  // +optional
-  MatchLabelSelectors []MatchLabelSelector
+  // +optional
+  MatchLabelSelectors []MatchLabelSelector
 }
 ```
 
@@ -378,6 +378,53 @@ labels by the key in `MatchLabelSelectors.Key`, and merge to `LabelSelector` of
 - If Operator is `In`, `key in (value)` is merged with LabelSelector.
 - If Operator is `NotIn`, `key notin (value)` is merged with LabelSelector.
 
+Only `In` and `NotIn` are supported in `Operator` of `MatchLabelSelectors`,
+and kube-apiserver rejects other operators (`Exists` and `DoesNotExist`).
+
+For example, when this sample Pod is created,
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sample
+  namespace: sample-namespace
+  labels:
+    tenant: tenant-a
+...
+  affinity:
+    podAntiAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+      - matchLabelSelectors:
+        - key: tenant
+          operator: NotIn
+        labelSelector:
+          matchExpressions:
+          - key: tenant
+            operator: Exists
+        topologyKey: node-pool
+```
+
+kube-apiserver modifies the labelSelector like the following:
+
+```diff
+  affinity:
+    podAntiAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+      - matchLabelSelectors:
+        - key: tenant
+          operator: NotIn
+        labelSelector:
+          matchExpressions:
+          - key: tenant
+            operator: Exists
++         - key: tenant
++           operator: NotIn
++           values:
++           - tenant-a
+        topologyKey: node-pool
+```
+
 
 ### Test Plan
 <!--
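
The apply-time translation shown in the diff above can be sketched in Go. This is not the actual kube-apiserver implementation; it is a minimal illustration that assumes the `MatchLabelSelector` struct proposed earlier in this KEP and a made-up `mergeMatchLabelSelectors` helper. It simply applies the rule stated above: look up each key in the incoming Pod's own labels and append `key in (value)` or `key notin (value)` to the term's labelSelector. (How a missing label should be handled is not specified here; the sketch just skips it.)

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MatchLabelSelector mirrors the struct proposed in this KEP (not a real
// client-go type): a pod label key plus an In/NotIn operator.
type MatchLabelSelector struct {
	Key      string
	Operator metav1.LabelSelectorOperator
}

// mergeMatchLabelSelectors sketches the apply-time translation: for each
// MatchLabelSelector, read the value of Key from the incoming Pod's labels
// and append `key in (value)` / `key notin (value)` to the term's selector.
func mergeMatchLabelSelectors(selector *metav1.LabelSelector, podLabels map[string]string, mls []MatchLabelSelector) *metav1.LabelSelector {
	if selector == nil {
		selector = &metav1.LabelSelector{}
	}
	for _, m := range mls {
		value, ok := podLabels[m.Key]
		if !ok {
			// The KEP text quoted here doesn't spell out this case;
			// the sketch simply skips keys the Pod doesn't carry.
			continue
		}
		selector.MatchExpressions = append(selector.MatchExpressions, metav1.LabelSelectorRequirement{
			Key:      m.Key,
			Operator: m.Operator, // only In and NotIn pass validation
			Values:   []string{value},
		})
	}
	return selector
}

func main() {
	// The sample Pod from the example above: label tenant=tenant-a and a
	// podAntiAffinity term carrying matchLabelSelectors {tenant, NotIn}.
	podLabels := map[string]string{"tenant": "tenant-a"}
	term := &metav1.LabelSelector{
		MatchExpressions: []metav1.LabelSelectorRequirement{
			{Key: "tenant", Operator: metav1.LabelSelectorOpExists},
		},
	}
	merged := mergeMatchLabelSelectors(term, podLabels, []MatchLabelSelector{
		{Key: "tenant", Operator: metav1.LabelSelectorOpNotIn},
	})
	fmt.Printf("%+v\n", merged) // now also contains `tenant notin (tenant-a)`
}
```
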
@@ -967,6 +1014,10 @@ Implement new enum values `ExistsWithSameValue` and `ExistsWithDifferentValue` i
 - `ExistsWithSameValue`: look up the label value keyed with the key specified in the labelSelector, and match with Pods which have the same label value on the key.
 - `ExistsWithDifferentValue`: look up the label value keyed with the key specified in the labelSelector, and match with Pods which have the same label key, but with the different label value on the key.
 
+But, this idea is rejected because:
+- it's difficult to prepare all existing clients to handle the new enums.
+- any code evaluating a labelSelector would need to know which Pod that labelSelector belongs to in order to handle these new enums, and it's a tough road to change all code handling labelSelector.
+
 #### Example
 
 a set of Pods A doesn't want to co-exist with other set of Pods, but want the set of Pods A co-located
