keps/sig-scheduling/3633-matchlabelselectors-to-podaffinity/README.md (+71 −20)
@@ -274,28 +274,28 @@ metadata:

#### Story 2

Let's say all Pods of each tenant get the `tenant` label via a controller or a manifest management tool like Helm.
Although the value of the `tenant` label is unknown when composing the workload's manifest, the cluster admin still wants to achieve exclusive 1:1 tenant-to-domain placement.

By applying the following affinity globally using a mutating webhook, the cluster admin can ensure that Pods from the same tenant will land on the same domain exclusively, meaning Pods from other tenants won't land on the same domain.

```yaml
affinity:
  podAffinity: # ensures the Pods of this tenant land on the same node pool
    requiredDuringSchedulingIgnoredDuringExecution:
    - matchLabelSelectors:
      - key: tenant
        operator: In
      topologyKey: node-pool
  podAntiAffinity: # ensures only Pods from this tenant land on the same node pool
    requiredDuringSchedulingIgnoredDuringExecution:
    - matchLabelSelectors:
      - key: tenant
        operator: NotIn
      labelSelector:
        matchExpressions:
        - key: tenant
          operator: Exists
      topologyKey: node-pool
```
### Notes/Constraints/Caveats (Optional)
@@ -360,16 +360,16 @@ type MatchLabelSelector struct {
}

type PodAffinityTerm struct {
	LabelSelector *metav1.LabelSelector
	Namespaces []string
	TopologyKey string
	NamespaceSelector *metav1.LabelSelector

	// MatchLabelSelectors is a set of pod label keys used to select the group of existing Pods
	// that will be taken into consideration for the incoming Pod's pod (anti-)affinity.
	// The default value is empty.
	// +optional
	MatchLabelSelectors []MatchLabelSelector
}
```
@@ -378,6 +378,53 @@ labels by the key in `MatchLabelSelectors.Key`, and merge to `LabelSelector` of
- If Operator is `In`, `key in (value)` is merged with LabelSelector.
- If Operator is `NotIn`, `key notin (value)` is merged with LabelSelector.

Only `In` and `NotIn` are supported in `Operator` of `MatchLabelSelectors`,
and kube-apiserver rejects other operators (`Exists` and `DoesNotExist`).

For example, when this sample Pod is created,

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample
  namespace: sample-namespace
  labels:
    tenant: tenant-a
...
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - matchLabelSelectors:
        - key: tenant
          operator: NotIn
        labelSelector:
          matchExpressions:
          - key: tenant
            operator: Exists
        topologyKey: node-pool
```

kube-apiserver modifies the labelSelector like the following:

```diff
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - matchLabelSelectors:
        - key: tenant
          operator: NotIn
        labelSelector:
          matchExpressions:
          - key: tenant
            operator: Exists
+         - key: tenant
+           operator: NotIn
+           values:
+           - tenant-a
        topologyKey: node-pool
```
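The merge step described above can be sketched in plain Go. This is a hedged illustration, not the actual kube-apiserver code: the `Requirement` and `MatchLabelSelector` types are simplified stand-ins for the real `metav1` types, and `mergeMatchLabelSelectors` is a hypothetical name.

```go
package main

import "fmt"

// Requirement is a simplified stand-in for metav1.LabelSelectorRequirement.
type Requirement struct {
	Key      string
	Operator string // "In", "NotIn", "Exists", ...
	Values   []string
}

// MatchLabelSelector mirrors the proposed API type; only "In" and "NotIn"
// are accepted by kube-apiserver.
type MatchLabelSelector struct {
	Key      string
	Operator string
}

// mergeMatchLabelSelectors (hypothetical name) mimics what kube-apiserver does
// at Pod creation: for each matchLabelSelector it looks up the incoming Pod's
// label value for the key and appends `key in (value)` or `key notin (value)`
// to the term's labelSelector requirements.
func mergeMatchLabelSelectors(podLabels map[string]string, selectors []MatchLabelSelector, existing []Requirement) ([]Requirement, error) {
	merged := append([]Requirement{}, existing...)
	for _, s := range selectors {
		value, ok := podLabels[s.Key]
		if !ok {
			continue // the incoming Pod doesn't carry this label; nothing to merge
		}
		switch s.Operator {
		case "In", "NotIn":
			merged = append(merged, Requirement{Key: s.Key, Operator: s.Operator, Values: []string{value}})
		default:
			return nil, fmt.Errorf("unsupported operator %q", s.Operator)
		}
	}
	return merged, nil
}

func main() {
	// The sample Pod above: labels tenant=tenant-a, a NotIn matchLabelSelector,
	// and an existing `tenant Exists` requirement in the labelSelector.
	got, _ := mergeMatchLabelSelectors(
		map[string]string{"tenant": "tenant-a"},
		[]MatchLabelSelector{{Key: "tenant", Operator: "NotIn"}},
		[]Requirement{{Key: "tenant", Operator: "Exists"}},
	)
	fmt.Printf("%+v\n", got)
}
```

Running this produces the same result as the diff above: the original `tenant Exists` requirement plus a `tenant notin (tenant-a)` requirement.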
### Test Plan

<!--
@@ -967,6 +1014,10 @@ Implement new enum values `ExistsWithSameValue` and `ExistsWithDifferentValue` i
- `ExistsWithSameValue`: look up the label value keyed with the key specified in the labelSelector, and match Pods that have the same label value on that key.
- `ExistsWithDifferentValue`: look up the label value keyed with the key specified in the labelSelector, and match Pods that have the same label key, but with a different label value on that key.

However, this idea was rejected because:
- it's difficult to prepare all existing clients to handle the new enums.
- handling these new enums would require every labelSelector to know which object owns it, and changing all the code that handles labelSelector is a tough road.

#### Example

A set of Pods A doesn't want to co-exist with other sets of Pods, but wants the Pods in set A co-located