<!--
**Note:** When your KEP is complete, all of these comment blocks should be removed.
To get started with this template:
- [x] **Pick a hosting SIG.**
Make sure that the problem space is something the SIG is interested in taking
up. KEPs should not be checked in without a sponsoring SIG.
- [x] **Create an issue in kubernetes/enhancements**
When filing an enhancement tracking issue, please make sure to complete all
fields in that template. One of the fields asks for a link to the KEP. You
can leave that blank until this KEP is filed, and then go back to the
enhancement and add the link.
- [x] **Make a copy of this template directory.**
Copy this template into the owning SIG's directory and name it
`NNNN-short-descriptive-title`, where `NNNN` is the issue number (with no
leading-zero padding) assigned to your enhancement above.
- [x] **Fill out as much of the kep.yaml file as you can.**
At minimum, you should fill in the "Title", "Authors", "Owning-sig",
"Status", and date-related fields.
- [x] **Fill out this file as best you can.**
At minimum, you should fill in the "Summary" and "Motivation" sections.
These should be easy if you've preflighted the idea of the KEP with the
appropriate SIG(s).
- [x] **Create a PR for this KEP.**
Assign it to people in the SIG who are sponsoring this process.
- [ ] **Merge early and iterate.**
Avoid getting hung up on specific details and instead aim to get the goals of
the KEP clarified and merged quickly. The best way to do this is to just
start with the high-level sections and fill out details incrementally in
subsequent PRs.
Just because a KEP is merged does not mean it is complete or approved. Any KEP
marked as `provisional` is a working document and subject to change. You can
denote sections that are under active debate as follows:
```
<<[UNRESOLVED optional short context or usernames ]>>
Stuff that is being argued.
<<[/UNRESOLVED]>>
```
When editing KEPS, aim for tightly-scoped, single-topic PRs to keep discussions
focused. If you disagree with what is already in a document, open a new PR
with suggested changes.
One KEP corresponds to one "feature" or "enhancement" for its whole lifecycle.
You do not need a new KEP to move from beta to GA, for example. If
new details emerge that belong in the KEP, edit the KEP. Once a feature has become
"implemented", major changes should get new KEPs.
The canonical place for the latest set of instructions (and the likely source
of this file) is [here](/keps/NNNN-kep-template/README.md).
**Note:** Any PRs to move a KEP to `implementable`, or significant changes once
it is marked `implementable`, must be approved by each of the KEP approvers.
If none of those approvers are still appropriate, then changes to that list
should be approved by the remaining approvers and/or the owning SIG (or
SIG Architecture for cross-cutting KEPs).
-->
# KEP-3756: Robust VolumeManager reconstruction after kubelet restart
<!-- toc -->
- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
  - [Goals](#goals)
  - [Non-Goals](#non-goals)
- [Introduction](#introduction)
- [Proposal](#proposal)
  - [User Stories (Optional)](#user-stories-optional)
    - [Story 1](#story-1)
  - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
  - [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
  - [Proposed VolumeManager startup](#proposed-volumemanager-startup)
  - [Old VolumeManager startup](#old-volumemanager-startup)
  - [Observability](#observability)
  - [Test Plan](#test-plan)
    - [Prerequisite testing updates](#prerequisite-testing-updates)
    - [Unit tests](#unit-tests)
    - [Integration tests](#integration-tests)
    - [e2e tests](#e2e-tests)
  - [Graduation Criteria](#graduation-criteria)
    - [Alpha](#alpha)
    - [Beta](#beta)
    - [GA](#ga)
    - [Deprecation](#deprecation)
  - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
  - [Version Skew Strategy](#version-skew-strategy)
- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
  - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
  - [Monitoring Requirements](#monitoring-requirements)
  - [Dependencies](#dependencies)
  - [Scalability](#scalability)
  - [Troubleshooting](#troubleshooting)
- [Implementation History](#implementation-history)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
<!-- /toc -->
## Release Signoff Checklist
Items marked with (R) are required *prior to targeting to a milestone / release*.
- [X] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [X] (R) KEP approvers have approved the KEP status as `implementable`
- [X] (R) Design details are appropriately documented
- [X] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
  - [ ] e2e Tests for all Beta API Operations (endpoints)
  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [X] (R) Graduation criteria is in place
  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
- [X] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [X] "Implementation History" section is up-to-date for milestone
- [X] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website
## Summary
After kubelet is restarted, it loses track of all volumes it mounted for
running Pods. It tries to restore this state from the API server, where kubelet
can find the Pods that _should_ be running, and from the host's OS, where it can
find the actually mounted volumes. We know this process is imperfect.
This KEP tries to rework the process. While the work is technically a bugfix,
it changes large parts of kubelet, and we'd like to keep it behind a feature
gate to give users a way to fall back to the old implementation in case of
problems.
This work started as part of
[KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling)
and even went alpha in v1.26, but we'd like to have a separate feature + feature
gate to be able to graduate the VolumeManager reconstruction work faster.
<!--
This section is incredibly important for producing high-quality, user-focused
documentation such as release notes or a development roadmap. It should be
possible to collect this information before implementation begins, in order to
avoid requiring implementors to split their attention between writing release
notes and implementing the feature itself. KEP editors and SIG Docs
should help to ensure that the tone and content of the `Summary` section is
useful for a wide audience.
A good summary is probably at least a paragraph in length.
Both in this section and below, follow the guidelines of the [documentation
style guide]. In particular, wrap lines to a reasonable length, to make it
easier for reviewers to cite specific portions, and to minimize diff churn on
updates.
[documentation style guide]: https://github.com/kubernetes/community/blob/master/contributors/guide/style-guide.md
-->
## Motivation
### Goals
* During kubelet startup, allow it to populate additional information about
  _how_ existing volumes are mounted.
  [KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling)
  needs to know what mount options the previous kubelet used when mounting
  the volumes, to be able to tell whether they need any change.
* Fix [#105536](https://github.com/kubernetes/kubernetes/issues/105536): Volumes
are not cleaned up (unmounted) after kubelet restart, which needs a similar
VolumeManager refactoring.
* In general, make volume cleanup more robust.
<!--
List the specific goals of the KEP. What is it trying to achieve? How will we
know that this has succeeded?
-->
### Non-Goals
<!--
What is out of scope for this KEP? Listing non-goals helps to focus discussion
and make progress.
-->
## Introduction
*VolumeManager* is a piece of kubelet that mounts volumes that should be
mounted (i.e. a Pod that needs the volume exists) and unmounts volumes that are
not needed any longer (all Pods that used them were deleted).
VolumeManager keeps two caches:
* *DesiredStateOfWorld* (DSW) contains volumes that should be mounted.
* *ActualStateOfWorld* (ASW) contains currently mounted volumes.
A volume in ASW can be marked as:
* Globally mounted - it is mounted in `/var/lib/kubelet/volumes/<plugin>/...`
  * This mount is optional and depends on volume plugin / CSI driver
    capabilities. If it's supported, each volume has only a single global
    mount.
* Mounted into a Pod local directory - it is mounted in
  `/var/lib/kubelet/pods/<pod UID>/volumes/...`. Each Pod that uses a volume
  gets its own local mount, because each Pod has a different `<pod UID>`.
  If the volume plugin / CSI driver supports the global mount mentioned above,
  each Pod local mount is typically a bind-mount from the global mount.
In addition, both global and local mounts can be marked as *uncertain*, when
kubelet is not 100% sure if the volume is fully mounted there. Typically,
this happens when a CSI driver times out NodeStage / NodePublish calls
and kubelet can't be sure if the CSI driver has finished mounting the volume
*after* the timeout. Kubelet then needs to call NodeStage / NodePublish again
if the volume is still needed by some Pods, or call NodeUnstage /
NodeUnpublish if all Pods that needed the volume were deleted.
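The states above can be pictured with a small, hypothetical Go sketch
(simplified, illustrative types only; the real ones live in
`pkg/kubelet/volumemanager/cache` and carry much more detail):
```go
// Simplified, illustrative model of the per-volume state kept in ASW.
// These are NOT the real kubelet types.
package cache

// MountState captures how confident kubelet is that a mount operation finished.
type MountState string

const (
	// MountStateMounted: the mount is known to have succeeded.
	MountStateMounted MountState = "Mounted"
	// MountStateUncertain: the operation (e.g. a CSI NodeStage/NodePublish call)
	// timed out, so the mount may or may not exist; kubelet has to retry the
	// mount if the volume is still needed, or clean it up if it is not.
	MountStateUncertain MountState = "Uncertain"
	// MountStateNotMounted: the volume is known not to be mounted.
	MountStateNotMounted MountState = "NotMounted"
)

// attachedVolume is one ASW entry.
type attachedVolume struct {
	volumeName  string
	devicePath  string                // may be unknown right after reconstruction
	globalMount MountState            // /var/lib/kubelet/volumes/<plugin>/..., optional
	podMounts   map[string]MountState // pod UID -> /var/lib/kubelet/pods/<uid>/volumes/...
}
```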
VolumeManager runs two separate goroutines:
* A *[reconciler](https://github.com/kubernetes/kubernetes/blob/44b72d034852eb6da8916c82ce722af604b196c5/pkg/kubelet/volumemanager/reconciler/reconciler.go#L47-L69)*
  that periodically compares ASW and DSW and tries to move ASW towards DSW.
* A *DesiredStateOfWorldPopulator* (DSWP) that
  [periodically lists Pods from PodManager and adds them to DSW](https://github.com/kubernetes/kubernetes/blob/cca3d557e6ff7f265eca8517d7c4fa719077c8d1/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go#L175-L189).
  The DSWP is marked as `hasAddedPods=true` ("fully populated") only after
  it has read all Pods from files (static pods) **and** from the API server (i.e.
  [`sourcesReady.AllReady` returns `true` here](https://github.com/kubernetes/kubernetes/blob/cca3d557e6ff7f265eca8517d7c4fa719077c8d1/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go#L150-L159)).
Both the ASW and DSW caches exist only in memory and are lost when the kubelet
process dies. It's relatively easy to populate DSW - just list all Pods from the
API server and the static pods and collect their volumes. Populating ASW is
complicated and is actually the source of several problems that we want to
address in this KEP.
*Volume reconstruction* is a process where kubelet tries to create a single
valid `PersistentVolumeSpec` or `VolumeSpec` for a volume from the OS,
typically from the mount table, by looking at what's mounted at
`/var/lib/kubelet/pods/*/volumes/XYZ`. This process is imperfect:
it populates only the `(Persistent)VolumeSpec` fields that are necessary to
unmount the volume (i.e. to call `volumePlugin.TearDown` + `UnmountDevice`).
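For illustration, the reconstruction scan boils down to walking the per-pod
volume directories; a minimal sketch under that assumption (the helper below is
hypothetical, not kubelet's actual code):
```go
// Illustrative sketch of the directory scan that volume reconstruction starts
// from. Function and variable names are hypothetical; kubelet's real code is in
// pkg/kubelet/volumemanager/reconciler.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const kubeletPodsDir = "/var/lib/kubelet/pods"

// listPodVolumeDirs returns pod UID -> volume directories found on disk.
func listPodVolumeDirs() (map[string][]string, error) {
	result := map[string][]string{}
	pods, err := os.ReadDir(kubeletPodsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		if !pod.IsDir() {
			continue
		}
		// Volumes are mounted under .../<pod UID>/volumes/<plugin>/<volume name>.
		volumesDir := filepath.Join(kubeletPodsDir, pod.Name(), "volumes")
		matches, err := filepath.Glob(filepath.Join(volumesDir, "*", "*"))
		if err != nil {
			return nil, err
		}
		result[pod.Name()] = matches
	}
	return result, nil
}

func main() {
	dirs, err := listPodVolumeDirs()
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	for uid, vols := range dirs {
		fmt.Printf("pod %s: %d volume dirs\n", uid, len(vols))
	}
}
```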
Today, kubelet populates VolumeManager's DSW first, from static Pods and Pods
received from the API server. ASW is populated from the OS
after DSW has been fully populated (`hasAddedPods==true`), and **only volumes
missing in DSW are added there**. In other words, kubelet reconstructs only the
volumes of Pods that were running but were deleted from the API server before
kubelet started. (If a Pod is still in the API server and Running, its volumes
will be in DSW.)
We assumed that this was enough: if a volume is in DSW, the
VolumeManager will try to mount it, and it will eventually reach ASW.
We needed to add
[a complex workaround](https://github.com/kubernetes/kubernetes/pull/110670)
to actually unmount a volume when it is initially in DSW, but the user deletes
all Pods that need it before the volume reaches ASW.
## Proposal
<!--
This is where we get down to the specifics of what the proposal actually is.
This should have enough detail that reviewers can understand exactly what
you're proposing, but should not include things like API designs or
implementation. What is the desired outcome and how do we measure success?.
The "Design Details" section below is for the real
nitty-gritty.
-->
We propose to reverse the kubelet startup process.
1. Quickly reconstruct ASW from the OS when kubelet starts, adding **all** found
   volumes to ASW as *uncertain*. "Quickly" means the process should look
   only at the OS and at files/directories in `/var/lib/kubelet/pods`, and it
   should not require the API server or any network calls. In particular, the
   API server may not be available at this stage of kubelet startup.
2. In parallel to 1., start DSWP and populate DSW from the API server and
static pods.
3. When the connection to the API server becomes available, complete the
   reconstructed information in ASW with data from the API server (e.g. from
   `node.status`). This typically happens in parallel to the previous step.
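The proposed ordering can be sketched as follows (hypothetical, simplified
types; the real wiring lives in `pkg/kubelet/volumemanager`):
```go
// Package sketch illustrates the proposed startup order with simplified,
// hypothetical types; kubelet's real wiring is in pkg/kubelet/volumemanager.
package sketch

type runner interface{ Run(stopCh <-chan struct{}) }

type volumeManager struct {
	populator  runner // fills DSW from static pods + the API server
	reconciler runner // moves ASW towards DSW
}

// reconstructAllVolumesIntoASW stands in for step 1: scan /var/lib/kubelet/pods
// and add every volume mount found there to ASW as *uncertain*.
func (vm *volumeManager) reconstructAllVolumesIntoASW() { /* omitted */ }

// runStartup shows the proposed ordering.
func (vm *volumeManager) runStartup(stopCh <-chan struct{}) {
	vm.reconstructAllVolumesIntoASW() // 1. OS-only, no API server needed

	go vm.populator.Run(stopCh)  // 2. populate DSW in parallel
	go vm.reconciler.Run(stopCh) // 3. reconcile; unmounts wait for full DSW + node.status
}
```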
Benefits:
* All volumes are reconstructed from the OS. As a result, ASW can contain the
  real information about how the volumes are mounted, e.g. their mount options.
  This will help with
  [KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling).
* Some issues become much easier to fix, e.g.
  * [#105536](https://github.com/kubernetes/kubernetes/issues/105536)
  * We can remove the workarounds for
    [#96635](https://github.com/kubernetes/kubernetes/issues/96635)
    and [#70044](https://github.com/kubernetes/kubernetes/issues/70044);
    they will get fixed naturally by the refactoring.
We also propose to split this work out of
[KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling),
as it can be useful outside of SELinux relabeling and could graduate separately.
To split the feature out, we propose the feature gate `NewVolumeManagerReconstruction`.
### User Stories (Optional)
#### Story 1
(This is not a new story; we want to keep this behavior.)
As a cluster admin, I want kubelet to resume where it stopped when it was
restarted or its machine was rebooted, so I don't need to clean up / unmount
any volumes manually.
It must be able to recognize what happened in the meantime and either unmount
any volumes of Pods that were deleted in the API server or mount volumes for
newly created Pods.
### Notes/Constraints/Caveats (Optional)
<!--
What are the caveats to the proposal?
What are some important details that didn't come across above?
Go in to as much detail as necessary here.
This might be a good place to talk about core concepts and how they relate.
-->
### Risks and Mitigations
The whole VolumeManager startup was rewritten as part of
[KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling).
It can contain bugs that are not trivial to find, because kubelet can be used
in a number of situations that we don't have in CI. For example, we found
(and fixed) a case where the API server itself runs as a static Pod managed by
the kubelet that is just starting. We don't know what other kubelet
configurations people use, so we decided to write a KEP and move the new
VolumeManager startup behind a feature gate.
## Design Details
This section serves as a design document of both the proposed *and* the old
VolumeManager startup, including volume reconstruction during that startup.
### Proposed VolumeManager startup
When kubelet starts, VolumeManager starts DSWP and reconciler
[in parallel](https://github.com/kubernetes/kubernetes/blob/575616cc72dbfdd070ead81ec29c0d4f00226487/pkg/kubelet/volumemanager/volume_manager.go#L288-L292).
However, the first thing the reconciler does before reconciling DSW and ASW
is to scan `/var/lib/kubelet/pods/*`, reconstruct **all** volumes found there,
and add them to ASW as *uncertainly mounted* and *uncertainly attached*.
Only information that is available in the Pod directories on disk is
reconstructed into ASW, because kubelet may not have a connection to the API
server at this point.
The volume reconstruction can be imperfect:
* It can miss `devicePath`, which may not be possible to reconstruct from the OS.
* For CSI volumes, it cannot decide whether a volume is attach-able, i.e. whether to
  [put it into](https://github.com/kubernetes/kubernetes/blob/89bfdf02762727506c9801d38b202873793d1106/pkg/kubelet/volumemanager/volume_manager.go#L368)
  or [remove it from](https://github.com/kubernetes/kubernetes/blob/5134520a3bc3604d14a10900c7e07481f62d5912/pkg/kubelet/volumemanager/reconciler/reconciler_common.go#L298)
  `node.status.volumesInUse`, because it cannot read the `CSIDriver` object from
  the API server yet.
Kubelet puts the volumes into ASW as *uncertainly attached* and with a possibly
wrong `devicePath` obtained from the volume plugin. Kubelet stores a list of the
reconstructed volumes in `volumesNeedUpdateFromNodeStatus` to fix both
`devicePath` and attach-ability from `node.status.volumesAttached` once it
establishes a connection to the API server.
After **ASW** is populated, the reconciler starts its
[reconciliation loop](https://github.com/kubernetes/kubernetes/blob/16534deedf1e3f7301b20041fafe15ff7f178904/pkg/kubelet/volumemanager/reconciler/reconciler_new.go#L33-L75):
1. `mountOrAttachVolumes()` - mounts (and attaches, if necessary) volumes that
   are in DSW, but not in ASW. This can happen even before DSW is fully
   populated.
2. `updateReconstructedFromNodeStatus()` - once kubelet gets a connection to the
   API server and reads its own `node.status`, volumes in
   `volumesNeedUpdateFromNodeStatus` (i.e. all reconstructed volumes) are
   updated from `node.status.volumesAttached`, overwriting any previous
   *uncertain attach-ability* and the `devicePath` of *uncertain mounts* (i.e.
   potentially overwriting the reconstructed `devicePath` or even a `devicePath`
   from `MountDevice` / `SetUp` that ended as *uncertain*). This
   happens only once; `volumesNeedUpdateFromNodeStatus` is cleared afterwards.
3. (Only once): Add all reconstructed volumes to `node.status.volumesInUse`.
4. Only after DSW has been fully populated (i.e. VolumeManager can tell whether
   a volume is really needed or not), **and** ASW has been updated from
   `node.status`, can VolumeManager start unmounting volumes; it calls:
   1. `unmountVolumes()` - unmounts pod local volume mounts (`TearDown`) that
      are in ASW and are not in DSW.
   2. `unmountDetachDevices()` - unmounts global volume mounts (`UnmountDevice`)
      of volumes that are in ASW and are not in DSW.
   3. `cleanOrphanVolumes()` - tries to clean up `volumesFailedReconstruction`.
      Here kubelet cannot call the appropriate volume plugin to unmount a
      volume, because it failed to reconstruct the volume spec from
      `/var/lib/kubelet/pods/<uid>/volumes/xyz`. Kubelet at least tries to
      unmount the directory and clean up any orphaned files there.
      This happens only once; `volumesFailedReconstruction` is cleared
      afterwards.
Note that e.g. `mountOrAttachVolumes` can call `volumePlugin.MountDevice` /
`SetUp()` on a reconstructed volume (because it was added to ASW as *uncertain*)
and finally update ASW, while the VolumeManager is still waiting for the API
server to update `devicePath` of the same volume in ASW (step 2 above). We made
sure that `updateReconstructedDevicePaths()` updates the `devicePath` only
for volumes that are still *uncertain*, so as not to overwrite the *certain* ones.
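The loop above can be summarized with a hedged Go sketch (simplified stand-in
types and method names that mirror the steps, not kubelet's exact code):
```go
// Illustrative sketch of one iteration of the proposed reconciler loop; method
// names mirror the steps above, the types and fields are simplified stand-ins.
package sketch

type reconciler struct {
	volumesNeedUpdateFromNodeStatus []string // reconstructed volumes awaiting node.status data
	volumesFailedReconstruction     []string // directories whose volume spec could not be rebuilt
	reportedInUse                   bool     // reconstructed volumes already published in volumesInUse
	dswFullyPopulated               func() bool
}

func (rc *reconciler) reconcile() {
	rc.mountOrAttachVolumes() // DSW \ ASW; may run before DSW is complete

	if len(rc.volumesNeedUpdateFromNodeStatus) != 0 {
		// Fix devicePath / attach-ability of *uncertain* volumes from
		// node.status.volumesAttached; the list is cleared once this succeeds.
		rc.updateReconstructedFromNodeStatus()
	}
	if !rc.reportedInUse {
		rc.reportReconstructedInVolumesInUse() // one-time node.status.volumesInUse update
		rc.reportedInUse = true
	}

	// Unmounting is allowed only once DSW is complete and node.status was read,
	// so volumes are never torn down just because their pods were not seen yet.
	if rc.dswFullyPopulated() && len(rc.volumesNeedUpdateFromNodeStatus) == 0 {
		rc.unmountVolumes()       // ASW \ DSW: pod-local mounts (TearDown)
		rc.unmountDetachDevices() // ASW \ DSW: global mounts (UnmountDevice)
		rc.cleanOrphanVolumes()   // best-effort cleanup of volumesFailedReconstruction
	}
}

// Stubs standing in for the real operations.
func (rc *reconciler) mountOrAttachVolumes()              {}
func (rc *reconciler) updateReconstructedFromNodeStatus() {}
func (rc *reconciler) reportReconstructedInVolumesInUse() {}
func (rc *reconciler) unmountVolumes()                    {}
func (rc *reconciler) unmountDetachDevices()              {}
func (rc *reconciler) cleanOrphanVolumes()                {}
```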
### Old VolumeManager startup
When kubelet starts, VolumeManager starts DSWP and the reconciler
[in parallel](https://github.com/kubernetes/kubernetes/blob/16534deedf1e3f7301b20041fafe15ff7f178904/pkg/kubelet/volumemanager/volume_manager.go#L288-L292).
[The reconciler](https://github.com/kubernetes/kubernetes/blob/16534deedf1e3f7301b20041fafe15ff7f178904/pkg/kubelet/volumemanager/reconciler/reconciler.go#L33-L45)
then periodically does:
1. `unmountVolumes()` - unmounts (`TearDown`) pod local volumes that are in
   ASW and are not in DSW. Since ASW is initially empty, this call becomes
   useful only later.
2. `mountOrAttachVolumes()` - mounts (and attaches, if necessary) volumes that
   are in DSW, but not in ASW. This will eventually happen for all volumes in
   DSW, because ASW is empty; this is actually how ASW gets populated.
3. `unmountDetachDevices()` - unmounts (`UnmountDevice`) global volume mounts
of volumes that are in ASW and are not in DSW.
4. Only once, after DSW is fully populated:
   1. VolumeManager calls `sync()`, which scans `/var/lib/kubelet/pods/*`
      and reconstructs **only** volumes that are not already in ASW.
      In addition, volumes that are in DSW are reconstructed, but not added to
      ASW (if a volume is in DSW, we expect that it reaches ASW during step 2).
      * `devicePath` of reconstructed volumes is populated from
        `node.status.volumesAttached` right away.
      * In the next reconciliation loop, reconstructed volumes that are not in
        DSW are finally unmounted in step 1 above.
      * There is a workaround to add a reconstructed volume to ASW when it was
        initially in DSW, but all pods that used the volume were deleted before
        the volume was mounted and reached ASW
        ([#110670](https://github.com/kubernetes/kubernetes/pull/110670)).
   2. VolumeManager reports all reconstructed volumes in
      `node.status.volumesInUse` (that's why VolumeManager reconstructs volumes
      even if it does not add them to ASW).
   3. For volumes that failed reconstruction, kubelet cannot call the
      appropriate volume plugin to unmount them. Kubelet at least tries to
      unmount the directory and clean up any orphaned files there.
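For contrast, a sketch of the old loop in the same simplified style as the one
above (`sync()` and `syncDone` are stand-ins for the real one-time
reconstruction pass):
```go
// Illustrative sketch of the old reconciler loop ordering; simplified,
// hypothetical names, not kubelet's exact code.
package sketch

type oldReconciler struct {
	syncDone          bool
	dswFullyPopulated func() bool
}

func (rc *oldReconciler) reconcile() {
	rc.unmountVolumes()       // harmless initially: ASW starts empty
	rc.mountOrAttachVolumes() // this is what gradually fills ASW
	rc.unmountDetachDevices()

	if rc.dswFullyPopulated() && !rc.syncDone {
		// One-time sync(): reconstruct only volumes missing from ASW, report all
		// reconstructed volumes in node.status.volumesInUse, and force-clean the
		// ones whose reconstruction failed.
		rc.sync()
		rc.syncDone = true
	}
}

// Stubs standing in for the real operations.
func (rc *oldReconciler) unmountVolumes()       {}
func (rc *oldReconciler) mountOrAttachVolumes() {}
func (rc *oldReconciler) unmountDetachDevices() {}
func (rc *oldReconciler) sync()                 {}
```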
### Observability
Today, any errors during volume reconstruction are exposed only as log messages.
We propose adding these new metrics, both to the old and new VolumeManager code:
* `reconstruct_volume_operations_total` / `reconstruct_volume_operations_errors_total`:
nr. of all / unsuccessfully reconstructed volumes.
  * In the new VolumeManager code, this will include all volume mounts in
    `/var/lib/kubelet/pods/*/volumes`.
  * In the old VolumeManager code, it will include only volumes that were not
    already in ASW (volumes already in ASW are not reconstructed).
* `force_cleaned_failed_volume_operations_total` / `force_cleaned_failed_volume_operation_errors_total`: nr.
of all / unsuccessful cleanups of volumes that failed reconstruction.
* `orphan_pod_cleaned_volumes_errors`: nr. of pods that failed cleanup with errors
  like `orphaned pod "<uid>" found, but XYZ failed`
  ([example](https://github.com/kubernetes/kubernetes/blob/4fac7486d41c033d6bba9dfeda2356e8189035cd/pkg/kubelet/kubelet_volumes.go#L215)) in the last sync.
  These messages can be a symptom of failed reconstruction (e.g.
  [#105536](https://github.com/kubernetes/kubernetes/issues/105536)).
  Note that kubelet logs this periodically, so bumping the metric on every
  occurrence would not be useful.
  [`cleanupOrphanedPodDirs`](https://github.com/kubernetes/kubernetes/blob/4fac7486d41c033d6bba9dfeda2356e8189035cd/pkg/kubelet/kubelet_volumes.go#L168)
  needs to be changed to collect the errors found during
  one `/var/lib/kubelet/pods/` check and report the collected "nr. of errors
  during the last housekeeping sweep (every 2 seconds)". There is no label that
  would distinguish between individual error causes.
* `orphan_pod_cleaned_volumes`: total nr. of pods that `cleanupOrphanedPodDirs`
  attempted to clean up in the last sync, both successful and failed.
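For orientation, registering counters like these would presumably follow the
usual `k8s.io/component-base/metrics` pattern that kubelet uses; a sketch with
two of the metrics above (the Help strings and the `Register()` helper are
illustrative, not the final implementation):
```go
// Sketch of registering two of the proposed counters, assuming the usual
// k8s.io/component-base/metrics + legacyregistry pattern used by kubelet.
package sketchmetrics

import (
	compbasemetrics "k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

var (
	// Total number of volumes kubelet attempted to reconstruct after a restart.
	ReconstructVolumeOperationsTotal = compbasemetrics.NewCounter(
		&compbasemetrics.CounterOpts{
			Name:           "reconstruct_volume_operations_total",
			Help:           "Number of volumes the kubelet attempted to reconstruct after a restart.",
			StabilityLevel: compbasemetrics.ALPHA,
		})
	// Number of volumes kubelet failed to reconstruct after a restart.
	ReconstructVolumeOperationsErrorsTotal = compbasemetrics.NewCounter(
		&compbasemetrics.CounterOpts{
			Name:           "reconstruct_volume_operations_errors_total",
			Help:           "Number of volumes the kubelet failed to reconstruct after a restart.",
			StabilityLevel: compbasemetrics.ALPHA,
		})
)

// Register makes the counters available on kubelet's /metrics endpoint.
func Register() {
	legacyregistry.MustRegister(ReconstructVolumeOperationsTotal)
	legacyregistry.MustRegister(ReconstructVolumeOperationsErrorsTotal)
}
```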
### Test Plan
[x] I/we understand the owners of the involved components may require updates to
existing tests to make this code solid enough prior to committing the changes necessary
to implement this enhancement.
##### Prerequisite testing updates
<!--
Based on reviewers feedback describe what additional tests need to be added prior
implementing this enhancement to ensure the enhancements have also solid foundations.
-->
##### Unit tests
<!--
In principle every added code should have complete unit test coverage, so providing
the exact set of tests will not bring additional value.
However, if complete unit test coverage is not possible, explain the reason of it
together with explanation why this is acceptable.
-->
<!--
Additionally, for Alpha try to enumerate the core package you will be touching
to implement this enhancement and provide the current unit coverage for those
in the form of:
- <package>: <date> - <current test coverage>
The data can be easily read from:
https://testgrid.k8s.io/sig-testing-canaries#ci-kubernetes-coverage-unit
This can inform certain test coverage improvements that we want to do before
extending the production code to implement this enhancement.
-->
All files are in `k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/`,
data taken on
[2023-01-26](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-coverage-unit/1613337898885582848/artifacts/combined-coverage.html).
The old reconciler + reconstruction:

- `reconciler.go`: `77.1%`
- `reconstruct.go`: `75.7%`

The new reconciler + reconstruction:

- `reconciler_new.go`: `73.3%`
  - The coverage is lower than for `reconciler.go` because parts of the code are
    tested by unit tests in different packages. With the
    `SELinuxMountReadWriteOnce` gate force-enabled on today's master
    (`f21c60341740874703ce12e070eda6cdddfd9f7b`), I got a `reconciler_new.go`
    coverage of `93.3%`.
- `reconstruct_new.go`: `66.2%`
  - `updateReconstructedDevicePaths` does not have unit tests; these will be
    added before the Beta release.

Common code:

- `reconciler_common.go`: `86.2%`
- `reconstruct_common.go`: `75.8%`
##### Integration tests
<!--
This question should be filled when targeting a release.
For Alpha, describe what tests will be added to ensure proper quality of the enhancement.
For Beta and GA, add links to added tests together with links to k8s-triage for those tests:
https://storage.googleapis.com/k8s-triage/index.html
-->
None.
##### e2e tests
<!--
This question should be filled when targeting a release.
For Alpha, describe what tests will be added to ensure proper quality of the enhancement.
For Beta and GA, add links to added tests together with links to k8s-triage for those tests:
https://storage.googleapis.com/k8s-triage/index.html
We expect no non-infra related flakes in the last month as a GA graduation criteria.
-->
- "Should test that pv used in a pod that is deleted while the kubelet is down
cleans up when the kubelet returns":
https://storage.googleapis.com/k8s-triage/index.html?sig=storage&test=Should%20test%20that%20pv%20used%20in%20a%20pod%20that%20is%20deleted%20while%20the%20kubelet%20is%20down%20cleans%20up%20when%20the%20kubelet%20returns
- "Should test that pv used in a pod that is force deleted while the kubelet is
down cleans up when the kubelet returns":
https://storage.googleapis.com/k8s-triage/index.html?sig=storage&test=Should%20test%20that%20pv%20used%20in%20a%20pod%20that%20is%20force%20deleted%20while%20the%20kubelet%20is%20down%20cleans%20up%20when%20the%20kubelet%20returns
Both are for the old reconstruction code; we don't have a job that enables
alpha features and runs `[Disruptive]` tests.
Recent results:
> *235 failures (3 in last day) out of 130688 builds from 1/11/2023, 1:00:33 AM
> to 1/25/2023*
I checked a couple of the recent flakes and all of them failed because they
could not create a namespace for the test:
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-serial/1620328095124819968:
> Unexpected error while creating namespace: Post
> "https://35.247.99.121/api/v1/namespaces": dial tcp 35.247.99.121:443:
> connect: connection refused
A whole new job was added to ensure static pods can start when kubelet restarts: [ci-kubernetes-e2e-storage-kind-disruptive](https://testgrid.k8s.io/sig-storage-kubernetes#kind-disruptive).
There was a single installation flake in the last 14 days (captured on 2024-01-23).
### Graduation Criteria
#### Alpha
- Feature implemented behind a feature flag
#### Beta
- Gather feedback from developers
#### GA
- Allowing time for feedback.
- No flakes in CI.
#### Deprecation
- Announce deprecation and support policy of the existing flag
- No need to wait for two versions passed since introducing the functionality that deprecates the flag (to address version skew). The feature is local to a single kubelet.
- Address feedback on usage/changed behavior, provided on GitHub issues
- Deprecate the flag
### Upgrade / Downgrade Strategy
<!--
If applicable, how will the component be upgraded and downgraded? Make sure
this is in the test plan.
Consider the following in developing an upgrade/downgrade strategy for this
enhancement:
- What changes (in invocations, configurations, API use, etc.) is an existing
cluster required to make on upgrade, in order to maintain previous behavior?
- What changes (in invocations, configurations, API use, etc.) is an existing
cluster required to make on upgrade, in order to make use of the enhancement?
-->
The feature is enabled by a single feature gate on kubelet and does not require
any special upgrade / downgrade handling.
### Version Skew Strategy
<!--
If applicable, how will the component handle version skew with other
components? What are the guarantees? Make sure this is in the test plan.
Consider the following in developing a version skew strategy for this
enhancement:
- Does this enhancement involve coordinating behavior in the control plane and
in the kubelet? How does an n-2 kubelet without this feature available behave
when this feature is used?
- Will any other components on the node change? For example, changes to CSI,
CRI or CNI may require updating that component before the kubelet.
-->
The feature affects only how kubelet starts. It has no implications on
other Kubernetes components or other kubelets. Therefore, we don't see any
issues with any version skew.
## Production Readiness Review Questionnaire
<!--
Production readiness reviews are intended to ensure that features merging into
Kubernetes are observable, scalable and supportable; can be safely operated in
production environments, and can be disabled or rolled back in the event they
cause increased failures in production. See more in the PRR KEP at
https://git.k8s.io/enhancements/keps/sig-architecture/1194-prod-readiness.
The production readiness review questionnaire must be completed and approved
for the KEP to move to `implementable` status and be included in the release.
In some cases, the questions below should also have answers in `kep.yaml`. This
is to enable automation to verify the presence of the review, and to reduce review
burden and latency.
The KEP must have an approver from the
[`prod-readiness-approvers`](http://git.k8s.io/enhancements/OWNERS_ALIASES)
team. Please reach out on the
[#prod-readiness](https://kubernetes.slack.com/archives/CPNHUMN74) channel if
you need any help or guidance.
-->
### Feature Enablement and Rollback
<!--
This section must be completed when targeting alpha to a release.
-->
###### How can this feature be enabled / disabled in a live cluster?
<!--
Pick one of these and delete the rest.
Documentation is available on [feature gate lifecycle] and expectations, as
well as the [existing list] of feature gates.
[feature gate lifecycle]: https://git.k8s.io/community/contributors/devel/sig-architecture/feature-gates.md
[existing list]: https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
-->
- [X] Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name: `NewVolumeManagerReconstruction`
  - Components depending on the feature gate: kubelet
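For reference, kubelet code would consult the gate in the usual way; a sketch
assuming the gate constant is declared in `pkg/features` as usual and checked
where the VolumeManager builds its reconciler (`runNew` / `runOld` are
hypothetical names for the two code paths):
```go
// Sketch of how kubelet code typically branches on a feature gate; the exact
// call site and the runNew/runOld helpers are illustrative.
package reconciler

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
)

type reconciler struct{}

func (rc *reconciler) Run(stopCh <-chan struct{}) {
	if utilfeature.DefaultFeatureGate.Enabled(features.NewVolumeManagerReconstruction) {
		rc.runNew(stopCh) // proposed startup: reconstruct ASW first, then reconcile
		return
	}
	rc.runOld(stopCh) // legacy startup: reconcile first, reconstruct during sync()
}

// Stubs for the two code paths.
func (rc *reconciler) runNew(stopCh <-chan struct{}) {}
func (rc *reconciler) runOld(stopCh <-chan struct{}) {}
```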
###### Does enabling the feature change any default behavior?
<!--
Any change of default behavior may be surprising to users or break existing
automations, so be extremely careful here.
-->
It changes how kubelet starts and how it cleans up volume mounts. It has no
visible effect on any API object.
###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
<!--
Describe the consequences on existing workloads (e.g., if this is a runtime
feature, can it break the existing applications?).
Feature gates are typically disabled by setting the flag to `false` and
restarting the component. No other changes should be necessary to disable the
feature.
NOTE: Also set `disable-supported` to `true` or `false` in `kep.yaml`.
-->
The feature can be disabled without any issues.
###### What happens if we reenable the feature if it was previously rolled back?
Nothing interesting happens. This feature changes how kubelet starts and how it
cleans up volume mounts. It has no visible effect on any API object, nor on the
structure of data / the mount table in the host OS.
###### Are there any tests for feature enablement/disablement?
<!--
The e2e framework does not currently support enabling or disabling feature
gates. However, unit tests in each component dealing with managing data, created
with and without the feature, are necessary. At the very least, think about
conversion tests if API types are being modified.
Additionally, for features that are introducing a new API field, unit tests that
are exercising the `switch` of feature gate itself (what happens if I disable a
feature gate after having objects written with the new field) are also critical.
You can take a look at one potential example of such test in:
https://github.com/kubernetes/kubernetes/pull/97058/files#diff-7826f7adbc1996a05ab52e3f5f02429e94b68ce6bce0dc534d1be636154fded3R246-R282
-->
We have unit tests with the feature both disabled and enabled.
The feature affects only kubelet startup and we don't change the format of data
present in the OS (mount table, content of `/var/lib/kubelet/pods/`), so we
don't have automated tests that start kubelet with the feature enabled and then
disable it, or vice versa.
### Rollout, Upgrade and Rollback Planning
<!--
This section must be completed when targeting beta to a release.
-->
###### How can a rollout or rollback fail? Can it impact already running workloads?
<!--
Try to be as paranoid as possible - e.g., what if some components will restart
mid-rollout?
Be sure to consider highly-available clusters, where, for example,
feature flags will be enabled on some API servers and not others during the
rollout. Similarly, consider large clusters and how enablement/disablement
will rollout across nodes.
-->
If this feature is buggy, kubelet either does not come up at
all (crashes, hangs) or does not unmount volumes that it should unmount.
###### What specific metrics should inform a rollback?
<!--
What signals should users be paying attention to when the feature is young
that might indicate a serious problem?
-->
`reconstruct_volume_operations_total`,
`reconstruct_volume_operations_errors_total`,
`force_cleaned_failed_volume_operations_total`,
`force_cleaned_failed_volume_operation_errors_total`,
`orphaned_volumes_cleanup_errors_total`
See the Observability part of the Design Details section. All newly introduced
metrics will be added to both the "old" and the "new" VolumeManager code, so
users can compare these metrics with the feature gate enabled and disabled and
see if a downgrade actually helped.
###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
<!--
Describe manual testing that was done and the outcomes.
Longer term, we may want to require automated upgrade/rollback tests, but we
are missing a bunch of machinery and tooling and can't do that now.
-->
Yes, see https://github.com/kubernetes/enhancements/issues/3756#issuecomment-1906255361 (and expand `Details`).
###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
<!--
Even if applying deprecation policies, they may still surprise some users.
-->
No.
### Monitoring Requirements
<!--
This section must be completed when targeting beta to a release.
For GA, this section is required: approvers should be able to confirm the
previous answers based on experience in the field.
-->
###### How can an operator determine if the feature is in use by workloads?
<!--
Ideally, this should be a metric. Operations against the Kubernetes API (e.g.,
checking if there are objects with field X set) may be a last resort. Avoid
logs or events for this purpose.
-->
They can check whether the feature gate is enabled on a node, e.g. by monitoring
the `kubernetes_feature_enabled` metric, or by reading kubelet logs.
###### How can someone using this feature know that it is working for their instance?
<!--
For instance, if this is a pod-related feature, it should be possible to determine if the feature is functioning properly
for each individual pod.
Pick one more of these and delete the rest.
Please describe all items visible to end users below with sufficient detail so that they can verify correct enablement
and operation of this feature.
Recall that end users cannot usually observe component logs or access metrics.
-->
- [ ] Events
  - Event Reason:
- [ ] API .status
  - Condition name:
  - Other field:
- [X] Other (treat as last resort)
  - Details: logs during kubelet startup.
###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?
<!--
This is your opportunity to define what "normal" quality of service looks like
for a feature.
It's impossible to provide comprehensive guidance, but at the very
high level (needs more precise definitions) those may be things like:
- per-day percentage of API calls finishing with 5XX errors <= 1%
- 99% percentile over day of absolute value from (job creation time minus expected
job creation time) for cron job <= 10%
- 99.9% of /health requests per day finish with 200 code
These goals will help you determine what you need to measure (SLIs) in the next
question.
-->
These two metrics are populated during kubelet startup:
* `reconstruct_volume_operations_errors_total` should be zero. An error here
  means that kubelet was not able to reconstruct its cache of mounted volumes
  and the appropriate volume plugin was not called to clean up a volume mount.
  There could be a leaked file or directory on the filesystem.
* `force_cleaned_failed_volume_operation_errors_total` should be zero. An error
  here means that kubelet was not able to unmount a volume even with all the
  fallbacks it has. There *is* at least a leaked directory on the filesystem,
  and there could also be a leaked mount.
###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
<!--
Pick one more of these and delete the rest.
-->
- [X] Metrics
  - Metric name:
    - `reconstruct_volume_operations_total`
    - `reconstruct_volume_operations_errors_total`
    - `force_cleaned_failed_volume_operations_total`
    - `force_cleaned_failed_volume_operation_errors_total`
    - `orphaned_volumes_cleanup_errors_total`
  - Components exposing the metric: kubelet
###### Are there any missing metrics that would be useful to have to improve observability of this feature?
<!--
Describe the metrics themselves and the reasons why they weren't added (e.g., cost,
implementation difficulties, etc.).
-->
No
### Dependencies
<!--
This section must be completed when targeting beta to a release.
-->
###### Does this feature depend on any specific services running in the cluster?
<!--
Think about both cluster-level services (e.g. metrics-server) as well
as node-level agents (e.g. specific version of CRI). Focus on external or
optional services that are needed. For example, if this feature depends on
a cloud provider API, or upon an external software-defined storage or network
control plane.
For each of these, fill in the following—thinking about running existing user workloads
and creating new ones, as well as about cluster-level services (e.g. DNS):
- [Dependency name]
- Usage description:
- Impact of its outage on the feature:
- Impact of its degraded performance or high-error rates on the feature:
-->
No.
### Scalability
<!--
For alpha, this section is encouraged: reviewers should consider these questions
and attempt to answer them.
For beta, this section is required: reviewers must answer these questions.
For GA, this section is required: approvers should be able to confirm the
previous answers based on experience in the field.
-->
###### Will enabling / using this feature result in any new API calls?
<!--
Describe them, providing:
- API call type (e.g. PATCH pods)
- estimated throughput
- originating component(s) (e.g. Kubelet, Feature-X-controller)
Focusing mostly on:
- components listing and/or watching resources they didn't before
- API calls that may be triggered by changes of some Kubernetes resources
(e.g. update of object X triggers new updates of object Y)
- periodic API calls to reconcile state (e.g. periodic fetching state,
heartbeats, leader election, etc.)
-->
No.
###### Will enabling / using this feature result in introducing new API types?
<!--
Describe them, providing:
- API type
- Supported number of objects per cluster
- Supported number of objects per namespace (for namespace-scoped objects)
-->
No.
###### Will enabling / using this feature result in any new calls to the cloud provider?
<!--
Describe them, providing:
- Which API(s):
- Estimated increase:
-->
No.
###### Will enabling / using this feature result in increasing size or count of the existing API objects?
<!--
Describe them, providing:
- API type(s):
- Estimated increase in size: (e.g., new annotation of size 32B)
- Estimated amount of new objects: (e.g., new Object X for every existing Pod)
-->
No.
###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
<!--
Look at the [existing SLIs/SLOs].
Think about adding additional work or introducing new steps in between
(e.g. need to do X to start a container), etc. Please describe the details.
[existing SLIs/SLOs]: https://git.k8s.io/community/sig-scalability/slos/slos.md#kubernetes-slisslos
-->
Kubelet startup could be slower, but that would be a bug. In theory, the old
and the new VolumeManager startup do the same things, just in a different order.
###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
<!--
Things to keep in mind include: additional in-memory state, additional
non-trivial computations, excessive access to disks (including increased log
volume), significant amount of data sent and/or received over network, etc.
Think through this both in small and large cases, again with respect to the
[supported limits].
[supported limits]: https://git.k8s.io/community//sig-scalability/configs-and-limits/thresholds.md
-->
No.
###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
<!--
Focus not just on happy cases, but primarily on more pathological cases
(e.g. probes taking a minute instead of milliseconds, failed pods consuming resources, etc.).
If any of the resources can be exhausted, how this is mitigated with the existing limits
(e.g. pods per node) or new limits added by this KEP?
Are there any tests that were run/should be run to understand performance characteristics better
and validate the declared limits?