Commit f31a926

update KEP to match latest template
1 parent 977d7ee commit f31a926

File tree

1 file changed: +11 −0 lines changed
  • keps/sig-storage/3762-persistent-volume-last-phase-transition-time


keps/sig-storage/3762-persistent-volume-last-phase-transition-time/README.md

```diff
@@ -869,6 +869,17 @@ This through this both in small and large cases, again with respect to the
 [supported limits]: https://git.k8s.io/community//sig-scalability/configs-and-limits/thresholds.md
 -->
 
+###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
+
+<!--
+Focus not just on happy cases, but primarily on more pathological cases
+(e.g. probes taking a minute instead of milliseconds, failed pods consuming resources, etc.).
+If any of the resources can be exhausted, how this is mitigated with the existing limits
+(e.g. pods per node) or new limits added by this KEP?
+Are there any tests that were run/should be run to understand performance characteristics better
+and validate the declared limits?
+-->
+
 ### Troubleshooting
 
 <!--
```
