- [Timestamp of the Pod Status](#timestamp-of-the-pod-status)
- [Runtime Service Changes](#runtime-service-changes)
- [Pod Status Update in the Cache](#pod-status-update-in-the-cache)
- [Compatibility Check](#compatibility-check)
- [Test Plan](#test-plan)
  - [Prerequisite testing updates](#prerequisite-testing-updates)
  - [Unit tests](#unit-tests)
- [Graduation Criteria](#graduation-criteria)
  - [Alpha](#alpha)
  - [Beta](#beta)
  - [Beta (enabled by default)](#beta-enabled-by-default)
    - [Stress Test](#stress-test)
    - [Recovery Test](#recovery-test)
    - [Retries with Backoff Logic](#retries-with-backoff-logic)
    - [Generic PLEG Continuous Validation](#generic-pleg-continuous-validation)
- [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
- [Version Skew Strategy](#version-skew-strategy)
- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)

@@ -272,6 +278,10 @@ func (c *cache) Set(id types.UID, status *PodStatus, err error, timestamp time.T

This has no impact on the existing `Generic PLEG` when used without `Evented PLEG`, because it is the only entity that sets the cache, and it does so every second (if needed) for a given pod.

### Compatibility Check

For this feature to work, the Kubelet needs to be used with a compatible CRI Runtime that is capable of generating CRI Events. If the Kubelet detects during startup that the CRI Runtime doesn't support generating and streaming CRI Events, it should automatically fall back to using `Generic PLEG`.
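A minimal sketch of this startup decision, assuming a hypothetical capability flag (the real Kubelet would probe the runtime's support for streaming CRI Events through the CRI API; the names below are illustrative, not actual Kubelet types):

```go
package main

import "fmt"

// RuntimeService is a stand-in for the CRI runtime client; SupportsCRIEvents
// represents the result of probing the runtime's event-streaming capability.
type RuntimeService struct {
	SupportsCRIEvents bool
}

// choosePLEG sketches the startup decision: use Evented PLEG only when the
// feature is enabled AND the runtime can generate and stream CRI Events;
// otherwise fall back to Generic PLEG.
func choosePLEG(rs RuntimeService, eventedPLEGEnabled bool) string {
	if eventedPLEGEnabled && rs.SupportsCRIEvents {
		return "EventedPLEG"
	}
	return "GenericPLEG"
}

func main() {
	// An incompatible runtime triggers the automatic fallback.
	fmt.Println(choosePLEG(RuntimeService{SupportsCRIEvents: false}, true))
	fmt.Println(choosePLEG(RuntimeService{SupportsCRIEvents: true}, true))
}
```

With this shape, a runtime upgrade that adds CRI Event support takes effect the next time the Kubelet starts and re-runs the check.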

### Test Plan

@@ -348,6 +358,45 @@ We expect no non-infra related flakes in the last month as a GA graduation crite

- Add E2E Node Conformance presubmit job in CI
- Add E2E Node Conformance periodic job in CI

#### Beta (enabled by default)

##### Stress Test

To test the performance and scalability of `Evented PLEG`, it is necessary to generate a large number of CRI Events by creating and deleting a significant number of containers within a short period of time. The following steps outline the stress test:

Since this is a disruptive stress test, it should be part of a node e2e `Serial` job. CRI Events are generated per container, so the test should create a substantial number of containers within a single pod. After creation, these containers should run to completion and then be removed by the Kubelet. This process ensures the generation of `CONTAINER_CREATED_EVENT`, `CONTAINER_STARTED_EVENT`, `CONTAINER_STOPPED_EVENT`, and `CONTAINER_DELETED_EVENT`.

The test should continue to create these containers until the histogram metric `evented_pleg_connection_latency_seconds` begins to show distinct latency values in its 1-second bucket. This indicates that it is taking 1 second or longer for an event to be observed by the Kubelet after being generated by the runtime. Typical values for this latency are around 0.001 seconds, so a latency of 1 second or more is a safe indication that the system is under stress.

Once the `evented_pleg_connection_latency_seconds` latency is observed to be greater than 1 second, new container creation is halted, and the remaining containers are run to completion. At this point, `kubelet_evented_pleg_connection_latency_seconds_count` can be used to determine the total number of CRI Events generated during the test.

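The halting condition can be sketched from the histogram itself: total observations minus those in the cumulative 1-second bucket gives the number of slow events. The sketch below uses a hard-coded sample scrape (in a real test the two series would come from the kubelet's `/metrics` endpoint; the bucket label shape follows standard Prometheus histogram conventions):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Sample scrape of the two metric lines used by the test; values are made up
// for illustration.
const sample = `kubelet_evented_pleg_connection_latency_seconds_bucket{le="1"} 9995
kubelet_evented_pleg_connection_latency_seconds_count 10000`

// slowEvents returns how many observed events took 1 second or longer:
// total observations minus those that landed in the cumulative 1s bucket.
func slowEvents(metrics string) int {
	var under1s, total int
	for _, line := range strings.Split(metrics, "\n") {
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		v, _ := strconv.Atoi(fields[1])
		switch {
		case strings.Contains(fields[0], `_bucket{le="1"}`):
			under1s = v
		case strings.HasSuffix(fields[0], "_count"):
			total = v
		}
	}
	return total - under1s
}

func main() {
	fmt.Println("events with >=1s latency:", slowEvents(sample)) // 5
}
```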
##### Recovery Test

To test the ability of the Kubelet to recover the latest state of a container after a restart, a disruption test should be included in the node e2e `Serial` job. The test should involve creating a container with a sufficiently long time to completion (e.g. `sleep 20`), and then immediately stopping the Kubelet once the container enters the `Running` state. The CRI Runtime will emit CRI Events indicating the change in container state, but the Kubelet will miss the `CONTAINER_STOPPED_EVENT` for that container.

To validate the Kubelet's ability to recover the latest state of the container, the test should query the CRI endpoint to confirm that the container has run to completion successfully. Once the Kubelet is started again, it should be able to query the CRI Runtime and update its cache with the latest state of the container. If the Kubelet accurately reports the state of the container as `Completed`, the test is considered passed.

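The recovery path this test exercises can be sketched as follows (types and names are illustrative stand-ins, not the Kubelet's actual cache or runtime client): after a restart the Kubelet cannot rely on events it missed, so it relists containers from the runtime and refreshes its cache with the observed state.

```go
package main

import "fmt"

// containerStatus is a minimal stand-in for a CRI container status.
type containerStatus struct {
	id    string
	state string // e.g. "Running", "Completed"
}

// runtime plays the role of the CRI endpoint being queried.
type runtime struct{ containers []containerStatus }

func (r runtime) listContainers() []containerStatus { return r.containers }

// cache maps container ID to its last known state.
type cache map[string]string

// relistOnStartup refreshes the cache from the runtime, compensating for any
// CRI Events (such as a CONTAINER_STOPPED_EVENT) missed while the Kubelet
// was down.
func relistOnStartup(r runtime, c cache) {
	for _, cs := range r.listContainers() {
		c[cs.id] = cs.state
	}
}

func main() {
	// The container finished while the Kubelet was stopped; the stale cache
	// still says "Running" until the startup relist corrects it.
	c := cache{"c1": "Running"}
	rt := runtime{containers: []containerStatus{{id: "c1", state: "Completed"}}}
	relistOnStartup(rt, c)
	fmt.Println(c["c1"]) // Completed
}
```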
##### Retries with Backoff Logic

Currently, the Kubelet attempts to reconnect five times before falling back to `Generic PLEG` when it encounters errors on the streaming connection with the CRI Runtime. However, if the CRI Runtime is taken down for maintenance, the Kubelet may exhaust all of its reconnection attempts and never try again, leaving it on `Generic PLEG` even though the CRI Runtime is compatible with `Evented PLEG`. To address this, reconnection should use backoff logic with an exponentially increasing delay and an upper limit; once the upper limit is reached, the Kubelet should keep retrying periodically at that interval. This way the Kubelet can reconnect to the CRI Runtime even after multiple failed attempts and utilize `Evented PLEG` whenever possible. For example:

```
Retry immediately
Retry after 1 second
Retry after 2 seconds
Retry after 4 seconds
Retry after 8 seconds
Retry after 16 seconds
Retry after 32 seconds
Retry after 64 seconds
Retry after every 60 seconds indefinitely
```

##### Generic PLEG Continuous Validation

Make sure the existing jobs in the following TestGrid tabs that use `Generic PLEG` continue to use it, by verifying that `Evented PLEG` is disabled for them:

- https://testgrid.k8s.io/sig-node-release-blocking
- https://testgrid.k8s.io/sig-node-kubelet
- https://testgrid.k8s.io/sig-node-containerd
- https://testgrid.k8s.io/sig-node-cri-o
- https://testgrid.k8s.io/sig-node-presubmits

### Upgrade / Downgrade Strategy

N/A