Commit 2eb2f45

Add longevity test results
1 parent 8be03e1 commit 2eb2f45

14 files changed: +220 -0 lines changed

tests/results/longevity/1.6.0/oss.md

Lines changed: 97 additions & 0 deletions
@@ -0,0 +1,97 @@
# Results

## Test environment

NGINX Plus: false

NGINX Gateway Fabric:

- Commit: 8be03e1fc5161a2b1bc0962fb0d8732114a9093d
- Date: 2025-01-14T18:57:38Z
- Dirty: true

GKE Cluster:

- Node count: 3
- k8s version: v1.30.6-gke.1596000
- vCPUs per node: 2
- RAM per node: 4018128Ki
- Max pods per node: 110
- Zone: us-central1-c
- Instance Type: e2-medium

## Traffic

HTTP:

```text
Running 5760m test @ http://cafe.example.com/coffee
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   189.49ms  147.10ms   2.00s    78.44%
    Req/Sec   293.54    193.84     1.95k    66.59%
  198532845 requests in 5760.00m, 67.91GB read
  Socket errors: connect 0, read 309899, write 63, timeout 2396
Requests/sec:    574.46
Transfer/sec:    206.05KB
```

HTTPS:

```text
Running 5760m test @ https://cafe.example.com/tea
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   179.59ms  121.50ms   1.99s    67.56%
    Req/Sec   292.54    193.88     2.39k    66.47%
  197890521 requests in 5760.00m, 66.57GB read
  Socket errors: connect 176, read 303560, write 0, timeout 7
Requests/sec:    572.60
Transfer/sec:    201.98KB
```
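
For reference, output in this shape comes from the wrk load generator. The commands below are a sketch reconstructed from the output above, not the test suite's exact invocation: the URLs, duration (5760m, i.e. 4 days), thread count, and connection count are taken from the output, and everything else is left at wrk's defaults.

```text
# Sketch of wrk invocations matching the report above (flags inferred from the output, not from the test code).
wrk -t 2 -c 100 -d 5760m http://cafe.example.com/coffee   # HTTP run
wrk -t 2 -c 100 -d 5760m https://cafe.example.com/tea     # HTTPS run
```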

### Logs

No error logs in nginx-gateway.

No error logs in nginx.

### Key Metrics

#### Containers memory

![oss-memory.png](oss-memory.png)

#### NGF Container Memory

![oss-ngf-memory.png](oss-ngf-memory.png)

### Containers CPU

![oss-cpu.png](oss-cpu.png)

### NGINX metrics

![oss-stub-status.png](oss-stub-status.png)

### Reloads

Rate of reloads - successful and errors:

![oss-reloads.png](oss-reloads.png)

Reload spikes correspond to 1 hour periods of backend re-rollouts.

No reloads finished with an error.
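
The backend re-rollouts mentioned above churn the coffee and tea backend Pods so that NGF must keep reconfiguring NGINX for the full test duration. The exact mechanism is part of the longevity test suite and is not shown in this report; conceptually it amounts to something like the following periodic restart (the Deployment names are illustrative, taken from the coffee/tea upstream names):

```text
# Illustrative only: restart the backend Deployments so their Pod IPs change,
# forcing NGF to update the upstreams (a configuration reload for OSS NGINX).
kubectl rollout restart deployment/coffee deployment/tea
```

With OSS NGINX every such endpoint change requires a configuration reload, which is why the reload spikes line up with the re-rollout windows.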

Reload time distribution - counts:

![oss-reload-time.png](oss-reload-time.png)

## Comparison with previous results

Graphs look similar to the 1.5.0 results. A few graphs swap series colors relative to the 1.5.0 run, which makes them slightly harder to compare.
NGINX container memory increased dramatically. The NGINX Stub Status graph is harder to interpret, which can make it seem
quite different from the 1.5.0 results, but it is similar, only with an increase in requests.

tests/results/longevity/1.6.0/plus.md

Lines changed: 123 additions & 0 deletions
@@ -0,0 +1,123 @@
# Results

## Test environment

NGINX Plus: true

NGINX Gateway Fabric:

- Commit: 8be03e1fc5161a2b1bc0962fb0d8732114a9093d
- Date: 2025-01-14T18:57:38Z
- Dirty: true

GKE Cluster:

- Node count: 3
- k8s version: v1.30.6-gke.1596000
- vCPUs per node: 2
- RAM per node: 4018128Ki
- Max pods per node: 110
- Zone: us-central1-c
- Instance Type: e2-medium

## Traffic

HTTP:

```text
Running 5760m test @ http://cafe.example.com/coffee
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   178.76ms  115.93ms   1.54s    65.67%
    Req/Sec   298.56    193.44     2.46k    65.81%
  202236770 requests in 5760.00m, 69.39GB read
  Socket errors: connect 0, read 68, write 118, timeout 4
  Non-2xx or 3xx responses: 22514
Requests/sec:    585.18
Transfer/sec:    210.54KB
```

HTTPS:

```text
Running 5760m test @ https://cafe.example.com/tea
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   178.97ms  115.95ms   1.45s    65.64%
    Req/Sec   297.98    193.03     1.82k    65.83%
  201870214 requests in 5760.00m, 68.09GB read
  Socket errors: connect 95, read 57, write 0, timeout 0
  Non-2xx or 3xx responses: 6
Requests/sec:    584.12
Transfer/sec:    206.60KB
```

### Logs

### nginx-gateway

```text
error=pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: Failed to watch *v1alpha1.ClientSettingsPolicy: clientsettingspolicies.gateway.nginx.org is forbidden: User "system:serviceaccount:nginx-gateway:ngf-longevity-nginx-gateway-fabric" cannot watch resource "clientsettingspolicies" in API group "gateway.nginx.org" at the cluster scope;level=error;logger=UnhandledError;msg=Unhandled Error;stacktrace=k8s.io/client-go/tools/cache.DefaultWatchErrorHandler
pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:166
k8s.io/client-go/tools/cache.(*Reflector).Run.func1
pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:316
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:226
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:227
k8s.io/client-go/tools/cache.(*Reflector).Run
pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:314
k8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2
pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:55
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1
pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:72;ts=2025-01-14T20:45:36Z
```
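
The entry above is an RBAC denial: the controller's ServiceAccount was not allowed to watch ClientSettingsPolicy resources at cluster scope. A quick way to confirm the missing permission, using the ServiceAccount and resource names taken from the log (the command itself is illustrative and not part of the test suite):

```text
# Prints "yes" or "no" depending on whether the ServiceAccount may watch the resource cluster-wide.
kubectl auth can-i watch clientsettingspolicies.gateway.nginx.org \
  --as=system:serviceaccount:nginx-gateway:ngf-longevity-nginx-gateway-fabric --all-namespaces
```

If this returns no, the likely fix is granting watch on clientsettingspolicies in the controller's ClusterRole.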

### nginx

```text
2025/01/14 06:29:09 [error] 216#216: *345664926 no live upstreams while connecting to upstream, client: 10.128.0.34, server: cafe.example.com, request: "GET /coffee HTTP/1.1", upstream: "http://longevity_coffee_80/coffee", host: "cafe.example.com"

10.128.0.34 - - [14/Jan/2025:06:29:09 +0000] "GET /coffee HTTP/1.1" 502 150 "-" "-"
2025/01/14 06:29:09 [error] 216#216: *345664926 no live upstreams while connecting to upstream, client: 10.128.0.34, server: cafe.example.com, request: "GET /coffee HTTP/1.1", upstream: "http://longevity_coffee_80/coffee", host: "cafe.example.com"
```
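
Responses like the 502 above are what wrk counts as "Non-2xx or 3xx responses". To gauge how widespread the errors were, the count can be pulled from the data-plane container's logs; a sketch, assuming the NGINX container is the nginx sidecar of the ngf-longevity-nginx-gateway-fabric Deployment (names inferred from the ServiceAccount in the log above, not confirmed by this report):

```text
# Count "no live upstreams" errors emitted by the NGINX container during the run.
kubectl logs -n nginx-gateway deploy/ngf-longevity-nginx-gateway-fabric -c nginx | grep -c "no live upstreams"
```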

### Key Metrics

#### Containers memory

![plus-memory.png](plus-memory.png)

#### NGF Container Memory

![plus-ngf-memory.png](plus-ngf-memory.png)

### Containers CPU

![plus-cpu.png](plus-cpu.png)

### NGINX Plus metrics

![plus-status.png](plus-status.png)

### Reloads

Rate of reloads - successful and errors:

![plus-reloads.png](plus-reloads.png)

Note: compared to OSS NGINX, we don't have as many reloads here, because NGF uses the NGINX Plus API to reconfigure NGINX
for endpoint changes.
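
For readers unfamiliar with that mechanism: NGINX Plus exposes a REST API through which upstream servers can be added and removed at runtime, so endpoint churn does not require a configuration reload. A sketch of inspecting one upstream through that API (the upstream name comes from the nginx logs above; the local port and API version are assumptions, not taken from the test configuration):

```text
# List the servers currently configured for the coffee upstream via the NGINX Plus API.
curl -s http://127.0.0.1:8080/api/9/http/upstreams/longevity_coffee_80/servers
```

NGF drives this API when Service endpoints change, which is why the Plus run shows far fewer reloads than the OSS run.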

Reload time distribution - counts:

![plus-reload-time.png](plus-reload-time.png)

## Comparison with previous results

Graphs look similar to the 1.4.0 results. CPU usage increased slightly. There was a noticeable anomaly roughly two days in,
where memory usage dipped heavily and so did the NGINX Plus status metrics; this could be a test error rather than a product error.
There also appeared to be a reload event that past results did not have. The NGINX errors differ from the errors in previous results but
are consistent with errors seen elsewhere in the 1.6.0 test suite. The NGF error is something to keep an eye on. The NGINX errors did not coincide
with the abnormalities on any of the graphs.
