
Commit 18c5078

Add feedback
1 parent 2eb2f45 commit 18c5078

File tree

5 files changed (+7 additions, -5 deletions)


tests/results/dp-perf/1.6.0/1.6.0-oss.md

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ GKE Cluster:
 ## Summary:
 
 - Performance stayed consistent with 1.5.0 results. Average latency slightly increased across all routing methods.
+- Errors that occurred are consistent with errors that occurred in the previous results.
 
 ## Test1: Running latte path based routing

tests/results/longevity/1.6.0/oss.md

Lines changed: 1 addition & 1 deletion
@@ -93,5 +93,5 @@ Reload time distribution - counts:
 ## Comparison with previous results
 
 Graphs look similar to 1.5.0 results. There is a color change swap in a few graphs which is a little confusing.
-NGINX container memory increased dramatically. NGINX Stub Status graph is confusing to interpret, which can make it seem
+NGINX container memory decreased dramatically. NGINX Stub Status graph is confusing to interpret, which can make it seem
 quite different to the 1.5.0 results, but it is similar, only with an increase in requests.

tests/results/longevity/1.6.0/plus.md

Lines changed: 3 additions & 3 deletions
@@ -116,8 +116,8 @@ Reload time distribution - counts:
 
 ## Comparison with previous results
 
-Graphs look similar to 1.4.0 results. CPU usage increased slightly. There was a noticeable error sometime two days in
-where memory usage dipped heavily and so did the NGINX plus status, which could a test error instead of product error.
+Graphs look similar to 1.5.0 results. CPU usage increased slightly. There was a noticeable error sometime two days in
+where memory usage dipped heavily and so did the NGINX plus status, which could be a test error instead of product error.
 There looked to be a reload event where past results didn't have one. NGINX errors differ from previous results errors but
-are consistent with errors seen in the 1.6.0 test suite. NGF error is something to keep an eye on. The NGINX errors did not coincide
+are consistent with errors seen in the 1.5.0 test suite. NGF error is something to keep an eye on. The NGINX errors did not coincide
 with the abnormalities on any of the graphs.

tests/results/scale/1.6.0/1.6.0-oss.md

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ GKE Cluster:
 ## Summary:
 
 - Performance improved. Average reload and event batch processing decreased across all test cases.
+- Errors that occurred are consistent with errors that occurred in the previous results.
 
 ## Test TestScale_Listeners

tests/results/scale/1.6.0/1.6.0-plus.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ GKE Cluster:
 
 ## Summary:
 
-- Performance is consistent with 1.6.0 results, except for a large increase in NGF and NGINX errors in the
+- Performance is consistent with 1.5.0 results, except for a large increase in NGF and NGINX errors in the
 Scale Listeners and Scale HTTPS Listeners test cases.
 - Errors in Scale Upstream Servers test case are expected and of small importance.
 - Errors in Scale Listeners test case are expected and of small importance.
