create a list of sample http queries to verify API server correctness. these queries should touch every endpoint, and do so through all available routes. the queries should also use many different parameters and options applicable to each endpoint, ideally covering them with an exhaustive set of such combinations. the queries can be generated by hand or extracted (and maybe modified) from production web server logs. the queries should use relatively old dates as their parameters so that we can be sure the returned results don't change and remain "known good".
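as a rough illustration, the query list could be as simple as a checked-in python module (or plain text file) of relative URLs. the endpoints, routes, and parameters below are placeholders rather than our real API surface; the point is one endpoint reached via multiple routes, several option combinations, and fixed old dates:

```python
# placeholder endpoints, routes, and parameters -- not the real API surface.
# dates are deliberately old so the results stay "known good".
SAMPLE_QUERIES = [
    # one endpoint reached through two different routes
    "/api/v1/records?start_date=2019-01-01&end_date=2019-03-31",
    "/api/v1/records/2019-01-01/2019-03-31",
    # the same endpoint with different optional parameters / combinations
    "/api/v1/records?start_date=2019-01-01&end_date=2019-03-31&format=csv",
    "/api/v1/records?start_date=2019-01-01&end_date=2019-03-31&limit=50&offset=100",
    # a second endpoint with a couple of option combinations
    "/api/v1/summary?year=2018",
    "/api/v1/summary?year=2018&group_by=month",
]
```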
there are a few different ways to evaluate the responses from these queries, with increasing complexity of implementation (but also increasing value of correctness assurance):
1. make sure each response returns a successful http status code of 200 (and not 500 or similar).
2. make sure each response returns a known/correct number of result items.
3. make sure each response returns the exact known/correct result data.
(getting the known/correct result data or counts is as simple as running the queries on an already-working system. a rough sketch covering all three levels follows below.)
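here is what the three levels of checking could look like, assuming a JSON API with a top-level "results" list; the base URL, query, and expected count are placeholders, and the expected values would be captured from a known-good deployment:

```python
import requests

BASE_URL = "http://localhost:8000"  # placeholder: wherever the server under test runs

# expected values captured by running the same queries against an
# already-working deployment and saving the output ("known good")
EXPECTED = {
    "/api/v1/records?start_date=2019-01-01&end_date=2019-03-31": {
        "count": 1423,       # placeholder count
        # "body": {...},     # full saved payload, for the strictest level
    },
}

def verify(query, expected):
    resp = requests.get(BASE_URL + query, timeout=30)

    # level 1: a successful http status code
    assert resp.status_code == 200, f"{query} -> {resp.status_code}"

    payload = resp.json()

    # level 2: the known/correct number of result items
    # (assumes the response is JSON with a top-level "results" list)
    assert len(payload["results"]) == expected["count"]

    # level 3: the exact known/correct result data
    if "body" in expected:
        assert payload == expected["body"]

for query, expected in EXPECTED.items():
    verify(query, expected)
```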
similar to what was done in #1086, this should be connected to a production-like db backend so that we have "real" data to pull from. also like that referenced PR, this could be done with locust, but that may limit us to just verifying http return codes and not allow deeper inspection like counting rows returned or matching content of responses.
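for reference, a minimal locust sketch for the status-code level of checking might look like the following (query paths are again placeholders; locust marks any 4xx/5xx response as a failure by default):

```python
from locust import HttpUser, task, between

# placeholder query paths; in practice this would be the full sample-query list
SAMPLE_QUERIES = [
    "/api/v1/records?start_date=2019-01-01&end_date=2019-03-31",
    "/api/v1/summary?year=2018",
]

class ApiVerificationUser(HttpUser):
    wait_time = between(0.1, 0.5)

    @task
    def run_sample_queries(self):
        for query in SAMPLE_QUERIES:
            # locust marks any 4xx/5xx response as a failure by default,
            # which covers the status-code level of checking
            self.client.get(query)
```

this would be pointed at the target server with something like `locust -f sample_queries.py --host http://localhost:8000`.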
this issue is closely related to #1068, but this doesn't necessarily need to be a proper part of our python suite of integration tests.
this would have saved us the trouble we saw with the rollout of 0.4.8, which we hastily fixed in #1121.