Better Benchmarking for openapi-fetch? #1818
First of all, my only real-life experience with benchmarking is on Scala.js (the Scala-to-JavaScript compiler; I'm one of its maintainers). So far, we have performed 3 types of benchmarks (I'll compare them below to what we probably want here):
We execute these benchmarks on demand, manually (as in: when we have a hunch a change might affect performance, not even on every release). We have no automated infrastructure to run them whatsoever. The first benchmark bullet is probably the one most similar to what we'd want here, and of course also the one I haven't been working on at all 🤷.

I think one core difference between the Scala.js and the (current) openapi-fetch benchmarks is that the Scala.js benchmarks are CPU bound (they do little to no I/O). The openapi-fetch benchmarks, as they are now, are likely heavily I/O bound. I have a feeling that for better low-level performance analysis, we might need to split the benchmarks:
I doubt it is feasible to build the second kind of benchmark for other frameworks. Maybe for the ones that also wrap `fetch`. HTH. Happy to discuss further.
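To make the "CPU-bound" idea concrete, here is a minimal sketch of what an isolated, low-level benchmark might look like, assuming Vitest's `bench` runner and openapi-fetch's `createClient` with a custom `fetch` option; the `paths` import, the `/users/{id}` endpoint, and the response body are hypothetical:

```ts
// bench/low-level.bench.ts (hypothetical file): measure only openapi-fetch's
// own work (URL building, param serialization, response handling) by stubbing
// out fetch so no real or mocked network I/O happens.
import { bench } from "vitest";
import createClient from "openapi-fetch";
import type { paths } from "./fixtures/api"; // hypothetical generated types

// Immediately-resolving fetch stub keeps the benchmark CPU bound.
const stubFetch: typeof fetch = async () =>
  new Response(JSON.stringify({ id: 1, name: "user" }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });

const client = createClient<paths>({
  baseUrl: "https://api.example.com",
  fetch: stubFetch,
});

bench("GET with path + query params", async () => {
  await client.GET("/users/{id}", {
    params: { path: { id: 123 }, query: { expand: "profile" } },
  });
});
```

Because the stubbed `fetch` resolves immediately, differences between runs should mostly reflect changes in openapi-fetch's own code rather than I/O noise.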
Love all these thoughts. I don’t disagree with any of it!
That is a very good observation that I hadn't considered. You're right, regardless.
Agree completely. Also agree on not bothering with monitoring low-level benchmarks for other libraries, while still having some (manually-run) way to check whether a PR significantly impacts JIT performance. I like benchmarking other libraries for 2 purposes:
That's a very good point. I never thought about it that way. But indeed, that can give you an "achievable baseline".
Just removed MSW from the benchmarks, and that indeed seemed to contribute heavily to the fluctuations. I'm sure there will still be a little flaking, as is normal, but it seems better than it was. But without MSW we'll have to mock the internals of some of the external libraries more carefully, because MSW was doing a good job of automatically shimming fetch. So we'll need to mock axios, superagent, and openapi-typescript-codegen's internal fetchers without accidentally mocking internal runtime code and giving them a fake "boost" (axios especially, since you can give it a custom fetcher, but I want to avoid doing that because that's an entirely different internal codepath, one that isn't common; I'd like to mock in a way that all libraries are called with their defaults, if at all possible).
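For reference, a bare `fetch` stub in place of MSW might look something like the sketch below (the file name and response shape are made up). Note that this only intercepts libraries that call the global `fetch` by default; axios and superagent use their own HTTP machinery out of the box, so they would need separate interception to stay on their default code paths:

```ts
// bench/fetch-stub.ts (hypothetical file): replace globalThis.fetch with an
// in-memory response so fetch-based clients run their default code paths
// without MSW's interception overhead.
const realFetch = globalThis.fetch;

export function installFetchStub(): void {
  globalThis.fetch = async () =>
    new Response(JSON.stringify({ id: 1, name: "user" }), {
      status: 200,
      headers: { "Content-Type": "application/json" },
    });
}

export function restoreFetch(): void {
  globalThis.fetch = realFetch;
}
```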
Yes, that's the downside, unfortunately :(
@drwpow, I'm taking the liberty of moving your comment to an issue so we don't lose the discussion.
Originally posted by @drwpow in #1810 (comment)