Add performance regression test #1152

Open
ernado opened this issue May 18, 2020 · 4 comments
Assignees: ldez
Labels: area: benchmark, enhancement (New feature or improvement)

Comments

ernado (Member) commented May 18, 2020

We should check that a merge request does not significantly regress linter performance, e.g.:

  • CPU usage
  • Memory usage
  • Overall time spent
  • File system usage (?)

Currently I'm using the fully code-generated kubernetes repo for manual regression tests, something like this:

ernado@nexus:/src/kubernetes$ golangci-lint cache clean
ernado@nexus:/src/kubernetes$ /usr/bin/time --verbose golangci-lint run --timeout 10m --verbose
INFO [config_reader] Config search paths: [./ /src/kubernetes /src /] 
INFO [lintersdb] Active 10 linters: [deadcode errcheck gosimple govet ineffassign staticcheck structcheck typecheck unused varcheck] 
INFO [loader] Go packages loading at mode 575 (exports_file|imports|deps|files|name|types_sizes|compiled_files) took 6.517475408s 
INFO [runner/filename_unadjuster] Pre-built 0 adjustments in 263.946976ms
# ...

I want to automate this in some way. Probably with some benchmarks?
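As a rough sketch, the manual procedure above could be wrapped in a regular Go benchmark. This assumes a golangci-lint binary on PATH and takes the target repository from a hypothetical BENCH_REPO environment variable:

package bench

import (
	"errors"
	"os"
	"os/exec"
	"testing"
)

// BenchmarkLintCold measures a full cold run: the cache is cleaned before
// each iteration, so every run pays the package-loading cost again.
func BenchmarkLintCold(b *testing.B) {
	repo := os.Getenv("BENCH_REPO") // hypothetical: path to the repo to lint
	if repo == "" {
		b.Skip("BENCH_REPO not set")
	}
	for i := 0; i < b.N; i++ {
		clean := exec.Command("golangci-lint", "cache", "clean")
		clean.Dir = repo
		if err := clean.Run(); err != nil {
			b.Fatal(err)
		}
		run := exec.Command("golangci-lint", "run", "--timeout", "10m")
		run.Dir = repo
		// golangci-lint exits non-zero when it finds issues; only treat
		// failures to start or crashes as benchmark errors.
		if err := run.Run(); err != nil {
			var exitErr *exec.ExitError
			if !errors.As(err, &exitErr) {
				b.Fatal(err)
			}
		}
	}
}

Something like go test -bench=LintCold -benchtime=5x would then give a comparable wall-clock number per run.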

ernado added the enhancement and area: tests labels on May 18, 2020
jirfag (Contributor) commented May 18, 2020

Hi, it looks like we can run golangci-lint on this repo itself with all linters enabled and parse the time and peak memory from the logs, then check that those values stay within a specified range for our CI machines.

And do the same nightly with some large repo like k8s.
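A minimal sketch of that CI gate, assuming Linux and thresholds that would need tuning per CI machine (the budgets below are made up). Instead of scraping log output, it reads wall time and peak RSS straight from the child's process state:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("golangci-lint", "run", "--timeout", "10m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	err := cmd.Run()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	elapsed := time.Since(start)

	// On Linux, Maxrss is the child's peak resident set size in kilobytes.
	rusage := cmd.ProcessState.SysUsage().(*syscall.Rusage)
	peakMB := rusage.Maxrss / 1024

	const maxDuration = 5 * time.Minute // assumed time budget
	const maxPeakMB = 2048              // assumed memory budget
	fmt.Printf("duration=%s peakRSS=%dMB\n", elapsed, peakMB)
	if elapsed > maxDuration || peakMB > maxPeakMB {
		fmt.Fprintln(os.Stderr, "performance regression: over budget")
		os.Exit(1)
	}
}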

ernado (Member, Author) commented May 18, 2020

So, we can split the benchmarks into two categories:

  1. Microbenchmarks based on the Go benchmark framework (testing.B)
  2. End-to-end benchmarks

I think both should be run twice (once with and once without the PR changes) to cancel out noise.
Category (2) is easier to interpret, e.g. "linting duration increased by 10s" or "peak memory consumption decreased by 50 MB", but it can be heavy and noisy.

I'm less sure about (1), but such benchmarks are integrated into the Go tooling and have a much faster feedback cycle.
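For example, the package-loading step visible in the log above looks like a natural target for (1). A minimal sketch using golang.org/x/tools/go/packages (the load mode here is illustrative, not necessarily the exact mode golangci-lint uses):

package bench

import (
	"testing"

	"golang.org/x/tools/go/packages"
)

// BenchmarkPackagesLoad tracks roughly the "[loader] Go packages loading"
// step from the verbose log.
func BenchmarkPackagesLoad(b *testing.B) {
	b.ReportAllocs()
	cfg := &packages.Config{
		Mode: packages.NeedName | packages.NeedFiles |
			packages.NeedImports | packages.NeedDeps | packages.NeedTypesSizes,
	}
	for i := 0; i < b.N; i++ {
		if _, err := packages.Load(cfg, "./..."); err != nil {
			b.Fatal(err)
		}
	}
}

Running it before and after a change and feeding both outputs to benchstat (golang.org/x/perf/cmd/benchstat) would give the with/without-PR comparison, including significance testing.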

We can also automate (2) on a big open-source repo, something like a make bench target that builds and compares HEAD against the latest release, displaying the formatted changes.

ldez added the area: benchmark label and removed the area: tests label on May 28, 2024
ldez self-assigned this on Jun 22, 2024