Speedup our eight slowest pytests (one at a time please) #9718
I pointed out in #8785 that …
I checked the maths/prime_numbers.py file. It contains three methods for finding prime numbers using different techniques. The simplest, slow_prime (which loops from 2 to n and prints prime numbers), takes a lot of time. Since the other two methods are faster with only minor modifications, do we still need the slow_prime method in there?
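To make the speed difference concrete, here is a minimal sketch of the two trial-division styles being compared. The function names `slow_primes` and `faster_primes` are illustrative, not the repository's exact code: the slow version tests every divisor from 2 to n - 1, while the faster one stops at the square root of n.

```python
def slow_primes(max_n: int) -> list[int]:
    """Trial division checking every divisor from 2 to n - 1 (O(n) per number)."""
    primes = []
    for n in range(2, max_n + 1):
        if all(n % d != 0 for d in range(2, n)):
            primes.append(n)
    return primes


def faster_primes(max_n: int) -> list[int]:
    """Trial division checking only divisors up to sqrt(n) (O(sqrt(n)) per number)."""
    primes = []
    for n in range(2, max_n + 1):
        if all(n % d != 0 for d in range(2, int(n**0.5) + 1)):
            primes.append(n)
    return primes
```

Both return the same primes; the difference is only how much work each candidate costs, which is exactly why the slow variant dominates the doctest runtime at 10_000.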
Can we lower the 10_000 to 1_000 in lines 20 to 21 of 87494f1?
If slow_prime() is tested on small test cases, then the other two methods will also be tested on small test cases, so the time taken can still be compared among them.
If I reduce the test cases, do I need to raise a PR to check whether they finish faster, or is there another way to verify that the issue is resolved before submitting a PR?
When things are run on CI platforms like GitHub Actions, an environment variable is defined. Can you use https://docs.python.org/3/library/os.html#os.getenv to see if it is set, and use the smaller input only in that case?
Okay, thanks. Let me try.
But if it is not defined, setting the test case to 10_000 will take the same amount of time as before, won't it?
Yes, but we do not care because it no longer slows down our build process.
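The CI-detection idea discussed above can be sketched as follows. GitHub Actions sets the `CI` environment variable in its runners; the helper name `doctest_limit` and the specific limits are illustrative, not the repository's code.

```python
import os


def doctest_limit(default: int = 10_000, ci_limit: int = 1_000) -> int:
    """Return a smaller doctest input when running on a CI platform.

    GitHub Actions sets the CI environment variable, so doctests can use
    a cheaper input there while keeping the full workload for local runs.
    """
    return ci_limit if os.getenv("CI") else default
```

This keeps local behavior unchanged while capping the cost on the build machines, which is the trade-off described above.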
I sped it up in #9851.
I was reviewing the backtracking/power_sum.py code, and I noticed that the main issue causing slow execution is the large value of the 'needed_sum' parameter. To improve performance, is it correct to reduce the 'needed_sum' value?
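For context, the power-sum problem counts the ways to write `needed_sum` as a sum of unique natural numbers each raised to a fixed power, so the backtracking tree grows quickly with `needed_sum`. Below is a minimal sketch of that backtracking idea (the function name and structure are illustrative; the repository's backtracking/power_sum.py may differ in details).

```python
def count_power_sums(needed_sum: int, power: int, current: int = 1) -> int:
    """Count ways to write needed_sum as a sum of unique naturals
    raised to `power`, trying candidates current, current + 1, ...
    """
    term = current**power
    if term > needed_sum:
        # Every later candidate is even larger, so prune this branch.
        return 0
    # Branch 1: include current**power in the sum.
    if term == needed_sum:
        with_current = 1
    else:
        with_current = count_power_sums(needed_sum - term, power, current + 1)
    # Branch 2: skip current and try the next candidate.
    without_current = count_power_sums(needed_sum, power, current + 1)
    return with_current + without_current
```

Because the recursion explores both branches for every candidate below the pruning bound, shrinking `needed_sum` in the doctest shrinks the search tree dramatically while still exercising the same code paths.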
@duongoku Would you be willing to advise @Muhammadummerr on the best way to speed up the slow doctests in backtracking/power_sum.py ? |
I would love to hear from @duongoku. |
@cclauss I tried backtracking/word_search.py and measured the execution time of each option, calling each option 10 times. The longest time, as expected, is for 'AAAAAAAAAAAAABB'; it is a gigantic number of times longer than the others. Output:
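The timing measurement described above can be reproduced with a small helper built on the standard library's `timeit` (the helper name `time_call` is illustrative, not from the repository).

```python
from timeit import timeit


def time_call(func, *args, repeats: int = 10) -> float:
    """Return the total wall-clock seconds for `repeats` calls of func(*args)."""
    return timeit(lambda: func(*args), number=repeats)
```

Calling this once per doctest input makes it easy to see which example dominates the runtime and is therefore worth shrinking.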
@cclauss backtracking/word_search.py: can I replace the large example ('AAAAAAAAAAAAABB') with a small one ('ABB')?
Made a pull request #10161. Made a pull request #10188, word_search: replacing the example in the doctest.
Check the PR #9978. |
@cclauss Every algorithm on your list has been sped up (except for …).
@CaioCordeiro Would you be willing to look at |
Feature description
At the end of our GitHub Actions build jobs, there is a list of the slowest tests. Are there ways to speed up these tests without reducing our functionality or our code coverage?
Please only fix one algorithm per pull request.
================= 1506 passed, 25 warnings in 71.23s (0:01:11) =================
Also, those 25 pytest warnings are worth fixing!!!