Increase timeout in hypothesis for test_apply.py #23849
Conversation
Hello @alimcmaster1! Thanks for submitting the PR.
Codecov Report
@@           Coverage Diff           @@
##           master   #23849   +/-   ##
=======================================
  Coverage   92.28%   92.28%
=======================================
  Files         161      161
  Lines       51500    51500
=======================================
  Hits        47528    47528
  Misses       3972     3972
Continue to review full report at Codecov.
I feel like it would be better if we could simplify this test to cover the original bug in question. I'm not convinced some of the hypothesis search parameters are entirely necessary, e.g. testing a single-dtype dataframe with 2 to 5 columns. This was a minimal example that would test the same bug: #22150 (comment)
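For context, the kind of hypothesis strategy being questioned looks roughly like this. This is a hypothetical sketch: the strategy details, column bounds, and test body are assumptions inferred from the comment above, not the actual test code.

```python
from hypothesis import given, strategies as st
from hypothesis.extra.pandas import columns, data_frames, range_indexes


@given(
    df=st.integers(min_value=2, max_value=5).flatmap(
        lambda n: data_frames(
            # single-dtype frame with n columns, as described above
            columns(n, dtype=float),
            index=range_indexes(min_size=1, max_size=10),
        )
    )
)
def test_apply_identity(df):
    # placeholder assertion: apply with an identity function should
    # round-trip the frame unchanged
    result = df.apply(lambda row: row, axis=1)
    assert result.equals(df)
```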
Thanks @mroeschke for the thoughts - I'll update with a few sensible test cases that should clearly highlight the original bug.
@mroeschke thoughts on something like this as a replacement test? I varied the number of columns, as it was reported in the issue that the bug was dependent on the number of columns.
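The proposed snippet itself is not preserved in this thread; a minimal sketch of what such a replacement might look like (the test name, column counts, and assertion are assumptions, not the code actually proposed):

```python
import numpy as np
import pandas as pd
import pytest


@pytest.mark.parametrize("num_cols", [2, 3, 5])
def test_apply_identity_varied_cols(num_cols):
    # the issue reported the bug depended on the number of columns, so a
    # few representative widths are checked explicitly rather than
    # searched with hypothesis
    df = pd.DataFrame(np.random.randn(4, num_cols))
    result = df.apply(lambda row: row, axis=1)
    pd.testing.assert_frame_equal(result, df)
```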
Sure, that test looks good. If we are only testing one …
I am not sure this will fix some of the recent timeout issues, but let's give it a try. @alimcmaster1, I would also love a follow-up based on @mroeschke's suggestion here.
git diff upstream/master -u -- "*.py" | flake8 --diff
In this build we see the following error:
<pandas.tests.frame.test_apply.TestDataFrameAggregate instance at 0x0000000036892A48>: Unreliable test timings! On an initial run, this test took 631.00ms, which exceeded the deadline of 500.00ms, but on a subsequent run it took 3.00 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.
The time difference is perhaps due to the given parameters for the test case. This test currently uses the default deadline, which is 500ms. Perhaps I could be more aggressive here and go for 800ms - thoughts?
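For reference, the deadline can be raised per test via hypothesis's settings decorator. A minimal sketch, where the 800ms value mirrors the proposal above and the test body is a placeholder:

```python
from datetime import timedelta

from hypothesis import given, settings, strategies as st


# the deadline defaults to 500ms; a timedelta raises it, and
# deadline=None disables the check entirely
@settings(deadline=timedelta(milliseconds=800))
@given(st.integers())
def test_placeholder(x):
    assert isinstance(x, int)
```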
The issue was introduced in this change.