TST: add hypothesis-based tests #20590

Closed · sushobhit27 wants to merge 15 commits

Conversation

@sushobhit27 commented Apr 3, 2018

Addition of "hypothesis usage" in the test cases of
tests/reshape/test_util.py as a kind of POC.

pep8speaks commented Apr 3, 2018

Hello @sushobhit27! Thanks for updating the PR.

Line 12:32: E261 at least two spaces before inline comment

Comment last updated on May 22, 2018 at 04:09 UTC.

@TomAugspurger (Contributor)

> Could you please check how it (the hypothesis module) can be made available, so that the tests can be run.

You'll need to add hypothesis to one (or all) of the ci/*.run files. Also add it to ci/environment-dev.yml.

@sushobhit27 (Author)

@TomAugspurger Now all tests have passed. Can you please check what else is required at my end?

@jreback (Contributor) left a comment

Will have to look. This is adding lots of code in a very specific file; we need a more general way to do this.

jreback commented Apr 3, 2018

Tests are failing in this.

@TomAugspurger (Contributor) left a comment

Most contributors -- and maintainers :) -- will be unfamiliar with hypothesis. Could you add a section to the contributing documentation, along with links to the hypothesis docs? Mainly I'd be interested in hearing about what tests make sense for hypothesis, and perhaps a small example using it.


@st.composite
def get_seq(draw, types, mixed=False, min_size=None, max_size=None, transform_func=None):
"""helper function to generate strategy for creating lists. parameters define the nature of to be generated list.

Review comment (Contributor):
These lines are too long (check the log output or run flake8 locally)

def test_datetimeindex(self):
    # regression test for GitHub issue #6439
    # make sure that the ordering on datetimeindex is consistent
    x = date_range('2000-01-01', periods=2)
    d = st.dates(min_value=date(1900, 1, 1), max_value=date(2100, 1, 1)).example()

Review comment (Contributor):
Why generate them here instead of using @given?

Review comment (Author):
No specific reason; I copy-pasted it after testing on the console, as the example() function can easily be used to check the generated value :) And yes, it can be moved to the @given decorator.
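
For illustration, moving the generation into @given might look like the following minimal sketch (the test body and assertion are illustrative assumptions, not the PR's actual test):

from datetime import date

from hypothesis import given, strategies as st

from pandas import date_range


@given(st.dates(min_value=date(1900, 1, 1), max_value=date(2100, 1, 1)))
def test_datetimeindex(d):
    # hypothesis supplies d on each run, replacing the one-off .example() call
    idx = date_range(d, periods=2)
    assert idx.is_monotonic_increasing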

TomAugspurger commented Apr 3, 2018 via email

sushobhit27 commented Apr 3, 2018

@TomAugspurger,
The main purpose of using hypothesis, or property-based testing in general, is to write test cases based on the property/specification of the functionality being tested and let the framework generate all the different, random test cases for it.
This will initially make test cases fail, but in doing so it helps us find edge cases and keep refining the boundaries of our tests' input data. E.g. in the example below, the test initially failed when x was a list with an empty string, which helped me figure out that this is a case for the test_empty function (due to which I added the assume(len(x) != 0) statement).

def test_simple(self):
    x, y = list('ABC'), [1, 22]
    result1, result2 = cartesian_product([x, y])
    expected1 = np.array(['A', 'A', 'B', 'B', 'C', 'C'])
    expected2 = np.array([1, 22, 1, 22, 1, 22])
    tm.assert_numpy_array_equal(result1, expected1)
    tm.assert_numpy_array_equal(result2, expected2)

@settings(max_examples=NO_OF_EXAMPLES_PER_TEST_CASE)
@given(get_seq((str,), False, 1, 1),
       get_seq((int,), False, 1, 2))
def test_simple(self, x, y):
    x = list(x[0])
    # the empty case is handled in test_empty, therefore ignore it here
    assume(len(x) != 0)
    result1, result2 = cartesian_product([x, y])
    expected1 = np.array([item1 for item1 in x for item2 in y])
    expected2 = np.array([item2 for item1 in x for item2 in y])

    tm.assert_numpy_array_equal(result1, expected1)
    tm.assert_numpy_array_equal(result2, expected2)

In my personal experience, I have found it most helpful in reducing huge parametrized test cases where many examples overlap or can be covered by a single hypothesis statement.

In layman's terms, take another example: testing Python's sum function using hypothesis with just a single assertion.
from hypothesis import strategies as st
from hypothesis import given

@given(st.lists(st.integers()))
def test_sum(seq):
    total = 0
    for item in seq:
        total += item
    assert sum(seq) == total

Whereas, IMO, with example-based testing at least 3-4 examples would be required in a parametrize statement.

Below are some links on property-based testing usage:
https://hypothesis.readthedocs.io/en/latest/quickstart.html
http://blog.jessitron.com/2013/04/property-based-testing-what-is-it.html
https://hypothesis.works/articles/what-is-property-based-testing/

TomAugspurger commented Apr 3, 2018 via email

@sushobhit27 (Author)

@TomAugspurger sure, I will do the same in a day or two, in addition to incorporating the other review comments.

Addition of "hypothesis usage" in test cases of tests/reshape/test_util.py as kind of POC.

Incorporate review comments.
Resolve flake8 warning.

codecov bot commented Apr 4, 2018

Codecov Report

Merging #20590 into master will decrease coverage by 0.04%.
The diff coverage is 17.24%.

@@            Coverage Diff             @@
##           master   #20590      +/-   ##
==========================================
- Coverage   91.84%   91.79%   -0.05%     
==========================================
  Files         153      154       +1     
  Lines       49502    49534      +32     
==========================================
+ Hits        45463    45471       +8     
- Misses       4039     4063      +24
Flag       Coverage Δ
#multiple  90.19% <17.24%> (-0.05%) ⬇️
#single    41.86% <17.24%> (-0.02%) ⬇️

Impacted Files                 Coverage Δ
pandas/util/_hypothesis.py     17.24% <17.24%> (ø)
pandas/core/indexes/multi.py   94.99% <0%> (-0.09%) ⬇️
pandas/core/frame.py           97.22% <0%> (-0.01%) ⬇️
pandas/core/series.py          94.12% <0%> (ø) ⬆️
pandas/core/indexes/base.py    96.68% <0%> (+0.04%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 81358e8...fa5bd75.

Addition of "hypothesis usage" in test cases of tests/reshape/test_util.py as kind of POC.

Incorporate review comments.
Resolve flake8 warning.
Add section for hypothesis in contributing.rst

@sushobhit27 (Author)

@TomAugspurger Now all tests are passing and I have refactored the code per the review comments. I have also added a section on hypothesis in contributing.rst.
@jreback can you please elaborate on what kind of changes you want?

@jorisvandenbossche changed the title from "TST, fix for issue #17978." to "TST: add hypothesis-based tests" on Apr 5, 2018
@jorisvandenbossche added the "Testing" label (pandas testing functions or related to the test suite) on Apr 5, 2018
@@ -10,3 +10,4 @@ matplotlib=1.4.3
sqlalchemy=0.8.1
lxml
scipy
hypothesis>=3.46.0

Review comment (Contributor):
these are user requirements, not testing requirements. So remove it from EACH of these except for environment-dev, instead adding it to ci/install_travis, ci/install_circle and appveyor.yaml (search for moto and put it next to that)

Review comment (Author):
Exactly in which file should I add the requirement? Earlier, CI tests failed in the absence of the hypothesis package requirement in the *.run files.

Review comment (Contributor):
I put them above

ci/install_travis, ci/install_circle and appveyor.yaml

@@ -775,6 +775,78 @@ Tests that we have ``parametrized`` are now accessible via the test name, for ex
test_cool_feature.py::test_dtypes[int8] PASSED
test_cool_feature.py::test_series[int8] PASSED

Transitioning to ``hypothesis``

Review comment (Contributor):
There is no transition, it's just using hypothesis.



output of test cases:

Review comment (Contributor):
make this a bit more succinct. hypothesis is not going to replace much of our parametrized tests; rather, in some cases it will simply add more coverage. So downscale this.

Review comment (Author):
Does that mean changing only the content, or also the example?

Review comment (Contributor):
the content; make this whole section shorter

@@ -4,31 +4,147 @@
import pandas.util.testing as tm
from pandas.core.reshape.util import cartesian_product

from hypothesis import strategies as st

Review comment (Contributor):
any hypothesis-related things that we want to import, please put in a separate file, pandas/util/_hypothesis.py, and import from there (these are the generic things, similar to what we do in conftest.py).
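
As a minimal sketch, such a shared module might simply centralize the common hypothesis imports; the contents below are an assumption for illustration, not the PR's actual file:

# pandas/util/_hypothesis.py (sketch)
"""Shared hypothesis imports and generic strategies for the pandas test suite."""
from hypothesis import assume, given, settings  # noqa: F401
from hypothesis import strategies as st  # noqa: F401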

Review comment (Author):
Agree.

Moved generic things to pandas/util/_hypothesis.py.
Not sure exactly what was required to change, but still tried to change the content as per the review comments.
test_empty was failing due to a "hypothesis.errors.FailedHealthCheck" error on Travis only, therefore decreased the size of the lists.

@sushobhit27 (Author)

@jreback, @TomAugspurger
I know it has been going on for too long, but can you please have a look at the latest changes?
Travis is failing due to another test case, TestClipboard.test_round_trip_valid_encodings to be precise.

@@ -775,6 +775,80 @@ Tests that we have ``parametrized`` are now accessible via the test name, for ex
test_cool_feature.py::test_dtypes[int8] PASSED
test_cool_feature.py::test_series[int8] PASSED

Using ``hypothesis``
~~~~~~~~~~~~~~~~~~~~
With the transition to pytest, things have become easier for testing by having reduced boilerplate for test cases and also by utilizing pytest's features like parametrizing, skipping and marking test cases.

Review comment (Contributor):
remove this 'transition to pytest'
@@ -0,0 +1,97 @@
import string

Review comment (Contributor):
add a doc-string to this module



def get_elements(elem_type):
    strategy = st.nothing()

Review comment (Contributor):
add doc-strings to each
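
A documented get_elements might look like the sketch below; the type-to-strategy mapping is an assumption inferred from the diff, not the PR's actual implementation:

from hypothesis import strategies as st


def get_elements(elem_type):
    """Return a hypothesis strategy generating scalar values of elem_type.

    Parameters
    ----------
    elem_type : type
        One of bool, int, float or str.

    Returns
    -------
    hypothesis.strategies.SearchStrategy
        A strategy for the requested type, or st.nothing() for
        unsupported types.
    """
    strategy = st.nothing()
    if elem_type == bool:
        strategy = st.booleans()
    elif elem_type == int:
        strategy = st.integers()
    elif elem_type == float:
        strategy = st.floats()
    elif elem_type == str:
        strategy = st.text()
    return strategy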

from datetime import date
from dateutil import relativedelta

from pandas.util._hypothesis import (st,

Review comment (Contributor):
make this simpler.

from pandas.util import _hypothesis as hp

assume)


NO_OF_EXAMPLES_PER_TEST_CASE = 20

Review comment (Contributor):
hard-code this for now in the examples. have to see how this behaves.

def test_simple(self):
    x, y = list('ABC'), [1, 22]
@settings(max_examples=NO_OF_EXAMPLES_PER_TEST_CASE)
@given(get_seq((str,), False, 1, 1),

Review comment (Contributor):
is there any way to name these things for given? Pretty non-intuitive here.

@sushobhit27 (Author) commented Apr 11, 2018
We are already giving them a name in some cases, as arguments of the test function, e.g.:

@given(st.lists(st.nothing()),
       get_seq((int,), False, min_size=1, max_size=10),
       get_seq((str,), False, min_size=1, max_size=10))
def test_empty(self, empty_list, list_of_int, list_of_str):
    ...

But a local function could be added like below:

def get_str_list_with_single_element():
    return get_seq((str,), False, 1, 1)

and then used as below:

@given(get_str_list_with_single_element(),
       get_seq((int,), False, 1, 2))
def test_simple(self, x, y):
    ...

However, that would be too cumbersome. Instead, for each argument in the given decorator, a comment explaining each returned strategy would be more suitable.

@given(get_seq((str,), False, 1, 1),
       get_seq((int,), False, 1, 2))
def test_simple(self, x, y):
    x = list(x[0])

Review comment (Contributor):
why is this a list?

Review comment (Author):
To map the test case as closely as possible to the original one, where x = list('ABC').
Although x = st.lists(st.text(string.ascii_letters, min_size=1, max_size=1), min_size=1) would have achieved the same effect of getting a list of single characters, it was not possible using the get_seq function.

from hypothesis import (given,
                        settings,
                        assume,
                        strategies as st,

Review comment (Contributor):
can you add some direct documentation / examples in this file

Incorporate review comments.

@sushobhit27 (Author)

@jreback I have incorporated all the review comments except "removing the hypothesis dependency from most of the files", as I could not really understand your comment below:
"I put them above

ci/install_travis, ci/install_circle and appveyor.yaml"
Again, the Travis build is failing due to some other test case.

@sushobhit27 (Author)

@jreback, @TomAugspurger
Was waiting for feedback, trying my luck again :)

jreback commented Apr 19, 2018

@sushobhit27 I indicated where you need to update these files: pandas/ci/install_circle, pandas/ci/appveyor.yml and pandas/ci/install_travis.sh

Remove hypothesis requirement from *.run files.

@sushobhit27 (Author)

@jreback I have tried to make the changes per your comments and have now removed the hypothesis dependency from all the *.run files, except for the 3 files you mentioned, although I am not sure if I did it correctly.
Let me know if it is still wrong.

@jreback (Contributor) left a comment

does this cause multiple tests to be run (like parametrized)? Are these deterministic? They for sure need to be, as we don't want non-deterministic ones on the CI (IOW, ones that cannot be reproduced locally). If they are seed-based this would be fine.

@@ -13,3 +13,4 @@ dependencies:
- pytz
- setuptools>=3.3
- sphinx
- hypothesis>=3.46.0

Review comment (Contributor):
need a new-line

@@ -65,6 +65,7 @@ fi
echo "[create env: ${REQ_BUILD}]"
time conda create -n pandas -q --file=${REQ_BUILD} || exit 1
time conda install -n pandas pytest>=3.1.0 || exit 1

Review comment (Contributor):
you can just add it on the previous line

@@ -104,6 +104,7 @@ if [ -e ${REQ} ]; then
fi

time conda install -n pandas pytest>=3.1.0

Review comment (Contributor):
same

@@ -11,4 +11,4 @@ psycopg2
pymysql=0.6.0
sqlalchemy=0.7.8
xlsxwriter=0.5.2
jinja2=2.8

Review comment (Contributor):
revert these, they shouldn't have any changes

Review comment (Author):
I will rebase my branch.

@@ -775,6 +775,46 @@ Tests that we have ``parametrized`` are now accessible via the test name, for ex
test_cool_feature.py::test_dtypes[int8] PASSED
test_cool_feature.py::test_series[int8] PASSED

Using ``hypothesis``

Review comment (Contributor):
add a ref tag here

@@ -775,6 +775,46 @@ Tests that we have ``parametrized`` are now accessible via the test name, for ex
test_cool_feature.py::test_dtypes[int8] PASSED
test_cool_feature.py::test_series[int8] PASSED

Using ``hypothesis``
~~~~~~~~~~~~~~~~~~~~
With the usage of pytest, things have become easier for testing by having reduced boilerplate for test cases and also by utilizing pytest's features like parametrizing, skipping and marking test cases.

Review comment (Contributor):
use double-backticks around pytest


However, one has to still come up with input data examples which can be tested against the functionality. There is always a possibility to skip testing an example which could have failed the test case.

Hypothesis is a python package which helps in overcoming this issue by generating the input data based on some set of specifications provided by the user.

Review comment (Contributor):
link to the package


class TestCartesianProduct(object):

    def test_simple(self):
        x, y = list('ABC'), [1, 22]
    @hp.settings(max_examples=20)

Review comment (Contributor):
so this is a lot of boilerplate here; this has to be simpler. We have thousands of tests (sure, most cannot use this), but I think you would need to have this reduced to basically a 1-liner to have people use it. Can you put some functions in _hypothesis to make this much more readable?

This comment applies to each of the additions here.

@sushobhit27 (Author) commented Apr 22, 2018
@jreback I have not seen a case where a test passes and then sometimes fails. I have pushed to this pull request quite a few times for this issue and haven't seen non-deterministic behavior so far.

Yes, the test cases run just like parametrized test cases.
I am not very sure about the seed thing, but a failing test case is always reproducible locally, as hypothesis maintains a cache of failed examples.
Also, a seed is always provided for a failed example, which can be used to reproduce the same test example again. For more info, check the link below:
http://hypothesis.readthedocs.io/en/latest/reproducing.html
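
As a minimal sketch of the seed-based reproduction those docs describe (the seed value and test body are made up for illustration):

from hypothesis import given, seed, strategies as st


# @seed pins hypothesis's randomness so the same examples are generated on
# every run; on failure, hypothesis prints the seed needed to reproduce it.
@seed(12345)
@given(st.lists(st.integers()))
def test_sum_is_order_independent(seq):
    assert sum(seq) == sum(reversed(seq))

Alternatively, settings(derandomize=True) makes the example generation fully deterministic.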

Review comment (Author):
"""so this is a lot of boilerplate here, this has to be simpler, we have thousands of tests (sure most cannot use this), but I think you would need to have this reduced to basically a 1-liner to have people use it. can you put some functions in _hypthesis to make this much more readable."""

If you are talking about boilerplate, like below decorators, I don't think, it can be further reduced as just like, we are bound to have different parametrize for different test cases in pytest, the same issue is with below code. For some function it will be less, for other it can be more.
may be when code evolves, more common code comes out to be refactored.

@hp.settings(max_examples=20)
@hp.given(hp.st.lists(hp.st.text(string.ascii_letters, min_size=1, max_size=1),
min_size=1, max_size=3),
hp.get_seq((int,), False, 1, 2))
def test_simple(self, x, y):
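
One way the decorator stack could collapse to a single line is sketched below, using a hypothetical given_seq helper that is not part of the PR:

import string

from hypothesis import given, settings, strategies as st


def given_seq(*strategies, max_examples=20):
    """Hypothetical helper combining @settings and @given into one decorator."""
    def decorator(test_func):
        return settings(max_examples=max_examples)(given(*strategies)(test_func))
    return decorator


@given_seq(st.lists(st.text(string.ascii_letters, min_size=1, max_size=1),
                    min_size=1, max_size=3),
           st.lists(st.integers(), min_size=1, max_size=2))
def test_simple(x, y):
    # both arguments are driven by the strategies passed to given_seq
    assert 1 <= len(x) <= 3 and 1 <= len(y) <= 2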

TomAugspurger commented Apr 24, 2018 via email

@sushobhit27 (Author)

@TomAugspurger I totally forgot about this PR; I have now rebased the branch. You can have a look at it now, as all tests are passing.

jreback commented Aug 20, 2018

closing in favor of #22280

@jreback closed this on Aug 20, 2018
Labels: Testing (pandas testing functions or related to the test suite)

Successfully merging this pull request may close these issues:
Investigate using Hypothesis for some tests

5 participants