
Bug: assert_produces_warning(None) not raising AssertionError with warning #38626

Merged: 8 commits, Dec 22, 2020
7 changes: 3 additions & 4 deletions pandas/_testing.py
@@ -2724,11 +2724,10 @@ class for all warnings. To check that no warning is returned,
extra_warnings = []

for actual_warning in w:
Member: Isn't this code functionally the same as the existing code?

Member Author: I thought that if expected_warning is False or None, extra_warnings never gets appended to because of the continue, so the check at the end that raises on extra warnings is never triggered.

Member (@ivanovmg, Dec 21, 2020): The current implementation ignores the else clause below, which consolidates extra warnings. I see that I introduced that error in one of my previous PRs; this PR reverts it. Extracting some functions would probably improve the readability of this logic (maybe in a separate PR).

Member Author: The added tests do raise on master as well, so behavior for them has changed.

-        if not expected_warning:
-            continue
-
         expected_warning = cast(Type[Warning], expected_warning)
-        if issubclass(actual_warning.category, expected_warning):
+        if expected_warning and issubclass(
+            actual_warning.category, expected_warning
+        ):
Comment on lines -2727 to +2730
Member: Yes, this is a good catch.

saw_warning = True

if check_stacklevel and issubclass(
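The logic change discussed above can be sketched in isolation with stdlib-only stand-ins (collect_extra_buggy and collect_extra_fixed are illustrative names, not the actual pandas helpers):

```python
import warnings

def collect_extra_buggy(expected_warning, caught):
    # Pre-fix loop (simplified): when expected_warning is None or False,
    # `continue` skips the else branch, so unexpected warnings are never
    # recorded and the final "extra warnings" check has nothing to raise on.
    extra_warnings = []
    for actual in caught:
        if not expected_warning:
            continue
        if issubclass(actual.category, expected_warning):
            pass  # expected warning seen
        else:
            extra_warnings.append(actual.category.__name__)
    return extra_warnings

def collect_extra_fixed(expected_warning, caught):
    # Post-fix loop (simplified): the falsy check is folded into the
    # issubclass condition, so stray warnings still reach the else branch.
    extra_warnings = []
    for actual in caught:
        if expected_warning and issubclass(actual.category, expected_warning):
            pass  # expected warning seen
        else:
            extra_warnings.append(actual.category.__name__)
    return extra_warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn("stray", UserWarning)

print(collect_extra_buggy(None, caught))  # []
print(collect_extra_fixed(None, caught))  # ['UserWarning']
```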
1 change: 1 addition & 0 deletions pandas/tests/arithmetic/test_timedelta64.py
@@ -543,6 +543,7 @@ def test_tda_add_sub_index(self):
expected = tdi - tdi
tm.assert_index_equal(result, expected)

@pytest.mark.xfail(reason="GH38630", strict=False)
Contributor: Is there a reason you are passing strict? We default to True; if these are fixed, we want the tests to fail (as a hint to remove the xfail).

Member Author: The problem I ran into is that some parameterizations actually pass, so strict=True fails for those. I couldn't figure out a way to xfail only the failing combinations because the parameterizations are complex (two are defined elsewhere in fixtures, and one uses multiple calls to pytest.mark.parametrize).

Contributor: OK, that's fine.

Ping on green.

Member Author: I'm seeing two failures where pandas/tests/io/parser/test_common.py::test_chunks_have_consistent_numerical_type[python] gives an unexpected ResourceWarning.

Do you know if this warning occurs consistently? Should something in the test be modified to handle a potential ResourceWarning, or is this just another xfail case?

Contributor: We are trying to track these cases down, as something is leaking.

If they are causing actual failures, then it is OK to xfail (and list these in the associated issue with checkboxes).
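A sketch of how such a leak surfaces as a stray ResourceWarning (CPython-specific finalization timing; illustrative only): an unclosed file emits a ResourceWarning when it is finalized, which can land inside an unrelated assert-no-warning block and make a test flaky.

```python
import gc
import os
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    leaked = open(os.devnull)  # opened but never closed
    del leaked                 # dropped without close(); finalizer warns
    gc.collect()

names = [w.category.__name__ for w in caught]
print(names)
```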

def test_tda_add_dt64_object_array(self, box_with_array, tz_naive_fixture):
# Result should be cast back to DatetimeArray
box = box_with_array
1 change: 1 addition & 0 deletions pandas/tests/frame/test_arithmetic.py
@@ -821,6 +821,7 @@ def test_frame_with_frame_reindex(self):
(np.datetime64(20, "ns"), "<M8[ns]"),
],
)
@pytest.mark.xfail(reason="GH38630", strict=False)
@pytest.mark.parametrize(
"op",
[
1 change: 1 addition & 0 deletions pandas/tests/indexes/test_common.py
@@ -357,6 +357,7 @@ def test_ravel_deprecation(self, index):
with tm.assert_produces_warning(FutureWarning):
index.ravel()

@pytest.mark.xfail(reason="GH38630", strict=False)
def test_asi8_deprecation(self, index):
# GH#37877
if isinstance(
1 change: 1 addition & 0 deletions pandas/tests/io/parser/test_common.py
@@ -1136,6 +1136,7 @@ def test_parse_integers_above_fp_precision(all_parsers):
tm.assert_frame_equal(result, expected)


@pytest.mark.xfail(reason="GH38630, sometimes gives ResourceWarning", strict=False)
def test_chunks_have_consistent_numerical_type(all_parsers):
parser = all_parsers
integers = [str(i) for i in range(499999)]
17 changes: 17 additions & 0 deletions pandas/tests/util/test_assert_produces_warning.py
@@ -152,3 +152,20 @@ def test_right_category_wrong_match_raises(pair_different_warnings):
with tm.assert_produces_warning(target_category, match=r"^Match this"):
warnings.warn("Do not match it", target_category)
warnings.warn("Match this", other_category)


@pytest.mark.parametrize("false_or_none", [False, None])
Member: Good tests to prevent the issue from happening!

class TestFalseOrNoneExpectedWarning:
def test_raise_on_warning(self, false_or_none):
msg = r"Caused unexpected warning\(s\)"
with pytest.raises(AssertionError, match=msg):
with tm.assert_produces_warning(false_or_none):
f()

def test_no_raise_without_warning(self, false_or_none):
with tm.assert_produces_warning(false_or_none):
pass

def test_no_raise_with_false_raise_on_extra(self, false_or_none):
with tm.assert_produces_warning(false_or_none, raise_on_extra_warnings=False):
f()
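The behavior these tests pin down can also be checked interactively (a sketch, assuming a pandas build that includes this fix):

```python
import warnings

import pandas._testing as tm

# With the fix, expected_warning=None (or False) asserts silence: any
# warning raised inside the block triggers AssertionError.
try:
    with tm.assert_produces_warning(None):
        warnings.warn("stray", UserWarning)
except AssertionError as err:
    print("Caused unexpected warning" in str(err))

# A silent block still passes.
with tm.assert_produces_warning(None):
    pass

# Extra warnings can be tolerated explicitly.
with tm.assert_produces_warning(None, raise_on_extra_warnings=False):
    warnings.warn("stray", UserWarning)
```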