
TST: Fail on warning #22699


Merged: 42 commits, Sep 18, 2018.
Commits (all by TomAugspurger):

- 9480916 TST: Fail on warning (Sep 13, 2018)
- bd5a419 more (Sep 13, 2018)
- 36279c9 Fixed base (Sep 14, 2018)
- ad69f50 Merge remote-tracking branch 'upstream/master' into pytest-warnings (Sep 14, 2018)
- 43d7a78 ABC compat (Sep 14, 2018)
- 4919b0a limit to dev (Sep 14, 2018)
- 2804192 Explicit warnings (Sep 14, 2018)
- 7ad249a Fixed plotting check (Sep 14, 2018)
- 953fde1 Set for NumPy dev (Sep 14, 2018)
- d2dd0af collections (Sep 14, 2018)
- b813e41 more collections, actually set (Sep 14, 2018)
- 3baba66 Always error ResourceWarning (Sep 14, 2018)
- 3067ed1 DeprecationWarnings (Sep 14, 2018)
- 614f514 Pop from sys.modules (Sep 14, 2018)
- 37a3d39 redo resourcewarning (Sep 14, 2018)
- 80ce447 wip (Sep 14, 2018)
- 61beba7 Panel (Sep 14, 2018)
- cf1ff63 more warnings (Sep 14, 2018)
- a9a672d lint (Sep 14, 2018)
- da4961a another (Sep 14, 2018)
- bed003b some more (Sep 14, 2018)
- 8bd8711 future->depr (Sep 14, 2018)
- ba46eef future->depr (Sep 14, 2018)
- 3b0b9b0 silence (Sep 14, 2018)
- e85d8d7 mean (Sep 14, 2018)
- 9eeb6a6 Merge remote-tracking branch 'upstream/master' into pytest-warnings (Sep 16, 2018)
- a89721e import again (Sep 16, 2018)
- 9813b4c ignore both (Sep 16, 2018)
- b7f0198 fixed syntax (Sep 16, 2018)
- dfc767c Lint (Sep 16, 2018)
- 51f77b5 Fix docs (Sep 17, 2018)
- 5a1b8ee remove record (Sep 17, 2018)
- 83cd9ae fixups (Sep 17, 2018)
- d10a0cc Filter again (Sep 17, 2018)
- cf60217 print (Sep 17, 2018)
- 61587ec Revert "print" (Sep 17, 2018)
- cfb5ae2 Close all the handles (Sep 17, 2018)
- 2591677 update contributing (Sep 18, 2018)
- 1d39520 Merge remote-tracking branch 'upstream/master' into pytest-warnings (Sep 18, 2018)
- 5f0eefe Docs (Sep 18, 2018)
- 7f618e9 Merge remote-tracking branch 'upstream/master' into pytest-warnings (Sep 18, 2018)
- 4990fc2 Fixed merge conflict (Sep 18, 2018)
4 changes: 2 additions & 2 deletions .travis.yml
@@ -64,7 +64,7 @@ matrix:
# In allow_failures
- dist: trusty
env:
- JOB="3.6, NumPy dev" ENV_FILE="ci/travis-36-numpydev.yaml" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate"
- JOB="3.7, NumPy dev" ENV_FILE="ci/travis-37-numpydev.yaml" TEST_ARGS="--skip-slow --skip-network -W error" PANDAS_TESTING_MODE="deprecate"
addons:
apt:
packages:
@@ -79,7 +79,7 @@ matrix:
- JOB="3.6, slow" ENV_FILE="ci/travis-36-slow.yaml" SLOW=true
- dist: trusty
env:
- JOB="3.6, NumPy dev" ENV_FILE="ci/travis-36-numpydev.yaml" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate"
- JOB="3.7, NumPy dev" ENV_FILE="ci/travis-37-numpydev.yaml" TEST_ARGS="--skip-slow --skip-network -W error" PANDAS_TESTING_MODE="deprecate"
addons:
apt:
packages:
2 changes: 1 addition & 1 deletion ci/travis-36-numpydev.yaml → ci/travis-37-numpydev.yaml
@@ -2,7 +2,7 @@ name: pandas
channels:
- defaults
dependencies:
- python=3.6*
- python=3.7*
- pytz
- Cython>=0.28.2
# universal
26 changes: 26 additions & 0 deletions doc/source/contributing.rst
@@ -859,6 +859,32 @@ preferred if the inputs or logic are simple, with Hypothesis tests reserved
for cases with complex logic or where there are too many combinations of
options or subtle interactions to test (or think of!) all of them.

.. _warnings:

Warnings
~~~~~~~~

By default, the pandas test suite will fail if any unhandled warnings are emitted.

If your change involves checking that a warning is actually emitted, use
``tm.assert_produces_warning(ExpectedWarning)``. We prefer this to pytest's
``pytest.warns`` context manager because ours checks that the warning's stacklevel
is set correctly.
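As a sketch of the recommended pattern (the ``emit`` function is a hypothetical stand-in for the code under test; the try/except covers the helper's later move from ``pandas.util.testing`` to ``pandas._testing``):

```python
import warnings

try:                      # location in later pandas versions
    import pandas._testing as tm
except ImportError:       # location at the time of this PR
    import pandas.util.testing as tm


def emit():
    # hypothetical function under test; stacklevel=2 makes the warning
    # point at the caller, which assert_produces_warning verifies
    warnings.warn("this API is deprecated", FutureWarning, stacklevel=2)


with tm.assert_produces_warning(FutureWarning):
    emit()
```

If ``emit`` warned with the wrong stacklevel, the context manager would raise, which is exactly the check ``pytest.warns`` does not perform.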

If you have a test that would emit a warning, but you aren't actually testing the
warning itself (say because it's going to be removed in the future, or because we're
gfyoung (Member) commented on Sep 16, 2018, suggesting: "it self" → "itself"

matching a 3rd-party library's behavior), then use ``pytest.mark.filterwarnings`` to
ignore the error.

```
@pytest.mark.filterwarnings("ignore:msg:category")
def test_thing(self):
...
```

If the test generates a warning of class ``category`` whose message starts
with ``msg``, the warning will be ignored and the test will pass.
A reviewer (Member) commented:
Should we also document the

with warnings.catch_warnings():
    warnings.simplefilter(..)
    ...

when you only want to ignore warnings in a part of the test?
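A sketch of that scoped pattern with only the standard library (the ``noisy_setup`` helper is hypothetical):

```python
import warnings


def noisy_setup():
    # hypothetical helper that emits a deprecation we don't care about here
    warnings.warn("legacy path", DeprecationWarning)


with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    noisy_setup()  # warning suppressed only inside this block

# outside the block the previous filters are active again
```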



Running the test suite
----------------------
12 changes: 12 additions & 0 deletions pandas/compat/__init__.py
@@ -38,6 +38,7 @@
import struct
import inspect
from collections import namedtuple
import collections
TomAugspurger (Author) commented:
Using ABCs from collections is deprecated. Need to use collections.abc instead (which isn't in Py2).

A reviewer (Member) asked:
Assuming the subsequent changes didn't generate any failures does this point to a potential lack of compatibility test coverage?

TomAugspurger replied:
I don't follow 100%, but I don't think so. On 3.7, you basically couldn't import pandas when -W error:DeprecationWarning is set, so this was definitely being hit.


PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] >= 3
@@ -135,6 +136,11 @@ def lfilter(*args, **kwargs):

from importlib import reload
reload = reload
Hashable = collections.abc.Hashable
Iterable = collections.abc.Iterable
Mapping = collections.abc.Mapping
Sequence = collections.abc.Sequence
Sized = collections.abc.Sized

else:
# Python 2
@@ -190,6 +196,12 @@ def get_range_parameters(data):

reload = builtins.reload

Hashable = collections.Hashable
Iterable = collections.Iterable
Mapping = collections.Mapping
Sequence = collections.Sequence
Sized = collections.Sized

if PY2:
def iteritems(obj, **kw):
return obj.iteritems(**kw)
9 changes: 8 additions & 1 deletion pandas/compat/chainmap_impl.py
@@ -1,4 +1,11 @@
from collections import MutableMapping
import sys

PY3 = sys.version_info[0] >= 3

if PY3:
from collections.abc import MutableMapping
else:
from collections import MutableMapping

try:
from thread import get_ident
11 changes: 11 additions & 0 deletions pandas/conftest.py
@@ -1,4 +1,5 @@
import os
import sys
import importlib

import pytest
@@ -31,6 +32,16 @@ def pytest_addoption(parser):
help="Fail if a test is skipped for missing data file.")


def pytest_collection_modifyitems(items):
TomAugspurger (Author) commented:
I don't think we'll want this permanently. This is just to track down #22675 in case it happens on this branch.

A reviewer (Member) asked:
Do we need this as part of the ultimate change or no?

TomAugspurger replied:
I'm tempted to leave it in master for a week or two to see if we catch any of the failures in action. I've made myself a reminder to remove it.

TomAugspurger later added:
sighhhhhhh. We got one: https://travis-ci.org/pandas-dev/pandas/jobs/429657921#L2280, but that doesn't actually fail the right test, because the warning is emitted in the file object's __del__, which just prints to stderr rather than raising an exception (to fail the test).

I'm going to do some debugging on the branch.

# Make unhandled ResourceWarnings fail early to track down
# https://github.com/pandas-dev/pandas/issues/22675
if PY3:
for item in items:
item.add_marker(
pytest.mark.filterwarnings("error::ResourceWarning")
)
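The kind of ResourceWarning this marker is meant to surface can be reproduced with an unclosed file handle; a standalone sketch (the path and helper name are illustrative):

```python
import gc
import os
import tempfile
import warnings


def leak_handle(path):
    open(path, "w")  # handle is never closed -> ResourceWarning at finalization


path = os.path.join(tempfile.mkdtemp(), "leaky.txt")
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ResourceWarning)
    leak_handle(path)
    gc.collect()  # ensure the finalizer has run on any Python implementation

assert any(issubclass(w.category, ResourceWarning) for w in caught)
```

This also illustrates the problem discussed above: the warning fires in a destructor, so without a filter turning it into an error it only reaches stderr.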


def pytest_runtest_setup(item):
if 'slow' in item.keywords and item.config.getoption("--skip-slow"):
pytest.skip("skipping due to --skip-slow")
5 changes: 3 additions & 2 deletions pandas/core/algorithms.py
@@ -3,7 +3,7 @@
intended for public consumption
"""
from __future__ import division
from warnings import warn, catch_warnings
from warnings import warn, catch_warnings, simplefilter
from textwrap import dedent

import numpy as np
@@ -91,7 +91,8 @@ def _ensure_data(values, dtype=None):

# ignore the fact that we are casting to float
# which discards complex parts
with catch_warnings(record=True):
with catch_warnings():
simplefilter("ignore", np.ComplexWarning)
values = ensure_float64(values)
return values, 'float64', 'float64'
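The warning silenced here is NumPy's ComplexWarning for lossy complex-to-real casts; a minimal sketch (the getattr fallback is ours, since the class moved to ``np.exceptions`` in later NumPy releases):

```python
import warnings

import numpy as np

# ComplexWarning lives in the top-level namespace in older NumPy,
# and under np.exceptions in newer releases
ComplexWarning = getattr(np, "ComplexWarning", None) or np.exceptions.ComplexWarning

values = np.array([1 + 2j, 3 + 0j])
with warnings.catch_warnings():
    warnings.simplefilter("ignore", ComplexWarning)
    floats = values.astype(np.float64)  # imaginary parts are discarded

assert floats.tolist() == [1.0, 3.0]
```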

1 change: 1 addition & 0 deletions pandas/core/arrays/datetimelike.py
@@ -59,6 +59,7 @@ def cmp_method(self, other):
# numpy will show a DeprecationWarning on invalid elementwise
# comparisons, this will raise in the future
with warnings.catch_warnings(record=True):
warnings.filterwarnings("ignore", "elementwise", FutureWarning)
A reviewer (Member) commented:
Not fully related to this change, but do we actually want to ignore the warning here? Shouldn't we rather fix it if numpy will do something differently in the future?

TomAugspurger replied:
Added to #22698

with np.errstate(all='ignore'):
result = op(self.values, np.asarray(other))

8 changes: 7 additions & 1 deletion pandas/core/arrays/integer.py
@@ -5,7 +5,7 @@

from pandas._libs.lib import infer_dtype
from pandas.util._decorators import cache_readonly
from pandas.compat import u, range
from pandas.compat import u, range, string_types
from pandas.compat import set_function_name

from pandas.core.dtypes.cast import astype_nansafe
@@ -147,6 +147,11 @@ def coerce_to_array(values, dtype, mask=None, copy=False):
dtype = values.dtype

if dtype is not None:
if (isinstance(dtype, string_types) and
(dtype.startswith("Int") or dtype.startswith("UInt"))):
# Avoid DeprecationWarning from NumPy about np.dtype("Int64")
# https://github.com/numpy/numpy/pull/7476
dtype = dtype.lower()
TomAugspurger (Author) commented:
@jreback could you take a close look here. I think you ran into this when writing a test.

A reviewer (Member) commented:
Is there a test that causes this warning? (I can see this locally, but I didn't see it appearing when running tests)

The reviewer added:
Hmm, ignore previous comment; I thought pytest automatically shows all warnings, which is not the case. Shouldn't we edit our setup.cfg to do that?

TomAugspurger replied:
This is a deprecation warning from NumPy right now, which is filtered in interactive sessions, but pytest removes that filter by default. Some of the integer_array tests were hitting this.

jreback (Contributor) commented:
oh I didn't know numpy did this:

In [1]: np.dtype('Int64')
/Users/jreback/miniconda3/envs/pandas/bin/ipython:1: DeprecationWarning: Numeric-style type codes are deprecated and will result in an error in the future.
  #!/Users/jreback/miniconda3/envs/pandas/bin/python
Out[1]: dtype('int64')

I guess so, thanks

TomAugspurger replied:
Yeah. I'm waiting to hear back on numpy/numpy#7476 (comment)

If np.dtype("int") == "Int64" errors in the future, we may have to rework some things.

if not issubclass(type(dtype), _IntegerDtype):
try:
dtype = _dtypes[str(np.dtype(dtype))]
@@ -508,6 +513,7 @@ def cmp_method(self, other):
# numpy will show a DeprecationWarning on invalid elementwise
# comparisons, this will raise in the future
with warnings.catch_warnings(record=True):
A reviewer (Member) asked:
General question: is the record=True needed here? (I have the feeling it is used in many places in the belief that it does what the simplefilter(..) is still needed for.)

TomAugspurger replied:
No, record isn't needed here (and should be avoided). It should only be used when you need to make some assertion about the value yielded by the contextmanager.
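When a test does need to make assertions about what was warned, record=True is the right tool; a stdlib-only sketch:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn("behavior will change", FutureWarning)

# record=True yields the captured warnings so the test can inspect them
assert len(caught) == 1
assert issubclass(caught[0].category, FutureWarning)
assert "will change" in str(caught[0].message)
```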

warnings.filterwarnings("ignore", "elementwise", FutureWarning)
with np.errstate(all='ignore'):
result = op(self._data, other)

2 changes: 1 addition & 1 deletion pandas/core/common.py
@@ -356,7 +356,7 @@ def standardize_mapping(into):
return partial(
collections.defaultdict, into.default_factory)
into = type(into)
if not issubclass(into, collections.Mapping):
if not issubclass(into, compat.Mapping):
raise TypeError('unsupported type: {into}'.format(into=into))
elif into == collections.defaultdict:
raise TypeError(
1 change: 1 addition & 0 deletions pandas/core/computation/eval.py
@@ -323,6 +323,7 @@ def eval(expr, parser='pandas', engine=None, truediv=True,
# to use a non-numeric indexer
try:
with warnings.catch_warnings(record=True):
# TODO: Filter the warnings we actually care about here.
target[assigner] = ret
except (TypeError, IndexError):
raise ValueError("Cannot assign expression output to target")
6 changes: 3 additions & 3 deletions pandas/core/dtypes/inference.py
@@ -3,8 +3,8 @@
import collections
import re
import numpy as np
from collections import Iterable
from numbers import Number
from pandas import compat
from pandas.compat import (PY2, string_types, text_type,
string_and_binary_types, re_type)
from pandas._libs import lib
@@ -112,7 +112,7 @@ def _iterable_not_string(obj):
False
"""

return (isinstance(obj, collections.Iterable) and
return (isinstance(obj, compat.Iterable) and
not isinstance(obj, string_types))


@@ -284,7 +284,7 @@ def is_list_like(obj):
False
"""

return (isinstance(obj, Iterable) and
return (isinstance(obj, compat.Iterable) and
# we do not count strings/unicode/bytes as list-like
not isinstance(obj, string_and_binary_types) and
# exclude zero-dimensional numpy arrays, effectively scalars
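The logic in the ``is_list_like`` hunk above can be sketched as a standalone, Python-3-only function (simplified; the real implementation also handles Python 2 string types through compat):

```python
from collections.abc import Iterable

import numpy as np


def is_list_like_sketch(obj):
    # iterable, but strings/bytes don't count, and neither do 0-d arrays
    return (isinstance(obj, Iterable)
            and not isinstance(obj, (str, bytes))
            and not (isinstance(obj, np.ndarray) and obj.ndim == 0))


assert is_list_like_sketch([1, 2])
assert is_list_like_sketch(x for x in range(3))
assert not is_list_like_sketch("abc")
assert not is_list_like_sketch(np.array(1))
```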
6 changes: 3 additions & 3 deletions pandas/core/frame.py
@@ -418,9 +418,9 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
copy=copy)

# For data is list-like, or Iterable (will consume into list)
elif (isinstance(data, collections.Iterable)
elif (isinstance(data, compat.Iterable)
and not isinstance(data, string_and_binary_types)):
if not isinstance(data, collections.Sequence):
if not isinstance(data, compat.Sequence):
data = list(data)
if len(data) > 0:
if is_list_like(data[0]) and getattr(data[0], 'ndim', 1) == 1:
@@ -7655,7 +7655,7 @@ def _to_arrays(data, columns, coerce_float=False, dtype=None):
if isinstance(data[0], (list, tuple)):
return _list_to_arrays(data, columns, coerce_float=coerce_float,
dtype=dtype)
elif isinstance(data[0], collections.Mapping):
elif isinstance(data[0], compat.Mapping):
return _list_of_dict_to_arrays(data, columns,
coerce_float=coerce_float, dtype=dtype)
elif isinstance(data[0], Series):
2 changes: 1 addition & 1 deletion pandas/core/groupby/generic.py
@@ -758,7 +758,7 @@ def aggregate(self, func_or_funcs, *args, **kwargs):
if isinstance(func_or_funcs, compat.string_types):
return getattr(self, func_or_funcs)(*args, **kwargs)

if isinstance(func_or_funcs, collections.Iterable):
if isinstance(func_or_funcs, compat.Iterable):
A Contributor commented:
In theory we could add a lint rule to avoid using collections.Iterable, and instead just use our compat.

TomAugspurger replied:
I think we're close enough to py3-only that we'll be OK. Writing that regex would be a bit fiddly since we'd need to enumerate all the ABCs.

The Contributor replied:
true, certainly a follow up issue is ok

# Catch instances of lists / tuples
# but not the class list / tuple itself.
ret = self._aggregate_multiple_funcs(func_or_funcs,
1 change: 1 addition & 0 deletions pandas/core/indexes/base.py
@@ -98,6 +98,7 @@ def cmp_method(self, other):
# numpy will show a DeprecationWarning on invalid elementwise
# comparisons, this will raise in the future
with warnings.catch_warnings(record=True):
warnings.filterwarnings("ignore", "elementwise", FutureWarning)
A reviewer (Member) asked:
What's the point in using warnings.filterwarnings instead of pytest.mark.filterwarnings? Wondering if we shouldn't always use the latter for consistency

TomAugspurger replied:
laziness / not wanting to introduce copy-paste errors. New code should essentially always use marks, and I can make the changes here if you want. I mostly cleaned up Panel to remove these catch_warnings / filterwarnings, but introduced one copy-paste error in the process (caught before pushing I think), so I'm inclined to not make unnecessary changes.

The reviewer replied:
Makes sense. Maybe not a hard rule on this change but it would be helpful to use this instead in functions where there is a big diff

with np.errstate(all='ignore'):
result = op(self.values, np.asarray(other))

1 change: 1 addition & 0 deletions pandas/core/internals/blocks.py
@@ -3490,6 +3490,7 @@ def _putmask_smart(v, m, n):

# we ignore ComplexWarning here
with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore", np.ComplexWarning)
A Contributor asked:
does the context manager restore all of the warnings filters after? (I guess that is the point?)

TomAugspurger replied:
Yeah, that's the idea. Ideally libraries shouldn't modify the global warnings registry without good cause.

nn_at = nn.astype(v.dtype)

# avoid invalid dtype comparisons
4 changes: 2 additions & 2 deletions pandas/core/series.py
@@ -242,8 +242,8 @@ def __init__(self, data=None, index=None, dtype=None, name=None,
raise TypeError("{0!r} type is unordered"
"".format(data.__class__.__name__))
# If data is Iterable but not list-like, consume into list.
elif (isinstance(data, collections.Iterable)
and not isinstance(data, collections.Sized)):
elif (isinstance(data, compat.Iterable)
and not isinstance(data, compat.Sized)):
data = list(data)
else:

2 changes: 2 additions & 0 deletions pandas/core/window.py
@@ -2387,11 +2387,13 @@ def dataframe_from_int_dict(data, frame_template):
if not arg2.columns.is_unique:
raise ValueError("'arg2' columns are not unique")
with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore", RuntimeWarning)
X, Y = arg1.align(arg2, join='outer')
X = X + 0 * Y
Y = Y + 0 * X

with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore", RuntimeWarning)
res_columns = arg1.columns.union(arg2.columns)
for col in res_columns:
if col in X and col in Y:
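The RuntimeWarning suppressed above comes from NumPy's floating-point checks when the ``X + 0 * Y`` alignment trick meets inf or NaN; a minimal reproduction (array values are illustrative):

```python
import warnings

import numpy as np

x = np.array([1.0, 2.0])
y = np.array([np.inf, np.nan])

with warnings.catch_warnings():
    warnings.simplefilter("ignore", RuntimeWarning)
    aligned = x + 0 * y  # 0 * inf is invalid -> would emit RuntimeWarning

assert np.isnan(aligned).all()
```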
4 changes: 2 additions & 2 deletions pandas/io/html.py
@@ -6,14 +6,14 @@
import os
import re
import numbers
import collections

from distutils.version import LooseVersion

from pandas.core.dtypes.common import is_list_like
from pandas.errors import EmptyDataError
from pandas.io.common import _is_url, urlopen, _validate_header_arg
from pandas.io.parsers import TextParser
from pandas import compat
from pandas.compat import (lrange, lmap, u, string_types, iteritems,
raise_with_traceback, binary_type)
from pandas import Series
@@ -859,7 +859,7 @@ def _validate_flavor(flavor):
flavor = 'lxml', 'bs4'
elif isinstance(flavor, string_types):
flavor = flavor,
elif isinstance(flavor, collections.Iterable):
elif isinstance(flavor, compat.Iterable):
if not all(isinstance(flav, string_types) for flav in flavor):
raise TypeError('Object of type {typ!r} is not an iterable of '
'strings'
3 changes: 2 additions & 1 deletion pandas/io/pickle.py
@@ -160,7 +160,8 @@ def try_read(path, encoding=None):
# GH 6899
try:
with warnings.catch_warnings(record=True):
# We want to silencce any warnings about, e.g. moved modules.
# We want to silence any warnings about, e.g. moved modules.
warnings.simplefilter("ignore", Warning)
return read_wrapper(lambda f: pkl.load(f))
except Exception:
# reg/patched pickle
17 changes: 8 additions & 9 deletions pandas/tests/api/test_api.py
@@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
import sys
from warnings import catch_warnings

import pytest
import pandas as pd
@@ -175,30 +174,30 @@ def test_get_store(self):

class TestJson(object):

@pytest.mark.filterwarnings("ignore")
def test_deprecation_access_func(self):
with catch_warnings(record=True):
pd.json.dumps([])
pd.json.dumps([])


class TestParser(object):

@pytest.mark.filterwarnings("ignore")
def test_deprecation_access_func(self):
A Contributor commented:
should these actually be tm.assert_produces_warning(FutureWarning)?

The Contributor added:
nvm I see your comment below

with catch_warnings(record=True):
pd.parser.na_values
pd.parser.na_values


class TestLib(object):

@pytest.mark.filterwarnings("ignore")
def test_deprecation_access_func(self):
with catch_warnings(record=True):
pd.lib.infer_dtype('foo')
pd.lib.infer_dtype('foo')


class TestTSLib(object):

@pytest.mark.filterwarnings("ignore")
def test_deprecation_access_func(self):
with catch_warnings(record=True):
pd.tslib.Timestamp('20160101')
pd.tslib.Timestamp('20160101')


class TestTypes(object):