CI: Fixing possible bugs in the CI #23727

Merged (12 commits, Nov 24, 2018)
ci/azure/linux.yml (2 additions & 1 deletion)
@@ -15,7 +15,7 @@ jobs:
CONDA_ENV: pandas
TEST_ARGS: "--skip-slow --skip-network"

py36_locale:
py37_locale:
Member Author:
In CONDA_PY we have 37, as well as in the yaml file, so I assume the job name is wrong.

ENV_FILE: ci/deps/azure-37-locale.yaml
CONDA_PY: "37"
CONDA_ENV: pandas
@@ -27,6 +27,7 @@ jobs:
CONDA_PY: "36"
CONDA_ENV: pandas
TEST_ARGS: "--only-slow --skip-network"
LOCALE_OVERRIDE: "it_IT.UTF-8"

steps:
- script: |
ci/deps/azure-37-locale.yaml (2 additions & 2 deletions)
@@ -18,7 +18,7 @@ dependencies:
- pymysql
- pytables
- python-dateutil
- python=3.6*
- python=3.7*
Member Author:
The file name says 37, so I guess the version here is wrong.

Member Author:
Looks like this is breaking the installation of this build, as moto can't be installed with Python 3.7.

Contributor:
Seems like conda-forge/moto-feedstock#15 is failing on a dependency.

Contributor:
@datapythonista @TomAugspurger
Moto supports Python 3.7 starting with 1.3.7 (the latest version), but that isn't reflected in its requirements yet: getmoto/moto#1886

It's possible to install it through pip (I did that in #23731), but then the problem is that 1.3.7 requires a boto version that runs into #23754, probably because of a moto issue: getmoto/moto#1941

- pytz
- s3fs
- scipy
@@ -30,6 +30,6 @@ dependencies:
# universal
- pytest
- pytest-xdist
- moto
- pip:
- hypothesis>=3.58.0
- moto # latest moto in conda-forge fails with 3.7, move to conda dependencies when this is fixed
ci/script_multi.sh (7 additions & 2 deletions)
@@ -6,6 +6,7 @@ source activate pandas

if [ -n "$LOCALE_OVERRIDE" ]; then
export LC_ALL="$LOCALE_OVERRIDE";
export LANG="$LOCALE_OVERRIDE";
Member Author:
We have this in ci/script_single.sh but not here. I don't see why it should be needed in one case and not in the other. I guess it's missing here (or it could be removed there).

Contributor:
You could add this, it would probably be ok, but all of the locale tests are run in the single-process script, I think.
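
For context, a minimal way to check that the override actually reaches the interpreter (a sketch, not part of this diff; it assumes the it_IT.UTF-8 locale is available on the CI image):

```python
# Mimic what the CI script exports, then ask Python which locale it picks up.
import locale
import os

os.environ["LC_ALL"] = "it_IT.UTF-8"  # the value LOCALE_OVERRIDE is set to in the job
os.environ["LANG"] = "it_IT.UTF-8"
locale.setlocale(locale.LC_ALL, "")   # load the locale settings from the environment
print(locale.getlocale())             # roughly: ('it_IT', 'UTF-8')
```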

echo "Setting LC_ALL to $LOCALE_OVERRIDE"

pycmd='import pandas; print("pandas detected console encoding: %s" % pandas.get_option("display.encoding"))'
@@ -32,8 +33,12 @@ elif [ "$COVERAGE" ]; then

elif [ "$SLOW" ]; then
TEST_ARGS="--only-slow --skip-network"
echo pytest -m "not single and slow" -v --durations=10 --junitxml=test-data-multiple.xml --strict $TEST_ARGS pandas
pytest -m "not single and slow" -v --durations=10 --junitxml=test-data-multiple.xml --strict $TEST_ARGS pandas
# The `and slow` in `-m "not single and slow"` is redundant here, as `--only-slow` is already passed (via $TEST_ARGS).
# But it is still needed: with `--only-slow` the fast tests are skipped rather than deselected, so each of them is
# printed in the log (which could be avoided with `-q`), added to `test-data-multiple.xml`, and then printed again
# in the call to `ci/print_skipped.py`. Printing them makes the log exceed the maximum size allowed by Travis and
# the build fails.
echo pytest -n 2 -m "not single and slow" --durations=10 --junitxml=test-data-multiple.xml --strict $TEST_ARGS pandas
pytest -n 2 -m "not single and slow" --durations=10 --junitxml=test-data-multiple.xml --strict $TEST_ARGS pandas
Member Author:
Not easy to see in the diff, but note that I added the `-n 2` here, so the slow tests are now run with two processes in script_multi.sh.


else
echo pytest -n 2 -m "not single" --durations=10 --junitxml=test-data-multiple.xml --strict $TEST_ARGS pandas
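The shell comment added above hinges on the difference between skipping and deselecting: an option like `--only-slow` typically marks every non-slow test as skipped, so each one still shows up in the log and in the junit XML, while `-m "not single and slow"` deselects them before the run starts. A rough sketch of how such an option is usually wired up (an illustration, not pandas' actual conftest.py):

```python
# conftest.py sketch: an --only-slow flag that skips (rather than deselects) fast tests.
import pytest


def pytest_addoption(parser):
    parser.addoption("--only-slow", action="store_true",
                     help="run only tests marked as slow")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--only-slow"):
        skip_fast = pytest.mark.skip(reason="--only-slow given")
        for item in items:
            if "slow" not in item.keywords:
                # A skipped test is still collected, reported and written to the
                # junit XML; only -m deselection drops it from the run entirely.
                item.add_marker(skip_fast)
```

Combining both, as the new invocation does, keeps the Travis log small, and the `-n 2` lets pytest-xdist spread the remaining slow tests over two worker processes.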
pandas/tests/io/test_excel.py (47 additions & 27 deletions)
@@ -1,31 +1,31 @@
# pylint: disable=E1101
import os
import warnings
from datetime import datetime, date, time, timedelta
from collections import OrderedDict
import contextlib
from datetime import date, datetime, time, timedelta
from distutils.version import LooseVersion
from functools import partial
import os
import warnings
from warnings import catch_warnings
from collections import OrderedDict

import numpy as np
import pytest
from numpy import nan
import pytest

import pandas as pd
import pandas.util.testing as tm
from pandas.compat import PY36, BytesIO, iteritems, map, range, u
import pandas.util._test_decorators as td

import pandas as pd
from pandas import DataFrame, Index, MultiIndex, Series
from pandas.compat import u, range, map, BytesIO, iteritems, PY36
from pandas.core.config import set_option, get_option
from pandas.core.config import get_option, set_option
import pandas.util.testing as tm
from pandas.util.testing import ensure_clean, makeCustomDataframe as mkdf

from pandas.io.common import URLError
from pandas.io.excel import (
ExcelFile, ExcelWriter, read_excel, _XlwtWriter, _OpenpyxlWriter,
register_writer, _XlsxWriter
)
ExcelFile, ExcelWriter, _OpenpyxlWriter, _XlsxWriter, _XlwtWriter,
read_excel, register_writer)
from pandas.io.formats.excel import ExcelFormatter
from pandas.io.parsers import read_csv
from pandas.util.testing import ensure_clean, makeCustomDataframe as mkdf


_seriesd = tm.getSeriesData()
_tsd = tm.getTimeSeriesData()
@@ -36,6 +36,20 @@
_mixed_frame['foo'] = 'bar'


@contextlib.contextmanager
def ignore_xlrd_time_clock_warning():
"""
Context manager to ignore warnings raised by the xlrd library,
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

hmm, is this still present in 1.0.0 which is now our min version?

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

it happens in 1.1.0 (the latest), which is the version I have installed, so I think these warning ignores are needed for now

regarding the deprecation of `time.clock` in Python 3.7.
"""
with warnings.catch_warnings():
warnings.filterwarnings(
action='ignore',
message='time.clock has been deprecated',
category=DeprecationWarning)
yield
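
For reference, `warnings.filterwarnings` treats `message` as a regular expression matched against the start of the warning text, so the filter above silences only the specific xlrd deprecation message and leaves other DeprecationWarnings visible. A small self-contained illustration (the warning texts below are made up for the example):

```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("error")  # escalate any unmatched warning to an exception
    warnings.filterwarnings(
        action='ignore',
        message='time.clock has been deprecated',
        category=DeprecationWarning)
    # Matches the message prefix and the category, so it is silently ignored.
    warnings.warn("time.clock has been deprecated in Python 3.3", DeprecationWarning)
    try:
        # A different message does not match the filter and escalates to an error.
        warnings.warn("some other API has been deprecated", DeprecationWarning)
    except DeprecationWarning:
        print("unrelated DeprecationWarning still surfaces")
```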


@td.skip_if_no('xlrd', '1.0.0')
class SharedItems(object):

@@ -114,20 +128,23 @@ def test_usecols_int(self, ext):
# usecols as int
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
df1 = self.get_exceldf("test1", ext, "Sheet1",
index_col=0, usecols=3)
with ignore_xlrd_time_clock_warning():
df1 = self.get_exceldf("test1", ext, "Sheet1",
index_col=0, usecols=3)

# usecols as int
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
df2 = self.get_exceldf("test1", ext, "Sheet2", skiprows=[1],
index_col=0, usecols=3)
with ignore_xlrd_time_clock_warning():
df2 = self.get_exceldf("test1", ext, "Sheet2", skiprows=[1],
index_col=0, usecols=3)

# parse_cols instead of usecols, usecols as int
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
df3 = self.get_exceldf("test1", ext, "Sheet2", skiprows=[1],
index_col=0, parse_cols=3)
with ignore_xlrd_time_clock_warning():
df3 = self.get_exceldf("test1", ext, "Sheet2", skiprows=[1],
index_col=0, parse_cols=3)

# TODO add index to xls file)
tm.assert_frame_equal(df1, df_ref, check_names=False)
@@ -145,8 +162,9 @@ def test_usecols_list(self, ext):
index_col=0, usecols=[0, 2, 3])

with tm.assert_produces_warning(FutureWarning):
df3 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
index_col=0, parse_cols=[0, 2, 3])
with ignore_xlrd_time_clock_warning():
df3 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
index_col=0, parse_cols=[0, 2, 3])

# TODO add index to xls file)
tm.assert_frame_equal(df1, dfref, check_names=False)
@@ -165,8 +183,9 @@ def test_usecols_str(self, ext):
index_col=0, usecols='A:D')

with tm.assert_produces_warning(FutureWarning):
df4 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
index_col=0, parse_cols='A:D')
with ignore_xlrd_time_clock_warning():
df4 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
index_col=0, parse_cols='A:D')

# TODO add index to xls, read xls ignores index name ?
tm.assert_frame_equal(df2, df1, check_names=False)
@@ -618,8 +637,9 @@ def test_sheet_name_and_sheetname(self, ext):
df1 = self.get_exceldf(filename, ext,
sheet_name=sheet_name, index_col=0) # doc
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
df2 = self.get_exceldf(filename, ext, index_col=0,
sheetname=sheet_name) # backward compat
with ignore_xlrd_time_clock_warning():
df2 = self.get_exceldf(filename, ext, index_col=0,
sheetname=sheet_name) # backward compat

excel = self.get_excelfile(filename, ext)
df1_parse = excel.parse(sheet_name=sheet_name, index_col=0) # doc
setup.cfg (1 deletion)
@@ -200,7 +200,6 @@ skip=
pandas/tests/io/test_parquet.py,
pandas/tests/io/generate_legacy_storage_files.py,
pandas/tests/io/test_common.py,
pandas/tests/io/test_excel.py,
pandas/tests/io/test_feather.py,
pandas/tests/io/test_s3.py,
pandas/tests/io/test_html.py,