CI: fix failing script/tests #40457
Conversation
jbrockmendel commented on Mar 16, 2021
- closes #xxxx
- tests added / passed
- Ensure all linting tests pass, see here for how to run them
- whatsnew entry
Hi @jbrockmendel,
I'm a bit confused here: don't these tests already pass? I just tried pulling master and they seem fine:
$ cd scripts/
$ pytest
======================================== test session starts =========================================
platform linux -- Python 3.8.6, pytest-6.2.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/marco/pandas-marco, configfile: setup.cfg
plugins: hypothesis-6.8.1, cov-2.11.1, instafail-0.4.1, monkeytype-1.1.0, forked-1.2.0, xdist-2.2.1, asyncio-0.14.0
collected 72 items
tests/test_inconsistent_namespace_check.py ........ [ 11%]
tests/test_no_bool_in_generic.py .... [ 16%]
tests/test_validate_docstrings.py ..................................... [ 68%]
tests/test_validate_unwanted_patterns.py ....................... [100%]
========================================= 72 passed in 1.38s =========================================
Also, tokenize-rt is a dependency of pyupgrade, so I didn't think it would be necessary to include it in the environment file.
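As a minimal sketch (not from the PR itself): one way to confirm that tokenize-rt is declared as a dependency of pyupgrade in a given environment is to inspect the installed distribution's metadata with the standard library. The helper name below is hypothetical.

```python
from importlib.metadata import requires, PackageNotFoundError


def declared_requirements(dist_name):
    """Return the declared requirements of an installed distribution,
    or None if it is not installed in the current environment."""
    try:
        return requires(dist_name) or []
    except PackageNotFoundError:
        return None


# If pyupgrade is installed, its requirement list should mention
# tokenize-rt, which is why it gets pulled in transitively.
print(declared_requirements("pyupgrade"))
```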
EDIT
Running from the root directory also works:
$ pytest scripts/
======================================== test session starts =========================================
platform linux -- Python 3.8.6, pytest-6.2.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/marco/pandas-marco, configfile: setup.cfg
plugins: hypothesis-6.8.1, cov-2.11.1, instafail-0.4.1, monkeytype-1.1.0, forked-1.2.0, xdist-2.2.1, asyncio-0.14.0
collected 72 items
scripts/tests/test_inconsistent_namespace_check.py ........ [ 11%]
scripts/tests/test_no_bool_in_generic.py .... [ 16%]
scripts/tests/test_validate_docstrings.py ..................................... [ 68%]
scripts/tests/test_validate_unwanted_patterns.py ....................... [100%]
========================================= 72 passed in 1.37s =========================================
As for how to run them, I think the `import validate_docstrings` in test_validate_docstrings.py (which predates me) suggests they should be run from within scripts.
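To illustrate the point about the bare import (a simplified sketch, using a throwaway directory rather than the real pandas checkout): pytest's default import mode prepends each test file's directory to sys.path, so a module sitting next to the tests, like scripts/validate_docstrings.py, becomes importable by its bare name. That path insertion is roughly equivalent to:

```python
import sys
import tempfile
from pathlib import Path

# Stand-in for the pandas "scripts" directory.
scripts = Path(tempfile.mkdtemp()) / "scripts"
scripts.mkdir()
(scripts / "validate_docstrings.py").write_text("ANSWER = 42\n")

# pytest's sys.path handling is roughly equivalent to this insert,
# which is what lets `import validate_docstrings` resolve.
sys.path.insert(0, str(scripts))

import validate_docstrings

print(validate_docstrings.ANSWER)  # 42
```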
Also, this is already checked here:
pandas/.github/workflows/ci.yml, lines 67 to 69 in b172a60:
- name: Testing docstring validation script
  run: pytest --capture=no --strict-markers scripts
  if: always()
Makes sense, will revert.
What flake8 version are you using?
Cool, merge when ready @MarcoGorelli @jbrockmendel
Ah yeah, I had 3.8.4, hadn't done …
Thanks for fixing this + explaining! Just to clarify, it's not #40439 which broke the check, but the fact that …
@meeseeksdev backport 1.2.x |
Co-authored-by: jbrockmendel <[email protected]>
* CI: fix failing script/tests
* get tokenize_rt from pip
* revert tokenize_rt dep