BUG: pd.test() fails on Python v3.9.6 #46498
You are welcome to have a look and report specific bugs. Please read the developer documentation.
I'm disappointed that I have to read developer documentation in order for a simple install to pass the tests described in its own installation page. Can you perhaps be more specific about the apparent issue?
You are welcome to report a bug. If your platform / install doesn't work, then you can report a specific issue.
I appreciate your efforts and attention. I've been doing test-driven development for as long as the concept has existed, so I understand the challenges. If those 150k tests passed on the commit for v1.4.1, then there must be a difference between the configuration on which they passed and mine.

I've shown the result of my attempt to run what I think is the same test suite that is described in the documentation, yet the header of my test run differs from the one presented in the documentation. Why is my suite collecting more than TEN TIMES as many items as the documentation (156958 vs 12145)? Why do I see "37 skipped" items as opposed to "3 skipped" in the documentation? Why does the header in mine mention the …?

I am under the impression that this exchange IS how I report a specific issue. I'm attempting to offer as much information as I can in hopes that your volunteer team, who are far more familiar with this package than I am, might recognize some obvious explanation(s). It seems to me that the difference between the test results I see on my system and the test results offered in the documentation is specific enough to be worth investigating. All the testing in the world doesn't help if well-documented reports of test failures are rejected as not specific enough.

I've provided the information requested below. I'm happy to provide more information as needed.
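For reference, one way to compare collection counts directly is to ask pytest to collect the suite without running it — a minimal sketch, assuming `pytest` and the installed `pandas` are importable:

```python
import pytest

# Collect the pandas test suite without executing it. The final summary
# line reports how many items were collected, which can be compared
# against the count shown in the documentation.
pytest.main(["--collect-only", "-q", "--pyargs", "pandas"])
```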
That piece of documentation must just be out of date; I'll open an issue about it. It would be great if the test suite could run that quickly :)
It would be helpful to know which specific tests failed (and whether those individual tests fail when run in isolation with …).
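For example, something along these lines — a sketch in which the module path is illustrative, not one of the actual failures:

```python
import pytest

# Run a single test module from the installed pandas package in
# isolation. Substitute the module (or node id) of a failing test
# reported by pd.test(); test_clipboard here is only an example.
pytest.main(["--pyargs", "pandas.tests.io.test_clipboard", "-v"])
```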
I've attached a session log from earlier today, showing the complete SSH client session. I hope this helps! This is a shell connected to an AWS EC2 t3.xlarge instance running Rocky Linux v8.5. There are ample CPU, storage, and memory resources available.
Thanks! Hmm, except for the clipboard tests, the errors all look like …

with the failing tests as …
I don't see any changes in the documentation. Can you clarify where I can find the correct information about how to run the test suite? |
Pandas version checks
- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pandas.
- I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
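The original code block did not survive here; based on the issue title and the installation documentation, the report presumably exercised the built-in test entry point, something like:

```python
import pandas as pd

# Run the test suite bundled with the installed pandas package,
# as described on the pandas installation page.
pd.test()
```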
Issue Description
Note the several failures, warnings, and errors, as well as the extended run time (997.19s).
Expected Behavior
I'm a reasonably proficient Python developer attempting to evaluate whether `pandas` can solve a specific data imputation issue that has come up in a project. I'm therefore new to `pandas`, while not new to Python or the rest of my toolchain.

Surely a stable distribution of any package should pass its own documented tests! I see so many failures that I'm left doubting whether `pandas` is running at all. Since I am new to `pandas`, I have no way of knowing which, if any, of these failures, errors, and warnings are significant and which are not.

I have installed `pandas` using `pip3` on a Rocky Linux v8.5 (CentOS 8) system running Python 3.9, following the instructions in the `pandas` documentation.

I expect any standard "stable" version to pass most of its published tests. In the `pandas` documentation, the expected results are:

Significantly, I expect to have 0 failures and 0 errors. I'm not sure about "xfailed", "xpassed", and "warnings".

I have tried using `pip3` to install various combinations of the 30+ dependencies mentioned in the documentation. Seven of those cause `pip3` to fail, and so I skipped them. By the time I installed all of the installable dependencies, the test results had worsened:
Installed Versions
pandas : 1.4.1
numpy : 1.22.3
pytz : 2021.3
dateutil : 2.8.2
pip : 20.2.4
setuptools : 50.3.2
Cython : None
pytest : 7.1.1
hypothesis : 6.39.4
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : 1.3.4
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : 2.8.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 7.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.8.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None