PERF: pd.BooleanDtype in row operations is still very slow #56903
Comments
Thanks for the report. I cannot reproduce on 2.1.4; however, I get similar times on 2.0.3. Can you try running this on the 2.2.0 release candidate?
Thank you for your response. I ran it on the Linux server and on my Mac with pandas==2.2.0rc0, and it turns out it only runs slower on Linux, which is kind of strange. I guess it's a problem with the OS rather than the pandas library if no one else has this performance issue.
On Linux where you have 2.1 or later installed, can you try running
Yeah, thank you, it does work, making it 4 times faster, but it is still slower than 263 ms. I found the
Thank you, I understand it now. By the way, has anyone else encountered the
This looks like a "lack of 2D EAs" thing
Sorry, I didn't get the point. Did you mean the performance degradation comes from the lack of 2D extension array (EA) support?
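Roughly, yes: that can be seen in how the two frames are stored internally. The sketch below is illustrative only and relies on the internal `_mgr` attribute, which is not public API and may change between pandas versions.

```python
import numpy as np
import pandas as pd

df_numpy = pd.DataFrame(np.ones((4, 3), dtype=bool))
df_masked = df_numpy.astype("boolean")  # pd.BooleanDtype

# numpy-backed bool columns are consolidated into a single 2D block...
print(len(df_numpy._mgr.blocks))   # 1
# ...while each BooleanDtype column is stored as its own 1D extension
# array, so an axis=1 reduction cannot run as one 2D kernel and must
# fall back to combining the columns one by one.
print(len(df_masked._mgr.blocks))  # 3
```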
I had a similar issue last year: #54389. I just retested with 2.2.2, and it got better, but it's still more than 6x slower than a naive fallback implementation (down from 2000x), which makes me wonder what kind of fallback pandas is using that is so slow...

```python
import pandas as pd
import numpy as np
from functools import reduce
import operator

data = np.random.randn(10_000, 10) > 0.5
df_numpy = pd.DataFrame(data, dtype=bool)
df_arrow = df_numpy.astype("bool[pyarrow]")

%timeit df_numpy.all(axis="index")    # 216 µs ± 1.58 µs
%timeit df_numpy.all(axis="columns")  # 274 µs ± 2.86 µs
%timeit df_arrow.all(axis="index")    # 442 µs ± 3.07 µs
%timeit df_arrow.all(axis="columns")  # 2.29 ms ± 6.59 µs
%timeit reduce(operator.__and__, (s for _, s in df_arrow.items()))  # 362 µs ± 1.16 µs
```

`pd.show_versions()`:

INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.11.7.final.0
python-bits : 64
OS : Linux
OS-release : 6.5.0-27-generic
Version : #28~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 15 10:51:06 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 69.5.1
pip : 24.0
Cython : None
pytest : 8.1.1
hypothesis : 6.100.1
sphinx : 7.3.5
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.23.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.3.1
gcsfs : None
matplotlib : 3.8.4
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 15.0.2
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.13.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this issue exists on the latest version of pandas.
I have confirmed this issue exists on the main branch of pandas.
Reproducible Example
The performance issue is the same as #52016. I assumed it may be caused by a missing dependency, but I installed accelerating dependencies like numba and pyarrow and the time is still longer than 10 seconds. @rhshadrach
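For readers without access to #52016, a minimal sketch of the class of operation it covers (this is an assumption, not the reporter's exact script): a row-wise reduction over a frame with `pd.BooleanDtype` columns.

```python
import numpy as np
import pandas as pd

# "boolean" is the masked pd.BooleanDtype, stored as 1D arrays per column.
df = pd.DataFrame(np.random.randn(100_000, 10) > 0, dtype="boolean")

# Row operations like this hit the slow fallback path for extension dtypes.
out = df.any(axis=1)
print(len(out))
```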
Installed Versions
INSTALLED VERSIONS
commit : a671b5a
python : 3.9.18.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-88-generic
Version : #98~20.04.1-Ubuntu SMP Mon Oct 9 16:43:45 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.1.4
numpy : 1.26.3
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.1
Cython : 3.0.8
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.18.1
pandas_datareader : None
bs4 : None
bottleneck : None
dataframe-api-compat: None
fastparquet : None
fsspec : 2023.12.2
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 14.0.2
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.4
qtpy : None
pyqt5 : None
Prior Performance
No response