PERF: pd.BooleanDtype in row operations is still very slow #56903

Open · 2 of 3 tasks
Alexia-I opened this issue Jan 16, 2024 · 10 comments
Labels
NA - MaskedArrays (related to pd.NA and nullable extension arrays) · Needs Info (clarification about behavior needed to assess issue) · Performance (memory or execution speed performance) · Reduction Operations (sum, mean, min, max, etc.)

Comments

@Alexia-I

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this issue exists on the latest version of pandas.

  • I have confirmed this issue exists on the main branch of pandas.

Reproducible Example

The performance issue is the same as #52016. I assumed it might be caused by a missing dependency, but I installed accelerating dependencies such as numba and pyarrow and the time is still longer than 10 seconds. @rhshadrach

import pandas as pd
import numpy as np

shape = 250_000, 100
mask = pd.DataFrame(np.random.randint(0, 1, size=shape))


np_mask = mask.astype(bool)
pd_mask = mask.astype(pd.BooleanDtype())

assert all(isinstance(dtype, pd.BooleanDtype) for dtype in pd_mask.dtypes)
assert all(isinstance(dtype, np.dtype) for dtype in np_mask.dtypes)
# column operations are not that much slower
%timeit pd_mask.any(axis=0) 
%timeit np_mask.any(axis=0)
# using pandas.BooleanDtype back end for ROW operations is MUCH SLOWER
%timeit pd_mask.any(axis=1)
%timeit np_mask.any(axis=1)

13.1 ms ± 522 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
5.7 ms ± 21.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
10.2 s ± 1.41 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
7.74 ms ± 40.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Installed Versions

INSTALLED VERSIONS

commit : a671b5a
python : 3.9.18.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-88-generic
Version : #98~20.04.1-Ubuntu SMP Mon Oct 9 16:43:45 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 2.1.4
numpy : 1.26.3
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.1
Cython : 3.0.8
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.18.1
pandas_datareader : None
bs4 : None
bottleneck : None
dataframe-api-compat: None
fastparquet : None
fsspec : 2023.12.2
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 14.0.2
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.4
qtpy : None
pyqt5 : None

Prior Performance

No response

Alexia-I added the Needs Triage and Performance labels on Jan 16, 2024
@rhshadrach (Member)

Thanks for the report. I cannot reproduce on 2.1.4:

15.8 ms ± 16.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
4.28 ms ± 9.97 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
517 ms ± 1.12 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.88 ms ± 3.21 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

however I get similar times on 2.0.3:

14.6 ms ± 40.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
4.33 ms ± 5.03 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
11.6 s ± 23.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.92 ms ± 7.44 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Can you try running this on the 2.2.0 release candidate? You can install it with pip install pandas==2.2.0rc0.

rhshadrach added the NA - MaskedArrays, Reduction Operations, and Needs Info labels and removed the Needs Triage label on Jan 16, 2024
@Alexia-I (Author) commented Jan 16, 2024

Thank you for your response.

I ran it on the Linux server and on my Mac with pandas==2.2.0rc0, and it turns out it only runs slowly on Linux, which is kind of strange. I guess it's a problem with the OS rather than the pandas library if no one else has this performance issue.

# on linux
OS                    : Linux
OS-release            : 5.15.0-88-generic
Version               : #98~20.04.1-Ubuntu SMP Mon Oct 9 16:43:45 UTC 2023
machine               : x86_64
processor             : x86_64

11.9 ms ± 139 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
5.66 ms ± 32.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
12.9 s ± 573 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
7.73 ms ± 17.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# on mac
OS                    : Darwin
OS-release            : 23.0.0
Version               : Darwin Kernel Version 23.0.0: Thu Aug 17 21:23:05 PDT 2023; root:xnu-10002.1.11~3/RELEASE_ARM64_T6000
machine               : arm64
processor             : arm

9.85 ms ± 9.67 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.71 ms ± 26.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
263 ms ± 4.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
2.56 ms ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

@rhshadrach (Member)

On Linux, where you have 2.1 or later installed, can you try running pd.options.future.infer_string = True? I have a suspicion you are (accidentally) running 2.0.x, and this is an option that was only added in 2.1.
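
A quick way to double-check which pandas build is actually being imported (a small sanity-check sketch, run in the same environment as the timings):

import pandas as pd

# Print the version and install location of the pandas that gets imported;
# a stale 2.0.x install elsewhere on the path would explain the discrepancy.
print(pd.__version__)
print(pd.__file__)
# future.infer_string only exists on pandas 2.1+; on older versions this
# line raises an error, which also answers the question.
print(pd.options.future.infer_string)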

@Alexia-I (Author)

Yeah, thank you, it does work... it makes this about 4 times faster, but it is still slower than the 263 ms on the Mac. I found pd.options.future.infer_string = True in the pandas whatsnew doc. Is this the default setting? Could my performance issue be due to misuse on my part?

11.7 ms ± 61.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
6.2 ms ± 41.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.74 s ± 831 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
8.21 ms ± 518 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

@rhshadrach (Member) commented Jan 16, 2024

pd.options.future.infer_string = True should have no impact on the performance here; it was just an option added in 2.1 to tell whether you were running 2.0.x or 2.1.x (or later). I believe you're just seeing a large (random) variance in timings - this could be the case if other processes on your machine are using resources.

@Alexia-I (Author)

Thank you, I understand it now. By the way, could anyone else run into this issue, where the runtime is larger than expected when pd.options.future.infer_string is not set to True in advance?

@jbrockmendel (Member)

This looks like a “lack of 2D EAs” thing

@Alexia-I (Author)

Sorry, I did not get the point. Did you mean the performance degradation comes from the lack of 2D ExtensionArray (2D EA) support?

This looks like a “lack of 2D EAs” thing
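
For context, the nullable (masked) dtypes are stored as one 1D ExtensionArray per column, so a row-wise reduction cannot be handed off to a single 2D NumPy kernel the way it is for plain NumPy-backed bool columns. A rough workaround sketch, assuming any pd.NA values can safely be treated as False, is to drop to a NumPy array for the row-wise step:

import numpy as np
import pandas as pd

shape = 250_000, 100
pd_mask = pd.DataFrame(np.zeros(shape, dtype=bool)).astype(pd.BooleanDtype())

# Fill pd.NA explicitly (here with False) so the frame can be converted to a
# plain 2D bool ndarray, then reduce row-wise with NumPy.
row_any = pd.Series(pd_mask.fillna(False).to_numpy(dtype=bool).any(axis=1),
                    index=pd_mask.index)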

jorisvandenbossche changed the title from "PERF: PERF: pd.BooleanDtype in row operations is still very slow" to "PERF: pd.BooleanDtype in row operations is still very slow" on Jan 19, 2024
@randolf-scholz (Contributor) commented Apr 18, 2024

I had a similar issue last year: #54389. I just retested with 2.2.2, and it got better, but it's still more than 6x slower than a naive fallback implementation (down from 2000x), which makes me wonder what kind of fallback pandas is using that is so slow...

import pandas as pd
import numpy as np
from functools import reduce
import operator

data = np.random.randn(10_000, 10) > 0.5
df_numpy = pd.DataFrame(data, dtype=bool)
df_arrow = df_numpy.astype("bool[pyarrow]")
%timeit df_numpy.all(axis="index")     # 216 µs ± 1.58 µs
%timeit df_numpy.all(axis="columns")   # 274 µs ± 2.86 µs
%timeit df_arrow.all(axis="index")     # 442 µs ± 3.07 µs
%timeit df_arrow.all(axis="columns")   # 2.29 ms ± 6.59 µs
%timeit reduce(operator.__and__, (s for _, s in df_arrow.items()))  # 362 µs ± 1.16 µs
`pd.show_versions()`
INSTALLED VERSIONS
------------------
commit                : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python                : 3.11.7.final.0
python-bits           : 64
OS                    : Linux
OS-release            : 6.5.0-27-generic
Version               : #28~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 15 10:51:06 UTC 2
machine               : x86_64
processor             : x86_64
byteorder             : little
LC_ALL                : None
LANG                  : en_US.UTF-8
LOCALE                : en_US.UTF-8

pandas                : 2.2.2
numpy                 : 1.26.4
pytz                  : 2024.1
dateutil              : 2.9.0.post0
setuptools            : 69.5.1
pip                   : 24.0
Cython                : None
pytest                : 8.1.1
hypothesis            : 6.100.1
sphinx                : 7.3.5
blosc                 : None
feather               : None
xlsxwriter            : None
lxml.etree            : None
html5lib              : None
pymysql               : None
psycopg2              : None
jinja2                : 3.1.3
IPython               : 8.23.0
pandas_datareader     : None
adbc-driver-postgresql: None
adbc-driver-sqlite    : None
bs4                   : 4.12.3
bottleneck            : None
dataframe-api-compat  : None
fastparquet           : None
fsspec                : 2024.3.1
gcsfs                 : None
matplotlib            : 3.8.4
numba                 : None
numexpr               : None
odfpy                 : None
openpyxl              : 3.1.2
pandas_gbq            : None
pyarrow               : 15.0.2
pyreadstat            : None
python-calamine       : None
pyxlsb                : None
s3fs                  : None
scipy                 : 1.13.0
sqlalchemy            : None
tables                : None
tabulate              : None
xarray                : None
xlrd                  : None
zstandard             : None
tzdata                : 2024.1
qtpy                  : None
pyqt5                 : None

@randolf-scholz (Contributor)

When aggregating over columns, it appears an expensive factorize method is called (left: axis=columns, right: axis=index):
[Profiler screenshot, 2024-04-18: left shows the axis="columns" call tree, right shows axis="index"]
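
The same behaviour can be checked without a GUI profiler; a rough sketch (the exact call stack may differ between pandas versions):

import cProfile
import numpy as np
import pandas as pd

data = np.random.randn(10_000, 10) > 0.5
df_arrow = pd.DataFrame(data, dtype=bool).astype("bool[pyarrow]")

# Sort by cumulative time to see which internal calls (e.g. factorize or
# concatenation machinery) dominate the row-wise (axis="columns") reduction.
cProfile.run('df_arrow.all(axis="columns")', sort="cumulative")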
