BUG: GroupBy.mean() is extremely slow with sparse arrays #36123
Comments
Can you create the DataFrame in memory? (see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)
import pandas as pd
import numpy as np

np.random.seed(0)

# 1000 x 1000 frame of 0/1 ints (~20% ones), plus a grouping column
df = pd.DataFrame(np.random.rand(1000, 1000) > 0.8, dtype=int)
df['g'] = ['A' if x else 'B' for x in np.random.rand(1000) > 0.5]
%timeit -n 3 -r 5 df.groupby('g').mean()

# Same data with the value columns converted to the Sparse[int] dtype
sdf = df.set_index('g').astype('Sparse[int]').reset_index()
%timeit -n 3 -r 5 sdf.groupby('g').mean()
Thanks (can you add that to the original post?). The next thing we need to figure out: is this specific to Sparse, or is it a general issue with all ExtensionArrays? ExtensionArrays can't be sent to Cython, so we have to take the slower Python path. Right now my hunch is that it applies to all EAs. Can you do some investigation?
I am not familiar with the pandas codebase or with Cython at all, so I probably cannot. But any help is appreciated.
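One way to carry out that investigation (a sketch, not from the thread; timings are machine-dependent) is to run the same benchmark against another ExtensionArray dtype such as nullable Int64. If it is similarly slow, the problem lies in the generic ExtensionArray path rather than in Sparse itself:

import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.rand(1000, 1000) > 0.8, dtype=int)
df['g'] = ['A' if x else 'B' for x in np.random.rand(1000) > 0.5]

# Same values backed by the nullable Int64 extension dtype instead of Sparse
edf = df.set_index('g').astype('Int64').reset_index()

%timeit -n 3 -r 5 df.groupby('g').mean()   # numpy-backed columns (Cython path)
%timeit -n 3 -r 5 edf.groupby('g').mean()  # ExtensionArray columns (Python path)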
There's a slowdown with
Yes! It's unbearably slow. Please help! In my case, a Sparse
Probably related to the BlockManager structure. In the case of numpy dtypes we get one block; for ExtensionArrays we get 1,000 blocks, which causes the slowdown. Not sure if there is an efficient way around this.
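That block explosion can be seen directly on the BlockManager. A minimal sketch, assuming a pandas version where the private DataFrame._mgr attribute exists (it is internal API and may change between versions):

import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.rand(1000, 1000) > 0.8, dtype=int)
sdf = df.astype('Sparse[int]')

# Consolidated numpy dtypes share a single 2D block; every
# ExtensionArray column lives in its own 1D block.
print(df._mgr.nblocks)   # 1
print(sdf._mgr.nblocks)  # 1000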
@phofl is correct; this is mostly about the lack of 2D EAs. Part of it could also be alleviated by implementing SparseArray._groupby_op, but that is a pretty big task.
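Until something like that is implemented, one possible workaround (my suggestion, not discussed in the thread; it assumes the data fits in memory once densified) is to convert to dense just for the aggregation:

import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.rand(1000, 1000) > 0.8, dtype=int)
df['g'] = ['A' if x else 'B' for x in np.random.rand(1000) > 0.5]
sdf = df.set_index('g').astype('Sparse[int]').reset_index()

# DataFrame.sparse.to_dense() requires every column to be sparse,
# hence moving 'g' into the index first; groupby then resolves 'g'
# as the index level name.
result = sdf.set_index('g').sparse.to_dense().groupby('g').mean()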
[x] I have checked that this issue has not already been reported.
[x] I have confirmed this bug exists on the latest version of pandas.
[ ] (optional) I have confirmed this bug exists on the master branch of pandas.
Code Sample, a copy-pastable example (the snippet shown in the comment above)
Output:
Dense:
11 ms ± 430 µs per loop (mean ± std. dev. of 5 runs, 3 loops each)
Sparse:
3.91 s ± 33.5 ms per loop (mean ± std. dev. of 5 runs, 3 loops each)
Or with a subset of a real-world dataset (the attached pbmc-sparse-df.csv.gz):
Output:
Dense:
9.2 ms ± 403 µs per loop (mean ± std. dev. of 5 runs, 3 loops each)
Sparse:
3.73 s ± 126 ms per loop (mean ± std. dev. of 5 runs, 3 loops each)
Problem description
GroupBy.mean() with SparseArray columns is extremely slow compared to dense ones: 3.91 s vs. 11 ms per loop, roughly 355 times slower.
Output of pd.show_versions()
INSTALLED VERSIONS
commit : f2ca0a2
python : 3.8.5.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.1
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.1
setuptools : 49.2.1.post20200802
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.17.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : 0.8.0
fastparquet : None
gcsfs : None
matplotlib : 3.3.0
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.0
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : None
tables : 3.6.1
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : 0.50.1
I attached the file here for reproducibility.
pbmc-sparse-df.csv.gz