BUG: GroupBy.mean() is extremely slow with sparse arrays #36123

Open
1 task
gokceneraslan opened this issue Sep 4, 2020 · 9 comments
Labels
Groupby · Performance (Memory or execution speed) · Sparse (Sparse Data Type)

Comments

@gokceneraslan

gokceneraslan commented Sep 4, 2020

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.

  • [ ] (optional) I have confirmed this bug exists on the master branch of pandas.


Code Sample, a copy-pastable example

import pandas as pd
import numpy as np

np.random.seed(0)
df = pd.DataFrame(np.random.rand(1000, 1000) > 0.8, dtype=int)
df['g'] = ['A' if x else 'B' for x in np.random.rand(1000) > 0.5]

%timeit -n 3 -r 5 df.groupby('g').mean()

sdf = df.set_index('g').astype('Sparse[int]').reset_index()
%timeit -n 3 -r 5 sdf.groupby('g').mean()

Output:

Dense:
11 ms ± 430 µs per loop (mean ± std. dev. of 5 runs, 3 loops each)

Sparse:
3.91 s ± 33.5 ms per loop (mean ± std. dev. of 5 runs, 3 loops each)
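For comparison, a possible workaround (my own sketch, not part of the report: it reuses the `df`/`sdf` setup from the snippet above and assumes scipy is available) is to keep GroupBy out of the hot path entirely and compute group means directly on a `scipy.sparse` matrix obtained via the `DataFrame.sparse.to_coo()` accessor, using an indicator-matrix product for the group sums:

```python
import numpy as np
import pandas as pd
import scipy.sparse as sp

np.random.seed(0)
df = pd.DataFrame(np.random.rand(1000, 1000) > 0.8, dtype=int)
df['g'] = ['A' if x else 'B' for x in np.random.rand(1000) > 0.5]
sdf = df.set_index('g').astype('Sparse[int]')

# Export the all-sparse frame as a scipy matrix; rows stay aligned with sdf.index.
X = sdf.sparse.to_coo().tocsr()
codes, groups = pd.factorize(sdf.index)

# One-hot group indicator of shape (n_groups, n_rows): G @ X gives group sums.
G = sp.csr_matrix((np.ones(len(codes)), (codes, np.arange(len(codes)))))
counts = np.bincount(codes)
means = (G @ X).toarray() / counts[:, None]

result = pd.DataFrame(means, index=groups, columns=sdf.columns)
```

This stays sparse end to end, at the cost of rebuilding the pandas result by hand.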

Or a subset of a real-world dataset:

import pandas as pd

df = pd.read_csv('https://github.com/pandas-dev/pandas/files/5176354/pbmc-sparse-df.csv.gz', 
                 index_col=0).astype(int).reset_index()
%timeit -n 3 -r 5 df.groupby('Cell type').mean()

sdf = df.set_index('Cell type').astype(pd.SparseDtype(int, fill_value=0)).reset_index()
%timeit -n 3 -r 5 sdf.groupby('Cell type').mean()

Output:

Dense:
9.2 ms ± 403 µs per loop (mean ± std. dev. of 5 runs, 3 loops each)

Sparse:
3.73 s ± 126 ms per loop (mean ± std. dev. of 5 runs, 3 loops each)

Problem description

GroupBy.mean() with SparseArray columns is extremely slow compared to dense arrays (here around 355 times slower).

Output of pd.show_versions()

INSTALLED VERSIONS

commit : f2ca0a2
python : 3.8.5.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.1.1
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.1
setuptools : 49.2.1.post20200802
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.17.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : 0.8.0
fastparquet : None
gcsfs : None
matplotlib : 3.3.0
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.0
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : None
tables : 3.6.1
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : 0.50.1

I attached the file here for reproducibility.

pbmc-sparse-df.csv.gz

@gokceneraslan gokceneraslan added the Bug and Needs Triage labels Sep 4, 2020
@TomAugspurger
Contributor

Can you create the DataFrame in memory? (see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports).

@TomAugspurger TomAugspurger added the Needs Info label and removed the Needs Triage label Sep 4, 2020
@gokceneraslan
Author

gokceneraslan commented Sep 4, 2020

import pandas as pd
import numpy as np

np.random.seed(0)
df = pd.DataFrame(np.random.rand(1000, 1000) > 0.8, dtype=int)
df['g'] = ['A' if x else 'B' for x in np.random.rand(1000) > 0.5]

%timeit -n 3 -r 5 df.groupby('g').mean()

sdf = df.set_index('g').astype('Sparse[int]').reset_index()
%timeit -n 3 -r 5 sdf.groupby('g').mean()

Output:

11 ms ± 430 µs per loop (mean ± std. dev. of 5 runs, 3 loops each)
3.91 s ± 33.5 ms per loop (mean ± std. dev. of 5 runs, 3 loops each)

@TomAugspurger
Contributor

Thanks (can you add that to the original post?)

Next thing we need to figure out: is this specific to Sparse, or is it a general issue with all extensionarrays? ExtensionArrays can't be sent to Cython, so we have to take the slower python path. Right now my hunch is that it applies to all EAs. Can you do some investigation?
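One way to probe that question (an exploratory sketch I'm adding here, not from the thread; frame size shrunk so it runs quickly) is to time GroupBy.mean() on the same data across a numpy dtype and two ExtensionArray dtypes:

```python
import time
import numpy as np
import pandas as pd

np.random.seed(0)
base = pd.DataFrame(np.random.rand(200, 200) > 0.8, dtype=int)
base['g'] = np.where(np.random.rand(200) > 0.5, 'A', 'B')

timings = {}
for dtype in ['int64', 'Int64', 'Sparse[int]']:
    df = base.set_index('g').astype(dtype).reset_index()
    start = time.perf_counter()
    df.groupby('g').mean()  # the operation under test
    timings[dtype] = time.perf_counter() - start

for dtype, secs in timings.items():
    print(f'{dtype:12s} {secs:.4f}s')
```

If the nullable `Int64` path is also much slower than `int64`, that would point at a general ExtensionArray issue rather than something Sparse-specific.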

@TomAugspurger TomAugspurger added the Performance and Groupby labels and removed the Needs Info label Sep 4, 2020
@gokceneraslan
Author

I am not familiar with the pandas codebase or Cython at all, so I probably cannot. But any help is appreciated.

@gokceneraslan
Author

gokceneraslan commented Sep 4, 2020

I can copy/paste snakeviz profiles, if it helps:

Dense:

%snakeviz df.groupby('g').mean()

[snakeviz profile screenshots]

Sparse:

%snakeviz sdf.groupby('g').mean()

[snakeviz profile screenshots]

@arw2019
Member

arw2019 commented Sep 4, 2020

There's a slowdown with Int64 as well:

%timeit -n 3 -r 5 df.groupby('g').mean()

sdf = df.set_index('g').astype('Int64').reset_index()
%timeit -n 3 -r 5 sdf.groupby('g').mean()
8.06 ms ± 966 µs per loop (mean ± std. dev. of 5 runs, 3 loops each)
333 ms ± 10.6 ms per loop (mean ± std. dev. of 5 runs, 3 loops each)

@v1gnesh

v1gnesh commented Jan 11, 2021

Yes! It's unbearably slow. Please help!
All memory-saving gains from using sparse are being wiped out by extremely long CPU times.

In my case, a sparse df (shape of (100317, 24) with all uint16) takes just 636.9 KB in memory.
Dense df (same shape) takes 4.6 MB in memory.

Sparse groupby took 527.5207s, whereas dense groupby finished in 0.2229s
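The memory side of that trade-off is easy to reproduce (a small sketch with made-up sizes and density, not the commenter's actual data):

```python
import numpy as np
import pandas as pd

np.random.seed(0)
# ~10% of entries non-zero, stored as uint16 (hypothetical data)
dense = pd.DataFrame(np.where(np.random.rand(2000, 24) > 0.9, 1, 0).astype('uint16'))
sparse = dense.astype(pd.SparseDtype('uint16', 0))

# Sparse columns store only the non-fill values plus their integer positions.
dense_bytes = dense.memory_usage(deep=True).sum()
sparse_bytes = sparse.memory_usage(deep=True).sum()
print(f'dense:  {dense_bytes / 1024:.1f} KB')
print(f'sparse: {sparse_bytes / 1024:.1f} KB')
```

The sparse frame is much smaller, which is exactly why the CPU-time regression in groupby is so painful.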

@mroeschke mroeschke added ExtensionArray Extending pandas with custom dtypes or arrays. and removed Bug labels Aug 13, 2021
@phofl
Member

phofl commented Dec 29, 2021

Probably related to the BlockManager structure. In the case of numpy dtypes we get one block; for ExtensionArrays we get 1,000 blocks (one per column), which causes the slowdown. Not sure if there is an efficient way around this.
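The block structure can be inspected directly (a sketch using the private `_mgr` attribute, which is internal API and may change between versions):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros((10, 5), dtype=int))

# A homogeneous numpy-dtype frame consolidates into a single 2D block...
print(df._mgr.nblocks)

# ...while an ExtensionArray dtype forces one 1D block per column.
print(df.astype('Sparse[int]')._mgr.nblocks)
```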

@jbrockmendel
Member

@phofl is correct; this is mostly about the lack of 2D EAs. Part of it could also be alleviated by implementing SparseArray._groupby_op, but that is a pretty big task.

@jbrockmendel jbrockmendel added the Sparse label and removed the ExtensionArray label Jul 27, 2023
7 participants