
pivot_table very slow on Categorical data; how about an observed keyword argument? #24923


Closed
lks100 opened this issue Jan 25, 2019 · 2 comments · Fixed by #24953
Labels: Categorical (Categorical Data Type), Performance (Memory or execution speed performance)
Milestone: 0.25.0

Comments


lks100 commented Jan 25, 2019

Code Sample, a copy-pastable example if possible

In [1]: import numpy as np

In [2]: import pandas as pd

In [3]: df = pd.DataFrame({'col1': list('abcde'), 'col2': list('fghij'), 'col3': [1, 2, 3, 4, 5]})

In [4]: df.pivot_table(index='col1', values='col3', columns='col2', aggfunc=np.sum, fill_value=0)
Out[4]:
col2  f  g  h  i  j
col1
a     1  0  0  0  0
b     0  2  0  0  0
c     0  0  3  0  0
d     0  0  0  4  0
e     0  0  0  0  5

In [5]: %timeit df.pivot_table(index='col1', values='col3', columns='col2', aggfunc=np.sum, fill_value=0)
5.56 ms ± 89.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [6]: df.col1 = df.col1.astype('category')

In [7]: df.col2 = df.col2.astype('category')

In [8]: %timeit df.pivot_table(index='col1', values='col3', columns='col2', aggfunc=np.sum, fill_value=0)
94.7 ms ± 1.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Problem description

groupby has an observed keyword argument (which was added to speed up grouping with categorical data by avoiding cartesian cross-products).

Unfortunately, pivot_table does not expose such an option; since it ultimately calls groupby with observed=False in pivot.py, pivot tables can be extremely slow for categorical data. In the simple example above, the categorical version runs almost 20 times slower (and it would be much worse with larger category sets).

I believe that simply adding an observed keyword argument (defaulting to False) to pivot_table would keep the code backwards compatible while allowing categorical data to be processed efficiently.
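The difference the observed keyword makes can be seen directly with groupby; a minimal sketch of the cartesian-product behaviour, using the same toy frame as the example above:

```python
import pandas as pd

# Two categorical columns where only the "diagonal" combinations occur
df = pd.DataFrame({'col1': list('abcde'), 'col2': list('fghij'),
                   'col3': [1, 2, 3, 4, 5]})
df['col1'] = df['col1'].astype('category')
df['col2'] = df['col2'].astype('category')

# observed=False materializes the full 5 x 5 cartesian product of categories
full = df.groupby(['col1', 'col2'], observed=False)['col3'].sum()

# observed=True keeps only the 5 combinations that actually appear in the data
seen = df.groupby(['col1', 'col2'], observed=True)['col3'].sum()

print(len(full), len(seen))  # 25 vs 5
```

With more category levels or larger category sets, the observed=False result grows multiplicatively, which is where the slowdown comes from.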

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.7.2.final.0
python-bits: 64
OS: Darwin
OS-release: 18.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: en_US.UTF-8

pandas: 0.23.4
pytest: None
pip: 18.1
setuptools: 40.6.2
Cython: None
numpy: 1.15.4
scipy: None
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: None
patsy: None
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: 1.2.0
xlwt: None
xlsxwriter: 1.1.2
lxml: None
bs4: 4.6.3
html5lib: None
sqlalchemy: 1.2.15
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: 0.7.0

@jreback jreback added Performance Memory or execution speed performance Categorical Categorical Data Type labels Jan 30, 2019
@jreback jreback added this to the 0.25.0 milestone Jan 30, 2019
@randerse10

I was pivoting with a five-level-deep index on a tiny (<600 row) DataFrame, and it was taking an eternity to execute. Playing around with different data types (some were categorical) did not significantly change the execution time, but passing observed=True reduced it from 46 seconds to 16.9 ms according to %%timeit in a Jupyter notebook. I can't yet say how or why, but it seemed significant enough to report here.
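pivot_table gained an observed keyword via the fix linked above; a minimal sketch of using it, assuming a pandas version (0.25.0 or later) that includes the parameter:

```python
import pandas as pd

df = pd.DataFrame({'col1': list('abcde'), 'col2': list('fghij'),
                   'col3': [1, 2, 3, 4, 5]})
df['col1'] = df['col1'].astype('category')
df['col2'] = df['col2'].astype('category')

# observed=True makes the underlying groupby aggregate only the
# category combinations that are present, instead of the cartesian product
result = df.pivot_table(index='col1', values='col3', columns='col2',
                        aggfunc='sum', fill_value=0, observed=True)

print(result.to_numpy().sum())  # 15: same totals, far fewer groups computed
```

In this toy frame every category is observed, so the output is the same 5 x 5 table as before; the payoff comes when the category sets are large relative to the combinations that actually occur.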


michaelzhang608 commented Aug 27, 2021

@randerse10
Seems like it's because pandas is computing the cartesian product of all your index levels (I had the same problem): https://stackoverflow.com/a/50953338
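That cartesian-product blowup is easy to reproduce directly with groupby; a sketch with three hypothetical categorical key columns (the names k1/k2/k3 and the sizes are made up for illustration):

```python
import pandas as pd

# Three categorical key columns with 10 categories each, but only 10 rows,
# so only 10 of the possible key combinations are actually observed
n = 10
df = pd.DataFrame({
    'k1': pd.Categorical([f'a{i}' for i in range(n)]),
    'k2': pd.Categorical([f'b{i}' for i in range(n)]),
    'k3': pd.Categorical([f'c{i}' for i in range(n)]),
    'v': range(n),
})

# observed=False expands the result to the 10 * 10 * 10 = 1000-row product
dense = df.groupby(['k1', 'k2', 'k3'], observed=False)['v'].sum()

# observed=True stays at the 10 combinations present in the frame
sparse = df.groupby(['k1', 'k2', 'k3'], observed=True)['v'].sum()

print(len(dense), len(sparse))  # 1000 vs 10
```

A five-level categorical index, as in the comment above, multiplies five category sizes together, which is how a <600 row frame can take tens of seconds.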
