I was pivoting with a five-level-deep index on a tiny (<600-row) data frame, and it was taking an eternity to execute. I tried different data types (some were categorical), but that did not significantly change the execution time. Using observed=True reduced the time from 46 seconds to 16.9 ms according to %%timeit in a Jupyter notebook. I can't yet say how or why, but it seemed significant enough to report here.
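The comment above does not include code, so here is a minimal sketch of that kind of scenario: a small frame with five categorical key columns, pivoted on all five levels. The column names, category sizes, and data are invented, and it assumes a pandas version in which pivot_table accepts the observed keyword (newer than the 0.23.4 reported below):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
n = 600  # deliberately tiny, as in the report above

# Five categorical key columns with 8 levels each: the default
# observed=False expands the result to all 8**5 = 32768 combinations,
# while observed=True keeps only the combinations present in the data.
keys = {k: pd.Categorical(rng.choice([k + str(i) for i in range(8)], n))
        for k in list("abcde")}
df = pd.DataFrame(dict(keys, v=rng.randn(n)))

slow = df.pivot_table(values="v", index=list("abcde"), aggfunc="mean")
fast = df.pivot_table(values="v", index=list("abcde"), aggfunc="mean",
                      observed=True)
```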
Code Sample, a copy-pastable example if possible
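The original copy-pastable sample did not survive here; the following is a stand-in illustrating the comparison it described, timing the same pivot on object keys versus categorical keys. Column names, sizes, and the number of categories are assumptions, and the size of the slowdown grows with the number of categories and index levels:

```python
import time

import numpy as np
import pandas as pd

rng = np.random.RandomState(42)
n = 100000
df = pd.DataFrame({
    "a": rng.choice(["a%02d" % i for i in range(20)], n),
    "b": rng.choice(["b%02d" % i for i in range(20)], n),
    "v": rng.randn(n),
})
# Identical data, but with the grouping keys converted to categoricals.
df_cat = df.assign(a=df["a"].astype("category"),
                   b=df["b"].astype("category"))

for label, frame in [("object keys", df), ("categorical keys", df_cat)]:
    start = time.perf_counter()
    frame.pivot_table(values="v", index="a", columns="b", aggfunc="mean")
    print(label, time.perf_counter() - start)
```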
Problem description
groupby has an observed keyword argument (which was added to speed up grouping on categorical data by avoiding Cartesian cross-products).
Unfortunately, pivot_table has no such option, and since it ultimately calls groupby with observed=False in pivot.py, pivot tables can be extremely slow for categorical data. Note the simple example above, which runs almost 20 times slower with categorical data (and it would be much worse if the categories were larger).
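Because groupby already exposes the keyword, the cross-product can be avoided today by bypassing pivot_table entirely. A sketch of that workaround, reusing the df_cat frame from the sample above:

```python
# Same result as pivot_table(values="v", index="a", columns="b",
# aggfunc="mean"), but only observed category combinations are grouped.
pivoted = (
    df_cat.groupby(["a", "b"], observed=True)["v"]
          .mean()
          .unstack("b")
)
```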
I believe that simply adding an observed=False keyword argument to pivot_table would keep the code backwards compatible while allowing categorical data to be processed efficiently.
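The proposal amounts to threading one keyword through to the internal groupby call. A hypothetical helper illustrating the intended behavior (pivot_table_observed is not a real pandas function; the actual fix would live inside pivot.py):

```python
def pivot_table_observed(df, values, index, columns, aggfunc="mean",
                         observed=False):
    # What pivot_table(..., observed=...) would do: defaulting to False
    # keeps existing behavior, while observed=True skips unobserved
    # categorical combinations, exactly as groupby already does.
    grouped = df.groupby(index + [columns], observed=observed)
    return grouped[values].agg(aggfunc).unstack(columns)

# e.g. pivot_table_observed(df_cat, "v", ["a"], "b", observed=True)
```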
Output of pd.show_versions()
INSTALLED VERSIONS
commit: None
python: 3.7.2.final.0
python-bits: 64
OS: Darwin
OS-release: 18.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: en_US.UTF-8
pandas: 0.23.4
pytest: None
pip: 18.1
setuptools: 40.6.2
Cython: None
numpy: 1.15.4
scipy: None
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: None
patsy: None
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: 1.2.0
xlwt: None
xlsxwriter: 1.1.2
lxml: None
bs4: 4.6.3
html5lib: None
sqlalchemy: 1.2.15
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: 0.7.0