BUG: Performance regression in categorical indexer #34162

Closed
2 of 3 tasks
Zaharid opened this issue May 13, 2020 · 3 comments
Zaharid commented May 13, 2020

  • I have checked that this issue has not already been reported.

Apologies if it has been, there are a few issues that may be related, in particular
#33924
#28122

  • I have confirmed this bug exists on the latest version of pandas.

  • (optional) I have confirmed this bug exists on the master branch of pandas.



Code Sample, a copy-pastable example

import pandas as pd
import numpy as np

print(pd.__version__)

df = pd.DataFrame(np.random.rand(100, 5))
df['event'] = [*range(len(df) // 2)] * 2

bins = [0, .2, .4, .6, .8, 1]
b1 = pd.cut(df.loc[:, 0], bins=bins)
b2 = pd.cut(df.loc[:, 1], bins=bins)
b3 = pd.cut(df.loc[:, 2], bins=bins)

for _ in range(50):
    df.groupby([b1, b2, b3, 'event'])[4].sum()

Problem description

The above example runs some 20 times slower on pandas 1.0 than on 0.24. For 0.24, timing the script reports

$ time python xx.py                                                                                (/tmp/pandas) 
0.24.2
0.92user 0.03system 0:00.95elapsed 99%CPU (0avgtext+0avgdata 70544maxresident)k
0inputs+0outputs (0major+10302minor)pagefaults 0swaps

where the loop runtime is comparable to the import time. However, for pandas 1.0 I get

 $ time python xx.py                                                                                (/tmp/pandas) 
1.0.3
13.18user 0.03system 0:13.22elapsed 99%CPU (0avgtext+0avgdata 74384maxresident)k
0inputs+0outputs (0major+13029minor)pagefaults 0swaps

I ran py-spy in both cases. This is what I get for 0.24:

[py-spy flamegraph for 0.24]

and this is for 1.0:

[py-spy flamegraph for 1.0]

Comparing the two, there seems to be some additional reindexing happening, which is responsible for most of the runtime.
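The extra reindexing shows up directly in the size of the result: with categorical groupers, newer pandas expands the index to the full cartesian product of bin categories unless `observed=True` is passed. A minimal sketch of the difference, assuming pandas >= 0.25 (where `groupby` accepts `observed=`):

```python
import numpy as np
import pandas as pd

# Same setup as the report above.
df = pd.DataFrame(np.random.rand(100, 5))
df['event'] = [*range(len(df) // 2)] * 2
bins = [0, .2, .4, .6, .8, 1]
b1, b2, b3 = (pd.cut(df.loc[:, i], bins=bins) for i in range(3))

# observed=False reindexes the result to every combination of bin
# categories, even combinations that never occur in the data.
full = df.groupby([b1, b2, b3, 'event'], observed=False)[4].sum()
# observed=True keeps only the combinations actually present.
obs = df.groupby([b1, b2, b3, 'event'], observed=True)[4].sum()
print(len(full), len(obs))  # the full index is far larger than the observed one
```

The reindex to the cartesian product is where the extra runtime goes.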


Output of pd.show_versions()

Slow 1.0 version

INSTALLED VERSIONS

commit : None
python : 3.8.2.final.0
python-bits : 64
OS : Linux
OS-release : 5.3.0-51-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.0.3
numpy : 1.18.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.2.0.post20200511
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.13.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None

Fast 0.24 version

INSTALLED VERSIONS

commit: None
python: 3.8.2.final.0
python-bits: 64
OS: Linux
OS-release: 5.3.0-51-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.24.2
pytest: None
pip: 20.0.2
setuptools: 46.2.0.post20200511
Cython: None
numpy: 1.18.1
scipy: None
pyarrow: None
xarray: None
IPython: 7.13.0
sphinx: None
patsy: None
dateutil: 2.8.1
pytz: 2020.1
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.11.2
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None

@Zaharid Zaharid added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels May 13, 2020
jorisvandenbossche (Member) commented

@Zaharid thanks for the report!

Possibly related issues: #32918, #30552

arw2019 (Member) commented May 14, 2020

I'm interested in working on this, and the related issue (#32918)!

I'll dig into it and post my progress here.

TomAugspurger (Contributor) commented

@Zaharid I don't think these are comparable. On 0.24.2 I get the (incorrect) result of just ~100 rows. If you specify observed=True, you should see similar timings:

# master
In [6]: %timeit df.groupby([b1, b2, b3, 'event'], observed=True)[4].sum()
11.2 ms ± 321 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# 0.24.2
   ...: %timeit df.groupby([b1, b2, b3, 'event'])[4].sum()
11.4 ms ± 240 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
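The effect of `observed` can be seen on a toy frame: with `observed=False` every declared category appears in the result (unobserved groups filled with the reduction's identity), while `observed=True` keeps only categories present in the data. A minimal sketch; `observed` is passed explicitly since its default has varied across pandas versions:

```python
import pandas as pd

df = pd.DataFrame({
    'c': pd.Categorical(['a', 'a'], categories=['a', 'b', 'c']),
    'v': [1, 2],
})

# All three declared categories appear in the index; 'b' and 'c' sum to 0.
print(len(df.groupby('c', observed=False)['v'].sum()))  # 3
# Only the observed category 'a' remains.
print(len(df.groupby('c', observed=True)['v'].sum()))   # 1
```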

@TomAugspurger TomAugspurger removed the Needs Triage Issue that has not been reviewed by a pandas team member label May 19, 2020
@TomAugspurger TomAugspurger added this to the No action milestone May 19, 2020