
PERF: agg is an order of magnitude slower with pyarrow dtypes #54065


Open
2 of 3 tasks
vkhodygo opened this issue Jul 10, 2023 · 8 comments
Labels
Arrow (pyarrow functionality), Groupby, Performance (memory or execution speed)

Comments

@vkhodygo

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this issue exists on the latest version of pandas.

  • I have confirmed this issue exists on the main branch of pandas.

Reproducible Example

Conversion to pyarrow datatypes changes the performance drastically. I did a bit of profiling, and it looks like agg is to blame. With the recent introduction of PEP 668, testing the code on the latest branch is cumbersome, so I didn't. There are also potentially relevant issues, but they are not the same problem: #50121, #46505

from datetime import datetime

import numpy as np
import pandas as pd

symbols = 1000
start = datetime(2023, 1, 1)
end = datetime(2023, 1, 2)
data_cols = ['A', 'B', 'C', 'D', 'E']
agg_props = {'A': 'first', 'B': 'max', 'C': 'min', 'D': 'last', 'E': 'sum'}
base, sample = '1min', '5min'

def pandas_resample(df: pd.DataFrame):
    return (df
            .sort_values(['sid', 'timestamp'])
            .set_index('timestamp')
            .groupby('sid')
            .resample(sample, label='left', closed='left')
            .agg(agg_props)
            .reset_index()
           )

_rng = np.random.default_rng(123)
timestamps = pd.date_range(start, end, freq=base)
df = pd.DataFrame({'timestamp': pd.DatetimeIndex(timestamps),
                   **{_col: _rng.integers(50, 150, len(timestamps)) for _col in data_cols}})
ids = pd.DataFrame({'sid': _rng.integers(1000, 2000, symbols)})
df['id'] = 1
ids['id'] = 1
full_df = ids.merge(df, on='id').drop(columns=['id'])
%timeit pandas_resample(full_df.copy())
# 1.68 s ± 50.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

full_df['sid'] = full_df['sid'].astype("uint16[pyarrow]")
for _col in data_cols:
    full_df[_col] = full_df[_col].astype("int16[pyarrow]")

%timeit pandas_resample(full_df.copy())
# 36.4 s ± 1.26 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
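As a stopgap while the pyarrow groupby path is slow, one sketch is to cast the Arrow-backed columns back to their NumPy equivalents just before the groupby, so the fast Cython paths are used. This assumes the columns contain no missing values (which would not survive a cast to a non-nullable NumPy integer dtype); `to_numpy_backed` is a hypothetical helper, not a pandas API.

```python
import pandas as pd

def to_numpy_backed(df: pd.DataFrame) -> pd.DataFrame:
    # Sketch of a stopgap: cast ArrowDtype columns back to their NumPy
    # equivalents. Assumes no missing values, which a non-nullable NumPy
    # integer dtype could not represent.
    out = df.copy()
    for col in out.columns:
        dtype = out[col].dtype
        if isinstance(dtype, pd.ArrowDtype):
            out[col] = out[col].astype(dtype.numpy_dtype)
    return out
```

Columns that are already NumPy-backed pass through unchanged, so the helper can be applied unconditionally at the top of `pandas_resample`.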

Installed Versions

INSTALLED VERSIONS

commit : 965ceca
python : 3.11.3.final.0
python-bits : 64
OS : Linux
OS-release : 6.4.2-arch1-1
Version : #1 SMP PREEMPT_DYNAMIC Thu, 06 Jul 2023 18:35:54 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : en_GB.UTF-8
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8

pandas : 2.0.2
numpy : 1.25.0
pytz : 2023.3
dateutil : 2.8.2
setuptools : 68.0.0
pip : 23.1.2
Cython : 0.29.36
pytest : 7.4.0
hypothesis : 6.75.3
sphinx : 7.0.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.2
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.14.0
pandas_datareader: None
bs4 : 4.12.2
bottleneck : None
brotli : None
fastparquet : None
fsspec : 2022.11.0
gcsfs : None
matplotlib : 3.7.1
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 10.0.1
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.11.1
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : 2023.1.0
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None

Prior Performance

No response

@vkhodygo vkhodygo added Needs Triage Issue that has not been reviewed by a pandas team member Performance Memory or execution speed performance labels Jul 10, 2023
@samukweku
Contributor

I don't think groupby has been implemented for pyarrow objects yet

@rhshadrach rhshadrach added Groupby Arrow pyarrow functionality labels Jul 10, 2023
@rhshadrach
Member

Thanks for the report; for the two timings in the OP, on main I get

1.64 s ± 3.68 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
2.08 s ± 73.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Closing as a duplicate of #52070.

@vkhodygo
Author

@rhshadrach Thanks for that, I see it'll be in the v2.1 release.

However, it's still significantly slower, whereas one should expect vectorized operations on 16-bit data types to be much faster. That's one of the benefits of the conversion.

#52070 says:

> The new engine is 2x slower than the old engine.

We still observe a 20-30% slow-down in test cases; I'm not sure that counts as resolved.

@rhshadrach
Member

rhshadrach commented Jul 15, 2023

Fair point; reopening. Further investigations on the perf difference are welcome!

@rhshadrach rhshadrach reopened this Jul 15, 2023
@lithomas1
Member

Is the idea to avoid the conversion to numpy and to do the entire groupby operation pyarrow side?

@lithomas1 lithomas1 removed the Needs Triage Issue that has not been reviewed by a pandas team member label Jul 18, 2023
@vkhodygo
Author

@rhshadrach Thanks, I'll see what I can do.

@lithomas1 Yes, it seems switching between different backends is a little bit cumbersome.

@samukweku
Contributor

@rhshadrach could you kindly point me in the right direction (which section of the code should I look at) to work on this?

@rhshadrach
Member

@samukweku - I recommend starting by profiling the code for NumPy vs pyarrow dtypes and seeing where significant differences pop out.
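That suggestion might look something like this stdlib-only helper (a sketch; `profile_top` is a made-up name): run it once on `pandas_resample` with NumPy dtypes and once after the `astype("...[pyarrow]")` casts, then diff the top entries to see where the extra time goes.

```python
import cProfile
import io
import pstats

def profile_top(fn, *args, n=15):
    # Profile a callable and return the n hottest entries by cumulative
    # time, as a string suitable for side-by-side comparison.
    pr = cProfile.Profile()
    pr.enable()
    fn(*args)
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(n)
    return buf.getvalue()
```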

Development

No branches or pull requests

4 participants