BUG: GroupBy().fillna() performance regression #36757
Labels: Groupby, Missing-data (np.nan, pd.NaT, pd.NA, dropna, isnull, interpolate), Performance (Memory or execution speed performance)

Comments
Thanks @alippai for the report, can confirm this reproduces on master:

In [5]: import pandas as pd
...: import numpy as np
...:
...: N = 2000
...: df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
...: df = df.sort_values("A").set_index("A")
...: %time df.groupby("A")["B"].fillna(method="ffill")
CPU times: user 1.09 s, sys: 571 ms, total: 1.66 s
Wall time: 1.66 s
Out[5]:
A
1 NaN
1 1.0
1 1.0
1 1.0
1 1.0
...
1 1.0
1 1.0
1 1.0
1 1.0
1 1.0
Name: B, Length: 2000, dtype: float64

On 1.0.5:

In [8]: import pandas as pd
...: import numpy as np
...:
...: N = 2000
...: df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
...: df = df.sort_values("A").set_index("A")
...:
...: %time df.groupby("A")["B"].fillna(method="ffill")
CPU times: user 3.99 ms, sys: 0 ns, total: 3.99 ms
Wall time: 3.39 ms
Out[8]:
A
1 NaN
1 1.0
1 1.0
1 1.0
1 1.0
...
1 1.0
1 1.0
1 1.0
1 1.0
1 1.0
Name: B, Length: 2000, dtype: float64
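For reference, a standalone timing sketch of the same reproducer (using time.perf_counter instead of %time; the N values below are only illustrative) to see how the runtime grows with N:

import time

import numpy as np
import pandas as pd

# Same synthetic frame as above: a single group "A" with alternating NaNs in "B".
for N in (2_000, 4_000, 8_000):
    df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
    df = df.sort_values("A").set_index("A")

    start = time.perf_counter()
    df.groupby("A")["B"].fillna(method="ffill")
    elapsed = time.perf_counter() - start
    print(f"pandas {pd.__version__}, N={N}: {elapsed:.3f}s")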
For larger N (starting from 10k) this never completes; can we consider adding back the
Running the profiler gives:
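A minimal sketch of how such a profile can be collected with the standard-library cProfile module (same synthetic frame as in the timings above):

import cProfile
import pstats

import numpy as np
import pandas as pd

N = 2_000
df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
df = df.sort_values("A").set_index("A")

# Profile the slow call and print the functions with the largest cumulative time.
profiler = cProfile.Profile()
profiler.enable()
df.groupby("A")["B"].fillna(method="ffill")
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(15)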
take |
smithto1 added a commit to smithto1/pandas that referenced this issue on Oct 15, 2020
It seems the regression was introduced in #30679
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
(optional) I have confirmed this bug exists on the master branch of pandas.
Problem description
The groupby + fillna gets extremely slow as N increases.
This is a regression from 1.0.5 -> 1.1.0.
Note: if I remove the .set_index("A") it's fast again (a workaround sketch based on this is below).
Expected Output
Same output, just faster.
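Based on the note above, a possible workaround sketch (not an official fix; it simply avoids grouping on the duplicate-valued index by resetting it before the groupby and reattaching it afterwards):

import numpy as np
import pandas as pd

N = 2_000
df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
df = df.sort_values("A").set_index("A")

# Reset the "A" index so the groupby runs against a regular column,
# then put the original index back on the (positionally aligned) result.
filled = df.reset_index().groupby("A")["B"].fillna(method="ffill")
filled.index = df.index

Since the grouped ffill returns a row-aligned result, reattaching df.index should preserve the expected output shown above.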
Output of pd.show_versions()
INSTALLED VERSIONS
commit : d9fff27
python : 3.7.8.final.0
python-bits : 64
OS : Linux
OS-release : 4.4.110-1.el7.elrepo.x86_64
Version : #1 SMP Fri Jan 5 11:35:48 EST 2018
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.0
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 49.6.0.post20200917
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None