PERF: MultiIndex equals method got 20x slower #43549
Comments
I have run py-spy on the code. That reveals that all the time is spent on line 419 of the pandas/core/dtypes/missing.py file, inside the array_equivalent function:

def array_equivalent(
    left, right, strict_nan: bool = False, dtype_equal: bool = False
) -> bool:
    left, right = np.asarray(left), np.asarray(right)
The full "stack trace" from py-spy is: (trace not captured in this text)
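For reference, a profile like the one described above can be captured with py-spy's record subcommand. The script name repro.py here is a stand-in for the reproducible example below:

# repro.py is a hypothetical name; point py-spy at the script you want to profile.
# --native also samples the C/Cython frames where the time is actually spent.
py-spy record --native -o profile.svg -- python repro.py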
Best guess is it has to do with the warnings.catch_warnings now in [reference not captured]. Two options come to mind: (a) go into MultiIndex.equals and do something to avoid calling array_equivalent; (b) go into ints_to_pydatetime and avoid calling the Timestamp constructor directly.
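As a rough illustration of option (a), here is a minimal sketch. The name multiindex_equals_fast is hypothetical, and the check is stricter than MultiIndex.equals (two equal indexes can be backed by different codes/levels factorizations), so it shows the idea rather than the actual fix:

import numpy as np
import pandas as pd

def multiindex_equals_fast(left: pd.MultiIndex, right: pd.MultiIndex) -> bool:
    # Hypothetical sketch: compare level-by-level so that array_equivalent
    # (and hence np.asarray on a tz-aware DatetimeArray) is never reached.
    if left.nlevels != right.nlevels or len(left) != len(right):
        return False
    for lc, rc, llev, rlev in zip(left.codes, right.codes, left.levels, right.levels):
        if not np.array_equal(lc, rc):  # codes are plain integer arrays: cheap
            return False
        if not llev.equals(rlev):       # levels hold only the unique values: small
            return False
    return True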
Better guess: in MultiIndex.equals we have changed the way we construct self_values from

[old snippet not captured]

to

[new snippet not captured]

The latter is a DatetimeArray in this case, and np.asarray gets called on it in the array_equivalent call that follows. As a result, np.asarray is being called on a much larger array, so it is much slower. Option 1: construct self_values as something better resembling the old version. Option 2: [not captured]. Option 2 is the better option.
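To make the cost concrete, here is a small self-contained timing sketch (numbers are indicative and machine-dependent; this illustrates the mechanism rather than the pandas internals). np.asarray on a tz-aware DatetimeIndex materializes an object array of Timestamp objects, one constructor call per element, while the underlying int64 view requires no per-element work:

import time
import numpy as np
import pandas as pd

dti = pd.date_range("2010-01-01", periods=100_000, tz="UTC")

t0 = time.perf_counter()
as_objects = np.asarray(dti)  # object array of Timestamps: one object per element
t1 = time.perf_counter()
as_i8 = dti.asi8              # int64 view of the same data: no per-element work
t2 = time.perf_counter()

print(f"np.asarray: {t1 - t0:.4f}s  asi8 view: {t2 - t1:.6f}s")

Comparing the int64 views (together with the dtype/tz) is the kind of cheap check a fix along these lines would restore.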
I have checked that this issue has not already been reported.
I have confirmed this issue exists on the latest version of pandas.
I have confirmed this issue exists on the master branch of pandas.
Reproducible Example
from dateutil import tz
import pandas as pd
from time import time

# Build a MultiIndex whose second level is timezone-aware datetimes
# (100 x 1000 = 100,000 entries in total).
dates = pd.date_range('2010-01-01', periods=1000, tz=tz.tzutc())
index = pd.MultiIndex.from_product([range(100), dates])
index2 = index.copy()

# Time the equality check between two identical MultiIndexes.
start_time = time()
index.equals(index2)
used_time = time() - start_time
print("Took", int(used_time * 1000), "milliseconds")
Installed Versions
commit : 73c6825
python : 3.9.6.final.0
python-bits : 64
OS : Darwin
OS-release : 20.5.0
Version : Darwin Kernel Version 20.5.0: Sat May 8 05:10:33 PDT 2021; root:xnu-7195.121.3~9/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.3.3
numpy : 1.21.2
pytz : 2021.1
dateutil : 2.8.2
pip : 21.2.4
setuptools : 57.4.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
Prior Performance
This issue did not exist in pandas 1.2.5. The timing measured by the script above with pandas 1.2.5 on my computer is about 30 milliseconds, while with pandas 1.3.1, 1.3.2, and 1.3.3 it is about 600 milliseconds, a 20x increase in run time. This severely impacts the performance of our application: a section of our code that used to take about 30 seconds now takes about 10 minutes (that code uses pd.concat a lot, which in turn calls MultiIndex.equals).
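For context on that pd.concat call path, here is a minimal sketch (the exact internals vary by version, but Index.union short-circuits through .equals, so concatenating frames that share this MultiIndex hits the slow path above):

from dateutil import tz
import pandas as pd

dates = pd.date_range('2010-01-01', periods=1000, tz=tz.tzutc())
idx = pd.MultiIndex.from_product([range(100), dates])
a = pd.DataFrame({'a': 0}, index=idx)
b = pd.DataFrame({'b': 1}, index=idx)

# Combining along axis=1 makes pandas compare the row indexes for equality
# before aligning, which calls MultiIndex.equals on the 100,000 entries.
result = pd.concat([a, b], axis=1)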