PERF: Performance regression with memory_usage(deep=True) on object columns #33012
Comments
Was the previous (0.23.4) output correct?
|
The column sizes look the same. Looks like the index size changed in v0.25.0.

(collapsed outputs for v1.0.3, v0.25.0, v0.24.0, and v0.23.4)
|
Thanks for posting all the timings. The slowdown from 0.25.0 to 1.0.3 looks concerning. It seems primarily driven by the code at lines 1388 to 1390 in 28e0f18.

That's now a `PandasArray`. @jorisvandenbossche do you know if we've discussed adding a `memory_usage` method to the ExtensionArray interface (lines 1385 to 1386 in 28e0f18)? One option would be calling `.to_numpy()` on the object passed to `memory_usage_of_objects`.
|
With that diff:

```diff
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 67e3807c47..a8d04fee8d 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -431,6 +431,14 @@ class ExtensionArray:
         # on the number of bytes needed.
         raise AbstractMethodError(self)
 
+    def memory_usage(self, deep=True):
+        from pandas.core.dtypes.common import is_object_dtype
+        from pandas.compat import PYPY
+        v = self.nbytes
+        if deep and is_object_dtype(self.dtype) and not PYPY:
+            v += lib.memory_usage_of_objects(self.to_numpy())
+        return v
+
     # ------------------------------------------------------------------------
     # Additional Methods
     # ------------------------------------------------------------------------
```

```python
In [2]: %timeit df.memory_usage(deep=True)
89.4 ms ± 1.05 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

vs master:

```python
In [2]: %timeit df.memory_usage(deep=True)
2.56 s ± 114 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
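(For anyone following along: `memory_usage_of_objects` walks its argument element by element, so handing it a plain ndarray via `.to_numpy()` avoids per-element indexing through the extension-array wrapper; that is presumably also why the follow-up below prefers `self._values` over `self._array`.)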
|
I think the …
|
(I am actually surprised that …)
|
I'm also surprised that worked.
This is probably safe to assume... So then this requires updating line 1390 in e7ee418 to use `self._values` rather than `self._array`, and then adding an ASV benchmark, probably in asv_bench/benchmarks/frame_methods.py, that uses a DataFrame with object-dtype columns.
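Something along these lines could serve as the ASV (a minimal sketch; the class name, sizes, and setup are assumptions, not the benchmark that was actually added):

```python
# asv_bench/benchmarks/frame_methods.py -- hypothetical addition
import numpy as np

from pandas import DataFrame


class MemoryUsageObject:
    def setup(self):
        N = 100_000
        # object-dtype column, mirroring the report above
        self.df = DataFrame({"a": np.arange(N).astype(object)})

    def time_memory_usage_deep(self):
        self.df.memory_usage(deep=True)
```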
|
take |
Thanks for the fix guys! |
Code Sample, a copy-pastable example if possible
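A minimal reproduction consistent with the discussion below (the column name "a" comes from the report; the size is an assumption):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.arange(1_000_000)})
df["a"] = df["a"].astype("object")  # removing this line restores the fast path

df.memory_usage(deep=True)  # slow on v0.24.0+ per the timings below
```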
Problem description

Performance of `memory_usage(deep=True)` on `object` columns seems to have regressed significantly since v0.23.4: once in v0.24.0, and again in v1.0.0 (that regression remains in v1.0.3).

Output

(collapsed outputs for v1.0.3, v0.24.0, and v0.23.4)
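For context, `deep=True` has to inspect every Python object in the column. A rough pure-Python equivalent of what pandas' Cython helper `lib.memory_usage_of_objects` computes (a sketch, not the actual implementation):

```python
import sys

import numpy as np


def approx_object_memory(values: np.ndarray) -> int:
    # Sum the interpreter-level size of each element. This is O(n),
    # which is why deep=True is sensitive to how quickly the
    # container holding the objects can be iterated.
    return sum(sys.getsizeof(x) for x in values)
```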
Removing `df["a"] = df["a"].astype("object")` reverts it back to the expected magnitude of speed in v1.0.3.

Output of `pd.show_versions()`:
INSTALLED VERSIONS
commit : None
python : 3.6.5.final.0
python-bits : 64
OS : Linux
OS-release : 3.16.0-77-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_AU.UTF-8
LOCALE : en_AU.UTF-8
pandas : 1.0.3
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.1.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None