PERF: Performance regression with memory_usage(deep=True) on object columns #33012

Closed
Lawrr opened this issue Mar 25, 2020 · 9 comments · Fixed by #33102
Labels
ExtensionArray (Extending pandas with custom dtypes or arrays) · good first issue · Performance (Memory or execution speed performance)

Comments

@Lawrr

Lawrr commented Mar 25, 2020

Code Sample, a copy-pastable example if possible

import time

import pandas as pd
import numpy as np

df = pd.DataFrame({
    "a": np.empty(10000000),
    "b": np.empty(10000000),
})

df["a"] = df["a"].astype("object")

s = time.time()
mem = df.memory_usage(deep=True)
print("memory_usage(deep=True) took %.4fsecs" % (time.time() - s))

Problem description

Performance of memory_usage(deep=True) on object columns seems to have regressed significantly since v0.23.4: once in v0.24.0, and again in v1.0.0, with the slowdown still present in v1.0.3.

Output
v1.0.3

memory_usage(deep=True) took 26.4566secs

v0.24.0

memory_usage(deep=True) took 6.0479secs

v0.23.4

memory_usage(deep=True) took 0.4633secs

Removing df["a"] = df["a"].astype("object") reverts it back to the expected magnitude of speed in v1.0.3:

memory_usage(deep=True) took 0.0024secs

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.6.5.final.0
python-bits : 64
OS : Linux
OS-release : 3.16.0-77-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_AU.UTF-8
LOCALE : en_AU.UTF-8

pandas : 1.0.3
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.1.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None

@TomAugspurger
Contributor

Was the previous (0.23.4) output correct?

> Removing df["a"] = df["a"].astype("object") reverts it back to the expected magnitude of speed in v1.0.3:

deep=True only has an effect for object dtype. With object dtype we need to inspect each value for its memory usage.
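
A quick way to see that behaviour (byte counts are platform-dependent; the series here is just for illustration):

import numpy as np
import pandas as pd

s = pd.Series(np.empty(1000))
# Fixed-width dtype (float64): deep=True changes nothing.
assert s.memory_usage(deep=True) == s.memory_usage(deep=False)

s_obj = s.astype("object")
# Object dtype: deep=True additionally sums the size of every Python object (on CPython).
assert s_obj.memory_usage(deep=True) > s_obj.memory_usage(deep=False)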

@TomAugspurger TomAugspurger added the Needs Info (Clarification about behavior needed to assess issue) label Mar 25, 2020
@Lawrr
Author

Lawrr commented Mar 25, 2020

The column sizes look the same. Looks like the index size changed in v0.25.0.

v1.0.3

memory_usage(deep=True) took 26.5421secs
Index          128
a        320000000
b         80000000
dtype: int64

v0.25.0

memory_usage(deep=True) took 4.7278secs
Index          128
a        320000000
b         80000000
dtype: int64

v0.24.0

memory_usage(deep=True) took 4.5771secs
Index           80
a        320000000
b         80000000
dtype: int64

v0.23.4

memory_usage(deep=True) took 0.4579secs
Index           80
a        320000000
b         80000000
dtype: int64

@TomAugspurger
Contributor

Thanks for posting all the timings. The slowdown from 0.25.0 to 1.0.3 looks concerning. It seems primarily driven by doing lib.memory_usage_of_objects(self.array) in

pandas/pandas/core/base.py

Lines 1388 to 1390 in 28e0f18

v = self.array.nbytes
if deep and is_object_dtype(self) and not PYPY:
    v += lib.memory_usage_of_objects(self.array)

That's now a PandasArray, and it involves doing a getitem on every element, which is slower than NumPy's ndarray.
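
A rough, standalone way to see that per-element cost (array contents and sizes are arbitrary; this only approximates what the cython helper does):

import timeit
import numpy as np
import pandas as pd

values = np.array(["x"] * 100_000, dtype=object)
ea = pd.arrays.PandasArray(values)  # EA wrapper around the same ndarray

# Direct ndarray indexing vs. going through PandasArray.__getitem__ for each element.
print(timeit.timeit(lambda: [values[i] for i in range(len(values))], number=10))
print(timeit.timeit(lambda: [ea[i] for i in range(len(ea))], number=10))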

@jorisvandenbossche do you know if we've discussed adding memory_usage to the EA interface? There's a check for it at

pandas/pandas/core/base.py

Lines 1385 to 1386 in 28e0f18

if hasattr(self.array, "memory_usage"):
    return self.array.memory_usage(deep=deep)

Ideally PandasArray would be able to do a .to_numpy() on the object passed to memory_usage_of_objects.

@TomAugspurger TomAugspurger added Performance (Memory or execution speed performance) and ExtensionArray (Extending pandas with custom dtypes or arrays) labels and removed the Needs Info label Mar 25, 2020
@TomAugspurger
Contributor

With that diff

diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 67e3807c47..a8d04fee8d 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -431,6 +431,14 @@ class ExtensionArray:
         # on the number of bytes needed.
         raise AbstractMethodError(self)
 
+    def memory_usage(self, deep=True):
+        from pandas.core.dtypes.common import is_object_dtype
+        from pandas.compat import PYPY
+        v = self.nbytes
+        if deep and is_object_dtype(self.dtype) and not PYPY:
+            v += lib.memory_usage_of_objects(self.to_numpy())
+        return v
+
     # ------------------------------------------------------------------------
     # Additional Methods
     # ------------------------------------------------------------------------
In [2]: %timeit df.memory_usage(deep=True)
89.4 ms ± 1.05 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

vs master

In [2]: %timeit df.memory_usage(deep=True)
2.56 s ± 114 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

@jorisvandenbossche
Member

I think the Series.memory_usage implementation should simply use self._values instead of self.array? ExtensionArrays can never be object dtype, so never take that path.
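
A minimal sketch of what that would mean for the snippet quoted above (illustrative only, not necessarily the patch that ended up in #33102):

v = self.array.nbytes
if deep and is_object_dtype(self) and not PYPY:
    # self._values is the backing ndarray here, so the cython helper
    # no longer goes through PandasArray.__getitem__ per element.
    v += lib.memory_usage_of_objects(self._values)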

@jorisvandenbossche
Member

(I am actually surprised that memory_usage_of_objects works when passed a PandasArray; I would have thought Cython would raise an error when not getting an ndarray/memoryview.)

@TomAugspurger
Contributor

I'm also surprised that worked.

> ExtensionArrays can never be object dtype, so never take that path.

This is probably safe to assume... So then this requires updating

v += lib.memory_usage_of_objects(self.array)

to use self._values rather than self.array, and then adding an ASV benchmark, probably in asv_bench/benchmarks/frame_methods.py, that uses a DataFrame with object-dtype columns.
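
A benchmark along those lines might look roughly like this (class name and sizes are illustrative, not necessarily what the eventual PR added):

# sketch for asv_bench/benchmarks/frame_methods.py
import numpy as np

from pandas import DataFrame


class MemoryUsageObject:
    def setup(self):
        # an object-dtype column alongside a plain float64 column
        self.df = DataFrame({"a": np.empty(1_000_000), "b": np.empty(1_000_000)})
        self.df["a"] = self.df["a"].astype("object")

    def time_memory_usage_deep(self):
        self.df.memory_usage(deep=True)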

@TomAugspurger TomAugspurger added this to the Contributions Welcome milestone Mar 26, 2020
@neilkg
Contributor

neilkg commented Mar 28, 2020

take

@Lawrr
Author

Lawrr commented Mar 31, 2020

Thanks for the fix guys!

@simonjayhawkins simonjayhawkins modified the milestones: Contributions Welcome, 1.0.4 May 5, 2020