PERF: Fix reference leak in read_hdf #50714

Merged
2 commits merged on Jan 16, 2023

17 changes: 17 additions & 0 deletions asv_bench/benchmarks/io/hdf.py
@@ -128,9 +128,26 @@ def setup(self, format):
self.df["object"] = tm.makeStringIndex(N)
self.df.to_hdf(self.fname, "df", format=format)

# Numeric df
self.df1 = self.df.copy()
self.df1 = self.df1.reset_index()
self.df1.to_hdf(self.fname, "df1", format=format)

def time_read_hdf(self, format):
read_hdf(self.fname, "df")

    def mem_read_hdf_index(self, format):
        # Check that the index is not a view into the original recarray
        # (a view would prevent the recarray from being freed)
        # xref GH 37441
        # TODO: Don't abuse internals; asv needs to be fixed
        # to detect the memory of ndarray views properly
        df1 = read_hdf(self.fname, "df1")
        return df1.index._data.base  # Will be None (0 bytes) if not a view

Member Author

Not a fan myself of how this is written.

asv's memory benchmarks use pympler.asizeof internally, but for a view of a numpy array that only reports the size of the view itself, not the buffer it keeps alive.

peakmem avoids this problem by tracking RSS, but it doesn't catch this improvement either, since the fix only reduces memory usage after the read finishes, not the peak.
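
For illustration, a minimal standalone sketch (not part of this PR) of the measurement problem: a field pulled out of a record array is a view that reports almost no owned memory, yet keeps the whole base buffer alive, which is why the benchmark falls back to checking .base.

import sys

import numpy as np

# Mimic the kind of record array PyTables hands back for a table read.
rec = np.zeros(1_000_000, dtype=[("index", "i8"), ("values_block_0", "f8")])

view = rec["index"]           # view into rec's ~16 MB buffer
copied = rec["index"].copy()  # independent 8 MB buffer

# Size-based accounting only counts memory the array owns, so the view
# looks tiny even though it pins all of rec.
print(sys.getsizeof(view), sys.getsizeof(copied))  # ~1e2 bytes vs ~8e6 bytes
print(view.base is rec)  # True -> rec cannot be freed while the view exists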

Member

Could we assert this is None in a unit test instead?

Member Author

Yeah, that's probably better.
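
A rough sketch of the kind of unit test suggested above; the test name and exact assertion are illustrative, not necessarily what ended up in pandas.

import pandas as pd
import pandas._testing as tm
from pandas import read_hdf


def test_read_hdf_index_not_view():
    # Hypothetical test: after the fix, the index read back from an HDF table
    # should own its data rather than remain a view into PyTables' recarray.
    df = pd.DataFrame({"a": range(1000)})
    with tm.ensure_clean("test.h5") as path:
        df.to_hdf(path, "df", format="table")
        result = read_hdf(path, "df")
    # Same check the benchmark uses: .base is None when the data was copied.
    assert result.index._data.base is None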


    def peakmem_read_hdf(self, format):
        read_hdf(self.fname, "df")

    def time_write_hdf(self, format):
        self.df.to_hdf(self.fname, "df", format=format)

1 change: 1 addition & 0 deletions doc/source/whatsnew/v2.0.0.rst
@@ -857,6 +857,7 @@ Performance improvements
- Performance improvement in :func:`to_datetime` when format is given or can be inferred (:issue:`50465`)
- Performance improvement in :func:`read_csv` when passing a :func:`to_datetime` lambda-function to ``date_parser`` and inputs have mixed timezone offsets (:issue:`35296`)
- Performance improvement in :meth:`.SeriesGroupBy.value_counts` with categorical dtype (:issue:`46202`)
- Fixed a reference leak in :func:`read_hdf` (:issue:`37441`)

.. ---------------------------------------------------------------------------
.. _whatsnew_200.bug_fixes:
4 changes: 3 additions & 1 deletion pandas/io/pytables.py
@@ -2057,7 +2057,9 @@ def convert(

        # values is a recarray
        if values.dtype.fields is not None:
            values = values[self.cname]
            # Copy, otherwise values will be a view,
            # preventing the original recarray from being freed
            values = values[self.cname].copy()

        val_kind = _ensure_decoded(self.kind)
        values = _maybe_convert(values, val_kind, encoding, errors)
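
As a standalone illustration of what the added .copy() buys (a sketch with made-up names, not the actual PyTables read path): pulling a single field out of the recarray without copying leaves a view whose .base keeps the entire recarray alive.

import sys

import numpy as np


def convert_field(recarray, cname, copy):
    # Hypothetical stand-in for the pattern in IndexCol.convert above.
    values = recarray[cname]
    return values.copy() if copy else values


rec = np.zeros(1_000_000, dtype=[("index", "i8"), ("values_block_0", "f8")])

before = sys.getrefcount(rec)
leaked = convert_field(rec, "index", copy=False)
# The view holds a reference to rec, so the whole ~16 MB recarray stays
# alive for as long as `leaked` does.
assert sys.getrefcount(rec) == before + 1
assert leaked.base is rec

fixed = convert_field(rec, "index", copy=True)
# The copy is self-contained; rec can be freed once nothing else refers to it.
assert fixed.base is None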