DataFrame.sum() creates temporary copy in memory #16788

Closed
flo-compbio opened this issue Jun 28, 2017 · 9 comments
Labels
Numeric Operations (arithmetic, comparison, and logical operations) · Performance (memory or execution speed) · Reduction Operations (sum, mean, min, max, etc.)

Comments

@flo-compbio

Somehow, DataFrame.sum() always seems to create a temporary copy of the DataFrame in memory.

Code Sample

First, we create a large, 3.7 GB DataFrame with many columns:

import pandas as pd
import numpy as np

p = 500
n = 1000000
dtype = np.float64

df = pd.DataFrame(np.arange(p*n, dtype=dtype).reshape((p, n)))
df.info()

Output:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Columns: 1000000 entries, 0 to 999999
dtypes: float64(1000000)
memory usage: 3.7 GB

Next, we want to sum over the rows:

# sum over rows
s = df.sum(axis=0)  # this step requires > 7 GB of memory (!!!)
s.to_frame().info()

Output:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 1 columns):
0    1000000 non-null float64
dtypes: float64(1)
memory usage: 7.6 MB

By monitoring the memory consumption of the Python process with top during this step (execution takes 2-3 seconds on my machine), I can see that consumption temporarily roughly doubles, indicating that a copy of the entire frame is created in memory (see the measurement sketch below). However, the following code achieves the same result without creating a copy.

y = df.values.sum(axis=0)   # requires < 4 GB of memory
y = pd.Series(y, index=df.columns)
y.to_frame().info()

Output:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 1 columns):
0    1000000 non-null float64
dtypes: float64(1)
memory usage: 7.6 MB
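
For reference, a minimal sketch of how the two peaks could be compared without watching top, assuming Linux (where ru_maxrss reports the peak resident set size in kilobytes). Run each variant in a fresh interpreter so the peaks don't mix:

import resource

import numpy as np
import pandas as pd

p, n = 500, 1000000
df = pd.DataFrame(np.arange(p * n, dtype=np.float64).reshape((p, n)))

s = df.sum(axis=0)  # variant 1: reportedly doubles peak memory
# s = pd.Series(df.values.sum(axis=0), index=df.columns)  # variant 2: no copy

# ru_maxrss is in kilobytes on Linux, so divide by 1024**2 for GB
peak_gb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024 ** 2
print('peak RSS: %.1f GB' % peak_gb)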

Problem description

Creating a copy of the DataFrame seems unnecessary for summing (NumPy manages without one). The current implementation of DataFrame.sum() makes it impossible to sum over a DataFrame when there isn't enough free memory to hold a copy.
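
For frames in that situation, a possible workaround sketch (the helper name and chunk size here are illustrative, not part of pandas): reduce over blocks of columns so that any temporary only ever covers one block. Note that unlike DataFrame.sum(), plain ndarray.sum() does not skip NaNs.

import numpy as np
import pandas as pd

def chunked_column_sum(df, chunk=100000):
    # sum over the rows one block of columns at a time, so any temporary
    # allocation is bounded by the chunk size rather than the whole frame
    parts = [df.iloc[:, i:i + chunk].values.sum(axis=0)
             for i in range(0, df.shape[1], chunk)]
    return pd.Series(np.concatenate(parts), index=df.columns)

s = chunked_column_sum(df)  # matches df.sum(axis=0) for float data without NaNs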

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.5.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.10.0-24-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.20.2
pytest: 3.1.2
pip: 9.0.1
setuptools: 36.0.1
Cython: 0.25.2
numpy: 1.12.1
scipy: 0.19.0
xarray: None
IPython: 6.1.0
sphinx: 1.6.1
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 0.9.6
lxml: None
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None

@chris-b1
Contributor

I would recommend installing bottleneck - with it installed, this doesn't use any extra memory, and its sum implementation is much faster than the pure-NumPy path pandas falls back to.

In [11]: %timeit df.sum(axis=1)
677 ms ± 7.45 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [12]: pd.options.compute.use_bottleneck = False

In [13]: %timeit df.sum(axis=1)
3.5 s ± 130 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
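
If it's unclear whether bottleneck is actually being picked up, a quick sketch of how to check (the compute.use_bottleneck option exists as of pandas 0.20):

import pandas as pd

# pandas uses bottleneck automatically for nan-aware reductions when it is
# importable; this option toggles that behaviour (default True)
print(pd.get_option('compute.use_bottleneck'))
pd.set_option('compute.use_bottleneck', True)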

@flo-compbio
Author

Thanks for the tip. For my purposes, the workaround that I described works just as well. My point was that maybe the default DataFrame.sum() implementation itself could be improved.

@chris-b1
Contributor

It does seem like there is an avoidable copy here:

values, mask, dtype, dtype_max = _get_values(values, skipna, 0)

@chris-b1 added the Difficulty Intermediate and Performance (memory or execution speed) labels Jun 28, 2017
@chris-b1 added this to the Next Major Release milestone Jun 28, 2017
@jreback
Contributor

jreback commented Jun 28, 2017

well, if you can avoid the copy w/o changing any semantics, sure

@jorisvandenbossche
Member

So the reason for the copy is that we use putmask to deal with missing values (i.e., the missing values are changed to 0 in the case of sum). But this is only needed when there actually are missing values, so in principle we could check for that and only copy in that case (if the check doesn't introduce a perf penalty).
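
A rough sketch of that idea (illustrative only, not the actual pandas internals): inspect the mask first and pay for the copy only when something actually needs filling.

import numpy as np

def masked_sum(values, axis=0, fill_value=0.0):
    mask = np.isnan(values)
    if mask.any():  # copy only when there are missing values to fill
        values = values.copy()
        np.putmask(values, mask, fill_value)
    return values.sum(axis=axis)

The mask check itself is cheap relative to the copy: the boolean mask is one byte per element versus eight for the float data, and no second float array is allocated when the data is complete.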

@chris-b1
Contributor

Oh, right. I suppose this could be left open, but the right answer is to use bottleneck - it does even better than that.

@karkirowle

I have quite similar trouble even with bottleneck: it takes up all of my memory and freezes my computer. Is there a way to avoid this? https://stackoverflow.com/questions/45350545/how-to-avoid-this-memory-leak-caused-by-dataframe-sum-in-pandas

@jreback
Contributor

jreback commented Jul 27, 2017

@karkirowle your example is not similar at all.

This issue is about summing floats.

You are summing strings, which is horribly inefficient and memory-hungry; you need to rethink what you are doing.

@mzeitlin11
Member

I no longer see memory usage > 7 GB on master, and the copy is now avoided here:

if mask.any():
    if dtype_ok or datetimelike:
        values = values.copy()
        np.putmask(values, mask, fill_value)
    else:
        # np.where will promote if needed
        values = np.where(~mask, values, fill_value)
