High Memory Usage when Concatenating Partially Indexed Series to Multiindexed Dataframe #20803

Closed
cyniphile opened this issue Apr 23, 2018 · 1 comment
Labels
Indexing Related to indexing on series/frames, not to indexes themselves MultiIndex Performance Memory or execution speed performance

Comments


cyniphile commented Apr 23, 2018

Code Sample, a copy-pastable example if possible

import numpy as np
import pandas as pd
import resource

multindex = pd.MultiIndex.from_product([range(1000), range(1000)])
df = pd.DataFrame(np.random.randint(low=0, high=10, size=(len(multindex), 1)),
                  columns=['a'], index=multindex)
print('base:', resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
df['int_col'] = 0
print("insert int col:", 
      resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
df['str_col'] = 'hey'
print("insert string col:",
      resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
s1 = pd.Series(1, index=multindex)
df['multindex_int'] = s1
print("insert multindexed int col:", 
      resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
partial_multindex = multindex[:1000]
s2 = pd.Series(1, index=partial_multindex)
df['partial_multindex_int'] = s2
print("insert partial multindexed int col:", 
      resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)

Output:

base: 91360
insert int col: 99188
insert string col: 107108
insert multindexed int col: 122664
insert partial multindexed int col: 257984

Problem description

I have a large DataFrame df (~100M rows) with a MultiIndex. I want to concatenate a Series s to it as a new column, where the Series' index is a subset of df's index.

If I simply do df['new'] = s or df = pd.concat([df, s], axis=1), the operation takes up a massive amount of memory (as much as df itself) and crashes my machine. Adding a scalar column with df['new'] = "hi" doesn't cause this problem. See the code example: peak memory usage roughly doubles on the last operation, even though very little new data is actually being added.
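A possible workaround (not from the issue, and whether it reduces peak memory depends on the pandas version) is to reindex the partial Series to the full index yourself before assignment, so that the assignment sees an index identical to df's and the expensive alignment path is sidestepped. The column name `partial_col` is illustrative:

```python
import numpy as np
import pandas as pd

# Same setup as the issue's reproduction.
multindex = pd.MultiIndex.from_product([range(1000), range(1000)])
df = pd.DataFrame(
    np.random.randint(low=0, high=10, size=(len(multindex), 1)),
    columns=['a'], index=multindex)

# A Series covering only the first 1000 rows of df's index.
s2 = pd.Series(1, index=multindex[:1000])

# Reindex to the full index first; rows missing from s2 become NaN,
# and the subsequent assignment aligns two identical indexes.
df['partial_col'] = s2.reindex(df.index)

print(int(df['partial_col'].notna().sum()))  # → 1000
```

Note the column is upcast to float64 to hold the NaN fill values; passing `fill_value=0` to `reindex` keeps an integer dtype if that matters.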

Expected Output

n/a

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.0-16-lowlatency
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.22.0
pytest: None
pip: 9.0.3
setuptools: 39.0.1
Cython: 0.28.1
numpy: 1.14.2
scipy: 1.0.1
pyarrow: None
xarray: None
IPython: 6.2.1
sphinx: None
patsy: 0.5.0
dateutil: 2.7.2
pytz: 2018.4
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 1.0.1
sqlalchemy: 1.2.6
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: 0.6.0+16.g1763dbd
None

@mroeschke mroeschke added Indexing Related to indexing on series/frames, not to indexes themselves Performance Memory or execution speed performance labels Jan 13, 2019
@mroeschke
Member

Related tracking is in #38650, so closing in favor of that issue.
