Description
Code Sample, a copy-pastable example if possible
Python 3.6.0 (default, Dec 29 2016, 21:40:24)
[GCC 4.9.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> df = pd.DataFrame({'a': [str(i) for i in range(10000)]})
>>> df.memory_usage(index=True, deep=True)
Index 80
a 608890
dtype: int64
>>> j = df.to_json()
>>> df.memory_usage(index=True, deep=True)
Index 80
a 804450
dtype: int64
Compared to Python 2.7.12:
Python 2.7.12 (default, Jul 18 2016, 15:02:52)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> df = pd.DataFrame({'a': [str(i) for i in range(10000)]})
>>> df.memory_usage(index=True, deep=True)
Index 72
a 488890
dtype: int64
>>> j = df.to_json()
>>> df.memory_usage(index=True, deep=True)
Index 72
a 488890
dtype: int64
Problem description
Calling to_json should not have any impact on the reported memory usage of a DataFrame, just as in Python 2. The observed increase above is about 32%, which is really high.
This only seems to happen with DataFrames that contain strings.
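For comparison, here is a minimal sketch (same environment as above, but with a numeric column instead of strings) where, as far as I can tell, the reported usage does not change after to_json:

import pandas as pd

# Numeric-only frame: no object/string column involved.
df_int = pd.DataFrame({'a': list(range(10000))})
print(df_int.memory_usage(index=True, deep=True))
df_int.to_json()
print(df_int.memory_usage(index=True, deep=True))  # same numbers as before the call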
I've also tested calling to_csv; that does not trigger this behaviour.
Furthermore, the reported memory usage is quite a lot higher in Python 3 than for the equivalent DataFrame in Python 2 (~25% in the example above). I guess this is more related to strings in Python 2 vs. Python 3 than to pandas, though?
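To narrow down where the extra bytes are being reported, here is a rough sketch that sums sys.getsizeof over the individual strings, which is roughly what memory_usage(deep=True) measures for an object column. Given the numbers above, the growth should show up in the per-string sizes themselves on Python 3 (string_bytes is just a hypothetical helper name):

import sys
import pandas as pd

df = pd.DataFrame({'a': [str(i) for i in range(10000)]})

def string_bytes(frame):
    # Sum the interpreter-reported size of each individual str object.
    return sum(sys.getsizeof(s) for s in frame['a'])

before = string_bytes(df)
df.to_json()
after = string_bytes(df)
# Expected to mirror the memory_usage jump above: larger on Python 3.6, unchanged on 2.7.
print(before, after)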
Expected Output
No change in reported memory usage after calling to_json.
Output of pd.show_versions()
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 34.1.0
Cython: None
numpy: 1.12.0
scipy: None
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: None
tables: None
numexpr: 2.6.2
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None