BUG: memory_usage(deep=True) of dtype='string' is wrong #33963
Labels: Bug, ExtensionArray, Strings
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
(optional) I have confirmed this bug exists on the master branch of pandas.
Code Sample, a copy-pastable example
In [3]: s = pd.Series(['a','b','c'],dtype='string')
   ...: print(s.memory_usage())
   ...: print(s.memory_usage(deep=True))
152
152
Problem description
memory_usage(deep=True) for the new 'string' dtype does not report the 'deep' size: it returns the same value as the shallow memory_usage(), ignoring the memory consumed by the string objects themselves.
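The deep path in Series.memory_usage appears to add per-object sizes only for object dtype, so extension arrays like StringArray are skipped. Below is a minimal sketch of the kind of accounting that seems to be missing, assuming the underlying object ndarray is reachable through StringArray's internal _ndarray attribute; deep_nbytes is a hypothetical helper for illustration, not a proposed API:

import pandas as pd
from pandas._libs import lib

def deep_nbytes(arr):
    # Shallow cost: the buffer of object pointers held by the array.
    values = arr._ndarray  # StringArray subclasses PandasArray
    # Deep cost: the Python str objects those pointers reference,
    # which is what deep=True currently leaves out for 'string' dtype.
    return values.nbytes + lib.memory_usage_of_objects(values)

s = pd.Series(['a', 'b', 'c'], dtype='string')
print(deep_nbytes(s.array))  # pointer buffer + per-string footprint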
Expected Output
In [3]: s = pd.Series(['a','b','c'],dtype='string')
...: print(s.memory_usage())
...: print(s.memory_usage(deep=True))
152
326
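For comparison, casting to object dtype gives a deep measurement that does include the per-element strings, and roughly the same figure can be built by hand with sys.getsizeof; a rough cross-check (exact byte counts vary with platform and Python version):

import sys
import pandas as pd

s = pd.Series(['a', 'b', 'c'], dtype='string')

# Workaround: the object-dtype path does count the Python string
# objects, so deep=True behaves as expected after the cast.
print(s.astype(object).memory_usage(deep=True))

# Same idea by hand: shallow usage plus the size of each element.
print(s.memory_usage() + sum(sys.getsizeof(x) for x in s))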
Output of pd.show_versions()
pandas : 1.0.3
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.0
pip : 20.1
setuptools : 40.8.0
Cython : 0.29.16
pytest : 5.2.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : 1.2.2
lxml.etree : 4.4.2
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.8.0
pandas_datareader: None
bs4 : 4.9.0
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.1
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.0
pandas_gbq : None
pyarrow : 0.15.0
pytables : None
pytest : 5.2.1
pyxlsb : None
s3fs : 0.2.2
scipy : 1.3.1
sqlalchemy : None
tables : None
tabulate : 0.8.5
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : 1.2.2
numba : None