BUG: Memory leak on v1.5.0 when using pd.to_datetime() #49164
Comments
Sorry, might be unrelated, as there are no warnings here.
Can you reproduce, @MarcoGorelli? The memory usage is constant on my machine after the garbage collector kicks in. This is consistent with 1.4.0.
I haven't tried to reproduce it yet. @mortymike, 1.5.0 did introduce some warnings in …
Thanks for looking into this. There are no errors or warnings on the production instance; memory consumption just slowly increases with each iteration of the loop until the VM kills the process on 1.5.0. The example I provided has higher peak and average memory consumption for me on my MacBook (it may help to increase the wait time between loops to see the memory profile per loop iteration), but maybe this is not the root cause, since the garbage collector is actually running. Since the failure on my production instance is 100% reproducible and fatal every time on 1.5.0, I can try a couple of things later today:
Updates on the above:
All this to say, it looks like the latest …
Thanks for investigating. It's possible that warnings were raised and then caught internally.
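As a side note on the warnings angle, here is a minimal sketch (not from the original thread; the column name and data are made up) for recording any warnings that pd.to_datetime() lets escape to the caller:

```python
# Minimal sketch (not from the original thread): record any warnings that
# pd.to_datetime() lets propagate to the caller. Column name and data are
# made-up examples.
import warnings

import pandas as pd

df = pd.DataFrame({"timestamp": ["2022-10-18 12:00:00"] * 1_000})

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # report every warning, even repeats
    pd.to_datetime(df["timestamp"])

for w in caught:
    print(w.category.__name__, w.message)
```

Note that this only sees warnings that propagate out of pandas; anything raised and suppressed inside pandas's own catch_warnings blocks, as speculated above, would not appear here.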
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
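The original code block did not survive in this copy of the issue; the following is a minimal sketch of the kind of loop described in the Issue Description below (the column names, row count, iteration count, and sleep interval are assumptions):

```python
# Hypothetical reconstruction - the exact snippet from the report was not
# preserved here. Column names, row count, iteration count, and sleep
# interval are assumptions.
import time

import numpy as np
import pandas as pd


def process(df: pd.DataFrame) -> pd.DataFrame:
    # The report triages the leak to this pd.to_datetime() call.
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    return df


for i in range(500):
    df = pd.DataFrame(
        {
            "timestamp": ["2022-10-18 12:00:00"] * 100_000,
            "value": np.random.rand(100_000),
        }
    )
    process(df)
    time.sleep(0.1)  # pause between iterations to watch memory between loops
```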
Issue Description
There seems to be a memory leak in the latest 1.5.0 release when using pd.to_datetime() on a column. I triaged my production code to the pd.to_datetime() line and can verify that this line is the cause. My code loops over a function similar to this with a different df several hundred times, so I've created an example here to help replicate my situation.

Running the above code on my personal machine, I can see a significant difference in memory consumption between 1.4.0 and 1.5.0 (I haven't tested any version between 1.4.0 and 1.5.0). However, my machine does appear to run the garbage collector and reduce memory periodically (something the production Docker container does not do with 1.5.0, but does do with 1.4.0).
I've provided an example above, which is a stripped-down version of my production application. With 1.5.0, the process quickly consumes all available memory in my production Docker container and is killed by the host machine. Reverting to 1.4.0, the application does not leak memory; everything else is identical.
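For what it's worth, a minimal way to watch memory per loop iteration (not part of the original report; it uses the stdlib resource module, whose ru_maxrss is reported in kilobytes on Linux and bytes on macOS) might look like this:

```python
# Sketch only, not from the original report: print the peak RSS after each
# iteration so memory growth across pandas versions can be compared.
# ru_maxrss never decreases, so a steadily rising value is the signal.
import gc
import resource

import pandas as pd

for i in range(100):
    df = pd.DataFrame({"timestamp": ["2022-10-18 12:00:00"] * 100_000})
    pd.to_datetime(df["timestamp"])
    gc.collect()  # force a collection so growth is not just deferred garbage
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"iteration {i}: peak RSS so far = {peak}")
```

Comparing the printed values between 1.4.0 and 1.5.0 runs should show whether memory keeps climbing even after the garbage collector has run.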
Expected Behavior
Memory consumption should be more or less identical between 1.4.0 and 1.5.0 when using pd.to_datetime(). There should not be a memory leak in 1.5.0 when using this function.
Installed Versions
INSTALLED VERSIONS
commit : 87cfe4e
python : 3.10.6.final.0
python-bits : 64
OS : Darwin
OS-release : 21.6.0
Version : Darwin Kernel Version 21.6.0: Wed Aug 10 14:28:23 PDT 2022; root:xnu-8020.141.5~2/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.5.0
numpy : 1.23.4
pytz : 2022.4
dateutil : 2.8.2
setuptools : 63.4.3
pip : 22.2.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None