DOC/DEPR: to_hdf produces a different file each time #51456
Comments
While I agree that the files differ, h5diff shows that their contents are the same. I also found them identical when I hand-compared the contents in HDFView. I suspect this behaviour comes from upstream, as I can't see how pandas itself is producing this difference.
One observation from `cmp -l file_a.h5 file_b.h5 | gawk '{printf "%08X %02X %02X\n", $1, strtonum(0$2), strtonum(0$3)}'` is that the places where the files differ repeat.
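For readers without `cmp`/`gawk` at hand, the same byte-offset comparison can be sketched in plain Python (a stdlib-only stand-in I wrote for illustration, not the tooling used above):

```python
def diff_offsets(a: bytes, b: bytes):
    """Return (offset, byte_in_a, byte_in_b) for every position where the inputs differ."""
    return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

# Toy demonstration on two short byte strings; on real files you would pass
# open(path, "rb").read() for each file. Note zip() stops at the shorter input.
print(diff_offsets(b"abcdef", b"abXdeY"))  # [(2, 99, 88), (5, 102, 89)]
```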
We had the same issue a few years ago; we opened an issue (#32682) and later an MR. You have to use …
It would be great if someone wanted to add an explanation of this to the Notes section.
IMHO a comment in the Notes section mentioning the consistency of hashes would be fine.
I think we can agree this is not a bug, but the question is whether we've set a bad default here, i.e. do we want to change … I can see the utility of storing the creation time, but also of getting the same checksum each time, so I guess a decision just has to be made. I'm ok with documenting this on the …
I also think this is a case of a bad default. Perhaps switch the default to …
I think that's reasonable. If people want to save the time, they can use …
I feel like this is more involved than just setting the … More specifically, running the following script several times will show that only if …

```python
import hashlib

import numpy as np
import pandas as pd

test_data = np.arange(9).reshape(3, 3)
df = pd.DataFrame(test_data)
key = "test"

df.to_hdf('to_hdf.h5', key=key)
print(hashlib.md5(open("to_hdf.h5", "rb").read()).hexdigest(), ' to_hdf')

for i, kws in enumerate((
    {'track_times': False},
    {'track_times': False, 'index': None},
    {'track_times': False, 'format': "table"},
    {'track_times': False, 'format': "table", 'index': None},
)):
    with pd.HDFStore(f'store_{i}.h5') as store:
        store.put(key=key, value=df, **kws)
    print(hashlib.md5(open(f"store_{i}.h5", "rb").read()).hexdigest(), f' hdfstore.put({kws})')
```

Is this a known limitation?
Can you dig into the HDF files and find the difference across the runs with track_times=False?
Not sure how much that helps, but quickly hexdumping the contents of the HDF5 files from two different runs of the …
I can't reproduce. I modified your code to output the data multiple times.

```python
import hashlib
from collections import defaultdict

import numpy as np
import pandas as pd

test_data = np.arange(9).reshape(3, 3)
df = pd.DataFrame(test_data)
key = "test"

df.to_hdf('to_hdf.h5', key=key)
print(hashlib.md5(open("to_hdf.h5", "rb").read()).hexdigest(), ' to_hdf')

hashes = defaultdict(set)
configs = (
    {'track_times': False},
    {'track_times': False, 'index': None},
    {'track_times': False, 'format': "table"},
    {'track_times': False, 'format': "table", 'index': None},
)
for kws in configs:
    for run in range(10):
        # build a filesystem-safe file name from the keyword arguments
        base = (str(kws).replace(" ", "").replace("'", "").replace(":", "-")
                .replace("}", "").replace("{", "").replace(",", "-"))
        file_name = f"store {base}.h5"
        with pd.HDFStore(file_name) as store:
            print(key, kws)
            store.put(key=key, value=df, **kws)
        digest = hashlib.md5(open(file_name, "rb").read()).hexdigest()
        hashes_key = tuple(kws.items())
        hashes[hashes_key].add(digest)
        print(digest, f' hdfstore.put({kws})')
```

The key output is:

```
In [39]: hashes
Out[39]:
defaultdict(set,
            {(('track_times', False),): {'310256407fa6993982e0d6e7222301c0'},
             (('track_times', False),
              ('index', None)): {'310256407fa6993982e0d6e7222301c0'},
             (('track_times', False),
              ('format', 'table')): {'4733726fb8707bcb43a0124652b1c961'},
             (('track_times', False),
              ('format', 'table'),
              ('index', None)): {'7ea9df78420924c89dde093f0e88b709'}})
```
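The `defaultdict(set)` bookkeeping above is a handy general pattern for determinism checks: write repeatedly per configuration, collect digests, and a deterministic writer leaves exactly one digest per configuration. A stdlib-only sketch, with a trivially deterministic byte writer standing in for `HDFStore.put`:

```python
import hashlib
from collections import defaultdict

def write_file(path: str, payload: bytes) -> None:
    # Stand-in for HDFStore.put: a trivially deterministic writer.
    with open(path, "wb") as f:
        f.write(payload)

hashes = defaultdict(set)
for config in ("fixed", "table"):
    for _ in range(5):
        path = f"demo_{config}.bin"
        write_file(path, config.encode() * 4)
        with open(path, "rb") as f:
            hashes[config].add(hashlib.md5(f.read()).hexdigest())

# One digest per configuration means the writer is byte-for-byte reproducible.
print({k: len(v) for k, v in hashes.items()})  # {'fixed': 1, 'table': 1}
```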
Hmm, looks like a "double" Heisenbug to me:
Pandas version checks
- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pandas.
- I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Issue Description
Currently, to_hdf produces a different file each time, even though the data itself does not change. Perhaps there are timestamps or similar added at a lower level, which ultimately produces different hashes for practically the same contents.

Expected Behavior
I would expect to_hdf to be deterministic, so it should write the same bits to disk if the input stays the same. In my case this would help with versioning HDF5 files produced with pandas, since I can use an md5/sha/whatever hash to reference a specific version and verify that my pipeline is producing the same (or different) data files.

As a comparison, h5py produces the same file each time.
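To make the versioning use case concrete, here is a stdlib-only sketch of content-addressed version tags (the helper name `content_version` is my own, not a pandas API); it only works as a version key if the writer is byte-deterministic, which is exactly what this issue asks of `to_hdf`:

```python
import hashlib

def content_version(path: str) -> str:
    """First 12 hex chars of the file's SHA-256, used as a short version tag."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()[:12]

# Two files with identical bytes get identical tags; a deterministic
# to_hdf would give unchanged input data an unchanged tag across runs.
for name in ("run1.bin", "run2.bin"):
    with open(name, "wb") as f:
        f.write(b"\x00" * 1024)
print(content_version("run1.bin") == content_version("run2.bin"))  # True
```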
Installed Versions
INSTALLED VERSIONS
commit : 8dab54d
python : 3.10.6.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-52-generic
Version : #58-Ubuntu SMP Thu Oct 13 08:03:55 UTC 2022
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_GB.UTF-8
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 1.5.2
numpy : 1.23.5
pytz : 2022.6
dateutil : 2.8.2
setuptools : 65.6.3
pip : 23.0
Cython : 0.29.32
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : 3.0.3
lxml.etree : 4.9.1
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.8.0
pandas_datareader: None
bs4 : 4.8.2
bottleneck : None
brotli : None
fastparquet : None
fsspec : 2022.11.0
gcsfs : None
matplotlib : 3.6.2
numba : None
numexpr : 2.8.4
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.0
snappy : None
sqlalchemy : None
tables : 3.8.0
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
zstandard : None
tzdata : 2022.6