DOC: update the DataFrame.to_hdf() docstring #20186
@@ -1786,7 +1786,8 @@ def to_json(self, path_or_buf=None, orient=None, date_format=None,
                          index=index)
 
     def to_hdf(self, path_or_buf, key, **kwargs):
-        """Write the contained data to an HDF5 file using HDFStore.
+        """
+        Write the contained data to an HDF5 file using HDFStore.
 
         Parameters
         ----------
@@ -1834,6 +1835,42 @@ def to_hdf(self, path_or_buf, key, **kwargs):
             If applying compression use the fletcher32 checksum
         dropna : boolean, default False.
             If true, ALL nan rows will not be written to store.
+
+        See Also
+        --------
+        DataFrame.to_csv : write out to a csv file.
+        DataFrame.to_sql : write to a sql table.
+        DataFrame.to_feather : write out feather-format for DataFrames.
+
+        Examples
+        --------
+        >>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
+        ...                   index=['a', 'b', 'c'])
+        >>> df.to_hdf('data.h5', key='df', mode='w')
+
+        We can append another object to the same file:
Review comment: I would use "add" here instead of "append", because "append" is also a keyword with a different behaviour (appending rows to the same table, not the same file).

Reply: done
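The distinction the reviewer draws could be sketched as follows. This is an illustrative snippet, not part of the PR; the file name `demo_append.h5` is made up, and it assumes PyTables is installed (required by `to_hdf`):

```python
import os
import pandas as pd

df = pd.DataFrame({'A': [1, 2]})

# mode='w' creates the file with a single object under key 'df'.
df.to_hdf('demo_append.h5', key='df', format='table', mode='w')

# append=True appends *rows* to the existing table under the same key...
df.to_hdf('demo_append.h5', key='df', format='table', append=True)

# ...whereas writing under a new key adds a separate object to the file.
pd.Series([1, 2, 3]).to_hdf('demo_append.h5', key='s')

appended = pd.read_hdf('demo_append.h5', 'df')  # 2 original + 2 appended rows
os.remove('demo_append.h5')  # clean up the demo file
```

So "append" in the keyword sense grows one table, while the docstring's example merely adds a second object to the same file, which is why the reviewer prefers "add" in the prose.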
+
+        >>> s = pd.Series([1, 2, 3, 4])
+        >>> s.to_hdf('data.h5', key='s')
+
+        Reading from HDF file:
+
+        >>> pd.read_hdf('data.h5', 'df')
+           A  B
+        a  1  4
+        b  2  5
+        c  3  6
+        >>> pd.read_hdf('data.h5', 's')
+        0    1
+        1    2
+        2    3
+        3    4
+        dtype: int64
+
+        Notes
+        -----
+        Learn more about `Hierarchical Data Format (HDF)
+        <https://support.hdfgroup.org/HDF5/whatishdf5.html>`__.
+        """

Review comment: I think Notes are before Examples. If you want to put a link, link to the section in the io.rst docs. You can add the HDF5 link in there if you want (in an appropriate location).

Reply: Actually they already have a link to the same manual there, so just removed this section.

Reply: Also don't know why, but previously not all changes were pushed here, some remained locally, fixed it. Sorry for that.
Review comment: Can you add in the end here a code block with a clean-up step (so running the doctests does not leave behind files)?

Reply: done. Many thanks for your comments, they are really useful.
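The exact clean-up block added in the PR is not shown in this scrape, but the pattern the reviewer asks for could look like the sketch below. It only demonstrates the file-removal step; the empty placeholder file stands in for the `data.h5` that the `to_hdf` doctests would create (so the sketch runs even without PyTables):

```python
import os

# Stand-in for the file the docstring examples create via
# df.to_hdf('data.h5', key='df', mode='w').
with open('data.h5', 'wb') as f:
    f.write(b'')

# The clean-up step: remove the file at the end of the examples so
# running the doctests does not leave artifacts behind.
os.remove('data.h5')
```

In the docstring itself this would appear as doctest lines (`>>> import os` / `>>> os.remove('data.h5')`) after the last example.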
 
         from pandas.io import pytables
Review comment: add DataFrame.to_parquet, read_hdf