@@ -1138,19 +1138,39 @@ def put(
             Write DataFrame index as a column.
         append : bool, default False
             This will force Table format, append the input data to the existing.
+        complib : default None
+            This parameter is currently not accepted.
+        complevel : int, 0-9, default None
+            Specifies a compression level for data.
+            A value of 0 or None disables compression.
+        min_itemsize : int, dict, or None
+            Dict of columns that specify minimum str sizes.
+        nan_rep : str
+            Str to use as str nan representation.
         data_columns : list of columns or True, default None
             List of columns to create as data columns, or True to use all columns.
             See `here
             <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
         encoding : str, default None
             Provide an encoding for strings.
+        errors : str, default 'strict'
+            The error handling scheme to use for encoding errors.
+            The default is 'strict' meaning that encoding errors raise a
+            UnicodeEncodeError. Other possible values are 'ignore', 'replace' and
+            'xmlcharrefreplace' as well as any other name registered with
+            codecs.register_error that can handle UnicodeEncodeErrors.
         track_times : bool, default True
             Parameter is propagated to 'create_table' method of 'PyTables'.
             If set to False it enables to have the same h5 files (same hashes)
             independent on creation time.
         dropna : bool, default False, optional
             Remove missing values.

+        See Also
+        --------
+        HDFStore.info : Prints detailed information on the store.
+        HDFStore.get_storer : Returns the storer object for a key.
+
         Examples
         --------
         >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=["A", "B"])
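The parameters documented in the hunk above can be exercised together in a short, self-contained sketch. The file name, DataFrame contents, and chosen values below are illustrative only, and the optional PyTables dependency ("tables") is assumed to be installed; this is not part of the patch itself.

import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": ["x", None, "z"]})

with pd.HDFStore("example_store.h5", mode="w") as store:
    # Table format is required for data_columns, min_itemsize and append.
    store.put(
        "data",
        df,
        format="table",
        data_columns=["A"],      # make column A queryable via where=...
        min_itemsize={"B": 20},  # reserve room for longer strings on later appends
        nan_rep="missing",       # string written in place of NaN in string columns
        track_times=False,       # same file hashes regardless of creation time
    )
    # data_columns=["A"] is what makes this where-query possible:
    subset = store.select("data", where="A > 1")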