@@ -1144,19 +1144,39 @@ def put(
             Write DataFrame index as a column.
         append : bool, default False
             This will force Table format, append the input data to the existing.
+        complib : default None
+            This parameter is currently not accepted.
+        complevel : int, 0-9, default None
+            Specifies a compression level for data.
+            A value of 0 or None disables compression.
+        min_itemsize : int, dict, or None
+            Dict of columns that specify minimum str sizes.
+        nan_rep : str
+            Str to use as str nan representation.
         data_columns : list of columns or True, default None
             List of columns to create as data columns, or True to use all columns.
             See `here
             <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
         encoding : str, default None
             Provide an encoding for strings.
+        errors : str, default 'strict'
+            The error handling scheme to use for encoding errors.
+            The default is 'strict' meaning that encoding errors raise a
+            UnicodeEncodeError. Other possible values are 'ignore', 'replace' and
+            'xmlcharrefreplace' as well as any other name registered with
+            codecs.register_error that can handle UnicodeEncodeErrors.
         track_times : bool, default True
             Parameter is propagated to 'create_table' method of 'PyTables'.
             If set to False it enables to have the same h5 files (same hashes)
             independent on creation time.
         dropna : bool, default False, optional
             Remove missing values.
 
+        See Also
+        --------
+        HDFStore.info : Prints detailed information on the store.
+        HDFStore.get_storer : Returns the storer object for a key.
+
         Examples
         --------
         >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=["A", "B"])
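To illustrate how the parameters documented in this diff fit together, here is a minimal usage sketch of `HDFStore.put`. It assumes pandas is installed along with its optional PyTables (`tables`) dependency, which backs the HDF5 format; per the docstring above, `complib` is currently not accepted, so it is not passed here.

```python
# Usage sketch for HDFStore.put; assumes pandas + the optional
# PyTables ("tables") dependency are installed.
import os
import tempfile

import pandas as pd

df = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]], columns=["A", "B"])

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "store.h5")
    with pd.HDFStore(path, mode="w") as store:
        # format="table" is what data_columns and append build on;
        # complevel=9 requests maximum compression (0 or None disables it).
        store.put(
            "df",
            df,
            format="table",
            data_columns=["A"],  # makes column A queryable via store.select
            complevel=9,
            track_times=False,   # reproducible file hashes across runs
        )
        # Query against the data column created above.
        queried = store.select("df", "A > 1")

print(queried)
```

`data_columns` is what enables the on-disk query in `store.select("df", "A > 1")`; without it, only the index is queryable.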