doc/source/io.rst (+6)
@@ -3267,6 +3267,12 @@ the database using :func:`~pandas.DataFrame.to_sql`.
 
     data.to_sql('data', engine)
 
+With some databases, writing large DataFrames can result in errors due to packet size limitations being exceeded. This can be avoided by setting the ``chunksize`` parameter when calling ``to_sql``. For example, the following writes ``data`` to the database in batches of 1000 rows at a time:
+
+.. ipython:: python
+
+    data.to_sql('data', engine, chunksize=1000)
+
 .. note::
 
     Due to the limited support for timedelta's in the different database
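The documentation added above shows only the final ``to_sql`` call. For context, here is a minimal, self-contained sketch of the same idea, assuming SQLAlchemy is installed; the in-memory SQLite engine, the toy ``data`` frame, and the round-trip check are illustrative and not part of the patch:

```python
import pandas as pd
from sqlalchemy import create_engine

# In-memory SQLite engine; any SQLAlchemy-supported backend works the same way.
engine = create_engine('sqlite:///:memory:')

# A toy frame large enough to be split across several batches.
data = pd.DataFrame({'a': range(10000), 'b': range(10000)})

# Each INSERT now carries at most 1000 rows, which keeps individual
# statements below a server's packet-size limit.
data.to_sql('data', engine, chunksize=1000, index=False)

# Read the table back to confirm every row arrived.
round_trip = pd.read_sql('data', engine)
assert len(round_trip) == len(data)
```
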
doc/source/v0.15.0.txt (+3)
@@ -425,6 +425,9 @@ Known Issues
 
 Enhancements
 ~~~~~~~~~~~~
+
+- Added support for a ``chunksize`` parameter to the ``to_sql`` function. This allows a ``DataFrame`` to be written in chunks, avoiding packet-size overflow errors (:issue:`8062`)
+
 - Added support for bool, uint8, uint16 and uint32 datatypes in ``to_stata`` (:issue:`7097`, :issue:`7365`)
 
 - Added ``layout`` keyword to ``DataFrame.plot`` (:issue:`6667`)