Commit b9bc0a6

rmhowe425 authored and mroeschke committed
DEPR: Positional arguments in to_sql except name (pandas-dev#54397)
* Updated method header and whatsnew file
* Updated unit tests to use keyword argument for con parameter.
* Updating unit tests and implementation.
* Updated documentation and unit tests.
* Updating documentation and fixing unit tests.
* Updating documentation.
* Updating documentation and fixing failing unit tests.
* Updating documentation and unit tests.
* Updating implementation based on reviewer feedback.
* Updating implementation to allow 'self' to be a positional arg.
* Deprecating con positional arg in new test case.
* Fixing typo
* Fixing typo
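The end state this commit works toward can be sketched in plain Python: once the deprecation is enforced, every `to_sql` argument after `name` effectively becomes keyword-only. The function below is a hypothetical stand-in, not pandas' actual signature:

```python
# Hypothetical sketch of to_sql's calling convention once the deprecation is
# enforced: the bare `*` makes every parameter after `name` keyword-only.
# Stand-in function only; not pandas' real implementation.
def to_sql(name, *, con=None, schema=None, if_exists="fail", index=True):
    return (name, con, if_exists)

# Keyword call style (what this commit migrates the docs and examples to):
result = to_sql("users", con="my_engine")

# Positional `con` (the style being deprecated) would then fail outright:
try:
    to_sql("users", "my_engine")
    enforced = False
except TypeError:
    enforced = True
```

During the deprecation window itself, pandas emits a `FutureWarning` rather than raising, so existing positional calls keep working while users migrate.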
1 parent 031c1db commit b9bc0a6

File tree

6 files changed: +127 −88 lines

doc/source/user_guide/io.rst

+5-5
@@ -5651,7 +5651,7 @@ the database using :func:`~pandas.DataFrame.to_sql`.
     data = pd.DataFrame(d, columns=c)

     data
-    data.to_sql("data", engine)
+    data.to_sql("data", con=engine)

 With some databases, writing large DataFrames can result in errors due to
 packet size limitations being exceeded. This can be avoided by setting the
@@ -5660,7 +5660,7 @@ writes ``data`` to the database in batches of 1000 rows at a time:

 .. ipython:: python

-    data.to_sql("data_chunked", engine, chunksize=1000)
+    data.to_sql("data_chunked", con=engine, chunksize=1000)

 SQL data types
 ++++++++++++++
@@ -5680,7 +5680,7 @@ default ``Text`` type for string columns:

     from sqlalchemy.types import String

-    data.to_sql("data_dtype", engine, dtype={"Col_1": String})
+    data.to_sql("data_dtype", con=engine, dtype={"Col_1": String})

 .. note::

@@ -5849,7 +5849,7 @@ have schema's). For example:

 .. code-block:: python

-    df.to_sql("table", engine, schema="other_schema")
+    df.to_sql(name="table", con=engine, schema="other_schema")
     pd.read_sql_table("table", engine, schema="other_schema")

 Querying
@@ -5876,7 +5876,7 @@ Specifying this will return an iterator through chunks of the query result:
 .. ipython:: python

     df = pd.DataFrame(np.random.randn(20, 3), columns=list("abc"))
-    df.to_sql("data_chunks", engine, index=False)
+    df.to_sql(name="data_chunks", con=engine, index=False)

 .. ipython:: python


doc/source/whatsnew/v0.14.0.rst

+1-1
@@ -437,7 +437,7 @@ This ``engine`` can then be used to write or read data to/from this database:
 .. ipython:: python

    df = pd.DataFrame({'A': [1, 2, 3], 'B': ['a', 'b', 'c']})
-   df.to_sql('db_table', engine, index=False)
+   df.to_sql(name='db_table', con=engine, index=False)

 You can read data from a database by specifying the table name:


doc/source/whatsnew/v2.1.0.rst

+2
@@ -260,6 +260,7 @@ Other enhancements
 - :meth:`DataFrame.to_parquet` and :func:`read_parquet` will now write and read ``attrs`` respectively (:issue:`54346`)
 - Added support for the DataFrame Consortium Standard (:issue:`54383`)
 - Performance improvement in :meth:`GroupBy.quantile` (:issue:`51722`)
+-

 .. ---------------------------------------------------------------------------
 .. _whatsnew_210.notable_bug_fixes:
@@ -600,6 +601,7 @@ Other Deprecations
 - Deprecated the use of non-supported datetime64 and timedelta64 resolutions with :func:`pandas.array`. Supported resolutions are: "s", "ms", "us", "ns" resolutions (:issue:`53058`)
 - Deprecated values "pad", "ffill", "bfill", "backfill" for :meth:`Series.interpolate` and :meth:`DataFrame.interpolate`, use ``obj.ffill()`` or ``obj.bfill()`` instead (:issue:`53581`)
 - Deprecated the behavior of :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`Series.argmax`, :meth:`Series.argmin` with either all-NAs and skipna=True or any-NAs and skipna=False returning -1; in a future version this will raise ``ValueError`` (:issue:`33941`, :issue:`33942`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_sql` except ``name``. (:issue:`54229`)
 -

 .. ---------------------------------------------------------------------------
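For downstream code, a common way to catch call sites affected by a deprecation like this one is to promote ``FutureWarning`` to an error in the test suite. A generic stdlib-only sketch (the warning text below is a toy stand-in, not pandas' actual message):

```python
import warnings

# Toy stand-in emitting the same warning category the deprecation uses.
def legacy_call():
    warnings.warn("positional arguments are deprecated", FutureWarning)

# Promote FutureWarning to an error so deprecated call sites fail fast in tests.
with warnings.catch_warnings():
    warnings.simplefilter("error", FutureWarning)
    try:
        legacy_call()
        caught_as_error = False
    except FutureWarning:
        caught_as_error = True
```

With pytest, the equivalent is its ``filterwarnings`` configuration; either way, positional `to_sql` calls surface immediately instead of silently warning until the 3.0 enforcement.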

pandas/core/generic.py

+14-8
@@ -97,7 +97,10 @@
     SettingWithCopyWarning,
     _chained_assignment_method_msg,
 )
-from pandas.util._decorators import doc
+from pandas.util._decorators import (
+    deprecate_nonkeyword_arguments,
+    doc,
+)
 from pandas.util._exceptions import find_stack_level
 from pandas.util._validators import (
     check_dtype_backend,
@@ -2792,6 +2795,9 @@ def to_hdf(
         )

     @final
+    @deprecate_nonkeyword_arguments(
+        version="3.0", allowed_args=["self", "name"], name="to_sql"
+    )
     def to_sql(
         self,
         name: str,
@@ -2911,7 +2917,7 @@ def to_sql(
         1  User 2
         2  User 3

-        >>> df.to_sql('users', con=engine)
+        >>> df.to_sql(name='users', con=engine)
         3
         >>> from sqlalchemy import text
         >>> with engine.connect() as conn:
@@ -2922,14 +2928,14 @@ def to_sql(

         >>> with engine.begin() as connection:
         ...     df1 = pd.DataFrame({'name' : ['User 4', 'User 5']})
-        ...     df1.to_sql('users', con=connection, if_exists='append')
+        ...     df1.to_sql(name='users', con=connection, if_exists='append')
         2

         This is allowed to support operations that require that the same
         DBAPI connection is used for the entire operation.

         >>> df2 = pd.DataFrame({'name' : ['User 6', 'User 7']})
-        >>> df2.to_sql('users', con=engine, if_exists='append')
+        >>> df2.to_sql(name='users', con=engine, if_exists='append')
         2
         >>> with engine.connect() as conn:
         ...     conn.execute(text("SELECT * FROM users")).fetchall()
@@ -2939,7 +2945,7 @@ def to_sql(

         Overwrite the table with just ``df2``.

-        >>> df2.to_sql('users', con=engine, if_exists='replace',
+        >>> df2.to_sql(name='users', con=engine, if_exists='replace',
         ...            index_label='id')
         2
         >>> with engine.connect() as conn:
@@ -2956,7 +2962,7 @@ def to_sql(
         ...         stmt = insert(table.table).values(data).on_conflict_do_nothing(index_elements=["a"])
         ...         result = conn.execute(stmt)
         ...         return result.rowcount
-        >>> df_conflict.to_sql("conflict_table", conn, if_exists="append", method=insert_on_conflict_nothing)  # doctest: +SKIP
+        >>> df_conflict.to_sql(name="conflict_table", con=conn, if_exists="append", method=insert_on_conflict_nothing)  # doctest: +SKIP
         0

         For MySQL, a callable to update columns ``b`` and ``c`` if there's a conflict
@@ -2973,7 +2979,7 @@ def to_sql(
         ...         stmt = stmt.on_duplicate_key_update(b=stmt.inserted.b, c=stmt.inserted.c)
         ...         result = conn.execute(stmt)
         ...         return result.rowcount
-        >>> df_conflict.to_sql("conflict_table", conn, if_exists="append", method=insert_on_conflict_update)  # doctest: +SKIP
+        >>> df_conflict.to_sql(name="conflict_table", con=conn, if_exists="append", method=insert_on_conflict_update)  # doctest: +SKIP
         2

         Specify the dtype (especially useful for integers with missing values).
@@ -2989,7 +2995,7 @@ def to_sql(
         2  2.0

         >>> from sqlalchemy.types import Integer
-        >>> df.to_sql('integers', con=engine, index=False,
+        >>> df.to_sql(name='integers', con=engine, index=False,
         ...           dtype={"A": Integer()})
         3
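The `deprecate_nonkeyword_arguments` decorator applied to `to_sql` above lives in `pandas.util._decorators`. A minimal stdlib-only sketch of the mechanism (simplified, with assumed warning wording; the real implementation also rewrites the function's signature and computes the warning's stack level) looks like this:

```python
import functools
import warnings

def deprecate_nonkeyword_args(version, allowed_args, name):
    """Sketch of a deprecate_nonkeyword_arguments-style decorator:
    warn when arguments outside `allowed_args` are passed positionally."""
    def decorate(func):
        num_allowed = len(allowed_args)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if len(args) > num_allowed:
                warnings.warn(
                    f"Starting with version {version}, all arguments of "
                    f"{name} except {allowed_args} will be keyword-only.",
                    FutureWarning,
                    stacklevel=2,
                )
            return func(*args, **kwargs)
        return wrapper
    return decorate

class Frame:
    """Toy stand-in for DataFrame, to show the `self`/`name` allowance."""
    @deprecate_nonkeyword_args(
        version="3.0", allowed_args=["self", "name"], name="to_sql"
    )
    def to_sql(self, name, con=None, if_exists="fail"):
        return (name, con, if_exists)

frame = Frame()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    frame.to_sql("users", "my_engine")      # positional `con` -> warns
    frame.to_sql("users", con="my_engine")  # keyword `con` -> silent
```

Allowing `"self"` in `allowed_args` is what lets the decorator count bound-method calls correctly, which is why the commit log mentions updating the implementation to permit `self` as a positional argument.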

pandas/io/sql.py

+1-1
@@ -621,7 +621,7 @@ def read_sql(
         >>> conn = connect(':memory:')
         >>> df = pd.DataFrame(data=[[0, '10/11/12'], [1, '12/11/10']],
         ...                   columns=['int_column', 'date_column'])
-        >>> df.to_sql('test_data', conn)
+        >>> df.to_sql(name='test_data', con=conn)
         2

         >>> pd.read_sql('SELECT int_column, date_column FROM test_data', conn)
