What's new in 2.2.0 (Month XX, 2024)

These are the changes in pandas 2.2.0. See :ref:`release` for a full changelog including other versions of pandas.

{{ header }}

Enhancements

ADBC Driver support in to_sql and read_sql

:func:`read_sql` and :meth:`~DataFrame.to_sql` now work with Apache Arrow ADBC drivers. Compared to traditional drivers used via SQLAlchemy, ADBC drivers should provide significant performance improvements, better type support and cleaner nullability handling.

import adbc_driver_postgresql.dbapi as pg_dbapi

df = pd.DataFrame(
    [
        [1, 2, 3],
        [4, 5, 6],
    ],
    columns=['a', 'b', 'c']
)
uri = "postgresql://postgres:postgres@localhost/postgres"
with pg_dbapi.connect(uri) as conn:
    df.to_sql("pandas_table", conn, index=False)

# for roundtripping
with pg_dbapi.connect(uri) as conn:
    df2 = pd.read_sql("pandas_table", conn)
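
The same pattern works with any ADBC driver. For example, a minimal sketch reusing the df created above, assuming the adbc-driver-sqlite package is installed (calling connect() with no URI opens an in-memory SQLite database):

import adbc_driver_sqlite.dbapi as sqlite_dbapi

# write and read back through an in-memory SQLite database
with sqlite_dbapi.connect() as conn:
    df.to_sql("pandas_table", conn, index=False)
    df3 = pd.read_sql("pandas_table", conn)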

The Arrow type system offers a wider array of types that more closely match what databases like PostgreSQL provide. To illustrate, note this (non-exhaustive) listing of types available in different databases and pandas backends:

+-------------------+-------------------------+--------------------+---------+
| numpy/pandas      | arrow                   | postgres           | sqlite  |
+===================+=========================+====================+=========+
| int16/Int16       | int16                   | SMALLINT           | INTEGER |
+-------------------+-------------------------+--------------------+---------+
| int32/Int32       | int32                   | INTEGER            | INTEGER |
+-------------------+-------------------------+--------------------+---------+
| int64/Int64       | int64                   | BIGINT             | INTEGER |
+-------------------+-------------------------+--------------------+---------+
| float32           | float32                 | REAL               | REAL    |
+-------------------+-------------------------+--------------------+---------+
| float64           | float64                 | DOUBLE PRECISION   | REAL    |
+-------------------+-------------------------+--------------------+---------+
| object            | string                  | TEXT               | TEXT    |
+-------------------+-------------------------+--------------------+---------+
| bool              | bool_                   | BOOLEAN            |         |
+-------------------+-------------------------+--------------------+---------+
| datetime64[ns]    | timestamp(us)           | TIMESTAMP          |         |
+-------------------+-------------------------+--------------------+---------+
| datetime64[ns,tz] | timestamp(us,tz)        | TIMESTAMPTZ        |         |
+-------------------+-------------------------+--------------------+---------+
|                   | date32                  | DATE               |         |
+-------------------+-------------------------+--------------------+---------+
|                   | month_day_nano_interval | INTERVAL           |         |
+-------------------+-------------------------+--------------------+---------+
|                   | binary                  | BINARY             | BLOB    |
+-------------------+-------------------------+--------------------+---------+
|                   | decimal128              | DECIMAL [1]        |         |
+-------------------+-------------------------+--------------------+---------+
|                   | list                    | ARRAY [1]          |         |
+-------------------+-------------------------+--------------------+---------+
|                   | struct                  | COMPOSITE TYPE [1] |         |
+-------------------+-------------------------+--------------------+---------+

Footnotes

[1] Not implemented as of writing, but theoretically possible.

Users interested in preserving database types as faithfully as possible throughout the lifecycle of their DataFrame are encouraged to leverage the dtype_backend="pyarrow" argument of :func:`~pandas.read_sql`.

# for roundtripping
with pg_dbapi.connect(uri) as conn:
    df2 = pd.read_sql("pandas_table", conn, dtype_backend="pyarrow")

This will prevent your data from being converted to the traditional pandas/NumPy type system, which often converts SQL types in ways that make them impossible to round-trip.

For a full list of ADBC drivers and their development status, see the ADBC Driver Implementation Status documentation.

ExtensionArray.to_numpy converts to suitable NumPy dtype

:meth:`ExtensionArray.to_numpy` will now convert to a suitable NumPy dtype instead of object dtype for nullable extension dtypes.

Old behavior:

In [1]: ser = pd.Series([1, 2, 3], dtype="Int64")
In [2]: ser.to_numpy()
Out[2]: array([1, 2, 3], dtype=object)

New behavior:

.. ipython:: python

    ser = pd.Series([1, 2, 3], dtype="Int64")
    ser.to_numpy()

The default NumPy dtype (without any arguments) is determined as follows:

  • float dtypes are cast to NumPy floats
  • integer dtypes without missing values are cast to NumPy integer dtypes
  • integer dtypes with missing values are cast to NumPy float dtypes and NaN is used as missing value indicator
  • boolean dtypes without missing values are cast to NumPy bool dtype
  • boolean dtypes with missing values keep object dtype
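
For example, the presence of missing values changes the resulting NumPy dtype (a minimal sketch):

ser = pd.Series([1, 2, None], dtype="Int64")
ser.to_numpy()  # array([ 1.,  2., nan]) -> float64, with NaN as the missing value indicator

pd.Series([True, None], dtype="boolean").to_numpy()  # keeps object dtype, preserving pd.NA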

Series.struct accessor for PyArrow structured data

The Series.struct accessor provides attributes and methods for processing data with struct[pyarrow] dtype Series. For example, :meth:`Series.struct.explode` converts PyArrow structured data to a pandas DataFrame. (:issue:`54938`)

.. ipython:: python

    import pyarrow as pa
    series = pd.Series(
        [
            {"project": "pandas", "version": "2.2.0"},
            {"project": "numpy", "version": "1.25.2"},
            {"project": "pyarrow", "version": "13.0.0"},
        ],
        dtype=pd.ArrowDtype(
            pa.struct([
                ("project", pa.string()),
                ("version", pa.string()),
            ])
        ),
    )
    series.struct.explode()
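
The accessor also provides access to individual child fields; for example, :meth:`Series.struct.field` extracts a single field as its own Series:

.. ipython:: python

    series.struct.field("version")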

Series.list accessor for PyArrow list data

The Series.list accessor provides attributes and methods for processing data with list[pyarrow] dtype Series. For example, :meth:`Series.list.__getitem__` allows indexing pyarrow lists in a Series. (:issue:`55323`)

.. ipython:: python

    import pyarrow as pa
    series = pd.Series(
        [
            [1, 2, 3],
            [4, 5],
            [6],
        ],
        dtype=pd.ArrowDtype(
            pa.list_(pa.int64())
        ),
    )
    series.list[0]
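
Other list operations are available as well; for example, :meth:`Series.list.len` returns the length of each list:

.. ipython:: python

    series.list.len()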

Calamine engine for :func:`read_excel`

The calamine engine was added to :func:`read_excel`. It uses python-calamine, which provides Python bindings for the Rust library calamine. This engine supports Excel files (.xlsx, .xlsm, .xls, .xlsb) and OpenDocument spreadsheets (.ods) (:issue:`50395`).

There are two advantages of this engine:

  1. Calamine is often faster than other engines: some benchmarks show it reading up to 5x faster than 'openpyxl', 20x faster than 'odf', 4x faster than 'pyxlsb', and 1.5x faster than 'xlrd'. However, 'openpyxl' and 'pyxlsb' are faster when reading only a few rows from large files because of their lazy iteration over rows.
  2. Calamine supports the recognition of datetime values in .xlsb files, unlike 'pyxlsb', the only other engine in pandas that can read .xlsb files.

pd.read_excel("path_to_file.xlsb", engine="calamine")

For more, see :ref:`io.calamine` in the user guide on IO tools.

Other enhancements

Notable bug fixes

These are bug fixes that might have notable behavior changes.

:func:`merge` and :meth:`DataFrame.join` now consistently follow documented sort behavior

In previous versions of pandas, :func:`merge` and :meth:`DataFrame.join` did not always return a result that followed the documented sort behavior. pandas now follows the documented sort behavior in merge and join operations (:issue:`54611`).

As documented, sort=True sorts the join keys lexicographically in the resulting :class:`DataFrame`. With sort=False, the order of the join keys depends on the join type (how keyword):

  • how="left": preserve the order of the left keys
  • how="right": preserve the order of the right keys
  • how="inner": preserve the order of the left keys
  • how="outer": sort keys lexicographically

One example of changed behavior is an inner join with non-unique left join keys and sort=False:

.. ipython:: python

    left = pd.DataFrame({"a": [1, 2, 1]})
    right = pd.DataFrame({"a": [1, 2]})
    result = pd.merge(left, right, how="inner", on="a", sort=False)

Old Behavior

In [5]: result
Out[5]:
   a
0  1
1  1
2  2

New Behavior

.. ipython:: python

    result

:func:`merge` and :meth:`DataFrame.join` no longer reorder levels when levels differ

In previous versions of pandas, :func:`merge` and :meth:`DataFrame.join` would reorder index levels when joining on two indexes with different levels (:issue:`34133`).

.. ipython:: python

    left = pd.DataFrame({"left": 1}, index=pd.MultiIndex.from_tuples([("x", 1), ("x", 2)], names=["A", "B"]))
    right = pd.DataFrame({"right": 2}, index=pd.MultiIndex.from_tuples([(1, 1), (2, 2)], names=["B", "C"]))
    result = left.join(right)

Old Behavior

In [5]: result
Out[5]:
       left  right
B A C
1 x 1     1      2
2 x 2     1      2

New Behavior

.. ipython:: python

    result
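
If code relies on the previous level order, it can be restored explicitly with :meth:`DataFrame.reorder_levels` (a minimal sketch):

.. ipython:: python

    result.reorder_levels(["B", "A", "C"], axis=0)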

Backwards incompatible API changes

Increased minimum versions for dependencies

Some minimum supported versions of dependencies were updated. If installed, we now require:

+---------+-----------------+----------+---------+
| Package | Minimum Version | Required | Changed |
+=========+=================+==========+=========+
|         |                 | X        | X       |
+---------+-----------------+----------+---------+

For optional libraries the general recommendation is to use the latest version. The following table lists the lowest version per library that is currently being tested throughout the development of pandas. Optional libraries below the lowest tested version may still work, but are not considered supported.

+---------+-----------------+---------+
| Package | Minimum Version | Changed |
+=========+=================+=========+
|         |                 | X       |
+---------+-----------------+---------+

See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for more.

Other API changes

Deprecations

Deprecate aliases M, Q, Y, etc. in favour of ME, QE, YE, etc. for offsets

Deprecated the following frequency aliases (:issue:`9586`):

=============================== ================== ===========
offsets                         deprecated aliases new aliases
=============================== ================== ===========
:class:`MonthEnd`               M                  ME
:class:`BusinessMonthEnd`       BM                 BME
:class:`SemiMonthEnd`           SM                 SME
:class:`CustomBusinessMonthEnd` CBM                CBME
:class:`QuarterEnd`             Q                  QE
:class:`BQuarterEnd`            BQ                 BQE
:class:`YearEnd`                Y                  YE
:class:`BYearEnd`               BY                 BYE
=============================== ================== ===========

For example:

Previous behavior:

In [8]: pd.date_range('2020-01-01', periods=3, freq='Q-NOV')
Out[8]:
DatetimeIndex(['2020-02-29', '2020-05-31', '2020-08-31'],
              dtype='datetime64[ns]', freq='Q-NOV')

Future behavior:

.. ipython:: python

    pd.date_range('2020-01-01', periods=3, freq='QE-NOV')
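
The new aliases apply anywhere a frequency string is accepted, for example in :meth:`DataFrame.resample` (a minimal sketch):

.. ipython:: python

    df = pd.DataFrame({"x": range(90)}, index=pd.date_range("2020-01-01", periods=90, freq="D"))
    df.resample("ME").sum()  # "M" is deprecated in favour of "ME"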

Deprecated automatic downcasting

Deprecated the automatic downcasting of object dtype results in a number of methods. These would silently change the dtype in a hard-to-predict manner since the behavior was value dependent. Additionally, pandas is moving away from silent dtype changes (:issue:`54710`, :issue:`54261`).

These methods are:

  • :meth:`Series.fillna` and :meth:`DataFrame.fillna` (and the ffill/bfill variants)
  • :meth:`Series.where`, :meth:`DataFrame.where`, :meth:`Series.mask` and :meth:`DataFrame.mask`
  • :meth:`Series.clip` and :meth:`DataFrame.clip`
  • :meth:`Series.replace` and :meth:`DataFrame.replace`

Explicitly call :meth:`DataFrame.infer_objects` to replicate the current behavior in the future.

result = result.infer_objects(copy=False)

Set the following option to opt into the future behavior:

In [9]: pd.set_option("future.no_silent_downcasting", True)
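
As a minimal sketch of the change, filling missing values in an object-dtype Series previously downcast an all-integer result to int64 silently; with the option enabled, the result keeps object dtype until :meth:`DataFrame.infer_objects` is called explicitly:

import numpy as np

ser = pd.Series([1, np.nan], dtype=object)
result = ser.fillna(0)                      # keeps object dtype under the future behavior
result = result.infer_objects(copy=False)   # explicit opt-in to downcasting -> int64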

Other Deprecations

Performance improvements

Bug fixes

Categorical

Datetimelike

Timedelta

Timezones

Numeric

Conversion

Strings

Interval

Indexing

Missing

MultiIndex

I/O

Period

Plotting

Groupby/resample/rolling

Reshaping

Sparse

ExtensionArray

Styler

Other

Contributors