What's New in 0.24.0 (January XX, 2019)

Warning

The 0.24.x series of releases will be the last to support Python 2. Future feature releases will support Python 3 only.

See :ref:`install.dropping-27` for more.

{{ header }}

These are the changes in pandas 0.24.0. See :ref:`release` for a full changelog including other versions of pandas.

New features

Accessing the values in a Series or Index

:attr:`Series.array` and :attr:`Index.array` have been added for extracting the array backing a Series or Index. (:issue:`19954`, :issue:`23623`)

.. ipython:: python

   idx = pd.period_range('2000', periods=4)
   idx.array
   pd.Series(idx).array

Historically, this would have been done with series.values, but with .values it was unclear whether the returned value would be the actual array, some transformation of it, or one of pandas' custom arrays (like Categorical). For example, with :class:`PeriodIndex`, .values generates a new ndarray of period objects each time.

.. ipython:: python

   id(idx.values)
   id(idx.values)

If you need an actual NumPy array, use :meth:`Series.to_numpy` or :meth:`Index.to_numpy`.

.. ipython:: python

   idx.to_numpy()
   pd.Series(idx).to_numpy()

For Series and Indexes backed by normal NumPy arrays, this will be the same thing (and the same as .values).

.. ipython:: python

   ser = pd.Series([1, 2, 3])
   ser.array
   ser.to_numpy()

We haven't removed or deprecated :attr:`Series.values` or :attr:`DataFrame.values`, but we recommend using .array or .to_numpy() instead.

See :ref:`Dtypes <basics.dtypes>` and :ref:`Attributes and Underlying Data <basics.attrs>` for more.

ExtensionArray operator support

A Series based on an ExtensionArray now supports arithmetic and comparison operators (:issue:`19577`). There are two approaches for providing operator support for an ExtensionArray:

  1. Define each of the operators on your ExtensionArray subclass.
  2. Use an operator implementation from pandas that depends on operators that are already defined on the underlying elements (scalars) of the ExtensionArray.

See the :ref:`ExtensionArray Operator Support <extending.extension.operator>` documentation section for details on both ways of adding operator support.

Optional Integer NA Support

Pandas has gained the ability to hold integer dtypes with missing values. This long requested feature is enabled through the use of :ref:`extension types <extending.extension-types>`. Here is an example of the usage.

We can construct a Series with the specified dtype. The dtype string Int64 is a pandas ExtensionDtype. A list or array containing the traditional missing value marker np.nan will still be inferred to the integer dtype. The display of the Series will also use NaN to indicate missing values in string outputs. (:issue:`20700`, :issue:`20747`, :issue:`22441`, :issue:`21789`, :issue:`22346`)

.. ipython:: python

   s = pd.Series([1, 2, np.nan], dtype='Int64')
   s


Operations on these dtypes will propagate NaN like other pandas operations.

.. ipython:: python

   # arithmetic
   s + 1

   # comparison
   s == 1

   # indexing
   s.iloc[1:3]

   # operate with other dtypes
   s + s.iloc[1:3].astype('Int8')

   # coerce when needed
   s + 0.01

These dtypes can operate as part of a DataFrame.

.. ipython:: python

   df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})
   df
   df.dtypes


These dtypes can be merged, reshaped, and cast.

.. ipython:: python

   pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
   df['A'].astype(float)

Reduction and groupby operations such as 'sum' work.

.. ipython:: python

   df.sum()
   df.groupby('B').A.sum()

Warning

The Integer NA support currently uses the capitalized dtype version, e.g. Int8 as compared to the traditional int8. This may be changed at a future date.
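To make the case-sensitivity concrete, a minimal sketch (the capitalized string selects the nullable extension dtype, while the lowercase string remains the plain NumPy dtype and cannot hold a missing value):

```python
import numpy as np
import pandas as pd

# The nullable extension dtype uses the capitalized string "Int64".
nullable = pd.Series([1, 2, np.nan], dtype='Int64')
print(nullable.dtype)  # the NaN is stored as a true missing value

# The lowercase "int64" is the plain NumPy dtype and rejects NaN.
try:
    pd.Series([1, 2, np.nan], dtype='int64')
except ValueError as exc:
    print(f"int64 rejects NaN: {exc}")
```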

read_html Enhancements

:func:`read_html` previously ignored colspan and rowspan attributes. Now it understands them, treating them as sequences of cells with the same value. (:issue:`17054`)

.. ipython:: python

    result = pd.read_html("""
      <table>
        <thead>
          <tr>
            <th>A</th><th>B</th><th>C</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td colspan="2">1</td><td>2</td>
          </tr>
        </tbody>
      </table>""")

Previous Behavior:

.. code-block:: ipython

   In [13]: result
   Out[13]:
   [   A  B   C
    0  1  2 NaN]

New Behavior:

.. ipython:: python

    result


Storing Interval and Period Data in Series and DataFrame

:class:`Interval` and :class:`Period` data may now be stored in a :class:`Series` or :class:`DataFrame`, in addition to an :class:`IntervalIndex` and :class:`PeriodIndex` like previously (:issue:`19453`, :issue:`22862`).

.. ipython:: python

   ser = pd.Series(pd.interval_range(0, 5))
   ser
   ser.dtype

For periods:

.. ipython:: python

   pser = pd.Series(pd.period_range("2000", freq="D", periods=5))
   pser
   pser.dtype

Previously, these would be cast to a NumPy array with object dtype. In general, this should result in better performance when storing an array of intervals or periods in a :class:`Series` or column of a :class:`DataFrame`.

Use :attr:`Series.array` to extract the underlying array of intervals or periods from the Series:

.. ipython:: python

   ser.array
   pser.array

Warning

For backwards compatibility, :attr:`Series.values` continues to return a NumPy array of objects for Interval and Period data. We recommend using :attr:`Series.array` when you need the array of data stored in the Series, and :meth:`Series.to_numpy` when you know you need a NumPy array.

See :ref:`Dtypes <basics.dtypes>` and :ref:`Attributes and Underlying Data <basics.attrs>` for more.

New Styler.pipe() method

The :class:`~pandas.io.formats.style.Styler` class has gained a :meth:`~pandas.io.formats.style.Styler.pipe` method. This provides a convenient way to apply users' predefined styling functions, and can help reduce "boilerplate" when using DataFrame styling functionality repeatedly within a notebook. (:issue:`23229`)

.. ipython:: python

    df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})

    def format_and_align(styler):
        return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
                      .set_properties(**{'text-align': 'right'}))

    df.style.pipe(format_and_align).set_caption('Summary of results.')

Similar methods already exist for other classes in pandas, including :meth:`DataFrame.pipe`, :meth:`pandas.core.groupby.GroupBy.pipe`, and :meth:`pandas.core.resample.Resampler.pipe`.

Joining with two multi-indexes

:func:`DataFrame.merge` and :func:`DataFrame.join` can now be used to join multi-indexed DataFrame instances on the overlapping index levels (:issue:`6360`).

See the :ref:`Merge, join, and concatenate <merging.Join_with_two_multi_indexes>` documentation section.

.. ipython:: python

   index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
                                          ('K1', 'X2')],
                                          names=['key', 'X'])

   left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
                        'B': ['B0', 'B1', 'B2']}, index=index_left)

   index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
                                           ('K2', 'Y2'), ('K2', 'Y3')],
                                           names=['key', 'Y'])

   right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
                         'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)

   left.join(right)

For earlier versions, this can be done using the following:

.. ipython:: python

   pd.merge(left.reset_index(), right.reset_index(),
            on=['key'], how='inner').set_index(['key', 'X', 'Y'])


Renaming names in a MultiIndex

:func:`DataFrame.rename_axis` now supports index and columns arguments and :func:`Series.rename_axis` supports index argument (:issue:`19978`)

This change allows a dictionary to be passed so that some of the names of a MultiIndex can be changed.

Example:

.. ipython:: python

    mi = pd.MultiIndex.from_product([list('AB'), list('CD'), list('EF')],
                                    names=['AB', 'CD', 'EF'])
    df = pd.DataFrame([i for i in range(len(mi))], index=mi, columns=['N'])
    df
    df.rename_axis(index={'CD': 'New'})

See the :ref:`Advanced documentation on renaming<advanced.index_names>` for more details.

Other Enhancements

Backwards incompatible API changes

Percentage change on groupby

Fixed a bug in :func:`pandas.core.groupby.SeriesGroupBy.pct_change` and :func:`pandas.core.groupby.DataFrameGroupBy.pct_change` where the percent change was previously calculated across groups; it is now correctly calculated per group (:issue:`21200`, :issue:`21235`).

.. ipython:: python

   df = pd.DataFrame({'grp': ['a', 'a', 'b'], 'foo': [1.0, 1.1, 2.2]})
   df

Previous behavior:

.. code-block:: ipython

   In [1]: df.groupby('grp').pct_change()
   Out[1]:
      foo
   0  NaN
   1  0.1
   2  1.0

New behavior:

.. ipython:: python

   df.groupby('grp').pct_change()

Dependencies have increased minimum versions

We have updated our minimum supported versions of dependencies (:issue:`21242`, :issue:`18742`, :issue:`23774`). If installed, we now require:

=============== =============== ========
Package         Minimum Version Required
=============== =============== ========
numpy           1.12.0          X
bottleneck      1.2.0
fastparquet     0.1.2
matplotlib      2.0.0
numexpr         2.6.1
pandas-gbq      0.8.0
pyarrow         0.7.0
pytables        3.4.2
scipy           0.18.1
xlrd            1.0.0
pytest (dev)    3.6
=============== =============== ========

Additionally we no longer depend on feather-format for feather based storage and replaced it with references to pyarrow (:issue:`21639` and :issue:`23053`).

os.linesep is used for line_terminator of DataFrame.to_csv

:func:`DataFrame.to_csv` now uses os.linesep rather than '\n' for the default line terminator (:issue:`20353`). This change only affects behavior when running on Windows, where '\r\n' was used as the line terminator even when '\n' was passed in line_terminator.

Previous Behavior on Windows:

.. code-block:: ipython

   In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
      ...:                      "string_with_crlf": ["a\r\nbc"]})

   In [2]: # When passing file PATH to to_csv,
      ...: # line_terminator does not work, and csv is saved with '\r\n'.
      ...: # Also, this converts all '\n's in the data to '\r\n'.
      ...: data.to_csv("test.csv", index=False, line_terminator='\n')

   In [3]: with open("test.csv", mode='rb') as f:
      ...:     print(f.read())
   Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'

   In [4]: # When passing file OBJECT with newline option to
      ...: # to_csv, line_terminator works.
      ...: with open("test2.csv", mode='w', newline='\n') as f:
      ...:     data.to_csv(f, index=False, line_terminator='\n')

   In [5]: with open("test2.csv", mode='rb') as f:
      ...:     print(f.read())
   Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'

New Behavior on Windows:

Passing line_terminator explicitly sets the line terminator to that character.

.. code-block:: ipython

   In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
      ...:                      "string_with_crlf": ["a\r\nbc"]})

   In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')

   In [3]: with open("test.csv", mode='rb') as f:
      ...:     print(f.read())
   Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'

On Windows, the value of os.linesep is '\r\n', so if line_terminator is not set, '\r\n' is used for line terminator.

.. code-block:: ipython

   In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
      ...:                      "string_with_crlf": ["a\r\nbc"]})

   In [2]: data.to_csv("test.csv", index=False)

   In [3]: with open("test.csv", mode='rb') as f:
      ...:     print(f.read())
   Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'

For file objects, specifying newline is not sufficient to set the line terminator. You must pass in the line_terminator explicitly, even in this case.

.. code-block:: ipython

   In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
      ...:                      "string_with_crlf": ["a\r\nbc"]})

   In [2]: with open("test2.csv", mode='w', newline='\n') as f:
      ...:     data.to_csv(f, index=False)

   In [3]: with open("test2.csv", mode='rb') as f:
      ...:     print(f.read())
   Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'

Parsing Datetime Strings with Timezone Offsets

Previously, parsing datetime strings with UTC offsets with :func:`to_datetime` or :class:`DatetimeIndex` would automatically convert the datetime to UTC without timezone localization. This was inconsistent with parsing the same datetime string with :class:`Timestamp`, which would preserve the UTC offset in the tz attribute. Now, :func:`to_datetime` preserves the UTC offset in the tz attribute when all the datetime strings have the same UTC offset (:issue:`17697`, :issue:`11736`, :issue:`22457`).

Previous Behavior:

.. code-block:: ipython

   In [2]: pd.to_datetime("2015-11-18 15:30:00+05:30")
   Out[2]: Timestamp('2015-11-18 10:00:00')

   In [3]: pd.Timestamp("2015-11-18 15:30:00+05:30")
   Out[3]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

   # Different UTC offsets would automatically convert the datetimes to UTC (without a UTC timezone)
   In [4]: pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
   Out[4]: DatetimeIndex(['2015-11-18 10:00:00', '2015-11-18 10:00:00'], dtype='datetime64[ns]', freq=None)

New Behavior:

.. ipython:: python

    pd.to_datetime("2015-11-18 15:30:00+05:30")
    pd.Timestamp("2015-11-18 15:30:00+05:30")

Parsing datetime strings with the same UTC offset will preserve the UTC offset in the tz attribute:

.. ipython:: python

    pd.to_datetime(["2015-11-18 15:30:00+05:30"] * 2)

Parsing datetime strings with different UTC offsets will now create an Index of datetime.datetime objects with different UTC offsets:

.. ipython:: python

    idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
                          "2015-11-18 16:30:00+06:30"])
    idx
    idx[0]
    idx[1]

Passing utc=True will mimic the previous behavior but will correctly indicate that the dates have been converted to UTC:

.. ipython:: python

    pd.to_datetime(["2015-11-18 15:30:00+05:30",
                    "2015-11-18 16:30:00+06:30"], utc=True)
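As a small sketch of the utc=True path: both of these strings denote the same instant (10:00 UTC), and with utc=True they collapse to a single tz-aware index localized to UTC.

```python
import pandas as pd

# Both strings denote the same instant; with utc=True the result is a
# tz-aware DatetimeIndex localized to UTC rather than naive datetimes.
idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
                      "2015-11-18 16:30:00+06:30"], utc=True)
print(idx)
```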

Time values in dt.end_time and to_timestamp(how='end')

The time values in :class:`Period` and :class:`PeriodIndex` objects are now set to '23:59:59.999999999' when calling :attr:`Series.dt.end_time`, :attr:`Period.end_time`, :attr:`PeriodIndex.end_time`, :func:`Period.to_timestamp()` with how='end', or :func:`PeriodIndex.to_timestamp()` with how='end' (:issue:`17157`)

Previous Behavior:

.. code-block:: ipython

   In [2]: p = pd.Period('2017-01-01', 'D')

   In [3]: pi = pd.PeriodIndex([p])

   In [4]: pd.Series(pi).dt.end_time[0]
   Out[4]: Timestamp('2017-01-01 00:00:00')

   In [5]: p.end_time
   Out[5]: Timestamp('2017-01-01 23:59:59.999999999')

New Behavior:

Calling :attr:`Series.dt.end_time` will now result in a time of '23:59:59.999999999', as is the case with :attr:`Period.end_time`, for example:

.. ipython:: python

   p = pd.Period('2017-01-01', 'D')
   pi = pd.PeriodIndex([p])

   pd.Series(pi).dt.end_time[0]

   p.end_time
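One consequence, shown in this sketch: a period's end_time now falls one nanosecond before the next period starts, so a daily period spans a full day minus one nanosecond.

```python
import pandas as pd

p = pd.Period('2017-01-01', 'D')
# end_time is one nanosecond before the next period begins, so the span
# of a daily period is a full day minus one nanosecond.
span = p.end_time - p.start_time
print(span)
```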

Sparse Data Structure Refactor

SparseArray, the array backing SparseSeries and the columns in a SparseDataFrame, is now an extension array (:issue:`21978`, :issue:`19056`, :issue:`22835`). To conform to this interface and for consistency with the rest of pandas, some API breaking changes were made:

  • SparseArray is no longer a subclass of :class:`numpy.ndarray`. To convert a SparseArray to a NumPy array, use :func:`numpy.asarray`.
  • SparseArray.dtype and SparseSeries.dtype are now instances of :class:`SparseDtype`, rather than np.dtype. Access the underlying dtype with SparseDtype.subtype.
  • numpy.asarray(sparse_array) now returns a dense array with all the values, not just the non-fill-value values (:issue:`14167`)
  • SparseArray.take now matches the API of :meth:`pandas.api.extensions.ExtensionArray.take` (:issue:`19506`):
    • The default value of allow_fill has changed from False to True.
    • The out and mode parameters are no longer accepted (previously, this raised if they were specified).
    • Passing a scalar for indices is no longer allowed.
  • The result of :func:`concat` with a mix of sparse and dense Series is a Series with sparse values, rather than a SparseSeries.
  • SparseDataFrame.combine and DataFrame.combine_first no longer support combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
  • Setting :attr:`SparseArray.fill_value` to a fill value with a different dtype is now allowed.
  • DataFrame[column] is now a :class:`Series` with sparse values, rather than a :class:`SparseSeries`, when slicing a single column with sparse values (:issue:`23559`).
  • The result of :meth:`Series.where` is now a Series with sparse values, like with other extension arrays (:issue:`24077`)

Some new warnings are issued for operations that require or are likely to materialize a large dense array:

  • A :class:`errors.PerformanceWarning` is issued when using fillna with a method, as a dense array is constructed to create the filled array. Filling with a value is the efficient way to fill a sparse array.
  • A :class:`errors.PerformanceWarning` is now issued when concatenating sparse Series with differing fill values. The fill value from the first sparse array continues to be used.

In addition to these API breaking changes, many :ref:`Performance Improvements and Bug Fixes have been made <whatsnew_0240.bug_fixes.sparse>`.

Finally, a Series.sparse accessor was added to provide sparse-specific methods like :meth:`Series.sparse.from_coo`.

.. ipython:: python

   s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')
   s.sparse.density
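A short sketch of a few more accessor attributes, and of the np.asarray densification mentioned above:

```python
import numpy as np
import pandas as pd

s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')

# Only the non-fill values are physically stored.
print(s.sparse.npoints)     # number of stored points
print(s.sparse.fill_value)  # the value that is not stored

# np.asarray densifies, returning all values including the fill value.
print(np.asarray(s.array))
```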

:meth:`get_dummies` always returns a DataFrame

Previously, when sparse=True was passed to :func:`get_dummies`, the return value could be either a :class:`DataFrame` or a :class:`SparseDataFrame`, depending on whether all or just a subset of the columns were dummy-encoded. Now, a :class:`DataFrame` is always returned (:issue:`24284`).

Previous Behavior

The first :func:`get_dummies` returns a :class:`DataFrame` because the column A is not dummy encoded. When just ["B", "C"] are passed to get_dummies, then all the columns are dummy-encoded, and a :class:`SparseDataFrame` was returned.

.. code-block:: ipython

   In [2]: df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})

   In [3]: type(pd.get_dummies(df, sparse=True))
   Out[3]: pandas.core.frame.DataFrame

   In [4]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
   Out[4]: pandas.core.sparse.frame.SparseDataFrame

.. ipython:: python
   :suppress:

   df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})

New Behavior

Now, the return type is consistently a :class:`DataFrame`.

.. ipython:: python

   type(pd.get_dummies(df, sparse=True))
   type(pd.get_dummies(df[['B', 'C']], sparse=True))

Note

There's no difference in memory usage between a :class:`SparseDataFrame` and a :class:`DataFrame` with sparse values. The memory usage will be the same as in the previous version of pandas.
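A minimal sketch verifying the new contract: the container is always a plain DataFrame, while the sparseness lives in the column dtypes.

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})

dummies = pd.get_dummies(df[['B', 'C']], sparse=True)
# A plain DataFrame is returned; the dummy columns hold sparse values.
print(type(dummies))
print(dummies.dtypes)
```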

Raise ValueError in DataFrame.to_dict(orient='index')

:func:`DataFrame.to_dict` now raises a ValueError when used with orient='index' and a non-unique index, instead of losing data (:issue:`22801`)

.. ipython:: python
    :okexcept:

    df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])
    df

    df.to_dict(orient='index')
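If you need a dict representation of a frame with duplicate index labels, one option (an illustration, not the only workaround) is orient='records', which keeps every row at the cost of the index labels:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])

# orient='index' would have silently dropped one of the duplicate rows
# (and now raises); orient='records' keeps every row.
records = df.to_dict(orient='records')
print(records)
```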

Tick DateOffset Normalize Restrictions

Creating a Tick object (:class:`Day`, :class:`Hour`, :class:`Minute`, :class:`Second`, :class:`Milli`, :class:`Micro`, :class:`Nano`) with normalize=True is no longer supported. This prevents unexpected behavior where addition could fail to be monotone or associative. (:issue:`21427`)

Previous Behavior:

.. code-block:: ipython

   In [2]: ts = pd.Timestamp('2018-06-11 18:01:14')

   In [3]: ts
   Out[3]: Timestamp('2018-06-11 18:01:14')

   In [4]: tic = pd.offsets.Hour(n=2, normalize=True)

   In [5]: tic
   Out[5]: <2 * Hours>

   In [6]: ts + tic
   Out[6]: Timestamp('2018-06-11 00:00:00')

   In [7]: ts + tic + tic + tic == ts + (tic + tic + tic)
   Out[7]: False

New Behavior:

.. ipython:: python

    ts = pd.Timestamp('2018-06-11 18:01:14')
    tic = pd.offsets.Hour(n=2)
    ts + tic + tic + tic == ts + (tic + tic + tic)


Period Subtraction

Subtraction of a Period from another Period will give a DateOffset instead of an integer (:issue:`21314`)

.. ipython:: python

    june = pd.Period('June 2018')
    april = pd.Period('April 2018')
    june - april

Previous Behavior:

.. code-block:: ipython

   In [2]: june = pd.Period('June 2018')

   In [3]: april = pd.Period('April 2018')

   In [4]: june - april
   Out[4]: 2

Similarly, subtraction of a Period from a PeriodIndex will now return an Index of DateOffset objects instead of an Int64Index:

.. ipython:: python

    pi = pd.period_range('June 2018', freq='M', periods=3)
    pi - pi[0]

Previous Behavior:

.. code-block:: ipython

   In [2]: pi = pd.period_range('June 2018', freq='M', periods=3)

   In [3]: pi - pi[0]
   Out[3]: Int64Index([0, 1, 2], dtype='int64')
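A sketch of why the DateOffset result is still convenient: the integer count remains reachable through the offset's .n attribute, and the offset round-trips under addition.

```python
import pandas as pd

june = pd.Period('June 2018')
april = pd.Period('April 2018')

delta = june - april          # a DateOffset (a multiple of MonthEnd)
print(delta)
print(delta.n)                # the integer count is still available
print(april + delta == june)  # the offset round-trips
```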

Addition/Subtraction of NaN from :class:`DataFrame`

Adding or subtracting NaN from a :class:`DataFrame` column with timedelta64[ns] dtype will now raise a TypeError instead of returning all-NaT. This is for compatibility with TimedeltaIndex and Series behavior (:issue:`22163`)

.. ipython:: python

   df = pd.DataFrame([pd.Timedelta(days=1)])
   df

.. code-block:: ipython

   In [2]: df - np.nan
   ...
   TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'

Previous Behavior:

.. code-block:: ipython

   In [4]: df = pd.DataFrame([pd.Timedelta(days=1)])

   In [5]: df - np.nan
   Out[5]:
       0
   0 NaT

DataFrame Comparison Operations Broadcasting Changes

Previously, the broadcasting behavior of :class:`DataFrame` comparison operations (==, !=, ...) was inconsistent with the behavior of arithmetic operations (+, -, ...). The behavior of the comparison operations has been changed to match the arithmetic operations in these cases. (:issue:`22880`)

The affected cases are:

  • operating against a 2-dimensional np.ndarray with either 1 row or 1 column will now broadcast the same way a np.ndarray would (:issue:`23000`).
  • a list or tuple with length matching the number of rows in the :class:`DataFrame` will now raise ValueError instead of operating column-by-column (:issue:`22880`).
  • a list or tuple with length matching the number of columns in the :class:`DataFrame` will now operate row-by-row instead of raising ValueError (:issue:`22880`).

Previous Behavior:

.. code-block:: ipython

   In [3]: arr = np.arange(6).reshape(3, 2)

   In [4]: df = pd.DataFrame(arr)

   In [5]: df == arr[[0], :]
       ...: # comparison previously broadcast where arithmetic would raise
   Out[5]:
          0      1
   0   True   True
   1  False  False
   2  False  False

   In [6]: df + arr[[0], :]
   ...
   ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

   In [7]: df == (1, 2)
       ...: # length matches number of columns;
       ...: # comparison previously raised where arithmetic would broadcast
   ...
   ValueError: Invalid broadcasting comparison [(1, 2)] with block values

   In [8]: df + (1, 2)
   Out[8]:
      0  1
   0  1  3
   1  3  5
   2  5  7

   In [9]: df == (1, 2, 3)
       ...:  # length matches number of rows
       ...:  # comparison previously broadcast where arithmetic would raise
   Out[9]:
          0      1
   0  False   True
   1   True  False
   2  False  False

   In [10]: df + (1, 2, 3)
   ...
   ValueError: Unable to coerce to Series, length must be 2: given 3

New Behavior:

.. ipython:: python
   :okexcept:

   arr = np.arange(6).reshape(3, 2)
   df = pd.DataFrame(arr)
   df

.. ipython:: python

   # Comparison operations and arithmetic operations both broadcast.
   df == arr[[0], :]
   df + arr[[0], :]

.. ipython:: python

   # Comparison operations and arithmetic operations both broadcast.
   df == (1, 2)
   df + (1, 2)

.. code-block:: ipython

   # Comparison operations and arithmetic operations both raise ValueError.
   In [6]: df == (1, 2, 3)
   ...
   ValueError: Unable to coerce to Series, length must be 2: given 3

   In [7]: df + (1, 2, 3)
   ...
   ValueError: Unable to coerce to Series, length must be 2: given 3
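If you relied on the old implicit row-length comparison, the explicit replacement is the named comparison method with axis=0, sketched here:

```python
import numpy as np
import pandas as pd

arr = np.arange(6).reshape(3, 2)
df = pd.DataFrame(arr)

# Compare a row-length sequence against each column (the behavior that
# the bare == previously gave implicitly) by naming the axis explicitly.
result = df.eq(pd.Series([1, 2, 3]), axis=0)
print(result)
```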

DataFrame Arithmetic Operations Broadcasting Changes

:class:`DataFrame` arithmetic operations when operating with 2-dimensional np.ndarray objects now broadcast in the same way as np.ndarray broadcast. (:issue:`23000`)

Previous Behavior:

.. code-block:: ipython

   In [3]: arr = np.arange(6).reshape(3, 2)

   In [4]: df = pd.DataFrame(arr)

   In [5]: df + arr[[0], :]   # 1 row, 2 columns
   ...
   ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

   In [6]: df + arr[:, [1]]   # 1 column, 3 rows
   ...
   ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (3, 1)

New Behavior:

.. ipython:: python

   arr = np.arange(6).reshape(3, 2)
   df = pd.DataFrame(arr)
   df

.. ipython:: python

   df + arr[[0], :]   # 1 row, 2 columns
   df + arr[:, [1]]   # 1 column, 3 rows


ExtensionType Changes

:class:`pandas.api.extensions.ExtensionDtype` Equality and Hashability

Pandas now requires that extension dtypes be hashable. The base class implements a default __eq__ and __hash__. If you have a parametrized dtype, you should update the ExtensionDtype._metadata tuple to match the signature of your __init__ method. See :class:`pandas.api.extensions.ExtensionDtype` for more (:issue:`22476`).
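A minimal sketch of a parametrized dtype (the UnitDtype name is hypothetical, chosen for illustration): listing the parametrizing attribute in _metadata is what lets the base class derive __eq__ and __hash__.

```python
from pandas.api.extensions import ExtensionDtype


class UnitDtype(ExtensionDtype):
    """A hypothetical parametrized dtype, e.g. carrying a physical unit."""

    # _metadata must name the attributes that parametrize the dtype;
    # the base class derives __eq__ and __hash__ from them.
    _metadata = ('unit',)

    def __init__(self, unit):
        self.unit = unit

    @property
    def name(self):
        return f'unit[{self.unit}]'

    @property
    def type(self):
        return object


print(UnitDtype('m') == UnitDtype('m'))  # equal: same metadata
print(UnitDtype('m') == UnitDtype('s'))  # not equal: different unit
```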

Other changes

Series and Index Data-Dtype Incompatibilities

Series and Index constructors now raise when the data is incompatible with a passed dtype= (:issue:`15832`)

Previous Behavior:

.. code-block:: ipython

   In [4]: pd.Series([-1], dtype="uint64")
   Out[4]:
   0    18446744073709551615
   dtype: uint64

New Behavior:

.. code-block:: ipython

   In [4]: pd.Series([-1], dtype="uint64")
   Out[4]:
   ...
   OverflowError: Trying to coerce negative values to unsigned integers

Crosstab Preserves Dtypes

:func:`crosstab` will now preserve dtypes in some cases that previously would cast from integer dtype to floating dtype (:issue:`22019`)

Previous Behavior:

.. code-block:: ipython

   In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2], 'b': [3, 3, 4, 4, 4],
      ...:                    'c': [1, 1, np.nan, 1, 1]})

   In [4]: pd.crosstab(df.a, df.b, normalize='columns')
   Out[4]:
   b    3    4
   a
   1  0.5  0.0
   2  0.5  1.0

New Behavior:

.. code-block:: ipython

   In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2],
      ...:                    'b': [3, 3, 4, 4, 4],
      ...:                    'c': [1, 1, np.nan, 1, 1]})

   In [4]: pd.crosstab(df.a, df.b, normalize='columns')

Datetimelike API Changes

Other API Changes

Deprecations

Integer Addition/Subtraction with Datetimes and Timedeltas is Deprecated

In the past, users could—in some cases—add or subtract integers or integer-dtype arrays from :class:`Timestamp`, :class:`DatetimeIndex` and :class:`TimedeltaIndex`.

This usage is now deprecated. Instead add or subtract integer multiples of the object's freq attribute (:issue:`21939`, :issue:`23878`).

Previous Behavior:

.. code-block:: ipython

   In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())

   In [6]: ts + 2
   Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')

   In [7]: tdi = pd.timedelta_range('1D', periods=2)

   In [8]: tdi - np.array([2, 1])
   Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

   In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')

   In [10]: dti + pd.Index([1, 2])
   Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)

New Behavior:

.. ipython:: python
    :okwarning:

    ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
    ts + 2 * ts.freq

    tdi = pd.timedelta_range('1D', periods=2)
    tdi - np.array([2 * tdi.freq, 1 * tdi.freq])

    dti = pd.date_range('2001-01-01', periods=2, freq='7D')
    dti + pd.Index([1 * dti.freq, 2 * dti.freq])

Removal of prior version deprecations/changes

Performance Improvements

Documentation Changes

Bug Fixes

Categorical

Datetimelike

Timedelta

Timezones

Offsets

Numeric

Conversion

Strings

Interval

Indexing

Missing

MultiIndex

I/O

Proper handling of np.NaN in a string data-typed column with the Python engine

There was a bug in :func:`read_excel` and :func:`read_csv` with the Python engine, where missing values turned into the string 'nan' with dtype=str and na_filter=True. Now, these missing values are converted to the missing value indicator, np.nan. (:issue:`20377`)

.. ipython:: python
   :suppress:

   from pandas.compat import StringIO

Previous Behavior:

.. code-block:: ipython

   In [5]: data = 'a,b,c\n1,,3\n4,5,6'

   In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)

   In [7]: df.loc[0, 'b']
   Out[7]: 'nan'

New Behavior:

.. ipython:: python

   data = 'a,b,c\n1,,3\n4,5,6'
   df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
   df.loc[0, 'b']

Notice that we now output np.nan itself instead of a stringified form of it.

Plotting

Groupby/Resample/Rolling

Reshaping

Sparse

Style

Build Changes

Other

  • Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before Pandas. (:issue:`24113`)
  • Require at least version 0.28.2 of Cython to support read-only memoryviews (:issue:`21688`)

Contributors

.. contributors:: v0.23.4..HEAD