DOC: Remove notes to old Python/package versions #52640

Merged 2 commits on Apr 13, 2023
2 changes: 1 addition & 1 deletion doc/source/development/extending.rst
Original file line number Diff line number Diff line change
Expand Up @@ -450,7 +450,7 @@ Below is an example to define two original properties, "internal_cache" as a tem
Plotting backends
-----------------

Starting in 0.25 pandas can be extended with third-party plotting backends. The
pandas can be extended with third-party plotting backends. The
main idea is letting users select a plotting backend different than the provided
one based on Matplotlib. For example:

Expand Down
8 changes: 0 additions & 8 deletions doc/source/getting_started/install.rst
Expand Up @@ -149,14 +149,6 @@ to install pandas with the optional dependencies to read Excel files.

The full list of extras that can be installed can be found in the :ref:`dependency section.<install.optional_dependencies>`

Installing with ActivePython
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Installation instructions for
`ActivePython <https://www.activestate.com/products/python/>`__ can be found
`here <https://www.activestate.com/products/python/>`__. Versions
2.7, 3.5 and 3.6 include pandas.

Installing using your Linux distribution's package manager.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Expand Down
2 changes: 1 addition & 1 deletion doc/source/user_guide/advanced.rst
Expand Up @@ -918,7 +918,7 @@ If you select a label *contained* within an interval, this will also select the
df.loc[2.5]
df.loc[[2.5, 3.5]]

Selecting using an ``Interval`` will only return exact matches (starting from pandas 0.25.0).
Selecting using an ``Interval`` will only return exact matches.

.. ipython:: python

Expand Down
9 changes: 2 additions & 7 deletions doc/source/user_guide/io.rst
Expand Up @@ -3999,7 +3999,7 @@ any pickled pandas object (or any other pickled object) from file:

.. warning::

:func:`read_pickle` is only guaranteed backwards compatible back to pandas version 0.20.3
:func:`read_pickle` is only guaranteed backwards compatible within a few minor releases.
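
For illustration, a minimal pickle round-trip on a toy frame (the temp-file path is a throwaway):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": list("xyz")})
path = os.path.join(tempfile.mkdtemp(), "frame.pkl")

df.to_pickle(path)               # any picklable pandas object works
restored = pd.read_pickle(path)  # only load pickles from trusted sources
assert restored.equals(df)
```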

.. _io.pickle.compression:

Expand Down Expand Up @@ -5922,11 +5922,6 @@ And then issue the following queries:
Google BigQuery
---------------

.. warning::

Starting in 0.20.0, pandas has split off Google BigQuery support into the
separate package ``pandas-gbq``. You can ``pip install pandas-gbq`` to get it.

The ``pandas-gbq`` package provides functionality to read/write from Google BigQuery.

pandas integrates with this external package. If ``pandas-gbq`` is installed, you can
Expand Down Expand Up @@ -6114,7 +6109,7 @@ SAS formats
-----------

The top-level function :func:`read_sas` can read (but not write) SAS
XPORT (.xpt) and (since *v0.18.0*) SAS7BDAT (.sas7bdat) format files.
XPORT (.xpt) and SAS7BDAT (.sas7bdat) format files.

SAS files only contain two value types: ASCII text and floating point
values (usually 8 bytes but sometimes truncated). For xport files,
Expand Down
6 changes: 0 additions & 6 deletions doc/source/user_guide/merging.rst
Expand Up @@ -510,12 +510,6 @@ all standard database join operations between ``DataFrame`` or named ``Series``
dataset.
* "many_to_many" or "m:m": allowed, but does not result in checks.

.. note::

Support for specifying index levels as the ``on``, ``left_on``, and
``right_on`` parameters was added in version 0.23.0.
Support for merging named ``Series`` objects was added in version 0.24.0.

The return type will be the same as ``left``. If ``left`` is a ``DataFrame`` or named ``Series``
and ``right`` is a subclass of ``DataFrame``, the return type will still be ``DataFrame``.
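
A minimal sketch of a key-validated merge on toy frames (column names are illustrative):

```python
import pandas as pd

left = pd.DataFrame({"key": ["K0", "K1"], "lval": [1, 2]})
right = pd.DataFrame({"key": ["K0", "K1"], "rval": [3, 4]})

# validate="one_to_one" raises MergeError if either side's key repeats
merged = pd.merge(left, right, on="key", validate="one_to_one")
print(merged.shape)  # (2, 3)
```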

Expand Down
5 changes: 0 additions & 5 deletions doc/source/user_guide/missing_data.rst
Expand Up @@ -182,11 +182,6 @@ account for missing data. For example:
Sum/prod of empties/nans
~~~~~~~~~~~~~~~~~~~~~~~~

.. warning::

This behavior is now standard as of v0.22.0 and is consistent with the default in ``numpy``; previously sum/prod of all-NA or empty Series/DataFrames would return NaN.
See :ref:`v0.22.0 whatsnew <whatsnew_0220>` for more.

The sum of an empty or all-NA Series or column of a DataFrame is 0.

.. ipython:: python
Expand Down
17 changes: 1 addition & 16 deletions doc/source/user_guide/text.rst
Expand Up @@ -206,8 +206,7 @@ and replacing any remaining whitespaces with underscores:

.. warning::

Before v.0.25.0, the ``.str``-accessor did only the most rudimentary type checks. Starting with
v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
The type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.

Generally speaking, the ``.str`` accessor is intended to work only on strings. With very few
exceptions, other uses are not supported, and may be disabled at a later point.
Expand Down Expand Up @@ -423,11 +422,6 @@ the ``join``-keyword.
s.str.cat(u)
s.str.cat(u, join="left")

.. warning::

If the ``join`` keyword is not passed, the method :meth:`~Series.str.cat` will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
but a ``FutureWarning`` will be raised if any of the involved indexes differ, since this default will change to ``join='left'`` in a future version.

The usual options are available for ``join`` (one of ``'left', 'outer', 'inner', 'right'``).
In particular, alignment also means that the different lengths do not need to coincide anymore.
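
A sketch of index alignment with differing lengths (toy data):

```python
import pandas as pd

s = pd.Series(["a", "b", "c"])
u = pd.Series(["x", "y"], index=[0, 2])  # shorter, with a gap in the index

# join="left" keeps s's index; unmatched slots fall back to na_rep
result = s.str.cat(u, join="left", na_rep="-")
print(list(result))  # ['ax', 'b-', 'cy']
```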

Expand Down Expand Up @@ -503,15 +497,6 @@ Extracting substrings
Extract first match in each subject (extract)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. warning::

Before version 0.23, argument ``expand`` of the ``extract`` method defaulted to
``False``. When ``expand=False``, ``expand`` returns a ``Series``, ``Index``, or
``DataFrame``, depending on the subject and regular expression
pattern. When ``expand=True``, it always returns a ``DataFrame``,
which is more consistent and less confusing from the perspective of a user.
``expand=True`` has been the default since version 0.23.0.

The ``extract`` method accepts a `regular expression
<https://docs.python.org/3/library/re.html>`__ with at least one
capture group.
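
For instance, with two capture groups (an illustrative pattern):

```python
import pandas as pd

s = pd.Series(["a1", "b2", "c3"])

# One column per capture group; rows without a match become NaN
result = s.str.extract(r"([ab])(\d)")
print(result.shape)  # (3, 2)
```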
Expand Down
2 changes: 1 addition & 1 deletion doc/source/user_guide/visualization.rst
Expand Up @@ -1794,7 +1794,7 @@ when plotting a large number of points.
Plotting backends
-----------------

Starting in version 0.25, pandas can be extended with third-party plotting backends. The
pandas can be extended with third-party plotting backends. The
main idea is letting users select a plotting backend different than the provided
one based on Matplotlib.
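
A minimal sketch of inspecting and switching the backend via the ``plotting.backend`` option (a third-party backend such as ``plotly`` must be installed before it can be selected):

```python
import pandas as pd

# The Matplotlib-based backend shipped with pandas is the default
print(pd.get_option("plotting.backend"))  # matplotlib

# Selecting a third-party backend by name, once its package is installed:
# pd.set_option("plotting.backend", "plotly")
```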

Expand Down
3 changes: 0 additions & 3 deletions pandas/core/arrays/interval.py
Expand Up @@ -124,8 +124,6 @@
] = """
%(summary)s
.. versionadded:: %(versionadded)s
Parameters
----------
data : array-like (1-dimensional)
Expand Down Expand Up @@ -187,7 +185,6 @@
% {
"klass": "IntervalArray",
"summary": "Pandas array for interval data that are closed on the same side.",
"versionadded": "0.24.0",
"name": "",
"extra_attributes": "",
"extra_methods": "",
Expand Down
2 changes: 1 addition & 1 deletion pandas/core/config_init.py
Expand Up @@ -275,7 +275,7 @@ def use_numba_cb(key) -> None:
pc_large_repr_doc = """
: 'truncate'/'info'
For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
show a truncated table, or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
"""

Expand Down
2 changes: 0 additions & 2 deletions pandas/core/dtypes/concat.py
Expand Up @@ -240,8 +240,6 @@ def union_categoricals(
...
TypeError: to union ordered Categoricals, all categories must be the same

New in version 0.20.0

Ordered categoricals with different categories or orderings can be
combined by using the `ignore_order=True` argument.
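
A sketch, assuming the keyword spelling ``ignore_order`` from the public API:

```python
import pandas as pd
from pandas.api.types import union_categoricals

a = pd.Categorical(["x", "y"], ordered=True)
b = pd.Categorical(["y", "x"], categories=["y", "x"], ordered=True)

# Without ignore_order=True this union raises TypeError, because the
# two ordered categoricals disagree on category order.
combined = union_categoricals([a, b], ignore_order=True)
print(combined.ordered)  # False
```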

Expand Down
23 changes: 7 additions & 16 deletions pandas/core/frame.py
Expand Up @@ -379,12 +379,6 @@
merge_asof : Merge on nearest keys.
DataFrame.join : Similar method using indices.

Notes
-----
Support for specifying index levels as the `on`, `left_on`, and
`right_on` parameters was added in version 0.23.0
Support for merging named Series objects was added in version 0.24.0

Examples
--------
>>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
Expand Down Expand Up @@ -1501,7 +1495,7 @@ def dot(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series:
This method computes the matrix product between the DataFrame and the
values of an other Series, DataFrame or a numpy array.

It can also be called using ``self @ other`` in Python >= 3.5.
It can also be called using ``self @ other``.

Parameters
----------
Expand Down Expand Up @@ -1619,13 +1613,13 @@ def __matmul__(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series:

def __matmul__(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series:
"""
Matrix multiplication using binary `@` operator in Python>=3.5.
Matrix multiplication using binary `@` operator.
"""
return self.dot(other)
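
For illustration, the operator form on toy operands:

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]])
vec = pd.Series([1, 1])

product = df @ vec  # equivalent to df.dot(vec)
print(list(product))  # [3, 7]
```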

def __rmatmul__(self, other) -> DataFrame:
"""
Matrix multiplication using binary `@` operator in Python>=3.5.
Matrix multiplication using binary `@` operator.
"""
try:
return self.T.dot(np.transpose(other)).T
Expand Down Expand Up @@ -2700,8 +2694,8 @@ def to_feather(self, path: FilePath | WriteBuffer[bytes], **kwargs) -> None:
it will be used as Root Directory path when writing a partitioned dataset.
**kwargs :
Additional keywords passed to :func:`pyarrow.feather.write_feather`.
Starting with pyarrow 0.17, this includes the `compression`,
`compression_level`, `chunksize` and `version` keywords.
This includes the `compression`, `compression_level`, `chunksize`
and `version` keywords.

.. versionadded:: 1.1.0

Expand Down Expand Up @@ -4631,8 +4625,8 @@ def select_dtypes(self, include=None, exclude=None) -> Self:
* To select timedeltas, use ``np.timedelta64``, ``'timedelta'`` or
``'timedelta64'``
* To select Pandas categorical dtypes, use ``'category'``
* To select Pandas datetimetz dtypes, use ``'datetimetz'`` (new in
0.20.0) or ``'datetime64[ns, tz]'``
* To select Pandas datetimetz dtypes, use ``'datetimetz'``
or ``'datetime64[ns, tz]'``
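
A small sketch on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1], "b": ["x"], "c": [1.0]})

# 'number' covers all numeric dtypes (int, float, complex, ...)
numeric = df.select_dtypes(include="number")
print(list(numeric.columns))  # ['a', 'c']
```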

Examples
--------
Expand Down Expand Up @@ -9983,9 +9977,6 @@ def join(
Parameters `on`, `lsuffix`, and `rsuffix` are not supported when
passing a list of `DataFrame` objects.

Support for specifying index levels as the `on` parameter was added
in version 0.23.0.

Examples
--------
>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
Expand Down
6 changes: 2 additions & 4 deletions pandas/core/generic.py
Expand Up @@ -2606,7 +2606,7 @@ def to_hdf(
A value of 0 or None disables compression.
complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
Specifies the compression library to be used.
As of v0.20.2 these additional compressors for Blosc are supported
These additional compressors for Blosc are supported
(default if no compressor specified: 'blosc:blosclz'):
{'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
'blosc:zlib', 'blosc:zstd'}.
Expand Down Expand Up @@ -7537,9 +7537,7 @@ def interpolate(
'cubicspline': Wrappers around the SciPy interpolation methods of
similar names. See `Notes`.
* 'from_derivatives': Refers to
`scipy.interpolate.BPoly.from_derivatives` which
replaces 'piecewise_polynomial' interpolation method in
scipy 0.18.
`scipy.interpolate.BPoly.from_derivatives`.

axis : {{0 or 'index', 1 or 'columns', None}}, default None
Axis to interpolate along. For `Series` this parameter is unused
Expand Down
1 change: 0 additions & 1 deletion pandas/core/indexes/interval.py
Expand Up @@ -154,7 +154,6 @@ def _new_IntervalIndex(cls, d):
"klass": "IntervalIndex",
"summary": "Immutable index of intervals that are closed on the same side.",
"name": _index_doc_kwargs["name"],
"versionadded": "0.20.0",
"extra_attributes": "is_overlapping\nvalues\n",
"extra_methods": "",
"examples": textwrap.dedent(
Expand Down
4 changes: 1 addition & 3 deletions pandas/core/resample.py
Expand Up @@ -886,9 +886,7 @@ def interpolate(
'cubicspline': Wrappers around the SciPy interpolation methods of
similar names. See `Notes`.
* 'from_derivatives': Refers to
`scipy.interpolate.BPoly.from_derivatives` which
replaces 'piecewise_polynomial' interpolation method in
scipy 0.18.
`scipy.interpolate.BPoly.from_derivatives`.

axis : {{0 or 'index', 1 or 'columns', None}}, default None
Axis to interpolate along. For `Series` this parameter is unused
Expand Down
4 changes: 0 additions & 4 deletions pandas/core/reshape/merge.py
Expand Up @@ -389,10 +389,6 @@ def merge_asof(
- A "nearest" search selects the row in the right DataFrame whose 'on'
key is closest in absolute distance to the left's key.

The default is "backward" and is compatible in versions below 0.20.0.
The direction parameter was added in version 0.20.0 and introduces
"forward" and "nearest".

Optionally match on equivalent keys with 'by' before searching with 'on'.
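
A backward-search sketch on toy frames (both sorted on the key, as the function requires):

```python
import pandas as pd

left = pd.DataFrame({"t": [1, 5, 10]})
right = pd.DataFrame({"t": [2, 6, 11], "v": ["a", "b", "c"]})

# Default direction="backward": last right key <= each left key;
# t=1 has no such key, so its match is NaN
result = pd.merge_asof(left, right, on="t")
print(result["v"].tolist())  # [nan, 'a', 'b']
```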

Parameters
Expand Down
6 changes: 3 additions & 3 deletions pandas/core/series.py
Expand Up @@ -2891,7 +2891,7 @@ def dot(self, other: AnyArrayLike) -> Series | np.ndarray:
one, or the Series and each columns of a DataFrame, or the Series and
each columns of an array.

It can also be called using `self @ other` in Python >= 3.5.
It can also be called using `self @ other`.

Parameters
----------
Expand Down Expand Up @@ -2963,13 +2963,13 @@ def dot(self, other: AnyArrayLike) -> Series | np.ndarray:

def __matmul__(self, other):
"""
Matrix multiplication using binary `@` operator in Python>=3.5.
Matrix multiplication using binary `@` operator.
"""
return self.dot(other)

def __rmatmul__(self, other):
"""
Matrix multiplication using binary `@` operator in Python>=3.5.
Matrix multiplication using binary `@` operator.
"""
return self.dot(np.transpose(other))

Expand Down
2 changes: 1 addition & 1 deletion pandas/io/common.py
Expand Up @@ -243,7 +243,7 @@ def stringify_path(

Notes
-----
Objects supporting the fspath protocol (python 3.6+) are coerced
Objects supporting the fspath protocol are coerced
according to its __fspath__ method.
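
The protocol itself can be sketched with the standard library alone:

```python
import os
from pathlib import Path

p = Path("data") / "file.csv"

# os.fspath invokes p.__fspath__() and returns the plain string path
print(os.fspath(p) == os.path.join("data", "file.csv"))  # True
```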

Any other object is passed through unchanged, which includes bytes,
Expand Down
6 changes: 0 additions & 6 deletions pandas/io/gbq.py
Expand Up @@ -134,8 +134,6 @@ def read_gbq(
If set, limit the maximum number of rows to fetch from the query
results.

*New in version 0.12.0 of pandas-gbq*.

.. versionadded:: 1.1.0
progress_bar_type : Optional, str
If set, use the `tqdm <https://tqdm.github.io/>`__ library to
Expand All @@ -156,10 +154,6 @@ def read_gbq(
Use the :func:`tqdm.tqdm_gui` function to display a
progress bar as a graphical dialog box.

Note that this feature requires version 0.12.0 or later of the
``pandas-gbq`` package. And it requires the ``tqdm`` package. Slightly
different than ``pandas-gbq``, here the default is ``None``.

Returns
-------
df: DataFrame
Expand Down
2 changes: 1 addition & 1 deletion pandas/io/pytables.py
Expand Up @@ -515,7 +515,7 @@ class HDFStore:
A value of 0 or None disables compression.
complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
Specifies the compression library to be used.
As of v0.20.2 these additional compressors for Blosc are supported
These additional compressors for Blosc are supported
(default if no compressor specified: 'blosc:blosclz'):
{'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
'blosc:zlib', 'blosc:zstd'}.
Expand Down
2 changes: 1 addition & 1 deletion pandas/tests/io/formats/test_to_csv.py
Expand Up @@ -508,7 +508,7 @@ def test_to_csv_stdout_file(self, capsys):
reason=(
"Especially in Windows, file stream should not be passed"
"to csv writer without newline='' option."
"(https://docs.python.org/3.6/library/csv.html#csv.writer)"
"(https://docs.python.org/3/library/csv.html#csv.writer)"
),
)
def test_to_csv_write_to_open_file(self):
Expand Down
3 changes: 1 addition & 2 deletions pandas/tests/series/test_constructors.py
Expand Up @@ -1346,8 +1346,7 @@ def test_constructor_dict_list_value_explicit_dtype(self):

def test_constructor_dict_order(self):
# GH19018
# initialization ordering: by insertion order if python>= 3.6, else
# order by value
# initialization ordering: by insertion order
d = {"b": 1, "a": 0, "c": 2}
result = Series(d)
expected = Series([1, 0, 2], index=list("bac"))
Expand Down