Commit be170fc

DOC: Remove notes to old Python/package versions (#52640)
1 parent 70a31e7 commit be170fc

22 files changed: 24 additions, 93 deletions

doc/source/development/extending.rst

+1 -1

@@ -450,7 +450,7 @@ Below is an example to define two original properties, "internal_cache" as a tem
 Plotting backends
 -----------------

-Starting in 0.25 pandas can be extended with third-party plotting backends. The
+pandas can be extended with third-party plotting backends. The
 main idea is letting users select a plotting backend different than the provided
 one based on Matplotlib. For example:

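For context, the backend switch this hunk documents is driven by a pandas option; a minimal sketch, assuming a third-party backend is installed (the "plotly" name below is only an illustrative choice):

    import pandas as pd

    df = pd.DataFrame({"x": [1, 2, 3], "y": [3, 1, 2]})

    # Select a third-party backend for all subsequent .plot() calls.
    pd.set_option("plotting.backend", "plotly")
    fig = df.plot(x="x", y="y")

    # Restore the default Matplotlib-based backend.
    pd.set_option("plotting.backend", "matplotlib")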
doc/source/getting_started/install.rst

-8

@@ -149,14 +149,6 @@ to install pandas with the optional dependencies to read Excel files.

 The full list of extras that can be installed can be found in the :ref:`dependency section.<install.optional_dependencies>`

-Installing with ActivePython
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Installation instructions for
-`ActivePython <https://www.activestate.com/products/python/>`__ can be found
-`here <https://www.activestate.com/products/python/>`__. Versions
-2.7, 3.5 and 3.6 include pandas.
-
 Installing using your Linux distribution's package manager.
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

doc/source/user_guide/advanced.rst

+1 -1

@@ -918,7 +918,7 @@ If you select a label *contained* within an interval, this will also select the
 df.loc[2.5]
 df.loc[[2.5, 3.5]]

-Selecting using an ``Interval`` will only return exact matches (starting from pandas 0.25.0).
+Selecting using an ``Interval`` will only return exact matches.

 .. ipython:: python

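A minimal sketch of the exact-match behaviour described by the changed line, with an arbitrarily chosen interval index:

    import pandas as pd

    df = pd.DataFrame(
        {"A": [1, 2, 3, 4]},
        index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4]),
    )

    df.loc[2.5]                 # a scalar selects the interval that contains it
    df.loc[pd.Interval(1, 2)]   # an Interval must match an index entry exactly
    # df.loc[pd.Interval(0.5, 2.5)] would raise a KeyError: no exact match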
doc/source/user_guide/io.rst

+2 -7

@@ -3999,7 +3999,7 @@ any pickled pandas object (or any other pickled object) from file:

 .. warning::

-:func:`read_pickle` is only guaranteed backwards compatible back to pandas version 0.20.3
+:func:`read_pickle` is only guaranteed backwards compatible back to a few minor releases.

 .. _io.pickle.compression:

@@ -5922,11 +5922,6 @@ And then issue the following queries:
 Google BigQuery
 ---------------

-.. warning::
-
-Starting in 0.20.0, pandas has split off Google BigQuery support into the
-separate package ``pandas-gbq``. You can ``pip install pandas-gbq`` to get it.
-
 The ``pandas-gbq`` package provides functionality to read/write from Google BigQuery.

 pandas integrates with this external package. if ``pandas-gbq`` is installed, you can

@@ -6114,7 +6109,7 @@ SAS formats
 -----------

 The top-level function :func:`read_sas` can read (but not write) SAS
-XPORT (.xpt) and (since *v0.18.0*) SAS7BDAT (.sas7bdat) format files.
+XPORT (.xpt) and SAS7BDAT (.sas7bdat) format files.

 SAS files only contain two value types: ASCII text and floating point
 values (usually 8 bytes but sometimes truncated). For xport files,

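For reference, the pickle round trip that the reworded warning qualifies looks like this; a minimal sketch with an arbitrary file name (the weaker guarantee only concerns pickles written by much older pandas versions):

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

    df.to_pickle("frame.pkl")              # written with the current pandas version
    restored = pd.read_pickle("frame.pkl")
    assert restored.equals(df)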
doc/source/user_guide/merging.rst

-6

@@ -510,12 +510,6 @@ all standard database join operations between ``DataFrame`` or named ``Series``
 dataset.
 * "many_to_many" or "m:m": allowed, but does not result in checks.

-.. note::
-
-Support for specifying index levels as the ``on``, ``left_on``, and
-``right_on`` parameters was added in version 0.23.0.
-Support for merging named ``Series`` objects was added in version 0.24.0.
-
 The return type will be the same as ``left``. If ``left`` is a ``DataFrame`` or named ``Series``
 and ``right`` is a subclass of ``DataFrame``, the return type will still be ``DataFrame``.

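The two capabilities whose version notes are removed here are simply part of the current API; a minimal sketch on made-up data:

    import pandas as pd

    left = pd.DataFrame({"v1": [1, 2]}, index=pd.Index(["a", "b"], name="key"))
    right = pd.Series([10, 20], index=pd.Index(["a", "b"], name="key"), name="v2")

    # "key" is an index level on both sides, and the right operand is a named Series.
    pd.merge(left, right, on="key")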
doc/source/user_guide/missing_data.rst

-5

@@ -182,11 +182,6 @@ account for missing data. For example:
 Sum/prod of empties/nans
 ~~~~~~~~~~~~~~~~~~~~~~~~

-.. warning::
-
-This behavior is now standard as of v0.22.0 and is consistent with the default in ``numpy``; previously sum/prod of all-NA or empty Series/DataFrames would return NaN.
-See :ref:`v0.22.0 whatsnew <whatsnew_0220>` for more.
-
 The sum of an empty or all-NA Series or column of a DataFrame is 0.

 .. ipython:: python

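A short illustration of the default behaviour the removed warning used to date:

    import numpy as np
    import pandas as pd

    s = pd.Series([np.nan, np.nan])

    s.sum()                            # 0.0: an all-NA sum is zero
    pd.Series([], dtype=float).sum()   # 0.0: an empty sum is zero
    s.prod()                           # 1.0: the empty-product convention
    s.sum(min_count=1)                 # nan: opt back into NaN via min_count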
doc/source/user_guide/text.rst

+1 -16

@@ -206,8 +206,7 @@ and replacing any remaining whitespaces with underscores:

 .. warning::

-Before v.0.25.0, the ``.str``-accessor did only the most rudimentary type checks. Starting with
-v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
+The type of the Series is inferred and the allowed types (i.e. strings) are enforced.

 Generally speaking, the ``.str`` accessor is intended to work only on strings. With very few
 exceptions, other uses are not supported, and may be disabled at a later point.

@@ -423,11 +422,6 @@ the ``join``-keyword.
 s.str.cat(u)
 s.str.cat(u, join="left")

-.. warning::
-
-If the ``join`` keyword is not passed, the method :meth:`~Series.str.cat` will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
-but a ``FutureWarning`` will be raised if any of the involved indexes differ, since this default will change to ``join='left'`` in a future version.
-
 The usual options are available for ``join`` (one of ``'left', 'outer', 'inner', 'right'``).
 In particular, alignment also means that the different lengths do not need to coincide anymore.

@@ -503,15 +497,6 @@ Extracting substrings
 Extract first match in each subject (extract)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-.. warning::
-
-Before version 0.23, argument ``expand`` of the ``extract`` method defaulted to
-``False``. When ``expand=False``, ``expand`` returns a ``Series``, ``Index``, or
-``DataFrame``, depending on the subject and regular expression
-pattern. When ``expand=True``, it always returns a ``DataFrame``,
-which is more consistent and less confusing from the perspective of a user.
-``expand=True`` has been the default since version 0.23.0.
-
 The ``extract`` method accepts a `regular expression
 <https://docs.python.org/3/library/re.html>`__ with at least one
 capture group.

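A minimal sketch of the ``extract`` call the last hunk leads into, with an arbitrary pattern; ``expand=True`` (the long-standing default) always returns a ``DataFrame``:

    import pandas as pd

    s = pd.Series(["a1", "b2", "c3"])

    s.str.extract(r"([ab])(\d)")            # DataFrame, one column per capture group
    s.str.extract(r"(\d)", expand=False)    # a single group with expand=False gives a Series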
doc/source/user_guide/visualization.rst

+1 -1

@@ -1794,7 +1794,7 @@ when plotting a large number of points.
 Plotting backends
 -----------------

-Starting in version 0.25, pandas can be extended with third-party plotting backends. The
+pandas can be extended with third-party plotting backends. The
 main idea is letting users select a plotting backend different than the provided
 one based on Matplotlib.

pandas/core/arrays/interval.py

-3

@@ -124,8 +124,6 @@
 ] = """
 %(summary)s

-.. versionadded:: %(versionadded)s
-
 Parameters
 ----------
 data : array-like (1-dimensional)

@@ -187,7 +185,6 @@
 % {
 "klass": "IntervalArray",
 "summary": "Pandas array for interval data that are closed on the same side.",
-"versionadded": "0.24.0",
 "name": "",
 "extra_attributes": "",
 "extra_methods": "",

pandas/core/config_init.py

+1 -1

@@ -275,7 +275,7 @@ def use_numba_cb(key) -> None:
 pc_large_repr_doc = """
 : 'truncate'/'info'
 For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
-show a truncated table (the default from 0.13), or switch to the view from
+show a truncated table, or switch to the view from
 df.info() (the behaviour in earlier versions of pandas).
 """

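The option this template documents is set through the usual options machinery; a minimal sketch (the frame size is arbitrary):

    import pandas as pd

    wide = pd.DataFrame(0, index=range(500), columns=range(500))

    pd.set_option("display.large_repr", "info")      # repr switches to df.info()-style output
    print(wide)

    pd.set_option("display.large_repr", "truncate")  # back to the default truncated table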
pandas/core/dtypes/concat.py

-2

@@ -240,8 +240,6 @@ def union_categoricals(
 ...
 TypeError: to union ordered Categoricals, all categories must be the same

-New in version 0.20.0
-
 Ordered categoricals with different categories or orderings can be
 combined by using the `ignore_ordered=True` argument.

pandas/core/frame.py

+7 -16

@@ -379,12 +379,6 @@
 merge_asof : Merge on nearest keys.
 DataFrame.join : Similar method using indices.

-Notes
------
-Support for specifying index levels as the `on`, `left_on`, and
-`right_on` parameters was added in version 0.23.0
-Support for merging named Series objects was added in version 0.24.0
-
 Examples
 --------
 >>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],

@@ -1501,7 +1495,7 @@ def dot(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series:
 This method computes the matrix product between the DataFrame and the
 values of an other Series, DataFrame or a numpy array.

-It can also be called using ``self @ other`` in Python >= 3.5.
+It can also be called using ``self @ other``.

 Parameters
 ----------

@@ -1619,13 +1613,13 @@ def __matmul__(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series:

 def __matmul__(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series:
 """
-Matrix multiplication using binary `@` operator in Python>=3.5.
+Matrix multiplication using binary `@` operator.
 """
 return self.dot(other)

 def __rmatmul__(self, other) -> DataFrame:
 """
-Matrix multiplication using binary `@` operator in Python>=3.5.
+Matrix multiplication using binary `@` operator.
 """
 try:
 return self.T.dot(np.transpose(other)).T

@@ -2700,8 +2694,8 @@ def to_feather(self, path: FilePath | WriteBuffer[bytes], **kwargs) -> None:
 it will be used as Root Directory path when writing a partitioned dataset.
 **kwargs :
 Additional keywords passed to :func:`pyarrow.feather.write_feather`.
-Starting with pyarrow 0.17, this includes the `compression`,
-`compression_level`, `chunksize` and `version` keywords.
+This includes the `compression`, `compression_level`, `chunksize`
+and `version` keywords.

 .. versionadded:: 1.1.0

@@ -4631,8 +4625,8 @@ def select_dtypes(self, include=None, exclude=None) -> Self:
 * To select timedeltas, use ``np.timedelta64``, ``'timedelta'`` or
 ``'timedelta64'``
 * To select Pandas categorical dtypes, use ``'category'``
-* To select Pandas datetimetz dtypes, use ``'datetimetz'`` (new in
-0.20.0) or ``'datetime64[ns, tz]'``
+* To select Pandas datetimetz dtypes, use ``'datetimetz'``
+or ``'datetime64[ns, tz]'``

 Examples
 --------

@@ -9983,9 +9977,6 @@ def join(
 Parameters `on`, `lsuffix`, and `rsuffix` are not supported when
 passing a list of `DataFrame` objects.

-Support for specifying index levels as the `on` parameter was added
-in version 0.23.0.
-
 Examples
 --------
 >>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],

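With the Python-version caveat dropped, the ``@`` spelling is simply the operator form of ``DataFrame.dot``; a minimal sketch:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame([[1, 2], [3, 4]])
    other = pd.DataFrame([[5, 6], [7, 8]])

    result = df @ other                 # dispatches to DataFrame.__matmul__ / .dot
    assert result.equals(df.dot(other))

    df @ np.array([1, 1])               # array-likes on the right are accepted too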
pandas/core/generic.py

+2 -4

@@ -2606,7 +2606,7 @@ def to_hdf(
 A value of 0 or None disables compression.
 complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
 Specifies the compression library to be used.
-As of v0.20.2 these additional compressors for Blosc are supported
+These additional compressors for Blosc are supported
 (default if no compressor specified: 'blosc:blosclz'):
 {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
 'blosc:zlib', 'blosc:zstd'}.

@@ -7537,9 +7537,7 @@ def interpolate(
 'cubicspline': Wrappers around the SciPy interpolation methods of
 similar names. See `Notes`.
 * 'from_derivatives': Refers to
-`scipy.interpolate.BPoly.from_derivatives` which
-replaces 'piecewise_polynomial' interpolation method in
-scipy 0.18.
+`scipy.interpolate.BPoly.from_derivatives`.

 axis : {{0 or 'index', 1 or 'columns', None}}, default None
 Axis to interpolate along. For `Series` this parameter is unused

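The interpolation method named in the second hunk is requested like any other ``method`` value; a minimal sketch (requires SciPy to be installed):

    import numpy as np
    import pandas as pd

    s = pd.Series([0.0, np.nan, np.nan, 3.0])

    # Wraps scipy.interpolate.BPoly.from_derivatives under the hood.
    s.interpolate(method="from_derivatives")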
pandas/core/indexes/interval.py

-1

@@ -154,7 +154,6 @@ def _new_IntervalIndex(cls, d):
 "klass": "IntervalIndex",
 "summary": "Immutable index of intervals that are closed on the same side.",
 "name": _index_doc_kwargs["name"],
-"versionadded": "0.20.0",
 "extra_attributes": "is_overlapping\nvalues\n",
 "extra_methods": "",
 "examples": textwrap.dedent(

pandas/core/resample.py

+1 -3

@@ -886,9 +886,7 @@ def interpolate(
 'cubicspline': Wrappers around the SciPy interpolation methods of
 similar names. See `Notes`.
 * 'from_derivatives': Refers to
-`scipy.interpolate.BPoly.from_derivatives` which
-replaces 'piecewise_polynomial' interpolation method in
-scipy 0.18.
+`scipy.interpolate.BPoly.from_derivatives`.

 axis : {{0 or 'index', 1 or 'columns', None}}, default None
 Axis to interpolate along. For `Series` this parameter is unused

pandas/core/reshape/merge.py

-4

@@ -389,10 +389,6 @@ def merge_asof(
 - A "nearest" search selects the row in the right DataFrame whose 'on'
 key is closest in absolute distance to the left's key.

-The default is "backward" and is compatible in versions below 0.20.0.
-The direction parameter was added in version 0.20.0 and introduces
-"forward" and "nearest".
-
 Optionally match on equivalent keys with 'by' before searching with 'on'.

 Parameters

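The three ``direction`` values mentioned by the removed note are all current ``merge_asof`` behaviour; a minimal sketch on made-up, pre-sorted keys:

    import pandas as pd

    left = pd.DataFrame({"t": [2, 5, 9], "v": ["a", "b", "c"]})
    right = pd.DataFrame({"t": [1, 4, 6, 10], "w": [10, 40, 60, 100]})

    pd.merge_asof(left, right, on="t")                        # default: "backward"
    pd.merge_asof(left, right, on="t", direction="forward")   # first key at or after left's
    pd.merge_asof(left, right, on="t", direction="nearest")   # closest key in absolute distance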
pandas/core/series.py

+3 -3

@@ -2891,7 +2891,7 @@ def dot(self, other: AnyArrayLike) -> Series | np.ndarray:
 one, or the Series and each columns of a DataFrame, or the Series and
 each columns of an array.

-It can also be called using `self @ other` in Python >= 3.5.
+It can also be called using `self @ other`.

 Parameters
 ----------

@@ -2963,13 +2963,13 @@ def dot(self, other: AnyArrayLike) -> Series | np.ndarray:

 def __matmul__(self, other):
 """
-Matrix multiplication using binary `@` operator in Python>=3.5.
+Matrix multiplication using binary `@` operator.
 """
 return self.dot(other)

 def __rmatmul__(self, other):
 """
-Matrix multiplication using binary `@` operator in Python>=3.5.
+Matrix multiplication using binary `@` operator.
 """
 return self.dot(np.transpose(other))

pandas/io/common.py

+1 -1

@@ -243,7 +243,7 @@ def stringify_path(

 Notes
 -----
-Objects supporting the fspath protocol (python 3.6+) are coerced
+Objects supporting the fspath protocol are coerced
 according to its __fspath__ method.

 Any other object is passed through unchanged, which includes bytes,

pandas/io/gbq.py

-6

@@ -134,8 +134,6 @@ def read_gbq(
 If set, limit the maximum number of rows to fetch from the query
 results.

-*New in version 0.12.0 of pandas-gbq*.
-
 .. versionadded:: 1.1.0
 progress_bar_type : Optional, str
 If set, use the `tqdm <https://tqdm.github.io/>`__ library to

@@ -156,10 +154,6 @@ def read_gbq(
 Use the :func:`tqdm.tqdm_gui` function to display a
 progress bar as a graphical dialog box.

-Note that this feature requires version 0.12.0 or later of the
-``pandas-gbq`` package. And it requires the ``tqdm`` package. Slightly
-different than ``pandas-gbq``, here the default is ``None``.
-
 Returns
 -------
 df: DataFrame

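The parameters touched in both hunks are exercised as below; a hedged sketch only, since it needs the separate ``pandas-gbq`` package, ``tqdm`` for the progress bar, Google Cloud credentials, and a real project id (the project id and query are placeholders):

    import pandas as pd

    df = pd.read_gbq(
        "SELECT 1 AS one",          # placeholder query
        project_id="my-project",    # placeholder project id
        max_results=10,
        progress_bar_type="tqdm",
    )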
pandas/io/pytables.py

+1 -1

@@ -515,7 +515,7 @@ class HDFStore:
 A value of 0 or None disables compression.
 complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
 Specifies the compression library to be used.
-As of v0.20.2 these additional compressors for Blosc are supported
+These additional compressors for Blosc are supported
 (default if no compressor specified: 'blosc:blosclz'):
 {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
 'blosc:zlib', 'blosc:zstd'}.

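One of the Blosc compressors listed in this docstring can be selected like so; a minimal sketch (needs the optional PyTables dependency; file name and settings are illustrative):

    import pandas as pd

    df = pd.DataFrame({"a": range(1000)})

    # complib chooses the compression library, complevel its strength (0/None disables).
    df.to_hdf("store.h5", key="df", complevel=9, complib="blosc:zstd")

    with pd.HDFStore("store.h5") as store:
        restored = store["df"]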
pandas/tests/io/formats/test_to_csv.py

+1 -1

@@ -508,7 +508,7 @@ def test_to_csv_stdout_file(self, capsys):
 reason=(
 "Especially in Windows, file stream should not be passed"
 "to csv writer without newline='' option."
-"(https://docs.python.org/3.6/library/csv.html#csv.writer)"
+"(https://docs.python.org/3/library/csv.html#csv.writer)"
 ),
 )
 def test_to_csv_write_to_open_file(self):

pandas/tests/series/test_constructors.py

+1 -2

@@ -1346,8 +1346,7 @@ def test_constructor_dict_list_value_explicit_dtype(self):

 def test_constructor_dict_order(self):
 # GH19018
-# initialization ordering: by insertion order if python>= 3.6, else
-# order by value
+# initialization ordering: by insertion order
 d = {"b": 1, "a": 0, "c": 2}
 result = Series(d)
 expected = Series([1, 0, 2], index=list("bac"))
