diff --git a/doc/source/development/extending.rst b/doc/source/development/extending.rst index 1d52a5595472b..b829cfced6962 100644 --- a/doc/source/development/extending.rst +++ b/doc/source/development/extending.rst @@ -450,7 +450,7 @@ Below is an example to define two original properties, "internal_cache" as a tem Plotting backends ----------------- -Starting in 0.25 pandas can be extended with third-party plotting backends. The +pandas can be extended with third-party plotting backends. The main idea is letting users select a plotting backend different than the provided one based on Matplotlib. For example: diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst index e82cf8ff93bbc..9aa868dab30a6 100644 --- a/doc/source/getting_started/install.rst +++ b/doc/source/getting_started/install.rst @@ -149,14 +149,6 @@ to install pandas with the optional dependencies to read Excel files. The full list of extras that can be installed can be found in the :ref:`dependency section.` -Installing with ActivePython -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Installation instructions for -`ActivePython `__ can be found -`here `__. Versions -2.7, 3.5 and 3.6 include pandas. - Installing using your Linux distribution's package manager. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/user_guide/advanced.rst b/doc/source/user_guide/advanced.rst index 68024fbd05727..d76c7e2bf3b03 100644 --- a/doc/source/user_guide/advanced.rst +++ b/doc/source/user_guide/advanced.rst @@ -918,7 +918,7 @@ If you select a label *contained* within an interval, this will also select the df.loc[2.5] df.loc[[2.5, 3.5]] -Selecting using an ``Interval`` will only return exact matches (starting from pandas 0.25.0). +Selecting using an ``Interval`` will only return exact matches. .. 
ipython:: python diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst index 101932a23ca6a..dd6ea6eccc85c 100644 --- a/doc/source/user_guide/io.rst +++ b/doc/source/user_guide/io.rst @@ -3999,7 +3999,7 @@ any pickled pandas object (or any other pickled object) from file: .. warning:: - :func:`read_pickle` is only guaranteed backwards compatible back to pandas version 0.20.3 + :func:`read_pickle` is only guaranteed to be backwards compatible back to a few minor releases. .. _io.pickle.compression: @@ -5922,11 +5922,6 @@ And then issue the following queries: Google BigQuery --------------- -.. warning:: - - Starting in 0.20.0, pandas has split off Google BigQuery support into the - separate package ``pandas-gbq``. You can ``pip install pandas-gbq`` to get it. - The ``pandas-gbq`` package provides functionality to read/write from Google BigQuery. pandas integrates with this external package. if ``pandas-gbq`` is installed, you can @@ -6114,7 +6109,7 @@ SAS formats ----------- The top-level function :func:`read_sas` can read (but not write) SAS -XPORT (.xpt) and (since *v0.18.0*) SAS7BDAT (.sas7bdat) format files. +XPORT (.xpt) and SAS7BDAT (.sas7bdat) format files. SAS files only contain two value types: ASCII text and floating point values (usually 8 bytes but sometimes truncated). For xport files, diff --git a/doc/source/user_guide/merging.rst b/doc/source/user_guide/merging.rst index ce4b3d1e8c7f3..cf8d7a05bf6e7 100644 --- a/doc/source/user_guide/merging.rst +++ b/doc/source/user_guide/merging.rst @@ -510,12 +510,6 @@ all standard database join operations between ``DataFrame`` or named ``Series`` dataset. * "many_to_many" or "m:m": allowed, but does not result in checks. -.. note:: - - Support for specifying index levels as the ``on``, ``left_on``, and - ``right_on`` parameters was added in version 0.23.0. - Support for merging named ``Series`` objects was added in version 0.24.0. - The return type will be the same as ``left``.
If ``left`` is a ``DataFrame`` or named ``Series`` and ``right`` is a subclass of ``DataFrame``, the return type will still be ``DataFrame``. diff --git a/doc/source/user_guide/missing_data.rst b/doc/source/user_guide/missing_data.rst index 467c343f4ad1a..4d645cd75ac76 100644 --- a/doc/source/user_guide/missing_data.rst +++ b/doc/source/user_guide/missing_data.rst @@ -182,11 +182,6 @@ account for missing data. For example: Sum/prod of empties/nans ~~~~~~~~~~~~~~~~~~~~~~~~ -.. warning:: - - This behavior is now standard as of v0.22.0 and is consistent with the default in ``numpy``; previously sum/prod of all-NA or empty Series/DataFrames would return NaN. - See :ref:`v0.22.0 whatsnew ` for more. - The sum of an empty or all-NA Series or column of a DataFrame is 0. .. ipython:: python diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst index f188c08b7bb94..4e0b18c73ee29 100644 --- a/doc/source/user_guide/text.rst +++ b/doc/source/user_guide/text.rst @@ -206,8 +206,7 @@ and replacing any remaining whitespaces with underscores: .. warning:: - Before v.0.25.0, the ``.str``-accessor did only the most rudimentary type checks. Starting with - v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously. + The type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously. Generally speaking, the ``.str`` accessor is intended to work only on strings. With very few exceptions, other uses are not supported, and may be disabled at a later point. @@ -423,11 +422,6 @@ the ``join``-keyword. s.str.cat(u) s.str.cat(u, join="left") -.. warning:: - - If the ``join`` keyword is not passed, the method :meth:`~Series.str.cat` will currently fall back to the behavior before version 0.23.0 (i.e. no alignment), - but a ``FutureWarning`` will be raised if any of the involved indexes differ, since this default will change to ``join='left'`` in a future version.
- The usual options are available for ``join`` (one of ``'left', 'outer', 'inner', 'right'``). In particular, alignment also means that the different lengths do not need to coincide anymore. @@ -503,15 +497,6 @@ Extracting substrings Extract first match in each subject (extract) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. warning:: - - Before version 0.23, argument ``expand`` of the ``extract`` method defaulted to - ``False``. When ``expand=False``, ``expand`` returns a ``Series``, ``Index``, or - ``DataFrame``, depending on the subject and regular expression - pattern. When ``expand=True``, it always returns a ``DataFrame``, - which is more consistent and less confusing from the perspective of a user. - ``expand=True`` has been the default since version 0.23.0. - The ``extract`` method accepts a `regular expression `__ with at least one capture group. diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst index 844be80abd1ff..ae8de4d5386b1 100644 --- a/doc/source/user_guide/visualization.rst +++ b/doc/source/user_guide/visualization.rst @@ -1794,7 +1794,7 @@ when plotting a large number of points. Plotting backends ----------------- -Starting in version 0.25, pandas can be extended with third-party plotting backends. The +pandas can be extended with third-party plotting backends. The main idea is letting users select a plotting backend different than the provided one based on Matplotlib. diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py index 1d233e0ebde1a..ea35a86095e15 100644 --- a/pandas/core/arrays/interval.py +++ b/pandas/core/arrays/interval.py @@ -124,8 +124,6 @@ ] = """ %(summary)s -.. 
versionadded:: %(versionadded)s - Parameters ---------- data : array-like (1-dimensional) @@ -187,7 +185,6 @@ % { "klass": "IntervalArray", "summary": "Pandas array for interval data that are closed on the same side.", - "versionadded": "0.24.0", "name": "", "extra_attributes": "", "extra_methods": "", diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py index d3bdcee7a7341..5f1aa3a1e9535 100644 --- a/pandas/core/config_init.py +++ b/pandas/core/config_init.py @@ -275,7 +275,7 @@ def use_numba_cb(key) -> None: pc_large_repr_doc = """ : 'truncate'/'info' For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can - show a truncated table (the default from 0.13), or switch to the view from + show a truncated table, or switch to the view from df.info() (the behaviour in earlier versions of pandas). """ diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py index b55c8cd31c110..24fe1887002c9 100644 --- a/pandas/core/dtypes/concat.py +++ b/pandas/core/dtypes/concat.py @@ -240,8 +240,6 @@ def union_categoricals( ... TypeError: to union ordered Categoricals, all categories must be the same - New in version 0.20.0 - Ordered categoricals with different categories or orderings can be combined by using the `ignore_ordered=True` argument. diff --git a/pandas/core/frame.py b/pandas/core/frame.py index dfee04a784630..0e8f2b0044c66 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -379,12 +379,6 @@ merge_asof : Merge on nearest keys. DataFrame.join : Similar method using indices. 
-Notes ------ -Support for specifying index levels as the `on`, `left_on`, and -`right_on` parameters was added in version 0.23.0 -Support for merging named Series objects was added in version 0.24.0 - Examples -------- >>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'], @@ -1501,7 +1495,7 @@ def dot(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series: This method computes the matrix product between the DataFrame and the values of an other Series, DataFrame or a numpy array. - It can also be called using ``self @ other`` in Python >= 3.5. + It can also be called using ``self @ other``. Parameters ---------- @@ -1619,13 +1613,13 @@ def __matmul__(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series: def __matmul__(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series: """ - Matrix multiplication using binary `@` operator in Python>=3.5. + Matrix multiplication using binary `@` operator. """ return self.dot(other) def __rmatmul__(self, other) -> DataFrame: """ - Matrix multiplication using binary `@` operator in Python>=3.5. + Matrix multiplication using binary `@` operator. """ try: return self.T.dot(np.transpose(other)).T @@ -2700,8 +2694,8 @@ def to_feather(self, path: FilePath | WriteBuffer[bytes], **kwargs) -> None: it will be used as Root Directory path when writing a partitioned dataset. **kwargs : Additional keywords passed to :func:`pyarrow.feather.write_feather`. - Starting with pyarrow 0.17, this includes the `compression`, - `compression_level`, `chunksize` and `version` keywords. + This includes the `compression`, `compression_level`, `chunksize` + and `version` keywords. .. 
versionadded:: 1.1.0 @@ -4631,8 +4625,8 @@ def select_dtypes(self, include=None, exclude=None) -> Self: * To select timedeltas, use ``np.timedelta64``, ``'timedelta'`` or ``'timedelta64'`` * To select Pandas categorical dtypes, use ``'category'`` - * To select Pandas datetimetz dtypes, use ``'datetimetz'`` (new in - 0.20.0) or ``'datetime64[ns, tz]'`` + * To select Pandas datetimetz dtypes, use ``'datetimetz'`` + or ``'datetime64[ns, tz]'`` Examples -------- @@ -9983,9 +9977,6 @@ def join( Parameters `on`, `lsuffix`, and `rsuffix` are not supported when passing a list of `DataFrame` objects. - Support for specifying index levels as the `on` parameter was added - in version 0.23.0. - Examples -------- >>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'], diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 0c14c76ab539f..800aaf47e1631 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -2606,7 +2606,7 @@ def to_hdf( A value of 0 or None disables compression. complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib' Specifies the compression library to be used. - As of v0.20.2 these additional compressors for Blosc are supported + These additional compressors for Blosc are supported (default if no compressor specified: 'blosc:blosclz'): {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy', 'blosc:zlib', 'blosc:zstd'}. @@ -7537,9 +7537,7 @@ def interpolate( 'cubicspline': Wrappers around the SciPy interpolation methods of similar names. See `Notes`. * 'from_derivatives': Refers to - `scipy.interpolate.BPoly.from_derivatives` which - replaces 'piecewise_polynomial' interpolation method in - scipy 0.18. + `scipy.interpolate.BPoly.from_derivatives`. axis : {{0 or 'index', 1 or 'columns', None}}, default None Axis to interpolate along. 
For `Series` this parameter is unused diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py index 965c0ba9be1e3..8cf5151a8f0b5 100644 --- a/pandas/core/indexes/interval.py +++ b/pandas/core/indexes/interval.py @@ -154,7 +154,6 @@ def _new_IntervalIndex(cls, d): "klass": "IntervalIndex", "summary": "Immutable index of intervals that are closed on the same side.", "name": _index_doc_kwargs["name"], - "versionadded": "0.20.0", "extra_attributes": "is_overlapping\nvalues\n", "extra_methods": "", "examples": textwrap.dedent( diff --git a/pandas/core/resample.py b/pandas/core/resample.py index 0b9ebb1117821..50978275eb5e5 100644 --- a/pandas/core/resample.py +++ b/pandas/core/resample.py @@ -886,9 +886,7 @@ def interpolate( 'cubicspline': Wrappers around the SciPy interpolation methods of similar names. See `Notes`. * 'from_derivatives': Refers to - `scipy.interpolate.BPoly.from_derivatives` which - replaces 'piecewise_polynomial' interpolation method in - scipy 0.18. + `scipy.interpolate.BPoly.from_derivatives`. axis : {{0 or 'index', 1 or 'columns', None}}, default None Axis to interpolate along. For `Series` this parameter is unused diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py index 0281a0a9f562e..03773a77de0ae 100644 --- a/pandas/core/reshape/merge.py +++ b/pandas/core/reshape/merge.py @@ -389,10 +389,6 @@ def merge_asof( - A "nearest" search selects the row in the right DataFrame whose 'on' key is closest in absolute distance to the left's key. - The default is "backward" and is compatible in versions below 0.20.0. - The direction parameter was added in version 0.20.0 and introduces - "forward" and "nearest". - Optionally match on equivalent keys with 'by' before searching with 'on'. 
Parameters diff --git a/pandas/core/series.py b/pandas/core/series.py index e11eda33b2e34..a9d63c5d03bf8 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -2891,7 +2891,7 @@ def dot(self, other: AnyArrayLike) -> Series | np.ndarray: one, or the Series and each columns of a DataFrame, or the Series and each columns of an array. - It can also be called using `self @ other` in Python >= 3.5. + It can also be called using `self @ other`. Parameters ---------- @@ -2963,13 +2963,13 @@ def dot(self, other: AnyArrayLike) -> Series | np.ndarray: def __matmul__(self, other): """ - Matrix multiplication using binary `@` operator in Python>=3.5. + Matrix multiplication using binary `@` operator. """ return self.dot(other) def __rmatmul__(self, other): """ - Matrix multiplication using binary `@` operator in Python>=3.5. + Matrix multiplication using binary `@` operator. """ return self.dot(np.transpose(other)) diff --git a/pandas/io/common.py b/pandas/io/common.py index 13185603c7bac..02de416e5ce37 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -243,7 +243,7 @@ def stringify_path( Notes ----- - Objects supporting the fspath protocol (python 3.6+) are coerced + Objects supporting the fspath protocol are coerced according to its __fspath__ method. Any other object is passed through unchanged, which includes bytes, diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py index d6c73664ab6f2..286d2b187c700 100644 --- a/pandas/io/gbq.py +++ b/pandas/io/gbq.py @@ -134,8 +134,6 @@ def read_gbq( If set, limit the maximum number of rows to fetch from the query results. - *New in version 0.12.0 of pandas-gbq*. - .. versionadded:: 1.1.0 progress_bar_type : Optional, str If set, use the `tqdm `__ library to @@ -156,10 +154,6 @@ def read_gbq( Use the :func:`tqdm.tqdm_gui` function to display a progress bar as a graphical dialog box. - Note that this feature requires version 0.12.0 or later of the - ``pandas-gbq`` package. And it requires the ``tqdm`` package. 
Slightly - different than ``pandas-gbq``, here the default is ``None``. - Returns ------- df: DataFrame diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index da0ca940791ba..85000d49cdac6 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -515,7 +515,7 @@ class HDFStore: A value of 0 or None disables compression. complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib' Specifies the compression library to be used. - As of v0.20.2 these additional compressors for Blosc are supported + These additional compressors for Blosc are supported (default if no compressor specified: 'blosc:blosclz'): {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy', 'blosc:zlib', 'blosc:zstd'}. diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py index 4e47e4197c710..81dc79d3111b8 100644 --- a/pandas/tests/io/formats/test_to_csv.py +++ b/pandas/tests/io/formats/test_to_csv.py @@ -508,7 +508,7 @@ def test_to_csv_stdout_file(self, capsys): reason=( "Especially in Windows, file stream should not be passed" "to csv writer without newline='' option." - "(https://docs.python.org/3.6/library/csv.html#csv.writer)" + "(https://docs.python.org/3/library/csv.html#csv.writer)" ), ) def test_to_csv_write_to_open_file(self): diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 8e883f9cec8ea..0a8341476dc56 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -1346,8 +1346,7 @@ def test_constructor_dict_list_value_explicit_dtype(self): def test_constructor_dict_order(self): # GH19018 - # initialization ordering: by insertion order if python>= 3.6, else - # order by value + # initialization ordering: by insertion order d = {"b": 1, "a": 0, "c": 2} result = Series(d) expected = Series([1, 0, 2], index=list("bac"))
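The ``test_constructor_dict_order`` hunk above simplifies its comment to "by insertion order" because that is now a core-language guarantee rather than anything pandas-specific. A minimal stdlib-only sketch of the guarantee the test relies on (no pandas required; ``Series({"b": 1, "a": 0, "c": 2})`` simply mirrors this ordering in its index):

```python
# Python dicts preserve insertion order (a language guarantee since 3.7),
# which is why constructing a Series from this dict yields the index
# ["b", "a", "c"] rather than a key- or value-sorted order.
d = {"b": 1, "a": 0, "c": 2}
print(list(d))  # ['b', 'a', 'c']
```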