
Commit da1151c

CLN: remove versionadded:: 0.20 (#29126)

simonjayhawkins authored and jreback committed
1 parent 9828d34

39 files changed: 0 additions, 162 deletions

doc/source/development/contributing.rst (-2)

@@ -1197,8 +1197,6 @@ submitting a pull request.
 
 For more, see the `pytest <http://docs.pytest.org/en/latest/>`_ documentation.
 
-.. versionadded:: 0.20.0
-
 Furthermore one can run
 
 .. code-block:: python

doc/source/getting_started/basics.rst (-6)

@@ -172,8 +172,6 @@ You are highly encouraged to install both libraries. See the section
 
 These are both enabled to be used by default, you can control this by setting the options:
 
-.. versionadded:: 0.20.0
-
 .. code-block:: python
 
    pd.set_option('compute.use_bottleneck', False)
@@ -891,8 +889,6 @@ functionality.
 Aggregation API
 ~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 The aggregation API allows one to express possibly multiple aggregation operations in a single concise way.
 This API is similar across pandas objects, see :ref:`groupby API <groupby.aggregate>`, the
 :ref:`window functions API <stats.aggregate>`, and the :ref:`resample API <timeseries.aggregate>`.
@@ -1030,8 +1026,6 @@ to the built in :ref:`describe function <basics.describe>`.
 Transform API
 ~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 The :meth:`~DataFrame.transform` method returns an object that is indexed the same (same size)
 as the original. This API allows you to provide *multiple* operations at the same
 time rather than one-by-one. Its API is quite similar to the ``.agg`` API.
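The aggregation and transform APIs described in the context above can be sketched briefly (an illustrative example against current pandas, with made-up data; not part of the commit):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})

# .agg reduces: one output row per aggregation function
agg = df.agg(['sum', 'min'])

# .transform returns an object indexed the same (same size) as the original
tr = df.transform(lambda x: x + 1)
```

Here ``agg`` has index ``['sum', 'min']`` while ``tr`` keeps the shape of ``df``, which is the contrast the two sections draw.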

doc/source/user_guide/advanced.rst (-4)

@@ -206,8 +206,6 @@ highly performant. If you want to see only the used levels, you can use the
 To reconstruct the ``MultiIndex`` with only the used levels, the
 :meth:`~MultiIndex.remove_unused_levels` method may be used.
 
-.. versionadded:: 0.20.0
-
 .. ipython:: python
 
    new_mi = df[['foo', 'qux']].columns.remove_unused_levels()
@@ -928,8 +926,6 @@ If you need integer based selection, you should use ``iloc``:
 IntervalIndex
 ~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 :class:`IntervalIndex` together with its own dtype, :class:`~pandas.api.types.IntervalDtype`
 as well as the :class:`Interval` scalar type, allow first-class support in pandas
 for interval notation.
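Both features touched by these hunks can be exercised in a few lines (a sketch with invented data, not taken from the diff):

```python
import pandas as pd

# Slicing a MultiIndex keeps all original level values;
# remove_unused_levels() drops the ones no longer referenced.
mi = pd.MultiIndex.from_product([['foo', 'qux'], ['one', 'two']])
sub = mi[[0, 1]]                     # only the 'foo' entries selected
new_mi = sub.remove_unused_levels()  # 'qux' disappears from the levels

# IntervalIndex gives first-class interval notation
ii = pd.IntervalIndex.from_breaks([0, 1, 2])
```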

doc/source/user_guide/categorical.rst (-2)

@@ -874,8 +874,6 @@ The below raises ``TypeError`` because the categories are ordered and not identical.
    Out[3]:
    TypeError: to union ordered Categoricals, all categories must be the same
 
-.. versionadded:: 0.20.0
-
 Ordered categoricals with different categories or orderings can be combined by
 using the ``ignore_ordered=True`` argument.
 
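The behavior the context describes can be sketched as follows. Note this example uses ``ignore_order=True``, the keyword actually accepted by :func:`union_categoricals`; the ``ignore_ordered`` spelling in the context line appears to be a documentation typo (invented data, not part of the commit):

```python
import pandas as pd
from pandas.api.types import union_categoricals

a = pd.Categorical(['a', 'b'], ordered=True)
b = pd.Categorical(['a', 'b', 'c'], ordered=True)

# Combining ordered categoricals with different categories requires
# ignoring the ordered attribute; the result is unordered.
u = union_categoricals([a, b], ignore_order=True)
```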
doc/source/user_guide/computation.rst (-2)

@@ -471,8 +471,6 @@ default of the index) in a DataFrame.
 Rolling window endpoints
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 The inclusion of the interval endpoints in rolling window calculations can be specified with the ``closed``
 parameter:
 
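A minimal sketch of the ``closed`` parameter on a time-based rolling window (illustrative data, not from the commit):

```python
import pandas as pd

idx = pd.date_range('2020-01-01', periods=5, freq='s')
s = pd.Series(1.0, index=idx)

# closed='right' (the default for offset-based windows): window is (t-2s, t]
right = s.rolling('2s', closed='right').sum()

# closed='both': window is [t-2s, t], so the left endpoint is also included
both = s.rolling('2s', closed='both').sum()
```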

doc/source/user_guide/groupby.rst (-6)

@@ -311,8 +311,6 @@ Grouping with multiple levels is supported.
    s
    s.groupby(level=['first', 'second']).sum()
 
-.. versionadded:: 0.20
-
 Index level names may be supplied as keys.
 
 .. ipython:: python
@@ -353,8 +351,6 @@ Index levels may also be specified by name.
 
    df.groupby([pd.Grouper(level='second'), 'A']).sum()
 
-.. versionadded:: 0.20
-
 Index level names may be specified as keys directly to ``groupby``.
 
 .. ipython:: python
@@ -1274,8 +1270,6 @@ To see the order in which each row appears within its group, use the
 Enumerate groups
 ~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.2
-
 To see the ordering of the groups (as opposed to the order of rows
 within a group given by ``cumcount``) you can use
 :meth:`~pandas.core.groupby.DataFrameGroupBy.ngroup`.
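The ``ngroup`` versus ``cumcount`` distinction drawn in the last hunk can be shown with a tiny frame (invented data, not part of the diff):

```python
import pandas as pd

df = pd.DataFrame({'A': list('aaabba')})
g = df.groupby('A')

ngroups = g.ngroup()   # which group each row belongs to
order = g.cumcount()   # position of each row within its group
```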

doc/source/user_guide/io.rst (-21)

@@ -163,9 +163,6 @@ dtype : Type name or dict of column -> type, default ``None``
     (unsupported with ``engine='python'``). Use `str` or `object` together
     with suitable ``na_values`` settings to preserve and
     not interpret dtype.
-
-    .. versionadded:: 0.20.0 support for the Python parser.
-
 engine : {``'c'``, ``'python'``}
     Parser engine to use. The C engine is faster while the Python engine is
     currently more feature-complete.
@@ -417,10 +414,6 @@ However, if you wanted for all the data to be coerced, no matter the type, then
 using the ``converters`` argument of :func:`~pandas.read_csv` would certainly be
 worth trying.
 
-.. versionadded:: 0.20.0 support for the Python parser.
-
-The ``dtype`` option is supported by the 'python' engine.
-
 .. note::
    In some cases, reading in abnormal data with columns containing mixed dtypes
    will result in an inconsistent dataset. If you rely on pandas to infer the
@@ -616,8 +609,6 @@ Filtering columns (``usecols``)
 The ``usecols`` argument allows you to select any subset of the columns in a
 file, either using the column names, position numbers or a callable:
 
-.. versionadded:: 0.20.0 support for callable `usecols` arguments
-
 .. ipython:: python
 
    data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
@@ -1447,8 +1438,6 @@ is whitespace).
    df = pd.read_fwf('bar.csv', header=None, index_col=0)
    df
 
-.. versionadded:: 0.20.0
-
 ``read_fwf`` supports the ``dtype`` parameter for specifying the types of
 parsed columns to be different from the inferred type.
 
@@ -2221,8 +2210,6 @@ For line-delimited json files, pandas can also return an iterator which reads in
 Table schema
 ''''''''''''
 
-.. versionadded:: 0.20.0
-
 `Table Schema`_ is a spec for describing tabular datasets as a JSON
 object. The JSON includes information on the field names, types, and
 other attributes. You can use the orient ``table`` to build
@@ -3071,8 +3058,6 @@ missing data to recover integer dtype:
 Dtype specifications
 ++++++++++++++++++++
 
-.. versionadded:: 0.20
-
 As an alternative to converters, the type for an entire column can
 be specified using the `dtype` keyword, which takes a dictionary
 mapping column names to types. To interpret data with
@@ -3345,8 +3330,6 @@ any pickled pandas object (or any other pickled object) from file:
 Compressed pickle files
 '''''''''''''''''''''''
 
-.. versionadded:: 0.20.0
-
 :func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle` can read
 and write compressed pickle files. The compression types of ``gzip``, ``bz2``, ``xz`` are supported for reading and writing.
 The ``zip`` file format only supports reading and must contain only one data file
@@ -4323,8 +4306,6 @@ control compression: ``complevel`` and ``complib``.
 - `bzip2 <http://bzip.org/>`_: Good compression rates.
 - `blosc <http://www.blosc.org/>`_: Fast compression and decompression.
 
-.. versionadded:: 0.20.2
-
 Support for alternative blosc compressors:
 
 - `blosc:blosclz <http://www.blosc.org/>`_ This is the
@@ -4651,8 +4632,6 @@ Performance
 Feather
 -------
 
-.. versionadded:: 0.20.0
-
 Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
 frames efficient, and to make sharing data across data analysis languages easy.
 
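Two of the ``read_csv`` features whose directives are removed above, a callable ``usecols`` and a per-column ``dtype``, can be combined in one call (a sketch reusing the diff's own sample data, read from a string instead of a file):

```python
import pandas as pd
from io import StringIO

data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'

# The callable receives each column name and keeps those returning True;
# dtype forces column 'a' to float64 instead of the inferred int64.
df = pd.read_csv(StringIO(data),
                 usecols=lambda name: name in ['a', 'd'],
                 dtype={'a': 'float64'})
```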

doc/source/user_guide/merging.rst (-2)

@@ -843,8 +843,6 @@ resulting dtype will be upcast.
    pd.merge(left, right, how='outer', on='key')
    pd.merge(left, right, how='outer', on='key').dtypes
 
-.. versionadded:: 0.20.0
-
 Merging will preserve ``category`` dtypes of the mergands. See also the section on :ref:`categoricals <categorical.merge>`.
 
 The left frame.
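The category-preservation behavior mentioned in the context can be checked directly (an illustrative sketch with invented frames, not part of the commit):

```python
import pandas as pd

left = pd.DataFrame({'key': pd.Categorical(['a', 'b']), 'lval': [1, 2]})
right = pd.DataFrame({'key': pd.Categorical(['a', 'b']), 'rval': [3, 4]})

# When both sides share identical categories, the merge key
# keeps its category dtype instead of being upcast to object.
merged = pd.merge(left, right, on='key')
```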

doc/source/user_guide/options.rst (-2)

@@ -561,8 +561,6 @@ However, setting this option incorrectly for your terminal will cause these characters
 Table schema display
 --------------------
 
-.. versionadded:: 0.20.0
-
 ``DataFrame`` and ``Series`` will publish a Table Schema representation
 by default. False by default, this can be enabled globally with the
 ``display.html.table_schema`` option:

doc/source/user_guide/reshaping.rst (-2)

@@ -539,8 +539,6 @@ Alternatively we can specify custom bin-edges:
    c = pd.cut(ages, bins=[0, 18, 35, 70])
    c
 
-.. versionadded:: 0.20.0
-
 If the ``bins`` keyword is an ``IntervalIndex``, then these will be
 used to bin the passed data.::
 
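Passing an ``IntervalIndex`` as ``bins``, as the context describes, looks like this (a sketch with made-up ages, not from the diff):

```python
import pandas as pd

# The intervals themselves become the categories of the result
bins = pd.IntervalIndex.from_breaks([0, 18, 35, 70])
c = pd.cut([10, 40, 20], bins=bins)
```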

doc/source/user_guide/text.rst (-4)

@@ -228,8 +228,6 @@ and ``repl`` must be strings:
    dollars.str.replace(r'-\$', '-')
    dollars.str.replace('-$', '-', regex=False)
 
-.. versionadded:: 0.20.0
-
 The ``replace`` method can also take a callable as replacement. It is called
 on every ``pat`` using :func:`re.sub`. The callable should expect one
 positional argument (a regex object) and return a string.
@@ -254,8 +252,6 @@ positional argument (a regex object) and return a string.
    pd.Series(['Foo Bar Baz', np.nan],
              dtype="string").str.replace(pat, repl)
 
-.. versionadded:: 0.20.0
-
 The ``replace`` method also accepts a compiled regular expression object
 from :func:`re.compile` as a pattern. All flags should be included in the
 compiled regular expression object.
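Both ``str.replace`` variants from these hunks, a callable replacement and a compiled pattern, can be sketched together. Note that ``regex=True`` is passed explicitly here because the default flipped to ``False`` in pandas 2.0 (invented data, not part of the commit):

```python
import re
import pandas as pd

s = pd.Series(['foo 123', 'bar baz'])

# Callable replacement: receives a match object, returns a string
upper = s.str.replace(r'[a-z]+', lambda m: m.group(0).upper(), regex=True)

# Compiled pattern: any flags must live in the compiled object itself
pat = re.compile(r'\s+')
joined = s.str.replace(pat, '_', regex=True)
```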

doc/source/user_guide/timedeltas.rst (-2)

@@ -327,8 +327,6 @@ similarly to the ``Series``. These are the *displayed* values of the ``Timedelta``.
 You can convert a ``Timedelta`` to an `ISO 8601 Duration`_ string with the
 ``.isoformat`` method
 
-.. versionadded:: 0.20.0
-
 .. ipython:: python
 
    pd.Timedelta(days=6, minutes=50, seconds=3,
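The truncated ``ipython`` example above comes from the pandas docs; spelled out in full it is (reproduced here for context, with the output the docs report):

```python
import pandas as pd

td = pd.Timedelta(days=6, minutes=50, seconds=3,
                  milliseconds=10, microseconds=10, nanoseconds=12)
iso = td.isoformat()
```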

doc/source/user_guide/timeseries.rst (-2)

@@ -376,8 +376,6 @@ We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by
 Using the ``origin`` Parameter
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 Using the ``origin`` parameter, one can specify an alternative starting point for creation
 of a ``DatetimeIndex``. For example, to use 1960-01-01 as the starting date:
 
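Using 1960-01-01 as the starting date, as the context suggests, looks like this (an illustrative sketch, not taken from the diff):

```python
import pandas as pd

# Offsets are interpreted in the given unit relative to origin
dates = pd.to_datetime([0, 1, 2], unit='D', origin='1960-01-01')
```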

doc/source/user_guide/visualization.rst (-2)

@@ -1247,8 +1247,6 @@ in ``pandas.plotting.plot_params`` can be used in a `with statement`:
 Automatic date tick adjustment
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 ``TimedeltaIndex`` now uses the native matplotlib
 tick locator methods, it is useful to call the automatic
 date tick adjustment from matplotlib for figures whose ticklabels overlap.

pandas/_libs/interval.pyx (-2)

@@ -191,8 +191,6 @@ cdef class Interval(IntervalMixin):
     """
     Immutable object implementing an Interval, a bounded slice-like interval.
 
-    .. versionadded:: 0.20.0
-
     Parameters
     ----------
     left : orderable scalar

pandas/_libs/tslibs/timedeltas.pyx (-2)

@@ -1157,8 +1157,6 @@ cdef class _Timedelta(timedelta):
         ``P[n]Y[n]M[n]DT[n]H[n]M[n]S``, where the ``[n]`` s are replaced by the
         values. See https://en.wikipedia.org/wiki/ISO_8601#Durations.
 
-        .. versionadded:: 0.20.0
-
         Returns
         -------
         formatted : str

pandas/core/dtypes/concat.py (-2)

@@ -199,8 +199,6 @@ def union_categoricals(to_union, sort_categories=False, ignore_order=False):
         If true, the ordered attribute of the Categoricals will be ignored.
         Results in an unordered categorical.
 
-        .. versionadded:: 0.20.0
-
     Returns
     -------
     result : Categorical

pandas/core/dtypes/inference.py (-4)

@@ -162,8 +162,6 @@ def is_file_like(obj):
     Note: file-like objects must be iterable, but
     iterable objects need not be file-like.
 
-    .. versionadded:: 0.20.0
-
     Parameters
     ----------
     obj : The object to check
@@ -281,8 +279,6 @@ def is_nested_list_like(obj):
     Check if the object is list-like, and that all of its elements
     are also list-like.
 
-    .. versionadded:: 0.20.0
-
     Parameters
     ----------
     obj : The object to check

pandas/core/frame.py (-4)

@@ -2082,8 +2082,6 @@ def to_feather(self, fname):
         """
         Write out the binary feather-format for DataFrames.
 
-        .. versionadded:: 0.20.0
-
         Parameters
         ----------
         fname : str
@@ -7868,8 +7866,6 @@ def nunique(self, axis=0, dropna=True):
         Return Series with number of distinct observations. Can ignore NaN
         values.
 
-        .. versionadded:: 0.20.0
-
         Parameters
         ----------
         axis : {0 or 'index', 1 or 'columns'}, default 0
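The ``nunique`` docstring trimmed in the second hunk describes per-column and per-row distinct counts; a quick sketch (invented data, not part of the commit):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2], 'B': [1, 2, 2]})

per_column = df.nunique()      # distinct values down each column
per_row = df.nunique(axis=1)   # distinct values across each row
```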

pandas/core/generic.py (-19)

@@ -897,8 +897,6 @@ def squeeze(self, axis=None):
             A specific axis to squeeze. By default, all length-1 axes are
             squeezed.
 
-            .. versionadded:: 0.20.0
-
         Returns
         -------
         DataFrame, Series, or scalar
@@ -2163,8 +2161,6 @@ def _repr_data_resource_(self):
             Specifies the one-based bottommost row and rightmost column that
             is to be frozen.
 
-            .. versionadded:: 0.20.0.
-
         See Also
         --------
         to_csv : Write DataFrame to a comma-separated values (csv) file.
@@ -2756,8 +2752,6 @@ def to_pickle(self, path, compression="infer", protocol=pickle.HIGHEST_PROTOCOL)
             default 'infer'
             A string representing the compression to use in the output file. By
             default, infers from the file extension in specified path.
-
-            .. versionadded:: 0.20.0
         protocol : int
             Int which indicates which protocol should be used by the pickler,
             default HIGHEST_PROTOCOL (see [1]_ paragraph 12.1.2). The possible
@@ -3032,22 +3026,15 @@ def to_latex(
         multicolumn : bool, default True
             Use \multicolumn to enhance MultiIndex columns.
             The default will be read from the config module.
-
-            .. versionadded:: 0.20.0
         multicolumn_format : str, default 'l'
             The alignment for multicolumns, similar to `column_format`
             The default will be read from the config module.
-
-            .. versionadded:: 0.20.0
         multirow : bool, default False
             Use \multirow to enhance MultiIndex rows. Requires adding a
             \usepackage{multirow} to your LaTeX preamble. Will print
             centered labels (instead of top-aligned) across the contained
             rows, separating groups via clines. The default will be read
             from the pandas config module.
-
-            .. versionadded:: 0.20.0
-
         caption : str, optional
             The LaTeX caption to be placed inside ``\caption{}`` in the output.
 
@@ -5133,8 +5120,6 @@ def pipe(self, func, *args, **kwargs):
         Call ``func`` on self producing a %(klass)s with transformed values
         and that has the same axis length as self.
 
-        .. versionadded:: 0.20.0
-
         Parameters
         ----------
         func : function, str, list or dict
@@ -5805,8 +5790,6 @@ def astype(self, dtype, copy=True, errors="raise"):
         - ``raise`` : allow exceptions to be raised
         - ``ignore`` : suppress exceptions. On error return original object.
 
-        .. versionadded:: 0.20.0
-
         Returns
        -------
         casted : same type as caller
@@ -7946,8 +7929,6 @@ def asfreq(self, freq, method=None, how=None, normalize=False, fill_value=None):
         Value to use for missing values, applied during upsampling (note
         this does not fill NaNs that already were present).
 
-        .. versionadded:: 0.20.0
-
         Returns
         -------
         converted : same type as caller
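The ``fill_value`` semantics described in the final ``asfreq`` hunk (gaps from upsampling are filled, pre-existing NaNs are not) can be sketched briefly (invented data, not part of the commit):

```python
import pandas as pd

idx = pd.date_range('2020-01-01', periods=3, freq='2D')
s = pd.Series([1.0, 2.0, 3.0], index=idx)

# Upsampling to daily frequency introduces gaps on the skipped days;
# fill_value replaces only those newly created missing entries.
upsampled = s.asfreq('D', fill_value=0.0)
```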
