
Commit 5bca6ce

topper-123 authored and jreback committed
DOC: Clean-up references to v12 to v14 (both included) (#17420)
1 parent c2d0481 commit 5bca6ce

16 files changed: +43 -178 lines changed

doc/source/advanced.rst

+3 -18

@@ -270,9 +270,6 @@ Passing a list of labels or tuples works similar to reindexing:
 Using slicers
 ~~~~~~~~~~~~~
 
-.. versionadded:: 0.14.0
-
-In 0.14.0 we added a new way to slice multi-indexed objects.
 You can slice a multi-index by providing multiple indexers.
 
 You can provide any of the selectors as if you are indexing by label, see :ref:`Selection by Label <indexing.label>`,
@@ -384,7 +381,7 @@ selecting data at a particular level of a MultiIndex easier.
 
 .. ipython:: python
 
-   # using the slicers (new in 0.14.0)
+   # using the slicers
    df.loc[(slice(None),'one'),:]
 
 You can also select on the columns with :meth:`~pandas.MultiIndex.xs`, by
@@ -397,7 +394,7 @@ providing the axis argument
 
 .. ipython:: python
 
-   # using the slicers (new in 0.14.0)
+   # using the slicers
    df.loc[:,(slice(None),'one')]
 
 :meth:`~pandas.MultiIndex.xs` also allows selection with multiple keys
@@ -408,11 +405,9 @@ providing the axis argument
 
 .. ipython:: python
 
-   # using the slicers (new in 0.14.0)
+   # using the slicers
    df.loc[:,('bar','one')]
 
-.. versionadded:: 0.13.0
-
 You can pass ``drop_level=False`` to :meth:`~pandas.MultiIndex.xs` to retain
 the level that was selected
 
@@ -743,16 +738,6 @@ Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``ND
 Float64Index
 ~~~~~~~~~~~~
 
-.. note::
-
-   As of 0.14.0, ``Float64Index`` is backed by a native ``float64`` dtype
-   array. Prior to 0.14.0, ``Float64Index`` was backed by an ``object`` dtype
-   array. Using a ``float64`` dtype in the backend speeds up arithmetic
-   operations by about 30x and boolean indexing operations on the
-   ``Float64Index`` itself are about 2x as fast.
-
-.. versionadded:: 0.13.0
-
 By default a ``Float64Index`` will be automatically created when passing floating, or mixed-integer-floating values in index creation.
 This enables a pure label-based slicing paradigm that makes ``[],ix,loc`` for scalar indexing and slicing work exactly the
 same.
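
As context for the ``advanced.rst`` hunks above, here is a minimal, illustrative sketch of the slicer syntax and the automatic ``Float64Index`` creation the cleaned-up text describes; the column labels and data are invented for this example:

    import numpy as np
    import pandas as pd

    # small frame with a two-level column MultiIndex
    columns = pd.MultiIndex.from_product([['bar', 'baz'], ['one', 'two']])
    df = pd.DataFrame(np.arange(16).reshape(4, 4), columns=columns)

    # "using the slicers": select every top-level column whose second level is 'one'
    print(df.loc[:, (slice(None), 'one')])

    # float labels create a float64-backed index automatically
    s = pd.Series([10, 20, 30], index=[1.5, 2.5, 3.5])
    print(s.index)  # a Float64Index in pandas of this era
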

doc/source/basics.rst

+1 -9

@@ -347,7 +347,7 @@ That is because NaNs do not compare as equals:
 
    np.nan == np.nan
 
-So, as of v0.13.1, NDFrames (such as Series, DataFrames, and Panels)
+So, NDFrames (such as Series, DataFrames, and Panels)
 have an :meth:`~DataFrame.equals` method for testing equality, with NaNs in
 corresponding locations treated as equal.
 
@@ -1104,10 +1104,6 @@ Applying with a ``Panel`` will pass a ``Series`` to the applied function. If the
 function returns a ``Series``, the result of the application will be a ``Panel``. If the applied function
 reduces to a scalar, the result of the application will be a ``DataFrame``.
 
-.. note::
-
-   Prior to 0.13.1 ``apply`` on a ``Panel`` would only work on ``ufuncs`` (e.g. ``np.sum/np.max``).
-
 .. ipython:: python
 
    import pandas.util.testing as tm
@@ -1800,8 +1796,6 @@ Series has the :meth:`~Series.searchsorted` method, which works similar to
 smallest / largest values
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.14.0
-
 ``Series`` has the :meth:`~Series.nsmallest` and :meth:`~Series.nlargest` methods which return the
 smallest or largest :math:`n` values. For a large ``Series`` this can be much
 faster than sorting the entire Series and calling ``head(n)`` on the result.
@@ -2168,8 +2162,6 @@ Selecting columns based on ``dtype``
 
 .. _basics.selectdtypes:
 
-.. versionadded:: 0.14.1
-
 The :meth:`~DataFrame.select_dtypes` method implements subsetting of columns
 based on their ``dtype``.
 
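A short, hedged illustration of the three features touched by the ``basics.rst`` hunks (``equals`` with NaNs, ``nsmallest``/``nlargest``, and ``select_dtypes``); the data below is made up:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'a': [1.0, np.nan], 'b': ['x', 'y']})

    # equals() treats NaNs in matching positions as equal, unlike ==
    print(df.equals(df.copy()))          # True

    s = pd.Series([7, 3, 9, 1, 5])
    print(s.nsmallest(2))                # two smallest values
    print(s.nlargest(2))                 # two largest values

    # subset columns by dtype rather than by name
    print(df.select_dtypes(include=['number']))
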
doc/source/comparison_with_r.rst

-4

@@ -247,8 +247,6 @@ For more details and examples see :ref:`the reshaping documentation
 |subset|_
 ~~~~~~~~~~
 
-.. versionadded:: 0.13
-
 The :meth:`~pandas.DataFrame.query` method is similar to the base R ``subset``
 function. In R you might want to get the rows of a ``data.frame`` where one
 column's values are less than another column's values:
@@ -277,8 +275,6 @@ For more details and examples see :ref:`the query documentation
 |with|_
 ~~~~~~~~
 
-.. versionadded:: 0.13
-
 An expression using a data.frame called ``df`` in R with the columns ``a`` and
 ``b`` would be evaluated using ``with`` like so:
 
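For reference, a minimal sketch of the ``query``/``eval`` counterparts to R's ``subset`` and ``with`` discussed above; the columns ``a`` and ``b`` are invented for the example:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})

    # roughly subset(df, a <= b) in R
    print(df.query('a <= b'))

    # roughly with(df, a + b) in R
    print(df.eval('a + b'))
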
doc/source/cookbook.rst

+1 -1

@@ -818,7 +818,7 @@ The :ref:`Concat <merging.concatenation>` docs. The :ref:`Join <merging.join>` d
    df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])
    df2 = df1.copy()
 
-ignore_index is needed in pandas < v0.13, and depending on df construction
+Depending on df construction, ``ignore_index`` may be needed
 
 .. ipython:: python
 
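A small sketch of the appending recipe the ``cookbook.rst`` hunk refers to, showing where ``ignore_index`` matters; the data is illustrative, and ``pd.concat`` is used here as one possible way to append:

    import numpy as np
    import pandas as pd

    rng = pd.date_range('2000-01-01', periods=6)
    df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])
    df2 = df1.copy()

    # the two frames share the same DatetimeIndex, so the labels would
    # collide; ignore_index=True gives the result a fresh RangeIndex
    appended = pd.concat([df1, df2], ignore_index=True)
    print(appended.index)
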
doc/source/enhancingperf.rst

+10 -26

@@ -213,17 +213,18 @@ the rows, applying our ``integrate_f_typed``, and putting this in the zeros arra
 
 .. warning::
 
-   In 0.13.0 since ``Series`` has internaly been refactored to no longer sub-class ``ndarray``
-   but instead subclass ``NDFrame``, you can **not pass** a ``Series`` directly as a ``ndarray`` typed parameter
-   to a cython function. Instead pass the actual ``ndarray`` using the ``.values`` attribute of the Series.
+   You can **not pass** a ``Series`` directly as a ``ndarray`` typed parameter
+   to a cython function. Instead pass the actual ``ndarray`` using the
+   ``.values`` attribute of the Series. The reason is that the cython
+   definition is specific to an ndarray and not the passed Series.
 
-   Prior to 0.13.0
+   So, do not do this:
 
    .. code-block:: python
 
      apply_integrate_f(df['a'], df['b'], df['N'])
 
-   Use ``.values`` to get the underlying ``ndarray``
+   But rather, use ``.values`` to get the underlying ``ndarray``
 
    .. code-block:: python
 
@@ -399,10 +400,8 @@ Read more in the `numba docs <http://numba.pydata.org/>`__.
 
 .. _enhancingperf.eval:
 
-Expression Evaluation via :func:`~pandas.eval` (Experimental)
--------------------------------------------------------------
-
-.. versionadded:: 0.13
+Expression Evaluation via :func:`~pandas.eval`
+-----------------------------------------------
 
 The top-level function :func:`pandas.eval` implements expression evaluation of
 :class:`~pandas.Series` and :class:`~pandas.DataFrame` objects.
@@ -539,10 +538,8 @@ Now let's do the same thing but with comparisons:
 of type ``bool`` or ``np.bool_``. Again, you should perform these kinds of
 operations in plain Python.
 
-The ``DataFrame.eval`` method (Experimental)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. versionadded:: 0.13
+The ``DataFrame.eval`` method
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 In addition to the top level :func:`pandas.eval` function you can also
 evaluate an expression in the "context" of a :class:`~pandas.DataFrame`.
@@ -646,19 +643,6 @@ whether the query modifies the original frame.
 Local Variables
 ~~~~~~~~~~~~~~~
 
-In pandas version 0.14 the local variable API has changed. In pandas 0.13.x,
-you could refer to local variables the same way you would in standard Python.
-For example,
-
-.. code-block:: python
-
-   df = pd.DataFrame(np.random.randn(5, 2), columns=['a', 'b'])
-   newcol = np.random.randn(len(df))
-   df.eval('b + newcol')
-
-   UndefinedVariableError: name 'newcol' is not defined
-
-As you can see from the exception generated, this syntax is no longer allowed.
 You must *explicitly reference* any local variable that you want to use in an
 expression by placing the ``@`` character in front of the name. For example,
 
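To ground the ``enhancingperf.rst`` hunks above, a hedged sketch of ``pandas.eval``/``DataFrame.eval`` with an explicitly referenced local variable, plus the ``.values`` pattern for handing data to an ndarray-typed Cython function (``apply_integrate_f`` is the doc's example function and is not defined here):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(5, 2), columns=['a', 'b'])
    newcol = np.random.randn(len(df))

    # local variables must be referenced explicitly with '@' in eval/query
    result = df.eval('b + @newcol')

    # the top-level function evaluates expressions over frames/Series too
    total = pd.eval('df.a + df.b')

    # pass the underlying ndarray, not the Series, to a typed Cython function:
    #   apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)
    print(type(df['a'].values))  # numpy.ndarray
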
doc/source/groupby.rst

-19

@@ -766,8 +766,6 @@ missing values with the ``ffill()`` method.
 Filtration
 ----------
 
-.. versionadded:: 0.12
-
 The ``filter`` method returns a subset of the original object. Suppose we
 want to take only elements that belong to groups with a group sum greater
 than 2.
@@ -858,8 +856,6 @@ In this example, we chopped the collection of time series into yearly chunks
 then independently called :ref:`fillna <missing_data.fillna>` on the
 groups.
 
-.. versionadded:: 0.14.1
-
 The ``nlargest`` and ``nsmallest`` methods work on ``Series`` style groupbys:
 
 .. ipython:: python
@@ -1048,19 +1044,6 @@ Just like for a DataFrame or Series you can call head and tail on a groupby:
 
 This shows the first or last n rows from each group.
 
-.. warning::
-
-   Before 0.14.0 this was implemented with a fall-through apply,
-   so the result would incorrectly respect the as_index flag:
-
-   .. code-block:: python
-
-      >>> g.head(1):  # was equivalent to g.apply(lambda x: x.head(1))
-           A  B
-      A
-      1 0  1  2
-      5 2  5  6
-
 .. _groupby.nth:
 
 Taking the nth row of each group
@@ -1113,8 +1096,6 @@ You can also select multiple rows from each group by specifying multiple nth val
 Enumerate group items
 ~~~~~~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.13.0
-
 To see the order in which each row appears within its group, use the
 ``cumcount`` method:
 
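As a quick, hedged recap of the groupby features whose version notes were trimmed above (``filter``, ``nlargest`` on a Series-style groupby, ``head``, and ``cumcount``), using invented data:

    import pandas as pd

    df = pd.DataFrame({'A': [1, 1, 2, 2, 3], 'B': [1, 2, 3, 4, 5]})
    g = df.groupby('A')

    # keep only groups whose sum of B exceeds 2
    print(g.filter(lambda x: x['B'].sum() > 2))

    # nlargest on a Series-style groupby
    print(df.groupby('A')['B'].nlargest(1))

    # first row of each group; cumcount enumerates rows within each group
    print(g.head(1))
    print(g.cumcount())
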
doc/source/indexing.rst

+3 -20

@@ -248,8 +248,6 @@ as an attribute:
 - In any of these cases, standard indexing will still work, e.g. ``s['1']``, ``s['min']``, and ``s['index']`` will
   access the corresponding element or column.
 
-- The ``Series/Panel`` accesses are available starting in 0.13.0.
-
 If you are using the IPython environment, you may also use tab-completion to
 see these accessible attributes.
 
@@ -529,7 +527,6 @@ Out of range slice indexes are handled gracefully just as in Python/Numpy.
 .. ipython:: python
 
    # these are allowed in python/numpy.
-   # Only works in Pandas starting from v0.14.0.
    x = list('abcdef')
    x
    x[4:10]
@@ -539,14 +536,8 @@ Out of range slice indexes are handled gracefully just as in Python/Numpy.
    s.iloc[4:10]
    s.iloc[8:10]
 
-.. note::
-
-   Prior to v0.14.0, ``iloc`` would not accept out of bounds indexers for
-   slices, e.g. a value that exceeds the length of the object being indexed.
-
-
-Note that this could result in an empty axis (e.g. an empty DataFrame being
-returned)
+Note that using slices that go out of bounds can result in
+an empty axis (e.g. an empty DataFrame being returned)
 
 .. ipython:: python
 
@@ -745,8 +736,6 @@ Finally, one can also set a seed for ``sample``'s random number generator using
 Setting With Enlargement
 ------------------------
 
-.. versionadded:: 0.13
-
 The ``.loc/[]`` operations can perform enlargement when setting a non-existant key for that axis.
 
 In the ``Series`` case this is effectively an appending operation
@@ -1020,8 +1009,6 @@ partial setting via ``.loc`` (but on the contents rather than the axis labels)
    df2[ df2[1:4] > 0 ] = 3
    df2
 
-.. versionadded:: 0.13
-
 Where can also accept ``axis`` and ``level`` parameters to align the input when
 performing the ``where``.
 
@@ -1064,8 +1051,6 @@ as condition and ``other`` argument.
 The :meth:`~pandas.DataFrame.query` Method (Experimental)
 ---------------------------------------------------------
 
-.. versionadded:: 0.13
-
 :class:`~pandas.DataFrame` objects have a :meth:`~pandas.DataFrame.query`
 method that allows selection using an expression.
 
@@ -1506,8 +1491,6 @@ The name, if set, will be shown in the console display:
 Setting metadata
 ~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.13.0
-
 Indexes are "mostly immutable", but it is possible to set and change their
 metadata, like the index ``name`` (or, for ``MultiIndex``, ``levels`` and
 ``labels``).
@@ -1790,7 +1773,7 @@ Evaluation order matters
 
 Furthermore, in chained expressions, the order may determine whether a copy is returned or not.
 If an expression will set values on a copy of a slice, then a ``SettingWithCopy``
-exception will be raised (this raise/warn behavior is new starting in 0.13.0)
+warning will be issued.
 
 You can control the action of a chained assignment via the option ``mode.chained_assignment``,
 which can take the values ``['raise','warn',None]``, where showing a warning is the default.
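
A brief, hedged sketch of the indexing behaviours referenced above: out-of-bounds ``.iloc`` slices, setting with enlargement, and the ``mode.chained_assignment`` option (the data is invented):

    import pandas as pd

    s = pd.Series([1, 2, 3])

    # out-of-range slices are handled gracefully and may simply be empty
    print(s.iloc[8:10])

    # setting with enlargement: assigning to a missing label appends it
    s.loc[5] = 99
    print(s)

    # chained assignment behaviour is configurable; 'warn' is the default,
    # 'raise' turns the SettingWithCopy warning into an error
    pd.set_option('mode.chained_assignment', 'warn')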

doc/source/install.rst

+1 -1

@@ -107,7 +107,7 @@ following command::
 
 To install a specific pandas version::
 
-    conda install pandas=0.13.1
+    conda install pandas=0.20.3
 
 To install other packages, IPython for example::
 