DOC: Clean-up references to v12 to v14 (both included) #17420

Merged: 5 commits, Sep 5, 2017
21 changes: 3 additions & 18 deletions doc/source/advanced.rst
@@ -270,9 +270,6 @@ Passing a list of labels or tuples works similar to reindexing:
Using slicers
~~~~~~~~~~~~~

.. versionadded:: 0.14.0

In 0.14.0 we added a new way to slice multi-indexed objects.
You can slice a multi-index by providing multiple indexers.

You can provide any of the selectors as if you are indexing by label, see :ref:`Selection by Label <indexing.label>`,
@@ -384,7 +381,7 @@ selecting data at a particular level of a MultiIndex easier.

.. ipython:: python

# using the slicers (new in 0.14.0)
# using the slicers
df.loc[(slice(None),'one'),:]

You can also select on the columns with :meth:`~pandas.MultiIndex.xs`, by
@@ -397,7 +394,7 @@ providing the axis argument

.. ipython:: python

# using the slicers (new in 0.14.0)
# using the slicers
df.loc[:,(slice(None),'one')]

:meth:`~pandas.MultiIndex.xs` also allows selection with multiple keys
@@ -408,11 +405,9 @@ providing the axis argument

.. ipython:: python

# using the slicers (new in 0.14.0)
# using the slicers
df.loc[:,('bar','one')]

.. versionadded:: 0.13.0

You can pass ``drop_level=False`` to :meth:`~pandas.MultiIndex.xs` to retain
the level that was selected

@@ -743,16 +738,6 @@ Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``ND
Float64Index
~~~~~~~~~~~~

.. note::

As of 0.14.0, ``Float64Index`` is backed by a native ``float64`` dtype
array. Prior to 0.14.0, ``Float64Index`` was backed by an ``object`` dtype
array. Using a ``float64`` dtype in the backend speeds up arithmetic
operations by about 30x and boolean indexing operations on the
``Float64Index`` itself are about 2x as fast.

.. versionadded:: 0.13.0

By default a ``Float64Index`` will be automatically created when passing floating, or mixed-integer-floating values in index creation.
This enables a pure label-based slicing paradigm that makes ``[],ix,loc`` for scalar indexing and slicing work exactly the
same.
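
A minimal sketch of this behaviour, with illustrative values:

.. code-block:: python

   import pandas as pd

   # float (or mixed int/float) labels produce a Float64Index automatically
   indexf = pd.Index([1.5, 2, 3, 4.5, 5])
   sf = pd.Series(range(5), index=indexf)

   sf[3]          # scalar lookup by the label 3.0
   sf.loc[2:4]    # label-based slice, endpoints inclusive
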
10 changes: 1 addition & 9 deletions doc/source/basics.rst
@@ -347,7 +347,7 @@ That is because NaNs do not compare as equals:

np.nan == np.nan

So, as of v0.13.1, NDFrames (such as Series, DataFrames, and Panels)
So, NDFrames (such as Series, DataFrames, and Panels)
have an :meth:`~DataFrame.equals` method for testing equality, with NaNs in
corresponding locations treated as equal.
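
A small sketch of the difference (values chosen for illustration):

.. code-block:: python

   import numpy as np
   import pandas as pd

   df = pd.DataFrame({'col': [1.0, np.nan, 3.0]})

   (df + df == df * 2).all()    # not all True: the NaN positions compare unequal
   (df + df).equals(df * 2)     # True: aligned NaNs are treated as equal
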

@@ -1104,10 +1104,6 @@ Applying with a ``Panel`` will pass a ``Series`` to the applied function. If the
function returns a ``Series``, the result of the application will be a ``Panel``. If the applied function
reduces to a scalar, the result of the application will be a ``DataFrame``.

.. note::

Prior to 0.13.1 ``apply`` on a ``Panel`` would only work on ``ufuncs`` (e.g. ``np.sum/np.max``).

.. ipython:: python

import pandas.util.testing as tm
@@ -1800,8 +1796,6 @@ Series has the :meth:`~Series.searchsorted` method, which works similar to
smallest / largest values
~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 0.14.0

``Series`` has the :meth:`~Series.nsmallest` and :meth:`~Series.nlargest` methods which return the
smallest or largest :math:`n` values. For a large ``Series`` this can be much
faster than sorting the entire Series and calling ``head(n)`` on the result.
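
For example (values assumed):

.. code-block:: python

   import pandas as pd

   s = pd.Series([4, -7, 3, 0, 8, -2])
   s.nsmallest(3)   # three smallest values, without sorting the whole Series
   s.nlargest(3)    # three largest values
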
@@ -2168,8 +2162,6 @@ Selecting columns based on ``dtype``

.. _basics.selectdtypes:

.. versionadded:: 0.14.1

The :meth:`~DataFrame.select_dtypes` method implements subsetting of columns
based on their ``dtype``.
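
A short sketch with a hypothetical mixed-dtype frame:

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({'a': [1, 2], 'b': [1.5, 2.5],
                      'c': ['x', 'y'], 'd': [True, False]})

   df.select_dtypes(include=['number'])   # the int and float columns
   df.select_dtypes(include=['bool'])     # only the boolean column
   df.select_dtypes(exclude=['object'])   # everything but the string column
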

4 changes: 0 additions & 4 deletions doc/source/comparison_with_r.rst
@@ -247,8 +247,6 @@ For more details and examples see :ref:`the reshaping documentation
|subset|_
~~~~~~~~~~

.. versionadded:: 0.13

The :meth:`~pandas.DataFrame.query` method is similar to the base R ``subset``
function. In R you might want to get the rows of a ``data.frame`` where one
column's values are less than another column's values:
@@ -277,8 +275,6 @@ For more details and examples see :ref:`the query documentation
|with|_
~~~~~~~~

.. versionadded:: 0.13

An expression using a data.frame called ``df`` in R with the columns ``a`` and
``b`` would be evaluated using ``with`` like so:

2 changes: 1 addition & 1 deletion doc/source/cookbook.rst
@@ -818,7 +818,7 @@ The :ref:`Concat <merging.concatenation>` docs. The :ref:`Join <merging.join>` d
df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])
df2 = df1.copy()

ignore_index is needed in pandas < v0.13, and depending on df construction
Depending on df construction, ``ignore_index`` may be needed

.. ipython:: python

36 changes: 10 additions & 26 deletions doc/source/enhancingperf.rst
@@ -213,17 +213,18 @@ the rows, applying our ``integrate_f_typed``, and putting this in the zeros arra

.. warning::

In 0.13.0 since ``Series`` has internally been refactored to no longer sub-class ``ndarray``
but instead subclass ``NDFrame``, you can **not pass** a ``Series`` directly as a ``ndarray`` typed parameter
to a cython function. Instead pass the actual ``ndarray`` using the ``.values`` attribute of the Series.
You can **not pass** a ``Series`` directly as a ``ndarray`` typed parameter
to a cython function. Instead pass the actual ``ndarray`` using the
``.values`` attribute of the Series. The reason is that the cython
definition is specific to an ndarray and not the passed Series.

Prior to 0.13.0
So, do not do this:

.. code-block:: python

apply_integrate_f(df['a'], df['b'], df['N'])

Use ``.values`` to get the underlying ``ndarray``
But rather, use ``.values`` to get the underlying ``ndarray``

.. code-block:: python

@@ -399,10 +400,8 @@ Read more in the `numba docs <http://numba.pydata.org/>`__.

.. _enhancingperf.eval:

Expression Evaluation via :func:`~pandas.eval` (Experimental)
-------------------------------------------------------------

.. versionadded:: 0.13
Expression Evaluation via :func:`~pandas.eval`
-----------------------------------------------

The top-level function :func:`pandas.eval` implements expression evaluation of
:class:`~pandas.Series` and :class:`~pandas.DataFrame` objects.
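
As a rough sketch (frames assumed):

.. code-block:: python

   import numpy as np
   import pandas as pd

   df1 = pd.DataFrame(np.random.randn(10, 2), columns=['a', 'b'])
   df2 = pd.DataFrame(np.random.randn(10, 2), columns=['a', 'b'])

   # evaluated as a single vectorized expression rather than via
   # intermediate Python-level temporaries
   pd.eval('df1 + df2 * 2')
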
@@ -539,10 +538,8 @@ Now let's do the same thing but with comparisons:
of type ``bool`` or ``np.bool_``. Again, you should perform these kinds of
operations in plain Python.

The ``DataFrame.eval`` method (Experimental)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 0.13
The ``DataFrame.eval`` method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to the top level :func:`pandas.eval` function you can also
evaluate an expression in the "context" of a :class:`~pandas.DataFrame`.
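
For instance (column names assumed):

.. code-block:: python

   import numpy as np
   import pandas as pd

   df = pd.DataFrame(np.random.randn(5, 2), columns=['a', 'b'])

   df.eval('a + b')                    # columns can be referenced by name
   df.eval('c = a + b', inplace=True)  # assignment adds a new column
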
@@ -646,19 +643,6 @@ whether the query modifies the original frame.
Local Variables
~~~~~~~~~~~~~~~

In pandas version 0.14 the local variable API has changed. In pandas 0.13.x,
you could refer to local variables the same way you would in standard Python.
For example,

.. code-block:: python

df = pd.DataFrame(np.random.randn(5, 2), columns=['a', 'b'])
newcol = np.random.randn(len(df))
df.eval('b + newcol')

UndefinedVariableError: name 'newcol' is not defined

As you can see from the exception generated, this syntax is no longer allowed.
You must *explicitly reference* any local variable that you want to use in an
expression by placing the ``@`` character in front of the name. For example,
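
A short sketch, with an assumed frame and local array:

.. code-block:: python

   import numpy as np
   import pandas as pd

   df = pd.DataFrame(np.random.randn(5, 2), columns=['a', 'b'])
   newcol = np.random.randn(len(df))   # a local variable, not a column

   df.eval('b + @newcol')    # @ marks the local variable
   df.query('b < @newcol')
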

19 changes: 0 additions & 19 deletions doc/source/groupby.rst
@@ -766,8 +766,6 @@ missing values with the ``ffill()`` method.
Filtration
----------

.. versionadded:: 0.12

The ``filter`` method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
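
A sketch with assumed values:

.. code-block:: python

   import pandas as pd

   sf = pd.Series([1, 1, 2, 3, 3, 3])

   # keep only elements whose group sums to more than 2
   sf.groupby(sf).filter(lambda x: x.sum() > 2)
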
@@ -858,8 +856,6 @@ In this example, we chopped the collection of time series into yearly chunks
then independently called :ref:`fillna <missing_data.fillna>` on the
groups.

.. versionadded:: 0.14.1

The ``nlargest`` and ``nsmallest`` methods work on ``Series`` style groupbys:

.. ipython:: python
@@ -1048,19 +1044,6 @@ Just like for a DataFrame or Series you can call head and tail on a groupby:

This shows the first or last n rows from each group.

.. warning::

Before 0.14.0 this was implemented with a fall-through apply,
so the result would incorrectly respect the as_index flag:

.. code-block:: python

>>> g.head(1): # was equivalent to g.apply(lambda x: x.head(1))
        A  B
 A
 1 0    1  2
 5 2    5  6

.. _groupby.nth:

Taking the nth row of each group
@@ -1113,8 +1096,6 @@ You can also select multiple rows from each group by specifying multiple nth val
Enumerate group items
~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 0.13.0

To see the order in which each row appears within its group, use the
``cumcount`` method:
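
A brief sketch (group labels assumed):

.. code-block:: python

   import pandas as pd

   dfg = pd.DataFrame(list('aaabba'), columns=['A'])

   # position of each row within its group, counted from 0
   dfg.groupby('A').cumcount()
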

23 changes: 3 additions & 20 deletions doc/source/indexing.rst
@@ -248,8 +248,6 @@ as an attribute:
- In any of these cases, standard indexing will still work, e.g. ``s['1']``, ``s['min']``, and ``s['index']`` will
access the corresponding element or column.

- The ``Series/Panel`` accesses are available starting in 0.13.0.

If you are using the IPython environment, you may also use tab-completion to
see these accessible attributes.

@@ -529,7 +527,6 @@ Out of range slice indexes are handled gracefully just as in Python/Numpy.
.. ipython:: python

# these are allowed in python/numpy.
# Only works in Pandas starting from v0.14.0.
x = list('abcdef')
x
x[4:10]
@@ -539,14 +536,8 @@ Out of range slice indexes are handled gracefully just as in Python/Numpy.
s.iloc[4:10]
s.iloc[8:10]

.. note::

Prior to v0.14.0, ``iloc`` would not accept out of bounds indexers for
slices, e.g. a value that exceeds the length of the object being indexed.


Note that this could result in an empty axis (e.g. an empty DataFrame being
returned)
Note that using slices that go out of bounds can result in
an empty axis (e.g. an empty DataFrame being returned)

.. ipython:: python

@@ -745,8 +736,6 @@ Finally, one can also set a seed for ``sample``'s random number generator using
Setting With Enlargement
------------------------

.. versionadded:: 0.13

The ``.loc/[]`` operations can perform enlargement when setting a non-existent key for that axis.

In the ``Series`` case this is effectively an appending operation
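
For example (a small assumed Series):

.. code-block:: python

   import pandas as pd

   se = pd.Series([1, 2, 3])
   se[5] = 5      # label 5 does not exist, so the Series is enlarged
   se
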
@@ -1020,8 +1009,6 @@ partial setting via ``.loc`` (but on the contents rather than the axis labels)
df2[ df2[1:4] > 0 ] = 3
df2

.. versionadded:: 0.13

Where can also accept ``axis`` and ``level`` parameters to align the input when
performing the ``where``.
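
A sketch under assumed data, aligning ``other`` along the index:

.. code-block:: python

   import numpy as np
   import pandas as pd

   df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])

   # where the condition fails, fill from column A, aligned row-by-row
   df.where(df < 0, df['A'], axis='index')
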

@@ -1064,8 +1051,6 @@ as condition and ``other`` argument.
The :meth:`~pandas.DataFrame.query` Method (Experimental)
---------------------------------------------------------

.. versionadded:: 0.13

:class:`~pandas.DataFrame` objects have a :meth:`~pandas.DataFrame.query`
method that allows selection using an expression.
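
A minimal sketch (random data assumed):

.. code-block:: python

   import numpy as np
   import pandas as pd

   df = pd.DataFrame(np.random.rand(10, 3), columns=['a', 'b', 'c'])

   # rows where a < b and b < c, written as a chained comparison
   df.query('a < b < c')
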

@@ -1506,8 +1491,6 @@ The name, if set, will be shown in the console display:
Setting metadata
~~~~~~~~~~~~~~~~

.. versionadded:: 0.13.0

Indexes are "mostly immutable", but it is possible to set and change their
metadata, like the index ``name`` (or, for ``MultiIndex``, ``levels`` and
``labels``).
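
For instance (an assumed integer index):

.. code-block:: python

   import pandas as pd

   ind = pd.Index([1, 2, 3])

   ind.rename("apple")                    # returns a renamed copy
   ind.set_names(["bob"], inplace=True)   # sets the name in place
   ind.name = "banana"                    # the name attribute is writable
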
@@ -1790,7 +1773,7 @@ Evaluation order matters

Furthermore, in chained expressions, the order may determine whether a copy is returned or not.
If an expression will set values on a copy of a slice, then a ``SettingWithCopy``
exception will be raised (this raise/warn behavior is new starting in 0.13.0)
warning will be issued.

You can control the action of a chained assignment via the option ``mode.chained_assignment``,
which can take the values ``['raise','warn',None]``, where showing a warning is the default.
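
For example, to escalate the warning to an exception:

.. code-block:: python

   import pandas as pd

   pd.set_option('mode.chained_assignment', 'raise')
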
2 changes: 1 addition & 1 deletion doc/source/install.rst
@@ -107,7 +107,7 @@ following command::

To install a specific pandas version::

conda install pandas=0.13.1
conda install pandas=0.20.3

To install other packages, IPython for example::
