DOC: Cleaned references to pandas <v0.12 in docs #17375

Merged
8 changes: 4 additions & 4 deletions doc/source/basics.rst
Original file line number Diff line number Diff line change
Expand Up @@ -251,8 +251,8 @@ replace NaN with some other value using ``fillna`` if you wish).
Flexible Comparisons
~~~~~~~~~~~~~~~~~~~~

Starting in v0.8, pandas introduced binary comparison methods eq, ne, lt, gt,
le, and ge to Series and DataFrame whose behavior is analogous to the binary
Series and DataFrame have the binary comparison methods ``eq``, ``ne``, ``lt``, ``gt``,
``le``, and ``ge`` whose behavior is analogous to the binary
arithmetic operations described above:
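A quick sketch of these methods (the frame here is illustrative, not the docs' own example):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [3, 2, 1]})
# eq/ne/lt/gt/le/ge mirror ==, !=, <, >, <=, >= element-wise
result = df["a"].gt(df["b"])
print(result.tolist())  # [False, False, True]
```

The method form is convenient inside method chains, where an infix operator would be awkward.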

.. ipython:: python
Expand Down Expand Up @@ -1908,7 +1908,7 @@ each type in a ``DataFrame``:

dft.get_dtype_counts()

Numeric dtypes will propagate and can coexist in DataFrames (starting in v0.11.0).
Numeric dtypes will propagate and can coexist in DataFrames.
If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``,
or a passed ``Series``), then it will be preserved in DataFrame operations. Furthermore,
different numeric dtypes will **NOT** be combined. The following example will give you a taste.
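A minimal sketch of numeric dtypes coexisting in one frame (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "i32": pd.Series([1, 2], dtype="int32"),
    "f64": pd.Series([1.0, 2.0], dtype="float64"),
})
# each column keeps its own numeric dtype; they are not combined
print(df.dtypes.tolist())
```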
Expand Down Expand Up @@ -2137,7 +2137,7 @@ gotchas
~~~~~~~

Performing selection operations on ``integer`` type data can easily upcast the data to ``floating``.
The dtype of the input data will be preserved in cases where ``nans`` are not introduced (starting in 0.11.0)
The dtype of the input data will be preserved in cases where ``nans`` are not introduced.
See also :ref:`Support for integer NA <gotchas.intna>`
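A small sketch of the upcasting rule (illustrative values):

```python
import pandas as pd

s = pd.Series([1, 2, 3])           # int64
preserved = s.iloc[:2]             # no NaN introduced -> dtype preserved
upcast = s.reindex([0, 1, 4])      # label 4 is missing -> NaN -> upcast to float
print(preserved.dtype, upcast.dtype)  # int64 float64
```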

.. ipython:: python
Expand Down
13 changes: 5 additions & 8 deletions doc/source/dsintro.rst
Expand Up @@ -73,7 +73,7 @@ index is passed, one will be created having values ``[0, ..., len(data) - 1]``.

.. note::

Starting in v0.8.0, pandas supports non-unique index values. If an operation
pandas supports non-unique index values. If an operation
that does not support duplicate index values is attempted, an exception
will be raised at that time. The reason for being lazy is nearly all performance-based
(there are many instances in computations, like parts of GroupBy, where the index
Expand Down Expand Up @@ -698,7 +698,7 @@ DataFrame in tabular form, though it won't always fit the console width:

print(baseball.iloc[-20:, :12].to_string())

New since 0.10.0, wide DataFrames will now be printed across multiple rows by
Wide DataFrames will be printed across multiple rows by
default:

.. ipython:: python
Expand Down Expand Up @@ -845,19 +845,16 @@ DataFrame objects with mixed-type columns, all of the data will get upcasted to

.. note::

Unfortunately Panel, being less commonly used than Series and DataFrame,
Panel, being less commonly used than Series and DataFrame,
has been slightly neglected feature-wise. A number of methods and options
available in DataFrame are not available in Panel. This will get worked
on, of course, in future releases. And faster if you join me in working on
the codebase.
available in DataFrame are not available in Panel.

.. _dsintro.to_panel:

From DataFrame using ``to_panel`` method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This method was introduced in v0.7 to replace ``LongPanel.to_long``, and converts
a DataFrame with a two-level index to a Panel.
``to_panel`` converts a DataFrame with a two-level index to a Panel.
Contributor

can you add a reference to the section where panel is deprecated.

Contributor Author


There is a deprecation warning a bit above, so adding it here as well would be too much IMO. I changed a note that calls on people to contribute to Panel, though, as that isn't relevant anymore.


.. ipython:: python
:okwarning:
Expand Down
4 changes: 1 addition & 3 deletions doc/source/groupby.rst
Expand Up @@ -140,7 +140,7 @@ columns:

In [5]: grouped = df.groupby(get_letter_type, axis=1)

Starting with 0.8, pandas Index objects now support duplicate values. If a
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
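A minimal sketch of grouping on a non-unique index (labels are illustrative):

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=["a", "a", "b"])
# duplicate index values collapse into a single group,
# so the aggregated output has a unique index
sums = s.groupby(level=0).sum()
print(sums.tolist())  # [3, 3]
```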
Expand Down Expand Up @@ -288,8 +288,6 @@ chosen level:

s.sum(level='second')

.. versionadded:: 0.6

Grouping with multiple levels is supported.
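For instance, a sketch of grouping on several ``MultiIndex`` levels at once (the data here is made up; ``groupby(level=...)`` is the non-deprecated spelling of the level-wise aggregation shown above):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("x", 1), ("x", 1), ("y", 2)], names=["first", "second"]
)
s = pd.Series([10, 20, 30], index=idx)
# aggregate over both index levels together
totals = s.groupby(level=["first", "second"]).sum()
print(totals.tolist())  # [30, 30]
```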

.. ipython:: python
Expand Down
2 changes: 0 additions & 2 deletions doc/source/indexing.rst
Expand Up @@ -66,8 +66,6 @@ See the :ref:`cookbook<cookbook.selection>` for some advanced strategies
Different Choices for Indexing
------------------------------

.. versionadded:: 0.11.0

Object selection has had a number of user-requested additions in order to
support more explicit location-based indexing. Pandas now supports three types
of multi-axis indexing.
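A sketch of two of those indexers, ``.loc`` (label-based) and ``.iloc`` (integer-position-based), on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]}, index=["x", "y"])
by_label = df.loc["x", "b"]    # select by row/column labels
by_position = df.iloc[0, 1]    # select by integer positions
print(by_label, by_position)   # 3 3
```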
Expand Down
14 changes: 7 additions & 7 deletions doc/source/io.rst
Expand Up @@ -364,7 +364,7 @@ warn_bad_lines : boolean, default ``True``
Specifying column data types
''''''''''''''''''''''''''''

Starting with v0.10, you can indicate the data type for the whole DataFrame or
You can indicate the data type for the whole DataFrame or
individual columns:
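For example (the inline CSV is hypothetical):

```python
import io
import pandas as pd

csv = "a,b\n1,2\n3,4"
# per-column dtypes via a dict...
df = pd.read_csv(io.StringIO(csv), dtype={"a": "float64"})
# ...or one dtype for the whole frame
df_all = pd.read_csv(io.StringIO(csv), dtype="float64")
print(df.dtypes["a"], df_all.dtypes["b"])  # float64 float64
```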

.. ipython:: python
Expand Down Expand Up @@ -3346,7 +3346,7 @@ Read/Write API
''''''''''''''

``HDFStore`` supports a top-level API using ``read_hdf`` for reading and ``to_hdf`` for writing,
similar to how ``read_csv`` and ``to_csv`` work. (new in 0.11.0)
similar to how ``read_csv`` and ``to_csv`` work.

.. ipython:: python

Expand Down Expand Up @@ -3791,7 +3791,7 @@ indexed dimension as the ``where``.

.. note::

Indexes are automagically created (starting ``0.10.1``) on the indexables
Indexes are automagically created on the indexables
and any data columns you specify. This behavior can be turned off by passing
``index=False`` to ``append``.

Expand Down Expand Up @@ -3878,7 +3878,7 @@ create a new table!)
Iterator
++++++++

Starting in ``0.11.0``, you can pass, ``iterator=True`` or ``chunksize=number_in_a_chunk``
You can pass ``iterator=True`` or ``chunksize=number_in_a_chunk``
to ``select`` and ``select_as_multiple`` to return an iterator on the results.
The default is 50,000 rows returned in a chunk.

Expand Down Expand Up @@ -3986,8 +3986,8 @@ of rows in an object.
Multiple Table Queries
++++++++++++++++++++++

New in 0.10.1 are the methods ``append_to_multiple`` and
``select_as_multiple``, that can perform appending/selecting from
The methods ``append_to_multiple`` and
``select_as_multiple`` can perform appending/selecting from
multiple tables at once. The idea is to have one table (call it the
selector table) that you index most/all of the columns, and perform your
queries. The other table(s) are data tables with an index matching the
Expand Down Expand Up @@ -4291,7 +4291,7 @@ Pass ``min_itemsize`` on the first table creation to a-priori specify the minimu
``min_itemsize`` can be an integer, or a dict mapping a column name to an integer. You can pass ``values`` as a key to
allow all *indexables* or *data_columns* to have this min_itemsize.

Starting in 0.11.0, passing a ``min_itemsize`` dict will cause all passed columns to be created as *data_columns* automatically.
Passing a ``min_itemsize`` dict will cause all passed columns to be created as *data_columns* automatically.

.. note::

Expand Down
9 changes: 4 additions & 5 deletions doc/source/missing_data.rst
Expand Up @@ -67,9 +67,8 @@ arise and we wish to also consider that "missing" or "not available" or "NA".

.. note::

Prior to version v0.10.0 ``inf`` and ``-inf`` were also
considered to be "NA" in computations. This is no longer the case by
default; use the ``mode.use_inf_as_na`` option to recover it.
If you want to consider ``inf`` and ``-inf`` to be "NA" in computations,
you can set ``pandas.options.mode.use_inf_as_na = True``.
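A sketch of the default behavior (illustrative values):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.inf, np.nan])
flags = s.isna()
# inf is not treated as NA by default; only nan is
# (the mode.use_inf_as_na option mentioned above would flag inf as well)
print(flags.tolist())  # [False, False, True]
```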

.. _missing.isna:

Expand Down Expand Up @@ -485,8 +484,8 @@ respectively:

Replacing Generic Values
~~~~~~~~~~~~~~~~~~~~~~~~
Often times we want to replace arbitrary values with other values. New in v0.8
is the ``replace`` method in Series/DataFrame that provides an efficient yet
Oftentimes we want to replace arbitrary values with other values. The
``replace`` method in Series/DataFrame provides an efficient yet
flexible way to perform such replacements.
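A small sketch of ``replace`` on a toy Series:

```python
import pandas as pd

s = pd.Series([0, 1, 2, 3])
# replace a single value...
print(s.replace(0, 5).tolist())              # [5, 1, 2, 3]
# ...or a list of values by a corresponding list
print(s.replace([1, 2], [10, 20]).tolist())  # [0, 10, 20, 3]
```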

For a Series, you can replace a single value or a list of values by another
Expand Down
3 changes: 1 addition & 2 deletions doc/source/timeseries.rst
Expand Up @@ -1069,8 +1069,7 @@ Offset Aliases
~~~~~~~~~~~~~~

A number of string aliases are given to useful common time series
frequencies. We will refer to these aliases as *offset aliases*
(referred to as *time rules* prior to v0.8.0).
frequencies. We will refer to these aliases as *offset aliases*.
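For example, the ``"D"`` alias (calendar day):

```python
import pandas as pd

# "D" is the offset alias for calendar-day frequency
idx = pd.date_range("2011-01-01", periods=3, freq="D")
print(list(idx))  # three consecutive daily timestamps
```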

.. csv-table::
:header: "Alias", "Description"
Expand Down
6 changes: 0 additions & 6 deletions doc/source/visualization.rst
Expand Up @@ -306,8 +306,6 @@ subplots:
df.diff().hist(color='k', alpha=0.5, bins=50)
.. versionadded:: 0.10.0

The ``by`` keyword can be specified to plot grouped histograms:

.. ipython:: python
Expand Down Expand Up @@ -831,8 +829,6 @@ and take a :class:`Series` or :class:`DataFrame` as an argument.
Scatter Matrix Plot
~~~~~~~~~~~~~~~~~~~

.. versionadded:: 0.7.3

You can create a scatter plot matrix using the
``scatter_matrix`` method in ``pandas.plotting``:

Expand All @@ -859,8 +855,6 @@ You can create a scatter plot matrix using the
Density Plot
~~~~~~~~~~~~

.. versionadded:: 0.8.0

You can create density plots using the :meth:`Series.plot.kde` and :meth:`DataFrame.plot.kde` methods.

.. ipython:: python
Expand Down