DOC: Fixing more doc warnings #24438

Merged 2 commits on Dec 27, 2018
15 changes: 8 additions & 7 deletions doc/source/advanced.rst
@@ -778,12 +778,12 @@ a ``Categorical`` will return a ``CategoricalIndex``, indexed according to the c
of the **passed** ``Categorical`` dtype. This allows one to arbitrarily index these even with
values **not** in the categories, similarly to how you can reindex **any** pandas index.

-.. ipython :: python
+.. ipython:: python

-    df2.reindex(['a','e'])
-    df2.reindex(['a','e']).index
-    df2.reindex(pd.Categorical(['a','e'],categories=list('abcde')))
-    df2.reindex(pd.Categorical(['a','e'],categories=list('abcde'))).index
+    df2.reindex(['a', 'e'])
+    df2.reindex(['a', 'e']).index
+    df2.reindex(pd.Categorical(['a', 'e'], categories=list('abcde')))
+    df2.reindex(pd.Categorical(['a', 'e'], categories=list('abcde'))).index
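This hunk only normalizes spacing. Since ``df2`` is defined earlier on the docs page, here is a self-contained sketch of the same behavior (the frame and its values are illustrative, not from the PR):

```python
import pandas as pd

# a stand-in for the df2 built earlier in advanced.rst
df2 = pd.DataFrame(
    {"B": [1, 2, 3]},
    index=pd.CategoricalIndex(list("abc"), categories=list("abcde")),
)

# reindexing with a plain list: 'e' is not in the data, so its row is NaN
print(df2.reindex(["a", "e"]))

# reindexing with a Categorical keeps the categories of the *passed* dtype
print(df2.reindex(pd.Categorical(["a", "e"], categories=list("abcde"))).index)
```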

.. warning::

@@ -1040,7 +1040,8 @@ than integer locations. Therefore, with an integer axis index *only*
label-based indexing is possible with the standard tools like ``.loc``. The
following code will generate exceptions:

-.. code-block:: python
+.. ipython:: python
+   :okexcept:

s = pd.Series(range(5))
s[-1]
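The switch to ``:okexcept:`` lets the docs build run this snippet even though it raises, which is the whole point of the passage. A sketch of the failure and the label-based alternative:

```python
import pandas as pd

s = pd.Series(range(5))

# with an integer axis index, -1 is treated as a label, not a position,
# so this raises rather than returning the last element
try:
    s[-1]
except (KeyError, IndexError, ValueError):
    print("raised, as the docs warn")

# label-based indexing with .loc uses the actual index labels
print(s.loc[4])
```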
@@ -1130,7 +1131,7 @@ index can be somewhat complicated. For example, the following does not work:

::

-    s.loc['c':'e'+1]
+    s.loc['c':'e' + 1]

A very common use case is to limit a time series to start and end at two
specific dates. To enable this, we made the design to make label-based
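The paragraph (truncated by the diff view here) describes label-based slicing of time series between two dates; an illustrative sketch:

```python
import pandas as pd

ts = pd.Series(range(10), index=pd.date_range("2000-01-01", periods=10))

# label-based slicing on a DatetimeIndex includes both endpoints
sub = ts.loc["2000-01-03":"2000-01-06"]
print(sub)
```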
11 changes: 4 additions & 7 deletions doc/source/categorical.rst
@@ -977,21 +977,17 @@ categorical (categories and ordering). So if you read back the CSV file you have
relevant columns back to `category` and assign the right categories and categories ordering.

-.. ipython:: python
-   :suppress:
-
-
 .. ipython:: python

-    from pandas.compat import StringIO
+    import io
     s = pd.Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'd']))
     # rename the categories
     s.cat.categories = ["very good", "good", "bad"]
     # reorder the categories and add missing categories
     s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
     df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4, 5, 6]})
-    csv = StringIO()
+    csv = io.StringIO()
     df.to_csv(csv)
-    df2 = pd.read_csv(StringIO(csv.getvalue()))
+    df2 = pd.read_csv(io.StringIO(csv.getvalue()))
     df2.dtypes
     df2["cats"]
     # Redo the category
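The swap from ``pandas.compat.StringIO`` to the stdlib ``io.StringIO`` keeps the example runnable on Python 3. A condensed, self-contained version of the roundtrip the hunk describes (data is illustrative):

```python
import io
import pandas as pd

s = pd.Series(pd.Categorical(["a", "b", "b", "a"], categories=["a", "b", "c"]))
df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4]})

buf = io.StringIO()
df.to_csv(buf, index=False)
df2 = pd.read_csv(io.StringIO(buf.getvalue()))

# the CSV roundtrip loses the categorical dtype...
print(df2["cats"].dtype)  # object

# ...so restore it, including categories absent from the data
df2["cats"] = df2["cats"].astype("category").cat.set_categories(["a", "b", "c"])
print(df2["cats"].cat.categories.tolist())
```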
@@ -1206,6 +1202,7 @@ Use ``copy=True`` to prevent such a behaviour or simply don't reuse ``Categorica
cat

.. note::

This also happens in some cases when you supply a NumPy array instead of a ``Categorical``:
using an int array (e.g. ``np.array([1,2,3,4])``) will exhibit the same behavior, while using
a string array (e.g. ``np.array(["a","b","c","a"])``) will not.
5 changes: 4 additions & 1 deletion doc/source/conf.py
@@ -296,7 +296,10 @@
np.random.seed(123456)
np.set_printoptions(precision=4, suppress=True)
pd.options.display.max_rows = 15
-"""
+
+import os
+os.chdir('{}')
+""".format(os.path.dirname(os.path.dirname(__file__)))


html_context = {
3 changes: 1 addition & 2 deletions doc/source/cookbook.rst
@@ -1236,7 +1236,7 @@ the following Python code will read the binary file ``'binary.dat'`` into a
pandas ``DataFrame``, where each element of the struct corresponds to a column
in the frame:

-.. code-block:: python
+.. ipython:: python
Member:
Is there a reason you changed this? As this doesn't run (there is no file binary.dat)

Member:
I reverted it in #24552 (along with some other doc fixes), but we certainly can also try to actually fix the example (meaning: make it runnable).

Member Author:
I found it inconsistent that, when rendered, everything else had the IPython formatting and this block did not; I also thought a code-block had to use ``>>>``, so a plain block that wasn't an ipython directive looked like an error.

But it's OK to have them reverted, at least for now.

Member:
It was indeed a bit strange to just include code without output (the cookbook is not very well maintained anyhow).
We could in principle also include IPython-like formatting in a code-block, if that makes it a bit more consistent.


names = 'count', 'avg', 'scale'
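As the reviewers note, the cookbook recipe doesn't ship a ``binary.dat``, so the block can't run as-is. A self-contained sketch of the same idea using an in-memory buffer; the field types are assumptions, not taken from the cookbook's actual struct:

```python
import numpy as np
import pandas as pd

# structured dtype standing in for the C struct in the recipe
names = "count", "avg", "scale"
formats = "i4", "f8", "f4"  # assumed field types
dt = np.dtype({"names": list(names), "formats": list(formats)})

# write two records to an in-memory byte string instead of 'binary.dat'
raw = np.array([(10, 1.5, 2.5), (20, 3.0, 4.0)], dtype=dt).tobytes()

# each struct field becomes a DataFrame column
recs = np.frombuffer(raw, dtype=dt)
df = pd.DataFrame(recs)
print(df)
```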

@@ -1399,7 +1399,6 @@ of the data values:

.. ipython:: python


def expand_grid(data_dict):
rows = itertools.product(*data_dict.values())
return pd.DataFrame.from_records(rows, columns=data_dict.keys())
4 changes: 1 addition & 3 deletions doc/source/gotchas.rst
@@ -301,9 +301,7 @@ Byte-Ordering Issues
--------------------
Occasionally you may have to deal with data that were created on a machine with
a different byte order than the one on which you are running Python. A common
-symptom of this issue is an error like:
-
-.. code-block:: python-traceback
+symptom of this issue is an error like::

Traceback
...
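The traceback is truncated in this hunk. The usual remedy, per the surrounding gotchas section, is to convert the array to native byte order before handing it to pandas; a sketch (the big-endian dtype is illustrative):

```python
import numpy as np
import pandas as pd

# an array with non-native byte order, as if read from a big-endian file
arr = np.array([1, 2, 3], dtype=">i4")

# convert to the native byte order before constructing pandas objects
native = arr.astype(arr.dtype.newbyteorder("="))
s = pd.Series(native)
print(s.tolist())
```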
2 changes: 1 addition & 1 deletion doc/source/io.rst
@@ -4879,7 +4879,7 @@ below and the SQLAlchemy `documentation <https://docs.sqlalchemy.org/en/latest/c

If you want to manage your own connections you can pass one of those instead:

-.. code-block:: python
+.. ipython:: python
Member:
same here


with engine.connect() as conn, conn.begin():
data = pd.read_sql_table('data', conn)
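Like the cookbook hunk, this snippet needs a live engine to run. A runnable stand-in that manages its own connection using the stdlib ``sqlite3`` DBAPI instead of a SQLAlchemy engine (table name and data are illustrative):

```python
import sqlite3
import pandas as pd

# a connection the caller manages explicitly, as in the doc snippet
conn = sqlite3.connect(":memory:")
pd.DataFrame({"a": [1, 2]}).to_sql("data", conn, index=False)

data = pd.read_sql("SELECT * FROM data", conn)
conn.close()
print(data["a"].tolist())
```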
2 changes: 2 additions & 0 deletions doc/source/merging.rst
@@ -1122,6 +1122,8 @@ This is equivalent but less verbose and more memory efficient / faster than this
labels=['left', 'right'], vertical=False);
plt.close('all');

+.. _merging.join_with_two_multi_indexes:
+
Joining with two MultiIndexes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
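The hunk only adds a cross-reference label for this section. For context, one robust way to join frames indexed by two different MultiIndexes is to merge on the shared level after resetting the indexes; a sketch with illustrative data (not the example from merging.rst):

```python
import pandas as pd

left = pd.DataFrame(
    {"v1": [1, 2, 3]},
    index=pd.MultiIndex.from_tuples(
        [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
    ),
)
right = pd.DataFrame(
    {"v2": [10, 20]},
    index=pd.MultiIndex.from_tuples(
        [("K0", "Y0"), ("K1", "Y1")], names=["key", "Y"]
    ),
)

# merge on the shared 'key' level, then rebuild the combined MultiIndex
out = (
    left.reset_index()
    .merge(right.reset_index(), on="key")
    .set_index(["key", "X", "Y"])
)
print(out)
```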

9 changes: 2 additions & 7 deletions doc/source/sparse.rst
@@ -151,6 +151,7 @@ It raises if any value cannot be coerced to specified dtype.
.. code-block:: ipython

In [1]: ss = pd.Series([1, np.nan, np.nan]).to_sparse()
+Out[1]:
0 1.0
1 NaN
2 NaN
@@ -160,6 +161,7 @@ It raises if any value cannot be coerced to specified dtype.
Block lengths: array([1], dtype=int32)

In [2]: ss.astype(np.int64)
+Out[2]:
ValueError: unable to coerce current fill_value nan to int64 dtype

.. _sparse.calculation:
@@ -223,10 +225,6 @@ A :meth:`SparseSeries.to_coo` method is implemented for transforming a ``SparseS

The method requires a ``MultiIndex`` with two or more levels.

-.. ipython:: python
-   :suppress:
-
-
.. ipython:: python

s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
@@ -271,9 +269,6 @@ Specifying different row and column labels (and not sorting them) yields a diffe

A convenience method :meth:`SparseSeries.from_coo` is implemented for creating a ``SparseSeries`` from a ``scipy.sparse.coo_matrix``.

-.. ipython:: python
-   :suppress:
-
.. ipython:: python

from scipy import sparse
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.13.1.rst
@@ -220,7 +220,7 @@ Enhancements

pd.MultiIndex.from_product([shades, colors], names=['shade', 'color'])

-- Panel :meth:`~pandas.Panel.apply` will work on non-ufuncs. See :ref:`the docs<basics.apply_panel>`.
+- Panel :meth:`~pandas.Panel.apply` will work on non-ufuncs. See :ref:`the docs<basics.apply>`.

.. ipython:: python

4 changes: 2 additions & 2 deletions doc/source/whatsnew/v0.19.0.rst
@@ -1250,8 +1250,8 @@ Operators now preserve dtypes
s
s.astype(np.int64)

-``astype`` fails if data contains values which cannot be converted to specified ``dtype``.
-Note that the limitation is applied to ``fill_value`` which default is ``np.nan``.
+``astype`` fails if data contains values which cannot be converted to specified ``dtype``.
+Note that the limitation is applied to ``fill_value`` which default is ``np.nan``.

.. code-block:: ipython

4 changes: 2 additions & 2 deletions pandas/io/parsers.py
@@ -59,8 +59,8 @@
Also supports optionally iterating or breaking of the file
into chunks.

-Additional help can be found in the `online docs for IO Tools
-<http://pandas.pydata.org/pandas-docs/stable/io.html>`_.
+Additional help can be found in the online docs for
+`IO Tools <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.
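The docstring's mention of "iterating or breaking of the file into chunks" corresponds to the ``chunksize`` parameter of ``read_csv``; a minimal sketch:

```python
import io
import pandas as pd

csv = io.StringIO("a,b\n1,2\n3,4\n5,6\n")

# chunksize turns read_csv into an iterator of DataFrames
chunks = list(pd.read_csv(csv, chunksize=2))
print([len(c) for c in chunks])
```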

Parameters
----------