diff --git a/doc/source/advanced.rst b/doc/source/advanced.rst
index 1ed365d09152b..e681cb59f627f 100644
--- a/doc/source/advanced.rst
+++ b/doc/source/advanced.rst
@@ -778,12 +778,12 @@ a ``Categorical`` will return a ``CategoricalIndex``, indexed according to the c
 of the **passed** ``Categorical`` dtype. This allows one to arbitrarily index these even with
 values **not** in the categories, similarly to how you can reindex **any** pandas index.
 
-.. ipython :: python
+.. ipython:: python
 
-    df2.reindex(['a','e'])
-    df2.reindex(['a','e']).index
-    df2.reindex(pd.Categorical(['a','e'],categories=list('abcde')))
-    df2.reindex(pd.Categorical(['a','e'],categories=list('abcde'))).index
+    df2.reindex(['a', 'e'])
+    df2.reindex(['a', 'e']).index
+    df2.reindex(pd.Categorical(['a', 'e'], categories=list('abcde')))
+    df2.reindex(pd.Categorical(['a', 'e'], categories=list('abcde'))).index
 
 .. warning::
 
@@ -1040,7 +1040,8 @@ than integer locations. Therefore, with an integer axis index *only*
 label-based indexing is possible with the standard tools like ``.loc``. The
 following code will generate exceptions:
 
-.. code-block:: python
+.. ipython:: python
+   :okexcept:
 
     s = pd.Series(range(5))
    s[-1]
@@ -1130,7 +1131,7 @@ index can be somewhat complicated. For example, the following does not work:
 
 ::
 
-    s.loc['c':'e'+1]
+    s.loc['c':'e' + 1]
 
 A very common use case is to limit a time series to start and end
 at two specific dates. To enable this, we made the design to make label-based
diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index 72d4fec0c447c..68e39e68220a7 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -977,21 +977,17 @@ categorical (categories and ordering). So if you read back the CSV file you have
 relevant columns back to `category` and assign the right categories and categories ordering.
 
 .. ipython:: python
-    :suppress:
-
-.. ipython:: python
-
-    from pandas.compat import StringIO
+    import io
 
     s = pd.Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'd']))
     # rename the categories
     s.cat.categories = ["very good", "good", "bad"]
     # reorder the categories and add missing categories
     s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
     df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4, 5, 6]})
-    csv = StringIO()
+    csv = io.StringIO()
     df.to_csv(csv)
-    df2 = pd.read_csv(StringIO(csv.getvalue()))
+    df2 = pd.read_csv(io.StringIO(csv.getvalue()))
     df2.dtypes
     df2["cats"]
     # Redo the category
@@ -1206,6 +1202,7 @@ Use ``copy=True`` to prevent such a behaviour or simply don't reuse ``Categorica
     cat
 
 .. note::
+
     This also happens in some cases when you supply a NumPy array instead of a ``Categorical``:
     using an int array (e.g. ``np.array([1,2,3,4])``) will exhibit the same behavior, while using
     a string array (e.g. ``np.array(["a","b","c","a"])``) will not.
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 2d1369499dfda..2bef64cce5c35 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -296,7 +296,10 @@
 np.random.seed(123456)
 np.set_printoptions(precision=4, suppress=True)
 pd.options.display.max_rows = 15
-"""
+
+import os
+os.chdir('{}')
+""".format(os.path.dirname(os.path.dirname(__file__)))
 
 
 html_context = {
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 1b2e856e979a8..0c192a0aab24a 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -1236,7 +1236,7 @@
 the following Python code will read the binary file ``'binary.dat'`` into a
 pandas ``DataFrame``, where each element of the struct corresponds to a column
 in the frame:
 
-.. code-block:: python
+.. ipython:: python
 
     names = 'count', 'avg', 'scale'
@@ -1399,7 +1399,6 @@ of the data values:
 
 .. ipython:: python
 
-
     def expand_grid(data_dict):
         rows = itertools.product(*data_dict.values())
         return pd.DataFrame.from_records(rows, columns=data_dict.keys())
diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index 7d1ba865d551d..2b42eebf762e1 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -301,9 +301,7 @@ Byte-Ordering Issues
 --------------------
 Occasionally you may have to deal with data that were created on a machine with
 a different byte order than the one on which you are running Python. A common
-symptom of this issue is an error like:
-
-.. code-block:: python-traceback
+symptom of this issue is an error like::
 
     Traceback
         ...
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 9aff1e54d8e98..b22f52e448c0d 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -4879,7 +4879,7 @@ below and the SQLAlchemy `documentation `.
+- Panel :meth:`~pandas.Panel.apply` will work on non-ufuncs. See :ref:`the docs`.
 
 .. ipython:: python
diff --git a/doc/source/whatsnew/v0.19.0.rst b/doc/source/whatsnew/v0.19.0.rst
index 6f4e8e36cdc04..38208e9ff4cba 100644
--- a/doc/source/whatsnew/v0.19.0.rst
+++ b/doc/source/whatsnew/v0.19.0.rst
@@ -1250,8 +1250,8 @@ Operators now preserve dtypes
 
     s
     s.astype(np.int64)
 
-  ``astype`` fails if data contains values which cannot be converted to specified ``dtype``.
-  Note that the limitation is applied to ``fill_value`` which default is ``np.nan``.
+``astype`` fails if data contains values which cannot be converted to the specified ``dtype``.
+Note that the limitation is applied to ``fill_value``, whose default is ``np.nan``.
 
 .. code-block:: ipython
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index aadca1fcb3bef..a85e5b3ea6539 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -59,8 +59,8 @@
 Also supports optionally iterating or breaking of the file
 into chunks.
 
-Additional help can be found in the `online docs for IO Tools
-`_.
+Additional help can be found in the online docs for
+`IO Tools `_.
 
 Parameters
 ----------
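
Not part of the patch: as a quick sanity check of the ``pandas.compat.StringIO`` → ``io.StringIO`` switch in the ``categorical.rst`` hunk above, here is a minimal, self-contained sketch of the CSV round trip. The variable names (``buf``, ``df2``) are illustrative, not taken from the patch.

```python
import io

import pandas as pd

# Mirror the categorical.rst example: a categorical column round-tripped
# through CSV, using the stdlib io.StringIO instead of the old compat shim.
s = pd.Series(pd.Categorical(["a", "b", "b", "a", "a", "d"]))
df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4, 5, 6]})

buf = io.StringIO()
df.to_csv(buf)
df2 = pd.read_csv(io.StringIO(buf.getvalue()))

# CSV does not preserve the categorical dtype: the column comes back as
# object and must be converted again by hand, as the doc section explains.
df2["cats"] = df2["cats"].astype("category")
```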
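
Also outside the patch: the ``advanced.rst`` hunk above reformats the example showing that reindexing with a ``Categorical`` preserves a ``CategoricalIndex`` with the categories of the passed dtype, while a plain list does not. A small stand-in sketch (``cdf`` here is illustrative; the document builds its own ``df2`` earlier):

```python
import pandas as pd

# A frame indexed by a CategoricalIndex, analogous to df2 in advanced.rst.
cdf = pd.DataFrame({"A": range(4)},
                   index=pd.CategoricalIndex(list("abcd"),
                                             categories=list("abcde")))

# Reindexing with a plain list gives back a regular Index...
plain = cdf.reindex(["a", "e"])

# ...while reindexing with a Categorical keeps a CategoricalIndex carrying
# the full category set of the passed dtype; 'e' is a valid category but
# has no row, so its value is filled with NaN.
cat = cdf.reindex(pd.Categorical(["a", "e"], categories=list("abcde")))
```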
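
For the ``advanced.rst`` hunk that adds ``:okexcept:``: the example fails on purpose, because with an integer axis index ``[]`` is strictly label-based, so ``s[-1]`` is a failed label lookup rather than positional access. A sketch of the behavior being documented:

```python
import pandas as pd

s = pd.Series(range(5))

# There is no label -1 on this integer index, so lookup raises instead of
# returning the last element -- this is what :okexcept: lets the docs show.
raised = False
try:
    s[-1]
except KeyError:
    raised = True

# Positional access is spelled .iloc instead.
last = s.iloc[-1]
```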