DOC: fix code-block ipython highlighting #12853

Closed
4 changes: 2 additions & 2 deletions doc/source/advanced.rst
@@ -790,7 +790,7 @@ In float indexes, slicing using floats is allowed

In non-float indexes, slicing using floats will raise a ``TypeError``

- .. code-block:: python
+ .. code-block:: ipython

In [1]: pd.Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
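The behavior this hunk documents still holds, though the exact exception has varied across versions (older pandas raised ``TypeError``; recent pandas raises ``KeyError`` for the failed label lookup). A minimal sketch against a recent pandas:

```python
import pandas as pd

s = pd.Series(range(5))  # integer (non-float) index

# A float label is not a valid indexer for this index type; depending on
# the pandas version this raises TypeError (older) or KeyError (newer).
try:
    s[3.5]
except (TypeError, KeyError) as exc:
    print(type(exc).__name__)
```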
@@ -802,7 +802,7 @@ In non-float indexes, slicing using floats will raise a ``TypeError``

Using a scalar float indexer for ``.iloc`` has been removed in 0.18.0, so the following will raise a ``TypeError``

- .. code-block:: python
+ .. code-block:: ipython

In [3]: pd.Series(range(5)).iloc[3.0]
TypeError: cannot do positional indexing on <class 'pandas.indexes.range.RangeIndex'> with these indexers [3.0] of <type 'float'>
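The ``.iloc`` restriction in this hunk is still current: a scalar float is rejected as a positional indexer, while an integer works. A short sketch, assuming a recent pandas:

```python
import pandas as pd

s = pd.Series(range(5))

try:
    s.iloc[3.0]          # scalar float positional indexer: TypeError
except TypeError:
    print("TypeError raised")

print(s.iloc[3])         # integer positional indexer is fine
```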
4 changes: 2 additions & 2 deletions doc/source/basics.rst
@@ -272,7 +272,7 @@ To evaluate single-element pandas objects in a boolean context, use the method

.. code-block:: python

- >>>if df:
+ >>> if df:
...
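The failure the snippet alludes to can be reproduced directly; a sketch of the ``ValueError`` and some unambiguous alternatives (the ``df`` here is a hypothetical example frame):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2]})

try:
    if df:                        # truth value of a DataFrame is ambiguous
        pass
except ValueError:
    print("ambiguous truth value")

# Unambiguous alternatives:
print(df.empty)                   # False
print((df['A'] > 0).all())        # True
```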

Or
@@ -352,7 +352,7 @@ objects of the same length:
Trying to compare ``Index`` or ``Series`` objects of different lengths will
raise a ValueError:

- .. code-block:: python
+ .. code-block:: ipython

In [55]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])
ValueError: Series lengths must match to compare
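This comparison still raises in current pandas (the message now reads "Can only compare identically-labeled Series objects"); comparing against an equal-length plain sequence is fine. A runnable sketch:

```python
import pandas as pd

a = pd.Series(['foo', 'bar', 'baz'])
b = pd.Series(['foo', 'bar'])

try:
    a == b                        # different lengths: ValueError
except ValueError:
    print("lengths must match")

# An equal-length list compares element-wise without error:
print(a == ['foo', 'bar', 'qux'])
```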
2 changes: 1 addition & 1 deletion doc/source/computation.rst
@@ -236,7 +236,7 @@ These are created from methods on ``Series`` and ``DataFrame``.

These objects provide tab-completion of the available methods and properties.

- .. code-block:: python
+ .. code-block:: ipython

In [14]: r.
r.agg r.apply r.count r.exclusions r.max r.median r.name r.skew r.sum
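The ``r`` being tab-completed is a ``Rolling`` object; a minimal sketch of creating one and calling two of the listed methods:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(5, dtype=float))
r = s.rolling(window=2)   # in IPython, r.<TAB> lists agg, apply, count, ...

print(r.mean())           # NaN, 0.5, 1.5, 2.5, 3.5
print(r.sum())
```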
12 changes: 6 additions & 6 deletions doc/source/enhancingperf.rst
@@ -68,7 +68,7 @@ Here's the function in pure python:

We achieve our result by using ``apply`` (row-wise):

- .. code-block:: python
+ .. code-block:: ipython

In [7]: %timeit df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)
10 loops, best of 3: 174 ms per loop
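For context, the row-wise ``apply`` being timed looks like the following; the bodies of ``f`` and ``integrate_f`` are reconstructed from the surrounding docs and may differ in detail from the originals:

```python
import numpy as np
import pandas as pd

# Reconstruction of the pure-Python functions this section benchmarks.
def f(x):
    return x * (x - 1)

def integrate_f(a, b, N):
    N = int(N)            # row-wise apply may upcast N to float
    s = 0.0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i * dx)
    return s * dx

df = pd.DataFrame({'a': np.random.randn(4),
                   'b': np.random.randn(4),
                   'N': np.random.randint(100, 1000, 4)})

# The expression being timed with %timeit in the docs:
result = df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)
print(result)
```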
@@ -125,7 +125,7 @@ is here to distinguish between function versions):
to be using bleeding edge ipython for paste to play well with cell magics.


- .. code-block:: python
+ .. code-block:: ipython

In [4]: %timeit df.apply(lambda x: integrate_f_plain(x['a'], x['b'], x['N']), axis=1)
10 loops, best of 3: 85.5 ms per loop
@@ -154,7 +154,7 @@ We get another huge improvement simply by providing type information:
...: return s * dx
...:

- .. code-block:: python
+ .. code-block:: ipython

In [4]: %timeit df.apply(lambda x: integrate_f_typed(x['a'], x['b'], x['N']), axis=1)
10 loops, best of 3: 20.3 ms per loop
@@ -234,7 +234,7 @@ the rows, applying our ``integrate_f_typed``, and putting this in the zeros array
Loops like this would be *extremely* slow in python, but in Cython looping
over numpy arrays is *fast*.

- .. code-block:: python
+ .. code-block:: ipython

In [4]: %timeit apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)
1000 loops, best of 3: 1.25 ms per loop
@@ -284,7 +284,7 @@ advanced cython techniques:
...: return res
...:

- .. code-block:: python
+ .. code-block:: ipython

In [4]: %timeit apply_integrate_f_wrap(df['a'].values, df['b'].values, df['N'].values)
1000 loops, best of 3: 987 us per loop
@@ -348,7 +348,7 @@ Using ``numba`` to just-in-time compile your code. We simply take the plain python

Note that we directly pass ``numpy`` arrays to the numba function. ``compute_numba`` is just a wrapper that provides a nicer interface by passing/returning pandas objects.

- .. code-block:: python
+ .. code-block:: ipython

In [4]: %timeit compute_numba(df)
1000 loops, best of 3: 798 us per loop
2 changes: 1 addition & 1 deletion doc/source/indexing.rst
@@ -297,7 +297,7 @@ Selection By Label
dfl = pd.DataFrame(np.random.randn(5,4), columns=list('ABCD'), index=pd.date_range('20130101',periods=5))
dfl

- .. code-block:: python
+ .. code-block:: ipython

In [4]: dfl.loc[2:3]
TypeError: cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'> with these indexers [2] of <type 'int'>
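This restriction still holds: an integer slice on a ``DatetimeIndex`` via ``.loc`` raises ``TypeError``, while label (string) slicing works. A runnable sketch:

```python
import numpy as np
import pandas as pd

dfl = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'),
                   index=pd.date_range('20130101', periods=5))

try:
    dfl.loc[2:3]                  # integer slice on a DatetimeIndex
except TypeError:
    print("TypeError raised")

# Label slicing with date strings works:
print(dfl.loc['20130102':'20130104'])
```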
10 changes: 5 additions & 5 deletions doc/source/io.rst
@@ -4375,7 +4375,7 @@ Creating BigQuery Tables
As of 0.15.2, the gbq module has a function :func:`~pandas.io.gbq.generate_bq_schema` which will
produce the dictionary representation schema of the specified pandas DataFrame.

- .. code-block:: python
+ .. code-block:: ipython

In [10]: gbq.generate_bq_schema(df, default_type='STRING')

@@ -4633,7 +4633,7 @@ Performance Considerations

This is an informal comparison of various IO methods, using pandas 0.13.1.

- .. code-block:: python
+ .. code-block:: ipython

In [1]: df = DataFrame(randn(1000000,2),columns=list('AB'))

@@ -4648,7 +4648,7 @@

Writing

- .. code-block:: python
+ .. code-block:: ipython

In [14]: %timeit test_sql_write(df)
1 loops, best of 3: 6.24 s per loop
@@ -4670,7 +4670,7 @@ Writing

Reading

- .. code-block:: python
+ .. code-block:: ipython

In [18]: %timeit test_sql_read()
1 loops, best of 3: 766 ms per loop
@@ -4692,7 +4692,7 @@ Reading

Space on disk (in bytes)

- .. code-block:: python
+ .. code-block::

25843712 Apr 8 14:11 test.sql
24007368 Apr 8 14:11 test_fixed.hdf
2 changes: 1 addition & 1 deletion doc/source/options.rst
@@ -130,7 +130,7 @@ Setting Startup Options in python/ipython Environment

Using startup scripts for the python/ipython environment to import pandas and set options makes working with pandas more efficient. To do this, create a .py or .ipy script in the startup directory of the desired profile. An example where the startup folder is in a default ipython profile can be found at:

- .. code-block:: python
+ .. code-block:: none

$IPYTHONDIR/profile_default/startup
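A sketch of what such a startup file might contain; the filename ``pandas_setup.py`` and the chosen options are hypothetical examples, not part of the original docs:

```python
# Hypothetical contents of a file such as
#   $IPYTHONDIR/profile_default/startup/pandas_setup.py
import pandas as pd

# Options set here apply to every session started with this profile.
pd.set_option('display.max_rows', 999)
pd.set_option('display.precision', 5)
```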

2 changes: 1 addition & 1 deletion doc/source/release.rst
@@ -1521,7 +1521,7 @@ API Changes
of the future import. You can use ``//`` and ``floordiv`` to do integer
division.

- .. code-block:: python
+ .. code-block:: ipython

In [3]: arr = np.array([1, 2, 3, 4])

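The ``//`` and ``floordiv`` alternatives mentioned in the note can be sketched as follows:

```python
import numpy as np
import pandas as pd

arr = np.array([1, 2, 3, 4])
s = pd.Series(arr)

print(s / 2)           # true division: 0.5, 1.0, 1.5, 2.0
print(s // 2)          # integer division: 0, 1, 1, 2
print(s.floordiv(2))   # same result as //
```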
12 changes: 6 additions & 6 deletions doc/source/remote_data.rst
@@ -192,7 +192,7 @@ every world bank indicator is accessible.
For example, if you wanted to compare the Gross Domestic Products per capita in
constant dollars in North America, you would use the ``search`` function:

- .. code-block:: python
+ .. code-block:: ipython

In [1]: from pandas.io import wb

@@ -207,7 +207,7 @@ constant dollars in North America, you would use the ``search`` function:
Then you would use the ``download`` function to acquire the data from the World
Bank's servers:

- .. code-block:: python
+ .. code-block:: ipython

In [3]: dat = wb.download(indicator='NY.GDP.PCAP.KD', country=['US', 'CA', 'MX'], start=2005, end=2008)

@@ -230,7 +230,7 @@ Bank's servers:
The resulting dataset is a properly formatted ``DataFrame`` with a hierarchical
index, so it is easy to apply ``.groupby`` transformations to it:

- .. code-block:: python
+ .. code-block:: ipython

In [6]: dat['NY.GDP.PCAP.KD'].groupby(level=0).mean()
Out[6]:
@@ -243,7 +243,7 @@ index, so it is easy to apply ``.groupby`` transformations to it:
Now imagine you want to compare GDP to the share of people with cellphone
contracts around the world.

- .. code-block:: python
+ .. code-block:: ipython

In [7]: wb.search('cell.*%').iloc[:,:2]
Out[7]:
@@ -255,7 +255,7 @@ contracts around the world.
Notice that this second search was much faster than the first one because
``pandas`` now has a cached list of available data series.

- .. code-block:: python
+ .. code-block:: ipython

In [13]: ind = ['NY.GDP.PCAP.KD', 'IT.MOB.COV.ZS']
In [14]: dat = wb.download(indicator=ind, country='all', start=2011, end=2011).dropna()
@@ -273,7 +273,7 @@ Finally, we use the ``statsmodels`` package to assess the relationship between
our two variables using ordinary least squares regression. Unsurprisingly,
populations in rich countries tend to use cellphones at a higher rate:

- .. code-block:: python
+ .. code-block:: ipython

In [17]: import numpy as np
In [18]: import statsmodels.formula.api as smf
4 changes: 2 additions & 2 deletions doc/source/timeseries.rst
@@ -1487,7 +1487,7 @@ If ``Period`` freq is daily or higher (``D``, ``H``, ``T``, ``S``, ``L``, ``U``,
p + timedelta(minutes=120)
p + np.timedelta64(7200, 's')
- .. code-block:: python
+ .. code-block:: ipython
In [1]: p + Minute(5)
Traceback
@@ -1501,7 +1501,7 @@ If ``Period`` has other freqs, only the same ``offsets`` can be added. Otherwise
p = Period('2014-07', freq='M')
p + MonthEnd(3)
- .. code-block:: python
+ .. code-block:: ipython
In [1]: p + MonthBegin(3)
Traceback
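The valid and invalid additions from this hunk can be sketched as follows; in recent pandas the mismatched offset raises ``IncompatibleFrequency`` (a ``ValueError`` subclass), so the sketch only asserts that *some* exception is raised:

```python
import pandas as pd

p = pd.Period('2014-07', freq='M')

# A matching offset is fine:
print(p + pd.offsets.MonthEnd(3))   # Period('2014-10', 'M')

# A mismatched offset raises (IncompatibleFrequency in recent pandas):
try:
    p + pd.offsets.MonthBegin(3)
except Exception as exc:
    print(type(exc).__name__)
```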
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.10.0.txt
@@ -70,7 +70,7 @@ frequencies are unaffected. The prior defaults were causing a great deal of
confusion for users, especially resampling data to daily frequency (which
labeled the aggregated group with the end of the interval: the next day).

- .. code-block:: python
+ .. code-block:: ipython

In [1]: dates = pd.date_range('1/1/2000', '1/5/2000', freq='4h')

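The labeling change described above (daily bins labeled with the *start* of the interval rather than the next day) can be sketched with the same data in a recent pandas:

```python
import numpy as np
import pandas as pd

dates = pd.date_range('2000-01-01', '2000-01-05', freq='4h')
s = pd.Series(np.arange(len(dates)), index=dates)

# Daily resampling labels each aggregated group with the start of its day:
daily = s.resample('D').mean()
print(daily.index[0])   # 2000-01-01 00:00:00, not the next day
```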
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.12.0.txt
@@ -252,7 +252,7 @@ I/O Enhancements
- Iterator support via ``read_hdf`` that automatically opens and closes the
store when iteration is finished. This is only for *tables*

- .. code-block:: python
+ .. code-block:: ipython

In [25]: path = 'store_iterator.h5'

8 changes: 4 additions & 4 deletions doc/source/whatsnew/v0.13.0.txt
@@ -80,7 +80,7 @@ API changes

Integer division

- .. code-block:: python
+ .. code-block:: ipython

In [3]: arr = np.array([1, 2, 3, 4])

@@ -99,7 +99,7 @@ API changes

True Division

- .. code-block:: python
+ .. code-block:: ipython

In [7]: pd.Series(arr) / pd.Series(arr2) # no future import required
Out[7]:
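The point of this hunk — true division between integer Series yields floats without any ``__future__`` import — can be sketched as follows (``arr2``'s values are assumed for illustration, since the diff elides them):

```python
import numpy as np
import pandas as pd

arr = np.array([1, 2, 3, 4])
arr2 = np.array([5, 3, 2, 1])   # assumed values; the diff elides arr2

# True division between integer Series yields float64, no future import:
res = pd.Series(arr) / pd.Series(arr2)
print(res.dtype)                # float64
```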
@@ -304,7 +304,7 @@ Float64Index API Change
- Indexing on other index types is preserved (with positional fallback for ``[],ix``), with the exception that floating point slicing on non-``Float64Index`` indexes will now raise a ``TypeError``.

- .. code-block:: python
+ .. code-block:: ipython

In [1]: Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
@@ -314,7 +314,7 @@ Float64Index API Change

Using a scalar float indexer will be deprecated in a future version, but is allowed for now.

- .. code-block:: python
+ .. code-block:: ipython

In [3]: Series(range(5))[3.0]
Out[3]: 3
4 changes: 2 additions & 2 deletions doc/source/whatsnew/v0.14.0.txt
@@ -170,7 +170,7 @@ API changes
:ref:`Computing rolling pairwise covariances and correlations
<stats.moments.corr_pairwise>` in the docs.

- .. code-block:: python
+ .. code-block:: ipython

In [1]: df = DataFrame(np.random.randn(10,4),columns=list('ABCD'))

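In current pandas the rolling pairwise computation this hunk touches lives on the ``Rolling`` object; calling ``.cov()`` on a rolling ``DataFrame`` produces the pairwise result as a ``MultiIndex``-ed frame. A sketch under that assumption:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD'))

# Pairwise rolling covariance: one (timestamp, column) row per pair,
# so the result carries a MultiIndex.
cov = df.rolling(window=5).cov()
print(type(cov.index).__name__)
```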
@@ -661,7 +661,7 @@ Deprecations
- Indexers will warn ``FutureWarning`` when used with a scalar indexer and
a non-floating point Index (:issue:`4892`, :issue:`6960`)

- .. code-block:: python
+ .. code-block:: ipython

# non-floating point indexes can only be indexed by integers / labels
In [1]: Series(1,np.arange(5))[3.0]
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.14.1.txt
@@ -48,7 +48,7 @@ API changes
offsets (BusinessMonthBegin, MonthEnd, BusinessMonthEnd, CustomBusinessMonthEnd,
BusinessYearBegin, LastWeekOfMonth, FY5253Quarter, LastWeekOfMonth, Easter):

- .. code-block:: python
+ .. code-block:: ipython

In [6]: from pandas.tseries import offsets
