Commit de847b0 (1 parent: a41ae95)

Replace code-block with ipython

Code within code-block directives is displayed as is, which does not allow line numbers to be set properly. Changing to ipython directives with :verbatim: generates correct line numbers while not executing the code.

Signed-off-by: Fabian Haase <[email protected]>

9 files changed (+77 -61)

doc/source/10min.rst (-1)

@@ -84,7 +84,6 @@ will be completed:
    :verbatim:
 
    In [0]: df2.<TAB>
-   Out[0]:
    df2.A df2.bool
    df2.abs df2.boxplot
    df2.add df2.C

doc/source/basics.rst (+6 -2)

@@ -118,7 +118,8 @@ These are both enabled to be used by default, you can control this by setting th
 
 .. versionadded:: 0.20.0
 
-.. code-block:: python
+.. ipython:: python
+   :verbatim:
 
    pd.set_option('compute.use_bottleneck', False)
    pd.set_option('compute.use_numexpr', False)
@@ -389,12 +390,15 @@ objects of the same length:
 Trying to compare ``Index`` or ``Series`` objects of different lengths will
 raise a ValueError:
 
-.. code-block:: ipython
+.. ipython:: ipython
+   :verbatim:
 
    In [55]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])
+   ---------------------------------------------------------------------------
    ValueError: Series lengths must match to compare
 
    In [56]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo'])
+   ---------------------------------------------------------------------------
    ValueError: Series lengths must match to compare
 
 Note that this is different from the NumPy behavior where a comparison can
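As a quick check on the behavior this hunk documents, the length mismatch can be reproduced with a short sketch (the exact error message varies across pandas versions, so only the exception type is relied on here):

```python
import pandas as pd

s1 = pd.Series(['foo', 'bar', 'baz'])
s2 = pd.Series(['foo', 'bar'])

# Comparing Series of different lengths is refused with a ValueError
try:
    s1 == s2
    raised = False
except ValueError:
    raised = True
```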

doc/source/gotchas.rst (+1 -1)

@@ -312,7 +312,7 @@ Occasionally you may have to deal with data that were created on a machine with
 a different byte order than the one on which you are running Python. A common
 symptom of this issue is an error like:
 
-.. code-block:: py3tb
+.. code-block:: pytb
 
    Traceback
    ...
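The usual remedy for this byte-order gotcha is to convert the array to the native byte order before constructing a pandas object. A minimal sketch of one such conversion (the array contents are illustrative):

```python
import numpy as np
import pandas as pd

# A big-endian array, e.g. loaded from a file written on another machine
big_endian = np.array([1, 2, 3], dtype='>i8')

# Convert to the native byte order before handing it to pandas
native = big_endian.astype(big_endian.dtype.newbyteorder('='))
s = pd.Series(native)
```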

doc/source/groupby.rst (+31 -33)

@@ -79,14 +79,15 @@ pandas objects can be split on any of their axes. The abstract definition of
 grouping is to provide a mapping of labels to group names. To create a GroupBy
 object (more on what the GroupBy object is later), you may do the following:
 
-.. code-block:: python
+.. ipython:: python
    :flake8-group: None
    :flake8-add-ignore: F821
+   :verbatim:
 
    # default is axis=0
-   >>> grouped = obj.groupby(key)
-   >>> grouped = obj.groupby(key, axis=1)
-   >>> grouped = obj.groupby([key1, key2])
+   grouped = obj.groupby(key)
+   grouped = obj.groupby(key, axis=1)
+   grouped = obj.groupby([key1, key2])
 
 The mapping can be specified many different ways:
 
@@ -141,16 +142,15 @@ but the specified columns
 These will split the DataFrame on its index (rows). We could also split by the
 columns:
 
-.. ipython::
+.. ipython:: python
 
-   In [4]: def get_letter_type(letter):
-      ...:     if letter.lower() in 'aeiou':
-      ...:         return 'vowel'
-      ...:     else:
-      ...:         return 'consonant'
-      ...:
+   def get_letter_type(letter):
+       if letter.lower() in 'aeiou':
+           return 'vowel'
+       else:
+           return 'consonant'
 
-   In [5]: grouped = df.groupby(get_letter_type, axis=1)
+   grouped = df.groupby(get_letter_type, axis=1)
 
 pandas :class:`~pandas.Index` objects support duplicate values. If a
 non-unique index is used as the group key in a groupby operation, all values
@@ -251,11 +251,11 @@ the length of the ``groups`` dict, so it is largely just a convenience:
    gb = df.groupby('gender')
 
 
-.. ipython::
+.. ipython:: ipython
    :flake8-group: None
    :flake8-set-ignore: E999, E225
+   :verbatim:
 
-   @verbatim
    In [1]: gb.<TAB>
    gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
    gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
@@ -409,23 +409,21 @@ Iterating through groups
 With the GroupBy object in hand, iterating through the grouped data is very
 natural and functions similarly to :py:func:`itertools.groupby`:
 
-.. ipython::
+.. ipython:: python
 
-   In [4]: grouped = df.groupby('A')
+   grouped = df.groupby('A')
 
-   In [5]: for name, group in grouped:
-      ...:     print(name)
-      ...:     print(group)
-      ...:
+   for name, group in grouped:
+       print(name)
+       print(group)
 
 In the case of grouping by multiple keys, the group name will be a tuple:
 
-.. ipython::
+.. ipython:: python
 
-   In [5]: for name, group in df.groupby(['A', 'B']):
-      ...:     print(name)
-      ...:     print(group)
-      ...:
+   for name, group in df.groupby(['A', 'B']):
+       print(name)
+       print(group)
 
 See :ref:`timeseries.iterating-label`.
 
@@ -924,16 +922,15 @@ for both ``aggregate`` and ``transform`` in many standard use cases. However,
 
 The dimension of the returned result can also change:
 
-.. ipython::
+.. ipython:: python
 
-   In [8]: grouped = df.groupby('A')['C']
+   grouped = df.groupby('A')['C']
 
-   In [10]: def f(group):
-      ....:     return pd.DataFrame({'original': group,
-      ....:                          'demeaned': group - group.mean()})
-      ....:
+   def f(group):
+       return pd.DataFrame({'original': group,
+                            'demeaned': group - group.mean()})
 
-   In [11]: grouped.apply(f)
+   grouped.apply(f)
 
 ``apply`` on a Series can operate on a returned value from the applied function,
 that is itself a series, and possibly upcast the result to a DataFrame:
@@ -1316,9 +1313,10 @@ Now, to find prices per store/product, we can simply do:
 Piping can also be expressive when you want to deliver a grouped object to some
 arbitrary function, for example:
 
-.. code-block:: python
+.. ipython:: python
    :flake8-group: None
    :flake8-add-ignore: F821
+   :verbatim:
 
    df.groupby(['Store', 'Product']).pipe(report_func)
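The iteration hunks above rest on one detail worth verifying: a single grouping key yields scalar group names, while multiple keys yield tuples. A quick sketch with illustrative data:

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'foo', 'bar'],
                   'B': ['one', 'two', 'one'],
                   'C': [1, 2, 3]})

# Grouping by a single key: each group name is a scalar label
single_names = [name for name, group in df.groupby('A')]

# Grouping by multiple keys: each group name is a tuple
multi_names = [name for name, group in df.groupby(['A', 'B'])]
```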

doc/source/reshaping.rst (+13 -13)

@@ -20,22 +20,21 @@ Reshaping by pivoting DataFrame objects
 
 .. image:: _static/reshaping_pivot.png
 
-.. ipython::
+.. ipython:: python
    :suppress:
 
-   In [1]: import pandas.util.testing as tm
-      ...: tm.N = 3
+   import pandas.util.testing as tm
+   tm.N = 3
 
-   In [2]: def unpivot(frame):
-      ...:     N, K = frame.shape
-      ...:     data = {'value': frame.values.ravel('F'),
-      ...:             'variable': np.asarray(frame.columns).repeat(N),
-      ...:             'date': np.tile(np.asarray(frame.index), K)}
-      ...:     columns = ['date', 'variable', 'value']
-      ...:     return pd.DataFrame(data, columns=columns)
-      ...:
+   def unpivot(frame):
+       N, K = frame.shape
+       data = {'value': frame.values.ravel('F'),
+               'variable': np.asarray(frame.columns).repeat(N),
+               'date': np.tile(np.asarray(frame.index), K)}
+       columns = ['date', 'variable', 'value']
+       return pd.DataFrame(data, columns=columns)
 
-   In [3]: df = unpivot(tm.makeTimeDataFrame())
+   df = unpivot(tm.makeTimeDataFrame())
 
 Data is often stored in so-called "stacked" or "record" format:
 
@@ -705,7 +704,8 @@ handling of NaN:
 because of an ordering bug. See also
 `here <https://github.com/numpy/numpy/issues/641>`__.
 
-.. code-block:: ipython
+.. ipython:: ipython
+   :verbatim:
 
    In [2]: pd.factorize(x, sort=True)
    Out[2]:
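The ``factorize`` call in the second hunk can be checked with a small sketch (the input list here is illustrative): with ``sort=True`` the uniques come back sorted and the codes are remapped to match.

```python
import pandas as pd

# With sort=True, uniques are sorted and codes index into the sorted uniques
codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'], sort=True)
```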

doc/source/sparse.rst (+4 -1)

@@ -157,9 +157,11 @@ You can change the dtype using ``.astype()``, the result is also sparse. Note th
 
 It raises if any value cannot be coerced to specified dtype.
 
-.. code-block:: ipython
+.. ipython:: ipython
+   :verbatim:
 
    In [1]: ss = pd.Series([1, np.nan, np.nan]).to_sparse()
+   Out[1]:
    0 1.0
    1 NaN
    2 NaN
@@ -169,6 +171,7 @@ It raises if any value cannot be coerced to specified dtype.
    Block lengths: array([1], dtype=int32)
 
    In [2]: ss.astype(np.int64)
+   ----------------------------------------------------------------------------
    ValueError: unable to coerce current fill_value nan to int64 dtype
 
 .. _sparse.calculation:

doc/source/text.rst (+6 -3)

@@ -201,7 +201,8 @@ regular expression object will raise a ``ValueError``.
    :verbatim:
 
    In [1]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
-   Out[1]: ValueError: case and flags cannot be set when pat is a compiled regex
+   ---------------------------------------------------------------------------
+   ValueError: case and flags cannot be set when pat is a compiled regex
 
 .. _text.concatenate:
 
@@ -442,9 +443,11 @@ returns a ``DataFrame`` if ``expand=True``.
 
 It raises ``ValueError`` if ``expand=False``.
 
-.. code-block:: python
+.. ipython:: ipython
+   :verbatim:
 
-   >>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
+   In [1]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
+   ---------------------------------------------------------------------------
    ValueError: only one regex group is supported with Index
 
 The table below summarizes the behavior of ``extract(expand=False)``
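The ``extract`` error in the second hunk is easy to reproduce with a throwaway Index (the labels here are illustrative): one capture group with ``expand=False`` yields an Index, while two groups raise a ``ValueError``.

```python
import pandas as pd

idx = pd.Index(['A1', 'B2', 'C3'])

# One capture group with expand=False returns an Index of the matches
letters = idx.str.extract(r'([A-Z])[0-9]', expand=False)

# More than one capture group with expand=False raises a ValueError
try:
    idx.str.extract(r'(?P<letter>[A-Z])([0-9]+)', expand=False)
    raised = False
except ValueError:
    raised = True
```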

doc/source/timeseries.rst (+15 -5)

@@ -291,9 +291,11 @@ Invalid Data
 
 The default behavior, ``errors='raise'``, is to raise when unparseable:
 
-.. code-block:: ipython
+.. ipython:: ipython
+   :verbatim:
 
    In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
+   ---------------------------------------------------------------------------
    ValueError: Unknown string format
 
 Pass ``errors='ignore'`` to return the original input when unparseable:
@@ -1853,9 +1855,11 @@ If ``Period`` freq is daily or higher (``D``, ``H``, ``T``, ``S``, ``L``, ``U``,
    p + datetime.timedelta(minutes=120)
    p + np.timedelta64(7200, 's')
 
-.. code-block:: ipython
+.. ipython:: ipython
+   :verbatim:
 
    In [1]: p + pd.offsets.Minute(5)
+   ----------------------------------------------------------------------------
    Traceback
    ...
    ValueError: Input has different freq from Period(freq=H)
@@ -1867,9 +1871,11 @@ If ``Period`` has other frequencies, only the same ``offsets`` can be added. Oth
    p = pd.Period('2014-07', freq='M')
    p + pd.offsets.MonthEnd(3)
 
-.. code-block:: ipython
+.. ipython:: ipython
+   :verbatim:
 
    In [1]: p + pd.offsets.MonthBegin(3)
+   ----------------------------------------------------------------------------
    Traceback
    ...
    ValueError: Input has different freq from Period(freq=M)
@@ -2333,9 +2339,11 @@ contains ambiguous times and the bottom will infer the right offset.
 
 This will fail as there are ambiguous times
 
-.. code-block:: ipython
+.. ipython:: ipython
+   :verbatim:
 
    In [2]: rng_hourly.tz_localize('US/Eastern')
+   ----------------------------------------------------------------------------
    AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try using the 'ambiguous' argument
 
 Infer the ambiguous times
@@ -2388,9 +2396,11 @@ can be controlled by the ``nonexistent`` argument. The following options are ava
 
 Localization of nonexistent times will raise an error by default.
 
-.. code-block:: ipython
+.. ipython:: ipython
+   :verbatim:
 
    In [2]: dti.tz_localize('Europe/Warsaw')
+   ----------------------------------------------------------------------------
    NonExistentTimeError: 2015-03-29 02:30:00
 
 Transform nonexistent times to ``NaT`` or the closest real time forward in time.
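The ``to_datetime`` hunk at the top of this file can be sanity-checked as follows. Only the exception type is relied on, since the message wording varies across pandas versions; ``errors='coerce'`` is used for the non-raising variant (``errors='ignore'`` is deprecated in recent pandas):

```python
import pandas as pd

# errors='raise' (the default) raises on unparseable input
try:
    pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
    raised = False
except ValueError:
    raised = True

# errors='coerce' turns unparseable values into NaT instead
coerced = pd.to_datetime(['2009/07/31', 'asd'], errors='coerce')
```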

doc/source/visualization.rst (+1 -2)

@@ -123,15 +123,14 @@ For example, a bar plot can be created the following way:
 
 You can also create these other plots using the methods ``DataFrame.plot.<kind>`` instead of providing the ``kind`` keyword argument. This makes it easier to discover plot methods and the specific arguments they use:
 
-.. ipython::
+.. ipython:: ipython
    :flake8-group: None
    :flake8-add-ignore: E999, E225, F821, # E999 breaks linting for complete block
    :verbatim:
 
    In [0]: df = pd.DataFrame()
 
    In [1]: df.plot.<TAB>
-   Out[1]:
    df.plot.area df.plot.barh df.plot.density df.plot.hist df.plot.line df.plot.scatter
    df.plot.bar df.plot.box df.plot.hexbin df.plot.kde df.plot.pie
