
Commit ab85d7a

DOC: Suppress setups less in user guide (#54086)

* DOC: Suppress setups less in user guide
* Fix error
* Formatting
1 parent 569ca46 commit ab85d7a
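For context, the pattern removed across these files is the IPython Sphinx directive option ``:suppress:``, which runs a setup block at doc-build time without rendering it; the commit folds that hidden setup into the visible block that follows. A minimal before/after sketch (illustrative only; the DataFrame is a placeholder, not taken from any of the files below, and the guide's usual ``import pandas as pd`` / ``import numpy as np`` preamble is assumed):

Before:

.. ipython:: python
   :suppress:

   df = pd.DataFrame(np.random.randn(4, 2))  # hidden from the rendered page

.. ipython:: python

   df.describe()

After:

.. ipython:: python

   df = pd.DataFrame(np.random.randn(4, 2))  # setup now shown alongside the example
   df.describe()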

File tree

7 files changed: 2 additions & 102 deletions


doc/source/user_guide/advanced.rst

Lines changed: 0 additions & 5 deletions
@@ -470,11 +470,6 @@ Compare the above with the result using ``drop_level=True`` (the default value).

    df.xs("one", level="second", axis=1, drop_level=True)

-.. ipython:: python
-   :suppress:
-
-   df = df.T
-
 .. _advanced.advanced_reindex:

 Advanced reindexing and alignment

doc/source/user_guide/basics.rst

Lines changed: 1 addition & 23 deletions
@@ -220,11 +220,6 @@ either match on the *index* or *columns* via the **axis** keyword:
    df.sub(column, axis="index")
    df.sub(column, axis=0)

-.. ipython:: python
-   :suppress:
-
-   df_orig = df
-
 Furthermore you can align a level of a MultiIndexed DataFrame with a Series.

 .. ipython:: python
@@ -272,13 +267,9 @@ case the result will be NaN (you can later replace NaN with some other value
 using ``fillna`` if you wish).

 .. ipython:: python
-   :suppress:

    df2 = df.copy()
    df2["three"]["a"] = 1.0
-
-.. ipython:: python
-
    df
    df2
    df + df2
@@ -936,17 +927,13 @@ Another useful feature is the ability to pass Series methods to carry out some
 Series operation on each column or row:

 .. ipython:: python
-   :suppress:

    tsdf = pd.DataFrame(
        np.random.randn(10, 3),
        columns=["A", "B", "C"],
        index=pd.date_range("1/1/2000", periods=10),
    )
    tsdf.iloc[3:7] = np.nan
-
-.. ipython:: python
-
    tsdf
    tsdf.apply(pd.Series.interpolate)

@@ -1170,13 +1157,9 @@ another array or value), the methods :meth:`~DataFrame.map` on DataFrame
 and analogously :meth:`~Series.map` on Series accept any Python function taking
 a single value and returning a single value. For example:

-.. ipython:: python
-   :suppress:
-
-   df4 = df_orig.copy()
-
 .. ipython:: python

+   df4 = df.copy()
    df4

    def f(x):
@@ -1280,14 +1263,9 @@ is a common enough operation that the :meth:`~DataFrame.reindex_like` method is
 available to make this simpler:

 .. ipython:: python
-   :suppress:

    df2 = df.reindex(["a", "b", "c"], columns=["one", "two"])
    df3 = df2 - df2.mean()
-
-
-.. ipython:: python
-
    df2
    df3
    df.reindex_like(df2)

doc/source/user_guide/groupby.rst

Lines changed: 1 addition & 14 deletions
@@ -271,7 +271,6 @@ the length of the ``groups`` dict, so it is largely just a convenience:
 ``GroupBy`` will tab complete column names (and other attributes):

 .. ipython:: python
-   :suppress:

    n = 10
    weight = np.random.normal(166, 20, size=n)
@@ -281,9 +280,6 @@ the length of the ``groups`` dict, so it is largely just a convenience:
    df = pd.DataFrame(
        {"height": height, "weight": weight, "gender": gender}, index=time
    )
-
-.. ipython:: python
-
    df
    gb = df.groupby("gender")

@@ -334,19 +330,14 @@ number:
 Grouping with multiple levels is supported.

 .. ipython:: python
-   :suppress:

    arrays = [
        ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
        ["doo", "doo", "bee", "bee", "bop", "bop", "bop", "bop"],
        ["one", "two", "one", "two", "one", "two", "one", "two"],
    ]
-   tuples = list(zip(*arrays))
-   index = pd.MultiIndex.from_tuples(tuples, names=["first", "second", "third"])
+   index = pd.MultiIndex.from_arrays(arrays, names=["first", "second", "third"])
    s = pd.Series(np.random.randn(8), index=index)
-
-.. ipython:: python
-
    s
    s.groupby(level=["first", "second"]).sum()

@@ -963,17 +954,13 @@ match the shape of the input array.
 Another common data transform is to replace missing data with the group mean.

 .. ipython:: python
-   :suppress:

    cols = ["A", "B", "C"]
    values = np.random.randn(1000, 3)
    values[np.random.randint(0, 1000, 100), 0] = np.nan
    values[np.random.randint(0, 1000, 50), 1] = np.nan
    values[np.random.randint(0, 1000, 200), 2] = np.nan
    data_df = pd.DataFrame(values, columns=cols)
-
-.. ipython:: python
-
    data_df

    countries = np.array(["US", "UK", "GR", "JP"])
doc/source/user_guide/indexing.rst

Lines changed: 0 additions & 14 deletions
@@ -1029,14 +1029,10 @@ input data shape. ``where`` is used under the hood as the implementation.
 The code below is equivalent to ``df.where(df < 0)``.

 .. ipython:: python
-   :suppress:

    dates = pd.date_range('1/1/2000', periods=8)
    df = pd.DataFrame(np.random.randn(8, 4),
                      index=dates, columns=['A', 'B', 'C', 'D'])
-
-.. ipython:: python
-
    df[df < 0]

 In addition, ``where`` takes an optional ``other`` argument for replacement of
@@ -1431,7 +1427,6 @@ This plot was created using a ``DataFrame`` with 3 columns each containing
 floating point values generated using ``numpy.random.randn()``.

 .. ipython:: python
-   :suppress:

    df = pd.DataFrame(np.random.randn(8, 4),
                      index=dates, columns=['A', 'B', 'C', 'D'])
@@ -1694,15 +1689,11 @@ DataFrame has a :meth:`~DataFrame.set_index` method which takes a column name
 To create a new, re-indexed DataFrame:

 .. ipython:: python
-   :suppress:

    data = pd.DataFrame({'a': ['bar', 'bar', 'foo', 'foo'],
                         'b': ['one', 'two', 'one', 'two'],
                         'c': ['z', 'y', 'x', 'w'],
                         'd': [1., 2., 3, 4]})
-
-.. ipython:: python
-
    data
    indexed1 = data.set_index('c')
    indexed1
@@ -1812,11 +1803,6 @@ But it turns out that assigning to the product of chained indexing has
 inherently unpredictable results. To see this, think about how the Python
 interpreter executes this code:

-.. ipython:: python
-   :suppress:
-
-   value = None
-
 .. code-block:: python

    dfmi.loc[:, ('one', 'second')] = value
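The ``code-block`` retained above is only rendered, never executed. For readers skimming the diff, the chained-assignment statement it shows expands roughly as follows (a sketch of standard Python indexing semantics, not text from this commit):

.. code-block:: python

   dfmi.loc[:, ('one', 'second')] = value
   # is evaluated as a single __setitem__ call on the .loc indexer:
   dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)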

doc/source/user_guide/io.rst

Lines changed: 0 additions & 4 deletions
@@ -704,20 +704,16 @@ Comments
 Sometimes comments or meta data may be included in a file:

 .. ipython:: python
-   :suppress:

    data = (
        "ID,level,category\n"
        "Patient1,123000,x # really unpleasant\n"
        "Patient2,23000,y # wouldn't take his medicine\n"
        "Patient3,1234018,z # awesome"
    )
-
    with open("tmp.csv", "w") as fh:
        fh.write(data)

-.. ipython:: python
-
    print(open("tmp.csv").read())

 By default, the parser includes the comments in the output:
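The hunk ends on the sentence introducing the parser behaviour; as a rough illustrative sketch of what that section demonstrates (not taken from the diff, and assuming the standard ``comment`` keyword of ``read_csv``):

.. ipython:: python

   pd.read_csv("tmp.csv")               # by default the '#' comment text stays in the data
   pd.read_csv("tmp.csv", comment="#")  # comment="#" discards text after the marker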

doc/source/user_guide/missing_data.rst

Lines changed: 0 additions & 22 deletions
@@ -142,14 +142,10 @@ Missing values propagate naturally through arithmetic operations between pandas
 objects.

 .. ipython:: python
-   :suppress:

    df = df2.loc[:, ["one", "two", "three"]]
    a = df2.loc[df2.index[:5], ["one", "two"]].ffill()
    b = df2.loc[df2.index[:5], ["one", "two", "three"]]
-
-.. ipython:: python
-
    a
    b
    a + b
@@ -247,12 +243,8 @@ If we only want consecutive gaps filled up to a certain number of data points,
 we can use the ``limit`` keyword:

 .. ipython:: python
-   :suppress:

    df.iloc[2:4, :] = np.nan
-
-.. ipython:: python
-
    df
    df.ffill(limit=1)

@@ -308,13 +300,9 @@ You may wish to simply exclude labels from a data set which refer to missing
 data. To do this, use :meth:`~DataFrame.dropna`:

 .. ipython:: python
-   :suppress:

    df["two"] = df["two"].fillna(0)
    df["three"] = df["three"].fillna(0)
-
-.. ipython:: python
-
    df
    df.dropna(axis=0)
    df.dropna(axis=1)
@@ -333,7 +321,6 @@ Both Series and DataFrame objects have :meth:`~DataFrame.interpolate`
 that, by default, performs linear interpolation at missing data points.

 .. ipython:: python
-   :suppress:

    np.random.seed(123456)
    idx = pd.date_range("1/1/2000", periods=100, freq="BM")
@@ -343,8 +330,6 @@ that, by default, performs linear interpolation at missing data points.
    ts[60:80] = np.nan
    ts = ts.cumsum()

-.. ipython:: python
-
    ts
    ts.count()
    @savefig series_before_interpolate.png
@@ -361,26 +346,19 @@ that, by default, performs linear interpolation at missing data points.
 Index aware interpolation is available via the ``method`` keyword:

 .. ipython:: python
-   :suppress:

    ts2 = ts.iloc[[0, 1, 30, 60, 99]]
-
-.. ipython:: python
-
    ts2
    ts2.interpolate()
    ts2.interpolate(method="time")

 For a floating-point index, use ``method='values'``:

 .. ipython:: python
-   :suppress:

    idx = [0.0, 1.0, 10.0]
    ser = pd.Series([0.0, np.nan, 10.0], idx)

-.. ipython:: python
-
    ser
    ser.interpolate()
    ser.interpolate(method="values")

doc/source/user_guide/visualization.rst

Lines changed: 0 additions & 20 deletions
@@ -42,12 +42,9 @@ The ``plot`` method on Series and DataFrame is just a simple wrapper around
 :meth:`plt.plot() <matplotlib.axes.Axes.plot>`:

 .. ipython:: python
-   :suppress:

    np.random.seed(123456)

-.. ipython:: python
-
    ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
    ts = ts.cumsum()

@@ -1468,7 +1465,6 @@ otherwise you will see a warning.
 Another option is passing an ``ax`` argument to :meth:`Series.plot` to plot on a particular axis:

 .. ipython:: python
-   :suppress:

    np.random.seed(123456)
    ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
@@ -1583,12 +1579,8 @@ Plotting tables
 Plotting with matplotlib table is now supported in :meth:`DataFrame.plot` and :meth:`Series.plot` with a ``table`` keyword. The ``table`` keyword can accept ``bool``, :class:`DataFrame` or :class:`Series`. The simple way to draw a table is to specify ``table=True``. Data will be transposed to meet matplotlib's default layout.

 .. ipython:: python
-   :suppress:

    np.random.seed(123456)
-
-.. ipython:: python
-
    fig, ax = plt.subplots(1, 1, figsize=(7, 6.5))
    df = pd.DataFrame(np.random.rand(5, 3), columns=["a", "b", "c"])
    ax.xaxis.tick_top() # Display x-axis ticks on top.
@@ -1663,12 +1655,8 @@ colormaps will produce lines that are not easily visible.
 To use the cubehelix colormap, we can pass ``colormap='cubehelix'``.

 .. ipython:: python
-   :suppress:

    np.random.seed(123456)
-
-.. ipython:: python
-
    df = pd.DataFrame(np.random.randn(1000, 10), index=ts.index)
    df = df.cumsum()

@@ -1701,12 +1689,8 @@ Alternatively, we can pass the colormap itself:
 Colormaps can also be used other plot types, like bar charts:

 .. ipython:: python
-   :suppress:

    np.random.seed(123456)
-
-.. ipython:: python
-
    dd = pd.DataFrame(np.random.randn(10, 10)).map(abs)
    dd = dd.cumsum()

@@ -1764,12 +1748,8 @@ level of refinement you would get when plotting via pandas, it can be faster
 when plotting a large number of points.

 .. ipython:: python
-   :suppress:

    np.random.seed(123456)
-
-.. ipython:: python
-
    price = pd.Series(
        np.random.randn(150).cumsum(),
        index=pd.date_range("2000-1-1", periods=150, freq="B"),
