Commit fc214e3

code block alignments
1 parent f6314b0 commit fc214e3

File tree

3 files changed: +129 -128 lines changed

doc/source/whatsnew/v0.19.0.rst: +96 -95
@@ -76,51 +76,51 @@ This also illustrates using the ``by`` parameter to group data before merging.

.. ipython:: python

   trades = pd.DataFrame({
       'time': pd.to_datetime(['20160525 13:30:00.023',
                               '20160525 13:30:00.038',
                               '20160525 13:30:00.048',
                               '20160525 13:30:00.048',
                               '20160525 13:30:00.048']),
       'ticker': ['MSFT', 'MSFT',
                  'GOOG', 'GOOG', 'AAPL'],
       'price': [51.95, 51.95,
                 720.77, 720.92, 98.00],
       'quantity': [75, 155,
                    100, 100, 100]},
       columns=['time', 'ticker', 'price', 'quantity'])

   quotes = pd.DataFrame({
       'time': pd.to_datetime(['20160525 13:30:00.023',
                               '20160525 13:30:00.023',
                               '20160525 13:30:00.030',
                               '20160525 13:30:00.041',
                               '20160525 13:30:00.048',
                               '20160525 13:30:00.049',
                               '20160525 13:30:00.072',
                               '20160525 13:30:00.075']),
       'ticker': ['GOOG', 'MSFT', 'MSFT', 'MSFT',
                  'GOOG', 'AAPL', 'GOOG', 'MSFT'],
       'bid': [720.50, 51.95, 51.97, 51.99,
               720.50, 97.99, 720.50, 52.01],
       'ask': [720.93, 51.96, 51.98, 52.00,
               720.93, 98.01, 720.88, 52.03]},
       columns=['time', 'ticker', 'bid', 'ask'])

.. ipython:: python

   trades
   quotes

An asof merge joins on the ``on`` field, typically an ordered datetimelike field; in
this case we also use a grouper in the ``by`` field. This is like a left-outer join,
except that forward filling happens automatically, taking the most recent non-NaN value.

.. ipython:: python

   pd.merge_asof(trades, quotes,
                 on='time',
                 by='ticker')

This returns a merged DataFrame with the entries in the same order as the left
DataFrame passed (``trades`` in this case), with the fields of ``quotes`` merged in.
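
``merge_asof`` also accepts a ``tolerance`` parameter to bound how far back a match
may reach. A minimal sketch, reusing ``trades`` and ``quotes`` from above; the 2ms
cutoff is an illustrative choice, not from the original:

.. ipython:: python

   pd.merge_asof(trades, quotes,
                 on='time',
                 by='ticker',
                 tolerance=pd.Timedelta('2ms'))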
@@ -135,17 +135,17 @@ See the full documentation :ref:`here <stats.moments.ts>`.

.. ipython:: python

   dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
                      index=pd.date_range('20130101 09:00:00',
                                          periods=5, freq='s'))
   dft

This is a regular frequency index. Using an integer window parameter rolls the
window along a fixed number of observations.

.. ipython:: python

   dft.rolling(2).sum()
   dft.rolling(2, min_periods=1).sum()

Specifying an offset allows a more intuitive specification of the rolling
frequency, as sketched below.

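The illustrating code block falls outside this hunk; a minimal sketch of
offset-based rolling, assuming the ``dft`` frame defined above:

.. ipython:: python

   # a time-based window: all observations within a 2-second span
   dft.rolling('2s').sum()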
@@ -271,10 +271,10 @@ Categorical Concatenation

.. ipython:: python

   from pandas.api.types import union_categoricals
   a = pd.Categorical(["b", "c"])
   b = pd.Categorical(["a", "b"])
   union_categoricals([a, b])

- ``concat`` and ``append`` can now concatenate ``category`` dtypes with different ``categories`` as ``object`` dtype (:issue:`13524`)

@@ -287,14 +287,14 @@ Categorical Concatenation
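
``s1`` and ``s2`` are defined in unchanged context above this hunk, so the diff does
not show them. For the excerpt to read standalone, a plausible reconstruction
(hypothetical values; the actual definitions live outside the diff):

.. ipython:: python

   # hypothetical reconstruction of the series used below
   s1 = pd.Series(['a', 'b'], dtype='category')
   s2 = pd.Series(['b', 'c'], dtype='category')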

.. code-block:: ipython

   In [1]: pd.concat([s1, s2])
   ValueError: incompatible categories in categorical concat

**New behavior**:

.. ipython:: python

   pd.concat([s1, s2])
.. _whatsnew_0190.enhancements.semi_month_offsets:

@@ -307,31 +307,31 @@ These provide date offsets anchored (by default) to the 15th and end of month, a

.. ipython:: python

   from pandas.tseries.offsets import SemiMonthEnd, SemiMonthBegin

**SemiMonthEnd**:

.. ipython:: python

   pd.Timestamp('2016-01-01') + SemiMonthEnd()

   pd.date_range('2015-01-01', freq='SM', periods=4)

**SemiMonthBegin**:

.. ipython:: python

   pd.Timestamp('2016-01-01') + SemiMonthBegin()

   pd.date_range('2015-01-01', freq='SMS', periods=4)

Using the anchoring suffix, you can also specify the day of month to use instead of the 15th.

.. ipython:: python

   pd.date_range('2015-01-01', freq='SMS-16', periods=4)

   pd.date_range('2015-01-01', freq='SM-14', periods=4)
.. _whatsnew_0190.enhancements.index:

@@ -360,11 +360,11 @@ For ``MultiIndex``, values are dropped if any level is missing by default. Speci

.. ipython:: python

   midx = pd.MultiIndex.from_arrays([[1, 2, np.nan, 4],
                                     [1, 2, np.nan, np.nan]])
   midx
   midx.dropna()
   midx.dropna(how='all')

``Index`` now supports ``.str.extractall()``, which returns a ``DataFrame``; see the :ref:`docs here <text.extractall>` (:issue:`10008`, :issue:`13156`). A sketch follows below.

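The doc's own example sits outside this hunk; a minimal sketch against a
hypothetical index (strings and pattern are illustrative only):

.. ipython:: python

   idx = pd.Index(["a1a2", "b1", "c1"])
   # one row per match; the named group becomes the column name
   idx.str.extractall(r"[ab](?P<digit>\d)")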
@@ -464,23 +464,24 @@ Other enhancements

.. ipython:: python

   pd.Timestamp(2012, 1, 1)

   pd.Timestamp(year=2012, month=1, day=1, hour=8, minute=30)

- The ``.resample()`` function now accepts an ``on=`` or ``level=`` parameter for resampling on a datetimelike column or ``MultiIndex`` level (:issue:`13500`)

.. ipython:: python

   df = pd.DataFrame({'date': pd.date_range('2015-01-01', freq='W', periods=5),
                      'a': np.arange(5)},
                     index=pd.MultiIndex.from_arrays([[1, 2, 3, 4, 5],
                                                      pd.date_range('2015-01-01',
                                                                    freq='W',
                                                                    periods=5)
                                                      ], names=['v', 'd']))
   df
   df.resample('M', on='date').sum()
   df.resample('M', level='d').sum()

- The ``.get_credentials()`` method of ``GbqConnector`` can now first try to fetch `the application default credentials <https://developers.google.com/identity/protocols/application-default-credentials>`__. See the docs for more details (:issue:`13577`).
- The ``.tz_localize()`` method of ``DatetimeIndex`` and ``Timestamp`` has gained the ``errors`` keyword, so you can potentially coerce nonexistent timestamps to ``NaT`` (as sketched below). The default behavior remains to raise a ``NonExistentTimeError`` (:issue:`13057`)
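
A minimal sketch of the ``errors`` keyword; the DST gap chosen here
(Europe/Warsaw, 2015-03-29) is illustrative, not from the original:

.. ipython:: python

   # 02:30 does not exist on this date in Europe/Warsaw (clocks jump 02:00 -> 03:00),
   # so with errors='coerce' the nonexistent timestamp becomes NaT
   pd.Timestamp('2015-03-29 02:30:00').tz_localize('Europe/Warsaw',
                                                   errors='coerce')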
@@ -975,23 +976,23 @@ Previous behavior:

.. code-block:: ipython

   In [1]: pd.Index(['a', 'b']) + pd.Index(['a', 'c'])
   FutureWarning: using '+' to provide set union with Indexes is deprecated, use '|' or .union()
   Out[1]: Index(['a', 'b', 'c'], dtype='object')

**New behavior**: the same operation will now perform element-wise addition:

.. ipython:: python

   pd.Index(['a', 'b']) + pd.Index(['a', 'c'])

Note that numeric Index objects already performed element-wise operations.
For example, the behavior of adding two integer Indexes is unchanged.
The base ``Index`` is now made consistent with this behavior.

.. ipython:: python

   pd.Index([1, 2, 3]) + pd.Index([2, 3, 4])

Further, because of this change, it is now possible to subtract two
DatetimeIndex objects, resulting in a TimedeltaIndex:
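
The example block sits outside this hunk; a minimal sketch of the new behavior
(the dates are illustrative):

.. ipython:: python

   # element-wise subtraction now yields a TimedeltaIndex
   (pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
    - pd.DatetimeIndex(['2016-01-01', '2016-01-01']))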
@@ -1056,23 +1057,23 @@ Previously, most ``Index`` classes returned ``np.ndarray``, and ``DatetimeIndex`

.. code-block:: ipython

   In [1]: pd.Index([1, 2, 3]).unique()
   Out[1]: array([1, 2, 3])

   In [2]: pd.DatetimeIndex(['2011-01-01', '2011-01-02',
      ...:                   '2011-01-03'], tz='Asia/Tokyo').unique()
   Out[2]:
   DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
                  '2011-01-03 00:00:00+09:00'],
                 dtype='datetime64[ns, Asia/Tokyo]', freq=None)

**New behavior**:

.. ipython:: python

   pd.Index([1, 2, 3]).unique()
   pd.DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'],
                    tz='Asia/Tokyo').unique()
.. _whatsnew_0190.api.multiindex:

@@ -1236,27 +1237,27 @@ Operators now preserve dtypes

.. ipython:: python

   s = pd.SparseSeries([0, 2, 0, 1], fill_value=0, dtype=np.int64)
   s.dtype

   s + 1

- Sparse data structures now support ``astype`` to convert the internal ``dtype`` (:issue:`13900`)

.. ipython:: python

   s = pd.SparseSeries([1., 0., 2., 0.], fill_value=0)
   s
   s.astype(np.int64)

``astype`` fails if the data contains values which cannot be converted to the specified ``dtype``.
Note that this limitation also applies to ``fill_value``, whose default is ``np.nan``.

.. code-block:: ipython

   In [7]: pd.SparseSeries([1., np.nan, 2., np.nan], fill_value=np.nan).astype(np.int64)
   Out[7]:
   ValueError: unable to coerce current fill_value nan to int64 dtype

Other sparse fixes
""""""""""""""""""
