Fix flake8 issues on v19, v20 and v21.0.rst #24236

Merged (10 commits, Dec 13, 2018)
133 changes: 66 additions & 67 deletions doc/source/whatsnew/v0.19.0.rst
@@ -5,12 +5,6 @@ v0.19.0 (October 2, 2016)

{{ header }}

- .. ipython:: python
-    :suppress:
-
-    from pandas import *  # noqa F401, F403
-

This is a major release from 0.18.1 and includes a number of API changes, several new features,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
users upgrade to this version.
@@ -105,9 +99,8 @@ This also illustrates using the ``by`` parameter to group data before merging.
'20160525 13:30:00.049',
'20160525 13:30:00.072',
'20160525 13:30:00.075']),
-               'ticker': ['GOOG', 'MSFT', 'MSFT',
-                          'MSFT', 'GOOG', 'AAPL', 'GOOG',
-                          'MSFT'],
+               'ticker': ['GOOG', 'MSFT', 'MSFT', 'MSFT',
+                          'GOOG', 'AAPL', 'GOOG', 'MSFT'],
'bid': [720.50, 51.95, 51.97, 51.99,
720.50, 97.99, 720.50, 52.01],
'ask': [720.93, 51.96, 51.98, 52.00,
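The ``merge_asof`` call that consumes these frames sits outside this hunk. As a minimal sketch of what the ``by`` grouping does, with abbreviated stand-ins for the ``trades``/``quotes`` frames from the full document:

.. code-block:: python

   import pandas as pd

   trades = pd.DataFrame({
       'time': pd.to_datetime(['20160525 13:30:00.023',
                               '20160525 13:30:00.038',
                               '20160525 13:30:00.048']),
       'ticker': ['MSFT', 'MSFT', 'GOOG'],
       'price': [51.95, 51.95, 720.77]})

   quotes = pd.DataFrame({
       'time': pd.to_datetime(['20160525 13:30:00.023',
                               '20160525 13:30:00.030',
                               '20160525 13:30:00.041']),
       'ticker': ['GOOG', 'MSFT', 'MSFT'],
       'bid': [720.50, 51.95, 51.97],
       'ask': [720.93, 51.96, 51.98]})

   # for each trade, take the most recent quote whose 'ticker' matches
   pd.merge_asof(trades, quotes, on='time', by='ticker')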
@@ -143,7 +136,8 @@ See the full documentation :ref:`here <stats.moments.ts>`.
.. ipython:: python

dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
-                      index=pd.date_range('20130101 09:00:00', periods=5, freq='s'))
+                      index=pd.date_range('20130101 09:00:00',
+                                          periods=5, freq='s'))
dft

This is a regular frequency index. Using an integer window parameter works to roll along the window frequency.
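The rolling calls themselves fall outside this hunk; a minimal sketch of the two window styles on the regular-frequency ``dft`` built above (names as in the doc):

.. code-block:: python

   import numpy as np
   import pandas as pd

   dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
                      index=pd.date_range('20130101 09:00:00',
                                          periods=5, freq='s'))
   dft.rolling(2).sum()     # integer window: a fixed count of rows
   dft.rolling('2s').sum()  # offset window (new in 0.19): trailing 2 seconds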
@@ -164,13 +158,13 @@ Using a non-regular, but still monotonic index, rolling with an integer window d
.. ipython:: python


-   dft = DataFrame({'B': [0, 1, 2, np.nan, 4]},
-                   index = pd.Index([pd.Timestamp('20130101 09:00:00'),
-                                     pd.Timestamp('20130101 09:00:02'),
-                                     pd.Timestamp('20130101 09:00:03'),
-                                     pd.Timestamp('20130101 09:00:05'),
-                                     pd.Timestamp('20130101 09:00:06')],
-                                    name='foo'))
+   dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
+                      index=pd.Index([pd.Timestamp('20130101 09:00:00'),
+                                      pd.Timestamp('20130101 09:00:02'),
+                                      pd.Timestamp('20130101 09:00:03'),
+                                      pd.Timestamp('20130101 09:00:05'),
+                                      pd.Timestamp('20130101 09:00:06')],
+                                     name='foo'))

Review thread on this block:

    Contributor: here too

    Contributor Author: I think I realigned all to overlap with the first letter "i" or "c". Let me know if I missed any block.

dft
dft.rolling(2).sum()
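The hunk ends before the payoff shown in the rendered doc; a sketch of the time-based window on this irregular index (reusing the ``dft`` just built):

.. code-block:: python

   # '2s' sums only observations within the trailing 2 seconds of each point,
   # which an integer row-count window cannot express on an irregular index
   dft.rolling('2s').sum()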
@@ -277,10 +271,10 @@ Categorical Concatenation

.. ipython:: python

   from pandas.api.types import union_categoricals
   a = pd.Categorical(["b", "c"])
   b = pd.Categorical(["a", "b"])
   union_categoricals([a, b])

- ``concat`` and ``append`` can now concatenate ``category`` dtypes with different ``categories`` as ``object`` dtype (:issue:`13524`)

@@ -289,18 +283,18 @@ Categorical Concatenation
s1 = pd.Series(['a', 'b'], dtype='category')
s2 = pd.Series(['b', 'c'], dtype='category')

**Previous behavior**:

.. code-block:: ipython

   In [1]: pd.concat([s1, s2])
   ValueError: incompatible categories in categorical concat

**New behavior**:

.. ipython:: python

   pd.concat([s1, s2])
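The concatenated result is plain ``object`` dtype because the two category sets differ; a quick check (a sketch, not part of the original page):

.. code-block:: python

   import pandas as pd

   s1 = pd.Series(['a', 'b'], dtype='category')
   s2 = pd.Series(['b', 'c'], dtype='category')
   pd.concat([s1, s2]).dtype  # dtype('O')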

.. _whatsnew_0190.enhancements.semi_month_offsets:

@@ -313,31 +307,31 @@ These provide date offsets anchored (by default) to the 15th and end of month, a

.. ipython:: python

   from pandas.tseries.offsets import SemiMonthEnd, SemiMonthBegin

**SemiMonthEnd**:

.. ipython:: python

-   Timestamp('2016-01-01') + SemiMonthEnd()
+   pd.Timestamp('2016-01-01') + SemiMonthEnd()

   pd.date_range('2015-01-01', freq='SM', periods=4)

**SemiMonthBegin**:

.. ipython:: python

-   Timestamp('2016-01-01') + SemiMonthBegin()
+   pd.Timestamp('2016-01-01') + SemiMonthBegin()

   pd.date_range('2015-01-01', freq='SMS', periods=4)

Using the anchoring suffix, you can also specify the day of month to use instead of the 15th.

.. ipython:: python

   pd.date_range('2015-01-01', freq='SMS-16', periods=4)

   pd.date_range('2015-01-01', freq='SM-14', periods=4)

.. _whatsnew_0190.enhancements.index:

@@ -367,7 +361,7 @@ For ``MultiIndex``, values are dropped if any level is missing by default. Speci
.. ipython:: python

midx = pd.MultiIndex.from_arrays([[1, 2, np.nan, 4],
                                     [1, 2, np.nan, np.nan]])
midx
midx.dropna()
midx.dropna(how='all')
@@ -377,7 +371,7 @@ For ``MultiIndex``, values are dropped if any level is missing by default. Speci
.. ipython:: python

idx = pd.Index(["a1a2", "b1", "c1"])
-   idx.str.extractall("[ab](?P<digit>\d)")
+   idx.str.extractall(r"[ab](?P<digit>\d)")

``Index.astype()`` now accepts an optional boolean argument ``copy``, which allows optional copying if the requirements on dtype are satisfied (:issue:`13209`)
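That bullet has no example in the hunk; a minimal sketch of the ``copy`` argument (values are hypothetical):

.. code-block:: python

   import pandas as pd

   idx = pd.Index([1, 2, 3])
   idx2 = idx.astype('int64', copy=False)  # no cast needed; data may be reused
   idx3 = idx.astype('float64')            # a real cast always allocates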

@@ -453,7 +447,7 @@ The following are now part of this API:

import pprint
from pandas.api import types
-   funcs = [ f for f in dir(types) if not f.startswith('_') ]
+   funcs = [f for f in dir(types) if not f.startswith('_')]
pprint.pprint(funcs)

.. note::
@@ -470,20 +464,21 @@ Other enhancements

.. ipython:: python

   pd.Timestamp(2012, 1, 1)

   pd.Timestamp(year=2012, month=1, day=1, hour=8, minute=30)

- The ``.resample()`` function now accepts an ``on=`` or ``level=`` parameter for resampling on a datetimelike column or ``MultiIndex`` level (:issue:`13500`)

.. ipython:: python

df = pd.DataFrame({'date': pd.date_range('2015-01-01', freq='W', periods=5),
'a': np.arange(5)},
-                     index=pd.MultiIndex.from_arrays([
-                         [1,2,3,4,5],
-                         pd.date_range('2015-01-01', freq='W', periods=5)],
-                         names=['v','d']))
+                     index=pd.MultiIndex.from_arrays([[1, 2, 3, 4, 5],
+                                                      pd.date_range('2015-01-01',
+                                                                    freq='W',
+                                                                    periods=5)
+                                                      ], names=['v', 'd']))
df
df.resample('M', on='date').sum()
df.resample('M', level='d').sum()
@@ -547,7 +542,7 @@ API changes

.. ipython:: python

-   s = pd.Series([1,2,3])
+   s = pd.Series([1, 2, 3])

**Previous behavior**:

@@ -953,7 +948,7 @@ of integers (:issue:`13988`).

In [6]: pi = pd.PeriodIndex(['2011-01', '2011-02'], freq='M')
In [7]: pi.values
-   array([492, 493])
+   Out[7]: array([492, 493])

**New behavior**:
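(The new-behavior block is cut off by this hunk; a sketch of what pandas 0.19 returns here, reconstructed from the surrounding text:)

.. code-block:: python

   import pandas as pd

   pi = pd.PeriodIndex(['2011-01', '2011-02'], freq='M')
   # now an array of Period objects rather than the integer ordinals:
   # array([Period('2011-01', 'M'), Period('2011-02', 'M')], dtype=object)
   pi.values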

@@ -981,23 +976,23 @@ Previous behavior:

.. code-block:: ipython

   In [1]: pd.Index(['a', 'b']) + pd.Index(['a', 'c'])
   FutureWarning: using '+' to provide set union with Indexes is deprecated, use '|' or .union()
   Out[1]: Index(['a', 'b', 'c'], dtype='object')

**New behavior**: the same operation will now perform element-wise addition:

.. ipython:: python

   pd.Index(['a', 'b']) + pd.Index(['a', 'c'])

Note that numeric Index objects already performed element-wise operations.
For example, the behavior of adding two integer Indexes is unchanged.
The base ``Index`` is now made consistent with this behavior.

.. ipython:: python

   pd.Index([1, 2, 3]) + pd.Index([2, 3, 4])

Further, because of this change, it is now possible to subtract two
DatetimeIndex objects resulting in a TimedeltaIndex:
@@ -1006,15 +1001,17 @@ DatetimeIndex objects resulting in a TimedeltaIndex:

.. code-block:: ipython

-   In [1]: pd.DatetimeIndex(['2016-01-01', '2016-01-02']) - pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
+   In [1]: (pd.DatetimeIndex(['2016-01-01', '2016-01-02'])
+      ...: - pd.DatetimeIndex(['2016-01-02', '2016-01-03']))
FutureWarning: using '-' to provide set differences with datetimelike Indexes is deprecated, use .difference()
Out[1]: DatetimeIndex(['2016-01-01'], dtype='datetime64[ns]', freq=None)

**New behavior**:

.. ipython:: python

-   pd.DatetimeIndex(['2016-01-01', '2016-01-02']) - pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
+   (pd.DatetimeIndex(['2016-01-01', '2016-01-02'])
+    - pd.DatetimeIndex(['2016-01-02', '2016-01-03']))


.. _whatsnew_0190.api.difference:
@@ -1063,7 +1060,8 @@ Previously, most ``Index`` classes returned ``np.ndarray``, and ``DatetimeIndex``
In [1]: pd.Index([1, 2, 3]).unique()
Out[1]: array([1, 2, 3])

-   In [2]: pd.DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], tz='Asia/Tokyo').unique()
+   In [2]: pd.DatetimeIndex(['2011-01-01', '2011-01-02',
+      ...:                   '2011-01-03'], tz='Asia/Tokyo').unique()
Out[2]:
DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
'2011-01-03 00:00:00+09:00'],
@@ -1074,7 +1072,8 @@ Previously, most ``Index`` classes returned ``np.ndarray``, and ``DatetimeIndex``
.. ipython:: python

pd.Index([1, 2, 3]).unique()
-   pd.DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], tz='Asia/Tokyo').unique()
+   pd.DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'],
+                    tz='Asia/Tokyo').unique()

.. _whatsnew_0190.api.multiindex:

@@ -1236,29 +1235,29 @@ Operators now preserve dtypes

- Sparse data structures can now preserve ``dtype`` after arithmetic ops (:issue:`13848`)

.. ipython:: python

   s = pd.SparseSeries([0, 2, 0, 1], fill_value=0, dtype=np.int64)
   s.dtype

   s + 1
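To make the preserved dtype explicit, it can be checked directly (a sketch; ``SparseSeries`` only exists in pandas 0.25 and earlier):

.. code-block:: python

   # arithmetic keeps the sparse int64 dtype instead of upcasting to float64
   (s + 1).dtype  # dtype('int64')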

- Sparse data structures now support ``astype`` to convert the internal ``dtype`` (:issue:`13900`)

.. ipython:: python

   s = pd.SparseSeries([1., 0., 2., 0.], fill_value=0)
   s
   s.astype(np.int64)

``astype`` fails if the data contains values which cannot be converted to the specified ``dtype``.
Note that the limitation also applies to ``fill_value``, which defaults to ``np.nan``.

.. code-block:: ipython

   In [7]: pd.SparseSeries([1., np.nan, 2., np.nan], fill_value=np.nan).astype(np.int64)
   Out[7]:
   ValueError: unable to coerce current fill_value nan to int64 dtype

Other sparse fixes
""""""""""""""""""