diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 49289862a3acd..add1a4e587240 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -134,9 +134,10 @@ indexing functionality:

 .. ipython:: python

    dates = pd.date_range('1/1/2000', periods=8)
-   df = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])
+   df = pd.DataFrame(np.random.randn(8, 4),
+                     index=dates, columns=['A', 'B', 'C', 'D'])
    df
-   panel = pd.Panel({'one' : df, 'two' : df - df.mean()})
+   panel = pd.Panel({'one': df, 'two': df - df.mean()})
    panel

 .. note::
@@ -174,14 +175,14 @@ columns.

 .. ipython:: python

    df[['A', 'B']]
-   df.loc[:,['B', 'A']] = df[['A', 'B']]
+   df.loc[:, ['B', 'A']] = df[['A', 'B']]
    df[['A', 'B']]

 The correct way to swap column values is by using raw values:

 .. ipython:: python

-   df.loc[:,['B', 'A']] = df[['A', 'B']].to_numpy()
+   df.loc[:, ['B', 'A']] = df[['A', 'B']].to_numpy()
    df[['A', 'B']]

@@ -199,7 +200,7 @@ as an attribute:

 .. ipython:: python

-   sa = pd.Series([1,2,3],index=list('abc'))
+   sa = pd.Series([1, 2, 3], index=list('abc'))
    dfa = df.copy()

 .. ipython:: python
@@ -239,7 +240,7 @@ You can also assign a ``dict`` to a row of a ``DataFrame``:

 .. ipython:: python

    x = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
-   x.iloc[1] = dict(x=9, y=99)
+   x.iloc[1] = {'x': 9, 'y': 99}
    x

 You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful;
@@ -248,10 +249,10 @@ new column. In 0.21.0 and later, this will raise a ``UserWarning``:

 .. code-block:: ipython

-   In[1]: df = pd.DataFrame({'one': [1., 2., 3.]})
-   In[2]: df.two = [4, 5, 6]
+   In [1]: df = pd.DataFrame({'one': [1., 2., 3.]})
+   In [2]: df.two = [4, 5, 6]
    UserWarning: Pandas doesn't allow Series to be assigned into nonexistent columns - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute_access
-   In[3]: df
+   In [3]: df
    Out[3]:
       one
    0  1.0
@@ -308,7 +309,9 @@ Selection By Label

 .. ipython:: python

-   dfl = pd.DataFrame(np.random.randn(5,4), columns=list('ABCD'), index=pd.date_range('20130101',periods=5))
+   dfl = pd.DataFrame(np.random.randn(5, 4),
+                      columns=list('ABCD'),
+                      index=pd.date_range('20130101', periods=5))
    dfl

 .. code-block:: ipython
@@ -345,7 +348,7 @@ The ``.loc`` attribute is the primary access method. The following are valid inp

 .. ipython:: python

-   s1 = pd.Series(np.random.randn(6),index=list('abcdef'))
+   s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
    s1
    s1.loc['c':]
    s1.loc['b']
@@ -361,7 +364,7 @@ With a DataFrame:

 .. ipython:: python

-   df1 = pd.DataFrame(np.random.randn(6,4),
+   df1 = pd.DataFrame(np.random.randn(6, 4),
                       index=list('abcdef'),
                       columns=list('ABCD'))
    df1
@@ -404,7 +407,7 @@ are returned:

 .. ipython:: python

-   s = pd.Series(list('abcde'), index=[0,3,2,5,4])
+   s = pd.Series(list('abcde'), index=[0, 3, 2, 5, 4])
    s.loc[3:5]

 If at least one of the two is absent, but the index is sorted, and can be
@@ -444,7 +447,7 @@ The ``.iloc`` attribute is the primary access method. The following are valid in

 .. ipython:: python

-   s1 = pd.Series(np.random.randn(5), index=list(range(0,10,2)))
+   s1 = pd.Series(np.random.randn(5), index=list(range(0, 10, 2)))
    s1
    s1.iloc[:3]
    s1.iloc[3]
@@ -460,9 +463,9 @@ With a DataFrame:

 .. ipython:: python

-   df1 = pd.DataFrame(np.random.randn(6,4),
-                      index=list(range(0,12,2)),
-                      columns=list(range(0,8,2)))
+   df1 = pd.DataFrame(np.random.randn(6, 4),
+                      index=list(range(0, 12, 2)),
+                      columns=list(range(0, 8, 2)))
    df1

 Select via integer slicing:
@@ -516,7 +519,7 @@ an empty axis (e.g. an empty DataFrame being returned).

 .. ipython:: python

-   dfl = pd.DataFrame(np.random.randn(5,2), columns=list('AB'))
+   dfl = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
    dfl
    dfl.iloc[:, 2:3]
    dfl.iloc[:, 1:3]
@@ -818,7 +821,7 @@ In the ``Series`` case this is effectively an appending operation.

 .. ipython:: python

-   se = pd.Series([1,2,3])
+   se = pd.Series([1, 2, 3])
    se
    se[5] = 5.
    se
@@ -827,10 +830,10 @@ A ``DataFrame`` can be enlarged on either axis via ``.loc``.

 .. ipython:: python

-   dfi = pd.DataFrame(np.arange(6).reshape(3,2),
-                      columns=['A','B'])
+   dfi = pd.DataFrame(np.arange(6).reshape(3, 2),
+                      columns=['A', 'B'])
    dfi
-   dfi.loc[:,'C'] = dfi.loc[:,'A']
+   dfi.loc[:, 'C'] = dfi.loc[:, 'A']
    dfi

 This is like an ``append`` operation on the ``DataFrame``.
@@ -870,7 +873,7 @@ You can also set using these same indexers.

 .. ipython:: python

-   df.at[dates[-1]+1, 0] = 7
+   df.at[dates[-1] + 1, 0] = 7
    df

 Boolean indexing
@@ -908,9 +911,9 @@ more complex criteria:

 .. ipython:: python

-   df2 = pd.DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
-                       'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
-                       'c' : np.random.randn(7)})
+   df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
+                       'b': ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
+                       'c': np.random.randn(7)})

    # only want 'two' or 'three'
    criterion = df2['a'].map(lambda x: x.startswith('t'))
@@ -928,7 +931,7 @@ and :ref:`Advanced Indexing ` you may select along more than one axis

 .. ipython:: python

-   df2.loc[criterion & (df2['b'] == 'x'),'b':'c']
+   df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']

 .. _indexing.basics.indexing_isin:
@@ -1032,7 +1035,8 @@ The code below is equivalent to ``df.where(df < 0)``.
    :suppress:

    dates = pd.date_range('1/1/2000', periods=8)
-   df = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])
+   df = pd.DataFrame(np.random.randn(8, 4),
+                     index=dates, columns=['A', 'B', 'C', 'D'])

 .. ipython:: python
@@ -1065,7 +1069,7 @@ without creating a copy:

 .. ipython:: python

    df_orig = df.copy()
-   df_orig.where(df > 0, -df, inplace=True);
+   df_orig.where(df > 0, -df, inplace=True)
    df_orig

 .. note::
@@ -1086,7 +1090,7 @@ partial setting via ``.loc`` (but on the contents rather than the axis labels).

 .. ipython:: python

    df2 = df.copy()
-   df2[ df2[1:4] > 0] = 3
+   df2[df2[1:4] > 0] = 3
    df2

 Where can also accept ``axis`` and ``level`` parameters to align the input when
@@ -1095,14 +1099,14 @@ performing the ``where``.

 .. ipython:: python

    df2 = df.copy()
-   df2.where(df2>0,df2['A'],axis='index')
+   df2.where(df2 > 0, df2['A'], axis='index')

 This is equivalent to (but faster than) the following.

 .. ipython:: python

    df2 = df.copy()
-   df.apply(lambda x, y: x.where(x>0,y), y=df['A'])
+   df.apply(lambda x, y: x.where(x > 0, y), y=df['A'])

 .. versionadded:: 0.18.1
@@ -1163,25 +1167,12 @@ with the name ``a``.
 If instead you don't want to or cannot name your index, you can use the name
 ``index`` in your query expression:

-.. ipython:: python
-   :suppress:
-
-   old_index = index
-   del index
-
 .. ipython:: python

    df = pd.DataFrame(np.random.randint(n, size=(n, 2)), columns=list('bc'))
    df
    df.query('index < b < c')

-.. ipython:: python
-   :suppress:
-
-   index = old_index
-   del old_index
-
-
 .. note::

    If the name of your index overlaps with a column name, the column name is
@@ -1191,7 +1182,7 @@ If instead you don't want to or cannot name your index, you can use the name

    df = pd.DataFrame({'a': np.random.randint(5, size=5)})
    df.index.name = 'a'
-   df.query('a > 2') # uses the column 'a', not the index
+   df.query('a > 2')  # uses the column 'a', not the index

 You can still use the index in a query expression by using the special
 identifier 'index':
@@ -1293,15 +1284,6 @@ The ``in`` and ``not in`` operators
 ``not in`` comparison operators, providing a succinct syntax for calling the
 ``isin`` method of a ``Series`` or ``DataFrame``.

-.. ipython:: python
-   :suppress:
-
-   try:
-       old_d = d
-       del d
-   except NameError:
-       pass
-
 .. ipython:: python

    # get all rows where columns "a" and "b" have overlapping values
@@ -1325,7 +1307,8 @@ You can combine this with other expressions for very succinct queries:

 .. ipython:: python

-   # rows where cols a and b have overlapping values and col c's values are less than col d's
+   # rows where cols a and b have overlapping values
+   # and col c's values are less than col d's
    df.query('a in b and c < d')

    # pure Python
@@ -1401,15 +1384,6 @@ Of course, expressions can be arbitrarily complex too:

    shorter == longer

-.. ipython:: python
-   :suppress:
-
-   try:
-       d = old_d
-       del old_d
-   except NameError:
-       pass
-

 Performance of :meth:`~pandas.DataFrame.query`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1433,7 +1407,8 @@ floating point values generated using ``numpy.random.randn()``.

 .. ipython:: python
    :suppress:

-   df = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])
+   df = pd.DataFrame(np.random.randn(8, 4),
+                     index=dates, columns=['A', 'B', 'C', 'D'])

    df2 = df.copy()
@@ -1500,8 +1475,8 @@ default value.

 .. ipython:: python

-   s = pd.Series([1,2,3], index=['a','b','c'])
-   s.get('a') # equivalent to s['a']
+   s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
+   s.get('a')  # equivalent to s['a']
    s.get('x', default=-1)

 The :meth:`~pandas.DataFrame.lookup` Method
@@ -1513,8 +1488,8 @@ NumPy array. For instance:

 .. ipython:: python

-   dflookup = pd.DataFrame(np.random.rand(20,4), columns = ['A','B','C','D'])
-   dflookup.lookup(list(range(0,10,2)), ['B','C','A','B','D'])
+   dflookup = pd.DataFrame(np.random.rand(20, 4), columns = ['A', 'B', 'C', 'D'])
+   dflookup.lookup(list(range(0, 10, 2)), ['B', 'C', 'A', 'B', 'D'])

 .. _indexing.class:
@@ -1641,7 +1616,9 @@ Missing values

    idx1
    idx1.fillna(2)

-   idx2 = pd.DatetimeIndex([pd.Timestamp('2011-01-01'), pd.NaT, pd.Timestamp('2011-01-03')])
+   idx2 = pd.DatetimeIndex([pd.Timestamp('2011-01-01'),
+                            pd.NaT,
+                            pd.Timestamp('2011-01-03')])
    idx2
    idx2.fillna(pd.Timestamp('2011-01-02'))
@@ -1664,10 +1641,10 @@ To create a new, re-indexed DataFrame:

 .. ipython:: python
    :suppress:

-   data = pd.DataFrame({'a' : ['bar', 'bar', 'foo', 'foo'],
-                        'b' : ['one', 'two', 'one', 'two'],
-                        'c' : ['z', 'y', 'x', 'w'],
-                        'd' : [1., 2., 3, 4]})
+   data = pd.DataFrame({'a': ['bar', 'bar', 'foo', 'foo'],
+                        'b': ['one', 'two', 'one', 'two'],
+                        'c': ['z', 'y', 'x', 'w'],
+                        'd': [1., 2., 3, 4]})

 .. ipython:: python
@@ -1746,8 +1723,8 @@ When setting values in a pandas object, care must be taken to avoid what is call
                         list('efgh'),
                         list('ijkl'),
                         list('mnop')],
-                       columns=pd.MultiIndex.from_product([['one','two'],
-                                                           ['first','second']]))
+                       columns=pd.MultiIndex.from_product([['one', 'two'],
+                                                           ['first', 'second']]))
    dfmi

 Compare these two access methods:

 .. ipython:: python

    dfmi['one']['second']

@@ -1758,7 +1735,7 @@ Compare these two access methods:

 .. ipython:: python

-   dfmi.loc[:,('one','second')]
+   dfmi.loc[:, ('one', 'second')]

 These both yield the same results, so which should you use? It is instructive to understand the
 order of operations on these and why method 2 (``.loc``) is much preferred over method 1 (chained ``[]``).
@@ -1783,6 +1760,11 @@ But it turns out that assigning to the product of chained indexing has
 inherently unpredictable results. To see this, think about how the Python
 interpreter executes this code:

+.. ipython:: python
+   :suppress:
+
+   value = None
+
 .. code-block:: python

    dfmi.loc[:, ('one', 'second')] = value
@@ -1820,7 +1802,8 @@ that you've done this:

    def do_something(df):
        foo = df[['bar', 'baz']]  # Is foo a view? A copy? Nobody knows!
        # ... many lines here ...
-       foo['quux'] = value       # We don't know whether this will modify df or not!
+       # We don't know whether this will modify df or not!
+       foo['quux'] = value
        return foo

 Yikes!
@@ -1850,9 +1833,9 @@ chained indexing expression, you can set the :ref:`option `

 .. ipython:: python
    :okwarning:

-   dfb = pd.DataFrame({'a' : ['one', 'one', 'two',
-                              'three', 'two', 'one', 'six'],
-                       'c' : np.arange(7)})
+   dfb = pd.DataFrame({'a': ['one', 'one', 'two',
+                             'three', 'two', 'one', 'six'],
+                       'c': np.arange(7)})

    # This will show the SettingWithCopyWarning
    # but the frame values will be set
@@ -1880,8 +1863,8 @@ This is the correct access method:

 .. ipython:: python

-   dfc = pd.DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]})
-   dfc.loc[0,'A'] = 11
+   dfc = pd.DataFrame({'A': ['aaa', 'bbb', 'ccc'], 'B': [1, 2, 3]})
+   dfc.loc[0, 'A'] = 11
    dfc

 This *can* work at times, but it is not guaranteed to, and therefore should
 be avoided:
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index ebe577feb706c..6a089decde3f5 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -81,7 +81,7 @@ Series and DataFrame objects:

 .. ipython:: python

-   None == None
+   None == None  # noqa: E711
    np.nan == np.nan

 So as compared to above, a scalar equality comparison versus a ``None/np.nan``
 doesn't provide useful information.
@@ -102,7 +102,7 @@ pandas objects provide compatibility between ``NaT`` and ``NaN``.

    df2 = df.copy()
    df2['timestamp'] = pd.Timestamp('20120101')
    df2
-   df2.loc[['a','c','h'],['one','timestamp']] = np.nan
+   df2.loc[['a', 'c', 'h'], ['one', 'timestamp']] = np.nan
    df2
    df2.get_dtype_counts()
@@ -187,7 +187,7 @@ The sum of an empty or all-NA Series or column of a DataFrame is 0.

 .. ipython:: python

    pd.Series([np.nan]).sum()
-
+
    pd.Series([]).sum()
@@ -195,7 +195,7 @@ The product of an empty or all-NA Series or column of a DataFrame is 1.

 .. ipython:: python

    pd.Series([np.nan]).prod()
-
+
    pd.Series([]).prod()
@@ -287,10 +287,10 @@ use case of this is to fill a DataFrame with the mean of that column.

 .. ipython:: python

-   dff = pd.DataFrame(np.random.randn(10,3), columns=list('ABC'))
-   dff.iloc[3:5,0] = np.nan
-   dff.iloc[4:6,1] = np.nan
-   dff.iloc[5:8,2] = np.nan
+   dff = pd.DataFrame(np.random.randn(10, 3), columns=list('ABC'))
+   dff.iloc[3:5, 0] = np.nan
+   dff.iloc[4:6, 1] = np.nan
+   dff.iloc[5:8, 2] = np.nan
    dff
    dff.fillna(dff.mean())
@@ -473,7 +473,8 @@ filled since the last valid observation:

 .. ipython:: python

-   ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13, np.nan, np.nan])
+   ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan,
+                    np.nan, 13, np.nan, np.nan])

    # fill all consecutive values in a forward direction
    ser.interpolate()
diff --git a/doc/source/options.rst b/doc/source/options.rst
index 31359c337fdb8..e91be3e6ae730 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -38,9 +38,9 @@ and so passing in a substring will work - as long as it is unambiguous:

 .. ipython:: python

    pd.get_option("display.max_rows")
-   pd.set_option("display.max_rows",101)
+   pd.set_option("display.max_rows", 101)
    pd.get_option("display.max_rows")
-   pd.set_option("max_r",102)
+   pd.set_option("max_r", 102)
    pd.get_option("display.max_rows")

@@ -93,7 +93,7 @@ All options also have a default value, and you can use ``reset_option`` to do ju

 .. ipython:: python

    pd.get_option("display.max_rows")
-   pd.set_option("display.max_rows",999)
+   pd.set_option("display.max_rows", 999)
    pd.get_option("display.max_rows")
    pd.reset_option("display.max_rows")
    pd.get_option("display.max_rows")
@@ -113,9 +113,9 @@ are restored automatically when you exit the `with` block:

 .. ipython:: python

-   with pd.option_context("display.max_rows",10,"display.max_columns", 5):
-        print(pd.get_option("display.max_rows"))
-        print(pd.get_option("display.max_columns"))
+   with pd.option_context("display.max_rows", 10, "display.max_columns", 5):
+       print(pd.get_option("display.max_rows"))
+       print(pd.get_option("display.max_columns"))

    print(pd.get_option("display.max_rows"))
    print(pd.get_option("display.max_columns"))
@@ -150,7 +150,7 @@ lines are replaced by an ellipsis.

 .. ipython:: python

-   df = pd.DataFrame(np.random.randn(7,2))
+   df = pd.DataFrame(np.random.randn(7, 2))
    pd.set_option('max_rows', 7)
    df
    pd.set_option('max_rows', 5)
@@ -162,7 +162,7 @@ dataframes to stretch across pages, wrapped over the full column vs row-wise.

 .. ipython:: python

-   df = pd.DataFrame(np.random.randn(5,10))
+   df = pd.DataFrame(np.random.randn(5, 10))
    pd.set_option('expand_frame_repr', True)
    df
    pd.set_option('expand_frame_repr', False)
@@ -174,7 +174,7 @@ dataframes to stretch across pages, wrapped over the full column vs row-wise.

 .. ipython:: python

-   df = pd.DataFrame(np.random.randn(10,10))
+   df = pd.DataFrame(np.random.randn(10, 10))
    pd.set_option('max_rows', 5)
    pd.set_option('large_repr', 'truncate')
    df
@@ -190,7 +190,7 @@ of this length or longer will be truncated with an ellipsis.

    df = pd.DataFrame(np.array([['foo', 'bar', 'bim', 'uncomfortably long string'],
                                ['horse', 'cow', 'banana', 'apple']]))
-   pd.set_option('max_colwidth',40)
+   pd.set_option('max_colwidth', 40)
    df
    pd.set_option('max_colwidth', 6)
    df
@@ -201,7 +201,7 @@ will be given.

 .. ipython:: python

-   df = pd.DataFrame(np.random.randn(10,10))
+   df = pd.DataFrame(np.random.randn(10, 10))
    pd.set_option('max_info_columns', 11)
    df.info()
    pd.set_option('max_info_columns', 5)
@@ -215,7 +215,7 @@ can specify the option ``df.info(null_counts=True)`` to override on showing a pa

 .. ipython:: python

-   df = pd.DataFrame(np.random.choice([0,1,np.nan], size=(10,10)))
+   df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))
    df
    pd.set_option('max_info_rows', 11)
    df.info()
@@ -228,10 +228,10 @@ This is only a suggestion.

 .. ipython:: python

-   df = pd.DataFrame(np.random.randn(5,5))
-   pd.set_option('precision',7)
+   df = pd.DataFrame(np.random.randn(5, 5))
+   pd.set_option('precision', 7)
    df
-   pd.set_option('precision',4)
+   pd.set_option('precision', 4)
    df

 ``display.chop_threshold`` sets at what level pandas rounds to zero when
@@ -240,7 +240,7 @@ precision at which the number is stored.

 .. ipython:: python

-   df = pd.DataFrame(np.random.randn(6,6))
+   df = pd.DataFrame(np.random.randn(6, 6))
    pd.set_option('chop_threshold', 0)
    df
    pd.set_option('chop_threshold', .5)
@@ -252,7 +252,9 @@ The options are 'right', and 'left'.

 .. ipython:: python

-   df = pd.DataFrame(np.array([np.random.randn(6), np.random.randint(1,9,6)*.1, np.zeros(6)]).T,
+   df = pd.DataFrame(np.array([np.random.randn(6),
+                               np.random.randint(1, 9, 6) * .1,
+                               np.zeros(6)]).T,
                      columns=['A', 'B', 'C'], dtype='float')
    pd.set_option('colheader_justify', 'right')
    df
@@ -454,14 +456,14 @@ For instance:

    pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)
    s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
-   s/1.e3
-   s/1.e6
+   s / 1.e3
+   s / 1.e6

 .. ipython:: python
    :suppress:
    :okwarning:

-   pd.reset_option('^display\.')
+   pd.reset_option("^display")

 To round floats on a case-by-case basis, you can also use :meth:`~pandas.Series.round` and
 :meth:`~pandas.DataFrame.round`.
@@ -483,7 +485,7 @@ If a DataFrame or Series contains these characters, the default output mode may

 .. ipython:: python

    df = pd.DataFrame({u'国籍': ['UK', u'日本'], u'名前': ['Alice', u'しのぶ']})
-   df;
+   df

 .. image:: _static/option_unicode01.png
@@ -494,7 +496,7 @@ times than the standard ``len`` function.

 .. ipython:: python

    pd.set_option('display.unicode.east_asian_width', True)
-   df;
+   df

 .. image:: _static/option_unicode02.png
@@ -506,7 +508,7 @@ By default, an "Ambiguous" character's width, such as "¡" (inverted exclamation

 .. ipython:: python

    df = pd.DataFrame({'a': ['xxx', u'¡¡'], 'b': ['yyy', u'¡¡']})
-   df;
+   df

 .. image:: _static/option_unicode03.png
@@ -518,7 +520,7 @@ However, setting this option incorrectly for your terminal will cause these char

 .. ipython:: python

    pd.set_option('display.unicode.ambiguous_as_wide', True)
-   df;
+   df

 .. image:: _static/option_unicode04.png
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 67a30984ff0a7..abbba9d6ff8ec 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -2,11 +2,6 @@

 {{ header }}

-.. ipython:: python
-   :suppress:
-
-   import pandas.util.testing as tm
-
 *************
 Release Notes
 *************
@@ -2851,7 +2846,7 @@ API Changes

      In [5]: arr / arr2
      Out[5]: array([0, 0, 1, 4])

-     In [6]: pd.Series(arr) / pd.Series(arr2) # no future import required
+     In [6]: pd.Series(arr) / pd.Series(arr2)  # no future import required
      Out[6]:
      0    0.200000
      1    0.666667
@@ -3662,12 +3657,12 @@ Improvements to existing features

   .. ipython:: python

-     p = pd.Panel(np.random.randn(3,4,4),items=['ItemA','ItemB','ItemC'],
-                  major_axis=pd.date_range('20010102',periods=4),
-                  minor_axis=['A','B','C','D'])
+     p = pd.Panel(np.random.randn(3, 4, 4), items=['ItemA', 'ItemB', 'ItemC'],
+                  major_axis=pd.date_range('20010102', periods=4),
+                  minor_axis=['A', 'B', 'C', 'D'])
      p
      p.reindex(items=['ItemA']).squeeze()
-     p.reindex(items=['ItemA'],minor=['B']).squeeze()
+     p.reindex(items=['ItemA'], minor=['B']).squeeze()

 - Improvement to Yahoo API access in ``pd.io.data.Options`` (:issue:`2758`)
 - added option `display.max_seq_items` to control the number of elements printed per sequence
   pprinting it. (:issue:`2979`)
@@ -3681,10 +3676,10 @@ Improvements to existing features

   .. ipython:: python

     idx = pd.date_range("2001-10-1", periods=5, freq='M')
-     ts = pd.Series(np.random.rand(len(idx)),index=idx)
+     ts = pd.Series(np.random.rand(len(idx)), index=idx)
     ts['2001']

-     df = pd.DataFrame(dict(A = ts))
+     df = pd.DataFrame({'A': ts})
     df['2001']

 - added option `display.mpl_style` providing a sleeker visual style for plots. Based on https://gist.github.com/huyng/816622 (:issue:`3075`).
diff --git a/setup.cfg b/setup.cfg
index 30b4d13bd0a66..fd258e7334ff0 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -77,10 +77,6 @@ exclude =
     doc/source/contributing_docstring.rst
     doc/source/enhancingperf.rst
     doc/source/groupby.rst
-    doc/source/indexing.rst
-    doc/source/missing_data.rst
-    doc/source/options.rst
-    doc/source/release.rst

 [yapf]