DOC: Fix flake8 issues in whatsnew v10* and v11* #24277

Merged: 4 commits, Dec 14, 2018
56 changes: 28 additions & 28 deletions doc/source/whatsnew/v0.10.0.rst
@@ -3,13 +3,6 @@
v0.10.0 (December 17, 2012)
---------------------------

{{ header }}
Contributor:

do we still need the {{ header }} ? cc @datapythonista

Member:

Yes, it should be in all pages. It is where we do the import pandas as pd and set some options. But even in pages where there is no code, I need it there because with the new sphinx theme, I'll be controlling the navigation with this.

Author:

I added it back; it was deleted by mistake along with the include block.


.. ipython:: python
:suppress:

from pandas import * # noqa F401, F403


This is a major release from 0.9.1 and includes many new features and
enhancements along with a large number of bug fixes. There are also a number of
@@ -60,7 +53,7 @@ talking about:
# deprecated now
df - df[0]
# Change your code to
df.sub(df[0], axis=0) # align on axis 0 (rows)
df.sub(df[0], axis=0) # align on axis 0 (rows)
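The replacement shown in the hunk above can be sketched on a small frame. This is a minimal illustration with hypothetical data, assuming a recent pandas where `DataFrame.sub` accepts an `axis` argument:

```python
import numpy as np
import pandas as pd

# A small frame with integer column labels, mirroring the df[0] usage above.
df = pd.DataFrame(np.arange(6).reshape(3, 2))

# Subtract column 0 from every column, aligning on the rows (axis=0),
# which is what the deprecated `df - df[0]` implicitly did.
result = df.sub(df[0], axis=0)
print(result)
```

With `axis=0` the subtracted Series is matched against the row index, so column 0 of `result` is all zeros.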

You will get a deprecation warning in the 0.10.x series, and the deprecated
functionality will be removed in 0.11 or later.
@@ -77,7 +70,7 @@ labeled the aggregated group with the end of the interval: the next day).

In [1]: dates = pd.date_range('1/1/2000', '1/5/2000', freq='4h')

In [2]: series = Series(np.arange(len(dates)), index=dates)
In [2]: series = pd.Series(np.arange(len(dates)), index=dates)

In [3]: series
Out[3]:
@@ -187,10 +180,14 @@ labeled the aggregated group with the end of the interval: the next day).

.. ipython:: python

data= 'a,b,c\n1,Yes,2\n3,No,4'
import io

data = ('a,b,c\n'
'1,Yes,2\n'
'3,No,4')
print(data)
pd.read_csv(StringIO(data), header=None)
pd.read_csv(StringIO(data), header=None, prefix='X')
pd.read_csv(io.StringIO(data), header=None)
pd.read_csv(io.StringIO(data), header=None, prefix='X')

- Values like ``'Yes'`` and ``'No'`` are not interpreted as boolean by default,
though this can be controlled by new ``true_values`` and ``false_values``
@@ -199,8 +196,8 @@ labeled the aggregated group with the end of the interval: the next day).
.. ipython:: python

print(data)
pd.read_csv(StringIO(data))
pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
pd.read_csv(io.StringIO(data))
pd.read_csv(io.StringIO(data), true_values=['Yes'], false_values=['No'])
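The `true_values`/`false_values` change in this hunk can be sketched as a standalone snippet (a minimal example with made-up CSV data; both parameters still exist in current pandas):

```python
import io
import pandas as pd

data = ('a,b,c\n'
        '1,Yes,2\n'
        '3,No,4')

# By default 'Yes' and 'No' are left as plain strings...
default = pd.read_csv(io.StringIO(data))

# ...but true_values/false_values map them to booleans on parse.
mapped = pd.read_csv(io.StringIO(data),
                     true_values=['Yes'], false_values=['No'])
print(mapped.dtypes)
```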

- The file parsers will not recognize non-string values arising from a
converter function as NA if passed in the ``na_values`` argument. It's better
@@ -211,7 +208,7 @@ labeled the aggregated group with the end of the interval: the next day).

.. ipython:: python

s = Series([np.nan, 1., 2., np.nan, 4])
s = pd.Series([np.nan, 1., 2., np.nan, 4])
s
s.fillna(0)
s.fillna(method='pad')
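The filling behavior in this hunk can be sketched with the same series. A minimal example, assuming a recent pandas where `ffill`/`bfill` are the method-style spellings of `fillna(method='pad'/'bfill')`:

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 1.0, 2.0, np.nan, 4.0])

filled = s.fillna(0)   # replace every NaN with a constant
padded = s.ffill()     # propagate the last valid value forward
backed = s.bfill()     # pull the next valid value backward
```

Note that `ffill` leaves the leading NaN in place, since there is no earlier valid value to propagate.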
@@ -230,9 +227,9 @@ Convenience methods ``ffill`` and ``bfill`` have been added:
.. ipython:: python

def f(x):
return Series([ x, x**2 ], index = ['x', 'x^2'])
return pd.Series([x, x**2], index=['x', 'x^2'])

s = Series(np.random.rand(5))
s = pd.Series(np.random.rand(5))
s
s.apply(f)
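The `apply` pattern in this hunk, where the function returns a Series to produce a DataFrame, can be sketched deterministically (hypothetical input values, assuming current pandas behavior):

```python
import pandas as pd

def f(x):
    # Returning a Series makes apply assemble a DataFrame,
    # with one column per index label of the returned Series.
    return pd.Series([x, x ** 2], index=['x', 'x^2'])

s = pd.Series([1.0, 2.0, 3.0])
table = s.apply(f)
print(table)
```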

@@ -249,7 +246,7 @@ Convenience methods ``ffill`` and ``bfill`` have been added:

.. ipython:: python

get_option("display.max_rows")
pd.get_option("display.max_rows")
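The options API touched by this hunk can be sketched briefly. A minimal example, assuming a recent pandas where `get_option` and the `option_context` context manager are available:

```python
import pandas as pd

# Read the current setting.
current = pd.get_option("display.max_rows")

# Temporarily override it; the previous value is restored on exit.
with pd.option_context("display.max_rows", 5):
    assert pd.get_option("display.max_rows") == 5

# Back to the original value outside the context.
assert pd.get_option("display.max_rows") == current
```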

- to_string() methods now always return unicode strings (:issue:`2224`).

@@ -264,7 +261,7 @@ representation across multiple rows by default:

.. ipython:: python

wide_frame = DataFrame(randn(5, 16))
wide_frame = pd.DataFrame(np.random.randn(5, 16))

wide_frame

@@ -300,13 +297,16 @@ Updated PyTables Support
:suppress:
:okexcept:

import os

os.remove('store.h5')

.. ipython:: python

store = HDFStore('store.h5')
df = DataFrame(randn(8, 3), index=date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C'])
store = pd.HDFStore('store.h5')
df = pd.DataFrame(np.random.randn(8, 3),
index=pd.date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C'])
df

# appending data frames
@@ -322,13 +322,13 @@ Updated PyTables Support
.. ipython:: python
:okwarning:

wp = Panel(randn(2, 5, 4), items=['Item1', 'Item2'],
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
wp = pd.Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'],
major_axis=pd.date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
wp

# storing a panel
store.append('wp',wp)
store.append('wp', wp)

# selecting via A QUERY
store.select('wp', "major_axis>20000102 and minor_axis=['A','B']")
@@ -361,8 +361,8 @@ Updated PyTables Support
.. ipython:: python

df['string'] = 'string'
df['int'] = 1
store.append('df',df)
df['int'] = 1
store.append('df', df)
df1 = store.select('df')
df1
df1.get_dtype_counts()
57 changes: 27 additions & 30 deletions doc/source/whatsnew/v0.10.1.rst
@@ -3,13 +3,6 @@
v0.10.1 (January 22, 2013)
---------------------------

{{ header }}
Member:

restore the header here too


.. ipython:: python
:suppress:

from pandas import * # noqa F401, F403


This is a minor release from 0.10.0 and includes new features, enhancements,
and bug fixes. In particular, there is substantial new HDFStore functionality
@@ -47,6 +40,7 @@ You may need to upgrade your existing data files. Please visit the
.. ipython:: python
:suppress:
:okexcept:
import os
Member:
I think we need a blank line before the import here


os.remove('store.h5')

@@ -55,17 +49,18 @@ perform queries on a table, by passing a list to ``data_columns``

.. ipython:: python

store = HDFStore('store.h5')
df = DataFrame(randn(8, 3), index=date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C'])
store = pd.HDFStore('store.h5')
df = pd.DataFrame(np.random.randn(8, 3),
index=pd.date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C'])
df['string'] = 'foo'
df.loc[df.index[4:6], 'string'] = np.nan
df.loc[df.index[7:9], 'string'] = 'bar'
df['string2'] = 'cool'
df

# on-disk operations
store.append('df', df, data_columns = ['B','C','string','string2'])
store.append('df', df, data_columns=['B', 'C', 'string', 'string2'])
store.select('df', "B>0 and string=='foo'")

# this is in-memory version of this type of selection
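The in-memory counterpart of the on-disk query above can be sketched without HDF5 at all, using plain boolean indexing (same hypothetical frame as the hunk; no PyTables required to run this):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(8, 3),
                  index=pd.date_range('1/1/2000', periods=8),
                  columns=['A', 'B', 'C'])
df['string'] = 'foo'
df.loc[df.index[4:6], 'string'] = np.nan
df.loc[df.index[7:9], 'string'] = 'bar'

# In-memory counterpart of store.select('df', "B>0 and string=='foo'")
selected = df[(df.B > 0) & (df.string == 'foo')]
```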
@@ -77,16 +72,16 @@ Retrieving unique values in an indexable or data column.

# note that this is deprecated as of 0.14.0
# can be replicated by: store.select_column('df','index').unique()
store.unique('df','index')
store.unique('df','string')
store.unique('df', 'index')
store.unique('df', 'string')

You can now store ``datetime64`` in data columns

.. ipython:: python

df_mixed = df.copy()
df_mixed['datetime64'] = Timestamp('20010102')
df_mixed.loc[df_mixed.index[3:4], ['A','B']] = np.nan
df_mixed = df.copy()
df_mixed['datetime64'] = pd.Timestamp('20010102')
df_mixed.loc[df_mixed.index[3:4], ['A', 'B']] = np.nan

store.append('df_mixed', df_mixed)
df_mixed1 = store.select('df_mixed')
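The `datetime64` column assignment from this hunk can be sketched without the HDF5 store (hypothetical frame; only the assignment and dtype behavior are shown):

```python
import numpy as np
import pandas as pd

df_mixed = pd.DataFrame({'A': [1.0, 2.0, 3.0]})

# Broadcasting a scalar Timestamp fills the whole column
# with a datetime64 dtype.
df_mixed['datetime64'] = pd.Timestamp('20010102')
df_mixed.loc[df_mixed.index[1:2], 'A'] = np.nan
print(df_mixed.dtypes)
```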
@@ -99,21 +94,21 @@ columns, this is equivalent to passing a

.. ipython:: python

store.select('df',columns = ['A','B'])
store.select('df', columns=['A', 'B'])

``HDFStore`` now serializes MultiIndex dataframes when appending tables.

.. code-block:: ipython

In [19]: index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
....: ['one', 'two', 'three']],
....: labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
....: [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
....: names=['foo', 'bar'])
In [19]: index = pd.MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
....: ['one', 'two', 'three']],
....: labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
....: [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
....: names=['foo', 'bar'])
....:

In [20]: df = DataFrame(np.random.randn(10, 3), index=index,
....: columns=['A', 'B', 'C'])
In [20]: df = pd.DataFrame(np.random.randn(10, 3), index=index,
....: columns=['A', 'B', 'C'])
....:

In [21]: df
@@ -131,7 +126,7 @@ columns, this is equivalent to passing a
two -3.207595 -1.535854 0.409769
three -0.673145 -0.741113 -0.110891

In [22]: store.append('mi',df)
In [22]: store.append('mi', df)

In [23]: store.select('mi')
Out[23]:
Expand Down Expand Up @@ -162,26 +157,28 @@ combined result, by using ``where`` on a selector table.

.. ipython:: python

df_mt = DataFrame(randn(8, 6), index=date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C', 'D', 'E', 'F'])
df_mt = pd.DataFrame(np.random.randn(8, 6),
index=pd.date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C', 'D', 'E', 'F'])
df_mt['foo'] = 'bar'

# you can also create the tables individually
store.append_to_multiple({ 'df1_mt' : ['A','B'], 'df2_mt' : None }, df_mt, selector = 'df1_mt')
store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
df_mt, selector='df1_mt')
store

# individual tables were created
store.select('df1_mt')
store.select('df2_mt')

# as a multiple
store.select_as_multiple(['df1_mt','df2_mt'], where = [ 'A>0','B>0' ], selector = 'df1_mt')
store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
selector='df1_mt')

.. ipython:: python
:suppress:

store.close()
import os
os.remove('store.h5')

**Enhancements**