Commit aa25770

saurav-chakravorty authored and Pingviinituutti committed
DOC: Fix flake8 issues in whatsnew v10* and v11* (pandas-dev#24277)
1 parent c5e8a78 commit aa25770

4 files changed (+85, -89 lines)
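The diff below follows a small set of flake8-driven patterns: the suppressed "from pandas import *" setup blocks are removed, examples call pandas and NumPy through explicit pd. / np. namespaces, StringIO comes from the io module, and spacing around keyword arguments and inline comments is brought in line with PEP 8. As a rough illustration of the post-fix style (my own condensation of the hunks below, not a verbatim excerpt), the repaired snippets read like this:

    import io

    import numpy as np
    import pandas as pd

    # Explicit namespaces replace the old star import.
    s = pd.Series(np.random.rand(5))
    df = pd.DataFrame(np.random.randn(8, 3),
                      index=pd.date_range('1/1/2000', periods=8),
                      columns=['A', 'B', 'C'])

    # io.StringIO replaces the previously bare StringIO name.
    data = ('a,b,c\n'
            '1,Yes,2\n'
            '3,No,4')
    pd.read_csv(io.StringIO(data), true_values=['Yes'], false_values=['No'])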

doc/source/whatsnew/v0.10.0.rst

+28, -26
@@ -5,11 +5,6 @@ v0.10.0 (December 17, 2012)

 {{ header }}

-.. ipython:: python
-   :suppress:
-
-   from pandas import *  # noqa F401, F403
-

 This is a major release from 0.9.1 and includes many new features and
 enhancements along with a large number of bug fixes. There are also a number of
@@ -60,7 +55,7 @@ talking about:
    # deprecated now
    df - df[0]
    # Change your code to
-   df.sub(df[0], axis=0) # align on axis 0 (rows)
+   df.sub(df[0], axis=0)  # align on axis 0 (rows)

 You will get a deprecation warning in the 0.10.x series, and the deprecated
 functionality will be removed in 0.11 or later.
@@ -77,7 +72,7 @@ labeled the aggregated group with the end of the interval: the next day).

    In [1]: dates = pd.date_range('1/1/2000', '1/5/2000', freq='4h')

-   In [2]: series = Series(np.arange(len(dates)), index=dates)
+   In [2]: series = pd.Series(np.arange(len(dates)), index=dates)

    In [3]: series
    Out[3]:
@@ -187,10 +182,14 @@ labeled the aggregated group with the end of the interval: the next day).

 .. ipython:: python

-   data= 'a,b,c\n1,Yes,2\n3,No,4'
+   import io
+
+   data = ('a,b,c\n'
+           '1,Yes,2\n'
+           '3,No,4')
    print(data)
-   pd.read_csv(StringIO(data), header=None)
-   pd.read_csv(StringIO(data), header=None, prefix='X')
+   pd.read_csv(io.StringIO(data), header=None)
+   pd.read_csv(io.StringIO(data), header=None, prefix='X')

 - Values like ``'Yes'`` and ``'No'`` are not interpreted as boolean by default,
   though this can be controlled by new ``true_values`` and ``false_values``
@@ -199,8 +198,8 @@ labeled the aggregated group with the end of the interval: the next day).
 .. ipython:: python

    print(data)
-   pd.read_csv(StringIO(data))
-   pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
+   pd.read_csv(io.StringIO(data))
+   pd.read_csv(io.StringIO(data), true_values=['Yes'], false_values=['No'])

 - The file parsers will not recognize non-string values arising from a
   converter function as NA if passed in the ``na_values`` argument. It's better
@@ -211,7 +210,7 @@ labeled the aggregated group with the end of the interval: the next day).

 .. ipython:: python

-   s = Series([np.nan, 1., 2., np.nan, 4])
+   s = pd.Series([np.nan, 1., 2., np.nan, 4])
    s
    s.fillna(0)
    s.fillna(method='pad')
@@ -230,9 +229,9 @@ Convenience methods ``ffill`` and ``bfill`` have been added:
 .. ipython:: python

    def f(x):
-       return Series([ x, x**2 ], index = ['x', 'x^2'])
+       return pd.Series([x, x**2], index=['x', 'x^2'])

-   s = Series(np.random.rand(5))
+   s = pd.Series(np.random.rand(5))
    s
    s.apply(f)

@@ -249,7 +248,7 @@ Convenience methods ``ffill`` and ``bfill`` have been added:

 .. ipython:: python

-   get_option("display.max_rows")
+   pd.get_option("display.max_rows")

 - to_string() methods now always return unicode strings (:issue:`2224`).

@@ -264,7 +263,7 @@ representation across multiple rows by default:

 .. ipython:: python

-   wide_frame = DataFrame(randn(5, 16))
+   wide_frame = pd.DataFrame(np.random.randn(5, 16))

    wide_frame

@@ -300,13 +299,16 @@ Updated PyTables Support
    :suppress:
    :okexcept:

+   import os
+
    os.remove('store.h5')

 .. ipython:: python

-   store = HDFStore('store.h5')
-   df = DataFrame(randn(8, 3), index=date_range('1/1/2000', periods=8),
-                  columns=['A', 'B', 'C'])
+   store = pd.HDFStore('store.h5')
+   df = pd.DataFrame(np.random.randn(8, 3),
+                     index=pd.date_range('1/1/2000', periods=8),
+                     columns=['A', 'B', 'C'])
    df

    # appending data frames
@@ -322,13 +324,13 @@ Updated PyTables Support
 .. ipython:: python
    :okwarning:

-   wp = Panel(randn(2, 5, 4), items=['Item1', 'Item2'],
-              major_axis=date_range('1/1/2000', periods=5),
-              minor_axis=['A', 'B', 'C', 'D'])
+   wp = pd.Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'],
+                 major_axis=pd.date_range('1/1/2000', periods=5),
+                 minor_axis=['A', 'B', 'C', 'D'])
    wp

    # storing a panel
-   store.append('wp',wp)
+   store.append('wp', wp)

    # selecting via A QUERY
    store.select('wp', "major_axis>20000102 and minor_axis=['A','B']")
@@ -361,8 +363,8 @@ Updated PyTables Support
 .. ipython:: python

    df['string'] = 'string'
-   df['int']    = 1
-   store.append('df',df)
+   df['int'] = 1
+   store.append('df', df)
    df1 = store.select('df')
    df1
    df1.get_dtype_counts()
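For a quick sanity check that the namespaced forms used above work against a plain "import pandas as pd" / "import numpy as np" session, a condensed, hand-written version of the repaired apply and options snippets (not part of the commit itself) would be:

    import numpy as np
    import pandas as pd

    # Namespaced construction and apply, as in the fixed ffill/bfill example.
    def f(x):
        return pd.Series([x, x**2], index=['x', 'x^2'])

    s = pd.Series(np.random.rand(5))
    print(s.apply(f))

    # Option access goes through pd.get_option rather than a bare get_option.
    print(pd.get_option("display.max_rows"))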

doc/source/whatsnew/v0.10.1.rst

+28, -28
@@ -5,11 +5,6 @@ v0.10.1 (January 22, 2013)

 {{ header }}

-.. ipython:: python
-   :suppress:
-
-   from pandas import *  # noqa F401, F403
-

 This is a minor release from 0.10.0 and includes new features, enhancements,
 and bug fixes. In particular, there is substantial new HDFStore functionality
@@ -48,24 +43,27 @@ You may need to upgrade your existing data files. Please visit the
    :suppress:
    :okexcept:

+   import os
+
    os.remove('store.h5')

 You can designate (and index) certain columns that you want to be able to
 perform queries on a table, by passing a list to ``data_columns``

 .. ipython:: python

-   store = HDFStore('store.h5')
-   df = DataFrame(randn(8, 3), index=date_range('1/1/2000', periods=8),
-                  columns=['A', 'B', 'C'])
+   store = pd.HDFStore('store.h5')
+   df = pd.DataFrame(np.random.randn(8, 3),
+                     index=pd.date_range('1/1/2000', periods=8),
+                     columns=['A', 'B', 'C'])
    df['string'] = 'foo'
    df.loc[df.index[4:6], 'string'] = np.nan
    df.loc[df.index[7:9], 'string'] = 'bar'
    df['string2'] = 'cool'
    df

    # on-disk operations
-   store.append('df', df, data_columns = ['B','C','string','string2'])
+   store.append('df', df, data_columns=['B', 'C', 'string', 'string2'])
    store.select('df', "B>0 and string=='foo'")

    # this is in-memory version of this type of selection
@@ -77,16 +75,16 @@ Retrieving unique values in an indexable or data column.

    # note that this is deprecated as of 0.14.0
    # can be replicated by: store.select_column('df','index').unique()
-   store.unique('df','index')
-   store.unique('df','string')
+   store.unique('df', 'index')
+   store.unique('df', 'string')

 You can now store ``datetime64`` in data columns

 .. ipython:: python

-   df_mixed               = df.copy()
-   df_mixed['datetime64'] = Timestamp('20010102')
-   df_mixed.loc[df_mixed.index[3:4], ['A','B']] = np.nan
+   df_mixed = df.copy()
+   df_mixed['datetime64'] = pd.Timestamp('20010102')
+   df_mixed.loc[df_mixed.index[3:4], ['A', 'B']] = np.nan

    store.append('df_mixed', df_mixed)
    df_mixed1 = store.select('df_mixed')
@@ -99,21 +97,21 @@ columns, this is equivalent to passing a

 .. ipython:: python

-   store.select('df',columns = ['A','B'])
+   store.select('df', columns=['A', 'B'])

 ``HDFStore`` now serializes MultiIndex dataframes when appending tables.

 .. code-block:: ipython

-   In [19]: index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
-      ....:                            ['one', 'two', 'three']],
-      ....:                    labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
-      ....:                            [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
-      ....:                    names=['foo', 'bar'])
+   In [19]: index = pd.MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+      ....:                               ['one', 'two', 'three']],
+      ....:                       labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+      ....:                               [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+      ....:                       names=['foo', 'bar'])
       ....:

-   In [20]: df = DataFrame(np.random.randn(10, 3), index=index,
-      ....:                columns=['A', 'B', 'C'])
+   In [20]: df = pd.DataFrame(np.random.randn(10, 3), index=index,
+      ....:                   columns=['A', 'B', 'C'])
       ....:

    In [21]: df
@@ -131,7 +129,7 @@ columns, this is equivalent to passing a
    two   -3.207595 -1.535854  0.409769
    three -0.673145 -0.741113 -0.110891

-   In [22]: store.append('mi',df)
+   In [22]: store.append('mi', df)

    In [23]: store.select('mi')
    Out[23]:
@@ -162,26 +160,28 @@ combined result, by using ``where`` on a selector table.

 .. ipython:: python

-   df_mt = DataFrame(randn(8, 6), index=date_range('1/1/2000', periods=8),
-                     columns=['A', 'B', 'C', 'D', 'E', 'F'])
+   df_mt = pd.DataFrame(np.random.randn(8, 6),
+                        index=pd.date_range('1/1/2000', periods=8),
+                        columns=['A', 'B', 'C', 'D', 'E', 'F'])
    df_mt['foo'] = 'bar'

    # you can also create the tables individually
-   store.append_to_multiple({ 'df1_mt' : ['A','B'], 'df2_mt' : None }, df_mt, selector = 'df1_mt')
+   store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
+                            df_mt, selector='df1_mt')
    store

    # indiviual tables were created
    store.select('df1_mt')
    store.select('df2_mt')

    # as a multiple
-   store.select_as_multiple(['df1_mt','df2_mt'], where = [ 'A>0','B>0' ], selector = 'df1_mt')
+   store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
+                            selector='df1_mt')

 .. ipython:: python
    :suppress:

    store.close()
-   import os
    os.remove('store.h5')

 **Enhancements**
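The HDFStore examples repaired in both files boil down to the same pattern: build the frame through the pd. namespace, pass data_columns as a normally-spaced keyword argument, then query on those columns. A minimal, self-contained sketch of that pattern (my own condensation, assuming the optional PyTables dependency is installed) looks like:

    import os

    import numpy as np
    import pandas as pd

    store = pd.HDFStore('store.h5')
    df = pd.DataFrame(np.random.randn(8, 3),
                      index=pd.date_range('1/1/2000', periods=8),
                      columns=['A', 'B', 'C'])
    df['string'] = 'foo'

    # Columns listed in data_columns can be used in where-style queries.
    store.append('df', df, data_columns=['B', 'C', 'string'])
    print(store.select('df', "B>0 and string=='foo'"))

    store.close()
    os.remove('store.h5')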
