
Commit 4974758

saurav-chakravorty authored and jreback committed Dec 14, 2018
DOC: Fix flake8 issues in whatsnew v10* and v11* (#24277)
1 parent 040f06f commit 4974758

4 files changed (+85, -89 lines)

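Every hunk below makes the same kind of change: the executable ipython blocks in the whatsnew pages stop relying on the old wildcard "from pandas import *" and call everything through the pd/np namespaces, with flake8-clean spacing around keyword arguments. A minimal before/after sketch of that style (the variable names are illustrative, not taken from the diff):

    import numpy as np
    import pandas as pd

    # old style, as removed below: bare names pulled in by a wildcard import
    # from pandas import *   # noqa F401, F403
    # s = Series(np.random.rand(5))

    # new style, as added below: explicit namespaces, flake8-clean spacing
    s = pd.Series(np.random.rand(5))
    df = pd.DataFrame(np.random.randn(8, 3),
                      index=pd.date_range('1/1/2000', periods=8),
                      columns=['A', 'B', 'C'])
    df.sub(df['A'], axis=0)  # align on axis 0 (rows)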
 

doc/source/whatsnew/v0.10.0.rst

Lines changed: 28 additions & 26 deletions
@@ -5,11 +5,6 @@ v0.10.0 (December 17, 2012)
 
 {{ header }}
 
-.. ipython:: python
-   :suppress:
-
-   from pandas import * # noqa F401, F403
-
 
 This is a major release from 0.9.1 and includes many new features and
 enhancements along with a large number of bug fixes. There are also a number of
@@ -60,7 +55,7 @@ talking about:
    # deprecated now
    df - df[0]
    # Change your code to
-   df.sub(df[0], axis=0) # align on axis 0 (rows)
+   df.sub(df[0], axis=0)  # align on axis 0 (rows)
 
 You will get a deprecation warning in the 0.10.x series, and the deprecated
 functionality will be removed in 0.11 or later.
@@ -77,7 +72,7 @@ labeled the aggregated group with the end of the interval: the next day).
 
    In [1]: dates = pd.date_range('1/1/2000', '1/5/2000', freq='4h')
 
-   In [2]: series = Series(np.arange(len(dates)), index=dates)
+   In [2]: series = pd.Series(np.arange(len(dates)), index=dates)
 
    In [3]: series
    Out[3]:
@@ -187,10 +182,14 @@ labeled the aggregated group with the end of the interval: the next day).
 
 .. ipython:: python
 
-   data= 'a,b,c\n1,Yes,2\n3,No,4'
+   import io
+
+   data = ('a,b,c\n'
+           '1,Yes,2\n'
+           '3,No,4')
    print(data)
-   pd.read_csv(StringIO(data), header=None)
-   pd.read_csv(StringIO(data), header=None, prefix='X')
+   pd.read_csv(io.StringIO(data), header=None)
+   pd.read_csv(io.StringIO(data), header=None, prefix='X')
 
 - Values like ``'Yes'`` and ``'No'`` are not interpreted as boolean by default,
   though this can be controlled by new ``true_values`` and ``false_values``
@@ -199,8 +198,8 @@ labeled the aggregated group with the end of the interval: the next day).
 .. ipython:: python
 
    print(data)
-   pd.read_csv(StringIO(data))
-   pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
+   pd.read_csv(io.StringIO(data))
+   pd.read_csv(io.StringIO(data), true_values=['Yes'], false_values=['No'])
 
 - The file parsers will not recognize non-string values arising from a
   converter function as NA if passed in the ``na_values`` argument. It's better
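
Both hunks above swap the bare StringIO (previously provided by the wildcard import) for the standard-library io.StringIO. A small self-contained check of that pattern, using only the calls that appear in the hunks:

    import io

    import pandas as pd

    data = ('a,b,c\n'
            '1,Yes,2\n'
            '3,No,4')

    # io.StringIO wraps the literal text in a file-like object for read_csv
    pd.read_csv(io.StringIO(data), true_values=['Yes'], false_values=['No'])
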
@@ -211,7 +210,7 @@ labeled the aggregated group with the end of the interval: the next day).
 
 .. ipython:: python
 
-   s = Series([np.nan, 1., 2., np.nan, 4])
+   s = pd.Series([np.nan, 1., 2., np.nan, 4])
    s
    s.fillna(0)
    s.fillna(method='pad')
@@ -230,9 +229,9 @@ Convenience methods ``ffill`` and ``bfill`` have been added:
 .. ipython:: python
 
    def f(x):
-       return Series([ x, x**2 ], index = ['x', 'x^2'])
+       return pd.Series([x, x**2], index=['x', 'x^2'])
 
-   s = Series(np.random.rand(5))
+   s = pd.Series(np.random.rand(5))
    s
    s.apply(f)
 
@@ -249,7 +248,7 @@ Convenience methods ``ffill`` and ``bfill`` have been added:
 
 .. ipython:: python
 
-   get_option("display.max_rows")
+   pd.get_option("display.max_rows")
 
 - to_string() methods now always return unicode strings (:issue:`2224`).
 
@@ -264,7 +263,7 @@ representation across multiple rows by default:
 
 .. ipython:: python
 
-   wide_frame = DataFrame(randn(5, 16))
+   wide_frame = pd.DataFrame(np.random.randn(5, 16))
 
    wide_frame
 
@@ -300,13 +299,16 @@ Updated PyTables Support
    :suppress:
    :okexcept:
 
+   import os
+
    os.remove('store.h5')
 
 .. ipython:: python
 
-   store = HDFStore('store.h5')
-   df = DataFrame(randn(8, 3), index=date_range('1/1/2000', periods=8),
-                  columns=['A', 'B', 'C'])
+   store = pd.HDFStore('store.h5')
+   df = pd.DataFrame(np.random.randn(8, 3),
+                     index=pd.date_range('1/1/2000', periods=8),
+                     columns=['A', 'B', 'C'])
    df
 
    # appending data frames
@@ -322,13 +324,13 @@ Updated PyTables Support
 .. ipython:: python
    :okwarning:
 
-   wp = Panel(randn(2, 5, 4), items=['Item1', 'Item2'],
-              major_axis=date_range('1/1/2000', periods=5),
-              minor_axis=['A', 'B', 'C', 'D'])
+   wp = pd.Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'],
+                 major_axis=pd.date_range('1/1/2000', periods=5),
+                 minor_axis=['A', 'B', 'C', 'D'])
    wp
 
    # storing a panel
-   store.append('wp',wp)
+   store.append('wp', wp)
 
    # selecting via A QUERY
    store.select('wp', "major_axis>20000102 and minor_axis=['A','B']")
@@ -361,8 +363,8 @@ Updated PyTables Support
 .. ipython:: python
 
    df['string'] = 'string'
-   df['int']    = 1
-   store.append('df',df)
+   df['int'] = 1
+   store.append('df', df)
    df1 = store.select('df')
    df1
    df1.get_dtype_counts()
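
The PyTables hunks above keep the store operations unchanged and only spell them through the pd namespace. A sketch of the DataFrame part (HDFStore needs the optional PyTables dependency, and pd.Panel has since been removed from pandas, so treat this as illustrative rather than a drop-in copy):

    import os

    import numpy as np
    import pandas as pd

    store = pd.HDFStore('store.h5')  # requires the 'tables' package
    df = pd.DataFrame(np.random.randn(8, 3),
                      index=pd.date_range('1/1/2000', periods=8),
                      columns=['A', 'B', 'C'])

    # appending data frames, then querying them back
    store.append('df', df)
    print(store.select('df').head())

    store.close()
    os.remove('store.h5')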

doc/source/whatsnew/v0.10.1.rst

Lines changed: 28 additions & 28 deletions
@@ -5,11 +5,6 @@ v0.10.1 (January 22, 2013)
 
 {{ header }}
 
-.. ipython:: python
-   :suppress:
-
-   from pandas import * # noqa F401, F403
-
 
 This is a minor release from 0.10.0 and includes new features, enhancements,
 and bug fixes. In particular, there is substantial new HDFStore functionality
@@ -48,24 +43,27 @@ You may need to upgrade your existing data files. Please visit the
    :suppress:
    :okexcept:
 
+   import os
+
    os.remove('store.h5')
 
 You can designate (and index) certain columns that you want to be able to
 perform queries on a table, by passing a list to ``data_columns``
 
 .. ipython:: python
 
-   store = HDFStore('store.h5')
-   df = DataFrame(randn(8, 3), index=date_range('1/1/2000', periods=8),
-                  columns=['A', 'B', 'C'])
+   store = pd.HDFStore('store.h5')
+   df = pd.DataFrame(np.random.randn(8, 3),
+                     index=pd.date_range('1/1/2000', periods=8),
+                     columns=['A', 'B', 'C'])
    df['string'] = 'foo'
    df.loc[df.index[4:6], 'string'] = np.nan
    df.loc[df.index[7:9], 'string'] = 'bar'
    df['string2'] = 'cool'
    df
 
    # on-disk operations
-   store.append('df', df, data_columns = ['B','C','string','string2'])
+   store.append('df', df, data_columns=['B', 'C', 'string', 'string2'])
    store.select('df', "B>0 and string=='foo'")
 
    # this is in-memory version of this type of selection
@@ -77,16 +75,16 @@ Retrieving unique values in an indexable or data column.
 
    # note that this is deprecated as of 0.14.0
    # can be replicated by: store.select_column('df','index').unique()
-   store.unique('df','index')
-   store.unique('df','string')
+   store.unique('df', 'index')
+   store.unique('df', 'string')
 
 You can now store ``datetime64`` in data columns
 
 .. ipython:: python
 
-   df_mixed = df.copy()
-   df_mixed['datetime64'] = Timestamp('20010102')
-   df_mixed.loc[df_mixed.index[3:4], ['A','B']] = np.nan
+   df_mixed = df.copy()
+   df_mixed['datetime64'] = pd.Timestamp('20010102')
+   df_mixed.loc[df_mixed.index[3:4], ['A', 'B']] = np.nan
 
    store.append('df_mixed', df_mixed)
    df_mixed1 = store.select('df_mixed')
@@ -99,21 +97,21 @@ columns, this is equivalent to passing a
 
 .. ipython:: python
 
-   store.select('df',columns = ['A','B'])
+   store.select('df', columns=['A', 'B'])
 
 ``HDFStore`` now serializes MultiIndex dataframes when appending tables.
 
 .. code-block:: ipython
 
-   In [19]: index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
-      ....:                            ['one', 'two', 'three']],
-      ....:                    labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
-      ....:                            [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
-      ....:                    names=['foo', 'bar'])
+   In [19]: index = pd.MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+      ....:                               ['one', 'two', 'three']],
+      ....:                       labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+      ....:                               [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+      ....:                       names=['foo', 'bar'])
       ....:
 
-   In [20]: df = DataFrame(np.random.randn(10, 3), index=index,
-      ....:                columns=['A', 'B', 'C'])
+   In [20]: df = pd.DataFrame(np.random.randn(10, 3), index=index,
+      ....:                   columns=['A', 'B', 'C'])
       ....:
 
    In [21]: df
@@ -131,7 +129,7 @@ columns, this is equivalent to passing a
    two   -3.207595 -1.535854  0.409769
    three -0.673145 -0.741113 -0.110891
 
-   In [22]: store.append('mi',df)
+   In [22]: store.append('mi', df)
 
    In [23]: store.select('mi')
    Out[23]:
@@ -162,26 +160,28 @@ combined result, by using ``where`` on a selector table.
 
 .. ipython:: python
 
-   df_mt = DataFrame(randn(8, 6), index=date_range('1/1/2000', periods=8),
-                     columns=['A', 'B', 'C', 'D', 'E', 'F'])
+   df_mt = pd.DataFrame(np.random.randn(8, 6),
+                        index=pd.date_range('1/1/2000', periods=8),
+                        columns=['A', 'B', 'C', 'D', 'E', 'F'])
    df_mt['foo'] = 'bar'
 
    # you can also create the tables individually
-   store.append_to_multiple({ 'df1_mt' : ['A','B'], 'df2_mt' : None }, df_mt, selector = 'df1_mt')
+   store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
+                            df_mt, selector='df1_mt')
    store
 
    # indiviual tables were created
    store.select('df1_mt')
   store.select('df2_mt')
 
    # as a multiple
-   store.select_as_multiple(['df1_mt','df2_mt'], where = [ 'A>0','B>0' ], selector = 'df1_mt')
+   store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
+                            selector='df1_mt')
 
 .. ipython:: python
    :suppress:
 
    store.close()
-   import os
    os.remove('store.h5')
 
 **Enhancements**
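
The long append_to_multiple and select_as_multiple calls above are wrapped mainly to satisfy flake8's 79-character line limit; the arguments themselves are unchanged. A runnable sketch of those two calls (again assuming the optional PyTables dependency; the data and the store_mt.h5 filename are illustrative):

    import os

    import numpy as np
    import pandas as pd

    store = pd.HDFStore('store_mt.h5')
    df_mt = pd.DataFrame(np.random.randn(8, 6),
                         index=pd.date_range('1/1/2000', periods=8),
                         columns=['A', 'B', 'C', 'D', 'E', 'F'])

    # split the columns across two tables, with 'df1_mt' as the selector
    store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
                             df_mt, selector='df1_mt')

    # query both tables as one, filtering on the selector's columns
    result = store.select_as_multiple(['df1_mt', 'df2_mt'],
                                      where=['A>0', 'B>0'],
                                      selector='df1_mt')
    print(result)

    store.close()
    os.remove('store_mt.h5')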

doc/source/whatsnew/v0.11.0.rst

Lines changed: 29 additions & 32 deletions
@@ -5,11 +5,6 @@ v0.11.0 (April 22, 2013)
 
 {{ header }}
 
-.. ipython:: python
-   :suppress:
-
-   from pandas import * # noqa F401, F403
-
 
 This is a major release from 0.10.1 and includes many new features and
 enhancements along with a large number of bug fixes. The methods of Selecting
@@ -79,12 +74,12 @@ Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passe
 
 .. ipython:: python
 
-   df1 = DataFrame(randn(8, 1), columns = ['A'], dtype = 'float32')
+   df1 = pd.DataFrame(np.random.randn(8, 1), columns=['A'], dtype='float32')
    df1
    df1.dtypes
-   df2 = DataFrame(dict( A = Series(randn(8),dtype='float16'),
-                         B = Series(randn(8)),
-                         C = Series(range(8),dtype='uint8') ))
+   df2 = pd.DataFrame({'A': pd.Series(np.random.randn(8), dtype='float16'),
+                       'B': pd.Series(np.random.randn(8)),
+                       'C': pd.Series(range(8), dtype='uint8')})
    df2
    df2.dtypes
 
@@ -127,9 +122,9 @@ Forcing Date coercion (and setting ``NaT`` when not datelike)
 .. ipython:: python
    :okwarning:
 
-   from datetime import datetime
-   s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1,
-               Timestamp('20010104'), '20010105'],dtype='O')
+   import datetime
+   s = pd.Series([datetime.datetime(2001, 1, 1, 0, 0), 'foo', 1.0, 1,
+                  pd.Timestamp('20010104'), '20010105'], dtype='O')
    s.convert_objects(convert_dates='coerce')
 
 Dtype Gotchas
@@ -145,9 +140,9 @@ The following will all result in ``int64`` dtypes
 
 .. ipython:: python
 
-   DataFrame([1,2],columns=['a']).dtypes
-   DataFrame({'a' : [1,2] }).dtypes
-   DataFrame({'a' : 1 }, index=range(2)).dtypes
+   pd.DataFrame([1, 2], columns=['a']).dtypes
+   pd.DataFrame({'a': [1, 2]}).dtypes
+   pd.DataFrame({'a': 1}, index=range(2)).dtypes
 
 Keep in mind that ``DataFrame(np.array([1,2]))`` **WILL** result in ``int32`` on 32-bit platforms!
 
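For reference, the three constructions in this hunk still infer int64 on 64-bit platforms in current pandas, so the namespaced rewrite is behaviour-neutral; a quick check:

    import pandas as pd

    print(pd.DataFrame([1, 2], columns=['a']).dtypes)
    print(pd.DataFrame({'a': [1, 2]}).dtypes)
    print(pd.DataFrame({'a': 1}, index=range(2)).dtypes)
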
@@ -164,7 +159,7 @@ The dtype of the input data will be preserved in cases where ``nans`` are not in
    dfi
    dfi.dtypes
 
-   casted = dfi[dfi>0]
+   casted = dfi[dfi > 0]
    casted
    casted.dtypes
 
@@ -176,7 +171,7 @@ While float dtypes are unchanged.
    df4['A'] = df4['A'].astype('float32')
    df4.dtypes
 
-   casted = df4[df4>0]
+   casted = df4[df4 > 0]
    casted
    casted.dtypes
 
@@ -190,23 +185,23 @@ Furthermore ``datetime64[ns]`` columns are created by default, when passed datet
 
 .. ipython:: python
 
-   df = DataFrame(randn(6,2),date_range('20010102',periods=6),columns=['A','B'])
-   df['timestamp'] = Timestamp('20010103')
+   df = pd.DataFrame(np.random.randn(6, 2), pd.date_range('20010102', periods=6),
+                     columns=['A', ' B'])
+   df['timestamp'] = pd.Timestamp('20010103')
    df
 
    # datetime64[ns] out of the box
    df.get_dtype_counts()
 
    # use the traditional nan, which is mapped to NaT internally
-   df.loc[df.index[2:4], ['A','timestamp']] = np.nan
+   df.loc[df.index[2:4], ['A', 'timestamp']] = np.nan
    df
 
 Astype conversion on ``datetime64[ns]`` to ``object``, implicitly converts ``NaT`` to ``np.nan``
 
 .. ipython:: python
 
-   import datetime
-   s = Series([datetime.datetime(2001, 1, 2, 0, 0) for i in range(3)])
+   s = pd.Series([datetime.datetime(2001, 1, 2, 0, 0) for i in range(3)])
    s.dtype
    s[1] = np.nan
    s
@@ -250,14 +245,16 @@ Enhancements
 
 .. ipython:: python
 
-   df = DataFrame(dict(A=lrange(5), B=lrange(5)))
-   df.to_hdf('store.h5','table',append=True)
-   read_hdf('store.h5', 'table', where = ['index>2'])
+   df = pd.DataFrame({'A': lrange(5), 'B': lrange(5)})
+   df.to_hdf('store.h5', 'table', append=True)
+   pd.read_hdf('store.h5', 'table', where=['index > 2'])
 
 .. ipython:: python
    :suppress:
    :okexcept:
 
+   import os
+
    os.remove('store.h5')
 
 - provide dotted attribute access to ``get`` from stores, e.g. ``store.df == store['df']``
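
lrange in the rewritten snippet was a Python 2 compatibility helper that pandas carried at the time, essentially list(range(...)), and it has since been removed. A present-day equivalent of the same example, assuming PyTables is installed:

    import os

    import pandas as pd

    df = pd.DataFrame({'A': list(range(5)), 'B': list(range(5))})
    df.to_hdf('store.h5', key='table', append=True)
    print(pd.read_hdf('store.h5', 'table', where=['index > 2']))

    os.remove('store.h5')
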
@@ -271,23 +268,23 @@ Enhancements
 
   .. ipython:: python
 
-     idx = date_range("2001-10-1", periods=5, freq='M')
-     ts = Series(np.random.rand(len(idx)),index=idx)
+     idx = pd.date_range("2001-10-1", periods=5, freq='M')
+     ts = pd.Series(np.random.rand(len(idx)), index=idx)
      ts['2001']
 
-     df = DataFrame(dict(A = ts))
+     df = pd.DataFrame({'A': ts})
      df['2001']
 
 - ``Squeeze`` to possibly remove length 1 dimensions from an object.
 
   .. ipython:: python
 
-     p = Panel(randn(3,4,4),items=['ItemA','ItemB','ItemC'],
-               major_axis=date_range('20010102',periods=4),
-               minor_axis=['A','B','C','D'])
+     p = pd.Panel(np.random.randn(3, 4, 4), items=['ItemA', 'ItemB', 'ItemC'],
+                  major_axis=pd.date_range('20010102', periods=4),
+                  minor_axis=['A', 'B', 'C', 'D'])
      p
      p.reindex(items=['ItemA']).squeeze()
-     p.reindex(items=['ItemA'],minor=['B']).squeeze()
+     p.reindex(items=['ItemA'], minor=['B']).squeeze()
 
 - In ``pd.io.data.Options``,
 
setup.cfg

Lines changed: 0 additions & 3 deletions
@@ -50,9 +50,6 @@ exclude =
     doc/source/whatsnew/v0.8.0.rst
     doc/source/whatsnew/v0.9.0.rst
     doc/source/whatsnew/v0.9.1.rst
-    doc/source/whatsnew/v0.10.0.rst
-    doc/source/whatsnew/v0.10.1.rst
-    doc/source/whatsnew/v0.11.0.rst
     doc/source/whatsnew/v0.12.0.rst
     doc/source/whatsnew/v0.13.0.rst
     doc/source/whatsnew/v0.13.1.rst
