Commit 3c8dac0

thooPingviinituutti authored and committed
Fix flake8 issues on v22, v23 and v24rst (pandas-dev#24217)
1 parent 2905462 · commit 3c8dac0

File tree

4 files changed: +73 -77 lines changed


doc/source/whatsnew/v0.22.0.rst (+2 -2)

@@ -137,8 +137,8 @@ sum and ``1`` for product.
 .. code-block:: ipython

    In [11]: s = pd.Series([1, 1, np.nan, np.nan],
-       ...: index=pd.date_range('2017', periods=4))
-       ...: s
+      ....: index=pd.date_range('2017', periods=4))
+      ....: s
    Out[11]:
    2017-01-01    1.0
    2017-01-02    1.0

doc/source/whatsnew/v0.23.0.rst (+29 -24)
@@ -53,10 +53,10 @@ A ``DataFrame`` can now be written to and subsequently read back via JSON while
 .. ipython:: python

    df = pd.DataFrame({'foo': [1, 2, 3, 4],
-                      'bar': ['a', 'b', 'c', 'd'],
-                      'baz': pd.date_range('2018-01-01', freq='d', periods=4),
-                      'qux': pd.Categorical(['a', 'b', 'c', 'c'])
-                      }, index=pd.Index(range(4), name='idx'))
+                       'bar': ['a', 'b', 'c', 'd'],
+                       'baz': pd.date_range('2018-01-01', freq='d', periods=4),
+                       'qux': pd.Categorical(['a', 'b', 'c', 'c'])},
+                      index=pd.Index(range(4), name='idx'))
    df
    df.dtypes
    df.to_json('test.json', orient='table')
@@ -97,7 +97,7 @@ The :func:`DataFrame.assign` now accepts dependent keyword arguments for python

    df = pd.DataFrame({'A': [1, 2, 3]})
    df
-   df.assign(B=df.A, C=lambda x:x['A']+ x['B'])
+   df.assign(B=df.A, C=lambda x: x['A'] + x['B'])

 .. warning::

@@ -122,7 +122,7 @@ The :func:`DataFrame.assign` now accepts dependent keyword arguments for python

 .. ipython:: python

-   df.assign(A=df.A+1, C= lambda df: df.A* -1)
+   df.assign(A=df.A + 1, C=lambda df: df.A * -1)
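These hunks only reflow whitespace around ``assign``; the feature they document, dependent keyword arguments, relies on Python 3.6+ preserving ``**kwargs`` insertion order (PEP 468). A minimal pure-Python sketch of that mechanism — ``assign_like`` is a hypothetical stand-in, not the pandas implementation:

```python
def assign_like(row, **kwargs):
    # **kwargs preserves call order on Python 3.6+ (PEP 468), so a later
    # argument may depend on a key created by an earlier one.
    out = dict(row)
    for name, value in kwargs.items():
        out[name] = value(out) if callable(value) else value
    return out

# C can reference B because B was assigned first, in call order.
result = assign_like({'A': 1}, B=2, C=lambda r: r['A'] + r['B'])
```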
@@ -284,7 +284,7 @@ For pivotting operations, this behavior is *already* controlled by the ``dropna`
                          categories=["a", "b", "z"], ordered=True)
    cat2 = pd.Categorical(["c", "d", "c", "d"],
                          categories=["c", "d", "y"], ordered=True)
-   df = DataFrame({"A": cat1, "B": cat2, "values": [1, 2, 3, 4]})
+   df = pd.DataFrame({"A": cat1, "B": cat2, "values": [1, 2, 3, 4]})
    df

 .. ipython:: python
@@ -336,7 +336,8 @@ outside the existing valid values while preserving those inside. (:issue:`16284

 .. ipython:: python

-   ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13, np.nan, np.nan])
+   ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan,
+                    np.nan, 13, np.nan, np.nan])
    ser

 Fill one consecutive inside value in both directions
@@ -600,15 +601,16 @@ Previous Behavior (and current behavior if on Python < 3.6):

 .. code-block:: ipython

-   pd.Series({'Income': 2000,
-              'Expenses': -1500,
-              'Taxes': -200,
-              'Net result': 300})
-   Expenses     -1500
-   Income        2000
-   Net result     300
-   Taxes         -200
-   dtype: int64
+   In [16]: pd.Series({'Income': 2000,
+      ....:            'Expenses': -1500,
+      ....:            'Taxes': -200,
+      ....:            'Net result': 300})
+   Out[16]:
+   Expenses     -1500
+   Income        2000
+   Net result     300
+   Taxes         -200
+   dtype: int64

 Note the Series above is ordered alphabetically by the index values.
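This hunk turns a bare block into an ``In [16]``/``Out[16]`` pair; the alphabetical ordering it shows is the pre-Python-3.6 behavior, where dicts did not preserve insertion order. A stdlib-only sketch contrasting the two orders:

```python
data = {'Income': 2000, 'Expenses': -1500, 'Taxes': -200, 'Net result': 300}

# Python 3.7+ guarantees dicts iterate in insertion order, so a Series
# built from this dict keeps the keys as written; on older Python,
# pandas fell back to sorting the keys alphabetically, as shown above.
insertion_order = list(data)
alphabetical = sorted(data)
```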

@@ -696,7 +698,8 @@ where a list-like (e.g. ``tuple`` or ``list`` is returned) (:issue:`16353`, :iss

 .. ipython:: python

-   df = pd.DataFrame(np.tile(np.arange(3), 6).reshape(6, -1) + 1, columns=['A', 'B', 'C'])
+   df = pd.DataFrame(np.tile(np.arange(3), 6).reshape(6, -1) + 1,
+                     columns=['A', 'B', 'C'])
    df

 Previous Behavior: if the returned shape happened to match the length of original columns, this would return a ``DataFrame``.
@@ -750,7 +753,7 @@ Returning a ``Series`` allows one to control the exact return structure and colu

 .. ipython:: python

-   df.apply(lambda x: Series([1, 2, 3], index=['D', 'E', 'F']), axis=1)
+   df.apply(lambda x: pd.Series([1, 2, 3], index=['D', 'E', 'F']), axis=1)

 .. _whatsnew_0230.api_breaking.concat:

@@ -825,10 +828,12 @@ Current Behavior:
 .. ipython:: python

    index = pd.Int64Index([-1, 0, 1])
-   # division by zero gives -infinity where negative, +infinity where positive, and NaN for 0 / 0
+   # division by zero gives -infinity where negative,
+   # +infinity where positive, and NaN for 0 / 0
    index / 0

-   # The result of division by zero should not depend on whether the zero is int or float
+   # The result of division by zero should not depend on
+   # whether the zero is int or float
    index / 0.0

    index = pd.UInt64Index([0, 1])
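The comment being wrapped in this hunk describes IEEE-754 float semantics; the same result can be checked with plain NumPy, using ``errstate`` to silence the divide warnings:

```python
import numpy as np

arr = np.array([-1, 0, 1], dtype=float)
with np.errstate(divide='ignore', invalid='ignore'):
    # Elementwise float division by zero: -inf for the negative value,
    # NaN for 0 / 0, +inf for the positive value.
    result = arr / 0
```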
@@ -853,7 +858,7 @@ Previous Behavior:

    In [1]: s = pd.Series(['number 10', '12 eggs'])

-   In [2]: extracted = s.str.extract('.*(\d\d).*')
+   In [2]: extracted = s.str.extract(r'.*(\d\d).*')

    In [3]: extracted
    Out [3]:
@@ -870,7 +875,7 @@ New Behavior:

 .. ipython:: python

    s = pd.Series(['number 10', '12 eggs'])
-   extracted = s.str.extract('.*(\d\d).*')
+   extracted = s.str.extract(r'.*(\d\d).*')
    extracted
    type(extracted)
@@ -879,7 +884,7 @@ To restore previous behavior, simply set ``expand`` to ``False``:

 .. ipython:: python

    s = pd.Series(['number 10', '12 eggs'])
-   extracted = s.str.extract('.*(\d\d).*', expand=False)
+   extracted = s.str.extract(r'.*(\d\d).*', expand=False)
    extracted
    type(extracted)
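The three hunks above add an ``r`` prefix to silence flake8's invalid-escape-sequence warning (W605); the pattern's behavior is unchanged. A quick stdlib check on the same example strings:

```python
import re

# Raw string: the backslashes in \d reach the regex engine untouched,
# with no DeprecationWarning about an invalid escape sequence.
pattern = re.compile(r'.*(\d\d).*')
digits = [pattern.match(s).group(1) for s in ('number 10', '12 eggs')]
```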

doc/source/whatsnew/v0.24.0.rst (+42 -48)
@@ -252,7 +252,7 @@ convenient way to apply users' predefined styling functions, and can help reduce

 .. ipython:: python

-   df = pandas.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})
+   df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})

    def format_and_align(styler):
        return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
@@ -282,8 +282,7 @@ See the :ref:`Merge, join, and concatenate

    left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
-                        'B': ['B0', 'B1', 'B2']},
-                       index=index_left)
+                        'B': ['B0', 'B1', 'B2']}, index=index_left)


    index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
@@ -292,11 +291,9 @@ See the :ref:`Merge, join, and concatenate

    right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
-                         'D': ['D0', 'D1', 'D2', 'D3']},
-                        index=index_right)
+                         'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)

-
-   left.join(right)
+   left.join(right)

 For earlier versions this can be done using the following.

@@ -441,26 +438,26 @@ Previous Behavior on Windows:

 .. code-block:: ipython

-   In [1]: data = pd.DataFrame({
-      ...: "string_with_lf": ["a\nbc"],
-      ...: "string_with_crlf": ["a\r\nbc"]
-      ...: })
+   In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
+      ...:                      "string_with_crlf": ["a\r\nbc"]})

-   In [2]: # When passing file PATH to to_csv, line_terminator does not work, and csv is saved with '\r\n'.
-      ...: # Also, this converts all '\n's in the data to '\r\n'.
-      ...: data.to_csv("test.csv", index=False, line_terminator='\n')
+   In [2]: # When passing file PATH to to_csv,
+      ...: # line_terminator does not work, and csv is saved with '\r\n'.
+      ...: # Also, this converts all '\n's in the data to '\r\n'.
+      ...: data.to_csv("test.csv", index=False, line_terminator='\n')

    In [3]: with open("test.csv", mode='rb') as f:
-      ...:     print(f.read())
-   b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'
+      ...:     print(f.read())
+   Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'

-   In [4]: # When passing file OBJECT with newline option to to_csv, line_terminator works.
-      ...: with open("test2.csv", mode='w', newline='\n') as f:
-      ...:     data.to_csv(f, index=False, line_terminator='\n')
+   In [4]: # When passing file OBJECT with newline option to
+      ...: # to_csv, line_terminator works.
+      ...: with open("test2.csv", mode='w', newline='\n') as f:
+      ...:     data.to_csv(f, index=False, line_terminator='\n')

    In [5]: with open("test2.csv", mode='rb') as f:
-      ...:     print(f.read())
-   b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
+      ...:     print(f.read())
+   Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'


 New Behavior on Windows:
@@ -471,16 +468,14 @@ New Behavior on Windows:

 .. code-block:: ipython

-   In [1]: data = pd.DataFrame({
-      ...: "string_with_lf": ["a\nbc"],
-      ...: "string_with_crlf": ["a\r\nbc"]
-      ...: })
+   In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
+      ...:                      "string_with_crlf": ["a\r\nbc"]})

    In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')

    In [3]: with open("test.csv", mode='rb') as f:
-      ...:     print(f.read())
-   b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
+      ...:     print(f.read())
+   Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'


 - On Windows, the value of ``os.linesep`` is ``'\r\n'``,
@@ -489,34 +484,30 @@ New Behavior on Windows:

 .. code-block:: ipython

-   In [1]: data = pd.DataFrame({
-      ...: "string_with_lf": ["a\nbc"],
-      ...: "string_with_crlf": ["a\r\nbc"]
-      ...: })
+   In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
+      ...:                      "string_with_crlf": ["a\r\nbc"]})

    In [2]: data.to_csv("test.csv", index=False)

    In [3]: with open("test.csv", mode='rb') as f:
-      ...:     print(f.read())
-   b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
+      ...:     print(f.read())
+   Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'


 - For files objects, specifying ``newline`` is not sufficient to set the line terminator.
   You must pass in the ``line_terminator`` explicitly, even in this case.

 .. code-block:: ipython

-   In [1]: data = pd.DataFrame({
-      ...: "string_with_lf": ["a\nbc"],
-      ...: "string_with_crlf": ["a\r\nbc"]
-      ...: })
+   In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
+      ...:                      "string_with_crlf": ["a\r\nbc"]})

    In [2]: with open("test2.csv", mode='w', newline='\n') as f:
-      ...:     data.to_csv(f, index=False)
+      ...:     data.to_csv(f, index=False)

    In [3]: with open("test2.csv", mode='rb') as f:
-      ...:     print(f.read())
-   b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
+      ...:     print(f.read())
+   Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'

 .. _whatsnew_0240.api.timezone_offset_parsing:
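The examples in these hunks hinge on Python's text-mode newline translation, which is what ``to_csv`` runs into on Windows. A stdlib-only sketch of how the ``newline`` argument to ``open`` decides what a written ``'\n'`` becomes on disk (the translation is forced explicitly here so the result is platform-independent):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.csv')

# newline='\r\n' translates every written '\n' to '\r\n', mimicking the
# default Windows text-mode behavior that produced the doubled '\r\r\n'
# in the "Previous Behavior" example above.
with open(path, 'w', newline='\r\n') as f:
    f.write('a,b\n1,2\n')

# Reading in binary mode shows the bytes actually stored on disk.
with open(path, 'rb') as f:
    raw = f.read()
```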

@@ -563,7 +554,8 @@ Parsing datetime strings with different UTC offsets will now create an Index of

 .. ipython:: python

-   idx = pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
+   idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
+                         "2015-11-18 16:30:00+06:30"])
    idx
    idx[0]
    idx[1]
@@ -573,7 +565,8 @@ that the dates have been converted to UTC

 .. ipython:: python

-   pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"], utc=True)
+   pd.to_datetime(["2015-11-18 15:30:00+05:30",
+                   "2015-11-18 16:30:00+06:30"], utc=True)

 .. _whatsnew_0240.api_breaking.calendarday:
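The wrapped ``to_datetime`` calls parse two different fixed offsets; that both timestamps denote the same instant can be checked with the stdlib alone. This only mirrors the conversion, not ``to_datetime`` itself (``datetime.fromisoformat`` accepts these offset strings on Python 3.7+):

```python
from datetime import datetime, timezone

stamps = ["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"]
# Parse each fixed-offset timestamp, then convert both to UTC.
utc = [datetime.fromisoformat(s).astimezone(timezone.utc) for s in stamps]
# Both offsets resolve to the same UTC instant: 2015-11-18 10:00:00+00:00.
```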

@@ -845,7 +838,7 @@ Previous Behavior:
    In [4]: df = pd.DataFrame(arr)

    In [5]: df == arr[[0], :]
-      ...: # comparison previously broadcast where arithmetic would raise
+       ...: # comparison previously broadcast where arithmetic would raise
    Out[5]:
          0     1
    0  True  True
@@ -856,8 +849,8 @@ Previous Behavior:
    ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

    In [7]: df == (1, 2)
-      ...: # length matches number of columns;
-      ...: # comparison previously raised where arithmetic would broadcast
+       ...: # length matches number of columns;
+       ...: # comparison previously raised where arithmetic would broadcast
    ...
    ValueError: Invalid broadcasting comparison [(1, 2)] with block values
    In [8]: df + (1, 2)
@@ -868,8 +861,8 @@ Previous Behavior:
    2  5  7

    In [9]: df == (1, 2, 3)
-      ...: # length matches number of rows
-      ...: # comparison previously broadcast where arithmetic would raise
+       ...: # length matches number of rows
+       ...: # comparison previously broadcast where arithmetic would raise
    Out[9]:
           0      1
    0  False   True
@@ -1032,7 +1025,8 @@ Current Behavior:

 .. code-block:: ipython

-   In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2], 'b': [3, 3, 4, 4, 4],
+   In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2],
+      ...:                    'b': [3, 3, 4, 4, 4],
       ...:                    'c': [1, 1, np.nan, 1, 1]})
    In [4]: pd.crosstab(df.a, df.b, normalize='columns')
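The wrapped call computes a cross tabulation normalized within each column; the normalization step itself is just dividing each column's counts by that column's total. A plain-Python sketch of that step on the same ``a`` and ``b`` data (it ignores the NaN handling in ``c`` that motivated the original change):

```python
from collections import Counter

a = [1, 2, 2, 2, 2]
b = [3, 3, 4, 4, 4]

counts = Counter(zip(a, b))   # joint (a, b) pair counts
col_totals = Counter(b)       # per-column totals
# Divide each cell by its column total, as normalize='columns' does.
normalized = {pair: n / col_totals[pair[1]] for pair, n in counts.items()}
```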

setup.cfg (-3)

@@ -71,11 +71,8 @@ exclude =
    doc/source/whatsnew/v0.19.0.rst
    doc/source/whatsnew/v0.20.0.rst
    doc/source/whatsnew/v0.21.0.rst
-   doc/source/whatsnew/v0.22.0.rst
-   doc/source/whatsnew/v0.23.0.rst
    doc/source/whatsnew/v0.23.1.rst
    doc/source/whatsnew/v0.23.2.rst
-   doc/source/whatsnew/v0.24.0.rst
    doc/source/basics.rst
    doc/source/contributing_docstring.rst
    doc/source/enhancingperf.rst

0 commit comments
