
Commit f2dba53

Format all rst files
1 parent 6af42d6 commit f2dba53


63 files changed: +1551 −1354 lines
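The hunks below are the kind of change a Black-style formatter produces for the Python code blocks in these .rst files: single quotes become double quotes, calls are re-wrapped to the line-length limit, and blank lines around definitions are normalized. As a minimal sketch (assuming the black package is installed; the snippet is illustrative and not part of the commit), one of the changes can be reproduced with Black's Python API:

    # Illustrative only: re-format one of the old snippets with Black.
    import black

    old = "deprecate('old_func', 'new_func', '1.1.0')\n"
    new = black.format_str(old, mode=black.FileMode())
    print(new)  # deprecate("old_func", "new_func", "1.1.0")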

doc/source/development/contributing_codebase.rst (+27 −14)

@@ -118,7 +118,7 @@ the ``pandas.util._decorators.deprecate``:

    from pandas.util._decorators import deprecate

-   deprecate('old_func', 'new_func', '1.1.0')
+   deprecate("old_func", "new_func", "1.1.0")

Otherwise, you need to do it manually:

@@ -135,7 +135,7 @@ Otherwise, you need to do it manually:
        Use new_func instead.
        """
        warnings.warn(
-           'Use new_func instead.',
+           "Use new_func instead.",
            FutureWarning,
            stacklevel=find_stack_level(),
        )

@@ -179,6 +179,7 @@ The appropriate way to annotate this would be as follows

    str_type = str

+
    class SomeClass2:
        str: str_type = None

@@ -190,6 +191,7 @@ In some cases you may be tempted to use ``cast`` from the typing module when you

    from pandas.core.dtypes.common import is_number

+
    def cannot_infer_bad(obj: Union[str, int, float]):

        if is_number(obj):

@@ -222,8 +224,8 @@ For example, quite a few functions in pandas accept a ``dtype`` argument. This c

    from pandas._typing import Dtype

-   def as_type(dtype: Dtype) -> ...:
-       ...
+
+   def as_type(dtype: Dtype) -> ...: ...

This module will ultimately house types for repeatedly used concepts like "path-like", "array-like", "numeric", etc... and can also hold aliases for commonly appearing parameters like ``axis``. Development of this module is active so be sure to refer to the source for the most up to date list of available types.

@@ -428,6 +430,7 @@ be located.
    import pandas as pd
    import pandas._testing as tm

+
    def test_getitem_listlike_of_ints():
        ser = pd.Series(range(5))

@@ -634,25 +637,29 @@ as a comment to a new test.
    import pandas as pd


-   @pytest.mark.parametrize('dtype', ['int8', 'int16', 'int32', 'int64'])
+   @pytest.mark.parametrize("dtype", ["int8", "int16", "int32", "int64"])
    def test_dtypes(dtype):
        assert str(np.dtype(dtype)) == dtype


    @pytest.mark.parametrize(
-       'dtype', ['float32', pytest.param('int16', marks=pytest.mark.skip),
-                 pytest.param('int32', marks=pytest.mark.xfail(
-                     reason='to show how it works'))])
+       "dtype",
+       [
+           "float32",
+           pytest.param("int16", marks=pytest.mark.skip),
+           pytest.param("int32", marks=pytest.mark.xfail(reason="to show how it works")),
+       ],
+   )
    def test_mark(dtype):
-       assert str(np.dtype(dtype)) == 'float32'
+       assert str(np.dtype(dtype)) == "float32"


    @pytest.fixture
    def series():
        return pd.Series([1, 2, 3])


-   @pytest.fixture(params=['int8', 'int16', 'int32', 'int64'])
+   @pytest.fixture(params=["int8", "int16", "int32", "int64"])
    def dtype(request):
        return request.param

@@ -721,10 +728,16 @@ for details <https://hypothesis.readthedocs.io/en/latest/index.html>`_.
    import json
    from hypothesis import given, strategies as st

-   any_json_value = st.deferred(lambda: st.one_of(
-       st.none(), st.booleans(), st.floats(allow_nan=False), st.text(),
-       st.lists(any_json_value), st.dictionaries(st.text(), any_json_value)
-   ))
+   any_json_value = st.deferred(
+       lambda: st.one_of(
+           st.none(),
+           st.booleans(),
+           st.floats(allow_nan=False),
+           st.text(),
+           st.lists(any_json_value),
+           st.dictionaries(st.text(), any_json_value),
+       )
+   )


    @given(value=any_json_value)
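The re-formatted recursive strategy above can be exercised end to end. A self-contained sketch follows; the round-trip property and the test name are illustrative and not part of the commit:

    import json

    from hypothesis import given, strategies as st

    # Recursive strategy for arbitrary JSON values, as in the hunk above.
    any_json_value = st.deferred(
        lambda: st.one_of(
            st.none(),
            st.booleans(),
            st.floats(allow_nan=False),
            st.text(),
            st.lists(any_json_value),
            st.dictionaries(st.text(), any_json_value),
        )
    )


    @given(value=any_json_value)
    def test_json_roundtrip(value):
        # Serializing and re-parsing should give back an equivalent value.
        assert json.loads(json.dumps(value)) == value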

doc/source/development/contributing_docstring.rst (+6 −11)

@@ -137,7 +137,6 @@ backticks. The following are considered inline code:
.. code-block:: python

    def func():
-
        """Some function.

        With several mistakes in the docstring.

@@ -297,7 +296,7 @@ would be used, then we will specify "str, int or None, default None".
.. code-block:: python

    class Series:
-       def plot(self, kind, color='blue', **kwargs):
+       def plot(self, kind, color="blue", **kwargs):
            """
            Generate a plot.

@@ -463,6 +462,7 @@ With more than one value:

    import string

+
    def random_letters():
        """
        Generate and return a sequence of random letters.

@@ -478,8 +478,7 @@ With more than one value:
        String of random letters.
        """
        length = np.random.randint(1, 10)
-       letters = ''.join(np.random.choice(string.ascii_lowercase)
-                         for i in range(length))
+       letters = "".join(np.random.choice(string.ascii_lowercase) for i in range(length))
        return length, letters

If the method yields its value:

@@ -737,7 +736,6 @@ positional arguments ``head(3)``.
        """
        pass

-
    def fillna(self, value):
        """
        Replace missing values by ``value``.

@@ -954,14 +952,12 @@ substitute the class names in this docstring.

    class ChildA(Parent):
        @doc(Parent.my_function, klass="ChildA")
-       def my_function(self):
-           ...
+       def my_function(self): ...


    class ChildB(Parent):
        @doc(Parent.my_function, klass="ChildB")
-       def my_function(self):
-           ...
+       def my_function(self): ...

The resulting docstrings are

@@ -987,8 +983,7 @@ You can substitute and append in one shot with something like
.. code-block:: python

    @doc(template, **_shared_doc_kwargs)
-   def my_function(self):
-       ...
+   def my_function(self): ...

where ``template`` may come from a module-level ``_shared_docs`` dictionary
mapping function names to docstrings. Wherever possible, we prefer using
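The one-line ``def my_function(self): ...`` stubs above rely on the ``@doc`` templating decorator. A minimal runnable sketch of that mechanism follows; the ``Parent`` template mirrors the surrounding guide, and the ``print`` call is illustrative:

    from pandas.util._decorators import doc


    class Parent:
        @doc(klass="Parent")
        def my_function(self):
            """Apply my function to {klass}."""
            ...


    class ChildA(Parent):
        @doc(Parent.my_function, klass="ChildA")
        def my_function(self): ...


    # The shared template is re-rendered with the new substitution.
    print(ChildA.my_function.__doc__)  # expected: "Apply my function to ChildA."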

doc/source/development/extending.rst (+1 −0)

@@ -526,6 +526,7 @@ The ``__pandas_priority__`` of :class:`DataFrame`, :class:`Series`, and :class:`
        # return `self` and not the addition for simplicity
        return self

+
    custom = CustomList()
    series = pd.Series([1, 2, 3])
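The fragment above is part of the ``__pandas_priority__`` example. A hedged, self-contained version follows; the priority value 5000 and the closing assertion are assumptions based on the surrounding guide, and the attribute only has an effect on pandas versions that implement it:

    import pandas as pd


    class CustomList(list):
        # Assumption: 5000 is higher than the priority of Series, so pandas
        # defers to this class in binary operations.
        __pandas_priority__ = 5000

        def __radd__(self, other):
            # return `self` and not the addition for simplicity
            return self


    custom = CustomList()
    series = pd.Series([1, 2, 3])
    assert series + custom is custom  # Series defers, __radd__ returns `custom`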

doc/source/development/maintaining.rst (+1 −0)

@@ -144,6 +144,7 @@ create a file ``t.py`` in your pandas directory, which contains
.. code-block:: python

    import pandas as pd
+
    assert pd.Series([1, 1]).sum() == 2

and then run::

doc/source/getting_started/comparison/comparison_with_spreadsheets.rst (+4 −5)

@@ -427,9 +427,7 @@ The equivalent in pandas:

.. ipython:: python

-   pd.pivot_table(
-       tips, values="tip", index=["size"], columns=["sex"], aggfunc=np.average
-   )
+   pd.pivot_table(tips, values="tip", index=["size"], columns=["sex"], aggfunc=np.average)


Adding a row

@@ -440,8 +438,9 @@ Assuming we are using a :class:`~pandas.RangeIndex` (numbered ``0``, ``1``, etc.
.. ipython:: python

    df
-   new_row = pd.DataFrame([["E", 51, True]],
-                          columns=["class", "student_count", "all_pass"])
+   new_row = pd.DataFrame(
+       [["E", 51, True]], columns=["class", "student_count", "all_pass"]
+   )
    pd.concat([df, new_row])

doc/source/getting_started/comparison/comparison_with_sql.rst (+3 −9)

@@ -331,9 +331,7 @@ UNION
    df1 = pd.DataFrame(
        {"city": ["Chicago", "San Francisco", "New York City"], "rank": range(1, 4)}
    )
-   df2 = pd.DataFrame(
-       {"city": ["Chicago", "Boston", "Los Angeles"], "rank": [1, 4, 5]}
-   )
+   df2 = pd.DataFrame({"city": ["Chicago", "Boston", "Los Angeles"], "rank": [1, 4, 5]})

.. code-block:: sql

@@ -433,9 +431,7 @@ Top n rows per group

    (
        tips.assign(
-           rn=tips.sort_values(["total_bill"], ascending=False)
-           .groupby(["day"])
-           .cumcount()
+           rn=tips.sort_values(["total_bill"], ascending=False).groupby(["day"]).cumcount()
            + 1
        )
        .query("rn < 3")

@@ -448,9 +444,7 @@ the same using ``rank(method='first')`` function

    (
        tips.assign(
-           rnk=tips.groupby(["day"])["total_bill"].rank(
-               method="first", ascending=False
-           )
+           rnk=tips.groupby(["day"])["total_bill"].rank(method="first", ascending=False)
        )
        .query("rnk < 3")
        .sort_values(["day", "rnk"])
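A runnable sketch of the "top N rows per group" pattern from the hunk above, on a small made-up frame (the page itself uses the ``tips`` dataset):

    import pandas as pd

    tips = pd.DataFrame(
        {
            "day": ["Sun", "Sun", "Sun", "Sat", "Sat"],
            "total_bill": [25.0, 18.0, 31.0, 12.0, 20.0],
        }
    )

    # Rank rows within each day by total_bill (descending) and keep the top 2.
    top2 = (
        tips.assign(
            rn=tips.sort_values(["total_bill"], ascending=False)
            .groupby(["day"])
            .cumcount()
            + 1
        )
        .query("rn < 3")
        .sort_values(["day", "rn"])
    )
    print(top2)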

doc/source/getting_started/comparison/includes/time_date.rst (+4 −6)

@@ -5,13 +5,11 @@
    tips["date1_year"] = tips["date1"].dt.year
    tips["date2_month"] = tips["date2"].dt.month
    tips["date1_next"] = tips["date1"] + pd.offsets.MonthBegin()
-   tips["months_between"] = tips["date2"].dt.to_period("M") - tips[
-       "date1"
-   ].dt.to_period("M")
+   tips["months_between"] = tips["date2"].dt.to_period("M") - tips["date1"].dt.to_period(
+       "M"
+   )

-   tips[
-       ["date1", "date2", "date1_year", "date2_month", "date1_next", "months_between"]
-   ]
+   tips[["date1", "date2", "date1_year", "date2_month", "date1_next", "months_between"]]

.. ipython:: python
   :suppress:
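The ``months_between`` line above relies on period arithmetic: subtracting two monthly periods gives a month offset. A tiny sketch with made-up dates (illustrative only):

    import pandas as pd

    date1 = pd.Series(pd.to_datetime(["2013-01-15"]))
    date2 = pd.Series(pd.to_datetime(["2013-04-02"]))
    # Each element of the result is a 3-month offset.
    months_between = date2.dt.to_period("M") - date1.dt.to_period("M")
    print(months_between)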

doc/source/getting_started/install.rst (+1 −0)

@@ -108,6 +108,7 @@ obtain these directories with.
.. code-block:: python

    import sys
+
    sys.path

One way you could be encountering this error is if you have multiple Python installations on your system

doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst (+1 −1)

@@ -115,7 +115,7 @@ I want to sort the Titanic data according to the cabin class and age in descendi

.. ipython:: python

-   titanic.sort_values(by=['Pclass', 'Age'], ascending=False).head()
+   titanic.sort_values(by=["Pclass", "Age"], ascending=False).head()

With :meth:`DataFrame.sort_values`, the rows in the table are sorted according to the
defined column(s). The index will follow the row order.

doc/source/getting_started/intro_tutorials/08_combine_dataframes.rst (+10 −13)

@@ -40,10 +40,8 @@ Westminster* in respectively Paris, Antwerp and London.

.. ipython:: python

-   air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv",
-                                 parse_dates=True)
-   air_quality_no2 = air_quality_no2[["date.utc", "location",
-                                      "parameter", "value"]]
+   air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv", parse_dates=True)
+   air_quality_no2 = air_quality_no2[["date.utc", "location", "parameter", "value"]]
    air_quality_no2.head()

.. raw:: html

@@ -75,10 +73,8 @@ Westminster* in respectively Paris, Antwerp and London.

.. ipython:: python

-   air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv",
-                                  parse_dates=True)
-   air_quality_pm25 = air_quality_pm25[["date.utc", "location",
-                                        "parameter", "value"]]
+   air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv", parse_dates=True)
+   air_quality_pm25 = air_quality_pm25[["date.utc", "location", "parameter", "value"]]
    air_quality_pm25.head()

.. raw:: html

@@ -123,9 +119,9 @@ concatenated tables to verify the operation:

.. ipython:: python

-   print('Shape of the ``air_quality_pm25`` table: ', air_quality_pm25.shape)
-   print('Shape of the ``air_quality_no2`` table: ', air_quality_no2.shape)
-   print('Shape of the resulting ``air_quality`` table: ', air_quality.shape)
+   print("Shape of the ``air_quality_pm25`` table: ", air_quality_pm25.shape)
+   print("Shape of the ``air_quality_no2`` table: ", air_quality_no2.shape)
+   print("Shape of the resulting ``air_quality`` table: ", air_quality.shape)

Hence, the resulting table has 3178 = 1110 + 2068 rows.

@@ -265,8 +261,9 @@ Add the parameters' full description and name, provided by the parameters metada

.. ipython:: python

-   air_quality = pd.merge(air_quality, air_quality_parameters,
-                          how='left', left_on='parameter', right_on='id')
+   air_quality = pd.merge(
+       air_quality, air_quality_parameters, how="left", left_on="parameter", right_on="id"
+   )
    air_quality.head()

Compared to the previous example, there is no common column name.

doc/source/getting_started/intro_tutorials/09_timeseries.rst (+1 −2)

@@ -174,8 +174,7 @@ What is the average :math:`NO_2` concentration for each day of the week for each

.. ipython:: python

-   air_quality.groupby(
-       [air_quality["datetime"].dt.weekday, "location"])["value"].mean()
+   air_quality.groupby([air_quality["datetime"].dt.weekday, "location"])["value"].mean()

Remember the split-apply-combine pattern provided by ``groupby`` from the
:ref:`tutorial on statistics calculation <10min_tut_06_stats>`?

doc/source/user_guide/10min.rst (+2 −2)

@@ -550,8 +550,8 @@ Stack
.. ipython:: python

    arrays = [
-      ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
-      ["one", "two", "one", "two", "one", "two", "one", "two"],
+       ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
+       ["one", "two", "one", "two", "one", "two", "one", "two"],
    ]
    index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
    df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=["A", "B"])

(The removed and added lines differ only in leading whitespace.)

doc/source/user_guide/advanced.rst (+4 −6)

@@ -621,7 +621,7 @@ return a copy of the data rather than a view:
    )
    dfm = dfm.set_index(["jim", "joe"])
    dfm
-   dfm.loc[(1, 'z')]
+   dfm.loc[(1, "z")]

.. _advanced.unsorted:

@@ -630,7 +630,7 @@ Furthermore, if you try to index something that is not fully lexsorted, this can
.. ipython:: python
   :okexcept:

-   dfm.loc[(0, 'y'):(1, 'z')]
+   dfm.loc[(0, "y"):(1, "z")]

The :meth:`~MultiIndex.is_monotonic_increasing` method on a ``MultiIndex`` shows if the
index is sorted:

@@ -792,9 +792,7 @@ values **not** in the categories, similarly to how you can reindex **any** panda

.. ipython:: python

-   df3 = pd.DataFrame(
-       {"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")}
-   )
+   df3 = pd.DataFrame({"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")})
    df3 = df3.set_index("B")
    df3

@@ -1096,7 +1094,7 @@ index can be somewhat complicated. For example, the following does not work:
.. ipython:: python
   :okexcept:

-   s.loc['c':'e' + 1]
+   s.loc["c" : "e" + 1]

A very common use case is to limit a time series to start and end at two
specific dates. To enable this, we made the design choice to make label-based
