CI: Format rst code blocks with blacken-docs #57401

Closed
wants to merge 3 commits into from
8 changes: 8 additions & 0 deletions .pre-commit-config.yaml
@@ -122,6 +122,14 @@ repos:
files: ^pandas/_libs/src|^pandas/_libs/include
args: [-i]
types_or: [c, c++]
- repo: https://github.com/adamchainz/blacken-docs
rev: 1.16.0
hooks:
- id: blacken-docs
additional_dependencies:
- black==23.12.1
types_or: [rst]
args: ["--skip-errors", "--skip-string-normalization"]
- repo: local
hooks:
- id: pyright
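For context, the hook added above works by locating Python code blocks inside documentation files and running Black over just those snippets. A rough stdlib-only sketch of the extraction step (a hypothetical simplification, not blacken-docs' real parser, which also handles directive options, doctests, and re-indenting the formatted result) might look like:

```python
import re

RST = """\
Intro text.

.. code-block:: python

    def f(x):
        return x

More text.
"""

# Capture the indented body that follows each ``.. code-block:: python``
# directive; blacken-docs does (roughly) this before handing the code to Black.
PATTERN = re.compile(
    r"^\.\. code-block:: python\n\n((?:    .*\n|\n)+)",
    re.MULTILINE,
)

blocks = [m.group(1) for m in PATTERN.finditer(RST)]
print(len(blocks))  # 1
```

The `--skip-errors` flag in the config above tells blacken-docs to leave blocks that Black cannot parse unchanged instead of failing the hook.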
3 changes: 3 additions & 0 deletions doc/source/conf.py
@@ -133,6 +133,9 @@
numpydoc_show_inherited_class_members = False
numpydoc_attributes_as_param_list = False

# IPython
ipython_warning_is_error = False

# matplotlib plot directive
plot_include_source = True
plot_formats = [("png", 90)]
30 changes: 21 additions & 9 deletions doc/source/development/contributing_codebase.rst
@@ -179,6 +179,7 @@ The appropriate way to annotate this would be as follows

str_type = str


class SomeClass2:
str: str_type = None

@@ -190,8 +191,8 @@ In some cases you may be tempted to use ``cast`` from the typing module when you

from pandas.core.dtypes.common import is_number

def cannot_infer_bad(obj: Union[str, int, float]):

def cannot_infer_bad(obj: Union[str, int, float]):
if is_number(obj):
...
else: # Reasonably only str objects would reach this but...
@@ -203,7 +204,6 @@ The limitation here is that while a human can reasonably understand that ``is_nu
.. code-block:: python

def cannot_infer_good(obj: Union[str, int, float]):

if isinstance(obj, str):
return obj.upper()
else:
@@ -222,6 +222,7 @@ For example, quite a few functions in pandas accept a ``dtype`` argument. This c

from pandas._typing import Dtype


def as_type(dtype: Dtype) -> ...:
...

@@ -428,6 +429,7 @@ be located.
import pandas as pd
import pandas._testing as tm


def test_getitem_listlike_of_ints():
ser = pd.Series(range(5))

@@ -641,9 +643,13 @@ as a comment to a new test.


@pytest.mark.parametrize(
'dtype', ['float32', pytest.param('int16', marks=pytest.mark.skip),
pytest.param('int32', marks=pytest.mark.xfail(
reason='to show how it works'))])
'dtype',
[
'float32',
pytest.param('int16', marks=pytest.mark.skip),
pytest.param('int32', marks=pytest.mark.xfail(reason='to show how it works')),
],
)
def test_mark(dtype):
assert str(np.dtype(dtype)) == 'float32'

@@ -722,10 +728,16 @@ for details <https://hypothesis.readthedocs.io/en/latest/index.html>`_.
import json
from hypothesis import given, strategies as st

any_json_value = st.deferred(lambda: st.one_of(
st.none(), st.booleans(), st.floats(allow_nan=False), st.text(),
st.lists(any_json_value), st.dictionaries(st.text(), any_json_value)
))
any_json_value = st.deferred(
lambda: st.one_of(
st.none(),
st.booleans(),
st.floats(allow_nan=False),
st.text(),
st.lists(any_json_value),
st.dictionaries(st.text(), any_json_value),
)
)


@given(value=any_json_value)
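The deferred strategy in the hunk above generates arbitrarily nested JSON values. A stdlib-only analogue (a hypothetical sketch using `random`, without Hypothesis's shrinking or coverage guarantees) could be:

```python
import json
import random


def any_json_value(depth=0):
    # Mirror the strategy above: None, bool, float, str, list, or dict,
    # recursing at most a few levels deep to guarantee termination.
    leaves = [None, True, False, 1.5, "text"]
    if depth >= 3 or random.random() < 0.5:
        return random.choice(leaves)
    if random.random() < 0.5:
        return [any_json_value(depth + 1) for _ in range(2)]
    return {str(i): any_json_value(depth + 1) for i in range(2)}


# The same property the documented test checks: round-tripping through
# json.dumps/json.loads preserves the value.
value = any_json_value()
assert json.loads(json.dumps(value)) == value
```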
8 changes: 2 additions & 6 deletions doc/source/development/contributing_docstring.rst
@@ -137,7 +136,6 @@ backticks. The following are considered inline code:
.. code-block:: python

def func():

"""Some function.

With several mistakes in the docstring.
@@ -463,6 +462,7 @@ With more than one value:

import string


def random_letters():
"""
Generate and return a sequence of random letters.
@@ -478,8 +478,7 @@ With more than one value:
String of random letters.
"""
length = np.random.randint(1, 10)
letters = ''.join(np.random.choice(string.ascii_lowercase)
for i in range(length))
letters = ''.join(np.random.choice(string.ascii_lowercase) for i in range(length))
return length, letters

If the method yields its value:
@@ -628,7 +627,6 @@ A simple example could be:
.. code-block:: python

class Series:

def head(self, n=5):
"""
Return the first elements of the Series.
@@ -724,7 +722,6 @@ positional arguments ``head(3)``.
.. code-block:: python

class Series:

def mean(self):
"""
Compute the mean of the input.
@@ -737,7 +734,6 @@ positional arguments ``head(3)``.
"""
pass


def fillna(self, value):
"""
Replace missing values by ``value``.
2 changes: 1 addition & 1 deletion doc/source/development/extending.rst
@@ -408,7 +408,6 @@ Below is an example to define two original properties, "internal_cache" as a tem
.. code-block:: python

class SubclassedDataFrame2(pd.DataFrame):

# temporary properties
_internal_names = pd.DataFrame._internal_names + ["internal_cache"]
_internal_names_set = set(_internal_names)
@@ -526,6 +525,7 @@ The ``__pandas_priority__`` of :class:`DataFrame`, :class:`Series`, and :class:`
# return `self` and not the addition for simplicity
return self


custom = CustomList()
series = pd.Series([1, 2, 3])

1 change: 1 addition & 0 deletions doc/source/development/maintaining.rst
@@ -144,6 +144,7 @@ create a file ``t.py`` in your pandas directory, which contains
.. code-block:: python

import pandas as pd

assert pd.Series([1, 1]).sum() == 2

and then run::
@@ -427,9 +427,7 @@ The equivalent in pandas:

.. ipython:: python

pd.pivot_table(
tips, values="tip", index=["size"], columns=["sex"], aggfunc=np.average
)
pd.pivot_table(tips, values="tip", index=["size"], columns=["sex"], aggfunc=np.average)


Adding a row
@@ -440,8 +438,9 @@ Assuming we are using a :class:`~pandas.RangeIndex` (numbered ``0``, ``1``, etc.
.. ipython:: python

df
new_row = pd.DataFrame([["E", 51, True]],
columns=["class", "student_count", "all_pass"])
new_row = pd.DataFrame(
[["E", 51, True]], columns=["class", "student_count", "all_pass"]
)
pd.concat([df, new_row])


12 changes: 3 additions & 9 deletions doc/source/getting_started/comparison/comparison_with_sql.rst
@@ -331,9 +331,7 @@ UNION
df1 = pd.DataFrame(
{"city": ["Chicago", "San Francisco", "New York City"], "rank": range(1, 4)}
)
df2 = pd.DataFrame(
{"city": ["Chicago", "Boston", "Los Angeles"], "rank": [1, 4, 5]}
)
df2 = pd.DataFrame({"city": ["Chicago", "Boston", "Los Angeles"], "rank": [1, 4, 5]})

.. code-block:: sql

@@ -433,9 +431,7 @@ Top n rows per group

(
tips.assign(
rn=tips.sort_values(["total_bill"], ascending=False)
.groupby(["day"])
.cumcount()
rn=tips.sort_values(["total_bill"], ascending=False).groupby(["day"]).cumcount()
+ 1
)
.query("rn < 3")
@@ -448,9 +444,7 @@ the same using ``rank(method='first')`` function

(
tips.assign(
rnk=tips.groupby(["day"])["total_bill"].rank(
method="first", ascending=False
)
rnk=tips.groupby(["day"])["total_bill"].rank(method="first", ascending=False)
)
.query("rnk < 3")
.sort_values(["day", "rnk"])
10 changes: 4 additions & 6 deletions doc/source/getting_started/comparison/includes/time_date.rst
@@ -5,13 +5,11 @@
tips["date1_year"] = tips["date1"].dt.year
tips["date2_month"] = tips["date2"].dt.month
tips["date1_next"] = tips["date1"] + pd.offsets.MonthBegin()
tips["months_between"] = tips["date2"].dt.to_period("M") - tips[
"date1"
].dt.to_period("M")
tips["months_between"] = tips["date2"].dt.to_period("M") - tips["date1"].dt.to_period(
"M"
)

tips[
["date1", "date2", "date1_year", "date2_month", "date1_next", "months_between"]
]
tips[["date1", "date2", "date1_year", "date2_month", "date1_next", "months_between"]]

.. ipython:: python
:suppress:
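The ``to_period("M")`` subtraction reformatted above computes a difference in whole calendar months, ignoring the day component. A plain-Python sketch of the same idea (``months_between`` is a hypothetical helper, not pandas API):

```python
from datetime import date


def months_between(d1: date, d2: date) -> int:
    # Difference in calendar months, day-of-month ignored -- the same idea as
    # tips["date2"].dt.to_period("M") - tips["date1"].dt.to_period("M").
    return (d2.year - d1.year) * 12 + (d2.month - d1.month)


print(months_between(date(2023, 11, 30), date(2024, 1, 1)))  # 2
```

Note that only the year and month fields matter, so Nov 30 to Jan 1 still counts as two months, just as period subtraction does.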
1 change: 1 addition & 0 deletions doc/source/getting_started/install.rst
@@ -108,6 +108,7 @@ obtain these directories with.
.. code-block:: python

import sys

sys.path

One way you could be encountering this error is if you have multiple Python installations on your system
@@ -40,10 +40,8 @@ Westminster* in respectively Paris, Antwerp and London.

.. ipython:: python

air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv",
parse_dates=True)
air_quality_no2 = air_quality_no2[["date.utc", "location",
"parameter", "value"]]
air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv", parse_dates=True)
air_quality_no2 = air_quality_no2[["date.utc", "location", "parameter", "value"]]
air_quality_no2.head()

.. raw:: html
@@ -75,10 +73,8 @@ Westminster* in respectively Paris, Antwerp and London.

.. ipython:: python

air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv",
parse_dates=True)
air_quality_pm25 = air_quality_pm25[["date.utc", "location",
"parameter", "value"]]
air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv", parse_dates=True)
air_quality_pm25 = air_quality_pm25[["date.utc", "location", "parameter", "value"]]
air_quality_pm25.head()

.. raw:: html
@@ -265,8 +261,9 @@ Add the parameters' full description and name, provided by the parameters metada

.. ipython:: python

air_quality = pd.merge(air_quality, air_quality_parameters,
how='left', left_on='parameter', right_on='id')
air_quality = pd.merge(
air_quality, air_quality_parameters, how='left', left_on='parameter', right_on='id'
)
air_quality.head()

Compared to the previous example, there is no common column name.
3 changes: 1 addition & 2 deletions doc/source/getting_started/intro_tutorials/09_timeseries.rst
@@ -174,8 +174,7 @@ What is the average :math:`NO_2` concentration for each day of the week for each

.. ipython:: python

air_quality.groupby(
[air_quality["datetime"].dt.weekday, "location"])["value"].mean()
air_quality.groupby([air_quality["datetime"].dt.weekday, "location"])["value"].mean()

Remember the split-apply-combine pattern provided by ``groupby`` from the
:ref:`tutorial on statistics calculation <10min_tut_06_stats>`?
4 changes: 2 additions & 2 deletions doc/source/user_guide/10min.rst
@@ -550,8 +550,8 @@ Stack
.. ipython:: python

arrays = [
["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
["one", "two", "one", "two", "one", "two", "one", "two"],
["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
["one", "two", "one", "two", "one", "two", "one", "two"],
]
index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=["A", "B"])
6 changes: 2 additions & 4 deletions doc/source/user_guide/advanced.rst
@@ -792,9 +792,7 @@ values **not** in the categories, similarly to how you can reindex **any** panda

.. ipython:: python

df3 = pd.DataFrame(
{"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")}
)
df3 = pd.DataFrame({"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")})
df3 = df3.set_index("B")
df3

@@ -1096,7 +1094,7 @@ index can be somewhat complicated. For example, the following does not work:
.. ipython:: python
:okexcept:

s.loc['c':'e' + 1]
s.loc['c' : 'e' + 1]
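
The failure here comes from plain Python semantics rather than from pandas: the slice bound ``'e' + 1`` mixes ``str`` and ``int``, so a ``TypeError`` is raised before ``.loc`` ever sees the slice. A minimal illustration:

```python
# The slice bounds are evaluated first, and str + int is undefined in Python,
# so the expression fails before any pandas indexing happens.
try:
    'e' + 1  # what s.loc['c':'e' + 1] must evaluate before indexing
    raised = None
except TypeError as exc:
    raised = type(exc).__name__

print(raised)  # TypeError
```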

A very common use case is to limit a time series to start and end at two
specific dates. To enable this, we made the design choice to make label-based
Expand Down