
DOC: Fix code block line length #36773


Merged: 23 commits merged into master from fix-doc-line-length on Oct 7, 2020
Commits (23)
0f76a2a  DOC: Fix code block line length (dsaxton, Oct 1, 2020)
94ba010  Fix (dsaxton, Oct 1, 2020)
5b5268e  Fix (dsaxton, Oct 1, 2020)
f089659  Another (dsaxton, Oct 1, 2020)
1b9f83b  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 2, 2020)
f57fa6c  Fix (dsaxton, Oct 2, 2020)
a4fddc9  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 2, 2020)
31032a5  Fix (dsaxton, Oct 2, 2020)
43cac82  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 2, 2020)
cb6b0a0  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 2, 2020)
1b6f347  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 2, 2020)
3e10b2c  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 2, 2020)
ce985ba  Fix (dsaxton, Oct 2, 2020)
d791bd5  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 3, 2020)
ec204d2  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 3, 2020)
33a2e26  Fix (dsaxton, Oct 3, 2020)
aad9468  Fix (dsaxton, Oct 3, 2020)
00afb92  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 4, 2020)
918b870  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 4, 2020)
6f8d68d  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 5, 2020)
b9ae2ac  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 5, 2020)
90126cc  Merge remote-tracking branch 'upstream/master' into fix-doc-line-length (dsaxton, Oct 5, 2020)
def814f  Fix (dsaxton, Oct 5, 2020)
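
All 23 commits do one thing: wrap over-long lines inside the documentation's code examples so each fits the line-length limit, using the layout black produces (exploded dict literals, one keyword argument per line, parenthesized method chains). Below is a minimal sketch of how one might hunt for offending lines before reformatting; the script name and the 88-character limit (black's default) are assumptions for illustration, not the check pandas' CI actually runs.

    # find_long_lines.py -- hypothetical helper; MAX_LEN = 88 is an assumed
    # limit borrowed from black's default, not necessarily the CI setting.
    import sys

    MAX_LEN = 88


    def long_lines(path):
        """Yield (line_number, text) for every line in path over MAX_LEN chars."""
        with open(path) as f:
            for lineno, line in enumerate(f, start=1):
                text = line.rstrip("\n")
                if len(text) > MAX_LEN:
                    yield lineno, text


    if __name__ == "__main__":
        for path in sys.argv[1:]:
            for lineno, text in long_lines(path):
                print(f"{path}:{lineno}: {len(text)} characters")

Run as, e.g., python find_long_lines.py doc/source/user_guide/basics.rst to list each violation with its length.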
@@ -123,7 +123,10 @@ aggregating statistics for given columns can be defined using the
 .. ipython:: python

     titanic.agg(
-        {"Age": ["min", "max", "median", "skew"], "Fare": ["min", "max", "median", "mean"]}
+        {
+            "Age": ["min", "max", "median", "skew"],
+            "Fare": ["min", "max", "median", "mean"],
+        }
     )

 .. raw:: html
11 changes: 8 additions & 3 deletions doc/source/user_guide/advanced.rst
@@ -304,7 +304,8 @@ whereas a tuple of lists refer to several values within a level:
 .. ipython:: python

     s = pd.Series(
-        [1, 2, 3, 4, 5, 6], index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]])
+        [1, 2, 3, 4, 5, 6],
+        index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]]),
     )
     s.loc[[("A", "c"), ("B", "d")]]  # list of tuples
     s.loc[(["A", "B"], ["c", "d"])]  # tuple of lists
@@ -819,7 +820,9 @@ values **not** in the categories, similarly to how you can reindex **any** panda

 .. ipython:: python

-    df3 = pd.DataFrame({"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")})
+    df3 = pd.DataFrame(
+        {"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")}
+    )
     df3 = df3.set_index("B")
     df3

@@ -934,7 +937,9 @@ example, be millisecond offsets.
                 np.random.randn(5, 2), index=np.arange(5) * 250.0, columns=list("AB")
             ),
             pd.DataFrame(
-                np.random.randn(6, 2), index=np.arange(4, 10) * 250.1, columns=list("AB")
+                np.random.randn(6, 2),
+                index=np.arange(4, 10) * 250.1,
+                columns=list("AB"),
             ),
         ]
     )
36 changes: 28 additions & 8 deletions doc/source/user_guide/basics.rst
@@ -464,7 +464,10 @@ which we illustrate:
         {"A": [1.0, np.nan, 3.0, 5.0, np.nan], "B": [np.nan, 2.0, 3.0, np.nan, 6.0]}
     )
     df2 = pd.DataFrame(
-        {"A": [5.0, 2.0, 4.0, np.nan, 3.0, 7.0], "B": [np.nan, np.nan, 3.0, 4.0, 6.0, 8.0]}
+        {
+            "A": [5.0, 2.0, 4.0, np.nan, 3.0, 7.0],
+            "B": [np.nan, np.nan, 3.0, 4.0, 6.0, 8.0],
+        }
     )
     df1
     df2
@@ -712,7 +715,10 @@ Similarly, you can get the most frequently occurring value(s), i.e. the mode, of
     s5 = pd.Series([1, 1, 3, 3, 3, 5, 5, 7, 7, 7])
     s5.mode()
     df5 = pd.DataFrame(
-        {"A": np.random.randint(0, 7, size=50), "B": np.random.randint(-10, 15, size=50)}
+        {
+            "A": np.random.randint(0, 7, size=50),
+            "B": np.random.randint(-10, 15, size=50),
+        }
     )
     df5.mode()

@@ -1192,7 +1198,9 @@ to :ref:`merging/joining functionality <merging>`:

 .. ipython:: python

-    s = pd.Series(["six", "seven", "six", "seven", "six"], index=["a", "b", "c", "d", "e"])
+    s = pd.Series(
+        ["six", "seven", "six", "seven", "six"], index=["a", "b", "c", "d", "e"]
+    )
     t = pd.Series({"six": 6.0, "seven": 7.0})
     s
     s.map(t)
@@ -1494,7 +1502,9 @@ labels).

     df = pd.DataFrame(
         {"x": [1, 2, 3, 4, 5, 6], "y": [10, 20, 30, 40, 50, 60]},
-        index=pd.MultiIndex.from_product([["a", "b", "c"], [1, 2]], names=["let", "num"]),
+        index=pd.MultiIndex.from_product(
+            [["a", "b", "c"], [1, 2]], names=["let", "num"]
+        ),
     )
     df
     df.rename_axis(index={"let": "abc"})
@@ -1803,7 +1813,9 @@ used to sort a pandas object by its index levels.
         }
     )

-    unsorted_df = df.reindex(index=["a", "d", "c", "b"], columns=["three", "two", "one"])
+    unsorted_df = df.reindex(
+        index=["a", "d", "c", "b"], columns=["three", "two", "one"]
+    )
     unsorted_df

     # DataFrame
@@ -1849,7 +1861,9 @@ to use to determine the sorted order.

 .. ipython:: python

-    df1 = pd.DataFrame({"one": [2, 1, 1, 1], "two": [1, 3, 2, 4], "three": [5, 4, 3, 2]})
+    df1 = pd.DataFrame(
+        {"one": [2, 1, 1, 1], "two": [1, 3, 2, 4], "three": [5, 4, 3, 2]}
+    )
     df1.sort_values(by="two")

 The ``by`` parameter can take a list of column names, e.g.:
@@ -1994,7 +2008,9 @@ all levels to ``by``.

 .. ipython:: python

-    df1.columns = pd.MultiIndex.from_tuples([("a", "one"), ("a", "two"), ("b", "three")])
+    df1.columns = pd.MultiIndex.from_tuples(
+        [("a", "one"), ("a", "two"), ("b", "three")]
+    )
     df1.sort_values(by=("a", "two"))

@@ -2245,7 +2261,11 @@ to the correct type.
     import datetime

     df = pd.DataFrame(
-        [[1, 2], ["a", "b"], [datetime.datetime(2016, 3, 2), datetime.datetime(2016, 3, 2)]]
+        [
+            [1, 2],
+            ["a", "b"],
+            [datetime.datetime(2016, 3, 2), datetime.datetime(2016, 3, 2)],
+        ]
     )
     df = df.T
     df
20 changes: 17 additions & 3 deletions doc/source/user_guide/categorical.rst
@@ -513,7 +513,11 @@ The ordering of the categorical is determined by the ``categories`` of that colu

     dfs = pd.DataFrame(
         {
-            "A": pd.Categorical(list("bbeebbaa"), categories=["e", "a", "b"], ordered=True),
+            "A": pd.Categorical(
+                list("bbeebbaa"),
+                categories=["e", "a", "b"],
+                ordered=True,
+            ),
             "B": [1, 2, 1, 2, 2, 1, 2, 1],
         }
     )
@@ -642,7 +646,13 @@ Groupby will also show "unused" categories:
     df.groupby("cats").mean()

     cats2 = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])
-    df2 = pd.DataFrame({"cats": cats2, "B": ["c", "d", "c", "d"], "values": [1, 2, 3, 4]})
+    df2 = pd.DataFrame(
+        {
+            "cats": cats2,
+            "B": ["c", "d", "c", "d"],
+            "values": [1, 2, 3, 4],
+        }
+    )
     df2.groupby(["cats", "B"]).mean()

@@ -1115,7 +1125,11 @@ You can use ``fillna`` to handle missing values before applying a function.
 .. ipython:: python

     df = pd.DataFrame(
-        {"a": [1, 2, 3, 4], "b": ["a", "b", "c", "d"], "cats": pd.Categorical([1, 2, 3, 2])}
+        {
+            "a": [1, 2, 3, 4],
+            "b": ["a", "b", "c", "d"],
+            "cats": pd.Categorical([1, 2, 3, 2]),
+        }
     )
     df.apply(lambda row: type(row["cats"]), axis=1)
     df.apply(lambda col: col.dtype, axis=0)
6 changes: 5 additions & 1 deletion doc/source/user_guide/computation.rst
@@ -787,7 +787,11 @@ can even be omitted:

 .. ipython:: python

-    covs = df[["B", "C", "D"]].rolling(window=50).cov(df[["A", "B", "C"]], pairwise=True)
+    covs = (
+        df[["B", "C", "D"]]
+        .rolling(window=50)
+        .cov(df[["A", "B", "C"]], pairwise=True)
+    )
     covs.loc["2002-09-22":]

 .. ipython:: python
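
The hunk above shows the second recurring pattern in this PR: wrapping a method chain in an extra pair of parentheses so each call can sit on its own line with no backslash continuations. A self-contained sketch of the same idiom, with a made-up DataFrame standing in for the one computation.rst builds earlier:

    import numpy as np
    import pandas as pd

    # Stand-in data; the real docs build df from a date-indexed random series.
    df = pd.DataFrame(
        np.random.randn(100, 4),
        index=pd.date_range("2002-01-01", periods=100),
        columns=list("ABCD"),
    )

    # Outer parentheses let the chain break across lines cleanly.
    covs = (
        df[["B", "C", "D"]]
        .rolling(window=50)
        .cov(df[["A", "B", "C"]], pairwise=True)
    )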
41 changes: 34 additions & 7 deletions doc/source/user_guide/cookbook.rst
@@ -266,7 +266,9 @@ New columns

 .. ipython:: python

-    df = pd.DataFrame({"AAA": [1, 1, 1, 2, 2, 2, 3, 3], "BBB": [2, 1, 3, 4, 5, 1, 2, 3]})
+    df = pd.DataFrame(
+        {"AAA": [1, 1, 1, 2, 2, 2, 3, 3], "BBB": [2, 1, 3, 4, 5, 1, 2, 3]}
+    )
     df

 Method 1 : idxmin() to get the index of the minimums
@@ -327,7 +329,9 @@ Arithmetic

 .. ipython:: python

-    cols = pd.MultiIndex.from_tuples([(x, y) for x in ["A", "B", "C"] for y in ["O", "I"]])
+    cols = pd.MultiIndex.from_tuples(
+        [(x, y) for x in ["A", "B", "C"] for y in ["O", "I"]]
+    )
     df = pd.DataFrame(np.random.randn(2, 6), index=["n", "m"], columns=cols)
     df
     df = df.div(df["C"], level=1)
@@ -566,7 +570,9 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
 .. ipython:: python

-    df = pd.DataFrame({"Color": "Red Red Red Blue".split(), "Value": [100, 150, 50, 50]})
+    df = pd.DataFrame(
+        {"Color": "Red Red Red Blue".split(), "Value": [100, 150, 50, 50]}
+    )
     df
     df["Counts"] = df.groupby(["Color"]).transform(len)
     df
@@ -648,7 +654,10 @@ Create a list of dataframes, split using a delineation based on logic included i
     dfs = list(
         zip(
             *df.groupby(
-                (1 * (df["Case"] == "B")).cumsum().rolling(window=3, min_periods=1).median()
+                (1 * (df["Case"] == "B"))
+                .cumsum()
+                .rolling(window=3, min_periods=1)
+                .median()
             )
         )
     )[-1]
@@ -740,7 +749,18 @@ The :ref:`Pivot <reshaping.pivot>` docs.
             "yes",
         ],
         "Passed": ["yes" if x > 50 else "no" for x in grades],
-        "Employed": [True, True, True, False, False, False, False, True, True, False],
+        "Employed": [
+            True,
+            True,
+            True,
+            False,
+            False,
+            False,
+            False,
+            True,
+            True,
+            False,
+        ],
         "Grade": grades,
     }
 )
@@ -791,7 +811,9 @@ Apply
         return pd.Series(aList)


-    df_orgz = pd.concat({ind: row.apply(SeriesFromSubList) for ind, row in df.iterrows()})
+    df_orgz = pd.concat(
+        {ind: row.apply(SeriesFromSubList) for ind, row in df.iterrows()}
+    )
     df_orgz

 `Rolling apply with a DataFrame returning a Series
@@ -1162,7 +1184,12 @@ Option 1: pass rows explicitly to skip rows
     from io import StringIO

     pd.read_csv(
-        StringIO(data), sep=";", skiprows=[11, 12], index_col=0, parse_dates=True, header=10
+        StringIO(data),
+        sep=";",
+        skiprows=[11, 12],
+        index_col=0,
+        parse_dates=True,
+        header=10,
     )

 Option 2: read column names and then data
9 changes: 7 additions & 2 deletions doc/source/user_guide/groupby.rst
@@ -267,7 +267,9 @@ the length of the ``groups`` dict, so it is largely just a convenience:
     height = np.random.normal(60, 10, size=n)
     time = pd.date_range("1/1/2000", periods=n)
     gender = np.random.choice(["male", "female"], size=n)
-    df = pd.DataFrame({"height": height, "weight": weight, "gender": gender}, index=time)
+    df = pd.DataFrame(
+        {"height": height, "weight": weight, "gender": gender}, index=time
+    )

 .. ipython:: python

@@ -767,7 +769,10 @@ For example, suppose we wished to standardize the data within each group:
     ts.head()
     ts.tail()

-    transformed = ts.groupby(lambda x: x.year).transform(lambda x: (x - x.mean()) / x.std())
+    transformed = ts.groupby(lambda x: x.year).transform(
+        lambda x: (x - x.mean()) / x.std()
+    )


 We would expect the result to now have mean 0 and standard deviation 1 within
 each group, which we can easily check:
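
The context around this last hunk says the transformed series should have mean 0 and standard deviation 1 within each group. A self-contained sketch of that check, with invented data standing in for the ts the user guide builds:

    import numpy as np
    import pandas as pd

    # Invented stand-in for the guide's ts: roughly three years of daily noise.
    ts = pd.Series(
        np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000)
    )
    transformed = ts.groupby(lambda x: x.year).transform(
        lambda x: (x - x.mean()) / x.std()
    )

    # Each year's slice of the result should be approximately standard normal.
    grouped = transformed.groupby(lambda x: x.year)
    print(grouped.mean())  # ~0 for every year
    print(grouped.std())   # ~1 for every year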