
Commit 33a2e26

Fix
1 parent ec204d2 commit 33a2e26

5 files changed: 67 additions & 16 deletions

doc/source/user_guide/computation.rst

Lines changed: 5 additions & 1 deletion
@@ -787,7 +787,11 @@ can even be omitted:

.. ipython:: python

-    covs = df[["B", "C", "D"]].rolling(window=50).cov(df[["A", "B", "C"]], pairwise=True)
+    covs = (
+        df[["B", "C", "D"]]
+        .rolling(window=50)
+        .cov(df[["A", "B", "C"]], pairwise=True)
+    )
    covs.loc["2002-09-22":]

.. ipython:: python
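
For reference, a self-contained sketch of the reformatted call; the ``df`` here is a stand-in for the frame the user guide builds earlier in that section, so only the shape of the API matters.

    import numpy as np
    import pandas as pd

    # Stand-in for the guide's DataFrame: 100 daily rows with columns A-D.
    idx = pd.date_range("2002-07-01", periods=100, freq="D")
    df = pd.DataFrame(np.random.randn(100, 4), index=idx, columns=list("ABCD"))

    # Pairwise rolling covariance between two overlapping column subsets;
    # the result is indexed by (date, column).
    covs = (
        df[["B", "C", "D"]]
        .rolling(window=50)
        .cov(df[["A", "B", "C"]], pairwise=True)
    )
    print(covs.loc["2002-09-22":].head())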

doc/source/user_guide/groupby.rst

Lines changed: 7 additions & 2 deletions
@@ -267,7 +267,9 @@ the length of the ``groups`` dict, so it is largely just a convenience:
    height = np.random.normal(60, 10, size=n)
    time = pd.date_range("1/1/2000", periods=n)
    gender = np.random.choice(["male", "female"], size=n)
-    df = pd.DataFrame({"height": height, "weight": weight, "gender": gender}, index=time)
+    df = pd.DataFrame(
+        {"height": height, "weight": weight, "gender": gender}, index=time
+    )

.. ipython:: python

@@ -767,7 +769,10 @@ For example, suppose we wished to standardize the data within each group:
    ts.head()
    ts.tail()

-    transformed = ts.groupby(lambda x: x.year).transform(lambda x: (x - x.mean()) / x.std())
+    transformed = ts.groupby(lambda x: x.year).transform(
+        lambda x: (x - x.mean()) / x.std()
+    )
+

We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
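
The hunk above only rewraps the ``transform`` call, but the surrounding text promises that each group ends up with mean 0 and standard deviation 1; a minimal sketch of that check, using a synthetic ``ts`` in place of the longer series the guide constructs:

    import numpy as np
    import pandas as pd

    # Stand-in for the guide's ``ts``: a few years of daily data.
    index = pd.date_range("2000-01-01", periods=1000, freq="D")
    ts = pd.Series(np.random.randn(1000).cumsum(), index=index)

    # Standardize within each calendar year.
    transformed = ts.groupby(lambda x: x.year).transform(
        lambda x: (x - x.mean()) / x.std()
    )

    # Each yearly group should now have mean ~0 and std ~1.
    grouped = transformed.groupby(lambda x: x.year)
    print(grouped.mean())
    print(grouped.std())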

doc/source/user_guide/missing_data.rst

Lines changed: 4 additions & 1 deletion
@@ -400,7 +400,10 @@ You can also interpolate with a DataFrame:
.. ipython:: python

    df = pd.DataFrame(
-        {"A": [1, 2.1, np.nan, 4.7, 5.6, 6.8], "B": [0.25, np.nan, np.nan, 4, 12.2, 14.4]}
+        {
+            "A": [1, 2.1, np.nan, 4.7, 5.6, 6.8],
+            "B": [0.25, np.nan, np.nan, 4, 12.2, 14.4],
+        }
    )
    df
    df.interpolate()
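
The reformatted frame is already self-contained, so the example runs as-is; ``interpolate`` fills the interior NaNs column by column (linear by default):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(
        {
            "A": [1, 2.1, np.nan, 4.7, 5.6, 6.8],
            "B": [0.25, np.nan, np.nan, 4, 12.2, 14.4],
        }
    )
    print(df)
    print(df.interpolate())  # linear interpolation across the NaN gaps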

doc/source/user_guide/timeseries.rst

Lines changed: 46 additions & 11 deletions
@@ -317,7 +317,9 @@ which can be specified. These are computed from the starting point specified by

.. ipython:: python

-    pd.to_datetime([1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s")
+    pd.to_datetime(
+        [1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s"
+    )

    pd.to_datetime(
        [1349720105100, 1349720105200, 1349720105300, 1349720105400, 1349720105500],
@@ -707,7 +709,9 @@ If the timestamp string is treated as a slice, it can be used to index ``DataFra
.. ipython:: python
    :okwarning:

-    dft_minute = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index)
+    dft_minute = pd.DataFrame(
+        {"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index
+    )
    dft_minute["2011-12-31 23"]

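``series_minute`` comes from earlier in the guide; a sketch with a stand-in minute-resolution index shows the behaviour the hunk is about: "2011-12-31 23" is coarser than the index, so it selects a slice of rows rather than a single label (``.loc`` is used here, which avoids the warning the ``:okwarning:`` option suppresses in the doc build):

    import pandas as pd

    # Stand-in for the guide's ``series_minute`` index: minute resolution.
    idx = pd.date_range("2011-12-31 23:59", periods=3, freq="min")
    dft_minute = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, index=idx)

    # The partial string covers the whole 23:00 hour, so a slice is returned.
    print(dft_minute.loc["2011-12-31 23"])
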
@@ -748,10 +752,11 @@ With no defaults.
.. ipython:: python

    dft[
-        datetime.datetime(2013, 1, 1, 10, 12, 0): datetime.datetime(2013, 2, 28, 10, 12, 0)
+        datetime.datetime(2013, 1, 1, 10, 12, 0) : datetime.datetime(
+            2013, 2, 28, 10, 12, 0
+        )
    ]

-
Truncating & fancy indexing
~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -1036,8 +1041,15 @@ As an interesting example, let's look at Egypt where a Friday-Saturday weekend i
    # They also observe International Workers' Day so let's
    # add that for a couple of years

-    holidays = ["2012-05-01", datetime.datetime(2013, 5, 1), np.datetime64("2014-05-01")]
-    bday_egypt = pd.offsets.CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt)
+    holidays = [
+        "2012-05-01",
+        datetime.datetime(2013, 5, 1),
+        np.datetime64("2014-05-01"),
+    ]
+    bday_egypt = pd.offsets.CustomBusinessDay(
+        holidays=holidays,
+        weekmask=weekmask_egypt,
+    )
    dt = datetime.datetime(2013, 4, 30)
    dt + 2 * bday_egypt
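
A runnable version of the hunk above; ``weekmask_egypt`` is defined a few lines earlier in the guide (Friday-Saturday weekend), so it is repeated here to make the sketch self-contained:

    import datetime

    import numpy as np
    import pandas as pd

    # From the surrounding guide text: Sun-Thu workweek, Fri-Sat weekend.
    weekmask_egypt = "Sun Mon Tue Wed Thu"

    holidays = [
        "2012-05-01",
        datetime.datetime(2013, 5, 1),
        np.datetime64("2014-05-01"),
    ]
    bday_egypt = pd.offsets.CustomBusinessDay(
        holidays=holidays,
        weekmask=weekmask_egypt,
    )

    dt = datetime.datetime(2013, 4, 30)
    print(dt + 2 * bday_egypt)  # skips the 2013-05-01 holiday and the Fri/Sat weekend
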
@@ -1417,7 +1429,12 @@ An example of how holidays and holiday calendars are defined:
        rules = [
            USMemorialDay,
            Holiday("July 4th", month=7, day=4, observance=nearest_workday),
-            Holiday("Columbus Day", month=10, day=1, offset=pd.DateOffset(weekday=MO(2))),
+            Holiday(
+                "Columbus Day",
+                month=10,
+                day=1,
+                offset=pd.DateOffset(weekday=MO(2)),
+            ),
        ]

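In the guide these rules live inside an ``AbstractHolidayCalendar`` subclass; a self-contained sketch (the class name is illustrative) showing how the reformatted ``Holiday`` rule is consumed:

    import pandas as pd
    from pandas.tseries.holiday import (
        MO,
        AbstractHolidayCalendar,
        Holiday,
        USMemorialDay,
        nearest_workday,
    )


    class ExampleCalendar(AbstractHolidayCalendar):
        rules = [
            USMemorialDay,
            Holiday("July 4th", month=7, day=4, observance=nearest_workday),
            Holiday(
                "Columbus Day",
                month=10,
                day=1,
                offset=pd.DateOffset(weekday=MO(2)),
            ),
        ]


    # The calendar resolves each rule to concrete dates in the requested range.
    cal = ExampleCalendar()
    print(cal.holidays(start="2012-01-01", end="2012-12-31"))
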
@@ -2279,15 +2296,25 @@ To return ``dateutil`` time zone objects, append ``dateutil/`` before the string
    rng_dateutil.tz

    # dateutil - utc special case
-    rng_utc = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=dateutil.tz.tzutc())
+    rng_utc = pd.date_range(
+        "3/6/2012 00:00",
+        periods=3,
+        freq="D",
+        tz=dateutil.tz.tzutc(),
+    )
    rng_utc.tz

.. versionadded:: 0.25.0

.. ipython:: python

    # datetime.timezone
-    rng_utc = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=datetime.timezone.utc)
+    rng_utc = pd.date_range(
+        "3/6/2012 00:00",
+        periods=3,
+        freq="D",
+        tz=datetime.timezone.utc,
+    )
    rng_utc.tz

Note that the ``UTC`` time zone is a special case in ``dateutil`` and should be constructed explicitly
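
Both reformatted calls construct that UTC case explicitly; a compact sketch, with ``dateutil.tz`` imported directly so it stands alone:

    import datetime

    import dateutil.tz
    import pandas as pd

    # dateutil's UTC zone is passed as an object rather than a "dateutil/..." string.
    rng_utc = pd.date_range(
        "3/6/2012 00:00",
        periods=3,
        freq="D",
        tz=dateutil.tz.tzutc(),
    )
    print(rng_utc.tz)

    # The standard library's UTC object is accepted as well.
    rng_utc = pd.date_range(
        "3/6/2012 00:00",
        periods=3,
        freq="D",
        tz=datetime.timezone.utc,
    )
    print(rng_utc.tz)
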
@@ -2440,10 +2467,18 @@ control over how they are handled.
.. ipython:: python

    pd.Timestamp(
-        datetime.datetime(2019, 10, 27, 1, 30, 0, 0), tz="dateutil/Europe/London", fold=0
+        datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
+        tz="dateutil/Europe/London",
+        fold=0,
    )
    pd.Timestamp(
-        year=2019, month=10, day=27, hour=1, minute=30, tz="dateutil/Europe/London", fold=1
+        year=2019,
+        month=10,
+        day=27,
+        hour=1,
+        minute=30,
+        tz="dateutil/Europe/London",
+        fold=1,
    )

.. _timeseries.timezone_ambiguous:
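
The two ``pd.Timestamp`` calls above resolve the same ambiguous wall time on the Europe/London DST fallback; a sketch showing that ``fold`` selects which of the two occurrences is meant:

    import datetime

    import pandas as pd

    # 2019-10-27 01:30 occurs twice in Europe/London (clocks fall back at 02:00 BST).
    first = pd.Timestamp(
        datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
        tz="dateutil/Europe/London",
        fold=0,
    )
    second = pd.Timestamp(
        year=2019,
        month=10,
        day=27,
        hour=1,
        minute=30,
        tz="dateutil/Europe/London",
        fold=1,
    )
    print(first)   # first occurrence, still on BST (UTC+01:00)
    print(second)  # second occurrence, back on GMT (UTC+00:00)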

doc/source/user_guide/visualization.rst

Lines changed: 5 additions & 1 deletion
@@ -1453,7 +1453,11 @@ Here is an example of one way to easily plot group means with standard deviation
    )

    df3 = pd.DataFrame(
-        {"data1": [3, 2, 4, 3, 2, 4, 3, 2], "data2": [6, 5, 7, 5, 4, 5, 6, 5]}, index=ix3
+        {
+            "data1": [3, 2, 4, 3, 2, 4, 3, 2],
+            "data2": [6, 5, 7, 5, 4, 5, 6, 5],
+        },
+        index=ix3,
    )

    # Group by index labels and take the means and standard deviations
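
A self-contained sketch of the plot this block feeds into; ``ix3`` is built just above the hunk in the guide, so a stand-in two-level index is defined here:

    import matplotlib.pyplot as plt
    import pandas as pd

    # Stand-in for the guide's ``ix3``: a two-level index so rows can be grouped.
    ix3 = pd.MultiIndex.from_arrays(
        [
            ["a", "a", "a", "a", "b", "b", "b", "b"],
            ["foo", "foo", "bar", "bar", "foo", "foo", "bar", "bar"],
        ],
        names=["letter", "word"],
    )

    df3 = pd.DataFrame(
        {
            "data1": [3, 2, 4, 3, 2, 4, 3, 2],
            "data2": [6, 5, 7, 5, 4, 5, 6, 5],
        },
        index=ix3,
    )

    # Group by index labels and take the means and standard deviations per group.
    gp3 = df3.groupby(level=["letter", "word"])
    means = gp3.mean()
    errors = gp3.std()

    # Plot the group means with the standard deviations as error bars.
    fig, ax = plt.subplots()
    means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0)
    plt.show()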
