Commit 72c3888

jschendel authored and jorisvandenbossche committed
CLN: Fix Spelling Errors (#17535)
1 parent 9b21c54 commit 72c3888

36 files changed, +65 -65 lines changed

doc/source/advanced.rst
+5 -5

@@ -625,7 +625,7 @@ Index Types
We have discussed ``MultiIndex`` in the previous sections pretty extensively. ``DatetimeIndex`` and ``PeriodIndex``
are shown :ref:`here <timeseries.overview>`. ``TimedeltaIndex`` are :ref:`here <timedeltas.timedeltas>`.

-In the following sub-sections we will highlite some other index types.
+In the following sub-sections we will highlight some other index types.

.. _indexing.categoricalindex:

@@ -645,7 +645,7 @@ and allows efficient indexing and storage of an index with a large number of dup
df.dtypes
df.B.cat.categories

-Setting the index, will create create a ``CategoricalIndex``
+Setting the index, will create a ``CategoricalIndex``

.. ipython:: python

@@ -681,7 +681,7 @@ Groupby operations on the index will preserve the index nature as well
Reindexing operations, will return a resulting index based on the type of the passed
indexer, meaning that passing a list will return a plain-old-``Index``; indexing with
a ``Categorical`` will return a ``CategoricalIndex``, indexed according to the categories
-of the PASSED ``Categorical`` dtype. This allows one to arbitrarly index these even with
+of the PASSED ``Categorical`` dtype. This allows one to arbitrarily index these even with
values NOT in the categories, similarly to how you can reindex ANY pandas index.

.. ipython :: python

@@ -722,7 +722,7 @@ Int64Index and RangeIndex
Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``NDFrame`` objects.

``RangeIndex`` is a sub-class of ``Int64Index`` added in version 0.18.0, now providing the default index for all ``NDFrame`` objects.
-``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analagous to python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.
+``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analogous to python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.

.. _indexing.float64index:

@@ -963,7 +963,7 @@ index can be somewhat complicated. For example, the following does not work:
s.loc['c':'e'+1]

A very common use case is to limit a time series to start and end at two
-specific dates. To enable this, we made the design design to make label-based
+specific dates. To enable this, we made the design to make label-based
slicing include both endpoints:

.. ipython:: python

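For reference, a minimal sketch of the ``CategoricalIndex`` behavior the corrected sentences describe (illustrative data, not part of the commit; assumes a pandas version of that era or later):

    import pandas as pd

    df = pd.DataFrame({'A': range(6), 'B': list('aabbca')})
    # Convert B to a categorical dtype, then set it as the index;
    # the resulting index is a CategoricalIndex.
    df['B'] = df['B'].astype('category')
    df2 = df.set_index('B')
    print(type(df2.index))   # CategoricalIndex
    print(df2.loc['a'])      # label-based selection on the categories
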
doc/source/api.rst
+1 -1

@@ -1291,7 +1291,7 @@ Index
-----

**Many of these methods or variants thereof are available on the objects
-that contain an index (Series/Dataframe) and those should most likely be
+that contain an index (Series/DataFrame) and those should most likely be
used before calling these methods directly.**

.. autosummary::

doc/source/basics.rst
+1 -1

@@ -923,7 +923,7 @@ Passing a named function will yield that name for the row:
Aggregating with a dict
+++++++++++++++++++++++

-Passing a dictionary of column names to a scalar or a list of scalars, to ``DataFame.agg``
+Passing a dictionary of column names to a scalar or a list of scalars, to ``DataFrame.agg``
allows you to customize which functions are applied to which columns. Note that the results
are not in any particular order, you can use an ``OrderedDict`` instead to guarantee ordering.

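For reference, a minimal sketch of the ``DataFrame.agg`` call the corrected line refers to (illustrative data, not part of the commit; assumes pandas >= 0.20, where ``agg`` was introduced):

    from collections import OrderedDict

    import pandas as pd

    df = pd.DataFrame({'A': [1.0, 2.0, 3.0], 'B': [10.0, 20.0, 30.0]})

    # Dict of column name -> scalar function name or list of names.
    print(df.agg({'A': 'mean', 'B': ['min', 'max']}))

    # An OrderedDict pins down the order of the output columns.
    print(df.agg(OrderedDict([('B', 'sum'), ('A', 'sum')])))
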
doc/source/computation.rst
+1 -1

@@ -654,7 +654,7 @@ aggregation with, outputting a DataFrame:

r['A'].agg([np.sum, np.mean, np.std])

-On a widowed DataFrame, you can pass a list of functions to apply to each
+On a windowed DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:

.. ipython:: python

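For reference, a minimal sketch of the windowed aggregation the corrected line describes (illustrative data, not part of the commit):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(10, 2), columns=['A', 'B'])
    r = df.rolling(window=3)

    # A list of functions applied to each column of the windowed DataFrame
    # produces hierarchical columns: (column, function).
    result = r.agg([np.sum, np.mean])
    print(result.columns)
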
doc/source/groupby.rst
+2 -2

@@ -561,7 +561,7 @@ must be either implemented on GroupBy or available via :ref:`dispatching

.. note::

-If you pass a dict to ``aggregate``, the ordering of the output colums is
+If you pass a dict to ``aggregate``, the ordering of the output columns is
non-deterministic. If you want to be sure the output columns will be in a specific
order, you can use an ``OrderedDict``. Compare the output of the following two commands:

@@ -1211,7 +1211,7 @@ Groupby by Indexer to 'resample' data

Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.

-In order to resample to work on indices that are non-datetimelike , the following procedure can be utilized.
+In order to resample to work on indices that are non-datetimelike, the following procedure can be utilized.

In the following examples, **df.index // 5** returns a binary array which is used to determine what gets selected for the groupby operation.

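For reference, two minimal sketches of the behavior described in the corrected lines (illustrative data, not part of the commit):

    from collections import OrderedDict

    import pandas as pd

    df = pd.DataFrame({'key': list('ababa'),
                       'x': [1.0, 2.0, 3.0, 4.0, 5.0],
                       'y': [5.0, 4.0, 3.0, 2.0, 1.0]})

    # A plain dict gives no guarantee about output column order;
    # an OrderedDict fixes it ('y' before 'x' here).
    print(df.groupby('key').agg(OrderedDict([('y', 'mean'), ('x', 'sum')])))

    # "Resampling" a non-datetimelike index: df.index // 5 groups every
    # five consecutive positions into one bucket.
    s = pd.Series(range(10))
    print(s.groupby(s.index // 5).sum())
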
doc/source/indexing.rst
+1 -1

@@ -714,7 +714,7 @@ Finally, one can also set a seed for ``sample``'s random number generator using
Setting With Enlargement
------------------------

-The ``.loc/[]`` operations can perform enlargement when setting a non-existant key for that axis.
+The ``.loc/[]`` operations can perform enlargement when setting a non-existent key for that axis.

In the ``Series`` case this is effectively an appending operation

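For reference, a minimal sketch of setting with enlargement (illustrative data, not part of the commit):

    import pandas as pd

    s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
    # Assigning through .loc with a key that does not yet exist
    # enlarges the Series -- effectively an append.
    s.loc['d'] = 4
    print(s)

    df = pd.DataFrame({'A': [1, 2]})
    df.loc[2] = 3            # a new row is added by enlargement
    print(df)
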
doc/source/io.rst
+1 -1

@@ -3077,7 +3077,7 @@ Compressed pickle files

.. versionadded:: 0.20.0

-:func:`read_pickle`, :meth:`DataFame.to_pickle` and :meth:`Series.to_pickle` can read
+:func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle` can read
and write compressed pickle files. The compression types of ``gzip``, ``bz2``, ``xz`` are supported for reading and writing.
`zip`` file supports read only and must contain only one data file
to be read in.

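For reference, a minimal sketch of the compressed pickle round trip described above (not part of the commit; assumes pandas >= 0.20.0 and writes to a temporary directory):

    import os
    import tempfile

    import pandas as pd

    df = pd.DataFrame({'A': range(5)})
    path = os.path.join(tempfile.mkdtemp(), 'frame.pkl.gz')

    # Compression is inferred from the file extension here; it can also
    # be passed explicitly, e.g. compression='gzip'.
    df.to_pickle(path, compression='infer')
    roundtrip = pd.read_pickle(path, compression='infer')
    print(roundtrip.equals(df))   # True
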
doc/source/merging.rst
+3 -3

@@ -1329,7 +1329,7 @@ By default we are taking the asof of the quotes.
on='time',
by='ticker')

-We only asof within ``2ms`` betwen the quote time and the trade time.
+We only asof within ``2ms`` between the quote time and the trade time.

.. ipython:: python

@@ -1338,8 +1338,8 @@ We only asof within ``2ms`` betwen the quote time and the trade time.
by='ticker',
tolerance=pd.Timedelta('2ms'))

-We only asof within ``10ms`` betwen the quote time and the trade time and we exclude exact matches on time.
-Note that though we exclude the exact matches (of the quotes), prior quotes DO propogate to that point
+We only asof within ``10ms`` between the quote time and the trade time and we exclude exact matches on time.
+Note that though we exclude the exact matches (of the quotes), prior quotes DO propagate to that point
in time.

.. ipython:: python

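For reference, a minimal sketch of an as-of merge with a tolerance and exact matches excluded, mirroring the corrected prose (illustrative data, not part of the commit):

    import pandas as pd

    trades = pd.DataFrame({
        'time': pd.to_datetime(['2016-05-25 13:30:00.023',
                                '2016-05-25 13:30:00.048']),
        'ticker': ['MSFT', 'AAPL'],
        'price': [51.95, 98.00]})
    quotes = pd.DataFrame({
        'time': pd.to_datetime(['2016-05-25 13:30:00.023',
                                '2016-05-25 13:30:00.041']),
        'ticker': ['MSFT', 'AAPL'],
        'bid': [51.95, 97.99]})

    # Match each trade with the most recent earlier quote for the same
    # ticker, but only within 10ms and excluding exact time matches;
    # prior quotes still propagate forward to that point in time.
    result = pd.merge_asof(trades.sort_values('time'),
                           quotes.sort_values('time'),
                           on='time', by='ticker',
                           tolerance=pd.Timedelta('10ms'),
                           allow_exact_matches=False)
    print(result)
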
doc/source/missing_data.rst
+1 -1

@@ -320,7 +320,7 @@ Interpolation

The ``limit_direction`` keyword argument was added.

-Both Series and Dataframe objects have an ``interpolate`` method that, by default,
+Both Series and DataFrame objects have an ``interpolate`` method that, by default,
performs linear interpolation at missing datapoints.

.. ipython:: python

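For reference, a minimal sketch of ``interpolate`` with the ``limit_direction`` keyword mentioned above (illustrative data, not part of the commit):

    import numpy as np
    import pandas as pd

    s = pd.Series([1.0, np.nan, np.nan, 4.0, np.nan])

    # Default: linear interpolation at the missing points.
    print(s.interpolate())

    # limit_direction controls the direction in which consecutive
    # NaNs are filled when a limit is given.
    print(s.interpolate(limit=1, limit_direction='backward'))
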
doc/source/options.rst
+2 -2

@@ -313,9 +313,9 @@ display.large_repr truncate For DataFrames exceeding max_ro
display.latex.repr False Whether to produce a latex DataFrame
    representation for jupyter frontends
    that support it.
-display.latex.escape True Escapes special caracters in Dataframes, when
+display.latex.escape True Escapes special characters in DataFrames, when
    using the to_latex method.
-display.latex.longtable False Specifies if the to_latex method of a Dataframe
+display.latex.longtable False Specifies if the to_latex method of a DataFrame
    uses the longtable format.
display.latex.multicolumn True Combines columns when using a MultiIndex
display.latex.multicolumn_format 'l' Alignment of multicolumn labels

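For reference, a minimal sketch of toggling the LaTeX display options listed in this table (not part of the commit; option names as shown above, availability depends on the pandas version):

    import pandas as pd

    # Disable escaping of special characters and switch to_latex to the
    # longtable layout.
    pd.set_option('display.latex.escape', False)
    pd.set_option('display.latex.longtable', True)
    print(pd.get_option('display.latex.escape'))   # False

    pd.reset_option('display.latex.escape')
    pd.reset_option('display.latex.longtable')
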
doc/source/reshaping.rst
+1 -1

@@ -156,7 +156,7 @@ the level numbers:
stacked.unstack('second')

Notice that the ``stack`` and ``unstack`` methods implicitly sort the index
-levels involved. Hence a call to ``stack`` and then ``unstack``, or viceversa,
+levels involved. Hence a call to ``stack`` and then ``unstack``, or vice versa,
will result in a **sorted** copy of the original DataFrame or Series:

.. ipython:: python

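For reference, a minimal sketch of the implicit sorting the corrected line describes (illustrative data, not part of the commit):

    import numpy as np
    import pandas as pd

    # The index is deliberately unsorted (2 before 1).
    index = pd.MultiIndex.from_product([[2, 1], ['a', 'b']])
    df = pd.DataFrame(np.random.randn(4), index=index, columns=['A'])

    # unstack followed by stack returns a sorted copy, not the
    # original ordering.
    roundtrip = df.unstack().stack()
    print(roundtrip.equals(df.sort_index()))   # True
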
doc/source/sparse.rst
+1 -1

@@ -132,7 +132,7 @@ dtype, ``fill_value`` default changes:
s.to_sparse()

You can change the dtype using ``.astype()``, the result is also sparse. Note that
-``.astype()`` also affects to the ``fill_value`` to keep its dense represantation.
+``.astype()`` also affects to the ``fill_value`` to keep its dense representation.


.. ipython:: python

doc/source/style.ipynb
+1 -1

@@ -169,7 +169,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Notice the similarity with the standard `df.applymap`, which operates on DataFrames elementwise. We want you to be able to resuse your existing knowledge of how to interact with DataFrames.\n",
+"Notice the similarity with the standard `df.applymap`, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.\n",
"\n",
"Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a `<style>` tag. This will be a common theme.\n",
"\n",

doc/source/timeseries.rst
+9 -9

@@ -1054,7 +1054,7 @@ as ``BusinessHour`` except that it skips specified custom holidays.
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
dt + bhour_us * 2

-You can use keyword arguments suported by either ``BusinessHour`` and ``CustomBusinessDay``.
+You can use keyword arguments supported by either ``BusinessHour`` and ``CustomBusinessDay``.

.. ipython:: python

@@ -1088,7 +1088,7 @@ frequencies. We will refer to these aliases as *offset aliases*.
"BMS", "business month start frequency"
"CBMS", "custom business month start frequency"
"Q", "quarter end frequency"
-"BQ", "business quarter endfrequency"
+"BQ", "business quarter end frequency"
"QS", "quarter start frequency"
"BQS", "business quarter start frequency"
"A, Y", "year end frequency"

@@ -1132,13 +1132,13 @@ For some frequencies you can specify an anchoring suffix:
:header: "Alias", "Description"
:widths: 15, 100

-"W\-SUN", "weekly frequency (sundays). Same as 'W'"
-"W\-MON", "weekly frequency (mondays)"
-"W\-TUE", "weekly frequency (tuesdays)"
-"W\-WED", "weekly frequency (wednesdays)"
-"W\-THU", "weekly frequency (thursdays)"
-"W\-FRI", "weekly frequency (fridays)"
-"W\-SAT", "weekly frequency (saturdays)"
+"W\-SUN", "weekly frequency (Sundays). Same as 'W'"
+"W\-MON", "weekly frequency (Mondays)"
+"W\-TUE", "weekly frequency (Tuesdays)"
+"W\-WED", "weekly frequency (Wednesdays)"
+"W\-THU", "weekly frequency (Thursdays)"
+"W\-FRI", "weekly frequency (Fridays)"
+"W\-SAT", "weekly frequency (Saturdays)"
"(B)Q(S)\-DEC", "quarterly frequency, year ends in December. Same as 'Q'"
"(B)Q(S)\-JAN", "quarterly frequency, year ends in January"
"(B)Q(S)\-FEB", "quarterly frequency, year ends in February"

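For reference, a minimal sketch using two of the offset aliases from the tables above (not part of the commit):

    import pandas as pd

    # 'BQ'    -> business quarter end frequency
    # 'W-MON' -> weekly frequency anchored on Mondays
    print(pd.date_range('2017-01-01', periods=4, freq='BQ'))
    print(pd.date_range('2017-01-01', periods=4, freq='W-MON'))
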
doc/source/visualization.rst
+1 -1

@@ -261,7 +261,7 @@ Histogram can be stacked by ``stacked=True``. Bin size can be changed by ``bins`

plt.close('all')

-You can pass other keywords supported by matplotlib ``hist``. For example, horizontal and cumulative histgram can be drawn by ``orientation='horizontal'`` and ``cumulative='True'``.
+You can pass other keywords supported by matplotlib ``hist``. For example, horizontal and cumulative histogram can be drawn by ``orientation='horizontal'`` and ``cumulative=True``.

.. ipython:: python

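For reference, a minimal sketch of passing matplotlib ``hist`` keywords through pandas plotting, as the corrected line describes (not part of the commit; requires matplotlib):

    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'A': np.random.randn(1000)})

    # Extra keywords are forwarded to matplotlib's hist: a horizontal,
    # cumulative histogram.
    df['A'].plot.hist(orientation='horizontal', cumulative=True)
    plt.close('all')
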
pandas/core/algorithms.py
+1 -1

@@ -1475,7 +1475,7 @@ def func(arr, indexer, out, fill_value=np.nan):
def diff(arr, n, axis=0):
    """
    difference of n between self,
-    analagoust to s-s.shift(n)
+    analogous to s-s.shift(n)

    Parameters
    ----------

pandas/core/indexes/interval.py
+1 -1

@@ -918,7 +918,7 @@ def take(self, indices, axis=0, allow_fill=True,
        except ValueError:

            # we need to coerce; migth have NA's in an
-            # interger dtype
+            # integer dtype
            new_left = taker(left.astype(float))
            new_right = taker(right.astype(float))

pandas/core/reshape/concat.py
+1 -1

@@ -72,7 +72,7 @@ def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
    The keys, levels, and names arguments are all optional.

    A walkthrough of how this method fits in with other tools for combining
-    panda objects can be found `here
+    pandas objects can be found `here
    <http://pandas.pydata.org/pandas-docs/stable/merging.html>`__.

    See Also

pandas/core/reshape/merge.py
+3 -3

@@ -447,7 +447,7 @@ def merge_asof(left, right, on=None,
    3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
    4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN

-    We only asof within 2ms betwen the quote time and the trade time
+    We only asof within 2ms between the quote time and the trade time

    >>> pd.merge_asof(trades, quotes,
    ...               on='time',
@@ -460,9 +460,9 @@ def merge_asof(left, right, on=None,
    3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
    4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN

-    We only asof within 10ms betwen the quote time and the trade time
+    We only asof within 10ms between the quote time and the trade time
    and we exclude exact matches on time. However *prior* data will
-    propogate forward
+    propagate forward

    >>> pd.merge_asof(trades, quotes,
    ...               on='time',

pandas/core/reshape/tile.py
+1 -1

@@ -359,7 +359,7 @@ def _preprocess_for_cut(x):
    """
    handles preprocessing for cut where we convert passed
    input to array, strip the index information and store it
-    seperately
+    separately
    """
    x_is_series = isinstance(x, Series)
    series_index = None

pandas/io/formats/excel.py
+2 -2

@@ -263,7 +263,7 @@ def build_font(self, props):
                       else None),
            'strike': ('line-through' in decoration) or None,
            'color': self.color_to_excel(props.get('color')),
-            # shadow if nonzero digit before shadow colour
+            # shadow if nonzero digit before shadow color
            'shadow': (bool(re.search('^[^#(]*[1-9]',
                                      props['text-shadow']))
                       if 'text-shadow' in props else None),
@@ -304,7 +304,7 @@ def color_to_excel(self, val):
        try:
            return self.NAMED_COLORS[val]
        except KeyError:
-            warnings.warn('Unhandled colour format: {val!r}'.format(val=val),
+            warnings.warn('Unhandled color format: {val!r}'.format(val=val),
                          CSSWarning)

pandas/io/pytables.py
+6 -6

@@ -605,7 +605,7 @@ def open(self, mode='a', **kwargs):

        except (Exception) as e:

-            # trying to read from a non-existant file causes an error which
+            # trying to read from a non-existent file causes an error which
            # is not part of IOError, make it one
            if self._mode == 'r' and 'Unable to open/create file' in str(e):
                raise IOError(str(e))
@@ -1621,7 +1621,7 @@ def __iter__(self):

    def maybe_set_size(self, min_itemsize=None, **kwargs):
        """ maybe set a string col itemsize:
-        min_itemsize can be an interger or a dict with this columns name
+        min_itemsize can be an integer or a dict with this columns name
        with an integer size """
        if _ensure_decoded(self.kind) == u('string'):

@@ -1712,11 +1712,11 @@ def set_info(self, info):
        self.__dict__.update(idx)

    def get_attr(self):
-        """ set the kind for this colummn """
+        """ set the kind for this column """
        self.kind = getattr(self.attrs, self.kind_attr, None)

    def set_attr(self):
-        """ set the kind for this colummn """
+        """ set the kind for this column """
        setattr(self.attrs, self.kind_attr, self.kind)

    def read_metadata(self, handler):
@@ -2160,14 +2160,14 @@ def convert(self, values, nan_rep, encoding):
        return self

    def get_attr(self):
-        """ get the data for this colummn """
+        """ get the data for this column """
        self.values = getattr(self.attrs, self.kind_attr, None)
        self.dtype = getattr(self.attrs, self.dtype_attr, None)
        self.meta = getattr(self.attrs, self.meta_attr, None)
        self.set_kind()

    def set_attr(self):
-        """ set the data for this colummn """
+        """ set the data for this column """
        setattr(self.attrs, self.kind_attr, self.values)
        setattr(self.attrs, self.meta_attr, self.meta)
        if self.dtype is not None:

pandas/io/stata.py
+2 -2

@@ -511,8 +511,8 @@ def _cast_to_stata_types(data):
    this range. If the int64 values are outside of the range of those
    perfectly representable as float64 values, a warning is raised.

-    bool columns are cast to int8. uint colums are converted to int of the
-    same size if there is no loss in precision, other wise are upcast to a
+    bool columns are cast to int8. uint columns are converted to int of the
+    same size if there is no loss in precision, otherwise are upcast to a
    larger type. uint64 is currently not supported since it is concerted to
    object in a DataFrame.
    """

pandas/plotting/_misc.py
+1 -1

@@ -413,7 +413,7 @@ def parallel_coordinates(frame, class_column, cols=None, ax=None, color=None,
    axvlines_kwds: keywords, optional
        Options to be passed to axvline method for vertical lines
    sort_labels: bool, False
-        Sort class_column labels, useful when assigning colours
+        Sort class_column labels, useful when assigning colors

        .. versionadded:: 0.20.0

pandas/plotting/_tools.py
+1 -1

@@ -329,7 +329,7 @@ def _handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey):
    if ncols > 1:
        for ax in axarr:
            # only the first column should get y labels -> set all other to
-            # off as we only have labels in teh first column and we always
+            # off as we only have labels in the first column and we always
            # have a subplot there, we can skip the layout test
            if ax.is_first_col():
                continue

pandas/tests/frame/test_convert_to.py
+2 -2

@@ -136,11 +136,11 @@ def test_to_records_with_unicode_index(self):
    def test_to_records_with_unicode_column_names(self):
        # xref issue: https://github.com/numpy/numpy/issues/2407
        # Issue #11879. to_records used to raise an exception when used
-        # with column names containing non ascii caracters in Python 2
+        # with column names containing non-ascii characters in Python 2
        result = DataFrame(data={u"accented_name_é": [1.0]}).to_records()

        # Note that numpy allows for unicode field names but dtypes need
-        # to be specified using dictionnary intsead of list of tuples.
+        # to be specified using dictionary instead of list of tuples.
        expected = np.rec.array(
            [(0, 1.0)],
            dtype={"names": ["index", u"accented_name_é"],

pandas/tests/groupby/test_transform.py
+1 -1

@@ -533,7 +533,7 @@ def test_cython_transform(self):
        for (op, args), targop in ops:
            if op != 'shift' and 'int' not in gb_target:
                # numeric apply fastpath promotes dtype so have
-                # to apply seperately and concat
+                # to apply separately and concat
                i = gb[['int']].apply(targop)
                f = gb[['float', 'float_missing']].apply(targop)
                expected = pd.concat([f, i], axis=1)
