CLN: Fix Spelling Errors #17535

Merged: 1 commit, Sep 15, 2017
10 changes: 5 additions & 5 deletions doc/source/advanced.rst
@@ -625,7 +625,7 @@ Index Types
We have discussed ``MultiIndex`` in the previous sections pretty extensively. ``DatetimeIndex`` and ``PeriodIndex``
are shown :ref:`here <timeseries.overview>`. ``TimedeltaIndex`` are :ref:`here <timedeltas.timedeltas>`.

-In the following sub-sections we will highlite some other index types.
+In the following sub-sections we will highlight some other index types.

.. _indexing.categoricalindex:

@@ -645,7 +645,7 @@ and allows efficient indexing and storage of an index with a large number of dup
df.dtypes
df.B.cat.categories

-Setting the index, will create create a ``CategoricalIndex``
+Setting the index, will create a ``CategoricalIndex``

.. ipython:: python

@@ -681,7 +681,7 @@ Groupby operations on the index will preserve the index nature as well
Reindexing operations, will return a resulting index based on the type of the passed
indexer, meaning that passing a list will return a plain-old-``Index``; indexing with
a ``Categorical`` will return a ``CategoricalIndex``, indexed according to the categories
-of the PASSED ``Categorical`` dtype. This allows one to arbitrarly index these even with
+of the PASSED ``Categorical`` dtype. This allows one to arbitrarily index these even with
values NOT in the categories, similarly to how you can reindex ANY pandas index.

.. ipython :: python
@@ -722,7 +722,7 @@ Int64Index and RangeIndex
Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``NDFrame`` objects.

``RangeIndex`` is a sub-class of ``Int64Index`` added in version 0.18.0, now providing the default index for all ``NDFrame`` objects.
-``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analagous to python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.
+``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analogous to python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.

.. _indexing.float64index:

@@ -963,7 +963,7 @@ index can be somewhat complicated. For example, the following does not work:
s.loc['c':'e'+1]

A very common use case is to limit a time series to start and end at two
-specific dates. To enable this, we made the design design to make label-based
+specific dates. To enable this, we made the design to make label-based
slicing include both endpoints:

.. ipython:: python
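As background for the slicing hunk above: pandas label-based (``.loc``) slicing includes both endpoints. A minimal plain-Python sketch of that behavior, where ``label_slice`` is a hypothetical helper and not pandas API:

```python
def label_slice(labels, start, stop):
    # Label-based slicing that includes BOTH endpoints, mirroring the
    # .loc semantics described in the hunk above.
    i = labels.index(start)
    j = labels.index(stop)
    return labels[i:j + 1]  # + 1 so the stop label is included

print(label_slice(list('abcde'), 'c', 'e'))  # ['c', 'd', 'e']
```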
2 changes: 1 addition & 1 deletion doc/source/api.rst
@@ -1291,7 +1291,7 @@ Index
-----

**Many of these methods or variants thereof are available on the objects
-that contain an index (Series/Dataframe) and those should most likely be
+that contain an index (Series/DataFrame) and those should most likely be
used before calling these methods directly.**

.. autosummary::
2 changes: 1 addition & 1 deletion doc/source/basics.rst
@@ -923,7 +923,7 @@ Passing a named function will yield that name for the row:
Aggregating with a dict
+++++++++++++++++++++++

-Passing a dictionary of column names to a scalar or a list of scalars, to ``DataFame.agg``
+Passing a dictionary of column names to a scalar or a list of scalars, to ``DataFrame.agg``
allows you to customize which functions are applied to which columns. Note that the results
are not in any particular order, you can use an ``OrderedDict`` instead to guarantee ordering.

2 changes: 1 addition & 1 deletion doc/source/computation.rst
@@ -654,7 +654,7 @@ aggregation with, outputting a DataFrame:

r['A'].agg([np.sum, np.mean, np.std])

-On a widowed DataFrame, you can pass a list of functions to apply to each
+On a windowed DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:

.. ipython:: python
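The windowed-aggregation hunk above mentions applying a list of functions per column. A rough stdlib-only sketch of that idea, where ``rolling_agg`` is illustrative and not the pandas API:

```python
import statistics

def rolling_agg(values, window, funcs):
    # Apply each function in `funcs` to every full-length rolling window.
    out = []
    for i in range(window - 1, len(values)):
        win = values[i - window + 1:i + 1]
        out.append({f.__name__: f(win) for f in funcs})
    return out

print(rolling_agg([1, 2, 3, 4], window=2, funcs=[sum, statistics.mean]))
# [{'sum': 3, 'mean': 1.5}, {'sum': 5, 'mean': 2.5}, {'sum': 7, 'mean': 3.5}]
```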
4 changes: 2 additions & 2 deletions doc/source/groupby.rst
@@ -561,7 +561,7 @@ must be either implemented on GroupBy or available via :ref:`dispatching

.. note::

-If you pass a dict to ``aggregate``, the ordering of the output colums is
+If you pass a dict to ``aggregate``, the ordering of the output columns is
non-deterministic. If you want to be sure the output columns will be in a specific
order, you can use an ``OrderedDict``. Compare the output of the following two commands:

@@ -1211,7 +1211,7 @@ Groupby by Indexer to 'resample' data

Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.

-In order to resample to work on indices that are non-datetimelike , the following procedure can be utilized.
+In order to resample to work on indices that are non-datetimelike, the following procedure can be utilized.

In the following examples, **df.index // 5** returns a binary array which is used to determine what gets selected for the groupby operation.

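The indexer trick in the hunk above (``df.index // 5``) gives consecutive rows the same integer label so they fall into the same group. A stdlib-only sketch of that bucketing, assuming a simple positional index:

```python
from itertools import groupby

values = list(range(12))
# Integer-dividing the positional index by 5 buckets rows into groups of five.
groups = {k: [v for _, v in grp]
          for k, grp in groupby(enumerate(values), key=lambda iv: iv[0] // 5)}
print(groups)  # {0: [0, 1, 2, 3, 4], 1: [5, 6, 7, 8, 9], 2: [10, 11]}
```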
2 changes: 1 addition & 1 deletion doc/source/indexing.rst
@@ -714,7 +714,7 @@ Finally, one can also set a seed for ``sample``'s random number generator using
Setting With Enlargement
------------------------

-The ``.loc/[]`` operations can perform enlargement when setting a non-existant key for that axis.
+The ``.loc/[]`` operations can perform enlargement when setting a non-existent key for that axis.

In the ``Series`` case this is effectively an appending operation

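For the enlargement hunk above: setting a non-existent key grows the index rather than raising, much like assignment into a plain dict. A minimal stand-in sketch:

```python
# A plain dict stands in for a Series' label -> value mapping.
s = {'a': 1, 'b': 2}
s['c'] = 3  # non-existent key: the mapping is enlarged, not an error
print(s)    # {'a': 1, 'b': 2, 'c': 3}
```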
2 changes: 1 addition & 1 deletion doc/source/io.rst
@@ -3077,7 +3077,7 @@ Compressed pickle files

.. versionadded:: 0.20.0

-:func:`read_pickle`, :meth:`DataFame.to_pickle` and :meth:`Series.to_pickle` can read
+:func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle` can read
and write compressed pickle files. The compression types of ``gzip``, ``bz2``, ``xz`` are supported for reading and writing.
`zip`` file supports read only and must contain only one data file
to be read in.
6 changes: 3 additions & 3 deletions doc/source/merging.rst
@@ -1329,7 +1329,7 @@ By default we are taking the asof of the quotes.
on='time',
by='ticker')

-We only asof within ``2ms`` betwen the quote time and the trade time.
+We only asof within ``2ms`` between the quote time and the trade time.

.. ipython:: python

@@ -1338,8 +1338,8 @@ We only asof within ``2ms`` betwen the quote time and the trade time.
by='ticker',
tolerance=pd.Timedelta('2ms'))

-We only asof within ``10ms`` betwen the quote time and the trade time and we exclude exact matches on time.
-Note that though we exclude the exact matches (of the quotes), prior quotes DO propogate to that point
+We only asof within ``10ms`` between the quote time and the trade time and we exclude exact matches on time.
+Note that though we exclude the exact matches (of the quotes), prior quotes DO propagate to that point
in time.

.. ipython:: python
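The asof hunks above match each trade to the latest prior quote, but only within the given time window. A stdlib sketch of that matching rule, where ``asof_match`` is a hypothetical helper and not the ``merge_asof`` API:

```python
from bisect import bisect_right

def asof_match(quote_times, trade_time, tolerance):
    # Latest quote at or before the trade, accepted only when it is
    # within `tolerance` of the trade time; otherwise no match.
    pos = bisect_right(quote_times, trade_time) - 1
    if pos >= 0 and trade_time - quote_times[pos] <= tolerance:
        return quote_times[pos]
    return None

quotes = [10, 14, 30]
print(asof_match(quotes, 15, tolerance=2))  # 14 (1 unit back, within tolerance)
print(asof_match(quotes, 25, tolerance=2))  # None (nearest prior quote is 11 back)
```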
2 changes: 1 addition & 1 deletion doc/source/missing_data.rst
@@ -320,7 +320,7 @@ Interpolation

The ``limit_direction`` keyword argument was added.

-Both Series and Dataframe objects have an ``interpolate`` method that, by default,
+Both Series and DataFrame objects have an ``interpolate`` method that, by default,
performs linear interpolation at missing datapoints.

.. ipython:: python
4 changes: 2 additions & 2 deletions doc/source/options.rst
@@ -313,9 +313,9 @@ display.large_repr truncate For DataFrames exceeding max_ro
display.latex.repr False Whether to produce a latex DataFrame
representation for jupyter frontends
that support it.
-display.latex.escape True Escapes special caracters in Dataframes, when
+display.latex.escape True Escapes special characters in DataFrames, when
using the to_latex method.
-display.latex.longtable False Specifies if the to_latex method of a Dataframe
+display.latex.longtable False Specifies if the to_latex method of a DataFrame
uses the longtable format.
display.latex.multicolumn True Combines columns when using a MultiIndex
display.latex.multicolumn_format 'l' Alignment of multicolumn labels
2 changes: 1 addition & 1 deletion doc/source/reshaping.rst
@@ -156,7 +156,7 @@ the level numbers:
stacked.unstack('second')

Notice that the ``stack`` and ``unstack`` methods implicitly sort the index
-levels involved. Hence a call to ``stack`` and then ``unstack``, or viceversa,
+levels involved. Hence a call to ``stack`` and then ``unstack``, or vice versa,
will result in a **sorted** copy of the original DataFrame or Series:

.. ipython:: python
2 changes: 1 addition & 1 deletion doc/source/sparse.rst
@@ -132,7 +132,7 @@ dtype, ``fill_value`` default changes:
s.to_sparse()

You can change the dtype using ``.astype()``, the result is also sparse. Note that
-``.astype()`` also affects to the ``fill_value`` to keep its dense represantation.
+``.astype()`` also affects to the ``fill_value`` to keep its dense representation.


.. ipython:: python
2 changes: 1 addition & 1 deletion doc/source/style.ipynb
@@ -169,7 +169,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Notice the similarity with the standard `df.applymap`, which operates on DataFrames elementwise. We want you to be able to resuse your existing knowledge of how to interact with DataFrames.\n",
+"Notice the similarity with the standard `df.applymap`, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.\n",
"\n",
"Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a `<style>` tag. This will be a common theme.\n",
"\n",
18 changes: 9 additions & 9 deletions doc/source/timeseries.rst
@@ -1054,7 +1054,7 @@ as ``BusinessHour`` except that it skips specified custom holidays.
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
dt + bhour_us * 2

-You can use keyword arguments suported by either ``BusinessHour`` and ``CustomBusinessDay``.
+You can use keyword arguments supported by either ``BusinessHour`` and ``CustomBusinessDay``.

.. ipython:: python

@@ -1088,7 +1088,7 @@ frequencies. We will refer to these aliases as *offset aliases*.
"BMS", "business month start frequency"
"CBMS", "custom business month start frequency"
"Q", "quarter end frequency"
-"BQ", "business quarter endfrequency"
+"BQ", "business quarter end frequency"
"QS", "quarter start frequency"
"BQS", "business quarter start frequency"
"A, Y", "year end frequency"
@@ -1132,13 +1132,13 @@ For some frequencies you can specify an anchoring suffix:
:header: "Alias", "Description"
:widths: 15, 100

-"W\-SUN", "weekly frequency (sundays). Same as 'W'"
-"W\-MON", "weekly frequency (mondays)"
-"W\-TUE", "weekly frequency (tuesdays)"
-"W\-WED", "weekly frequency (wednesdays)"
-"W\-THU", "weekly frequency (thursdays)"
-"W\-FRI", "weekly frequency (fridays)"
-"W\-SAT", "weekly frequency (saturdays)"
+"W\-SUN", "weekly frequency (Sundays). Same as 'W'"
+"W\-MON", "weekly frequency (Mondays)"
+"W\-TUE", "weekly frequency (Tuesdays)"
+"W\-WED", "weekly frequency (Wednesdays)"
+"W\-THU", "weekly frequency (Thursdays)"
+"W\-FRI", "weekly frequency (Fridays)"
+"W\-SAT", "weekly frequency (Saturdays)"
"(B)Q(S)\-DEC", "quarterly frequency, year ends in December. Same as 'Q'"
"(B)Q(S)\-JAN", "quarterly frequency, year ends in January"
"(B)Q(S)\-FEB", "quarterly frequency, year ends in February"
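The anchored weekly aliases in the hunk above ('W-MON', 'W-TUE', ...) pin a weekly frequency to a particular weekday. A stdlib sketch of generating such anchored dates, where ``weekly_anchored`` is illustrative only:

```python
from datetime import date, timedelta

def weekly_anchored(start, end, weekday):
    # All dates in [start, end] falling on `weekday` (0=Monday .. 6=Sunday),
    # a rough analogue of a 'W-XXX' anchored weekly frequency.
    d = start + timedelta(days=(weekday - start.weekday()) % 7)
    out = []
    while d <= end:
        out.append(d)
        d += timedelta(days=7)
    return out

# Mondays ('W-MON') in the first half of January 2017:
print(weekly_anchored(date(2017, 1, 1), date(2017, 1, 15), weekday=0))
# [datetime.date(2017, 1, 2), datetime.date(2017, 1, 9)]
```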
2 changes: 1 addition & 1 deletion doc/source/visualization.rst
@@ -261,7 +261,7 @@ Histogram can be stacked by ``stacked=True``. Bin size can be changed by ``bins`

plt.close('all')

-You can pass other keywords supported by matplotlib ``hist``. For example, horizontal and cumulative histgram can be drawn by ``orientation='horizontal'`` and ``cumulative='True'``.
+You can pass other keywords supported by matplotlib ``hist``. For example, horizontal and cumulative histogram can be drawn by ``orientation='horizontal'`` and ``cumulative=True``.

.. ipython:: python

2 changes: 1 addition & 1 deletion pandas/core/algorithms.py
@@ -1475,7 +1475,7 @@ def func(arr, indexer, out, fill_value=np.nan):
def diff(arr, n, axis=0):
"""
difference of n between self,
-analagoust to s-s.shift(n)
+analogous to s-s.shift(n)

Parameters
----------
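The docstring in the hunk above describes ``diff`` as analogous to ``s - s.shift(n)``. A plain-Python sketch of that equivalence, with ``None`` standing in for the leading NaNs:

```python
def diff(values, n=1):
    # Element-wise difference against the sequence shifted by n;
    # the first n slots have no predecessor, so they are None.
    return [None] * n + [values[i] - values[i - n]
                         for i in range(n, len(values))]

print(diff([1, 4, 9, 16]))       # [None, 3, 5, 7]
print(diff([1, 4, 9, 16], n=2))  # [None, None, 8, 12]
```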
2 changes: 1 addition & 1 deletion pandas/core/indexes/interval.py
@@ -918,7 +918,7 @@ def take(self, indices, axis=0, allow_fill=True,
except ValueError:

# we need to coerce; migth have NA's in an
-# interger dtype
+# integer dtype
new_left = taker(left.astype(float))
new_right = taker(right.astype(float))

2 changes: 1 addition & 1 deletion pandas/core/reshape/concat.py
@@ -72,7 +72,7 @@ def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
The keys, levels, and names arguments are all optional.

A walkthrough of how this method fits in with other tools for combining
-panda objects can be found `here
+pandas objects can be found `here
<http://pandas.pydata.org/pandas-docs/stable/merging.html>`__.

See Also
6 changes: 3 additions & 3 deletions pandas/core/reshape/merge.py
@@ -447,7 +447,7 @@ def merge_asof(left, right, on=None,
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN

-We only asof within 2ms betwen the quote time and the trade time
+We only asof within 2ms between the quote time and the trade time

>>> pd.merge_asof(trades, quotes,
... on='time',
@@ -460,9 +460,9 @@ def merge_asof(left, right, on=None,
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN

-We only asof within 10ms betwen the quote time and the trade time
+We only asof within 10ms between the quote time and the trade time
and we exclude exact matches on time. However *prior* data will
-propogate forward
+propagate forward

>>> pd.merge_asof(trades, quotes,
... on='time',
2 changes: 1 addition & 1 deletion pandas/core/reshape/tile.py
@@ -359,7 +359,7 @@ def _preprocess_for_cut(x):
"""
handles preprocessing for cut where we convert passed
input to array, strip the index information and store it
-seperately
+separately
"""
x_is_series = isinstance(x, Series)
series_index = None
4 changes: 2 additions & 2 deletions pandas/io/formats/excel.py
@@ -263,7 +263,7 @@ def build_font(self, props):
else None),
'strike': ('line-through' in decoration) or None,
'color': self.color_to_excel(props.get('color')),
-# shadow if nonzero digit before shadow colour
+# shadow if nonzero digit before shadow color
'shadow': (bool(re.search('^[^#(]*[1-9]',
props['text-shadow']))
if 'text-shadow' in props else None),
@@ -304,7 +304,7 @@ def color_to_excel(self, val):
try:
return self.NAMED_COLORS[val]
except KeyError:
-warnings.warn('Unhandled colour format: {val!r}'.format(val=val),
+warnings.warn('Unhandled color format: {val!r}'.format(val=val),
CSSWarning)


12 changes: 6 additions & 6 deletions pandas/io/pytables.py
@@ -605,7 +605,7 @@ def open(self, mode='a', **kwargs):

except (Exception) as e:

-# trying to read from a non-existant file causes an error which
+# trying to read from a non-existent file causes an error which
# is not part of IOError, make it one
if self._mode == 'r' and 'Unable to open/create file' in str(e):
raise IOError(str(e))
@@ -1621,7 +1621,7 @@ def __iter__(self):

def maybe_set_size(self, min_itemsize=None, **kwargs):
""" maybe set a string col itemsize:
-min_itemsize can be an interger or a dict with this columns name
+min_itemsize can be an integer or a dict with this columns name
with an integer size """
if _ensure_decoded(self.kind) == u('string'):

@@ -1712,11 +1712,11 @@ def set_info(self, info):
self.__dict__.update(idx)

def get_attr(self):
-""" set the kind for this colummn """
+""" set the kind for this column """
self.kind = getattr(self.attrs, self.kind_attr, None)

def set_attr(self):
-""" set the kind for this colummn """
+""" set the kind for this column """
setattr(self.attrs, self.kind_attr, self.kind)

def read_metadata(self, handler):
@@ -2160,14 +2160,14 @@ def convert(self, values, nan_rep, encoding):
return self

def get_attr(self):
-""" get the data for this colummn """
+""" get the data for this column """
self.values = getattr(self.attrs, self.kind_attr, None)
self.dtype = getattr(self.attrs, self.dtype_attr, None)
self.meta = getattr(self.attrs, self.meta_attr, None)
self.set_kind()

def set_attr(self):
-""" set the data for this colummn """
+""" set the data for this column """
setattr(self.attrs, self.kind_attr, self.values)
setattr(self.attrs, self.meta_attr, self.meta)
if self.dtype is not None:
4 changes: 2 additions & 2 deletions pandas/io/stata.py
@@ -511,8 +511,8 @@ def _cast_to_stata_types(data):
this range. If the int64 values are outside of the range of those
perfectly representable as float64 values, a warning is raised.

-bool columns are cast to int8. uint colums are converted to int of the
-same size if there is no loss in precision, other wise are upcast to a
+bool columns are cast to int8. uint columns are converted to int of the
+same size if there is no loss in precision, otherwise are upcast to a
larger type. uint64 is currently not supported since it is concerted to
object in a DataFrame.
"""
2 changes: 1 addition & 1 deletion pandas/plotting/_misc.py
@@ -413,7 +413,7 @@ def parallel_coordinates(frame, class_column, cols=None, ax=None, color=None,
axvlines_kwds: keywords, optional
Options to be passed to axvline method for vertical lines
sort_labels: bool, False
-Sort class_column labels, useful when assigning colours
+Sort class_column labels, useful when assigning colors

.. versionadded:: 0.20.0

2 changes: 1 addition & 1 deletion pandas/plotting/_tools.py
@@ -329,7 +329,7 @@ def _handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey):
if ncols > 1:
for ax in axarr:
# only the first column should get y labels -> set all other to
-# off as we only have labels in teh first column and we always
+# off as we only have labels in the first column and we always
# have a subplot there, we can skip the layout test
if ax.is_first_col():
continue
4 changes: 2 additions & 2 deletions pandas/tests/frame/test_convert_to.py
@@ -136,11 +136,11 @@ def test_to_records_with_unicode_index(self):
def test_to_records_with_unicode_column_names(self):
# xref issue: https://github.com/numpy/numpy/issues/2407
# Issue #11879. to_records used to raise an exception when used
-# with column names containing non ascii caracters in Python 2
+# with column names containing non-ascii characters in Python 2
result = DataFrame(data={u"accented_name_é": [1.0]}).to_records()

# Note that numpy allows for unicode field names but dtypes need
-# to be specified using dictionnary intsead of list of tuples.
+# to be specified using dictionary instead of list of tuples.
expected = np.rec.array(
[(0, 1.0)],
dtype={"names": ["index", u"accented_name_é"],
2 changes: 1 addition & 1 deletion pandas/tests/groupby/test_transform.py
@@ -533,7 +533,7 @@ def test_cython_transform(self):
for (op, args), targop in ops:
if op != 'shift' and 'int' not in gb_target:
# numeric apply fastpath promotes dtype so have
-# to apply seperately and concat
+# to apply separately and concat
i = gb[['int']].apply(targop)
f = gb[['float', 'float_missing']].apply(targop)
expected = pd.concat([f, i], axis=1)