Fix typos #30481

Merged
1 commit merged on Dec 26, 2019

2 changes: 1 addition & 1 deletion doc/source/user_guide/advanced.rst
@@ -573,7 +573,7 @@ When working with an ``Index`` object directly, rather than via a ``DataFrame``,
.. code-block:: none

>>> mi.levels[0].name = 'name via level'
>>> mi.names[0] # only works for older panads
>>> mi.names[0] # only works for older pandas
'name via level'

As of pandas 1.0, this will *silently* fail to update the names
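
For reference, the supported way to rename a level is through the index itself, e.g. ``set_names``, rather than mutating ``mi.levels[0].name``. A minimal sketch (the index and names here are made up for illustration, assuming pandas >= 0.24):

>>> import pandas as pd
>>> mi = pd.MultiIndex.from_product([["a", "b"], [1, 2]], names=["x", "y"])
>>> mi = mi.set_names("name via set_names", level=0)  # returns a new MultiIndex
>>> mi.names
FrozenList(['name via set_names', 'y'])
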
2 changes: 1 addition & 1 deletion doc/source/user_guide/missing_data.rst
@@ -791,7 +791,7 @@ the nullable :doc:`integer <integer_na>`, boolean and
:ref:`dedicated string <text.types>` data types as the missing value indicator.

The goal of ``pd.NA`` is provide a "missing" indicator that can be used
consistently accross data types (instead of ``np.nan``, ``None`` or ``pd.NaT``
consistently across data types (instead of ``np.nan``, ``None`` or ``pd.NaT``
depending on the data type).

For example, when having missing values in a Series with the nullable integer
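
A quick illustration of the point above: with the nullable data types, the same ``pd.NA`` scalar is used regardless of the underlying dtype (a minimal sketch, assuming pandas >= 1.0):

>>> import pandas as pd
>>> pd.Series([1, None, 3], dtype="Int64")[1] is pd.NA
True
>>> pd.Series(["a", None], dtype="string")[1] is pd.NA
True
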
4 changes: 2 additions & 2 deletions doc/source/user_guide/text.rst
@@ -101,10 +101,10 @@ 1. For ``StringDtype``, :ref:`string accessor methods<api.series.str>`
2. Some string methods, like :meth:`Series.str.decode` are not available
on ``StringArray`` because ``StringArray`` only holds strings, not
bytes.
3. In comparision operations, :class:`arrays.StringArray` and ``Series`` backed
3. In comparison operations, :class:`arrays.StringArray` and ``Series`` backed
by a ``StringArray`` will return an object with :class:`BooleanDtype`,
rather than a ``bool`` dtype object. Missing values in a ``StringArray``
will propagate in comparision operations, rather than always comparing
will propagate in comparison operations, rather than always comparing
unequal like :attr:`numpy.nan`.

Everything else that follows in the rest of this document applies equally to
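
The comparison behavior described in item 3 above, as a minimal sketch (assuming pandas >= 1.0):

>>> import pandas as pd
>>> s = pd.Series(["a", None, "c"], dtype="string")
>>> eq = s == "a"          # backed by a BooleanArray, not a bool ndarray
>>> eq.dtype.name
'boolean'
>>> eq[1] is pd.NA         # the missing value propagates instead of comparing unequal
True
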
4 changes: 2 additions & 2 deletions doc/source/whatsnew/v1.0.0.rst
@@ -111,7 +111,7 @@ A new ``pd.NA`` value (singleton) is introduced to represent scalar missing
values. Up to now, ``np.nan`` is used for this for float data, ``np.nan`` or
``None`` for object-dtype data and ``pd.NaT`` for datetime-like data. The
goal of ``pd.NA`` is provide a "missing" indicator that can be used
consistently accross data types. For now, the nullable integer and boolean
consistently across data types. For now, the nullable integer and boolean
data types and the new string data type make use of ``pd.NA`` (:issue:`28095`).

.. warning::
@@ -826,7 +826,7 @@ Plotting
- Bug where :meth:`DataFrame.boxplot` would not accept a `color` parameter like `DataFrame.plot.box` (:issue:`26214`)
- Bug in the ``xticks`` argument being ignored for :meth:`DataFrame.plot.bar` (:issue:`14119`)
- :func:`set_option` now validates that the plot backend provided to ``'plotting.backend'`` implements the backend when the option is set, rather than when a plot is created (:issue:`28163`)
- :meth:`DataFrame.plot` now allow a ``backend`` keyword arugment to allow changing between backends in one session (:issue:`28619`).
- :meth:`DataFrame.plot` now allow a ``backend`` keyword argument to allow changing between backends in one session (:issue:`28619`).
- Bug in color validation incorrectly raising for non-color styles (:issue:`29122`).

Groupby/resample/rolling
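
The whatsnew entry above about the ``backend`` keyword, in practice (a minimal sketch; matplotlib is the default backend, and any third-party backend would have to be installed separately):

>>> import pandas as pd
>>> df = pd.DataFrame({"x": [1, 2, 3], "y": [3, 2, 1]})
>>> ax = df.plot(backend="matplotlib")                # pick a backend for a single call
>>> pd.set_option("plotting.backend", "matplotlib")   # validated when set, not when plotting
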
6 changes: 3 additions & 3 deletions pandas/core/arrays/datetimelike.py
@@ -915,7 +915,7 @@ def _is_unique(self):
__rdivmod__ = make_invalid_op("__rdivmod__")

def _add_datetimelike_scalar(self, other):
# Overriden by TimedeltaArray
# Overridden by TimedeltaArray
raise TypeError(f"cannot add {type(self).__name__} and {type(other).__name__}")

_add_datetime_arraylike = _add_datetimelike_scalar
@@ -928,7 +928,7 @@ def _sub_datetimelike_scalar(self, other):
_sub_datetime_arraylike = _sub_datetimelike_scalar

def _sub_period(self, other):
# Overriden by PeriodArray
# Overridden by PeriodArray
raise TypeError(f"cannot subtract Period from a {type(self).__name__}")

def _add_offset(self, offset):
@@ -1085,7 +1085,7 @@ def _addsub_int_array(self, other, op):
-------
result : same class as self
"""
# _addsub_int_array is overriden by PeriodArray
# _addsub_int_array is overridden by PeriodArray
assert not is_period_dtype(self)
assert op in [operator.add, operator.sub]

2 changes: 1 addition & 1 deletion pandas/core/groupby/groupby.py
@@ -4,7 +4,7 @@ class providing the base-class of operations.

The SeriesGroupBy and DataFrameGroupBy sub-class
(defined in pandas.core.groupby.generic)
expose these user-facing objects to provide specific functionailty.
expose these user-facing objects to provide specific functionality.
"""

from contextlib import contextmanager
2 changes: 1 addition & 1 deletion pandas/core/indexes/interval.py
@@ -978,7 +978,7 @@ def get_indexer(
right_indexer = self.right.get_indexer(target_as_index.right)
indexer = np.where(left_indexer == right_indexer, left_indexer, -1)
elif is_categorical(target_as_index):
# get an indexer for unique categories then propogate to codes via take_1d
# get an indexer for unique categories then propagate to codes via take_1d
categories_indexer = self.get_indexer(target_as_index.categories)
indexer = take_1d(categories_indexer, target_as_index.codes, fill_value=-1)
elif not is_object_dtype(target_as_index):
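
The comment fixed in this hunk describes a small optimization: look up each unique category once, then fan the result out to every element through the categorical's codes. A rough numpy sketch of the idea (the values are made up, and ``take_1d`` is approximated with ``np.where``):

import numpy as np

categories_indexer = np.array([5, -1, 7])   # one lookup result per unique category
codes = np.array([0, 2, 2, 1, 0, -1])       # per-element category code, -1 means missing
indexer = np.where(codes != -1, categories_indexer[codes], -1)
# array([ 5,  7,  7, -1,  5, -1])
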
6 changes: 3 additions & 3 deletions pandas/core/internals/blocks.py
@@ -1449,7 +1449,7 @@ def quantile(self, qs, interpolation="linear", axis=0):
-------
Block
"""
# We should always have ndim == 2 becase Series dispatches to DataFrame
# We should always have ndim == 2 because Series dispatches to DataFrame
assert self.ndim == 2

values = self.get_values()
@@ -2432,7 +2432,7 @@ def fillna(self, value, **kwargs):
# Deprecation GH#24694, GH#19233
raise TypeError(
"Passing integers to fillna for timedelta64[ns] dtype is no "
"longer supporetd. To obtain the old behavior, pass "
"longer supported. To obtain the old behavior, pass "
"`pd.Timedelta(seconds=n)` instead."
)
return super().fillna(value, **kwargs)
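
The message fixed here is the error raised when an integer is passed to ``fillna`` on a ``timedelta64[ns]`` column; the supported spelling is a ``Timedelta``. A minimal sketch, assuming pandas >= 1.0:

>>> import pandas as pd
>>> s = pd.Series(pd.to_timedelta([1, None, 3], unit="s"))
>>> filled = s.fillna(pd.Timedelta(seconds=2))   # OK
>>> s.fillna(2)                                  # raises the TypeError quoted above
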
@@ -2971,7 +2971,7 @@ def make_block(values, placement, klass=None, ndim=None, dtype=None):


def _extend_blocks(result, blocks=None):
""" return a new extended blocks, givin the result """
""" return a new extended blocks, given the result """
from pandas.core.internals import BlockManager

if blocks is None:
4 changes: 2 additions & 2 deletions pandas/core/nanops.py
@@ -1337,7 +1337,7 @@ def f(x, y):

def _nanpercentile_1d(values, mask, q, na_value, interpolation):
"""
Wraper for np.percentile that skips missing values, specialized to
Wrapper for np.percentile that skips missing values, specialized to
1-dimensional case.

Parameters
@@ -1368,7 +1368,7 @@ def _nanpercentile_1d(values, mask, q, na_value, interpolation):

def nanpercentile(values, q, axis, na_value, mask, ndim, interpolation):
"""
Wraper for np.percentile that skips missing values.
Wrapper for np.percentile that skips missing values.

Parameters
----------
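
Both docstrings touched in this file describe wrappers around ``np.percentile`` that skip missing values. A rough one-dimensional sketch of the idea (not the pandas implementation; the function name and the empty-input fallback are assumptions):

import numpy as np

def nanpercentile_1d(values, mask, q, na_value=np.nan, interpolation="linear"):
    # drop the masked entries, then defer to np.percentile
    unmasked = values[~mask]
    if unmasked.size == 0:
        return na_value if np.isscalar(q) else np.full(len(q), na_value)
    return np.percentile(unmasked, q, interpolation=interpolation)

vals = np.array([1.0, np.nan, 3.0, 4.0])
print(nanpercentile_1d(vals, np.isnan(vals), q=50))  # 3.0
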
2 changes: 1 addition & 1 deletion pandas/core/series.py
@@ -727,7 +727,7 @@ def __array__(self, dtype=None):
Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')],
dtype=object)

Or the values may be localized to UTC and the tzinfo discared with
Or the values may be localized to UTC and the tzinfo discarded with
``dtype='datetime64[ns]'``

>>> np.asarray(tzser, dtype="datetime64[ns]") # doctest: +ELLIPSIS
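
The docstring fixed here contrasts the two conversions: by default ``np.asarray`` on a tz-aware Series yields an object array of ``Timestamp`` objects, while requesting ``datetime64[ns]`` converts to UTC and drops the timezone info. A minimal sketch, assuming pandas >= 1.0:

>>> import numpy as np
>>> import pandas as pd
>>> tzser = pd.Series(pd.date_range("2000", periods=2, tz="CET"))
>>> np.asarray(tzser).dtype                          # object array of tz-aware Timestamps
dtype('O')
>>> np.asarray(tzser, dtype="datetime64[ns]").dtype  # UTC values, tzinfo discarded
dtype('<M8[ns]')
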
4 changes: 2 additions & 2 deletions pandas/io/pytables.py
@@ -3297,7 +3297,7 @@ def data_orientation(self):
def queryables(self) -> Dict[str, Any]:
""" return a dict of the kinds allowable columns for this object """

# mypy doesnt recognize DataFrame._AXIS_NAMES, so we re-write it here
# mypy doesn't recognize DataFrame._AXIS_NAMES, so we re-write it here
axis_names = {0: "index", 1: "columns"}

# compute the values_axes queryables
@@ -4993,7 +4993,7 @@ def _get_data_and_dtype_name(data: Union[np.ndarray, ABCExtensionArray]):
if data.dtype.kind in ["m", "M"]:
data = np.asarray(data.view("i8"))
# TODO: we used to reshape for the dt64tz case, but no longer
# doing that doesnt seem to break anything. why?
# doing that doesn't seem to break anything. why?

elif isinstance(data, PeriodIndex):
data = data.asi8
2 changes: 1 addition & 1 deletion pandas/tests/arithmetic/conftest.py
@@ -248,7 +248,7 @@ def box_df_fail(request):
def box_transpose_fail(request):
"""
Fixture similar to `box` but testing both transpose cases for DataFrame,
with the tranpose=True case xfailed.
with the transpose=True case xfailed.
"""
# GH#23620
return request.param
2 changes: 1 addition & 1 deletion pandas/tests/arrays/test_integer.py
@@ -193,7 +193,7 @@ def _check_op_integer(self, result, expected, mask, s, op_name, other):
# to compare properly, we convert the expected
# to float, mask to nans and convert infs
# if we have uints then we process as uints
# then conert to float
# then convert to float
# and we ultimately want to create a IntArray
# for comparisons

4 changes: 2 additions & 2 deletions pandas/tests/indexes/datetimes/test_indexing.py
@@ -457,7 +457,7 @@ def test_insert(self):
def test_delete(self):
idx = date_range(start="2000-01-01", periods=5, freq="M", name="idx")

# prserve freq
# preserve freq
expected_0 = date_range(start="2000-02-01", periods=4, freq="M", name="idx")
expected_4 = date_range(start="2000-01-01", periods=4, freq="M", name="idx")

@@ -511,7 +511,7 @@ def test_delete(self):
def test_delete_slice(self):
idx = date_range(start="2000-01-01", periods=10, freq="D", name="idx")

# prserve freq
# preserve freq
expected_0_2 = date_range(start="2000-01-04", periods=7, freq="D", name="idx")
expected_7_9 = date_range(start="2000-01-01", periods=7, freq="D", name="idx")

2 changes: 1 addition & 1 deletion pandas/tests/indexing/interval/test_interval.py
@@ -64,7 +64,7 @@ def test_non_matching(self):
s = self.s

# this is a departure from our current
# indexin scheme, but simpler
# indexing scheme, but simpler
with pytest.raises(KeyError, match="^$"):
s.loc[[-1, 3, 4, 5]]

2 changes: 1 addition & 1 deletion pandas/tests/io/formats/test_format.py
@@ -446,7 +446,7 @@ def mkframe(n):
assert not has_truncated_repr(df6)

with option_context("display.max_rows", 9, "display.max_columns", 10):
# out vertical bounds can not result in exanded repr
# out vertical bounds can not result in expanded repr
assert not has_expanded_repr(df10)
assert has_vertically_truncated_repr(df10)

2 changes: 1 addition & 1 deletion pandas/tests/io/pytables/test_store.py
@@ -1273,7 +1273,7 @@ def test_append_with_different_block_ordering(self, setup_path):
with pytest.raises(ValueError):
store.append("df", df)

# store multile additional fields in different blocks
# store multiple additional fields in different blocks
df["float_3"] = Series([1.0] * len(df), dtype="float64")
with pytest.raises(ValueError):
store.append("df", df)
6 changes: 3 additions & 3 deletions pandas/tests/plotting/test_frame.py
@@ -555,14 +555,14 @@ def test_subplots_timeseries_y_axis_not_supported(self):
period:
since period isn't yet implemented in ``select_dtypes``
and because it will need a custom value converter +
tick formater (as was done for x-axis plots)
tick formatter (as was done for x-axis plots)

categorical:
because it will need a custom value converter +
tick formater (also doesn't work for x-axis, as of now)
tick formatter (also doesn't work for x-axis, as of now)

datetime_mixed_tz:
because of the way how pandas handels ``Series`` of
because of the way how pandas handles ``Series`` of
``datetime`` objects with different timezone,
generally converting ``datetime`` objects in a tz-aware
form could help with this problem
2 changes: 1 addition & 1 deletion pandas/tests/scalar/timedelta/test_timedelta.py
@@ -20,7 +20,7 @@ def test_arithmetic_overflow(self):
Timestamp("1700-01-01") + timedelta(days=13 * 19999)

def test_array_timedelta_floordiv(self):
# deprected GH#19761, enforced GH#29797
# deprecated GH#19761, enforced GH#29797
ints = pd.date_range("2012-10-08", periods=4, freq="D").view("i8")

with pytest.raises(TypeError, match="Invalid dtype"):
4 changes: 2 additions & 2 deletions pandas/tests/series/test_api.py
@@ -310,7 +310,7 @@ def test_iteritems_strings(self, string_series):
for idx, val in string_series.iteritems():
assert val == string_series[idx]

# assert is lazy (genrators don't define reverse, lists do)
# assert is lazy (generators don't define reverse, lists do)
assert not hasattr(string_series.iteritems(), "reverse")

def test_items_datetimes(self, datetime_series):
@@ -321,7 +321,7 @@ def test_items_strings(self, string_series):
for idx, val in string_series.items():
assert val == string_series[idx]

# assert is lazy (genrators don't define reverse, lists do)
# assert is lazy (generators don't define reverse, lists do)
assert not hasattr(string_series.items(), "reverse")

def test_raise_on_info(self):
2 changes: 1 addition & 1 deletion scripts/tests/test_validate_docstrings.py
@@ -719,7 +719,7 @@ def no_type(self):

def no_description(self):
"""
Provides type but no descrption.
Provides type but no description.

Returns
-------
2 changes: 1 addition & 1 deletion setup.py
@@ -510,7 +510,7 @@ def maybe_cythonize(extensions, *args, **kwargs):
if hasattr(ext, "include_dirs") and numpy_incl not in ext.include_dirs:
ext.include_dirs.append(numpy_incl)

# reuse any parallel arguments provided for compliation to cythonize
# reuse any parallel arguments provided for compilation to cythonize
parser = argparse.ArgumentParser()
parser.add_argument("-j", type=int)
parser.add_argument("--parallel", type=int)