
[pre-commit.ci] pre-commit autoupdate #55846


Merged · 9 commits · Nov 7, 2023
18 changes: 9 additions & 9 deletions .pre-commit-config.yaml
@@ -20,11 +20,11 @@ ci:
repos:
- repo: https://github.com/hauntsaninja/black-pre-commit-mirror
# black compiled with mypyc
-rev: 23.9.1
+rev: 23.10.1
hooks:
- id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
-rev: v0.0.291
+rev: v0.1.4
hooks:
- id: ruff
args: [--exit-non-zero-on-fix]
@@ -34,14 +34,14 @@ repos:
alias: ruff-selected-autofixes
args: [--select, "ANN001,ANN204", --fix-only, --exit-non-zero-on-fix]
- repo: https://github.com/jendrikseipp/vulture
-rev: 'v2.9.1'
+rev: 'v2.10'
hooks:
- id: vulture
entry: python scripts/run_vulture.py
pass_filenames: true
require_serial: false
- repo: https://github.com/codespell-project/codespell
-rev: v2.2.5
+rev: v2.2.6
hooks:
- id: codespell
types_or: [python, rst, markdown, cython, c]
@@ -52,7 +52,7 @@ repos:
- id: cython-lint
- id: double-quote-cython-strings
- repo: https://github.com/pre-commit/pre-commit-hooks
-rev: v4.4.0
+rev: v4.5.0
hooks:
- id: check-ast
- id: check-case-conflict
@@ -71,7 +71,7 @@ repos:
args: [--remove]
- id: trailing-whitespace
- repo: https://github.com/pylint-dev/pylint
-rev: v3.0.0b0
+rev: v3.0.1
hooks:
- id: pylint
stages: [manual]
@@ -94,7 +94,7 @@ repos:
hooks:
- id: isort
- repo: https://github.com/asottile/pyupgrade
-rev: v3.13.0
+rev: v3.15.0
hooks:
- id: pyupgrade
args: [--py39-plus]
@@ -111,11 +111,11 @@ repos:
types: [text] # overwrite types: [rst]
types_or: [python, rst]
- repo: https://github.com/sphinx-contrib/sphinx-lint
-rev: v0.6.8
+rev: v0.8.1
hooks:
- id: sphinx-lint
- repo: https://github.com/pre-commit/mirrors-clang-format
-rev: ea59a72
+rev: v17.0.4
hooks:
- id: clang-format
files: ^pandas/_libs/src|^pandas/_libs/include
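The bumps above are mechanical. As a hedged sketch (assuming the `pre-commit` package is installed), the same update can be reproduced locally:

```python
# A minimal sketch of reproducing this kind of bump locally; assumes
# pre-commit is installed in the environment (pip install pre-commit).
import subprocess

# Rewrite each hook's `rev` in .pre-commit-config.yaml to its latest tag,
# which is what the pre-commit.ci autoupdate bot automates.
subprocess.run(["pre-commit", "autoupdate"], check=True)

# Re-run every hook against the whole repository to surface failures
# introduced by the new versions (e.g. new ruff or pylint diagnostics).
subprocess.run(["pre-commit", "run", "--all-files"], check=True)
```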
2 changes: 1 addition & 1 deletion doc/source/development/debugging_extensions.rst
@@ -23,7 +23,7 @@ By default building pandas from source will generate a release build. To generat

.. note::

-conda environements update CFLAGS/CPPFLAGS with flags that are geared towards generating releases. If using conda, you may need to set ``CFLAGS="$CFLAGS -O0"`` and ``CPPFLAGS="$CPPFLAGS -O0"`` to ensure optimizations are turned off for debugging
+conda environments update CFLAGS/CPPFLAGS with flags that are geared towards generating releases. If using conda, you may need to set ``CFLAGS="$CFLAGS -O0"`` and ``CPPFLAGS="$CPPFLAGS -O0"`` to ensure optimizations are turned off for debugging

By specifying ``builddir="debug"`` all of the targets will be built and placed in the debug directory relative to the project root. This helps to keep your debug and release artifacts separate; you are of course able to choose a different directory name or omit altogether if you do not care to separate build types.
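A hedged sketch of the note above on disabling optimizations under conda; the exact build invocation is an assumption, not necessarily the documented pandas command:

```python
# Sketch only: append -O0 so conda's release-oriented CFLAGS/CPPFLAGS do not
# optimize the C extensions and hinder debugging.
import os
import subprocess

env = os.environ.copy()
env["CFLAGS"] = env.get("CFLAGS", "") + " -O0"
env["CPPFLAGS"] = env.get("CPPFLAGS", "") + " -O0"

# Editable build from a pandas source checkout (assumed invocation).
subprocess.run(
    ["pip", "install", "-ve", ".", "--no-build-isolation"],
    env=env,
    check=True,
)
```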

2 changes: 1 addition & 1 deletion doc/source/development/extending.rst
@@ -99,7 +99,7 @@ The interface consists of two classes.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A :class:`pandas.api.extensions.ExtensionDtype` is similar to a ``numpy.dtype`` object. It describes the
-data type. Implementors are responsible for a few unique items like the name.
+data type. Implementers are responsible for a few unique items like the name.

One particularly important item is the ``type`` property. This should be the
class that is the scalar type for your data. For example, if you were writing an
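For context on the "implementers" wording: a minimal sketch of those responsibilities, using a hypothetical IPv4 scalar. The names are illustrative only, and a usable dtype must also implement ``construct_array_type`` for real:

```python
# Hypothetical names (IPv4, IPv4Dtype) for illustration only.
from pandas.api.extensions import ExtensionDtype


class IPv4:
    """Hypothetical scalar class returned for single elements."""

    def __init__(self, value: int) -> None:
        self.value = value


class IPv4Dtype(ExtensionDtype):
    name = "ipv4"  # implementers must supply a unique string name
    type = IPv4    # the ``type`` property: the scalar class for this dtype

    @classmethod
    def construct_array_type(cls):
        # A real implementation returns its ExtensionArray subclass.
        raise NotImplementedError
```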
1 change: 0 additions & 1 deletion doc/source/getting_started/tutorials.rst
@@ -115,7 +115,6 @@ Various tutorials
* `Statistical Data Analysis in Python, tutorial videos, by Christopher Fonnesbeck from SciPy 2013 <https://conference.scipy.org/scipy2013/tutorial_detail.php?id=109>`_
* `Financial analysis in Python, by Thomas Wiecki <https://nbviewer.org/github/twiecki/financial-analysis-python-tutorial/blob/master/1.%20Pandas%20Basics.ipynb>`_
* `Intro to pandas data structures, by Greg Reda <http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/>`_
-* `Pandas and Python: Top 10, by Manish Amde <https://manishamde.github.io/blog/2013/03/07/pandas-and-python-top-10/>`_
* `Pandas DataFrames Tutorial, by Karlijn Willems <https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python>`_
* `A concise tutorial with real life examples <https://tutswiki.com/pandas-cookbook/chapter1/>`_
* `430+ Searchable Pandas recipes by Isshin Inada <https://skytowner.com/explore/pandas_recipes_reference>`_
2 changes: 1 addition & 1 deletion doc/source/user_guide/merging.rst
@@ -525,7 +525,7 @@ Performing an outer join with duplicate join keys in :class:`DataFrame`

.. warning::

-Merging on duplicate keys sigificantly increase the dimensions of the result
+Merging on duplicate keys significantly increase the dimensions of the result
and can cause a memory overflow.
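A small sketch of why this warning holds: rows with the same key combine pairwise, so row counts multiply rather than add.

```python
import pandas as pd

left = pd.DataFrame({"key": ["a", "b", "b"], "lval": [1, 2, 3]})
right = pd.DataFrame({"key": ["b", "b", "c"], "rval": [4, 5, 6]})

result = pd.merge(left, right, on="key", how="outer")
# 6 rows: 1 for "a", 2 x 2 = 4 for the duplicated "b", 1 for "c"
print(len(result))
```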

.. _merging.validation:
2 changes: 1 addition & 1 deletion doc/source/user_guide/reshaping.rst
@@ -480,7 +480,7 @@ The values can be cast to a different type using the ``dtype`` argument.

.. versionadded:: 1.5.0

-:func:`~pandas.from_dummies` coverts the output of :func:`~pandas.get_dummies` back into
+:func:`~pandas.from_dummies` converts the output of :func:`~pandas.get_dummies` back into
a :class:`Series` of categorical values from indicator values.

.. ipython:: python
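A hedged round-trip sketch (separate from the elided ipython block above):

```python
import pandas as pd

s = pd.Series(["cat", "dog", "cat"])
dummies = pd.get_dummies(s)  # indicator columns "cat" and "dog"

# With no column prefix/separator, from_dummies places the recovered
# categories in a column named "" (empty string).
restored = pd.from_dummies(dummies)
print(restored[""].tolist())  # ['cat', 'dog', 'cat']
```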
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v1.0.0.rst
@@ -1079,7 +1079,7 @@ Datetimelike
- Bug in masking datetime-like arrays with a boolean mask of an incorrect length not raising an ``IndexError`` (:issue:`30308`)
- Bug in :attr:`Timestamp.resolution` being a property instead of a class attribute (:issue:`29910`)
- Bug in :func:`pandas.to_datetime` when called with ``None`` raising ``TypeError`` instead of returning ``NaT`` (:issue:`30011`)
-- Bug in :func:`pandas.to_datetime` failing for ``deques`` when using ``cache=True`` (the default) (:issue:`29403`)
+- Bug in :func:`pandas.to_datetime` failing for ``dequeues`` when using ``cache=True`` (the default) (:issue:`29403`)
Review comment (Member) on the line above:
AFAIK this was actually a wrong suggestion of the spelling corrector. This is the plural of "dequeue", which is a different word (or an old spelling of "deque", but Python doesn't use that) -> #55862

- Bug in :meth:`Series.item` with ``datetime64`` or ``timedelta64`` dtype, :meth:`DatetimeIndex.item`, and :meth:`TimedeltaIndex.item` returning an integer instead of a :class:`Timestamp` or :class:`Timedelta` (:issue:`30175`)
- Bug in :class:`DatetimeIndex` addition when adding a non-optimized :class:`DateOffset` incorrectly dropping timezone information (:issue:`30336`)
- Bug in :meth:`DataFrame.drop` where attempting to drop non-existent values from a DatetimeIndex would yield a confusing error message (:issue:`30399`)
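A minimal sketch of the to_datetime-with-deque entry above (GH 29403), with the default ``cache=True`` exercised:

```python
from collections import deque

import pandas as pd

dates = deque(["2020-01-01", "2020-01-02"])
print(pd.to_datetime(dates, cache=True))
# DatetimeIndex(['2020-01-01', '2020-01-02'], dtype='datetime64[ns]', freq=None)
```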
2 changes: 1 addition & 1 deletion pandas/__init__.py
@@ -24,7 +24,7 @@
try:
# numpy compat
from pandas.compat import (
-is_numpy_dev as _is_numpy_dev, # pyright: ignore[reportUnusedImport] # noqa: F401,E501
+is_numpy_dev as _is_numpy_dev, # pyright: ignore[reportUnusedImport] # noqa: F401
)
except ImportError as _err: # pragma: no cover
_module = _err.name
4 changes: 2 additions & 2 deletions pandas/_libs/__init__.py
@@ -13,8 +13,8 @@
# Below imports needs to happen first to ensure pandas top level
# module gets monkeypatched with the pandas_datetime_CAPI
# see pandas_datetime_exec in pd_datetime.c
-import pandas._libs.pandas_parser # noqa: E501 # isort: skip # type: ignore[reportUnusedImport]
-import pandas._libs.pandas_datetime # noqa: F401,E501 # isort: skip # type: ignore[reportUnusedImport]
+import pandas._libs.pandas_parser # isort: skip # type: ignore[reportUnusedImport]
+import pandas._libs.pandas_datetime # noqa: F401 # isort: skip # type: ignore[reportUnusedImport]
from pandas._libs.interval import Interval
from pandas._libs.tslibs import (
NaT,
2 changes: 1 addition & 1 deletion pandas/_libs/include/pandas/portable.h
@@ -21,5 +21,5 @@ The full license is in the LICENSE file, distributed with this software.
#define getdigit_ascii(c, default) \
(isdigit_ascii(c) ? ((int)((c) - '0')) : default)
#define isspace_ascii(c) (((c) == ' ') || (((unsigned)(c) - '\t') < 5))
-#define toupper_ascii(c) ((((unsigned)(c) - 'a') < 26) ? ((c)&0x5f) : (c))
+#define toupper_ascii(c) ((((unsigned)(c) - 'a') < 26) ? ((c) & 0x5f) : (c))
#define tolower_ascii(c) ((((unsigned)(c) - 'A') < 26) ? ((c) | 0x20) : (c))
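The whitespace fix above doesn't change the trick these macros rely on: ASCII upper- and lower-case letters differ only in bit 0x20. A quick Python illustration:

```python
# & 0x5F clears the case bit (upper-case); | 0x20 sets it (lower-case).
assert chr(ord("a") & 0x5F) == "A"  # 0x61 & 0x5F == 0x41
assert chr(ord("Z") | 0x20) == "z"  # 0x5A | 0x20 == 0x7A
```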
4 changes: 2 additions & 2 deletions pandas/_libs/include/pandas/vendored/ujson/lib/ultrajson.h
@@ -189,13 +189,13 @@ typedef struct __JSONObjectEncoder {

/*
Begin iteration of an iterable object (JS_ARRAY or JS_OBJECT)
-Implementor should setup iteration state in ti->prv
+Implementer should setup iteration state in ti->prv
*/
JSPFN_ITERBEGIN iterBegin;

/*
Retrieve next object in an iteration. Should return 0 to indicate iteration
-has reached end or 1 if there are more items. Implementor is responsible for
+has reached end or 1 if there are more items. Implementer is responsible for
keeping state of the iteration. Use ti->prv fields for this
*/
JSPFN_ITERNEXT iterNext;
2 changes: 1 addition & 1 deletion pandas/_libs/src/vendored/ujson/lib/ultrajsonenc.c
@@ -72,7 +72,7 @@ or UTF-16 surrogate pairs
The extra 2 bytes are for the quotes around the string

*/
-#define RESERVE_STRING(_len) (2 + ((_len)*6))
+#define RESERVE_STRING(_len) (2 + ((_len) * 6))

static const double g_pow10[] = {1,
10,
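A sketch of the sizing rule the macro encodes: in the worst case every character escapes to a six-byte \uXXXX sequence, plus two bytes for the surrounding quotes.

```python
def reserve_string(length: int) -> int:
    # Mirrors RESERVE_STRING: quotes plus worst-case six bytes per character.
    return 2 + length * 6

print(reserve_string(3))  # 20 bytes reserved for a 3-character string
```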
2 changes: 1 addition & 1 deletion pandas/_libs/tslibs/parsing.pyx
@@ -950,7 +950,7 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
# the offset is separated into two tokens, ex. ['+', '0900’].
# This separation will prevent subsequent processing
# from correctly parsing the time zone format.
-# So in addition to the format nomalization, we rejoin them here.
+# So in addition to the format normalization, we rejoin them here.
try:
tokens[offset_index] = parsed_datetime.strftime("%z")
except ValueError:
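A hedged sketch of the code path this comment describes; ``guess_datetime_format`` is public under ``pandas.tseries.api`` in recent pandas, and the output shown is the expected result rather than a verified one.

```python
from pandas.tseries.api import guess_datetime_format

# An offset written as "+0900" should surface as %z in the guessed format.
print(guess_datetime_format("2023-11-07 10:00:00+0900"))
# expected: '%Y-%m-%d %H:%M:%S%z'
```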
2 changes: 1 addition & 1 deletion pandas/_libs/tslibs/timedeltas.pyx
@@ -1229,7 +1229,7 @@ cdef class _Timedelta(timedelta):
return cmp_scalar(self._value, ots._value, op)
return self._compare_mismatched_resos(ots, op)

-# TODO: re-use/share with Timestamp
+# TODO: reuse/share with Timestamp
cdef bint _compare_mismatched_resos(self, _Timedelta other, op):
# Can't just dispatch to numpy as they silently overflow and get it wrong
cdef:
4 changes: 2 additions & 2 deletions pandas/_testing/__init__.py
@@ -1059,14 +1059,14 @@ def shares_memory(left, right) -> bool:
if (
isinstance(left, ExtensionArray)
and is_string_dtype(left.dtype)
-and left.dtype.storage in ("pyarrow", "pyarrow_numpy") # type: ignore[attr-defined] # noqa: E501
+and left.dtype.storage in ("pyarrow", "pyarrow_numpy") # type: ignore[attr-defined]
):
# https://github.com/pandas-dev/pandas/pull/43930#discussion_r736862669
left = cast("ArrowExtensionArray", left)
if (
isinstance(right, ExtensionArray)
and is_string_dtype(right.dtype)
-and right.dtype.storage in ("pyarrow", "pyarrow_numpy") # type: ignore[attr-defined] # noqa: E501
+and right.dtype.storage in ("pyarrow", "pyarrow_numpy") # type: ignore[attr-defined]
):
right = cast("ArrowExtensionArray", right)
left_pa_data = left._pa_array
10 changes: 5 additions & 5 deletions pandas/core/arrays/base.py
@@ -455,7 +455,7 @@ def __setitem__(self, key, value) -> None:
-------
None
"""
-# Some notes to the ExtensionArray implementor who may have ended up
+# Some notes to the ExtensionArray implementer who may have ended up
# here. While this method is not required for the interface, if you
# *do* choose to implement __setitem__, then some semantics should be
# observed:
@@ -775,7 +775,7 @@ def _values_for_argsort(self) -> np.ndarray:
Notes
-----
The caller is responsible for *not* modifying these values in-place, so
-it is safe for implementors to give views on ``self``.
+it is safe for implementers to give views on ``self``.

Functions that use this (e.g. ``ExtensionArray.argsort``) should ignore
entries with missing values in the original array (according to
@@ -833,7 +833,7 @@ def argsort(
>>> arr.argsort()
array([1, 2, 0, 4, 3])
"""
-# Implementor note: You have two places to override the behavior of
+# Implementer note: You have two places to override the behavior of
# argsort.
# 1. _values_for_argsort : construct the values passed to np.argsort
# 2. argsort : total control over sorting. In case of overriding this,
@@ -874,7 +874,7 @@ def argmin(self, skipna: bool = True) -> int:
>>> arr.argmin()
1
"""
-# Implementor note: You have two places to override the behavior of
+# Implementer note: You have two places to override the behavior of
# argmin.
# 1. _values_for_argsort : construct the values used in nargminmax
# 2. argmin itself : total control over sorting.
@@ -908,7 +908,7 @@ def argmax(self, skipna: bool = True) -> int:
>>> arr.argmax()
3
"""
-# Implementor note: You have two places to override the behavior of
+# Implementer note: You have two places to override the behavior of
# argmax.
# 1. _values_for_argsort : construct the values used in nargminmax
# 2. argmax itself : total control over sorting.
2 changes: 1 addition & 1 deletion pandas/core/arrays/datetimes.py
@@ -559,7 +559,7 @@ def _box_func(self, x: np.datetime64) -> Timestamp | NaTType:
# error: Return type "Union[dtype, DatetimeTZDtype]" of "dtype"
# incompatible with return type "ExtensionDtype" in supertype
# "ExtensionArray"
-def dtype(self) -> np.dtype[np.datetime64] | DatetimeTZDtype: # type: ignore[override] # noqa: E501
+def dtype(self) -> np.dtype[np.datetime64] | DatetimeTZDtype: # type: ignore[override]
"""
The dtype for the DatetimeArray.

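A quick illustration of the two return branches this annotation covers: tz-naive data reports a numpy ``datetime64`` dtype, tz-aware data a ``DatetimeTZDtype``.

```python
import pandas as pd

naive = pd.array(pd.to_datetime(["2023-01-01"]))
aware = pd.array(pd.to_datetime(["2023-01-01"]).tz_localize("UTC"))
print(naive.dtype)  # datetime64[ns]
print(aware.dtype)  # datetime64[ns, UTC]
```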
2 changes: 1 addition & 1 deletion pandas/core/arrays/interval.py
@@ -850,7 +850,7 @@ def argsort(
ascending = nv.validate_argsort_with_ascending(ascending, (), kwargs)

if ascending and kind == "quicksort" and na_position == "last":
-# TODO: in an IntervalIndex we can re-use the cached
+# TODO: in an IntervalIndex we can reuse the cached
# IntervalTree.left_sorter
return np.lexsort((self.right, self.left))

2 changes: 1 addition & 1 deletion pandas/core/arrays/timedeltas.py
@@ -92,7 +92,7 @@ def f(self) -> np.ndarray:
# error: Incompatible types in assignment (
# expression has type "ndarray[Any, dtype[signedinteger[_32Bit]]]",
# variable has type "ndarray[Any, dtype[signedinteger[_64Bit]]]
-result = get_timedelta_field(values, alias, reso=self._creso) # type: ignore[assignment] # noqa: E501
+result = get_timedelta_field(values, alias, reso=self._creso) # type: ignore[assignment]
if self._hasna:
result = self._maybe_mask_results(
result, fill_value=None, convert="float64"
6 changes: 3 additions & 3 deletions pandas/core/groupby/groupby.py
@@ -962,7 +962,7 @@ def _selected_obj(self):
return self.obj[self._selection]

# Otherwise _selection is equivalent to _selection_list, so
-# _selected_obj matches _obj_with_exclusions, so we can re-use
+# _selected_obj matches _obj_with_exclusions, so we can reuse
# that and avoid making a copy.
return self._obj_with_exclusions

@@ -1466,7 +1466,7 @@ def _concat_objects(
# when the ax has duplicates
# so we resort to this
# GH 14776, 30667
-# TODO: can we re-use e.g. _reindex_non_unique?
+# TODO: can we reuse e.g. _reindex_non_unique?
if ax.has_duplicates and not result.axes[self.axis].equals(ax):
# e.g. test_category_order_transformer
target = algorithms.unique1d(ax._values)
@@ -2864,7 +2864,7 @@ def _value_counts(
result_series.name = name
result_series.index = index.set_names(range(len(columns)))
result_frame = result_series.reset_index()
-orig_dtype = self.grouper.groupings[0].obj.columns.dtype # type: ignore[union-attr] # noqa: E501
+orig_dtype = self.grouper.groupings[0].obj.columns.dtype # type: ignore[union-attr]
cols = Index(columns, dtype=orig_dtype).insert(len(columns), name)
result_frame.columns = cols
result = result_frame
2 changes: 1 addition & 1 deletion pandas/core/indexes/base.py
@@ -5369,7 +5369,7 @@ def _getitem_slice(self, slobj: slice) -> Self:
result = type(self)._simple_new(res, name=self._name, refs=self._references)
if "_engine" in self._cache:
reverse = slobj.step is not None and slobj.step < 0
-result._engine._update_from_sliced(self._engine, reverse=reverse) # type: ignore[union-attr] # noqa: E501
+result._engine._update_from_sliced(self._engine, reverse=reverse) # type: ignore[union-attr]

return result

2 changes: 1 addition & 1 deletion pandas/core/indexing.py
@@ -985,7 +985,7 @@ def _getitem_tuple_same_dim(self, tup: tuple):
This is only called after a failed call to _getitem_lowerdim.
"""
retval = self.obj
-# Selecting columns before rows is signficiantly faster
+# Selecting columns before rows is significantly faster
start_val = (self.ndim - len(tup)) + 1
for i, key in enumerate(reversed(tup)):
i = self.ndim - i - start_val
2 changes: 1 addition & 1 deletion pandas/core/interchange/dataframe.py
@@ -87,7 +87,7 @@ def select_columns(self, indices: Sequence[int]) -> PandasDataFrameXchg:
self._df.iloc[:, indices], self._nan_as_null, self._allow_copy
)

-def select_columns_by_name(self, names: list[str]) -> PandasDataFrameXchg: # type: ignore[override] # noqa: E501
+def select_columns_by_name(self, names: list[str]) -> PandasDataFrameXchg: # type: ignore[override]
if not isinstance(names, abc.Sequence):
raise ValueError("`names` is not a sequence")
if not isinstance(names, list):
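A minimal sketch of the public entry point behind this method; the interchange-protocol wrapper comes from ``DataFrame.__dataframe__``.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})
xchg = df.__dataframe__()           # pandas interchange-protocol wrapper
subset = xchg.select_columns_by_name(["a"])
print(list(subset.column_names()))  # ['a']
```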
4 changes: 2 additions & 2 deletions pandas/core/internals/blocks.py
@@ -1415,7 +1415,7 @@ def where(

try:
# try/except here is equivalent to a self._can_hold_element check,
-# but this gets us back 'casted' which we will re-use below;
+# but this gets us back 'casted' which we will reuse below;
# without using 'casted', expressions.where may do unwanted upcasts.
casted = np_can_hold_element(values.dtype, other)
except (ValueError, TypeError, LossySetitemError):
@@ -1786,7 +1786,7 @@ def delete(self, loc) -> list[Block]:
else:
# No overload variant of "__getitem__" of "ExtensionArray" matches
# argument type "Tuple[slice, slice]"
-values = self.values[previous_loc + 1 : idx, :] # type: ignore[call-overload] # noqa: E501
+values = self.values[previous_loc + 1 : idx, :] # type: ignore[call-overload]
locs = mgr_locs_arr[previous_loc + 1 : idx]
nb = type(self)(
values, placement=BlockPlacement(locs), ndim=self.ndim, refs=refs
2 changes: 1 addition & 1 deletion pandas/core/internals/construction.py
@@ -550,7 +550,7 @@ def _prep_ndarraylike(values, copy: bool = True) -> np.ndarray:

if len(values) == 0:
# TODO: check for length-zero range, in which case return int64 dtype?
-# TODO: re-use anything in try_cast?
+# TODO: reuse anything in try_cast?
return np.empty((0, 0), dtype=object)
elif isinstance(values, range):
arr = range_to_ndarray(values)
4 changes: 2 additions & 2 deletions pandas/core/reshape/merge.py
@@ -1390,13 +1390,13 @@ def _maybe_coerce_merge_keys(self) -> None:
):
ct = find_common_type([lk.dtype, rk.dtype])
if is_extension_array_dtype(ct):
-rk = ct.construct_array_type()._from_sequence(rk) # type: ignore[union-attr] # noqa: E501
+rk = ct.construct_array_type()._from_sequence(rk) # type: ignore[union-attr]
else:
rk = rk.astype(ct) # type: ignore[arg-type]
elif is_extension_array_dtype(rk.dtype):
ct = find_common_type([lk.dtype, rk.dtype])
if is_extension_array_dtype(ct):
-lk = ct.construct_array_type()._from_sequence(lk) # type: ignore[union-attr] # noqa: E501
+lk = ct.construct_array_type()._from_sequence(lk) # type: ignore[union-attr]
else:
lk = lk.astype(ct) # type: ignore[arg-type]

2 changes: 1 addition & 1 deletion pandas/core/reshape/reshape.py
@@ -222,7 +222,7 @@ def mask_all(self) -> bool:

@cache_readonly
def arange_result(self) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.bool_]]:
-# We cache this for re-use in ExtensionBlock._unstack
+# We cache this for reuse in ExtensionBlock._unstack
dummy_arr = np.arange(len(self.index), dtype=np.intp)
new_values, mask = self.get_new_values(dummy_arr, fill_value=-1)
return new_values, mask.any(0)