
Commit 5f88eb1

[pre-commit.ci] pre-commit autoupdate (#55846)
* [pre-commit.ci] pre-commit autoupdate

  updates:
  - [github.com/hauntsaninja/black-pre-commit-mirror: 23.9.1 → 23.10.1](psf/black-pre-commit-mirror@23.9.1...23.10.1)
  - [github.com/astral-sh/ruff-pre-commit: v0.0.291 → v0.1.4](astral-sh/ruff-pre-commit@v0.0.291...v0.1.4)
  - [github.com/jendrikseipp/vulture: v2.9.1 → v2.10](jendrikseipp/vulture@v2.9.1...v2.10)
  - [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](codespell-project/codespell@v2.2.5...v2.2.6)
  - [github.com/pre-commit/pre-commit-hooks: v4.4.0 → v4.5.0](pre-commit/pre-commit-hooks@v4.4.0...v4.5.0)
  - [github.com/pylint-dev/pylint: v3.0.0b0 → v3.0.1](pylint-dev/pylint@v3.0.0b0...v3.0.1)
  - [github.com/asottile/pyupgrade: v3.13.0 → v3.15.0](asottile/pyupgrade@v3.13.0...v3.15.0)
  - [github.com/sphinx-contrib/sphinx-lint: v0.6.8 → v0.8.1](sphinx-contrib/sphinx-lint@v0.6.8...v0.8.1)
  - [github.com/pre-commit/mirrors-clang-format: ea59a72 → v17.0.4](pre-commit/mirrors-clang-format@ea59a72...v17.0.4)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

* Bump black in pyproject.toml
* Remove unneeded noqa
* Manually codespelled
* Remove 404 link

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Matthew Roeschke <[email protected]>
1 parent bf37560 commit 5f88eb1

File tree

42 files changed: +66 -67 lines


.pre-commit-config.yaml (+9 -9)

@@ -20,11 +20,11 @@ ci:
 repos:
 -   repo: https://github.com/hauntsaninja/black-pre-commit-mirror
     # black compiled with mypyc
-    rev: 23.9.1
+    rev: 23.10.1
     hooks:
     -   id: black
 -   repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.0.291
+    rev: v0.1.4
     hooks:
     -   id: ruff
         args: [--exit-non-zero-on-fix]
@@ -34,14 +34,14 @@ repos:
         alias: ruff-selected-autofixes
         args: [--select, "ANN001,ANN204", --fix-only, --exit-non-zero-on-fix]
 -   repo: https://github.com/jendrikseipp/vulture
-    rev: 'v2.9.1'
+    rev: 'v2.10'
     hooks:
     -   id: vulture
         entry: python scripts/run_vulture.py
         pass_filenames: true
         require_serial: false
 -   repo: https://github.com/codespell-project/codespell
-    rev: v2.2.5
+    rev: v2.2.6
     hooks:
     -   id: codespell
         types_or: [python, rst, markdown, cython, c]
@@ -52,7 +52,7 @@ repos:
     -   id: cython-lint
     -   id: double-quote-cython-strings
 -   repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.4.0
+    rev: v4.5.0
     hooks:
     -   id: check-ast
     -   id: check-case-conflict
@@ -71,7 +71,7 @@ repos:
         args: [--remove]
     -   id: trailing-whitespace
 -   repo: https://github.com/pylint-dev/pylint
-    rev: v3.0.0b0
+    rev: v3.0.1
     hooks:
     -   id: pylint
         stages: [manual]
@@ -94,7 +94,7 @@ repos:
     hooks:
     -   id: isort
 -   repo: https://github.com/asottile/pyupgrade
-    rev: v3.13.0
+    rev: v3.15.0
     hooks:
     -   id: pyupgrade
         args: [--py39-plus]
@@ -111,11 +111,11 @@ repos:
         types: [text]  # overwrite types: [rst]
         types_or: [python, rst]
 -   repo: https://github.com/sphinx-contrib/sphinx-lint
-    rev: v0.6.8
+    rev: v0.8.1
     hooks:
     -   id: sphinx-lint
 -   repo: https://github.com/pre-commit/mirrors-clang-format
-    rev: ea59a72
+    rev: v17.0.4
     hooks:
     -   id: clang-format
         files: ^pandas/_libs/src|^pandas/_libs/include

doc/source/development/debugging_extensions.rst (+1 -1)

@@ -23,7 +23,7 @@ By default building pandas from source will generate a release build. To generat

 .. note::

-   conda environements update CFLAGS/CPPFLAGS with flags that are geared towards generating releases. If using conda, you may need to set ``CFLAGS="$CFLAGS -O0"`` and ``CPPFLAGS="$CPPFLAGS -O0"`` to ensure optimizations are turned off for debugging
+   conda environments update CFLAGS/CPPFLAGS with flags that are geared towards generating releases. If using conda, you may need to set ``CFLAGS="$CFLAGS -O0"`` and ``CPPFLAGS="$CPPFLAGS -O0"`` to ensure optimizations are turned off for debugging

 By specifying ``builddir="debug"`` all of the targets will be built and placed in the debug directory relative to the project root. This helps to keep your debug and release artifacts separate; you are of course able to choose a different directory name or omit altogether if you do not care to separate build types.

doc/source/development/extending.rst (+1 -1)

@@ -99,7 +99,7 @@ The interface consists of two classes.
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 A :class:`pandas.api.extensions.ExtensionDtype` is similar to a ``numpy.dtype`` object. It describes the
-data type. Implementors are responsible for a few unique items like the name.
+data type. Implementers are responsible for a few unique items like the name.

 One particularly important item is the ``type`` property. This should be the
 class that is the scalar type for your data. For example, if you were writing an

doc/source/getting_started/tutorials.rst (-1)

@@ -115,7 +115,6 @@ Various tutorials
 * `Statistical Data Analysis in Python, tutorial videos, by Christopher Fonnesbeck from SciPy 2013 <https://conference.scipy.org/scipy2013/tutorial_detail.php?id=109>`_
 * `Financial analysis in Python, by Thomas Wiecki <https://nbviewer.org/github/twiecki/financial-analysis-python-tutorial/blob/master/1.%20Pandas%20Basics.ipynb>`_
 * `Intro to pandas data structures, by Greg Reda <http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/>`_
-* `Pandas and Python: Top 10, by Manish Amde <https://manishamde.github.io/blog/2013/03/07/pandas-and-python-top-10/>`_
 * `Pandas DataFrames Tutorial, by Karlijn Willems <https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python>`_
 * `A concise tutorial with real life examples <https://tutswiki.com/pandas-cookbook/chapter1/>`_
 * `430+ Searchable Pandas recipes by Isshin Inada <https://skytowner.com/explore/pandas_recipes_reference>`_

doc/source/user_guide/merging.rst (+1 -1)

@@ -525,7 +525,7 @@ Performing an outer join with duplicate join keys in :class:`DataFrame`

 .. warning::

-   Merging on duplicate keys sigificantly increase the dimensions of the result
+   Merging on duplicate keys significantly increase the dimensions of the result
    and can cause a memory overflow.

 .. _merging.validation:

doc/source/user_guide/reshaping.rst (+1 -1)

@@ -480,7 +480,7 @@ The values can be cast to a different type using the ``dtype`` argument.

 .. versionadded:: 1.5.0

-:func:`~pandas.from_dummies` coverts the output of :func:`~pandas.get_dummies` back into
+:func:`~pandas.from_dummies` converts the output of :func:`~pandas.get_dummies` back into
 a :class:`Series` of categorical values from indicator values.

 .. ipython:: python
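The docstring fixed above describes an invertible encoding: ``from_dummies`` undoes ``get_dummies``. A minimal sketch of the round trip (assuming pandas >= 1.5 is installed; the ``col`` prefix here is illustrative, not from the pandas docs):

```python
import pandas as pd

# get_dummies one-hot encodes a Series; from_dummies (added in pandas 1.5)
# inverts that encoding back to the original categorical values.
s = pd.Series(["a", "b", "a", "c"])
dummies = pd.get_dummies(s, prefix="col")     # columns: col_a, col_b, col_c
restored = pd.from_dummies(dummies, sep="_")  # one column named "col"
print(list(restored["col"]))  # ['a', 'b', 'a', 'c']
```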

doc/source/whatsnew/v1.0.0.rst (+1 -1)

@@ -1079,7 +1079,7 @@ Datetimelike
 - Bug in masking datetime-like arrays with a boolean mask of an incorrect length not raising an ``IndexError`` (:issue:`30308`)
 - Bug in :attr:`Timestamp.resolution` being a property instead of a class attribute (:issue:`29910`)
 - Bug in :func:`pandas.to_datetime` when called with ``None`` raising ``TypeError`` instead of returning ``NaT`` (:issue:`30011`)
-- Bug in :func:`pandas.to_datetime` failing for ``deques`` when using ``cache=True`` (the default) (:issue:`29403`)
+- Bug in :func:`pandas.to_datetime` failing for ``dequeues`` when using ``cache=True`` (the default) (:issue:`29403`)
 - Bug in :meth:`Series.item` with ``datetime64`` or ``timedelta64`` dtype, :meth:`DatetimeIndex.item`, and :meth:`TimedeltaIndex.item` returning an integer instead of a :class:`Timestamp` or :class:`Timedelta` (:issue:`30175`)
 - Bug in :class:`DatetimeIndex` addition when adding a non-optimized :class:`DateOffset` incorrectly dropping timezone information (:issue:`30336`)
 - Bug in :meth:`DataFrame.drop` where attempting to drop non-existent values from a DatetimeIndex would yield a confusing error message (:issue:`30399`)

pandas/__init__.py (+1 -1)

@@ -24,7 +24,7 @@
 try:
     # numpy compat
     from pandas.compat import (
-        is_numpy_dev as _is_numpy_dev,  # pyright: ignore[reportUnusedImport] # noqa: F401,E501
+        is_numpy_dev as _is_numpy_dev,  # pyright: ignore[reportUnusedImport] # noqa: F401
     )
 except ImportError as _err:  # pragma: no cover
     _module = _err.name
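For context on the noqa cleanups in this commit: `# noqa: F401` tells flake8/ruff to keep an import that is intentionally "unused" (a re-export or side-effect import), while the dropped `E501` codes became redundant once the lines fit the configured length limit. A minimal sketch of the pattern (the module and alias below are illustrative, not from pandas):

```python
# An import kept only as a re-export would trip the linter's F401
# ("imported but unused"), so it is silenced explicitly; the leading
# underscore marks it as private to this module.
from collections import OrderedDict as _OrderedDict  # noqa: F401

# Without the comment, `ruff check` / `flake8` would flag this line.
```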

pandas/_libs/__init__.py (+2 -2)

@@ -13,8 +13,8 @@
 # Below imports needs to happen first to ensure pandas top level
 # module gets monkeypatched with the pandas_datetime_CAPI
 # see pandas_datetime_exec in pd_datetime.c
-import pandas._libs.pandas_parser  # noqa: E501 # isort: skip # type: ignore[reportUnusedImport]
-import pandas._libs.pandas_datetime  # noqa: F401,E501 # isort: skip # type: ignore[reportUnusedImport]
+import pandas._libs.pandas_parser  # isort: skip # type: ignore[reportUnusedImport]
+import pandas._libs.pandas_datetime  # noqa: F401 # isort: skip # type: ignore[reportUnusedImport]
 from pandas._libs.interval import Interval
 from pandas._libs.tslibs import (
     NaT,

pandas/_libs/include/pandas/portable.h (+1 -1)

@@ -21,5 +21,5 @@ The full license is in the LICENSE file, distributed with this software.
 #define getdigit_ascii(c, default) \
   (isdigit_ascii(c) ? ((int)((c) - '0')) : default)
 #define isspace_ascii(c) (((c) == ' ') || (((unsigned)(c) - '\t') < 5))
-#define toupper_ascii(c) ((((unsigned)(c) - 'a') < 26) ? ((c)&0x5f) : (c))
+#define toupper_ascii(c) ((((unsigned)(c) - 'a') < 26) ? ((c) & 0x5f) : (c))
 #define tolower_ascii(c) ((((unsigned)(c) - 'A') < 26) ? ((c) | 0x20) : (c))
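The clang-format change above is whitespace-only, but the macros themselves lean on a property worth spelling out: ASCII upper- and lowercase letters differ only in bit 0x20, so clearing it (`& 0x5f`) uppercases and setting it (`| 0x20`) lowercases, after an unsigned-subtraction range check confirms the byte is a letter. A sketch of the same trick (function names mirror the macros; this is not pandas code):

```python
# 'a' is 0x61 and 'A' is 0x41: the case bit is 0x20. The unsigned
# subtraction in the C macro becomes an explicit range check here.
def toupper_ascii(c: str) -> str:
    o = ord(c)
    return chr(o & 0x5F) if 0 <= o - ord("a") < 26 else c

def tolower_ascii(c: str) -> str:
    o = ord(c)
    return chr(o | 0x20) if 0 <= o - ord("A") < 26 else c

print(toupper_ascii("q"), tolower_ascii("Q"), toupper_ascii("5"))  # Q q 5
```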

pandas/_libs/include/pandas/vendored/ujson/lib/ultrajson.h (+2 -2)

@@ -189,13 +189,13 @@ typedef struct __JSONObjectEncoder {

   /*
   Begin iteration of an iterable object (JS_ARRAY or JS_OBJECT)
-  Implementor should setup iteration state in ti->prv
+  Implementer should setup iteration state in ti->prv
   */
   JSPFN_ITERBEGIN iterBegin;

   /*
   Retrieve next object in an iteration. Should return 0 to indicate iteration
-  has reached end or 1 if there are more items. Implementor is responsible for
+  has reached end or 1 if there are more items. Implementer is responsible for
   keeping state of the iteration. Use ti->prv fields for this
   */
   JSPFN_ITERNEXT iterNext;

pandas/_libs/src/vendored/ujson/lib/ultrajsonenc.c (+1 -1)

@@ -72,7 +72,7 @@ or UTF-16 surrogate pairs
 The extra 2 bytes are for the quotes around the string

 */
-#define RESERVE_STRING(_len) (2 + ((_len)*6))
+#define RESERVE_STRING(_len) (2 + ((_len) * 6))

 static const double g_pow10[] = {1,
                                  10,
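The reformatted macro encodes a worst-case bound: each input character may escape to a six-byte `\uXXXX` sequence, plus two bytes for the surrounding quotes. A quick sanity check of that arithmetic using Python's `json` module (illustrative only; ujson's actual escaping rules differ in detail):

```python
import json

def reserve_string(length: int) -> int:
    # Mirrors RESERVE_STRING: 2 bytes for the quotes plus a worst case
    # of 6 bytes per character (a "\uXXXX" escape).
    return 2 + length * 6

# A control character forces the six-byte escape form, so the encoded
# length hits the reserved bound exactly.
encoded = json.dumps("\x01" * 4)
print(len(encoded), reserve_string(4))  # 26 26
```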

pandas/_libs/tslibs/parsing.pyx (+1 -1)

@@ -950,7 +950,7 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
     # the offset is separated into two tokens, ex. ['+', '0900’].
     # This separation will prevent subsequent processing
     # from correctly parsing the time zone format.
-    # So in addition to the format nomalization, we rejoin them here.
+    # So in addition to the format normalization, we rejoin them here.
     try:
         tokens[offset_index] = parsed_datetime.strftime("%z")
     except ValueError:

pandas/_libs/tslibs/timedeltas.pyx (+1 -1)

@@ -1229,7 +1229,7 @@ cdef class _Timedelta(timedelta):
             return cmp_scalar(self._value, ots._value, op)
         return self._compare_mismatched_resos(ots, op)

-    # TODO: re-use/share with Timestamp
+    # TODO: reuse/share with Timestamp
    cdef bint _compare_mismatched_resos(self, _Timedelta other, op):
        # Can't just dispatch to numpy as they silently overflow and get it wrong
        cdef:

pandas/_testing/__init__.py (+2 -2)

@@ -1059,14 +1059,14 @@ def shares_memory(left, right) -> bool:
     if (
         isinstance(left, ExtensionArray)
         and is_string_dtype(left.dtype)
-        and left.dtype.storage in ("pyarrow", "pyarrow_numpy")  # type: ignore[attr-defined] # noqa: E501
+        and left.dtype.storage in ("pyarrow", "pyarrow_numpy")  # type: ignore[attr-defined]
     ):
         # https://github.com/pandas-dev/pandas/pull/43930#discussion_r736862669
         left = cast("ArrowExtensionArray", left)
     if (
         isinstance(right, ExtensionArray)
         and is_string_dtype(right.dtype)
-        and right.dtype.storage in ("pyarrow", "pyarrow_numpy")  # type: ignore[attr-defined] # noqa: E501
+        and right.dtype.storage in ("pyarrow", "pyarrow_numpy")  # type: ignore[attr-defined]
     ):
         right = cast("ArrowExtensionArray", right)
     left_pa_data = left._pa_array

pandas/core/arrays/base.py (+5 -5)

@@ -455,7 +455,7 @@ def __setitem__(self, key, value) -> None:
         -------
         None
         """
-        # Some notes to the ExtensionArray implementor who may have ended up
+        # Some notes to the ExtensionArray implementer who may have ended up
         # here. While this method is not required for the interface, if you
         # *do* choose to implement __setitem__, then some semantics should be
         # observed:
@@ -775,7 +775,7 @@ def _values_for_argsort(self) -> np.ndarray:
         Notes
         -----
         The caller is responsible for *not* modifying these values in-place, so
-        it is safe for implementors to give views on ``self``.
+        it is safe for implementers to give views on ``self``.

         Functions that use this (e.g. ``ExtensionArray.argsort``) should ignore
         entries with missing values in the original array (according to
@@ -833,7 +833,7 @@ def argsort(
         >>> arr.argsort()
         array([1, 2, 0, 4, 3])
         """
-        # Implementor note: You have two places to override the behavior of
+        # Implementer note: You have two places to override the behavior of
         # argsort.
         # 1. _values_for_argsort : construct the values passed to np.argsort
         # 2. argsort : total control over sorting. In case of overriding this,
@@ -874,7 +874,7 @@ def argmin(self, skipna: bool = True) -> int:
         >>> arr.argmin()
         1
         """
-        # Implementor note: You have two places to override the behavior of
+        # Implementer note: You have two places to override the behavior of
         # argmin.
         # 1. _values_for_argsort : construct the values used in nargminmax
         # 2. argmin itself : total control over sorting.
@@ -908,7 +908,7 @@ def argmax(self, skipna: bool = True) -> int:
         >>> arr.argmax()
         3
         """
-        # Implementor note: You have two places to override the behavior of
+        # Implementer note: You have two places to override the behavior of
         # argmax.
         # 1. _values_for_argsort : construct the values used in nargminmax
         # 2. argmax itself : total control over sorting.

pandas/core/arrays/datetimes.py (+1 -1)

@@ -559,7 +559,7 @@ def _box_func(self, x: np.datetime64) -> Timestamp | NaTType:
     # error: Return type "Union[dtype, DatetimeTZDtype]" of "dtype"
     # incompatible with return type "ExtensionDtype" in supertype
     # "ExtensionArray"
-    def dtype(self) -> np.dtype[np.datetime64] | DatetimeTZDtype:  # type: ignore[override] # noqa: E501
+    def dtype(self) -> np.dtype[np.datetime64] | DatetimeTZDtype:  # type: ignore[override]
         """
         The dtype for the DatetimeArray.

pandas/core/arrays/interval.py (+1 -1)

@@ -850,7 +850,7 @@ def argsort(
         ascending = nv.validate_argsort_with_ascending(ascending, (), kwargs)

         if ascending and kind == "quicksort" and na_position == "last":
-            # TODO: in an IntervalIndex we can re-use the cached
+            # TODO: in an IntervalIndex we can reuse the cached
             # IntervalTree.left_sorter
             return np.lexsort((self.right, self.left))

pandas/core/arrays/timedeltas.py (+1 -1)

@@ -92,7 +92,7 @@ def f(self) -> np.ndarray:
             # error: Incompatible types in assignment (
             # expression has type "ndarray[Any, dtype[signedinteger[_32Bit]]]",
             # variable has type "ndarray[Any, dtype[signedinteger[_64Bit]]]
-            result = get_timedelta_field(values, alias, reso=self._creso)  # type: ignore[assignment] # noqa: E501
+            result = get_timedelta_field(values, alias, reso=self._creso)  # type: ignore[assignment]
             if self._hasna:
                 result = self._maybe_mask_results(
                     result, fill_value=None, convert="float64"

pandas/core/groupby/groupby.py (+3 -3)

@@ -962,7 +962,7 @@ def _selected_obj(self):
             return self.obj[self._selection]

         # Otherwise _selection is equivalent to _selection_list, so
-        # _selected_obj matches _obj_with_exclusions, so we can re-use
+        # _selected_obj matches _obj_with_exclusions, so we can reuse
         # that and avoid making a copy.
         return self._obj_with_exclusions

@@ -1466,7 +1466,7 @@ def _concat_objects(
             # when the ax has duplicates
             # so we resort to this
             # GH 14776, 30667
-            # TODO: can we re-use e.g. _reindex_non_unique?
+            # TODO: can we reuse e.g. _reindex_non_unique?
             if ax.has_duplicates and not result.axes[self.axis].equals(ax):
                 # e.g. test_category_order_transformer
                 target = algorithms.unique1d(ax._values)
@@ -2864,7 +2864,7 @@ def _value_counts(
         result_series.name = name
         result_series.index = index.set_names(range(len(columns)))
         result_frame = result_series.reset_index()
-        orig_dtype = self.grouper.groupings[0].obj.columns.dtype  # type: ignore[union-attr] # noqa: E501
+        orig_dtype = self.grouper.groupings[0].obj.columns.dtype  # type: ignore[union-attr]
         cols = Index(columns, dtype=orig_dtype).insert(len(columns), name)
         result_frame.columns = cols
         result = result_frame

pandas/core/indexes/base.py (+1 -1)

@@ -5369,7 +5369,7 @@ def _getitem_slice(self, slobj: slice) -> Self:
         result = type(self)._simple_new(res, name=self._name, refs=self._references)
         if "_engine" in self._cache:
             reverse = slobj.step is not None and slobj.step < 0
-            result._engine._update_from_sliced(self._engine, reverse=reverse)  # type: ignore[union-attr] # noqa: E501
+            result._engine._update_from_sliced(self._engine, reverse=reverse)  # type: ignore[union-attr]

         return result

pandas/core/indexing.py (+1 -1)

@@ -985,7 +985,7 @@ def _getitem_tuple_same_dim(self, tup: tuple):
         This is only called after a failed call to _getitem_lowerdim.
         """
         retval = self.obj
-        # Selecting columns before rows is signficiantly faster
+        # Selecting columns before rows is significantly faster
         start_val = (self.ndim - len(tup)) + 1
         for i, key in enumerate(reversed(tup)):
             i = self.ndim - i - start_val

pandas/core/interchange/dataframe.py (+1 -1)

@@ -87,7 +87,7 @@ def select_columns(self, indices: Sequence[int]) -> PandasDataFrameXchg:
             self._df.iloc[:, indices], self._nan_as_null, self._allow_copy
         )

-    def select_columns_by_name(self, names: list[str]) -> PandasDataFrameXchg:  # type: ignore[override] # noqa: E501
+    def select_columns_by_name(self, names: list[str]) -> PandasDataFrameXchg:  # type: ignore[override]
         if not isinstance(names, abc.Sequence):
             raise ValueError("`names` is not a sequence")
         if not isinstance(names, list):

pandas/core/internals/blocks.py (+2 -2)

@@ -1415,7 +1415,7 @@ def where(

         try:
             # try/except here is equivalent to a self._can_hold_element check,
-            # but this gets us back 'casted' which we will re-use below;
+            # but this gets us back 'casted' which we will reuse below;
             # without using 'casted', expressions.where may do unwanted upcasts.
             casted = np_can_hold_element(values.dtype, other)
         except (ValueError, TypeError, LossySetitemError):
@@ -1786,7 +1786,7 @@ def delete(self, loc) -> list[Block]:
         else:
             # No overload variant of "__getitem__" of "ExtensionArray" matches
             # argument type "Tuple[slice, slice]"
-            values = self.values[previous_loc + 1 : idx, :]  # type: ignore[call-overload] # noqa: E501
+            values = self.values[previous_loc + 1 : idx, :]  # type: ignore[call-overload]
         locs = mgr_locs_arr[previous_loc + 1 : idx]
         nb = type(self)(
             values, placement=BlockPlacement(locs), ndim=self.ndim, refs=refs

pandas/core/internals/construction.py (+1 -1)

@@ -550,7 +550,7 @@ def _prep_ndarraylike(values, copy: bool = True) -> np.ndarray:

     if len(values) == 0:
         # TODO: check for length-zero range, in which case return int64 dtype?
-        # TODO: re-use anything in try_cast?
+        # TODO: reuse anything in try_cast?
         return np.empty((0, 0), dtype=object)
     elif isinstance(values, range):
         arr = range_to_ndarray(values)

pandas/core/reshape/merge.py (+2 -2)

@@ -1390,13 +1390,13 @@ def _maybe_coerce_merge_keys(self) -> None:
             ):
                 ct = find_common_type([lk.dtype, rk.dtype])
                 if is_extension_array_dtype(ct):
-                    rk = ct.construct_array_type()._from_sequence(rk)  # type: ignore[union-attr] # noqa: E501
+                    rk = ct.construct_array_type()._from_sequence(rk)  # type: ignore[union-attr]
                 else:
                     rk = rk.astype(ct)  # type: ignore[arg-type]
             elif is_extension_array_dtype(rk.dtype):
                 ct = find_common_type([lk.dtype, rk.dtype])
                 if is_extension_array_dtype(ct):
-                    lk = ct.construct_array_type()._from_sequence(lk)  # type: ignore[union-attr] # noqa: E501
+                    lk = ct.construct_array_type()._from_sequence(lk)  # type: ignore[union-attr]
                 else:
                     lk = lk.astype(ct)  # type: ignore[arg-type]

pandas/core/reshape/reshape.py

+1-1
Original file line numberDiff line numberDiff line change
@@ -222,7 +222,7 @@ def mask_all(self) -> bool:
222222

223223
@cache_readonly
224224
def arange_result(self) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.bool_]]:
225-
# We cache this for re-use in ExtensionBlock._unstack
225+
# We cache this for reuse in ExtensionBlock._unstack
226226
dummy_arr = np.arange(len(self.index), dtype=np.intp)
227227
new_values, mask = self.get_new_values(dummy_arr, fill_value=-1)
228228
return new_values, mask.any(0)
