Removed ABCs from pandas._typing #27424

Merged (14 commits) on Jul 24, 2019
46 changes: 24 additions & 22 deletions pandas/_typing.py
@@ -1,33 +1,35 @@
 from pathlib import Path
-from typing import IO, AnyStr, TypeVar, Union
+from typing import IO, TYPE_CHECKING, AnyStr, TypeVar, Union
 
 import numpy as np
 
-from pandas._libs import Timestamp
-from pandas._libs.tslibs.period import Period
-from pandas._libs.tslibs.timedeltas import Timedelta
-
-from pandas.core.dtypes.dtypes import ExtensionDtype
-from pandas.core.dtypes.generic import (
-    ABCDataFrame,
-    ABCExtensionArray,
-    ABCIndexClass,
-    ABCSeries,
-    ABCSparseSeries,
-)
+if TYPE_CHECKING:  # Use for any internal imports
+    from pandas._libs import Timestamp
+    from pandas._libs.tslibs.period import Period
+    from pandas._libs.tslibs.timedeltas import Timedelta
+
+    from pandas.core.arrays.base import ExtensionArray
+    from pandas.core.dtypes.dtypes import ExtensionDtype
+    from pandas.core.indexes.base import Index
+    from pandas.core.frame import DataFrame
+    from pandas.core.series import Series
+    from pandas.core.sparse.series import SparseSeries
 
 AnyArrayLike = TypeVar(
-    "AnyArrayLike",
-    ABCExtensionArray,
-    ABCIndexClass,
-    ABCSeries,
-    ABCSparseSeries,
-    np.ndarray,
+    "AnyArrayLike", "ExtensionArray", "Index", "Series", "SparseSeries", np.ndarray
 )
-ArrayLike = TypeVar("ArrayLike", ABCExtensionArray, np.ndarray)
-DatetimeLikeScalar = TypeVar("DatetimeLikeScalar", Period, Timestamp, Timedelta)
-Dtype = Union[str, np.dtype, ExtensionDtype]
+ArrayLike = TypeVar("ArrayLike", "ExtensionArray", np.ndarray)
+DatetimeLikeScalar = TypeVar("DatetimeLikeScalar", "Period", "Timestamp", "Timedelta")
+Dtype = Union[str, np.dtype, "ExtensionDtype"]
 FilePathOrBuffer = Union[str, Path, IO[AnyStr]]
 
-FrameOrSeries = TypeVar("FrameOrSeries", ABCSeries, ABCDataFrame)
+FrameOrSeries = TypeVar("FrameOrSeries", "Series", "DataFrame")
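The pattern in the hunk above can be sketched in isolation: imports under `TYPE_CHECKING` never execute at runtime, and the quoted names in each `TypeVar` are string forward references that only the type checker resolves. A minimal self-contained sketch, using `decimal.Decimal` as a stand-in for a heavy or circular pandas import:

```python
from typing import TYPE_CHECKING, TypeVar

if TYPE_CHECKING:
    # Resolved only by mypy; this import never runs, so there is no
    # runtime cost and no risk of a circular import.
    from decimal import Decimal

# The string "Decimal" is a forward reference, so the TypeVar can
# mention the type without importing it at runtime.
NumberLike = TypeVar("NumberLike", "Decimal", float)

def halve(x: NumberLike) -> NumberLike:
    return x / 2

print(halve(3.0))  # 1.5
```

`NumberLike` and `halve` are hypothetical names for illustration; the PR applies the same mechanics to `AnyArrayLike`, `ArrayLike`, and friends.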
Member:
Suggested change:
-FrameOrSeries = TypeVar("FrameOrSeries", "Series", "DataFrame")
+FrameOrSeries = Union["Series", "DataFrame"]

quote from https://mypy.readthedocs.io/en/latest/generics.html...
"User-defined generics are a moderately advanced feature and you can get far without ever using them..."

Member Author:
Thanks, but this just loosens the type system rather than actually fixing anything. A TypeVar is generally more useful for checking functions that are fully generic in nature.

Might just change the return of this one and see how many others require Union in the future
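The loosening the author describes can be shown with a standalone sketch (stub classes, not the real pandas types): a constrained TypeVar ties the return type to the argument type, while a Union return only tells mypy "one of these two", even when the caller passed a specific one.

```python
from typing import TypeVar, Union

class Series:
    pass

class DataFrame:
    pass

FrameOrSeries = TypeVar("FrameOrSeries", Series, DataFrame)

def rename_generic(obj: FrameOrSeries) -> FrameOrSeries:
    # mypy infers rename_generic(Series()) as Series, so callers can
    # keep using Series-specific attributes without a cast.
    return obj

def rename_union(obj: Union[Series, DataFrame]) -> Union[Series, DataFrame]:
    # mypy only knows the result is "Series or DataFrame" here,
    # regardless of what the caller actually passed in.
    return obj
```

`rename_generic`/`rename_union` are hypothetical function names chosen to contrast the two signatures; the runtime behavior is identical, only what mypy can prove differs.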

Member:
makes sense. Union[Series, DataFrame] might be better written as NDFrame anyway?

Member Author:
Also, the "user-defined generics" you are referring to are more applicable to containers, not TypeVars. Right now we just use a blanket Series as a return object, though in the future we could do something like Series[int], Series[str], etc.; the Series would be the user-defined generic in that case.

The TypeVar in the docs you linked is just a way of parametrizing that user-defined generic, so that a Series[int] would have to stay a Series[int] through its lifecycle; without that parametrization we allow Series[int] to become Series[str] without any complaints from mypy today.

We are probably a ways off of doing user-defined generics but this is great that you looked into it. Certainly open to ideas on that front if you think of a good way to implement as we get more familiar with these annotations
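The container/TypeVar distinction described above can be sketched with a toy generic. `ToySeries` is a hypothetical stand-in; pandas' real Series was not parametrizable like this at the time:

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")

class ToySeries(Generic[T]):
    # ToySeries is the user-defined generic; T is what parametrizes it.
    def __init__(self, values: List[T]) -> None:
        self.values = values

    def head(self, n: int = 5) -> "ToySeries[T]":
        # The shared T ties input to output: to mypy, a ToySeries[int]
        # stays a ToySeries[int] through its lifecycle, which is exactly
        # the guarantee the comment above is after.
        return ToySeries(self.values[:n])

s: ToySeries[int] = ToySeries([1, 2, 3])
print(s.head(2).values)  # [1, 2]
```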

Member Author:
> makes sense. Union[Series, DataFrame] might be better written as NDFrame anyway?

Hmm, that would work, though we don't typically import NDFrame anywhere, so I don't think we want to start here.

Contributor:
I would leave it as FrameOrSeries, as it's more descriptive.

Scalar = Union[str, int, float]
4 changes: 2 additions & 2 deletions pandas/core/dtypes/common.py
@@ -168,11 +168,11 @@ def ensure_int_or_float(arr: ArrayLike, copy=False) -> np.array:
     will remain unchanged.
     """
     try:
-        return arr.astype("int64", copy=copy, casting="safe")
+        return arr.astype("int64", copy=copy, casting="safe")  # type: ignore
     except TypeError:
         pass
     try:
-        return arr.astype("uint64", copy=copy, casting="safe")
+        return arr.astype("uint64", copy=copy, casting="safe")  # type: ignore
     except TypeError:
         return arr.astype("float64", copy=copy)

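For context, the try/except cast chain that those `# type: ignore` comments sit on behaves like this. A self-contained re-implementation for illustration, not the pandas function itself:

```python
import numpy as np

def ensure_int_or_float_sketch(arr: np.ndarray, copy: bool = False) -> np.ndarray:
    # Try a lossless ("safe") cast to int64, then uint64; if neither
    # safe cast is possible, fall back to float64.
    try:
        return arr.astype("int64", copy=copy, casting="safe")
    except TypeError:
        pass
    try:
        return arr.astype("uint64", copy=copy, casting="safe")
    except TypeError:
        return arr.astype("float64", copy=copy)

# int32 safe-casts to int64; float64 does not, so it stays floating point.
print(ensure_int_or_float_sketch(np.array([1, 2], dtype="int32")).dtype)  # int64
print(ensure_int_or_float_sketch(np.array([0.5, 1.5])).dtype)             # float64
```

The ignores are needed because `arr` is typed as `ArrayLike`, which also covers ExtensionArray, whose `astype` does not accept the `casting` keyword.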
6 changes: 4 additions & 2 deletions pandas/core/indexes/interval.py
@@ -934,7 +934,7 @@ def get_indexer(
         elif not is_object_dtype(target):
             # homogeneous scalar index: use IntervalTree
             target = self._maybe_convert_i8(target)
-            indexer = self._engine.get_indexer(target.values)
+            indexer = self._engine.get_indexer(target.values)  # type: ignore
         else:
             # heterogeneous scalar index: defer elementwise to get_loc
             # (non-overlapping so get_loc guarantees scalar of KeyError)
@@ -979,7 +979,9 @@ def get_indexer_non_unique(
             indexer = np.concatenate(indexer)
         else:
             target = self._maybe_convert_i8(target)
-            indexer, missing = self._engine.get_indexer_non_unique(target.values)
+            indexer, missing = self._engine.get_indexer_non_unique(
+                target.values  # type: ignore
+            )
 
         return ensure_platform_int(indexer), ensure_platform_int(missing)
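The method being annotated backs `IntervalIndex.get_indexer`, which maps scalar targets to interval positions via the IntervalTree engine (real pandas API; the sample data below is mine):

```python
import pandas as pd

# Three right-closed intervals: (0, 5], (5, 10], (10, 15]
idx = pd.IntervalIndex.from_breaks([0, 5, 10, 15])

# Homogeneous scalar targets go through the IntervalTree branch shown
# in the diff; -1 marks a target that falls in no interval.
print(list(idx.get_indexer([3, 7, 100])))  # [0, 1, -1]
```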

1 change: 1 addition & 0 deletions setup.cfg
@@ -30,6 +30,7 @@ exclude =
     .eggs/*.py,
     versioneer.py,
     env  # exclude asv benchmark environments from linting
+    pandas/_typing.py
 
 [flake8-rst]
 bootstrap =