PERF: improve performance of infer_dtype #51054

Merged: 4 commits, Feb 9, 2023
Changes from 2 commits
1 change: 1 addition & 0 deletions doc/source/whatsnew/v2.0.0.rst
@@ -909,6 +909,7 @@ Performance improvements
 - Performance improvement for :meth:`MultiIndex.unique` (:issue:`48335`)
 - Performance improvement for indexing operations with nullable dtypes (:issue:`49420`)
 - Performance improvement for :func:`concat` with extension array backed indexes (:issue:`49128`, :issue:`49178`)
+- Performance improvement for :func:`api.types.infer_dtype` (:issue:`51054`)
 - Reduce memory usage of :meth:`DataFrame.to_pickle`/:meth:`Series.to_pickle` when using BZ2 or LZMA (:issue:`49068`)
 - Performance improvement for :class:`~arrays.StringArray` constructor passing a numpy array with type ``np.str_`` (:issue:`49109`)
 - Performance improvement in :meth:`~arrays.IntervalArray.from_tuples` (:issue:`50620`)
28 changes: 20 additions & 8 deletions pandas/_libs/lib.pyx
@@ -128,6 +128,13 @@ from pandas._libs.tslibs.period cimport is_period_object
 from pandas._libs.tslibs.timedeltas cimport convert_to_timedelta64
 from pandas._libs.tslibs.timezones cimport tz_compare
 
+from pandas.core.dtypes.generic import (
+    ABCExtensionArray,
+    ABCIndex,
+    ABCPandasArray,
+    ABCSeries,
+)
+
 # constants that will be compared to potentially arbitrarily large
 # python int
 cdef:
@@ -1358,7 +1365,7 @@ cdef object _try_infer_map(object dtype):
     cdef:
         object val
         str attr
-    for attr in ["name", "kind", "base", "type"]:
+    for attr in ["kind", "name", "base", "type"]:
         val = getattr(dtype, attr, None)
         if val in _TYPE_MAP:
             return _TYPE_MAP[val]
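
Presumably the motivation for trying "kind" before "name" here: np.dtype.kind is a plain stored attribute, while np.dtype.name is a property computed on access, so the common case now hits the cheaper lookup first. A quick illustration, not part of the diff:

import numpy as np

dt = np.dtype("int64")
# Both the kind character and the dtype name are keys in _TYPE_MAP, but
# .kind is a cheap C-level attribute while .name is computed on access,
# so the reordered loop resolves most dtypes on the first getattr.
print(dt.kind)  # "i"
print(dt.name)  # "int64"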
@@ -1475,11 +1482,17 @@ def infer_dtype(value: object, skipna: bool = True) -> str:
         bint seen_val = False
         flatiter it
 
+    if not util.is_array(value):
+        if isinstance(value, (ABCSeries, ABCExtensionArray, ABCPandasArray)):

Member:

Any way to do this without relying on the ABCFoos?


Contributor Author:

I agree it's not ideal, but it was the best solution I could find that keeps the performance:

  • Importing Series, Index etc. into the global namespace is not possible because of circular import issues.
  • Importing Series, Index etc. inside infer_dtype is possible, but costs about 300 ns per imported class for each function call, so it adds up to something very noticeable.
  • I thought of checking class names as a guard before importing inside the function, but that causes problems with subclasses.

The advantage of importing ABCFoo into the global namespace is that it's a one-time import, so there is no per-call cost, and it gives a nice speed boost. But I also see the problem of importing from Python land.

IMO the speedup is worth it, but I can also see it's not ideal, so no biggie if it doesn't get accepted.
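
A minimal Python-level sketch of the trade-off described above (function names are illustrative; the 300 ns figure is the author's measurement, not reproduced here):

from pandas.core.dtypes.generic import ABCSeries  # one-time cost at module import

def check_with_abc(value) -> bool:
    # The approach taken in this PR: isinstance against the ABC, which also
    # matches Series subclasses, with no per-call import.
    return isinstance(value, ABCSeries)

def check_with_percall_import(value) -> bool:
    # The rejected alternative: importing inside the function pays a lookup
    # on every call (roughly 300 ns per class, per the author).
    from pandas import Series
    return isinstance(value, Series)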


Member:

Definitely not suggesting using the non-ABC versions. My thought is that, without adding this here, in the elif hasattr(value, "dtype") block we could do the _try_infer_map(value.dtype) check before the "Unwrap Series/Index" bit.
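
A rough sketch of that alternative shape (hypothetical, not the code in this diff; _try_infer_map stands in for the cdef helper in lib.pyx):

import numpy as np

def infer_dtype_alternative(value):
    # Reviewer's suggested shape: keep only the hasattr(value, "dtype")
    # branch and consult the dtype map before unwrapping Series/Index.
    if hasattr(value, "dtype"):
        inferred = _try_infer_map(value.dtype)  # existing helper in lib.pyx
        if inferred is not None:
            return inferred
        values = np.asarray(value)  # the "Unwrap Series/Index" bit
        ...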

+            inferred = _try_infer_map(value.dtype)
+            if inferred is not None:
+                return inferred
+        elif isinstance(value, ABCIndex) and skipna is False:
+            # Index, use the cached attribute if possible, populate the cache otherwise
+            return value.inferred_type
+
     if util.is_array(value):
         values = value
-    elif hasattr(value, "inferred_type") and skipna is False:
-        # Index, use the cached attribute if possible, populate the cache otherwise
-        return value.inferred_type
     elif hasattr(value, "dtype"):
         # this will handle ndarray-like
         # e.g. categoricals
@@ -1493,7 +1506,6 @@ def infer_dtype(value: object, skipna: bool = True) -> str:
 
         # Unwrap Series/Index
         values = np.asarray(value)
-
     else:
         if not isinstance(value, list):
             value = list(value)
@@ -1503,10 +1515,10 @@ def infer_dtype(value: object, skipna: bool = True) -> str:
         from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike
         values = construct_1d_object_array_from_listlike(value)
 
-    val = _try_infer_map(values.dtype)
-    if val is not None:
+    inferred = _try_infer_map(values.dtype)
+    if inferred is not None:
         # Anything other than object-dtype should return here.
-        return val
+        return inferred
 
     if values.descr.type_num != NPY_OBJECT:
         # i.e. values.dtype != np.object_
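
For reference, a small usage sketch of the fast path this PR optimizes (a sketch, not part of the diff; expected outputs noted in comments):

import pandas as pd
from pandas.api.types import infer_dtype

# Series and extension arrays whose dtype is in _TYPE_MAP now return early,
# without being unwrapped via np.asarray().
print(infer_dtype(pd.Series([1, 2, 3])))                # "integer"
print(infer_dtype(pd.array([1, None], dtype="Int64")))  # "integer"

# Object-dtype input still falls through to element-by-element inference.
print(infer_dtype([1, "a"]))                            # "mixed-integer"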