
DEPR: concat ignoring empty objects #52532


Merged
merged 28 commits on Jul 10, 2023
Changes from 18 commits
63292d4
DEPR: concat with empty objects
jbrockmendel Apr 7, 2023
2ace79c
xfail on 32bit
jbrockmendel Apr 8, 2023
6258adf
missing reason
jbrockmendel Apr 8, 2023
bfd969f
Merge branch 'main' into depr-concat-empty
jbrockmendel Apr 10, 2023
51e6d36
Fix AM build
jbrockmendel Apr 10, 2023
52ce0d7
post-merge fixup
jbrockmendel Apr 10, 2023
f8dc81e
Merge branch 'main' into depr-concat-empty
jbrockmendel Apr 11, 2023
163bf8a
catch more specifically
jbrockmendel Apr 11, 2023
03a0641
un-xfail
jbrockmendel Apr 12, 2023
49a7146
Merge branch 'main' into depr-concat-empty
jbrockmendel Apr 12, 2023
7e2e995
mypy fixup
jbrockmendel Apr 12, 2023
7c0c715
Merge branch 'main' into depr-concat-empty
jbrockmendel Apr 13, 2023
7f2977a
Merge branch 'main' into depr-concat-empty
jbrockmendel Apr 13, 2023
0eaf359
Merge branch 'main' into depr-concat-empty
jbrockmendel Apr 17, 2023
a878fea
Merge branch 'main' into depr-concat-empty
jbrockmendel Apr 18, 2023
75d5041
update test
jbrockmendel Apr 18, 2023
9e2de8f
Merge branch 'main' into depr-concat-empty
jbrockmendel May 4, 2023
392b40a
Fix broken test
jbrockmendel May 4, 2023
465c141
Merge branch 'main' into depr-concat-empty
jbrockmendel May 16, 2023
3666bca
remove duplicate whatsnew entries
jbrockmendel May 16, 2023
390d4ef
Merge branch 'main' into depr-concat-empty
jbrockmendel May 22, 2023
aa5794f
Merge branch 'main' into depr-concat-empty
jbrockmendel May 23, 2023
1277b26
Merge branch 'main' into depr-concat-empty
jbrockmendel May 24, 2023
5cddae9
Merge branch 'main' into depr-concat-empty
jbrockmendel May 24, 2023
8e58bff
Merge branch 'main' into depr-concat-empty
jbrockmendel May 25, 2023
47a17b3
Merge branch 'main' into depr-concat-empty
jbrockmendel May 25, 2023
e696c53
remove unused
jbrockmendel May 25, 2023
7f07121
Merge branch 'main' into depr-concat-empty
jbrockmendel May 26, 2023
7 changes: 7 additions & 0 deletions doc/source/whatsnew/v2.1.0.rst
Original file line number Diff line number Diff line change
Expand Up @@ -218,22 +218,29 @@ Deprecations
~~~~~~~~~~~~
- Deprecated 'broadcast_axis' keyword in :meth:`Series.align` and :meth:`DataFrame.align`, upcast before calling ``align`` with ``left = DataFrame({col: left for col in right.columns}, index=right.index)`` (:issue:`51856`)
- Deprecated 'method', 'limit', and 'fill_axis' keywords in :meth:`DataFrame.align` and :meth:`Series.align`, explicitly call ``fillna`` on the alignment results instead (:issue:`51856`)
- Deprecated :func:`concat` behavior when any of the objects being concatenated have length 0; in the past the dtypes of empty objects were ignored when determining the resulting dtype, in a future version they will not (:issue:`39122`)
- Deprecated :meth:`.DataFrameGroupBy.apply` and methods on the objects returned by :meth:`.DataFrameGroupBy.resample` operating on the grouping column(s); select the columns to operate on after groupby to either explicitly include or exclude the groupings and avoid the ``FutureWarning`` (:issue:`7155`)
- Deprecated :meth:`.Groupby.all` and :meth:`.GroupBy.any` with datetime64 or :class:`PeriodDtype` values, matching the :class:`Series` and :class:`DataFrame` deprecations (:issue:`34479`)
- Deprecated :meth:`Categorical.to_list`, use ``obj.tolist()`` instead (:issue:`51254`)
- Deprecated :meth:`DataFrame._data` and :meth:`Series._data`, use public APIs instead (:issue:`33333`)
- Deprecated :meth:`DataFrameGroupBy.dtypes`, check ``dtypes`` on the underlying object instead (:issue:`51045`)
- Deprecated ``axis=1`` in :meth:`DataFrame.ewm`, :meth:`DataFrame.rolling`, :meth:`DataFrame.expanding`, transpose before calling the method instead (:issue:`51778`)
- Deprecated ``axis=1`` in :meth:`DataFrame.groupby` and in :class:`Grouper` constructor, do ``frame.T.groupby(...)`` instead (:issue:`51203`)
- Deprecated accepting slices in :meth:`DataFrame.take`, call ``obj[slicer]`` or pass a sequence of integers instead (:issue:`51539`)
- Deprecated explicit support for subclassing :class:`Index` (:issue:`45289`)
- Deprecated passing a :class:`DataFrame` to :meth:`DataFrame.from_records`, use :meth:`DataFrame.set_index` or :meth:`DataFrame.drop` instead (:issue:`51353`)
- Deprecated silently dropping unrecognized timezones when parsing strings to datetimes (:issue:`18702`)
- Deprecated the ``axis`` keyword in :meth:`DataFrame.ewm`, :meth:`Series.ewm`, :meth:`DataFrame.rolling`, :meth:`Series.rolling`, :meth:`DataFrame.expanding`, :meth:`Series.expanding` (:issue:`51778`)
- Deprecated the ``axis`` keyword in :meth:`DataFrame.resample`, :meth:`Series.resample` (:issue:`51778`)
- Deprecated the behavior of :func:`concat` with both ``len(keys) != len(objs)``, in a future version this will raise instead of truncating to the shorter of the two sequences (:issue:`43485`)
- Deprecated the default of ``observed=False`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby`; this will default to ``True`` in a future version (:issue:`43999`)
- Deprecating pinning ``group.name`` to each group in :meth:`SeriesGroupBy.aggregate` aggregations; if your operation requires utilizing the groupby keys, iterate over the groupby object instead (:issue:`41090`)
- Deprecated the 'axis' keyword in :meth:`.GroupBy.idxmax`, :meth:`.GroupBy.idxmin`, :meth:`.GroupBy.fillna`, :meth:`.GroupBy.take`, :meth:`.GroupBy.skew`, :meth:`.GroupBy.rank`, :meth:`.GroupBy.cumprod`, :meth:`.GroupBy.cumsum`, :meth:`.GroupBy.cummax`, :meth:`.GroupBy.cummin`, :meth:`.GroupBy.pct_change`, :meth:`GroupBy.diff`, :meth:`.GroupBy.shift`, and :meth:`DataFrameGroupBy.corrwith`; for ``axis=1`` operate on the underlying :class:`DataFrame` instead (:issue:`50405`, :issue:`51046`)
- Deprecated :class:`.DataFrameGroupBy` with ``as_index=False`` not including groupings in the result when they are not columns of the DataFrame (:issue:`49519`)
- Deprecated :func:`is_categorical_dtype`, use ``isinstance(obj.dtype, pd.CategoricalDtype)`` instead (:issue:`52527`)
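The :func:`concat` deprecation added in this whatsnew entry can be illustrated with a plain-NumPy sketch (an illustrative model of the dtype rule only, not pandas' actual internals; the function names here are hypothetical): currently the dtypes of empty inputs are dropped before the common dtype is computed, while in a future version they will participate.

```python
import numpy as np


def legacy_result_dtype(arrays):
    # Current behavior (GH#39122): empty arrays are dropped before
    # computing the common result dtype.
    non_empty = [a for a in arrays if a.size] or list(arrays)
    return np.result_type(*non_empty)


def future_result_dtype(arrays):
    # Future behavior: every input's dtype participates.
    return np.result_type(*arrays)


empty_f8 = np.array([], dtype="float64")
ints = np.array([1, 2, 3], dtype="int64")

legacy_result_dtype([empty_f8, ints])  # int64: the empty float64 is ignored
future_result_dtype([empty_f8, ints])  # float64
```

To keep the old behavior after the deprecation is enforced, drop the empty entries yourself before calling ``concat``.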
139 changes: 57 additions & 82 deletions pandas/core/dtypes/concat.py
Expand Up @@ -8,24 +8,21 @@
Sequence,
cast,
)
import warnings

import numpy as np

from pandas._libs import lib
from pandas.util._exceptions import find_stack_level

from pandas.core.dtypes.astype import astype_array
from pandas.core.dtypes.cast import (
common_dtype_categorical_compat,
find_common_type,
)
from pandas.core.dtypes.dtypes import (
CategoricalDtype,
DatetimeTZDtype,
ExtensionDtype,
)
from pandas.core.dtypes.dtypes import CategoricalDtype
from pandas.core.dtypes.generic import (
ABCCategoricalIndex,
ABCExtensionArray,
ABCSeries,
)

Expand All @@ -41,6 +38,9 @@
)


_dtype_obj = np.dtype(object)


def _is_nonempty(x, axis) -> bool:
# filter empty arrays
# 1-d dtypes always are included here
Expand Down Expand Up @@ -100,45 +100,63 @@ def concat_compat(
# Creating an empty array directly is tempting, but the winnings would be
# marginal given that it would still require shape & dtype calculation and
# np.concatenate which has them both implemented is compiled.
orig = to_concat
non_empties = [x for x in to_concat if _is_nonempty(x, axis)]
if non_empties and axis == 0 and not ea_compat_axis:
# ea_compat_axis see GH#39574
to_concat = non_empties

dtypes = {obj.dtype for obj in to_concat}
kinds = {obj.dtype.kind for obj in to_concat}
contains_datetime = any(
isinstance(dtype, (np.dtype, DatetimeTZDtype)) and dtype.kind in "mM"
for dtype in dtypes
) or any(isinstance(obj, ABCExtensionArray) and obj.ndim > 1 for obj in to_concat)
any_ea, kinds, target_dtype = _get_result_dtype(to_concat, non_empties)

if len(to_concat) < len(orig):
_, _, alt_dtype = _get_result_dtype(orig, non_empties)
if alt_dtype != target_dtype:
# GH#39122
warnings.warn(
"The behavior of array concatenation with empty entries is "
"deprecated. In a future version, this will no longer exclude "
"empty items when determining the result dtype. "
"To retain the old behavior, exclude the empty entries before "
"the concat operation.",
FutureWarning,
stacklevel=find_stack_level(),
)

all_empty = not len(non_empties)
single_dtype = len(dtypes) == 1
any_ea = any(isinstance(x, ExtensionDtype) for x in dtypes)
if target_dtype is not None:
to_concat = [astype_array(arr, target_dtype, copy=False) for arr in to_concat]

if not isinstance(to_concat[0], np.ndarray):
# i.e. isinstance(to_concat[0], ExtensionArray)
to_concat_eas = cast("Sequence[ExtensionArray]", to_concat)
cls = type(to_concat[0])
return cls._concat_same_type(to_concat_eas)
else:
to_concat_arrs = cast("Sequence[np.ndarray]", to_concat)
result = np.concatenate(to_concat_arrs, axis=axis)

if not any_ea and "b" in kinds and result.dtype.kind in "iuf":
# GH#39817 cast to object instead of casting bools to numeric
result = result.astype(object, copy=False)
return result

if contains_datetime:
return _concat_datetime(to_concat, axis=axis)

def _get_result_dtype(to_concat: Sequence[ArrayLike], non_empties: Sequence[ArrayLike]):
target_dtype = None

dtypes = {obj.dtype for obj in to_concat}
kinds = {obj.dtype.kind for obj in to_concat}

any_ea = any(not isinstance(x, np.ndarray) for x in to_concat)
if any_ea:
# i.e. any ExtensionArrays

# we ignore axis here, as internally concatting with EAs is always
# for axis=0
if not single_dtype:
if len(dtypes) != 1:
target_dtype = find_common_type([x.dtype for x in to_concat])
target_dtype = common_dtype_categorical_compat(to_concat, target_dtype)
to_concat = [
astype_array(arr, target_dtype, copy=False) for arr in to_concat
]

if isinstance(to_concat[0], ABCExtensionArray):
# TODO: what about EA-backed Index?
to_concat_eas = cast("Sequence[ExtensionArray]", to_concat)
cls = type(to_concat[0])
return cls._concat_same_type(to_concat_eas)
else:
to_concat_arrs = cast("Sequence[np.ndarray]", to_concat)
return np.concatenate(to_concat_arrs)

elif all_empty:
elif not len(non_empties):
# we have all empties, but may need to coerce the result dtype to
# object if we have non-numeric type operands (numpy would otherwise
# cast this to float)
Expand All @@ -148,17 +166,16 @@ def concat_compat(
pass
else:
# coerce to object
to_concat = [x.astype("object") for x in to_concat]
target_dtype = np.dtype(object)
kinds = {"o"}
else:
# Argument 1 to "list" has incompatible type "Set[Union[ExtensionDtype,
# Any]]"; expected "Iterable[Union[dtype[Any], None, Type[Any],
# _SupportsDType[dtype[Any]], str, Tuple[Any, Union[SupportsIndex,
# Sequence[SupportsIndex]]], List[Any], _DTypeDict, Tuple[Any, Any]]]"
target_dtype = np.find_common_type(list(dtypes), []) # type: ignore[arg-type]

# error: Argument 1 to "concatenate" has incompatible type
# "Sequence[Union[ExtensionArray, ndarray[Any, Any]]]"; expected
# "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]]]"
result: np.ndarray = np.concatenate(to_concat, axis=axis) # type: ignore[arg-type]
if "b" in kinds and result.dtype.kind in "iuf":
# GH#39817 cast to object instead of casting bools to numeric
result = result.astype(object, copy=False)
return result
return any_ea, kinds, target_dtype


def union_categoricals(
Expand Down Expand Up @@ -320,45 +337,3 @@ def _maybe_unwrap(x):

dtype = CategoricalDtype(categories=categories, ordered=ordered)
return Categorical._simple_new(new_codes, dtype=dtype)


def _concatenate_2d(to_concat: Sequence[np.ndarray], axis: AxisInt) -> np.ndarray:
# coerce to 2d if needed & concatenate
if axis == 1:
to_concat = [np.atleast_2d(x) for x in to_concat]
return np.concatenate(to_concat, axis=axis)


def _concat_datetime(to_concat: Sequence[ArrayLike], axis: AxisInt = 0) -> ArrayLike:
"""
provide concatenation of an datetimelike array of arrays each of which is a
single M8[ns], datetime64[ns, tz] or m8[ns] dtype

Parameters
----------
to_concat : sequence of arrays
axis : axis to provide concatenation

Returns
-------
a single array, preserving the combined dtypes
"""
from pandas.core.construction import ensure_wrapped_if_datetimelike

to_concat = [ensure_wrapped_if_datetimelike(x) for x in to_concat]

single_dtype = lib.dtypes_all_equal([x.dtype for x in to_concat])

# multiple types, need to coerce to object
if not single_dtype:
# ensure_wrapped_if_datetimelike ensures that astype(object) wraps
# in Timestamp/Timedelta
return _concatenate_2d([x.astype(object) for x in to_concat], axis=axis)

# error: Unexpected keyword argument "axis" for "_concat_same_type" of
# "ExtensionArray"
to_concat_eas = cast("list[ExtensionArray]", to_concat)
result = type(to_concat_eas[0])._concat_same_type( # type: ignore[call-arg]
to_concat_eas, axis=axis
)
return result
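The warning path this diff adds to ``concat_compat`` — compute the target dtype from the non-empty entries, recompute it from all entries, and warn when the two disagree — can be modeled with a short self-contained sketch. This is again plain NumPy with a hypothetical function name, not the real pandas code, which also handles ExtensionArrays and axis/compat special cases.

```python
import warnings

import numpy as np


def concat_compat_sketch(to_concat, axis=0):
    # Old-behavior dtype: empty entries are excluded from the calculation.
    non_empties = [x for x in to_concat if x.size] or list(to_concat)
    target_dtype = np.result_type(*non_empties)

    # Future-behavior dtype: every entry participates.
    alt_dtype = np.result_type(*to_concat)

    if alt_dtype != target_dtype:
        # The result dtype will change once the deprecation is enforced.
        warnings.warn(
            "The behavior of array concatenation with empty entries is "
            "deprecated.",
            FutureWarning,
            stacklevel=2,
        )

    # Cast to the (old-behavior) target dtype and concatenate everything;
    # the empty entries contribute zero rows either way.
    return np.concatenate(
        [x.astype(target_dtype, copy=False) for x in to_concat], axis=axis
    )
```

Note the warning fires only when the dtypes actually diverge; concatenating an empty ``int64`` with a non-empty ``int64`` stays silent, matching the ``alt_dtype != target_dtype`` guard in the diff.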
31 changes: 16 additions & 15 deletions pandas/core/internals/concat.py
Expand Up @@ -475,7 +475,9 @@ def is_na(self) -> bool:

values = blk.values
if values.size == 0:
# GH#39122 this case will return False once deprecation is enforced
return True

if isinstance(values.dtype, SparseDtype):
return False

Expand All @@ -494,16 +496,14 @@ def is_na(self) -> bool:
return all(isna_all(row) for row in values)

@cache_readonly
def is_na_without_isna_all(self) -> bool:
def is_na_after_size_and_isna_all_deprecation(self) -> bool:
"""
Will self.is_na be True after values.size == 0 deprecation and isna_all
deprecation are enforced?
"""
blk = self.block
if blk.dtype.kind == "V":
return True
if not blk._can_hold_na:
return False

values = blk.values
if values.size == 0:
return True
Comment on lines -409 to -418

Member: why does this change?

Member Author: Because the future behavior won't depend on ``values.size == 0`` (note the changed method name/docstring).

Member: sure but the deprecation hasn't been enforced yet, why is this changing already?

Member Author: this method is for checking on the future behavior to see if we need to issue a warning.

return False

def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na) -> ArrayLike:
Expand Down Expand Up @@ -565,17 +565,16 @@ def _concatenate_join_units(join_units: list[JoinUnit], copy: bool) -> ArrayLike

if empty_dtype != empty_dtype_future:
if empty_dtype == concat_values.dtype:
# GH#40893
# GH#39122, GH#40893
warnings.warn(
"The behavior of DataFrame concatenation with all-NA entries is "
"deprecated. In a future version, this will no longer exclude "
"all-NA columns when determining the result dtypes. "
"To retain the old behavior, cast the all-NA columns to the "
"desired dtype before the concat operation.",
"The behavior of DataFrame concatenation with empty or all-NA "
"entries is deprecated. In a future version, this will no longer "
"exclude empty or all-NA columns when determining the result dtypes. "
"To retain the old behavior, exclude the relevant entries before "
"the concat operation.",
FutureWarning,
stacklevel=find_stack_level(),
)

return concat_values


Expand Down Expand Up @@ -631,7 +630,9 @@ def _get_empty_dtype(join_units: Sequence[JoinUnit]) -> tuple[DtypeObj, DtypeObj
dtype_future = dtype
if len(dtypes) != len(join_units):
dtypes_future = [
unit.block.dtype for unit in join_units if not unit.is_na_without_isna_all
unit.block.dtype
for unit in join_units
if not unit.is_na_after_size_and_isna_all_deprecation
]
if not len(dtypes_future):
dtypes_future = [
7 changes: 5 additions & 2 deletions pandas/tests/dtypes/test_concat.py
Expand Up @@ -12,8 +12,11 @@ def test_concat_mismatched_categoricals_with_empty():
ser1 = Series(["a", "b", "c"], dtype="category")
ser2 = Series([], dtype="category")

result = _concat.concat_compat([ser1._values, ser2._values])
expected = pd.concat([ser1, ser2])._values
msg = "The behavior of array concatenation with empty entries is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
result = _concat.concat_compat([ser1._values, ser2._values])
with tm.assert_produces_warning(FutureWarning, match=msg):
expected = pd.concat([ser1, ser2])._values
tm.assert_categorical_equal(result, expected)


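The test updates in this PR all follow the same pattern: wrap the concat call in pandas' ``tm.assert_produces_warning`` context manager with a ``match`` string. For readers unfamiliar with that helper, here is a minimal stdlib-only stand-in (a hypothetical helper written for illustration; the real pandas utility is a context manager with more options):

```python
import warnings


def check_produces_warning(fn, category, match):
    # Hypothetical, minimal stand-in for pandas' tm.assert_produces_warning:
    # run fn, record all warnings, and assert that at least one matches the
    # expected category and message substring.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = fn()
    assert any(
        issubclass(w.category, category) and match in str(w.message)
        for w in caught
    ), f"no {category.__name__} matching {match!r} was raised"
    return result
```

The ``match`` argument is what ties each test to the new message text, e.g. ``"The behavior of array concatenation with empty entries is deprecated"``.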
11 changes: 8 additions & 3 deletions pandas/tests/groupby/test_groupby.py
Original file line number Diff line number Diff line change
Expand Up @@ -374,9 +374,13 @@ def f3(x):

df2 = DataFrame({"a": [3, 2, 2, 2], "b": range(4), "c": range(5, 9)})

depr_msg = "The behavior of array concatenation with empty entries is deprecated"

# correct result
result1 = df.groupby("a").apply(f1)
result2 = df2.groupby("a").apply(f1)
with tm.assert_produces_warning(FutureWarning, match=depr_msg):
result1 = df.groupby("a").apply(f1)
with tm.assert_produces_warning(FutureWarning, match=depr_msg):
result2 = df2.groupby("a").apply(f1)
tm.assert_frame_equal(result1, result2)

# should fail (not the same number of levels)
Expand All @@ -390,7 +394,8 @@ def f3(x):
with pytest.raises(AssertionError, match=msg):
df.groupby("a").apply(f3)
with pytest.raises(AssertionError, match=msg):
df2.groupby("a").apply(f3)
with tm.assert_produces_warning(FutureWarning, match=depr_msg):
df2.groupby("a").apply(f3)


def test_attr_wrapper(ts):
4 changes: 3 additions & 1 deletion pandas/tests/indexes/test_base.py
Expand Up @@ -616,7 +616,9 @@ def test_append_empty_preserve_name(self, name, expected):
left = Index([], name="foo")
right = Index([1, 2, 3], name=name)

result = left.append(right)
msg = "The behavior of array concatenation with empty entries is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
result = left.append(right)
assert result.name == expected

@pytest.mark.parametrize(
4 changes: 3 additions & 1 deletion pandas/tests/reshape/concat/test_append.py
Expand Up @@ -162,7 +162,9 @@ def test_append_preserve_index_name(self):
df2 = DataFrame(data=[[1, 4, 7], [2, 5, 8], [3, 6, 9]], columns=["A", "B", "C"])
df2 = df2.set_index(["A"])

result = df1._append(df2)
msg = "The behavior of array concatenation with empty entries is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
result = df1._append(df2)
assert result.index.name == "A"

indexes_can_append = [
21 changes: 13 additions & 8 deletions pandas/tests/reshape/concat/test_append_common.py
Expand Up @@ -693,11 +693,14 @@ def test_concat_categorical_empty(self):
s1 = Series([], dtype="category")
s2 = Series([1, 2], dtype="category")

tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), s2)
tm.assert_series_equal(s1._append(s2, ignore_index=True), s2)
msg = "The behavior of array concatenation with empty entries is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), s2)
tm.assert_series_equal(s1._append(s2, ignore_index=True), s2)

tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), s2)
tm.assert_series_equal(s2._append(s1, ignore_index=True), s2)
with tm.assert_produces_warning(FutureWarning, match=msg):
tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), s2)
tm.assert_series_equal(s2._append(s1, ignore_index=True), s2)

s1 = Series([], dtype="category")
s2 = Series([], dtype="category")
Expand All @@ -719,11 +722,13 @@ def test_concat_categorical_empty(self):

# empty Series is ignored
exp = Series([np.nan, np.nan])
tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)
with tm.assert_produces_warning(FutureWarning, match=msg):
tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)
with tm.assert_produces_warning(FutureWarning, match=msg):
tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)

def test_categorical_concat_append(self):
cat = Categorical(["a", "b"], categories=["a", "b"])