BUG: pd.concat produces frames with inconsistent order when concatenating frames with categorical indices #46019


Closed
wants to merge 24 commits
1 change: 1 addition & 0 deletions doc/source/whatsnew/v1.5.0.rst
@@ -535,6 +535,7 @@ Reshaping
 - Bug in :func:`get_dummies` that selected object and categorical dtypes but not string (:issue:`44965`)
 - Bug in :meth:`DataFrame.align` when aligning a :class:`MultiIndex` to a :class:`Series` with another :class:`MultiIndex` (:issue:`46001`)
 - Bug in concatenation with ``IntegerDtype``, or ``FloatingDtype`` arrays where the resulting dtype did not mirror the behavior of the non-nullable dtypes (:issue:`46379`)
+- Bug in :func:`concat` between two :class:`DataFrame` objects with categorical indexes that have the same categories in a different order, returning a result whose index values were in the wrong order (:issue:`44099`)
 -

 Sparse
5 changes: 4 additions & 1 deletion pandas/core/indexes/category.py
@@ -571,7 +571,10 @@ def map(self, mapper):
     def _concat(self, to_concat: list[Index], name: Hashable) -> Index:
         # if calling index is category, don't check dtype of others
         try:
-            codes = np.concatenate([self._is_dtype_compat(c).codes for c in to_concat])
+            data = np.concatenate(
+                [self._is_dtype_compat(c).tolist() for c in to_concat]
+            )
+            codes = Categorical(data, categories=self.categories).codes
         except TypeError:
             # not all to_concat elements are among our categories (or NA)
             from pandas.core.dtypes.concat import concat_compat
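Why the removed line was wrong: two CategoricalIndex objects can share the same set of categories in a different order, and the integer codes are only meaningful relative to their own ordering. Concatenating the raw codes and then reading them against self.categories silently relabels the values coming from the other index. Below is a small standalone sketch of that effect and of the re-encoding approach this PR takes; it mirrors the change conceptually and is not the pandas internals themselves.

import numpy as np
from pandas import Categorical, CategoricalIndex

ci1 = CategoricalIndex(["a", "b", "c"], categories=["a", "b", "c"])
ci2 = CategoricalIndex(["b", "a", "c"], categories=["b", "a", "c"])

# Codes are relative to each index's own category order, so both are [0, 1, 2]:
# ci1: a -> 0, b -> 1, c -> 2
# ci2: b -> 0, a -> 1, c -> 2
print(ci1.codes, ci2.codes)  # [0 1 2] [0 1 2]

# Old behaviour: concatenate the raw codes and decode against ci1's categories.
# ci2's values come back as "a", "b", "c" instead of "b", "a", "c".
old_codes = np.concatenate([ci1.codes, ci2.codes])
print(ci1.categories.take(old_codes).tolist())  # ['a', 'b', 'c', 'a', 'b', 'c']

# Fixed behaviour: go back to the actual values and re-encode them against the
# calling index's categories, as the new lines in the diff do.
data = np.concatenate([ci1.tolist(), ci2.tolist()])
new_codes = Categorical(data, categories=ci1.categories).codes
print(ci1.categories.take(new_codes).tolist())  # ['a', 'b', 'c', 'b', 'a', 'c']

If any element holds values outside self.categories, the except TypeError branch shown above still applies and the concatenation falls back to concat_compat, so the result is typically no longer categorical.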
12 changes: 12 additions & 0 deletions pandas/tests/indexes/categorical/test_category.py
@@ -286,6 +286,18 @@ def test_map_str(self):
         # See test_map.py
         pass

+    def test_append(self):
+        # GH 44099
+        # concat indexes which have the same categories
+
+        ci1 = CategoricalIndex(["a", "b", "c"], categories=["a", "b", "c"])
+        ci2 = CategoricalIndex(["b", "a", "c"], categories=["b", "a", "c"])
+        expected = CategoricalIndex(
+            ["a", "b", "c", "b", "a", "c"], categories=["a", "b", "c"]
+        )
+        result = ci1.append(ci2)
+        tm.assert_index_equal(result, expected)
+

 class TestCategoricalIndex2:
     # Tests that are not overriding a test in Base
22 changes: 22 additions & 0 deletions pandas/tests/reshape/concat/test_concat.py
@@ -18,6 +18,7 @@

 import pandas as pd
 from pandas import (
+    CategoricalIndex,
     DataFrame,
     Index,
     MultiIndex,
@@ -502,6 +503,27 @@ def test_concat_duplicate_indices_raise(self):
         with pytest.raises(InvalidIndexError, match=msg):
             concat([df1, df2], axis=1)

+    def test_concat_with_categorical_indices(self):
+        # GH 44099
+        # concat frames with categorical indices that have the same values
+
+        df1 = DataFrame(
+            {"col1": ["a_val", "b_val", "c_val"]},
+            index=CategoricalIndex(["a", "b", "c"], categories=["a", "b", "c"]),
+        )
+        df2 = DataFrame(
+            {"col1": ["b_val", "a_val", "c_val"]},
+            index=CategoricalIndex(["b", "a", "c"], categories=["b", "a", "c"]),
+        )
+        expected = DataFrame(
+            {"col1": ["a_val", "b_val", "c_val", "b_val", "a_val", "c_val"]},
+            index=CategoricalIndex(
+                ["a", "b", "c", "b", "a", "c"], categories=["a", "b", "c"]
+            ),
+        )
+        result = concat([df1, df2])
Contributor (review comment on the line above):
Can you make an asv benchmark for this type of concatenation and then show how it performed previously? I am worried that the list conversion of the codes is very expensive.

+        tm.assert_frame_equal(result, expected)
+

 @pytest.mark.parametrize("pdt", [Series, DataFrame])
 @pytest.mark.parametrize("dt", np.sctypes["float"])
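Regarding the review comment above asking for an asv benchmark: a rough sketch of what such a benchmark could look like, so the cost of the .tolist() conversion can be compared before and after this change. The class name, sizes, and placement (somewhere under asv_bench/benchmarks/) are illustrative and not part of this PR.

import pandas as pd


class ConcatCategoricalIndexSameCategories:
    # Hypothetical asv benchmark: concat two frames whose CategoricalIndex
    # objects share the same categories listed in a different order, which is
    # the code path touched by this fix.
    def setup(self):
        categories = [f"cat_{i}" for i in range(10_000)]
        self.df1 = pd.DataFrame(
            {"col1": range(10_000)},
            index=pd.CategoricalIndex(categories, categories=categories),
        )
        # Same categories, reversed order.
        self.df2 = pd.DataFrame(
            {"col1": range(10_000)},
            index=pd.CategoricalIndex(categories, categories=categories[::-1]),
        )

    def time_concat_same_categories_different_order(self):
        pd.concat([self.df1, self.df2])

Running asv against the commit before this change and against this branch would show whether building Python lists from the values noticeably slows this path down for larger indexes.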