[BUG]: Fix ValueError in concat() when at least one Index has duplicates #36290


Merged: 16 commits, merged on Nov 19, 2020.
Changes from 12 commits
12 changes: 12 additions & 0 deletions asv_bench/benchmarks/algorithms.py
@@ -5,6 +5,7 @@
from pandas._libs import lib

import pandas as pd
from pandas.core.algorithms import make_duplicates_of_left_unique_in_right

from .pandas_vb_common import tm

@@ -174,4 +175,15 @@ def time_argsort(self, N):
        self.array.argsort()


class RemoveDuplicates:
    def setup(self):
        N = 10 ** 5
        na = np.arange(int(N / 2))
        # left: the first quarter of the values, each occurring twice
        self.left = np.concatenate([na[: int(N / 4)], na[: int(N / 4)]])
        # right: every value occurring twice
        self.right = np.concatenate([na, na])

    def time_make_duplicates_of_left_unique_in_right(self):
        make_duplicates_of_left_unique_in_right(self.left, self.right)


from .pandas_vb_common import setup # noqa: F401 isort:skip
1 change: 1 addition & 0 deletions doc/source/whatsnew/v1.2.0.rst
@@ -567,6 +567,7 @@ Reshaping
- Bug in :meth:`DataFrame.combine_first()` caused wrong alignment with dtype ``string`` and one level of ``MultiIndex`` containing only ``NA`` (:issue:`37591`)
- Fixed regression in :func:`merge` on merging DatetimeIndex with empty DataFrame (:issue:`36895`)
- Bug in :meth:`DataFrame.apply` not setting index of return value when ``func`` return type is ``dict`` (:issue:`37544`)
- Bug in :func:`concat` resulted in a ``ValueError`` when at least one of both inputs had a non unique index (:issue:`36263`)
Member: non-unique

Member Author: Done


Sparse
^^^^^^
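For context on the Reshaping entry above, here is a minimal reproducer (an illustration, not part of the PR) built from the test added later in this diff. On builds without this change the concat raised a ValueError; on a build that includes it, the frames align on the index [0, 0, 1, 1, 3, 4] shown in the test's expected output.

import pandas as pd

# Both indexes contain duplicates: 1 appears twice in df1, 0 twice in df2 (GH#36263).
df1 = pd.DataFrame({"a": [1, 2, 3, 4]}, index=[0, 1, 1, 4])
df2 = pd.DataFrame({"b": [6, 7, 8, 9]}, index=[0, 0, 1, 3])

result = pd.concat([df1, df2], axis=1)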
20 changes: 20 additions & 0 deletions pandas/core/algorithms.py
@@ -2149,3 +2149,23 @@ def _sort_tuples(values: np.ndarray[tuple]):
    arrays, _ = to_arrays(values, None)
    indexer = lexsort_indexer(arrays, orders=True)
    return values[indexer]


def make_duplicates_of_left_unique_in_right(
Member: is this related to or useful for the index.union-with-duplicates stuff?

Member Author: If you pass in the union as left and right, you would get the distinct result. Have to take a look at whether we can use this.

    left: np.ndarray, right: np.ndarray
) -> np.ndarray:
    """
    Drop the repeated occurrences in right of values that are duplicated
    in left, so that those values become unique in right.
Member: The code itself looks good, but this sentence isn't clear to a reader without context.

Member Author: Improved it?


    Parameters
    ----------
    left : ndarray
    right : ndarray
Member: dtypes unrestricted?

Member Author: Could not think of a reason why they should be restricted.


    Returns
    -------
    np.ndarray
        right, with the values that are duplicated in left occurring only once.
    """
    left_duplicates = unique(left[duplicated(left)])
    return right[~(duplicated(right) & np.isin(right, left_duplicates))]
Member: any reason to prefer np.isin vs the algos.isin?

Member Author: Not that I remember, changed it.
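As an aside for readers tracing the mask logic (not part of the PR): the sketch below mirrors the helper with public NumPy/pandas calls only, using pd.Series.duplicated in place of the internal duplicated and np.isin as in the version shown here; the inputs are the ones used in test_algos.py further down.

import numpy as np
import pandas as pd

left = np.array([0, 1, 1, 4])
right = np.array([0, 0, 1, 1, 4])

# Values occurring more than once in left (here: just 1).
left_duplicates = np.unique(left[pd.Series(left).duplicated().to_numpy()])

# Keep every first occurrence in right; drop later occurrences only for
# values that are duplicated in left.
dup_in_right = pd.Series(right).duplicated().to_numpy()  # [F, T, F, T, F]
is_left_dup = np.isin(right, left_duplicates)            # [F, F, T, T, F]
print(right[~(dup_in_right & is_left_dup)])              # [0 0 1 4]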

8 changes: 8 additions & 0 deletions pandas/core/reshape/concat.py
@@ -13,6 +13,7 @@
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
from pandas.core.dtypes.missing import isna

import pandas.core.algorithms as algos
from pandas.core.arrays.categorical import (
    factorize_from_iterable,
    factorize_from_iterables,
@@ -501,6 +502,13 @@ def get_result(self):
    # 1-ax to convert BlockManager axis to DataFrame axis
    obj_labels = obj.axes[1 - ax]
    if not new_labels.equals(obj_labels):
        # We have to remove the duplicates from obj_labels
        # in new_labels to make them unique; otherwise we would
        # duplicate our duplicates again
        if not obj_labels.is_unique:
            new_labels = algos.make_duplicates_of_left_unique_in_right(
                np.asarray(obj_labels), np.asarray(new_labels)
            )
        indexers[ax] = obj_labels.reindex(new_labels)[1]

mgrs_indexers.append((obj._mgr, indexers))
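To make the comment above concrete, here is an illustration (not part of the PR) with values taken from the GH#36263 test below; new_labels is assumed to already be the combined result axis. obj_labels = [0, 1, 1, 4] contains 1 twice, and so does the combined axis [0, 0, 1, 1, 3, 4]; without the dedup, each 1 in new_labels would match both 1 positions in obj_labels during the reindex, duplicating the duplicates again. On a build that includes this PR:

import numpy as np
from pandas.core.algorithms import make_duplicates_of_left_unique_in_right

obj_labels = np.array([0, 1, 1, 4])        # df1's non-unique index
new_labels = np.array([0, 0, 1, 1, 3, 4])  # combined result axis

# The second 1 is dropped from new_labels because 1 is already duplicated
# in obj_labels; the subsequent reindex then restores both rows for label 1.
print(make_duplicates_of_left_unique_in_right(obj_labels, new_labels))
# [0 0 1 3 4]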
11 changes: 11 additions & 0 deletions pandas/tests/reshape/concat/test_dataframe.py
@@ -167,3 +167,14 @@ def test_concat_dataframe_keys_bug(self, sort):
        # it works
        result = concat([t1, t2], axis=1, keys=["t1", "t2"], sort=sort)
        assert list(result.columns) == [("t1", "value"), ("t2", "value")]

    def test_concat_duplicate_indexes(self):
        # GH#36263 ValueError with non-unique indexes
        df1 = DataFrame([1, 2, 3, 4], index=[0, 1, 1, 4], columns=["a"])
        df2 = DataFrame([6, 7, 8, 9], index=[0, 0, 1, 3], columns=["b"])
        result = concat([df1, df2], axis=1)
        expected = DataFrame(
            {"a": [1, 1, 2, 3, np.nan, 4], "b": [6, 7, 8, 8, 9, np.nan]},
            index=Index([0, 0, 1, 1, 3, 4]),
Member: just to make sure I understand this, for any df1 and df2, we want result.index to always satisfy:

vc = result.index.value_counts()
vc1 = df1.index.value_counts()
vc2 = df2.index.value_counts()

vc1b = vc1.reindex(vc.index, fill_value=0)
vc2b = vc2.reindex(vc.index, fill_value=0)

We expect vc to be the pointwise maximum of vc1b and vc2b?

Member Author: Yes, exactly. That's perfectly on point.

        )
        tm.assert_frame_equal(result, expected)
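The invariant the reviewer describes above can be checked directly with public API; a small sketch (not part of the PR) using the frames and the expected index from this test:

import numpy as np
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2, 3, 4]}, index=[0, 1, 1, 4])
df2 = pd.DataFrame({"b": [6, 7, 8, 9]}, index=[0, 0, 1, 3])
result_index = pd.Index([0, 0, 1, 1, 3, 4])  # expected index from the test above

vc = result_index.value_counts()
vc1b = df1.index.value_counts().reindex(vc.index, fill_value=0)
vc2b = df2.index.value_counts().reindex(vc.index, fill_value=0)

# Each label occurs max(count in df1, count in df2) times in the result.
assert (vc == np.maximum(vc1b, vc2b)).all()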
12 changes: 12 additions & 0 deletions pandas/tests/test_algos.py
@@ -2358,3 +2358,15 @@ def test_diff_ea_axis(self):
        msg = "cannot diff DatetimeArray on axis=1"
        with pytest.raises(ValueError, match=msg):
            algos.diff(dta, 1, axis=1)


@pytest.mark.parametrize(
    "left_values", [[0, 1, 1, 4], [0, 1, 1, 4, 4], [0, 1, 1, 1, 4]]
)
def test_make_duplicates_of_left_unique_in_right(left_values):
    # GH#36263
    left = np.array(left_values)
    right = np.array([0, 0, 1, 1, 4])
    result = algos.make_duplicates_of_left_unique_in_right(left, right)
    expected = np.array([0, 0, 1, 4])
    tm.assert_numpy_array_equal(result, expected)