BUG: Fix getitem dtype preservation with multiindexes #51895

Merged: 14 commits, Apr 25, 2023
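For context, a minimal reproduction of the dtype loss this PR fixes, adapted from the regression test added below (GH51261); the inline expected-output comment is illustrative:

```python
import pandas as pd

columns = pd.MultiIndex.from_tuples([("A", "B")], names=["lvl1", "lvl2"])
df = pd.DataFrame(["value"], columns=columns).astype("category")

# Selecting the top level used to go through self.values[:, loc], which
# densifies to a NumPy object array and drops the categorical dtype.
print(df["A"].dtypes)  # with this fix: category; before: object
```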
Changes from 3 commits
pandas/core/frame.py (8 changes: 3 additions & 5 deletions)

```diff
@@ -3831,11 +3831,8 @@ def _getitem_multilevel(self, key):
                 result = self.reindex(columns=new_columns)
                 result.columns = result_columns
             else:
-                new_values = self.values[:, loc]
```
**Member:** I guess this is faster than `iloc`? I did not profile, but that's the only reason I can think of that it is done this way. To preserve the performance, I guess we could check something like `_can_fast_transpose` here as well?

**Member:** But in the case where this matters, I expect that `loc` is a slice? And then I would expect `iloc` to be fast as well (it has separate paths for single blocks, AFAIR).
But yes, something to verify: indexing the numpy array will always be faster, since you don't have the overhead of our indexing machinery, but if we end up doing the same work either way, the difference should be acceptable.
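To make the "`loc` is a slice" point concrete, a small illustration (the setup is mine, not from the PR thread):

```python
import pandas as pd

columns = pd.MultiIndex.from_product([range(2), range(3)])
df = pd.DataFrame([[0] * 6], columns=columns)

# A top-level key covering contiguous columns resolves to a slice,
# which lets the selection stay on a single-block fast path.
print(df.columns.get_loc(0))  # slice(0, 3, None)
print(df.columns.get_loc(1))  # slice(3, 6, None)
```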

**Member:** (And I think a bit of extra overhead is also acceptable to avoid needing the custom `add_references` code here, as in #51944.)

**Member:** Sounds good, as long as it is not too much overhead. We should at least check; that was my reasoning for adding the reference code instead of switching to `iloc`.

**Member:** Small test:

```python
import numpy as np
import pandas as pd

columns = pd.MultiIndex.from_product([range(10), range(20)])
df = pd.DataFrame(np.random.randn(10000, len(columns)), columns=columns).copy()
slice = df.columns.get_loc(0)
# gives slice(0, 20, None)
```

```
In [47]: %timeit df.iloc[:, slice]
96.2 µs ± 4.58 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

In [48]: %timeit pd.DataFrame(df.values[:, slice], index=df.index)
33.9 µs ± 1.24 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
```

So `iloc` is a bit slower, having a bit more overhead (mostly in creating the resulting MultiIndex columns, which we then override ..). But this is mostly fixed overhead regardless of the size of the data, and we are speaking about microseconds.
The actual subsetting still happens with a slice. And just to be sure, comparing to a non-slice case (where we do an actual "take") selecting the same subset:

```
In [49]: idx = np.arange(20)

In [50]: %timeit df.iloc[:, idx]
576 µs ± 19.6 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
```

```diff
-                result = self._constructor(
-                    new_values, index=self.index, columns=result_columns
-                )
-                result = result.__finalize__(self)
+                result = self.iloc[:, loc]
+                result.columns = result_columns
```
**Member:** If we do away with going through `self.values` (which is a good catch! that probably stems from the time we only had consolidated numpy dtypes, so whenever you had a single dtype we assumed a single numpy array), we might as well combine the `if self._is_mixed_type: .. else: ..` paths? I am not sure there is any benefit to using `iloc` over `reindex`.

**Contributor (author):** Having a look at this locally, it seems like these separate paths are now redundant and the reindex block can be used in both.
(There was no deliberate choice in my original change to use `iloc` instead of `reindex`; I am just less familiar with `reindex`.) I'm also, unfortunately, ignorant of how they compare performance-wise.
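A standalone sketch of the unified `reindex` path being discussed, written as a free function for illustration (the function name is mine, not pandas code; it assumes `key` is a top-level label so `get_loc` returns a slice or array):

```python
import pandas as pd
from pandas.core.indexes.base import maybe_droplevels

def getitem_multilevel_sketch(df: pd.DataFrame, key):
    # Resolve the partial key against the MultiIndex columns.
    loc = df.columns.get_loc(key)
    new_columns = df.columns[loc]
    # Drop the now-redundant outer level(s) for the result's columns.
    result_columns = maybe_droplevels(new_columns, key)
    # reindex keeps each column's dtype, unlike self.values[:, loc].
    result = df.reindex(columns=new_columns)
    result.columns = result_columns
    return result
```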

**Member:** They should be equivalent, I think (especially since we afterwards still set the resulting column names); generally they should end up using the same code under the hood. Since the existing code was using `reindex`, I think it's fine to continue using that.
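A quick way to sanity-check that equivalence (my own snippet, not from the PR; `loc` mirrors what `columns.get_loc` returns for a top-level key):

```python
import numpy as np
import pandas as pd

columns = pd.MultiIndex.from_product([range(3), range(4)])
df = pd.DataFrame(np.arange(12).reshape(1, 12), columns=columns)
loc = df.columns.get_loc(0)  # slice(0, 4, None)

# Both selections preserve dtypes and produce identical frames.
pd.testing.assert_frame_equal(
    df.iloc[:, loc], df.reindex(columns=df.columns[loc])
)
```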


```diff
             # If there is only one column being returned, and its name is
             # either an empty string, or a tuple with an empty string as its
@@ -3851,6 +3848,7 @@ def _getitem_multilevel(self, key):
                     top = top[0]
                 if top == "":
                     result = result[""]
+                    # result = result.squeeze("columns")  # TODO iloc?
                     if isinstance(result, Series):
                         result = self._constructor_sliced(
                             result, index=self.index, name=key
```
pandas/tests/indexing/multiindex/test_multiindex.py (19 additions & 0 deletions)

```diff
@@ -206,3 +206,22 @@ def test_multiindex_with_na_missing_key(self):
         )
         with pytest.raises(KeyError, match="missing_key"):
             df[[("missing_key",)]]
+
+    def test_multiindex_dtype_preservation(self):
+        # GH51261
+        df = DataFrame(
+            ["value"],
+            columns=MultiIndex.from_tuples([("A", "B")], names=["lvl1", "lvl2"]),
+        ).astype("category")
+        assert (df["A"].dtypes == "category").all()
```
**Member:** Could you use `pandas.core.dtypes.common.is_categorical_dtype(df["A"])` here?
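As a sketch of how the suggested helper could be applied: `df["A"]` here selects a one-column sub-frame, so the check would go column-wise (assumes the test's `df`; the helper is also exposed publicly under `pandas.api.types`):

```python
from pandas.api.types import is_categorical_dtype

# df["A"] is a DataFrame with a single column; check each column's dtype.
assert all(is_categorical_dtype(dtype) for dtype in df["A"].dtypes)
```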

**Contributor (author):** Sure.


```diff
+
+        # geopandas 1763 analogue
+        df = DataFrame(
+            [[1, 0], [0, 1]],
+            columns=[
+                ["foo", "foo"],
+                ["location", "location"],
+                ["x", "y"],
+            ],
+        ).assign(bools=Series([True, False], dtype="boolean"))
+        assert df["bools"].dtype == "boolean"
```
**Member:** Could you use `pandas.core.dtypes.common.is_bool_dtype(df["bools"])` here?

**Member:** For this case we actually want to be more explicit and check the extension dtype (`is_bool_dtype` would also pass with the numpy bool dtype). A possible alternative is `isinstance(.., BooleanDtype)`.
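A sketch contrasting the two checks (self-contained; the series stands in for the test's `"bools"` column):

```python
import pandas as pd
from pandas.api.types import is_bool_dtype

s = pd.Series([True, False], dtype="boolean")

# is_bool_dtype passes for the nullable extension dtype AND plain numpy bool:
assert is_bool_dtype(s)
assert is_bool_dtype(s.astype(bool))

# isinstance against BooleanDtype pins down the extension dtype specifically:
assert isinstance(s.dtype, pd.BooleanDtype)
assert not isinstance(s.astype(bool).dtype, pd.BooleanDtype)
```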