BUG: transform with nunique should have dtype int64 #35152


Merged (3 commits) on Jul 10, 2020
1 change: 1 addition & 0 deletions doc/source/whatsnew/v1.1.0.rst
@@ -1080,6 +1080,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrame.groupby` lost index, when one of the ``agg`` keys referenced an empty list (:issue:`32580`)
- Bug in :meth:`Rolling.apply` where ``center=True`` was ignored when ``engine='numba'`` was specified (:issue:`34784`)
- Bug in :meth:`DataFrame.ewm.cov` was throwing ``AssertionError`` for :class:`MultiIndex` inputs (:issue:`34440`)
- Bug in :meth:`core.groupby.DataFrameGroupBy.transform` when ``func='nunique'`` and columns are of type ``datetime64``, the result would also be of type ``datetime64`` instead of ``int64`` (:issue:`35109`)

Reshaping
^^^^^^^^^
32 changes: 15 additions & 17 deletions pandas/core/groupby/generic.py
@@ -485,8 +485,13 @@ def transform(self, func, *args, engine="cython", engine_kwargs=None, **kwargs):
# If func is a reduction, we need to broadcast the
# result to the whole group. Compute func result
# and deal with possible broadcasting below.
+ # Temporarily set observed for dealing with
+ # categoricals so we don't have to convert dtypes.
+ observed = self.observed
Contributor:
woa? can we not simply handle this at a lower level?

Member (author):
  • Computing with observed=True is much more performant than observed=False when the category combinations are not saturated, and slightly less performant when they are. So it seems best to me to run with observed=True since transform will disregard any unobserved combinations.
  • Computing with observed=True also means we don't have to do a float-to-int type conversion afterwards, since there are no np.nan placeholders for missing combinations.
  • The next step is to call getattr(self, func)(*args, **kwargs). So I don't see how to handle this at a lower level.
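The dtype point in the second bullet can be reproduced directly (a small illustration, assuming a recent pandas; the exact output shape depends on the version):

```python
import pandas as pd

# "b" is a declared category with no rows, i.e. an unobserved combination.
df = pd.DataFrame(
    {
        "key": pd.Categorical(["a", "a"], categories=["a", "b"]),
        "val": [1, 2],
    }
)

# observed=False emits a row for the empty "b" group; the NaN placeholder
# forces the int64 column up to float64.
print(df.groupby("key", observed=False)["val"].max())

# observed=True only computes over groups that actually occur, so the
# result stays int64 and no cast back is needed.
print(df.groupby("key", observed=True)["val"].max())
```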

That said, the code as-is is definitely not good - self.observed should be modified using a context manager in case the reduction fails. Something like:

with temp_setattr(self, observed=True) as obj:
    getattr(obj, func)(*args, **kwargs)

where temp_setattr is a context manager like the one found here. Not sure if this is a more palatable change.
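A minimal version of such a temp_setattr helper (a sketch, not the actual implementation the comment links to) could look like:

```python
from contextlib import contextmanager


@contextmanager
def temp_setattr(obj, **kwargs):
    """Temporarily set attributes on obj, restoring the old values
    on exit, even if the wrapped block raises."""
    old_values = {attr: getattr(obj, attr) for attr in kwargs}
    for attr, value in kwargs.items():
        setattr(obj, attr, value)
    try:
        yield obj
    finally:
        for attr, value in old_values.items():
            setattr(obj, attr, value)
```

With this, the reduction step becomes `with temp_setattr(self, observed=True) as obj: result = getattr(obj, func)(*args, **kwargs)`, and `self.observed` is restored even when the reduction raises.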

Contributor:
what i mean is, why don't you just pass observed=True directly in the function?

Member (author):
I think you're suggesting doing something like getattr(self, func)(*args, **kwargs, observed=True). Assuming args and kwargs are empty, that is akin to df.groupby(keys).sum(observed=True); but sum doesn't take observed as an argument, only groupby does.
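The point is easy to check: `observed` is a parameter of `DataFrame.groupby` itself, not of the reduction methods on the resulting groupby object (a small illustration; the exact exception type may differ across pandas versions):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "key": pd.Categorical(["a", "a"], categories=["a", "b"]),
        "val": [1, 2],
    }
)

# observed belongs to groupby itself...
print(df.groupby("key", observed=True)["val"].sum())

# ...whereas passing it to the reduction is rejected.
try:
    df.groupby("key")["val"].sum(observed=True)
except Exception as exc:  # TypeError, depending on the version
    print(type(exc).__name__)
```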

Contributor:
ok, can you make a context manager for this, like _group_selection_context conceptually.

+ self.observed = True
result = getattr(self, func)(*args, **kwargs)
- return self._transform_fast(result, func)
+ self.observed = observed
+ return self._transform_fast(result)

def _transform_general(
self, func, *args, engine="cython", engine_kwargs=None, **kwargs
@@ -539,17 +544,14 @@ def _transform_general(
result.index = self._selected_obj.index
return result

- def _transform_fast(self, result, func_nm: str) -> Series:
+ def _transform_fast(self, result) -> Series:
"""
fast version of transform, only applicable to
builtin/cythonizable functions
"""
ids, _, ngroup = self.grouper.group_info
result = result.reindex(self.grouper.result_index, copy=False)
- cast = self._transform_should_cast(func_nm)
out = algorithms.take_1d(result._values, ids)
- if cast:
- out = maybe_cast_result(out, self.obj, how=func_nm)
return self.obj._constructor(out, index=self.obj.index, name=self.obj.name)

def filter(self, func, dropna=True, *args, **kwargs):
@@ -1467,25 +1469,26 @@ def transform(self, func, *args, engine="cython", engine_kwargs=None, **kwargs):
# If func is a reduction, we need to broadcast the
# result to the whole group. Compute func result
# and deal with possible broadcasting below.
+ # Temporarily set observed for dealing with
+ # categoricals so we don't have to convert dtypes.
+ observed = self.observed
+ self.observed = True
result = getattr(self, func)(*args, **kwargs)
+ self.observed = observed

if isinstance(result, DataFrame) and result.columns.equals(
self._obj_with_exclusions.columns
):
- return self._transform_fast(result, func)
+ return self._transform_fast(result)

return self._transform_general(
func, engine=engine, engine_kwargs=engine_kwargs, *args, **kwargs
)

- def _transform_fast(self, result: DataFrame, func_nm: str) -> DataFrame:
+ def _transform_fast(self, result: DataFrame) -> DataFrame:
"""
Fast transform path for aggregations
"""
- # if there were groups with no observations (Categorical only?)
- # try casting data to original dtype
- cast = self._transform_should_cast(func_nm)

obj = self._obj_with_exclusions

# for each col, reshape to size of original frame
@@ -1494,12 +1497,7 @@ def _transform_fast(self, result: DataFrame, func_nm: str) -> DataFrame:
result = result.reindex(self.grouper.result_index, copy=False)
output = []
for i, _ in enumerate(result.columns):
- res = algorithms.take_1d(result.iloc[:, i].values, ids)
- # TODO: we have no test cases that get here with EA dtypes;
- # maybe_cast_result may not be needed if EAs never get here
- if cast:
- res = maybe_cast_result(res, obj.iloc[:, i], how=func_nm)
- output.append(res)
+ output.append(algorithms.take_1d(result.iloc[:, i].values, ids))

return self.obj._constructor._from_arrays(
output, columns=result.columns, index=obj.index
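The reindex-then-take pattern that both `_transform_fast` methods rely on can be illustrated with plain NumPy (hypothetical values; `take_1d` behaves like fancy indexing here):

```python
import numpy as np

# One reduced value per group, aligned with grouper.result_index
# (e.g. nunique computed per group).
group_values = np.array([2, 2, 1])

# ids[i] gives the group number of original row i, as returned by
# grouper.group_info.
ids = np.array([0, 0, 1, 2, 1])

# Broadcasting the per-group results back to the original rows is just
# indexing: row i receives the value computed for its group.
out = group_values[ids]
print(out)  # [2 2 2 1 2]
```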
7 changes: 7 additions & 0 deletions pandas/tests/groupby/test_nunique.py
@@ -167,3 +167,10 @@ def test_nunique_preserves_column_level_names():
result = test.groupby([0, 0, 0]).nunique()
expected = pd.DataFrame([2], columns=test.columns)
tm.assert_frame_equal(result, expected)


def test_nunique_transform_with_datetime():
df = pd.DataFrame(date_range("2008-12-31", "2009-01-02"), columns=["date"])
result = df.groupby([0, 0, 1])["date"].transform("nunique")
expected = pd.Series([2, 2, 1], name="date")
tm.assert_series_equal(result, expected)