
PERF/BUG: use masked algo in groupby cummin and cummax #40651


Merged (44 commits, Apr 21, 2021)

Changes from 38 commits:
7cd4dc3  wip (mzeitlin11, Mar 26, 2021)
91984dc  wip (mzeitlin11, Mar 26, 2021)
b371cc5  wip (mzeitlin11, Mar 26, 2021)
69cce96  wip (mzeitlin11, Mar 26, 2021)
dd7f324  wip (mzeitlin11, Mar 26, 2021)
64680d4  wip (mzeitlin11, Mar 26, 2021)
be16f65  wip (mzeitlin11, Mar 26, 2021)
9442846  wip (mzeitlin11, Mar 26, 2021)
f089175  wip (mzeitlin11, Mar 26, 2021)
31409f8  wip (mzeitlin11, Mar 26, 2021)
0c05f74  wip (mzeitlin11, Mar 26, 2021)
18dcc94  Merge remote-tracking branch 'origin/master' into perf/masked_cummin/max (mzeitlin11, Mar 26, 2021)
5c60a1f  wip (mzeitlin11, Mar 26, 2021)
f0c27ce  PERF: use masked algo in groupby cummin and cummax (mzeitlin11, Mar 27, 2021)
2fa80ad  Avoid mask copy (mzeitlin11, Mar 27, 2021)
280c7e5  Update whatsnew (mzeitlin11, Mar 27, 2021)
dca28cf  Merge remote-tracking branch 'origin/master' into perf/masked_cummin/max (mzeitlin11, Apr 1, 2021)
7e2fbe0  Merge fixup (mzeitlin11, Apr 1, 2021)
0ebb97a  Follow transpose (mzeitlin11, Apr 1, 2021)
0009dfd  Compute mask usage inside algo (mzeitlin11, Apr 1, 2021)
6663832  try optional (mzeitlin11, Apr 1, 2021)
8247f82  WIP (mzeitlin11, Apr 1, 2021)
71e1c4f  Use more contiguity (mzeitlin11, Apr 1, 2021)
c6cf9ee  Shrink benchmark (mzeitlin11, Apr 1, 2021)
02768ec  Merge remote-tracking branch 'origin/master' into perf/masked_cummin/max (mzeitlin11, Apr 1, 2021)
836175b  Merge remote-tracking branch 'origin/master' into perf/masked_cummin/max (mzeitlin11, Apr 2, 2021)
293dc6e  Revert unrelated (mzeitlin11, Apr 2, 2021)
478c6c9  Merge remote-tracking branch 'origin/master' into perf/masked_cummin/max (mzeitlin11, Apr 6, 2021)
fa45a9a  Merge remote-tracking branch 'origin/master' into perf/masked_cummin/max (mzeitlin11, Apr 8, 2021)
1632b81  Merge remote-tracking branch 'origin/master' into perf/masked_cummin/max (mzeitlin11, Apr 12, 2021)
1bb344e  Remove merge conflict relic (mzeitlin11, Apr 12, 2021)
97d9eea  Update doc/source/whatsnew/v1.3.0.rst (mzeitlin11, Apr 13, 2021)
892a92a  Update doc/source/whatsnew/v1.3.0.rst (mzeitlin11, Apr 13, 2021)
a239a68  Update pandas/core/groupby/ops.py (mzeitlin11, Apr 13, 2021)
f98ca35  Merge remote-tracking branch 'origin' into perf/masked_cummin/max (mzeitlin11, Apr 13, 2021)
e7ed12f  Merge branch 'perf/masked_cummin/max' of github.com:/mzeitlin11/panda… (mzeitlin11, Apr 13, 2021)
a1422ba  Address comments (mzeitlin11, Apr 13, 2021)
482a209  Change random generation style (mzeitlin11, Apr 13, 2021)
4e7404d  Merge remote-tracking branch 'origin' into perf/masked_cummin/max (mzeitlin11, Apr 18, 2021)
251c02a  Use conditional instead of partial (mzeitlin11, Apr 18, 2021)
3de7e5e  Remove ensure_int_or_float (mzeitlin11, Apr 18, 2021)
237f86f  Remove unnecessary condition (mzeitlin11, Apr 18, 2021)
a1b0c04  Merge remote-tracking branch 'origin' into perf/masked_cummin/max (mzeitlin11, Apr 19, 2021)
5e1dac4  Merge remote-tracking branch 'origin' into perf/masked_cummin/max (mzeitlin11, Apr 20, 2021)
28 changes: 28 additions & 0 deletions asv_bench/benchmarks/groupby.py

@@ -493,6 +493,34 @@ def time_frame_agg(self, dtype, method):
        self.df.groupby("key").agg(method)


class CumminMax:
    param_names = ["dtype", "method"]
    params = [
        ["float64", "int64", "Float64", "Int64"],
        ["cummin", "cummax"],
    ]

    def setup(self, dtype, method):
Member: is this costly? worth using setup_cache?

Member (Author): Looks to take ~0.25s. So it might be worth caching, but it appears setup_cache can't be parameterized, so we'd have to ugly up the benchmark a bit.

Contributor: can we just make N // 10?

Member (Author): yep, have shrunk the benchmark

        N = 500_000
        vals = np.random.randint(-10, 10, (N, 5))
        null_vals = vals.astype(float, copy=True)
        null_vals[::2, :] = np.nan
        null_vals[::3, :] = np.nan
        df = DataFrame(vals, columns=list("abcde"), dtype=dtype)
        null_df = DataFrame(null_vals, columns=list("abcde"), dtype=dtype)
        keys = np.random.randint(0, 100, size=N)
        df["key"] = keys
        null_df["key"] = keys
        self.df = df
        self.null_df = null_df

    def time_frame_transform(self, dtype, method):
        self.df.groupby("key").transform(method)

    def time_frame_transform_many_nulls(self, dtype, method):
        self.null_df.groupby("key").transform(method)
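
For a quick spot check of what this benchmark exercises without a full asv run, the transform can be timed directly on a nullable-dtype frame. A minimal sketch (illustrative only; the seeded generator and sizes here are not part of the PR):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)  # seeded only for reproducibility
N = 500_000
df = pd.DataFrame(
    rng.integers(-10, 10, (N, 5)),
    columns=list("abcde"),
    dtype="Int64",  # nullable dtype, so the new masked path is exercised
)
df["key"] = rng.integers(0, 100, size=N)
df.groupby("key").transform("cummin")  # dispatches to group_cummin with a mask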


class RankWithTies:
    # GH 21237
    param_names = ["dtype", "tie_method"]

4 changes: 4 additions & 0 deletions doc/source/whatsnew/v1.3.0.rst

@@ -590,6 +590,8 @@ Performance improvements
- Performance improvement in :meth:`core.window.ewm.ExponentialMovingWindow.mean` with ``times`` (:issue:`39784`)
- Performance improvement in :meth:`.GroupBy.apply` when requiring the python fallback implementation (:issue:`40176`)
- Performance improvement for concatenation of data with type :class:`CategoricalDtype` (:issue:`40193`)
- Performance improvement in :meth:`.GroupBy.cummin` and :meth:`.GroupBy.cummax` with nullable data types (:issue:`37493`)
-

.. ---------------------------------------------------------------------------

@@ -787,6 +789,8 @@ Groupby/resample/rolling
- Bug in :meth:`Series.asfreq` and :meth:`DataFrame.asfreq` dropping rows when the index is not sorted (:issue:`39805`)
- Bug in aggregation functions for :class:`DataFrame` not respecting ``numeric_only`` argument when ``level`` keyword was given (:issue:`40660`)
- Bug in :class:`core.window.RollingGroupby` where ``as_index=False`` argument in ``groupby`` was ignored (:issue:`39433`)
- Bug in :meth:`.GroupBy.cummin` and :meth:`.GroupBy.cummax` computing wrong result with nullable data types too large to roundtrip when casting to float (:issue:`37493`)


Reshaping
^^^^^^^^^
61 changes: 52 additions & 9 deletions pandas/_libs/groupby.pyx

@@ -1240,6 +1240,7 @@ def group_min(groupby_t[:, ::1] out,
@cython.wraparound(False)
cdef group_cummin_max(groupby_t[:, ::1] out,
                      ndarray[groupby_t, ndim=2] values,
                      uint8_t[:, ::1] mask,
                      const intp_t[:] labels,
                      int ngroups,
                      bint is_datetimelike,
@@ -1253,6 +1254,9 @@ cdef group_cummin_max(groupby_t[:, ::1] out,
        Array to store cummin/max in.
    values : np.ndarray[groupby_t, ndim=2]
        Values to take cummin/max of.
    mask : np.ndarray[bool] or None
        If not None, True entries indicate missing values;
        otherwise the mask is not used.
    labels : np.ndarray[np.intp]
        Labels to group by.
    ngroups : int
@@ -1270,11 +1274,14 @@
    cdef:
        Py_ssize_t i, j, N, K, size
        groupby_t val, mval
        groupby_t[:, ::1] accum
        intp_t lab
        bint val_is_nan, use_mask

    use_mask = mask is not None

    N, K = (<object>values).shape
    accum = np.empty((ngroups, K), dtype=values.dtype)
    if groupby_t is int64_t:
        accum[:] = -_int64_max if compute_max else _int64_max
    elif groupby_t is uint64_t:
@@ -1289,11 +1296,29 @@
            if lab < 0:
                continue
            for j in range(K):
                val_is_nan = False

                if use_mask:
                    if mask[i, j]:

                        # `out` does not need to be set since it
                        # will be masked anyway
                        val_is_nan = True
                    else:

                        # If using the mask, we can avoid grabbing the
                        # value unless necessary
                        val = values[i, j]

                # Otherwise, `out` must be set accordingly if the
                # value is missing
                else:
                    val = values[i, j]
                    if _treat_as_na(val, is_datetimelike):
                        val_is_nan = True
                        out[i, j] = val

                if not val_is_nan:
Member: does it make sense to implement this as a separate function?

Member (Author): I don't have a strong opinion about this. The question would be the tradeoff between a bit more complexity/branching vs duplication/increased package size (if we end up adding masked support to a lot more of these grouped algos).

Member: any guess what the impact on package size is? Potential duplication might be addressed by e.g. refactoring L1340-1347 or L1302-1308 into helper functions.

Member (Author): Based on a rough estimate, binaries generated from groupby.pyx take up ~5% of total _libs. So based on the figure of _libs taking up 17MB from #30741, the cost of full duplication would be around 0.8-0.9 MB. But as you mentioned above, some duplication could be avoided, so 1 MB should be an upper bound.

                    mval = accum[lab, j]
                    if compute_max:
                        if val > mval:
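
The kernel keeps one running extreme per (group, column) pair in `accum` and writes it back per row; the mask lets it skip missing entries entirely instead of routing them through a NaN/iNaT sentinel. A rough pure-Python equivalent of the masked cummin case, as a sketch for intuition only (float values, no datetimelike handling; names are illustrative, not from the PR):

import numpy as np

def masked_group_cummin(values, mask, labels, ngroups):
    # values: (N, K) float ndarray; mask: (N, K) bool, True = missing
    # labels: (N,) int group ids, -1 meaning "not in any group"
    N, K = values.shape
    accum = np.full((ngroups, K), np.inf)  # running minimum per group/column
    out = np.empty_like(values)
    for i in range(N):
        lab = labels[i]
        if lab < 0:
            continue
        for j in range(K):
            if mask[i, j]:
                continue  # out[i, j] is never read: the result mask hides it
            accum[lab, j] = min(accum[lab, j], values[i, j])
            out[i, j] = accum[lab, j]
    return out, mask.copy()  # result mask mirrors the input mask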
@@ -1310,9 +1335,18 @@ def group_cummin(groupby_t[:, ::1] out,
                 ndarray[groupby_t, ndim=2] values,
                 const intp_t[:] labels,
                 int ngroups,
                 bint is_datetimelike,
                 uint8_t[:, ::1] mask=None) -> None:
    """See group_cummin_max.__doc__"""
    group_cummin_max(
        out,
        values,
        mask,
        labels,
        ngroups,
        is_datetimelike,
        compute_max=False
    )


@cython.boundscheck(False)
@@ -1321,6 +1355,15 @@ def group_cummax(groupby_t[:, ::1] out,
                 ndarray[groupby_t, ndim=2] values,
                 const intp_t[:] labels,
                 int ngroups,
                 bint is_datetimelike,
                 uint8_t[:, ::1] mask=None) -> None:
    """See group_cummin_max.__doc__"""
    group_cummin_max(
        out,
        values,
        mask,
        labels,
        ngroups,
        is_datetimelike,
        compute_max=True
    )
80 changes: 71 additions & 9 deletions pandas/core/groupby/ops.py

@@ -9,6 +9,7 @@

import collections
import functools
from functools import partial
from typing import (
    Generic,
    Hashable,
@@ -69,6 +70,10 @@

from pandas.core import algorithms
from pandas.core.arrays import ExtensionArray
from pandas.core.arrays.masked import (
    BaseMaskedArray,
    BaseMaskedDtype,
)
import pandas.core.common as com
from pandas.core.frame import DataFrame
from pandas.core.generic import NDFrame
@@ -124,6 +129,8 @@ def __init__(self, kind: str, how: str):
        },
    }

    _MASKED_CYTHON_FUNCTIONS = {"cummin", "cummax"}
Contributor: why is a separate variable necessary here? that is just confusing

Member: I think it makes sense to store this information in the WrappedCythonOp class, instead of putting this in _cython_operation (which would be your alternative?)

Contributor: yes, this would be fine; it's not clear that this is happening in this PR

    _cython_arity = {"ohlc": 4}  # OHLC

    # Note: we make this a classmethod and pass kind+how so that caching
@@ -259,6 +266,9 @@ def get_out_dtype(self, dtype: np.dtype) -> np.dtype:
            out_dtype = "object"
        return np.dtype(out_dtype)

    def uses_mask(self) -> bool:
Contributor: this is very far from actual use, would remove

Member (Author): I think this falls under the umbrella of your comment here: #40651 (comment). I liked folding this functionality into WrappedCythonOp since it only depends on the specific op. _MASKED_CYTHON_FUNCTIONS can then just act like a constant holding which algos have masked support.

Member: I agree that having this in the WrappedCythonOp class makes sense. This class also holds other information on the cython algos, like _CYTHON_FUNCTIONS with a mapping of op names to cython function names.
        return self.how in self._MASKED_CYTHON_FUNCTIONS
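
A small illustration of how this hook is consulted at dispatch time (constructor arguments as in `__init__(self, kind, how)` above; a hypothetical snippet, not code from the PR):

from pandas.core.groupby.ops import WrappedCythonOp

cy_op = WrappedCythonOp(kind="transform", how="cummin")
cy_op.uses_mask()  # True: cummin/cummax accept a mask directly
WrappedCythonOp(kind="aggregate", how="min").uses_mask()  # False, as of this PR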


class BaseGrouper:
    """
@@ -608,9 +618,49 @@ def _ea_wrap_cython_operation(
                f"function is not implemented for this dtype: {values.dtype}"
            )

    @final
    def _masked_ea_wrap_cython_operation(
        self,
        kind: str,
        values: BaseMaskedArray,
        how: str,
        axis: int,
        min_count: int = -1,
        **kwargs,
    ) -> BaseMaskedArray:
        """
        Equivalent of `_ea_wrap_cython_operation`, but optimized for masked EA's
        and cython algorithms which accept a mask.
        """
        orig_values = values

        # Copy to ensure input and result masks don't end up shared
        mask = values._mask.copy()
        arr = values._data

        if is_integer_dtype(arr.dtype) or is_bool_dtype(arr.dtype):
            # IntegerArray or BooleanArray
            arr = ensure_int_or_float(arr)
Member: FWIW I'm planning to kill off this function; for EAs this is always just arr.to_numpy(dtype="float64", na_value=np.nan)

Member (Author): Thanks for bringing that up - I realized this whole condition can be simplified, since we actually have an ndarray at this point.

        res_values = self._cython_operation(
            kind, arr, how, axis, min_count, mask=mask, **kwargs
        )
        dtype = maybe_cast_result_dtype(orig_values.dtype, how)
        assert isinstance(dtype, BaseMaskedDtype)
        cls = dtype.construct_array_type()

        return cls(res_values.astype(dtype.type, copy=False), mask)
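
This wrapper is also where the BUG half of the title is fixed: the old path converted nullable integers to float64 (NaN for missing), which silently loses precision above 2**53. A small repro of the class of case covered by GH 37493 (illustrative; exact output formatting omitted):

import pandas as pd

df = pd.DataFrame(
    {
        "key": [0, 0],
        "val": pd.array([2**53 + 1, 2**53], dtype="Int64"),
    }
)
# 2**53 + 1 is not representable in float64, so the old float round-trip
# could yield 2**53 for both rows; the masked path stays in int64.
print(df.groupby("key")["val"].cummax())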

    @final
    def _cython_operation(
        self,
        kind: str,
        values,
        how: str,
        axis: int,
        min_count: int = -1,
        mask: np.ndarray | None = None,
        **kwargs,
    ) -> ArrayLike:
        """
        Returns the values of a cython operation.

@@ -634,10 +684,16 @@ def _cython_operation(
        # if not raise NotImplementedError
        cy_op.disallow_invalid_ops(dtype, is_numeric)

        func_uses_mask = cy_op.uses_mask()
        if is_extension_array_dtype(dtype):
            if isinstance(values, BaseMaskedArray) and func_uses_mask:
                return self._masked_ea_wrap_cython_operation(
                    kind, values, how, axis, min_count, **kwargs
Contributor: i really don't understand all of this code duplication. this is adding huge complexity. pls reduce it.

jorisvandenbossche (Member), Apr 13, 2021: Jeff, did you actually read the previous responses to your similar comment? (https://github.com/pandas-dev/pandas/pull/40651/files#r603319910) Can you then please answer there to the concrete reasons given.

Contributor: yes, and it's a terrible pattern.

Contributor: this duplication of code is ridiculous. We have a VERY large codebase. Having this kind of separate logic is amazingly confusing & is humongous tech debt. This is heavily used code and needs to be carefully modified.

mzeitlin11 (Member, Author), Apr 13, 2021: I understand the concern about adding code complexity - my thinking was that if the goal is for nullable types to become the default in pandas, then direct support makes sense. And in that case, nullable types would need to be special-cased somewhere, and I think the separate function is cleaner than interleaving it in _ea_wrap_cython_operation.

If direct support for nullable dtypes is not desired, we can just close this. If it is, I'll keep trying to think of ways to achieve this without adding more code, but any suggestions there would be welcome!

jorisvandenbossche (Member), Apr 13, 2021: Proper support for nullable dtypes is certainly desired (how to add it exactly can of course be discussed), so thanks a lot @mzeitlin11 for your efforts here.

AFAIK, it's correct that we need some special casing for it somewhere (that's the whole point of this PR: to add special handling for it). Where exactly to put this special casing can of course be discussed, but to me the separate helper method instead of interleaving it in _ea_wrap_cython_operation seems good (I don't think that interleaving it into the existing _ea_wrap_cython_operation would result in fewer added lines of code, and it would be harder to read).

@jreback please try to stay constructive (e.g. answer to our arguments or provide concrete suggestions on where you would put it / how you would do it differently) and please mind your language (there is no need to call the approach taken by a contributor "terrible").

Member:

1. I agree with @jorisvandenbossche on phrasing concerns. Even the best of us slip up here from time to time.

2. "if the goal is for nullable types to become the default in pandas" - This decision has not been made.

3. "I think the separate function is cleaner than interleaving in _ea_wrap_cython_operation." - Agreed.

4. My preferred dispatch logic would look something like:

def _cython_operation(...):
    if is_ea_dtype(...):
        return self._ea_wrap_cython_operation(...)
    [status quo]

def _ea_wrap_cython_operation(...):
    if should_use_mask(...):
        return self._masked_ea_wrap_cython_operation(...)
    [status quo]

As Joris correctly pointed out, that is not viable ATM. I think a lot of this dispatch logic eventually belongs in WrappedCythonOp (which I've been vaguely planning on doing next time there aren't any open PRs touching this code), at which point we can reconsider flattening this.

5. My other preferred dispatch logic would not be in this file at all, but would be implemented as a method on the EA subclass. I'm really uncomfortable with this code depending on MaskedArray implementation details, seeing as how there has been discussion of swapping them out for something arrow-based.

Member (Author): @jbrockmendel if you plan further refactoring of this code, I'm happy to just mothball this PR for now. The real benefit won't come until more groupby algos allow a mask on this path anyway, so it's not worth adding now if it's just going to cause more pain in future refactoring.

I also like the idea of approach 5 instead of this - I could start looking into that if you think it's a promising direction.

Member: "if you plan further refactoring of this code, I'm happy to just mothball this PR for now." - From today's call, I think the plan is to move forward with this first.

"I also like the idea of approach 5 instead of this" - Long-term I think this is the right way to go to get the general case right, so I'd encourage you, if you're interested, to try implementing this on the EA in separate PR(s).
                )
            else:
                return self._ea_wrap_cython_operation(
                    kind, values, how, axis, min_count, **kwargs
                )

        elif values.ndim == 1:
            # expand to 2d, dispatch, then squeeze if appropriate
@@ -648,6 +704,7 @@ def _cython_operation(
                how=how,
                axis=1,
                min_count=min_count,
                mask=mask,
                **kwargs,
            )
            if res.shape[0] == 1:
@@ -666,7 +723,7 @@ def _cython_operation(
        elif is_integer_dtype(dtype):
            # we use iNaT for the missing value on ints
            # so pre-convert to guard this condition
            if mask is None and (values == iNaT).any():
                values = ensure_float64(values)
            else:
                values = ensure_int_or_float(values)
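
The `mask is None` guard works because the unmasked integer path re-uses `iNaT` (the minimum int64) as its missing-value sentinel, so data that happens to contain that value must be pre-cast to float; with an explicit mask the sentinel is never needed. A tiny illustration of the sentinel check (a sketch, not code from the PR):

import numpy as np
from pandas._libs.tslibs import iNaT  # == np.iinfo(np.int64).min

vals = np.array([1, iNaT, 3], dtype=np.int64)
# On the unmasked path this forces a float64 cast so iNaT can act as NaN;
# on the masked path missingness lives in the mask, so no cast is needed.
(vals == iNaT).any()  # True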
@@ -680,8 +737,13 @@
            assert axis == 1
            values = values.T

        if mask is not None:
            mask = mask.reshape(values.shape, order="C")

        out_shape = cy_op.get_output_shape(ngroups, values)
        func, values = cy_op.get_cython_func_and_vals(values, is_numeric)
        if func_uses_mask:
            func = partial(func, mask=mask)
Member: is there a viable alternative to using a partial here? When it's just one it's not that bad, but they have a tendency to pile up and make the code harder to reason about.

Member (Author): Makes sense. Will look into it.

Member (Author): Called func with an if/else structure instead of using partial, which is definitely easier to follow, if a little longer (but aggregate does something similar).
        out_dtype = cy_op.get_out_dtype(values.dtype)

        result = maybe_fill(np.empty(out_shape, dtype=out_dtype))
@@ -692,11 +754,11 @@ def _cython_operation(
            # TODO: min_count
            func(result, values, comp_ids, ngroups, is_datetimelike, **kwargs)

        if mask is None and is_integer_dtype(result.dtype) and not is_datetimelike:
            result_mask = result == iNaT
            if result_mask.any():
                result = result.astype("float64")
                result[result_mask] = np.nan

        if kind == "aggregate" and self._filter_empty_groups and not counts.all():
            assert result.ndim != 2