REGR: Series.nlargest with masked arrays #42838


Merged: 8 commits, Aug 10, 2021
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v1.3.2.rst
@@ -22,7 +22,7 @@ Fixed regressions
- Regression in :meth:`DataFrame.drop` does nothing if :class:`MultiIndex` has duplicates and indexer is a tuple or list of tuples (:issue:`42771`)
- Fixed regression where :meth:`pandas.read_csv` raised a ``ValueError`` when parameters ``names`` and ``prefix`` were both set to None (:issue:`42387`)
- Fixed regression in comparisons between :class:`Timestamp` object and ``datetime64`` objects outside the implementation bounds for nanosecond ``datetime64`` (:issue:`42794`)
-
- Regression in :meth:`Series.nlargest` and :meth:`Series.nsmallest` with nullable integer or float dtype (:issue:`41816`)
@simonjayhawkins (Member), Aug 11, 2021:

Will need to change to #42816; will do that after the backport is merged.

Member:

Done in #42983.


.. ---------------------------------------------------------------------------

16 changes: 16 additions & 0 deletions pandas/core/algorithms.py
@@ -11,6 +11,7 @@
    Literal,
    Union,
    cast,
    final,
)
from warnings import warn

@@ -1211,12 +1212,15 @@ def __init__(self, obj, n: int, keep: str):
    def compute(self, method: str) -> DataFrame | Series:
        raise NotImplementedError

    @final
    def nlargest(self):
        return self.compute("nlargest")

    @final
    def nsmallest(self):
        return self.compute("nsmallest")

    @final
    @staticmethod
    def is_valid_dtype_n_method(dtype: DtypeObj) -> bool:
        """
@@ -1255,6 +1259,18 @@ def compute(self, method: str) -> Series:

        dropped = self.obj.dropna()

        if is_extension_array_dtype(dropped.dtype):
Contributor:

Why is this preferable to handling it on L1281 with the other dtypes?

Member Author:

Because `_ensure_data` does the wrong thing with MaskedArrays: `np.asarray(obj)` defaults to object dtype.

We could kludge `_ensure_data` to work in cases where we don't have any `pd.NA`s, but doing it here lets us handle cases with NAs too.
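
A minimal illustration of that object-dtype fallback (values are arbitrary; behavior as of pandas 1.3, shown here for context only):

```python
import numpy as np
import pandas as pd

# A nullable-integer (masked) array: integer data plus a boolean NA mask.
arr = pd.array([1, 2, pd.NA], dtype="Int64")

# There is no lossless numpy dtype that can hold pd.NA, so converting the
# masked array with np.asarray falls back to object dtype ...
print(np.asarray(arr).dtype)  # object

# ... even when no values are actually missing.
no_na = pd.array([1, 2, 3], dtype="Int64")
print(np.asarray(no_na).dtype)  # object
print(no_na._data.dtype)        # int64 -- the underlying ndarray the fix unwraps
```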

Contributor:

> Because `_ensure_data` does the wrong thing with MaskedArrays: `np.asarray(obj)` defaults to object dtype.

This is very unfortunate, and I don't really like this approach here. I suppose it's OK for a backport, though; this is an experimental type, so I don't consider this regression to be a big deal.

Member Author:

The more I work on it, the more I want to nuke NA from space.

Contributor:

OK, but what is involved in changing this to a non-recursive formulation? E.g. `_ensure_data` should be able to handle NA (if not, we will have other issues).

Member Author:

Three options:

1. Have `_ensure_data` special-case MaskedArrays with no NAs, in which case they can just use the underlying ndarray. This fixes the regression (cases with NAs didn't work before, IIUC), but it's a kludge.
2. Make `algos.kth_smallest` support object dtype (and not choke on `pd.NA`).
3. Have the EA implement its own `nlargest`.

I decided the approach here was less kludgy than those, in part because this function uses `obj.dropna()`, so the MaskedArray case is actually much simpler to implement than an arbitrary EA.
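
As an aside, the reason the `obj.dropna()` step makes the masked case easy can be seen directly; a small sketch, using the private `_data`/`_mask` attributes purely for illustration:

```python
import pandas as pd

ser = pd.Series([3, None, 1, 7], dtype="Int64")
dropped = ser.dropna()

arr = dropped._values     # an IntegerArray (a BaseMaskedArray subclass)
print(arr._mask.any())    # False -- no NAs remain once dropna() has run
print(arr._data)          # [3 1 7] -- a plain int64 ndarray, fine for the numpy path
```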

Contributor:

Why is 1 a kludge?

Member Author:

Because it's special-casing for MaskedArray and special-casing for `not values.isna().any()`.

Member Author:

Worse than that: it isn't `not values.isna().any()` but `not values._mask.any()`.

            # GH#41816 bc we have dropped NAs above, MaskedArrays can use the
            #  numpy logic.
            from pandas.core.arrays import BaseMaskedArray

            arr = dropped._values
            if isinstance(arr, BaseMaskedArray):
                ser = type(dropped)(arr._data, index=dropped.index, name=dropped.name)

                result = type(self)(ser, n=self.n, keep=self.keep).compute(method)
                return result.astype(arr.dtype)

        # slow method
        if n >= len(self.obj):
            ascending = method == "nsmallest"
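For reference, a quick usage check of the behavior this hunk restores for nullable dtypes (example values are arbitrary):

```python
import pandas as pd

ser = pd.Series([1, 7, pd.NA, 3, 5], dtype="Int64")

# NAs are dropped first, the underlying int64 data takes the existing numpy
# path, and the result is cast back to the original nullable dtype.
print(ser.nlargest(2))
# 1    7
# 4    5
# dtype: Int64
```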
16 changes: 16 additions & 0 deletions pandas/tests/series/methods/test_nlargest.py
@@ -211,3 +211,19 @@ def test_nlargest_boolean(self, data, expected):
        result = ser.nlargest(1)
        expected = Series(expected)
        tm.assert_series_equal(result, expected)

    def test_nlargest_nullable(self, any_nullable_numeric_dtype):
        # GH#42816
        dtype = any_nullable_numeric_dtype
        arr = np.random.randn(10).astype(dtype.lower(), copy=False)

        ser = Series(arr.copy(), dtype=dtype)
        ser[1] = pd.NA
        result = ser.nlargest(5)

        expected = (
            Series(np.delete(arr, 1), index=ser.index.delete(1))
            .nlargest(5)
            .astype(dtype)
        )
        tm.assert_series_equal(result, expected)