
BUG: Series.setitem losing precision when enlarging #47342


Merged (4 commits, Jul 1, 2022)

Changes from 2 commits
1 change: 1 addition & 0 deletions doc/source/whatsnew/v1.5.0.rst
@@ -823,6 +823,7 @@ Indexing
 - Bug in :meth:`Series.__setitem__` where setting :attr:`NA` into a numeric-dtype :class:`Series` would incorrectly upcast to object-dtype rather than treating the value as ``np.nan`` (:issue:`44199`)
 - Bug in :meth:`Series.__setitem__` with ``datetime64[ns]`` dtype, an all-``False`` boolean mask, and an incompatible value incorrectly casting to ``object`` instead of retaining ``datetime64[ns]`` dtype (:issue:`45967`)
 - Bug in :meth:`Index.__getitem__` raising ``ValueError`` when indexer is from boolean dtype with ``NA`` (:issue:`45806`)
+- Bug in :meth:`Series.__setitem__` losing precision when enlarging :class:`Series` with scalar (:issue:`32346`)
 - Bug in :meth:`Series.mask` with ``inplace=True`` or setting values with a boolean mask with small integer dtypes incorrectly raising (:issue:`45750`)
 - Bug in :meth:`DataFrame.mask` with ``inplace=True`` and ``ExtensionDtype`` columns incorrectly raising (:issue:`45577`)
 - Bug in getting a column from a DataFrame with an object-dtype row index with datetime-like values: the resulting Series now preserves the exact object-dtype Index from the parent DataFrame (:issue:`42950`)
19 changes: 16 additions & 3 deletions pandas/core/indexing.py
@@ -21,7 +21,10 @@
 from pandas.util._decorators import doc
 from pandas.util._exceptions import find_stack_level

-from pandas.core.dtypes.cast import can_hold_element
+from pandas.core.dtypes.cast import (
+    can_hold_element,
+    maybe_promote,
+)
 from pandas.core.dtypes.common import (
     is_array_like,
     is_bool_dtype,
@@ -2083,8 +2086,18 @@ def _setitem_with_indexer_missing(self, indexer, value):
                 # We get only here with loc, so can hard code
                 return self._setitem_with_indexer(new_indexer, value, "loc")

-            # this preserves dtype of the value
-            new_values = Series([value])._values
+            # this preserves dtype of the value and of the object
+            if isna(value == value):
Member: value == value to see if it's pd.NA? It's not exactly clear why value is being compared to itself.
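The self-comparison works because pd.NA propagates through comparisons while np.nan does not. A minimal sketch (plain pandas/numpy, independent of the PR code) of the distinction being discussed:

```python
import numpy as np
import pandas as pd

# pd.NA propagates through comparisons: NA == NA evaluates to NA itself,
# which isna() reports as missing.
na_check = pd.isna(pd.NA == pd.NA)

# np.nan compares unequal to itself: nan == nan is plain False,
# which isna() does not treat as missing.
nan_check = pd.isna(np.nan == np.nan)

print(na_check, nan_check)  # True False
```

So `isna(value == value)` is True for pd.NA but False for np.nan, which is why this branch catches NA without also catching nan.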

Member (Author): Yep, is there a better way to achieve this?

Member: I think isna(value) should just work, right?

In [1]: pd.isna(pd.NA)
Out[1]: True

In [2]: pd.isna(np.nan)
Out[2]: True

Member (Author): I don't want to run in there when I get nan, because nan does not fit into int64, for example.
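For context on why np.nan has to fall through to the promotion branch instead: a small sketch using the same maybe_promote helper the diff imports (a pandas-internal API, subject to change):

```python
import numpy as np
from pandas.core.dtypes.cast import maybe_promote

# nan cannot be represented in int64, so maybe_promote reports the dtype
# the values must be upcast to, plus the fill value to use in that dtype.
promoted_dtype, fill_value = maybe_promote(np.dtype("int64"), np.nan)
print(promoted_dtype)  # float64
```

This is the upcast the enlargement path relies on: int64 plus nan yields float64 rather than raising.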

Member: I see. Well, if you only want to check for pd.NA, I think value is pd.NA is clearer IMO.

Member: The current code only checks for pd.NA, so my example above of pd.NaT isn't actually applicable right now. But in light of:

> we should strive to have the result consistent regardless of enlargement or not.

we should maybe handle np.nan as well?

We currently treat np.nan as a "missing value" when setting into a nullable series without enlargement. So then we should also treat it as "missing" in the case of enlargement (and so preserve the nullable Int64 dtype, instead of converting to float64)?

Member (Author): I like your idea of checking NaNs more broadly. So currently, if we are setting nan into Int64, it gets converted to pd.NA?

Member: Yes, for example:

In [7]: s = pd.Series([1, 2, 3], dtype="Int64")

In [8]: s[0] = np.nan

In [9]: s
Out[9]: 
0    <NA>
1       2
2       3
dtype: Int64

Member (Author): I wasn't aware of this; will adjust accordingly. You are correct, this should be consistent.

Member (Author): Could you have another look @jorisvandenbossche? I tried to add relevant cases for enlargement and non-enlargement to ensure consistency. Let me know if there is something missing.

There is one open case:

ser = pd.Series([1, 2], dtype="Int64")
ser[1] = "a"

This raises, while enlargement casts to object:

ser = pd.Series([1, 2], dtype="Int64")
ser[2] = "a"

This is true for rhs="a" and rhs=pd.NaT. With non-EA dtypes we are casting to object. Do we want to be consistent here, or is the difference intended?
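For comparison, a minimal sketch of the non-EA behavior mentioned here, where enlargement with an incompatible scalar silently casts to object instead of raising:

```python
import pandas as pd

# Plain numpy-backed int64 Series: enlarging with a string does not raise,
# it casts the whole Series to object dtype.
ser = pd.Series([1, 2])
ser[2] = "a"
print(ser.dtype)  # object
```

Whether the nullable (EA) dtypes should match this fallback-to-object behavior is exactly the open question above.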

+                new_dtype = self.obj.dtype
+            elif not self.obj.empty and not is_object_dtype(self.obj.dtype):
+                # We should not cast, if we have object dtype because we can
+                # set timedeltas into object series
+                curr_dtype = self.obj.dtype
+                curr_dtype = getattr(curr_dtype, "numpy_dtype", curr_dtype)
+                new_dtype = maybe_promote(curr_dtype, value)[0]
+            else:
+                new_dtype = None
+            new_values = Series([value], dtype=new_dtype)._values
             if len(self.obj._values):
                 # GH#22717 handle casting compatibility that np.concatenate
                 # does incorrectly
21 changes: 21 additions & 0 deletions pandas/tests/series/indexing/test_setitem.py
@@ -534,6 +534,27 @@ def test_setitem_not_contained(self, string_series):
         expected = concat([string_series, app])
         tm.assert_series_equal(ser, expected)

+    def test_setitem_keep_precision(self, any_numeric_ea_dtype):
+        # GH#32346
+        ser = Series([1, 2], dtype=any_numeric_ea_dtype)
+        ser[2] = 10
+        expected = Series([1, 2, 10], dtype=any_numeric_ea_dtype)
+        tm.assert_series_equal(ser, expected)
+
+    def test_setitem_enlarge_with_na(self):
+        # GH#32346
+        ser = Series([1, 2], dtype="Int64")
+        ser[2] = NA
+        expected = Series([1, 2, NA], dtype="Int64")
+        tm.assert_series_equal(ser, expected)
+
+    def test_setitem_enlarge_with_nan(self):
+        # GH#32346
+        ser = Series([1, 2])
+        ser[2] = np.nan
+        expected = Series([1, 2, np.nan])
+        tm.assert_series_equal(ser, expected)


def test_setitem_scalar_into_readonly_backing_data():
# GH#14359: test that you cannot mutate a read only buffer