
BUG: assign doesn't cast SparseDataFrame to DataFrame #19178


Merged (13 commits) on Feb 12, 2018
3 changes: 1 addition & 2 deletions doc/source/whatsnew/v0.23.0.txt
@@ -491,7 +491,7 @@ Groupby/Resample/Rolling
Sparse
^^^^^^

-
- Bug in :class:`SparseArray` where if a scalar and index are passed in it will coerce to float64 regardless of scalar's dtype. (:issue:`19163`)
Contributor

Could you clarify what "passed in" refers to here? Is it specifically .assign? Or any method setting / updating the sparse array?

-
-
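The coercion described in the whatsnew entry above can be sketched with plain numpy, outside pandas' internals (a minimal illustration of the mechanism, not the pandas code itself; `np.asarray(...).dtype` stands in for pandas' internal scalar-dtype inference):

```python
import numpy as np

# Before the fix: the backing buffer was hard-coded to float64, so a
# non-float scalar broadcast over an index lost its dtype.
values = np.empty(3, dtype='float64')
values.fill(False)           # the bool scalar is coerced
print(values.dtype)          # float64

# After the fix: the buffer dtype is inferred from the scalar first.
values = np.empty(3, dtype=np.asarray(False).dtype)
values.fill(False)
print(values.dtype)          # bool
```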

@@ -508,7 +508,6 @@ Reshaping
- Bug in :func:`DataFrame.merge` in which merging using ``Index`` objects as vectors raised an Exception (:issue:`19038`)
- Bug in :func:`DataFrame.stack`, :func:`DataFrame.unstack`, :func:`Series.unstack` which were not returning subclasses (:issue:`15563`)
- Bug in timezone comparisons, manifesting as a conversion of the index to UTC in ``.concat()`` (:issue:`18523`)
-

Numeric
^^^^^^^
4 changes: 2 additions & 2 deletions pandas/core/sparse/array.py
@@ -27,7 +27,7 @@
is_scalar, is_dtype_equal)
from pandas.core.dtypes.cast import (
maybe_convert_platform, maybe_promote,
astype_nansafe, find_common_type)
astype_nansafe, find_common_type, infer_dtype_from)
from pandas.core.dtypes.missing import isna, notna, na_value_for_dtype

import pandas._libs.sparse as splib
@@ -195,7 +195,7 @@ def __new__(cls, data, sparse_index=None, index=None, kind='integer',
data = np.nan
if not is_scalar(data):
raise Exception("must only pass scalars with an index ")
values = np.empty(len(index), dtype='float64')
values = np.empty(len(index), dtype=infer_dtype_from(data)[0])
Contributor

Is infer_dtype_from_scalar more appropriate here, since we've validated that data is a scalar?

Contributor Author

Yeah, that's a good point, since infer_dtype_from just checks is_scalar again.

values.fill(data)
data = values

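The patched branch can be exercised in isolation. `infer_dtype_from` is a pandas-internal helper that returns a `(dtype, value)` pair, hence the `[0]` in the diff; `np.full`, which infers its output dtype from the fill value, serves as a rough stand-in here:

```python
import numpy as np

index = [1, 2, 3]

# Equivalent of the patched line: build the buffer with the scalar's
# own dtype instead of a hard-coded float64.
for scalar in (False, 0.0, 1, 'z'):
    values = np.full(len(index), scalar)
    print(repr(scalar), '->', values.dtype)
```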
11 changes: 11 additions & 0 deletions pandas/tests/sparse/frame/test_frame.py
@@ -1271,3 +1271,14 @@ def test_quantile_multi(self):

tm.assert_frame_equal(result, dense_expected)
tm.assert_sp_frame_equal(result, sparse_expected)

def test_assign_with_sparse_frame(self):
# GH 19163
df = pd.DataFrame({"a": [1, 2, 3]})
res = df.to_sparse(fill_value=False).assign(newcol=False)
exp = df.assign(newcol=False).to_sparse(fill_value=False)

tm.assert_sp_frame_equal(res, exp)

for column in res.columns:
assert type(res[column]) is SparseSeries
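Note that `SparseDataFrame` and `to_sparse` were removed in pandas 1.0, so this test no longer runs as written. On modern pandas, the behavior being pinned down (assign preserving sparse columns) can be checked with sparse-dtype columns; this is a rough translation, not the PR's own test, and unlike the all-sparse result above the newly assigned scalar column comes back dense:

```python
import pandas as pd

# Sparse columns are now ordinary columns backed by a SparseDtype.
df = pd.DataFrame({"a": [1, 2, 3]}).astype(pd.SparseDtype("int64", 0))
res = df.assign(newcol=False)

print(res.dtypes["a"])        # the existing sparse column keeps its dtype
print(res["newcol"].tolist())
```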
15 changes: 15 additions & 0 deletions pandas/tests/sparse/test_array.py
@@ -113,6 +113,21 @@ def test_constructor_spindex_dtype(self):
assert arr.dtype == np.int64
assert arr.fill_value == 0

@pytest.mark.parametrize('scalar,dtype', [
(False, bool),
(0.0, 'float64'),
(1, 'int64'),
('z', 'object')])
def test_scalar_with_index_infer_dtype(self, scalar, dtype):
# GH 19163
arr = SparseArray(scalar, index=[1, 2, 3], fill_value=scalar)
exp = SparseArray([scalar, scalar, scalar], fill_value=scalar)

tm.assert_sp_array_equal(arr, exp)

assert arr.dtype == dtype
assert exp.dtype == dtype
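The scalar-plus-index constructor was removed along with the rest of the old sparse API, but the dtype preservation this test pins down still holds for list input on modern pandas (a sanity check under that assumption, not the original test):

```python
import pandas as pd

arr = pd.arrays.SparseArray([False, False, False], fill_value=False)
print(arr.dtype)          # Sparse[bool, False]

arr = pd.arrays.SparseArray(['z', 'z', 'z'], fill_value='z')
print(arr.dtype.subtype)  # object
```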

def test_sparseseries_roundtrip(self):
# GH 13999
for kind in ['integer', 'block']: