BUG: Inconsistent data type conversion with .sum #37491

Closed · 2 of 3 tasks
candalfigomoro opened this issue Oct 29, 2020 · 12 comments · Fixed by #57306
Labels: Dtype Conversions (Unexpected or buggy dtype conversions), good first issue, Needs Tests (Unit test(s) needed to prevent regressions), Reduction Operations (sum, mean, min, max, etc.)

Comments

candalfigomoro commented Oct 29, 2020

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • (optional) I have confirmed this bug exists on the master branch of pandas.


Code Sample, a copy-pastable example

import pandas as pd
s = pd.Series([1, 2, 3, 4, 5])
print(s.sum().dtype)                    # int64
print(s.astype("float16").sum().dtype)  # float16
print(s.astype("float32").sum().dtype)  # float32
print(s.astype("int16").sum().dtype)    # int64 ???
print(s.astype("uint8").sum().dtype)    # int64 ???

Problem description

Summing a float16 Series/DataFrame returns float16 values.
Summing a float32 Series/DataFrame returns float32 values.
However, summing integer Series/DataFrames always returns int64 values, regardless of the original type (int16, uint8, etc.).

Expected Output

Summing int32 values should return int32 values, just like summing float32 values returns float32 values.

Output of pd.show_versions()

INSTALLED VERSIONS

commit : db08276
python : 3.8.5.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.16299
machine : AMD64
processor : Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Italian_Italy.1252

pandas : 1.1.3
numpy : 1.19.3
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.4
setuptools : 50.3.0.post20201006
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None

candalfigomoro added the Bug and Needs Triage (Issue that has not been reviewed by a pandas team member) labels on Oct 29, 2020
jreback (Contributor) commented Oct 29, 2020

These int types can easily overflow, so to make the implementation easier we upcast in the sum operation itself (note that this can be either numpy or bottleneck).

Sure, it's possible to not upcast, but that would require some checking logic (and a possible perf hit).

So I would not be averse to a proper PR to handle this.
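The overflow risk behind the upcast can be illustrated directly in numpy by pinning the accumulator dtype with `sum`'s `dtype=` argument (a minimal sketch, not pandas internals):

```python
import numpy as np

a = np.array([30000, 30000], dtype="int16")

# Default behaviour: numpy accumulates in a wider integer, so the
# mathematically correct 60000 comes back.
print(a.sum())                # 60000

# Pinning the accumulator to int16 shows why upcasting matters:
# 60000 does not fit in int16 (max 32767) and wraps around.
print(a.sum(dtype="int16"))   # -5536
```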

jreback added the Dtype Conversions and Performance (Memory or execution speed performance) labels and removed the Bug and Needs Triage labels on Oct 29, 2020
jbrockmendel added the Reduction Operations (sum, mean, min, max, etc.) label on Oct 30, 2020
jorisvandenbossche (Member) commented Oct 30, 2020

Note that this casting behaviour is what numpy does:

In [28]: type(np.array([1, 2, 3, 4], dtype="int8").sum()) 
Out[28]: numpy.int64

(indeed as @jreback mentions the reason this is done is because a sum can easily overflow for smaller integer types)

So not sure this is something we want to change.

jorisvandenbossche (Member)

One difference is that numpy preserves the signedness:

In [30]: type(np.array([1, 2, 3, 4], dtype="uint8").sum())  
Out[30]: numpy.uint64

In [31]: type(pd.Series([1, 2, 3, 4], dtype="uint8").sum()) 
Out[31]: numpy.int64

So that is something we could do in pandas as well.

candalfigomoro (Author)

Another problem is that in memory-constrained environments, where I use uint8 integers, the implicit conversion to int64 when using .sum could be an issue.

jorisvandenbossche (Member) commented Oct 30, 2020

Just a note: if you have uint8 integers, and relatively many of them (assuming this is the case, because otherwise a cast to int64 might not be very problematic), their sum will almost always overflow. The only case where it won't is when almost all values are 0: once you have 256 elements that are 1 or larger, you have an overflow.
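The 256-element threshold is easy to verify in numpy, again using `dtype=` to keep the accumulator at uint8 (illustrative only; pandas does not expose this knob on `.sum`):

```python
import numpy as np

ones = np.ones(256, dtype="uint8")

print(ones.sum())               # 256 -- accumulated in a wider integer
print(ones.sum(dtype="uint8"))  # 0   -- 256 wraps back to 0 in uint8
```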

candalfigomoro (Author)

@jorisvandenbossche They are actually a lot of booleans that get converted to 0/1.

jorisvandenbossche (Member) commented Oct 30, 2020

Yes, but the same note still applies: you only need to have 256 True values in your data, and you'll get an overflow.
Numpy does also use int64 as result dtype for summing bool data.

In theory, we could cast to a larger dtype only when we detect that an overflow will happen. But 1) that would have a significant performance cost, and 2) it creates value-dependent behaviour, something we try to avoid in general (meaning that ideally you know the output dtype based on the input dtype, without having to inspect the specific input values).
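For illustration, a rough sketch of what such value-dependent logic could look like (`sum_preserving_dtype` is a hypothetical helper, not a pandas API; it assumes integer inputs whose true sum fits an int64 accumulator):

```python
import numpy as np

def sum_preserving_dtype(arr):
    # Hypothetical sketch of value-dependent casting: accumulate in a
    # wide dtype, then keep the original dtype only if the result fits.
    total = arr.sum(dtype="int64")
    info = np.iinfo(arr.dtype)
    if info.min <= total <= info.max:
        return arr.dtype.type(total)  # fits: preserve the input dtype
    return total                      # would overflow: fall back to int64

small = np.array([1, 2, 3], dtype="int16")
large = np.array([30000, 30000], dtype="int16")
print(sum_preserving_dtype(small).dtype)  # int16
print(sum_preserving_dtype(large).dtype)  # int64
```

Note that the output dtype now depends on the input values, which is exactly the behaviour the comment above argues against.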

candalfigomoro (Author)

> Yes, but the same note still applies: you only need to have 256 True values in your data, and you'll get an overflow.

Yes, of course.

> Numpy does also use int64 as result dtype for summing bool data.

Yes, I see that this is also an "issue" in numpy. So if we want to stay consistent with numpy, I think we can close this issue, or just fix the signedness inconsistency.

candalfigomoro (Author) commented Oct 30, 2020

> Note that this casting behaviour is what numpy does:
>
> In [28]: type(np.array([1, 2, 3, 4], dtype="int8").sum())
> Out[28]: numpy.int64
>
> (indeed as @jreback mentions the reason this is done is because a sum can easily overflow for smaller integer types)
>
> So not sure this is something we want to change.

Curiously, this returns a numpy.int32 in my environment (see my pd.show_versions() output).

jorisvandenbossche (Member)

> Curiously, this returns a numpy.int32 in my environment (see my pd.show_versions() output).

See the docstring of np.sum: numpy uses the default platform integer (also e.g. when creating arrays), while pandas always defaults to int64. So I suppose that is the reason for the difference.
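A small check of that explanation (the default accumulator dtype depends on the platform, so only the explicit `dtype=` result is a portable expectation):

```python
import numpy as np

a = np.array([1, 2, 3, 4], dtype="int8")

# The accumulator defaults to the platform integer: int32 on builds
# where the C long is 32-bit (e.g. older Windows), int64 on most
# 64-bit Linux/macOS setups.
print(a.sum().dtype)

# Passing dtype= pins the result and sidesteps the platform difference.
print(a.sum(dtype="int64").dtype)   # int64 everywhere
```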

mroeschke added the good first issue and Needs Tests (Unit test(s) needed to prevent regressions) labels and removed the Performance (Memory or execution speed performance) label on Feb 2, 2024
mroeschke (Member)

Looks like the signedness is preserved now when summing uint. Could use a test if there isn't one already.

banorton (Contributor) commented Feb 7, 2024

take

6 participants