BUG: Inconsistent data type conversion with .sum #37491
Comments
These int types can easily overflow, and to make the implementation easier we upcast in the sum operation itself (note that this can be either numpy or bottleneck). Sure, it's possible to not upcast, but that would require some checking logic (and a possible perf hit), so I would not be averse to a proper PR to handle this.
Note that this casting behaviour is what numpy does:
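The code snippet from this comment is not preserved in this extract; a minimal sketch of the numpy behaviour being referred to, assuming a typical 64-bit Linux/macOS build where the default integer is 64-bit:

```python
import numpy as np

arr = np.array([1, 2, 3], dtype=np.int16)
# numpy also upcasts small integer dtypes: the sum accumulator
# defaults to the platform integer rather than the input dtype.
print(arr.sum().dtype)  # int64 on a typical 64-bit Linux/macOS build
```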
(Indeed, as @jreback mentions, the reason this is done is that a sum can easily overflow for smaller integer types.) So I'm not sure this is something we want to change.
One difference is that numpy preserves the "signed"ness:
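The snippet from this comment is likewise missing; a sketch of the difference, assuming a 64-bit Linux/macOS build and pandas 1.1.x (the version reported in this issue):

```python
import numpy as np
import pandas as pd

arr = np.array([1, 2, 3], dtype=np.uint8)
print(arr.sum().dtype)             # uint64: numpy keeps the result unsigned
print(type(pd.Series(arr).sum()))  # numpy.int64 in pandas 1.1.x: signedness lost
```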
So that is something we can do in pandas as well.
Another problem is that in memory-constrained environments, where I use uint8 integers, the implicit conversion to int64 when using .sum could be an issue.
Just a note that if you have uint8 integers, and have relatively many of them (assuming this is the case, because otherwise a cast to int64 might not be very problematic), a sum that stays in uint8 will quickly overflow.
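To make the overflow concrete, a small illustration (added here; not from the thread) of what summing uint8 values without upcasting does:

```python
import numpy as np

arr = np.array([200, 100], dtype=np.uint8)
print(arr.sum(dtype=np.uint8))  # 44: the true sum 300 wraps around modulo 256
print(arr.sum())                # 300: the default upcast avoids the overflow
```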
@jorisvandenbossche They are actually booleans that are converted to 0/1.
Yes, but the same note still applies: you only need to have 256 True values in your data and you'll get an overflow. In theory, we could only cast to a larger dtype if we detect that an overflow will happen. But 1) that would have a significant performance cost, and 2) that creates value-dependent behaviour, something we try to avoid in general (meaning that ideally you know the output dtype based on the input dtype, without having to inspect the specific input values).
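A hypothetical sketch of such value-dependent casting (the name sum_preserving_dtype is invented for illustration; this is not pandas code), showing why the output dtype would then depend on the actual data:

```python
import numpy as np

def sum_preserving_dtype(arr):
    # Sum in a wide accumulator first, then downcast only if the
    # result happens to fit the original small integer dtype: the
    # output dtype now depends on the values, not just the input dtype.
    total = arr.sum(dtype=np.int64)
    info = np.iinfo(arr.dtype)
    if info.min <= total <= info.max:
        return arr.dtype.type(total)
    return total
```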
Yes, of course.
Yes, I see that this is also an "issue" in numpy. So if we want to be consistent with numpy, I think we can close this issue or just fix the "signedness" inconsistency.
Curiously, this returns a numpy.int32 in my environment (see my pd.show_versions() output).
See the docstring of np.sum.
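For context (my gloss, not part of the thread): the np.sum docstring notes that the integer accumulator defaults to the platform integer (C long), which is 32-bit even on 64-bit Windows for numpy < 2.0, which would explain the numpy.int32 seen above:

```python
import numpy as np

# The accumulator for integer sums defaults to the platform integer
# (C long): 64-bit on most Linux/macOS builds, but 32-bit on Windows
# for numpy < 2.0.
print(np.array([1, 2, 3], dtype=np.int16).sum().dtype)
# -> int32 on 64-bit Windows, int64 on 64-bit Linux/macOS
```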
Looks like the signedness is preserved now when summing uint. Could use a test if there isn't one.
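A minimal sketch of such a test (hypothetical name; pandas' actual test-suite conventions and location may differ):

```python
import numpy as np
import pandas as pd

def test_sum_uint_preserves_unsignedness():
    # Summing an unsigned integer Series should yield an unsigned scalar.
    result = pd.Series([1, 2, 3], dtype="uint8").sum()
    assert np.issubdtype(type(result), np.unsignedinteger)
```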
take |
- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pandas.
- (optional) I have confirmed this bug exists on the master branch of pandas.
Code Sample, a copy-pastable example
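The original copy-pastable sample is not preserved in this extract; a minimal reproduction of the behaviour described below might look like:

```python
import pandas as pd

print(type(pd.Series([1.0, 2.0], dtype="float16").sum()))  # numpy.float16: preserved
print(type(pd.Series([1.0, 2.0], dtype="float32").sum()))  # numpy.float32: preserved
print(type(pd.Series([1, 2], dtype="int16").sum()))        # numpy.int64: upcast
print(type(pd.Series([1, 2], dtype="uint8").sum()))        # numpy.int64: upcast (pandas 1.1.x)
```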
Problem description

Summing a float16 Series/DataFrame returns float16 values. Summing a float32 Series/DataFrame returns float32 values. However, summing integer Series/DataFrames always returns int64 values, regardless of their original type (int16, uint8, etc.).

Expected Output

Summing int32 values should return int32 values, just like summing float32 values returns float32 values.

Output of pd.show_versions()
INSTALLED VERSIONS
commit : db08276
python : 3.8.5.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.16299
machine : AMD64
processor : Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Italian_Italy.1252
pandas : 1.1.3
numpy : 1.19.3
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.4
setuptools : 50.3.0.post20201006
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None