@TomAugspurger I'm not certain it's a dupe, but it's certainly related. It seems to me #20599 relates to the need to raise the error in the first place, and calls for using a datatype other than a C long internally when the int in question is outside [-2³¹+1, 2³¹-1]. This doesn't seem to be solved in 0.23.4, but the error is not raised anymore (maybe related to some work done on #20599). Loading `big_int = 2147483647 + 1` into a Python int works, but a C long gives no guarantee of correctness. In my opinion, loading an int into a df from JSON that can't be represented in the datatype used should at least raise a warning, but preferably an error.
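To illustrate the correctness hazard mentioned above: a Python int is arbitrary precision, but a 32-bit C long silently wraps around at 2³¹. A minimal sketch using the stdlib `ctypes` module (which masks rather than raises, making the wraparound visible):

```python
import ctypes
import json

# A value one past the 32-bit signed long maximum (2**31 - 1).
big_int = 2147483647 + 1

# Python ints are arbitrary precision, so JSON round-trips it exactly...
assert json.loads(json.dumps(big_int)) == big_int

# ...but a 32-bit C long has no room for it; ctypes masks the value
# to 32 bits, so it silently wraps to the most negative int32.
wrapped = ctypes.c_int32(big_int).value
print(wrapped)  # → -2147483648
```

This is why storing such a value in a C long without a warning or error can silently corrupt data.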
Code Sample
Problem description
No error is raised when loading an int larger than a C long. In contrast, `df = pd.DataFrame(json.loads(json_string)).astype(int)` raises the expected error.
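A sketch of the two code paths being contrasted (the original code sample is not shown above, so the exact JSON payload and variable names here are assumptions):

```python
import json
import pandas as pd

# Assumed payload: one column holding an int one past the
# 32-bit C long maximum.
big_int = 2147483647 + 1
json_string = json.dumps({"col": [big_int]})

# This path loads silently, even on a 32-bit build where the
# value cannot be represented in a C long:
df = pd.DataFrame(json.loads(json_string))
print(df["col"].iloc[0])

# Whereas forcing the platform int dtype raises on 32-bit Python:
#   OverflowError: Python int too large to convert to C long
# pd.DataFrame(json.loads(json_string)).astype(int)
```

On a 64-bit build both paths succeed, since the platform int maps to int64 there; the discrepancy is specific to 32-bit builds like the one in `pd.show_versions()` below.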
Expected Output
OverflowError: Python int too large to convert to C long
Output of `pd.show_versions()`
INSTALLED VERSIONS
commit: None
python: 3.6.5.final.0
python-bits: 32
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.23.4
pytest: None
pip: 18.0
setuptools: 39.0.1
Cython: None
numpy: 1.15.0
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.4
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.2.2
openpyxl: 2.5.4
xlrd: 1.1.0
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: 1.2.10
pymysql: None
psycopg2: 2.7.5 (dt dec pq3 ext)
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None