read_json with dtype=False infers Missing Values as None #28501
Comments
I think that this isn't a bug. NaN is a numerical value. But if it is considered an issue, I can do this one.
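For context, a small illustration (not from the original comment) of the distinction being drawn: NaN is itself a float, so it fits naturally in numeric columns, whereas None is a generic Python object.

```python
import numpy as np

# NaN is a float value, so a column holding it can stay numeric (float64).
print(type(np.nan))   # <class 'float'>

# None is a plain Python object, so a column holding it must use object dtype.
print(type(None))     # <class 'NoneType'>
```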
Yea there is certainly some ambiguity here and @jorisvandenbossche might have some thoughts, but I don't necessarily think the JSON reader should behave differently here; read_csv would also convert this to NaN:

```python
>>> pd.read_csv(io.StringIO("1\nnull"))
     1
0  NaN
```
Complementing what @chrisstpierre said, a null in JSON can stand for a numerical value, a string, a list, ..., as seen in https://stackoverflow.com/questions/21120999/representing-null-in-json. So setting it up as NaN, which is specifically a numerical value, is not necessarily appropriate.
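As a quick illustration of that point (a standard-library sketch, not from the original comment), JSON null deserializes to Python None regardless of what kind of values surround it:

```python
import json

# null maps to None whether its neighbours are numbers, strings, or lists.
print(json.loads('[null, 1, "a", [2, 3]]'))
# [None, 1, 'a', [2, 3]]
```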
Hi all - just curious whether stringifying None like this is the expected behavior:

```python
df = pd.read_json('[null]', dtype={0: str})
print(df.values)
# [['None']]
```

If yes, how can I avoid this while at the same time specifying the column to be of str dtype?
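One possible workaround, sketched here rather than quoted from the thread (it assumes a pandas version with the nullable string dtype, i.e. 1.0+): skip the dtype conversion in read_json and cast afterwards, so the missing value is preserved as <NA> instead of being stringified.

```python
import pandas as pd

# Read without forcing a dtype, so the JSON null stays a missing value,
# then cast to the nullable string dtype: None becomes <NA>, not the string "None".
df = pd.read_json('[null]', dtype=False)
df[0] = df[0].astype("string")

print(df[0])
# 0    <NA>
# Name: 0, dtype: string
```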
take
It's true that there is some ambiguity here. Another point of comparison: when creating a Series from None without a specified dtype, we also keep it as None with object dtype:
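A minimal sketch of that comparison (the original comment's snippet was not captured above):

```python
import pandas as pd

# A Series built from None with no explicit dtype keeps the value as the
# Python object None in an object-dtype Series; it is not coerced to NaN.
s = pd.Series([None])
print(s)
# 0    None
# dtype: object
```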
Run against master:

I think the second above is an issue - should probably return np.nan instead of None.
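A sketch of what that run likely compared, based on the issue title and the rest of the thread (the exact snippets and outputs are assumptions and may vary by pandas version):

```python
import pandas as pd

# First case: default dtype inference converts the JSON null to NaN (float64).
print(pd.read_json('[null]'))

# Second case: dtype=False skips the conversion and leaves the Python object
# None in an object-dtype column -- the behavior this issue argues should be NaN.
print(pd.read_json('[null]', dtype=False))
```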