
BUG: Make empty DataFrames serialized to JSON round-trippable back to DataFrames #21318

Merged (15 commits, Jun 8, 2018)
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.23.1.txt
@@ -92,7 +92,7 @@ I/O

- Bug in IO methods specifying ``compression='zip'`` which produced uncompressed zip archives (:issue:`17778`, :issue:`21144`)
- Bug in :meth:`DataFrame.to_stata` which prevented exporting DataFrames to buffers and most file-like objects (:issue:`21041`)
-
- Bug in IO JSON :func:`read_json` reading empty JSON schema with ``orient='table'`` back to :class:`DataFrame` caused an error (:issue:`21287`)

Member

@pyryjook : If you could address the merge conflicts, that would be great.

Contributor Author

Sure, I’ll certainly do that when I’m on my laptop again!


Plotting
^^^^^^^^
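
(For reference, a minimal reproduction of the round trip this entry describes; a sketch, assuming a pandas build without this patch:)

import pandas as pd

df = pd.DataFrame([], columns=['a', 'b', 'c'])
out = df.to_json(orient='table')
# Before this fix, reading the empty table schema back raised an error;
# with it, the round trip returns an empty DataFrame with columns a, b, c.
result = pd.read_json(out, orient='table')
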
2 changes: 1 addition & 1 deletion pandas/io/json/json.py
@@ -686,7 +686,7 @@ def _try_convert_data(self, name, data, use_dtypes=True,

result = False

if data.dtype == 'object':
if len(data) and data.dtype == 'object':

Contributor Author (@pyryjook, Jun 5, 2018)

This is the fix that seems to solve the error.

Any thoughts on this?

Member

The point of the JSON table schema is that we can be explicit about the types of the columns, so we shouldn't need to infer anything. Any way to avoid a call to this method altogether?
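
(For context, the table schema written for an empty frame already spells the column types out; a rough, abbreviated sketch of the output, assuming the 0.23-era writer:)

import pandas as pd

df = pd.DataFrame([], columns=['a', 'b', 'c'])
print(df.to_json(orient='table'))
# Roughly:
# {"schema": {"fields": [{"name": "index", "type": "string"},
#                        {"name": "a", "type": "string"}, ...],
#             "primaryKey": ["index"], ...},
#  "data": []}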

Member (@gfyoung, Jun 6, 2018)

> The point of the JSON table schema is that we can be explicit about the types of the columns, so we shouldn't need to infer anything.

@WillAyd : I'm confused how this is relevant to the patch. It looks fine to me.

Member

My point is that since the datatypes are explicitly defined in the schema, we shouldn't need any type of inference, which I get the impression this method is doing.

Member (@gfyoung, Jun 6, 2018)

> My point is that since the datatypes are explicitly defined in the schema, we shouldn't need any type of inference, which I get the impression this method is doing.

Sure, but I'm still not seeing the relevance to this particular PR. The patch is pretty straightforward from what I see.

@ludaavics

In my personal opinion, option 2 doesn't make much sense. As a user, I would rather have consistent, though undesirable, behavior (and leave the test a bit weak, ignoring dtypes, or perhaps marking it as an expected failure?) than have empty and non-empty data frames behave differently.
If at least non-empty data frames behaved as expected, and the bad behavior were confined to the corner case of empty ones, I could better see the case for living with the inconsistency.

Option 3 is obviously the best choice, though it spontaneously seems a bit overkill to me? But, given #21140, I would be glad to take a look this weekend and circle back with my best shot.

Contributor Author (@pyryjook, Jun 6, 2018)

I quickly went through the past commits related to this method, trying to find the reasoning behind the implementation. First of all, it's quite an old method (5 years) and it has seen only a few modifications during its lifetime. On top of that, its functionality seems to be tied in fairly complex ways to multiple use cases.

In that light (and admitting this is a fairly shallow take on the matter), re-thinking the purpose, or the existence, of that method looks like too fundamental a task for this PR.

I completely get the point that the fix, with or without the len() check, is only a compromise when looking at the whole picture.

Clearly, my impression of the complexity is colored by the fact that this is my first contribution to this library :)

Member

I agree with @ludaavics that 2 is the least desirable. I would say that if you don't see an apparent solution to number 3 above, then create a separate issue about the coercion of object types to float with read_json and orient='table'. After that you can parametrize the test for check_dtypes, xfailing the strict check and placing a reference via a TODO comment to the issue around coercion.
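
(A rough sketch of the suggested test layout; the issue reference in the reason string is a placeholder, not a real number:)

import pytest
import pandas as pd
import pandas.util.testing as tm

@pytest.mark.parametrize("check_dtypes", [
    pytest.param(True, marks=pytest.mark.xfail(
        reason="TODO (GH ?): empty object columns coerced to float")),
    False,
])
def test_empty_frame_roundtrip(check_dtypes):
    df = pd.DataFrame([], columns=['a', 'b', 'c'])
    result = pd.read_json(df.to_json(orient='table'), orient='table')
    # Relax both the column dtype and the index type checks together,
    # per the findings above.
    tm.assert_frame_equal(df, result, check_dtype=check_dtypes,
                          check_index_type=check_dtypes)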

Member (@gfyoung, Jun 6, 2018)

@WillAyd : Yes, that sounds like a good plan. Thanks for bearing with me. 😄

Contributor Author

Great, sounds like a plan! I'll make the new issue and the changes to the code accordingly. Thanks guys!


# try float
try:
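
(For context on the len(data) guard above, a sketch of the inference being skipped: with no values to contradict the cast, an empty object Series converts to float64 without complaint:)

import pandas as pd

s = pd.Series([], dtype='object')
# Nothing blocks the cast on empty data, so the "try float" path
# would silently replace object with float64:
print(s.astype('float64').dtype)  # float64
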
2 changes: 1 addition & 1 deletion pandas/io/json/table_schema.py
@@ -296,7 +296,7 @@ def parse_table_schema(json, precise_float):
"""
table = loads(json, precise_float=precise_float)
col_order = [field['name'] for field in table['schema']['fields']]
df = DataFrame(table['data'])[col_order]
df = DataFrame(table['data'], columns=col_order)[col_order]

dtypes = {field['name']: convert_json_field_to_pandas_type(field)
for field in table['schema']['fields']}
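
(For context on the change above, a sketch of why passing columns= matters: with no rows, the constructor receives no column labels, so the subsequent [col_order] selection has nothing to find unless the schema's names are passed in explicitly:)

import pandas as pd

print(len(pd.DataFrame([]).columns))  # 0
print(list(pd.DataFrame([], columns=['a', 'b', 'c']).columns))  # ['a', 'b', 'c']
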
10 changes: 10 additions & 0 deletions pandas/tests/io/json/test_json_table_schema.py
@@ -1,5 +1,6 @@
"""Tests for Table Schema integration."""
import json
import io
from collections import OrderedDict

import numpy as np
@@ -12,6 +13,7 @@
from pandas.io.json.table_schema import (
as_json_table_type,
build_table_schema,
parse_table_schema,
convert_pandas_type_to_json_field,
convert_json_field_to_pandas_type,
set_default_names)
@@ -560,3 +562,11 @@ def test_multiindex(self, index_names):
out = df.to_json(orient="table")
result = pd.read_json(out, orient="table")
tm.assert_frame_equal(df, result)

def test_empty_frame_roundtrip(self):
# GH 21287
df = pd.DataFrame([], columns=['a', 'b', 'c'])
expected = df.copy()
out = df.to_json(orient='table')
result = pd.read_json(out, orient='table')
tm.assert_frame_equal(expected, result)

Contributor Author (@pyryjook, Jun 5, 2018)

This raises an assertion error:

E       AssertionError: DataFrame.index are different
E
E       DataFrame.index classes are not equivalent
E       [left]:  Index([], dtype='object')
E       [right]: Float64Index([], dtype='float64')

That's something I need to dig into more deeply. If there's something obvious that I'm missing, any pointers would be appreciated.

@ludaavics

Thanks for doing this PR! Beat me to it :)
A bit weak, but what do we think of just
pd.testing.assert_frame_equal(expected, actual, check_dtype=False)?

Otherwise I would guess we have to go down the road of including the dtypes in the JSON representation?

Contributor Author (@pyryjook, Jun 5, 2018)

Good point! And actually it only works if I assert it like this:
pd.testing.assert_frame_equal(expected, result, check_dtype=False, check_index_type=False)
So both check_dtype and check_index_type have to be set to False in order to get the assertion to pass.

Thoughts on this?

Member

Ignoring the dtype difference is not the solution. The point of this format is to persist that metadata.

What I would do is check that the proper type information for the index is being written out (you can use an io.StringIO instance instead of writing to None). If that appears correct, then the issue would be with the reader ignoring or casting the type of the index after the fact.
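
(A sketch of that check, assuming to_json accepts a file handle here:)

import io
import json
import pandas as pd

df = pd.DataFrame([], columns=['a', 'b', 'c'])
buf = io.StringIO()
df.to_json(buf, orient='table')
fields = json.loads(buf.getvalue())['schema']['fields']
# If the writer is correct, the index field should carry the frame's
# index type; per the error above the empty frame's index is object
# dtype, so one would expect "string" here.
print(fields[0])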

Contributor Author

Yeah, you're right, ignoring the dtype and the index_type will just hide the problem.

Did some initial testing and it seems that, on the reading side, empty data with data.dtype == 'object' gets coerced to float64 for no clear reason.

I'll push a commit with fix proposal for comments.