
Commit b0923d4

changed approach from changing the to_json method to improving read_json to parse the schema for numeric column names
1 parent 6bcd303 commit b0923d4

File tree: 2 files changed (+24, −2 lines)


doc/source/whatsnew/v3.0.0.rst (+1)

@@ -63,6 +63,7 @@ Other enhancements
 - :meth:`DataFrame.plot.scatter` argument ``c`` now accepts a column of strings, where rows with the same string are colored identically (:issue:`16827` and :issue:`16485`)
 - :class:`DataFrameGroupBy` and :class:`SeriesGroupBy` methods ``sum``, ``mean``, ``median``, ``prod``, ``min``, ``max``, ``std``, ``var`` and ``sem`` now accept ``skipna`` parameter (:issue:`15675`)
 - :class:`Rolling` and :class:`Expanding` now support aggregations ``first`` and ``last`` (:issue:`33155`)
+- :func:`read_json` with ``orient="table"`` now correctly restores non-string column names when reading JSON data, ensuring that column names retain their original types as specified in the schema (:issue:`19129`)
 - :func:`read_parquet` accepts ``to_pandas_kwargs`` which are forwarded to :meth:`pyarrow.Table.to_pandas` which enables passing additional keywords to customize the conversion to pandas, such as ``maps_as_pydicts`` to read the Parquet map data type as python dictionaries (:issue:`56842`)
 - :meth:`.DataFrameGroupBy.transform`, :meth:`.SeriesGroupBy.transform`, :meth:`.DataFrameGroupBy.agg`, :meth:`.SeriesGroupBy.agg`, :meth:`.SeriesGroupBy.apply`, :meth:`.DataFrameGroupBy.apply` now support ``kurt`` (:issue:`40139`)
 - :meth:`DataFrameGroupBy.transform`, :meth:`SeriesGroupBy.transform`, :meth:`DataFrameGroupBy.agg`, :meth:`SeriesGroupBy.agg`, :meth:`RollingGroupby.apply`, :meth:`ExpandingGroupby.apply`, :meth:`Rolling.apply`, :meth:`Expanding.apply`, :meth:`DataFrame.apply` with ``engine="numba"`` now supports positional arguments passed as kwargs (:issue:`58995`)
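
As a quick illustration of the behaviour this entry describes, here is a minimal round-trip sketch (not part of the commit; the frame contents are invented):

    from io import StringIO

    import pandas as pd

    # A DataFrame whose column labels are integers, not strings.
    df = pd.DataFrame({1: [1.5, 2.5], 2: ["a", "b"]})

    # orient="table" embeds a Table Schema that records the original labels.
    payload = df.to_json(orient="table")

    # With this change, read_json consults the schema fields and restores the
    # integer labels instead of leaving them as the stringified JSON keys.
    result = pd.read_json(StringIO(payload), orient="table")
    print(result.columns.tolist())  # expected: [1, 2]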

pandas/io/json/_table_schema.py (+23, −2)

@@ -366,17 +366,29 @@ def parse_table_schema(json, precise_float: bool) -> DataFrame:
     :class:`Index` name of 'index' and :class:`MultiIndex` names starting
     with 'level_' are not supported.

+    To handle cases where column names are non-string types (e.g., integers),
+    all column names are first converted to strings when constructing the DataFrame.
+    After applying the correct data types using `astype(dtypes)`, the column names
+    are restored to their original types as specified in the schema.
+    This ensures compatibility with `to_json(orient="table")` while maintaining
+    the integrity of non-string column names.
+
     See Also
     --------
     build_table_schema : Inverse function.
     pandas.read_json
     """
     table = ujson_loads(json, precise_float=precise_float)
-    col_order = [field["name"] for field in table["schema"]["fields"]]
+    col_order = [
+        field["name"] if isinstance(field["name"], str) else str(field["name"])
+        for field in table["schema"]["fields"]
+    ]
     df = DataFrame(table["data"], columns=col_order)[col_order]

     dtypes = {
-        field["name"]: convert_json_field_to_pandas_type(field)
+        field["name"]
+        if isinstance(field["name"], str)
+        else str(field["name"]): convert_json_field_to_pandas_type(field)
         for field in table["schema"]["fields"]
     }

@@ -388,6 +400,15 @@ def parse_table_schema(json, precise_float: bool) -> DataFrame:

     df = df.astype(dtypes)

+    # Convert column names back to their original types
+    original_types = {
+        str(field["name"])
+        if not isinstance(field["name"], str)
+        else field["name"]: field["name"]
+        for field in table["schema"]["fields"]
+    }
+    df.columns = [original_types[col] for col in df.columns]
+
     if "primaryKey" in table["schema"]:
         df = df.set_index(table["schema"]["primaryKey"])
         if len(df.index.names) == 1:
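
A standalone sketch of the rename step added above to parse_table_schema (the fields/data literals are invented stand-ins for table["schema"]["fields"] and table["data"]): the frame is built under stringified labels so they match the JSON record keys, and the labels are then mapped back to the originals recorded in the schema.

    import pandas as pd

    # JSON object keys are always strings, while the schema's "name" values
    # keep their original type (here the integer 1).
    fields = [{"name": 1, "type": "integer"}, {"name": "b", "type": "string"}]
    data = [{"1": 10, "b": "x"}, {"1": 20, "b": "y"}]

    # Construct with stringified labels so they line up with the record keys.
    col_order = [
        field["name"] if isinstance(field["name"], str) else str(field["name"])
        for field in fields
    ]
    df = pd.DataFrame(data, columns=col_order)

    # Map the stringified labels back to the schema's original labels.
    original_types = {str(field["name"]): field["name"] for field in fields}
    df.columns = [original_types[col] for col in df.columns]

    print(df.columns.tolist())  # [1, 'b']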
