read_parquet raises an exception: ValueError: Cannot convert from timedelta64[ns] to timedelta64. Supported resolutions are 's', 'ms', 'us', 'ns'. Traceback:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File /path/to/script.py:6
      4 df = pd.DataFrame(columns=td)
      5 df.to_parquet('test.parquet', engine='pyarrow')
----> 6 pd.read_parquet('test.parquet', engine='pyarrow')

File ~/.../site-packages/pandas/io/parquet.py:667, in read_parquet(path, engine, columns, storage_options, use_nullable_dtypes, dtype_backend, filesystem, filters, **kwargs)
    664     use_nullable_dtypes = False
    665 check_dtype_backend(dtype_backend)
--> 667 return impl.read(
    668     path,
    669     columns=columns,
    670     filters=filters,
    671     storage_options=storage_options,
    672     use_nullable_dtypes=use_nullable_dtypes,
    673     dtype_backend=dtype_backend,
    674     filesystem=filesystem,
    675     **kwargs,
    676 )

File ~/.../site-packages/pandas/io/parquet.py:281, in PyArrowImpl.read(self, path, columns, filters, use_nullable_dtypes, dtype_backend, storage_options, filesystem, **kwargs)
    273 try:
    274     pa_table = self.api.parquet.read_table(
    275         path_or_handle,
    276         columns=columns,
   (...)
    279         **kwargs,
    280     )
--> 281     result = pa_table.to_pandas(**to_pandas_kwargs)
    283 if manager == "array":
    284     result = result._as_manager("array", copy=False)

File ~/.../site-packages/pyarrow/array.pxi:885, in pyarrow.lib._PandasConvertible.to_pandas()

File ~/.../site-packages/pyarrow/table.pxi:5002, in pyarrow.lib.Table._to_pandas()

File ~/.../site-packages/pyarrow/pandas_compat.py:781, in table_to_dataframe(options, table, categories, ignore_metadata, types_mapper)
    778 ext_columns_dtypes = _get_extension_dtypes(table, [], types_mapper)
    780 _check_data_column_metadata_consistency(all_columns)
--> 781 columns = _deserialize_column_index(table, all_columns, column_indexes)
    783 column_names = table.column_names
    784 result = pa.lib.table_to_blocks(options, table, categories,
    785                                 list(ext_columns_dtypes.keys()))

File ~/.../site-packages/pyarrow/pandas_compat.py:919, in _deserialize_column_index(block_table, all_columns, column_indexes)
    917 # if we're reconstructing the index
    918 if len(column_indexes) > 0:
--> 919     columns = _reconstruct_columns_from_metadata(columns, column_indexes)
    921 return columns

File ~/.../site-packages/pyarrow/pandas_compat.py:1122, in _reconstruct_columns_from_metadata(columns, column_indexes)
   1120     level = _pandas_api.pd.Index([decimal.Decimal(i) for i in level])
   1121 elif level.dtype != dtype:
-> 1122     level = level.astype(dtype)
   1123 # ARROW-9096: if original DataFrame was upcast we keep that
   1124 if level.dtype != numpy_dtype and pandas_dtype != "datetimetz":

File ~/.../site-packages/pandas/core/indexes/base.py:1097, in Index.astype(self, dtype, copy)
   1093     new_values = cls._from_sequence(self, dtype=dtype, copy=copy)
   1095 else:
   1096     # GH#13149 specifically use astype_array instead of astype
-> 1097     new_values = astype_array(values, dtype=dtype, copy=copy)
   1099 # pass copy=False because any copying will be done in the astype above
   1100 result = Index(new_values, name=self.name, dtype=new_values.dtype, copy=False)

File ~/.../site-packages/pandas/core/dtypes/astype.py:182, in astype_array(values, dtype, copy)
    179     values = values.astype(dtype, copy=copy)
    181 else:
--> 182     values = _astype_nansafe(values, dtype, copy=copy)
    184 # in pandas we don't store numpy str dtypes, so convert to object
    185 if isinstance(dtype, np.dtype) and issubclass(values.dtype.type, str):

File ~/.../site-packages/pandas/core/dtypes/astype.py:122, in _astype_nansafe(arr, dtype, copy, skipna)
    119     tdvals = array_to_timedelta64(arr).view("m8[ns]")
    121     tda = ensure_wrapped_if_datetimelike(tdvals)
--> 122     return tda.astype(dtype, copy=False)._ndarray
    124 if dtype.name in ("datetime64", "timedelta64"):
    125     msg = (
    126         f"The '{dtype.name}' dtype has no unit. Please pass in "
    127         f"'{dtype.name}[ns]' instead."
    128     )

File ~/.../site-packages/pandas/core/arrays/timedeltas.py:358, in TimedeltaArray.astype(self, dtype, copy)
    354     return type(self)._simple_new(
    355         res_values, dtype=res_values.dtype, freq=self.freq
    356     )
    357 else:
--> 358     raise ValueError(
    359         f"Cannot convert from {self.dtype} to {dtype}. "
    360         "Supported resolutions are 's', 'ms', 'us', 'ns'"
    361     )
    363 return dtl.DatetimeLikeArrayMixin.astype(self, dtype, copy=copy)

ValueError: Cannot convert from timedelta64[ns] to timedelta64. Supported resolutions are 's', 'ms', 'us', 'ns'
Now, the odd thing that makes me believe this might be a bug (and not a feature request) is that the code works with either df = pd.DataFrame(data=td) or even df = pd.DataFrame(index=td).
BTW: df.to_parquet('test.parquet', engine='fastparquet') fails for all three DataFrame variants.
Expected Behavior
Using td for columns should work the same way as using it for the index.
Thanks for the report! I confirm this bug exists in the main branch. You are correct, this is a bug and not a feature request since the DataFrame is successfully saved using to_parquet but is not readable using read_parquet. PRs to fix are welcome!
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Installed Versions
INSTALLED VERSIONS
commit : d9cdd2e
python : 3.11.9.final.0
python-bits : 64
OS : Linux
OS-release : 6.8.0-101041-tuxedo
Version : #41~22.04.1tux1 SMP PREEMPT_DYNAMIC Wed Aug 21 22:16:53 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : de_DE.UTF-8
LOCALE : de_DE.UTF-8
pandas : 2.2.2
numpy : 2.1.0
pytz : 2024.1
dateutil : 2.9.0
setuptools : 73.0.1
pip : 24.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.27.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : 2024.5.0
fsspec : 2024.6.1
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 17.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None