UnknownTimeZoneError: 'tzutc()' when writing a dataframe using from_dict() then reading it using read_parquet() #25423
Comments
pls remove anything extraneous here - just show the minimal construction and call which errors
I cleaned up the comment and showed where the error happens.
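A minimal reconstruction of the failing case might look as follows (column names and timestamp values are illustrative, not taken from the original report; the parquet round-trip is shown commented out since that is the step that raises):

```python
import pandas as pd
from dateutil.tz import tzutc

# Illustrative reconstruction: timestamps carrying a dateutil tzutc() object
# instead of a pytz timezone (values and column names are made up).
df = pd.DataFrame.from_dict(
    {"key": ["a"], "last_modified": [pd.Timestamp("2019-02-22 16:00:00", tz=tzutc())]}
)
print(df["last_modified"].dt.tz)  # tzutc()

# The round-trip below is what fails:
# df.to_parquet("data.parquet", engine="fastparquet")
# pd.read_parquet("data.parquet")
# -> pytz.exceptions.UnknownTimeZoneError: 'tzutc()'
```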
Update on this: running the example with both latest fastparquet and pyarrow (with pandas master), writing with pyarrow fails.
Writing with fastparquet succeeds, but reading with pandas then fails (the same as what @joeax reported initially).
When writing / reading the file with fastparquet, the error boils down to fastparquet doing:
which then fails in pandas (so fastparquet seems to have written a literal string representation of the timezone in the parquet file). When reading that file with pyarrow, the same happens (but inside the arrow code).
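The failure can be sketched standalone: once the timezone has been serialized as the literal string `'tzutc()'`, pytz cannot resolve it back to a timezone (a minimal sketch, independent of any parquet file):

```python
import pytz

# pytz only knows timezone names like 'UTC' or 'Europe/Brussels';
# the literal repr 'tzutc()' is not a valid key.
try:
    pytz.timezone("tzutc()")
    raised = False
except pytz.exceptions.UnknownTimeZoneError as exc:
    raised = True
    print("UnknownTimeZoneError:", exc)
```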
This is related to a change in pandas. Previously, we would parse input with
but from pandas 0.24, we started to preserve the dateutil object:
And it is this object that fastparquet and pyarrow cannot handle. @joeax in any case, a workaround you can apply yourself: convert the timezone to a pytz one (e.g. with tz_convert).
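The behaviour change can be illustrated with a short sketch (assuming pandas >= 0.24 with dateutil installed; pandas 0.23 would have normalised the timezone to pytz.UTC instead):

```python
import pandas as pd
from dateutil.tz import tzutc

# From pandas 0.24 on, the dateutil tz object is preserved on the Timestamp
# rather than being converted to a pytz timezone.
ts = pd.Timestamp("2019-02-22 16:00:00", tz=tzutc())
print(repr(ts.tz))  # tzutc()  -- a dateutil object, not pytz.UTC
```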
@joeax I opened https://issues.apache.org/jira/browse/ARROW-5248 for supporting this on the pyarrow side, and dask/fastparquet#424 on the fastparquet side. So we can follow up on both projects, and therefore I'm closing this issue here.
Just came here to re-iterate the workaround by @jorisvandenbossche a bit more completely.
Docs: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.tz_convert.html
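Concretely, the workaround looks like this (a sketch; `"ts"` is a placeholder column name, not from the original report):

```python
import pandas as pd
from dateutil.tz import tzutc

# Build a column that carries a dateutil tzutc() timezone.
df = pd.DataFrame({"ts": pd.to_datetime(["2019-02-22 16:00:00"]).tz_localize(tzutc())})

# Convert it to a named timezone that the parquet writers can serialize.
df["ts"] = df["ts"].dt.tz_convert("UTC")
print(df["ts"].dt.tz)  # UTC
```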
Code Sample
Problem description
I have an app that reads bucket list data on a periodic basis from the S3 API. Up until recently, everything worked fine. When we upgraded to pandas 0.24, the problem with the generated parquet files started to surface.
Note: I created a clean VM, installed pandas 0.24 and all dependencies, and was able to reproduce the issue.
Here is more info on the column metadata generated by fastparquet.
pandas 0.23 metadata
pandas 0.24 metadata
Output of pd.show_versions()
pandas: 0.24.1
pytest: None
pip: 19.0.3
setuptools: 39.0.1
Cython: None
numpy: 1.16.1
scipy: None
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: None
patsy: None
dateutil: 2.8.0
pytz: 2018.9
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: 4.2.5
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: 0.2.1
pandas_gbq: None
pandas_datareader: None
gcsfs: None