BUG: pd.read_parquet with pyarrow fails when row number is 0 and contains Pandas extensions type #35436
Thanks for the report. I think it should probably be …
I'm not sure whether this is the same bug or just related, but I didn't want to open another issue. There are no problems with 1.0.5; this only happens in 1.1.0, when reading a parquet file from S3. The same parquet file is read successfully from local disk, but when it is on S3 it gives the following error: `ArrowInvalid: Unable to merge: Field year has incompatible types: string vs int32`. A full traceback from IPython is below.
@gallir that seems to be something else. Can you open a new issue about it? (A reproducible case with some code to construct the dataset would help.)
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
(optional) I have confirmed this bug exists on the master branch of pandas.
Code Sample, a copy-pastable example
Problem description
All of the repro code uses pyarrow. fastparquet is not affected by this bug, as it does not support pandas extension types
(see: dask/fastparquet#465).
This problem only occurs when the DataFrame has 0 rows and contains a pandas extension type (`Int64`, `string`, etc.). It does not happen when there is 1 row, even if that row contains `pd.NA`. For example, the following code works and restores the DataFrame correctly:
The problem does not occur when the column has a plain NumPy dtype. For example, the following code works when the dtype is `int64`:
When using the pyarrow library directly to read the output parquet, `pyarrow.parquet.read_table` would also fail while converting to a pandas DataFrame, with errors similar to those from `pd.read_parquet`. This only occurs on the stable environment; the correct result is produced on environment 1 and environment 2 (see the bottom for environment descriptions). However, reading the file with `pyarrow.parquet.read_pandas` succeeds with the expected result in all environments: the produced DataFrame has the correct column types. This could be used as a workaround for now.
Expected Output
The parquet file should be read successfully.
Output of `pd.show_versions()` (the code is run under the docker image `python:3.8-slim`)

Stable environment
All packages installed directly from pip
INSTALLED VERSIONS
commit : None
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.9-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.5
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.0
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
Environment 1
Pandas and numpy compiled from source. pyarrow from their nightly channel
INSTALLED VERSIONS
commit : 6302f7b
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.9-arch1-1
Version : #1 SMP PREEMPT Thu, 16 Jul 2020 19:34:49 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.0rc0+8.g6302f7b
numpy : 1.20.0.dev0+4690248
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.2.0
Cython : 0.29.21
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.1.0.dev19
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
Environment 2
Pandas and numpy installed from pip. pyarrow from their nightly channel
INSTALLED VERSIONS
commit : None
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.9-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.5
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.1.0.dev19
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None