BUG: pd.read_parquet with pyarrow fails when row number is 0 and the DataFrame contains a pandas extension type #35436


Open · 3 tasks done
Holi0317 opened this issue Jul 28, 2020 · 3 comments
Labels: Bug, IO Parquet (parquet, feather)

Holi0317 commented Jul 28, 2020

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • (optional) I have confirmed this bug exists on the master branch of pandas.


Code Sample, a copy-pastable example

import pandas as pd
df = pd.DataFrame([], columns=["s"]).astype({"s": "string"})
df.to_parquet("out.parquet")
df2 = pd.read_parquet("out.parquet") # <- Exception thrown here
Traceback (most recent call last):
  File "<input>", line 1, in <module>
    df2 = pd.read_parquet("out.parquet")
  File "/usr/local/lib/python3.8/site-packages/pandas/io/parquet.py", line 312, in read_parquet
    return impl.read(path, columns=columns, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/pandas/io/parquet.py", line 126, in read
    result = self.api.parquet.read_table(
  File "pyarrow/array.pxi", line 715, in pyarrow.lib._PandasConvertible.to_pandas
  File "pyarrow/table.pxi", line 1565, in pyarrow.lib.Table._to_pandas
  File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 779, in table_to_blockmanager
    blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
  File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 1116, in _table_to_blocks
    return [_reconstruct_block(item, columns, extension_columns)
  File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 1116, in <listcomp>
    return [_reconstruct_block(item, columns, extension_columns)
  File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 738, in _reconstruct_block
    pd_ext_arr = pandas_dtype.__from_arrow__(arr)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/arrays/string_.py", line 83, in __from_arrow__
    return StringArray._concat_same_type(results)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/arrays/numpy_.py", line 171, in _concat_same_type
    return cls(np.concatenate(to_concat))
  File "<__array_function__ internals>", line 5, in concatenate
ValueError: need at least one array to concatenate

Problem description

All of the code above uses the pyarrow engine. fastparquet is not affected by this bug, as it does not support pandas extension types
(see dask/fastparquet#465).

This problem only occurs when the DataFrame has 0 rows and contains a pandas extension type (Int64, string, etc.).
It does not happen when there is at least one row, even if that row contains pd.NA. For example, the following code works:

import pandas as pd
df = pd.DataFrame([{"s": pd.NA}], columns=["s"]).astype({"s": "Int64"})
df.to_parquet("out.parquet")
pd.read_parquet("out.parquet")

The above code will restore the DataFrame correctly.

The problem does not occur when the column has a regular (non-extension) dtype. For example, the following code works when the dtype is int64:

import pandas as pd
df = pd.DataFrame([], columns=["s"]).astype({"s": "int"})
df.to_parquet("out.parquet")
pd.read_parquet("out.parquet")

When using the pyarrow library directly to read the output parquet, .read_table fails when converting to a pandas DataFrame with an error similar to the one from pd.read_parquet. This only occurs on the stable environment; the correct result is produced on environment 1 and environment 2 (see the bottom for environment descriptions).

import pandas as pd
import pyarrow.parquet as pq
df = pd.DataFrame([], columns=["s"]).astype({"s": "Int64"})
df.to_parquet("out.parquet")
table = pq.read_table("out.parquet")
table.to_pandas() # Exception here on stable. Correct result on env 1 and 2
Traceback (most recent call last):
  File "<input>", line 1, in <module>
    table.to_pandas()
  File "pyarrow/array.pxi", line 715, in pyarrow.lib._PandasConvertible.to_pandas
  File "pyarrow/table.pxi", line 1565, in pyarrow.lib.Table._to_pandas
  File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 779, in table_to_blockmanager
    blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
  File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 1116, in _table_to_blocks
    return [_reconstruct_block(item, columns, extension_columns)
  File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 1116, in <listcomp>
    return [_reconstruct_block(item, columns, extension_columns)
  File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 738, in _reconstruct_block
    pd_ext_arr = pandas_dtype.__from_arrow__(arr)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/arrays/integer.py", line 113, in __from_arrow__
    return IntegerArray._concat_same_type(results)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/arrays/masked.py", line 175, in _concat_same_type
    data = np.concatenate([x._data for x in to_concat])
  File "<__array_function__ internals>", line 5, in concatenate
ValueError: need at least one array to concatenate

However, reading the DataFrame with .read_pandas succeeds with the expected result in all environments.

import pandas as pd
import pyarrow.parquet as pq
df = pd.DataFrame([], columns=["s"]).astype({"s": "Int64"})
df.to_parquet("out.parquet")
table = pq.read_pandas("out.parquet")
df2 = table.to_pandas() # No exception here
df.equals(df2) # Returns True

The produced DataFrame is as expected, with the correct column type. This can be used as a workaround for now.
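Until this is fixed upstream, the workaround can be wrapped in a small helper. The sketch below is illustrative only: the helper name read_parquet_safe is made up, and catching ValueError is an assumption based on the failure mode shown in the tracebacks above.

import pandas as pd
import pyarrow.parquet as pq

def read_parquet_safe(path, **kwargs):
    # Hypothetical helper, not part of pandas: try the normal path first.
    try:
        return pd.read_parquet(path, **kwargs)
    except ValueError:
        # "need at least one array to concatenate" is raised for 0-row
        # extension-dtype columns; fall back to the pq.read_pandas workaround.
        return pq.read_pandas(path).to_pandas()

df2 = read_parquet_safe("out.parquet")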

Expected Output

The parquet file should be read successfully.

Output of pd.show_versions()

(The code is run under docker image python:3.8-slim)

Stable environment

All packages installed from pip directly

INSTALLED VERSIONS

commit : None
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.9-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.0.5
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.0
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None

Environment 1

Pandas and numpy compiled from source. pyarrow from their nightly channel

INSTALLED VERSIONS

commit : 6302f7b
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.9-arch1-1
Version : #1 SMP PREEMPT Thu, 16 Jul 2020 19:34:49 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.1.0rc0+8.g6302f7b
numpy : 1.20.0.dev0+4690248
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.2.0
Cython : 0.29.21
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.1.0.dev19
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None

Environment 2

Pandas and numpy installed from pip. pyarrow from their nightly channel

INSTALLED VERSIONS

commit : None
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.9-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.0.5
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.1.0.dev19
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None

Holi0317 added the Bug and Needs Triage (Issue that has not been reviewed by a pandas team member) labels on Jul 28, 2020
TomAugspurger (Contributor) commented

Thanks for the report. One of StringDtype.__from_arrow__ or StringArray._concat_same_type will need to handle the empty array case.

I think it should probably be __from_arrow__, since we probably want to know if we're concatenating no arrays.
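For illustration, a rough sketch of what such a guard in StringDtype.__from_arrow__ could look like, assuming the current implementation builds one StringArray per pyarrow chunk in a list called results before concatenating (this is only a sketch, not the actual patch):

# Sketch only: at the end of StringDtype.__from_arrow__, after the per-chunk
# results list has been built from the pyarrow ChunkedArray:
if not results:
    # A 0-row column can arrive as a ChunkedArray with zero chunks, so there
    # is nothing to concatenate; return an empty StringArray instead.
    return StringArray._from_sequence([])
return StringArray._concat_same_type(results)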

gallir commented Aug 3, 2020

I'm not sure if this is the same bug or just related, but I didn't want to open another issue.

There are no problems with 1.0.5, it's only in 1.1.0.

The issue occurs when reading a parquet file from S3 with 1.1.0. The same parquet read from a local file is read successfully, but if the file is in S3 it gives the following error:

ArrowInvalid: Unable to merge: Field year has incompatible types: string vs int32

Below is the full traceback from IPython:


In [20]: pd.read_parquet("s3://a-private-bucket/bookings-stream-v2/parquet/to_import/year=2020/month=08/day=03/bookingsv2-2-2020-08-03-18-15
    ...: -02-20faa3e3-3991-4e00-9f56-7547fc1eb539.gz.parquet")                                                                                                
---------------------------------------------------------------------------
ArrowInvalid                              Traceback (most recent call last)
<ipython-input-20-def989cdc337> in <module>
----> 1 pd.read_parquet("s3://a-private-bucket/bookings-stream-v2/parquet/to_import/year=2020/month=08/day=03/bookingsv2-2-2020-08-03-18-15-02-20faa3e3-3991-4e00-9f56-7547fc1eb539.gz.parquet")

~/venv/lib64/python3.7/site-packages/pandas/io/parquet.py in read_parquet(path, engine, columns, **kwargs)
    315     """
    316     impl = get_engine(engine)
--> 317     return impl.read(path, columns=columns, **kwargs)

~/venv/lib64/python3.7/site-packages/pandas/io/parquet.py in read(self, path, columns, **kwargs)
    140         kwargs["use_pandas_metadata"] = True
    141         result = self.api.parquet.read_table(
--> 142             path, columns=columns, filesystem=fs, **kwargs
    143         ).to_pandas()
    144         if should_close:

~/venv/lib64/python3.7/site-packages/pyarrow/parquet.py in read_table(source, columns, use_threads, metadata, use_pandas_metadata, memory_map, read_dictionary, filesystem, filters, buffer_size, partitioning, use_legacy_dataset)
   1562                 read_dictionary=read_dictionary,
   1563                 buffer_size=buffer_size,
-> 1564                 filters=filters,
   1565             )
   1566         except ImportError:

~/venv/lib64/python3.7/site-packages/pyarrow/parquet.py in __init__(self, path_or_paths, filesystem, filters, partitioning, read_dictionary, buffer_size, memory_map, **kwargs)
   1431         self._dataset = ds.dataset(path_or_paths, filesystem=filesystem,
   1432                                    format=parquet_format,
-> 1433                                    partitioning=partitioning)
   1434 
   1435     @property

~/venv/lib64/python3.7/site-packages/pyarrow/dataset.py in dataset(source, schema, format, filesystem, partitioning, partition_base_dir, exclude_invalid_files, ignore_prefixes)
    665     # TODO(kszucs): support InMemoryDataset for a table input
    666     if _is_path_like(source):
--> 667         return _filesystem_dataset(source, **kwargs)
    668     elif isinstance(source, (tuple, list)):
    669         if all(_is_path_like(elem) for elem in source):

~/venv/lib64/python3.7/site-packages/pyarrow/dataset.py in _filesystem_dataset(source, schema, filesystem, partitioning, format, partition_base_dir, exclude_invalid_files, selector_ignore_prefixes)
    432     factory = FileSystemDatasetFactory(fs, paths_or_selector, format, options)
    433 
--> 434     return factory.finish(schema)
    435 
    436 

~/venv/lib64/python3.7/site-packages/pyarrow/_dataset.pyx in pyarrow._dataset.DatasetFactory.finish()

~/venv/lib64/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()

~/venv/lib64/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowInvalid: Unable to merge: Field year has incompatible types: string vs int32

jorisvandenbossche (Member) commented

@gallir that seems to be something else. Can you open a new issue about it? (and a reproducible case with some code to construct the dataset would help)
