
Error reading parquet from s3 with s3fs >= 0.3.0 #27756


Closed
CJStadler opened this issue Aug 5, 2019 · 13 comments · Fixed by #27777
Labels
good first issue, IO Data (IO issues that don't fit into a more specific label)
Milestone

Comments

@CJStadler
Contributor

Code Sample, a copy-pastable example if possible

import pandas as pd

df = pd.read_parquet('s3://my-bucket/df.parquet')

Raises

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../pandas/io/parquet.py", line 294, in read_parquet
    return impl.read(path, columns=columns, **kwargs)
  File "/.../pandas/io/parquet.py", line 192, in read
    parquet_file = self.api.ParquetFile(path, open_with=s3.s3.open)
AttributeError: 'S3File' object has no attribute 's3'

Problem description

In version 0.3.0, s3fs removed the S3File.s3 attribute. It was replaced by S3File.fs (inherited from fsspec.AbstractBufferedFile.fs).

Should pandas check the s3fs version and call the right attribute based on that?
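One possible shape for such a check, sketched as a hypothetical helper (this is illustrative only; the actual fix landed in #27777): rather than comparing version strings, duck-type on the attribute, preferring the new `fs` name and falling back to the pre-0.3.0 `s3` name.

```python
# Hypothetical compatibility helper, not the actual pandas fix:
# prefer the post-0.3.0 ``fs`` attribute, fall back to the old ``s3``.
def filesystem_of(s3file):
    fs = getattr(s3file, "fs", None)
    return fs if fs is not None else s3file.s3
```

This avoids importing s3fs just to inspect its version, at the cost of silently accepting any object that happens to expose either attribute.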

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 18.6.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 0.25.0
numpy : 1.17.0
pytz : 2019.1
dateutil : 2.8.0
pip : 19.2.1
setuptools : 41.0.1
Cython : None
pytest : 4.4.1
hypothesis : None
sphinx : 2.1.2
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.8.3 (dt dec pq3 ext lo64)
jinja2 : 2.10.1
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : 0.3.1
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : 0.3.1
scipy : 1.3.0
sqlalchemy : 1.3.5
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None

@TomAugspurger
Contributor

Should pandas check the s3fs version and call the right attribute based on that?

Sure.

cc @martindurant for the (possibly unintentional) API change.

@TomAugspurger TomAugspurger added this to the Contributions Welcome milestone Aug 5, 2019
@TomAugspurger TomAugspurger added IO Data IO issues that don't fit into a more specific label good first issue labels Aug 5, 2019
@TomAugspurger
Contributor

So the open_with in

parquet_file = self.api.ParquetFile(path, open_with=s3.s3.open)
will need to depend on the version of s3fs.

@martindurant
Contributor

Indeed, this is an API change. However, I am surprised that anyone is opening a file and then using the filesystem methods via an attribute of that file: you presumably have the filesystem available directly at that point anyway.

Rather than testing specifically for s3 URLs, I would strongly encourage pandas to use fsspec directly, so that you can read from any of the implementations fsspec supports.

@CJStadler
Contributor Author

Perhaps there should be a function returning both the file and the filesystem, which can be used here instead of get_filepath_or_buffer. That would avoid S3File.s3/S3File.fs.

If that sounds like a reasonable direction I will work on a PR.
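A sketch of what that helper might look like if built on fsspec (the function name is hypothetical, not pandas API): fsspec.open returns an OpenFile whose fs attribute is the resolved filesystem, so both the file and the filesystem can be handed back in one call.

```python
import fsspec

# Hypothetical sketch of the proposed helper (not pandas API): resolve a
# URL to both an open file and its filesystem in one call, so callers
# never need to reach through S3File.s3 / S3File.fs themselves.
def get_file_and_filesystem(urlpath, mode="rb"):
    openfile = fsspec.open(urlpath, mode=mode)
    return openfile.open(), openfile.fs
```

This works for any protocol fsspec knows about, e.g. memory:// in tests or s3:// when s3fs is installed.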

@TomAugspurger
Contributor

TomAugspurger commented Aug 5, 2019 via email

@avicennax

Ran into this issue today; I made a hacky local workaround for the API break. Happy to help fix the issue properly in any way.

Cheers.

@martindurant
Contributor

For the sake of compatibility, I can make S3File.s3 -> S3File.fs alias, if that makes life easier.
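A minimal sketch of what such an alias could look like (the class and attribute names here are illustrative; the real change shipped in s3fs itself, linked later in this thread):

```python
# Minimal sketch of the proposed alias (illustrative; not s3fs source).
class CompatFile:
    def __init__(self, fs):
        self.fs = fs  # the post-0.3.0 attribute name

    @property
    def s3(self):
        """Backwards-compatible alias for ``fs`` so code written
        against s3fs < 0.3.0 keeps working."""
        return self.fs
```

A read-only property keeps the two names from ever pointing at different objects.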

@CJStadler CJStadler mentioned this issue Aug 6, 2019
@WillAyd
Member

WillAyd commented Aug 6, 2019

I would strongly encourage pandas to use fsspec directly

Is this compatible with the PEP 519 fspath protocol? We are dropping 3.6 soon, so it may be worth looking towards that instead.

@martindurant
Contributor

I have considered adding __fspath__ to file types (at least ones derived from fsspec.spec.AbstractBufferedFile), but my reading of the PEP is that it should return local paths. os.fspath does indeed work on files returned by the LocalFileSystem, however.

@martindurant
Contributor

For the interested, the implementation is as simple as

--- a/fsspec/spec.py
+++ b/fsspec/spec.py
@@ -1154,6 +1154,9 @@ class AbstractBufferedFile(io.IOBase):
     def __str__(self):
         return "<File-like object %s, %s>" % (type(self.fs).__name__, self.path)

+    def __fspath__(self):
+        return self.fs.protocol + "://" + self.path
+
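As a toy illustration of the patch above (the class here is a hypothetical stand-in, not fsspec code): any object implementing __fspath__ participates in the PEP 519 protocol via os.fspath, though, per the caveat above, the PEP expects local paths rather than URLs.

```python
import os

# Toy stand-in for AbstractBufferedFile (hypothetical; not fsspec code).
class FakeRemoteFile:
    def __init__(self, protocol, path):
        self.protocol = protocol
        self.path = path

    def __fspath__(self):
        # Mirrors the patch above: protocol + "://" + path.
        return self.protocol + "://" + self.path
```

With this in place, os.fspath(FakeRemoteFile("s3", "bucket/key")) yields the URL string rather than a local path, which is exactly the PEP-conformance question raised above.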

@jorisvandenbossche
Member

For the sake of compatibility, I can make S3File.s3 -> S3File.fs alias, if that makes life easier.

@martindurant For compatibility with released pandas versions, that might be nice? (or at least for a while?)

@jorisvandenbossche jorisvandenbossche modified the milestones: Contributions Welcome, 0.25.1 Aug 8, 2019
@martindurant
Contributor

Done and released

conda-forge/s3fs-feedstock#25
fsspec/s3fs@990ceeb

@jorisvandenbossche
Member

Thanks a lot!
