DOC: s3fs is required when using read_csv with an S3 URI #35206
Comments
The user guide mentions it (https://pandas.pydata.org/docs/user_guide/io.html#reading-remote-files), and the install guide as well (https://pandas.pydata.org/docs/getting_started/install.html#optional-dependencies). I think it would probably be too much to list all optional dependencies in the read_csv docstring as well (S3 is one, but e.g. Azure or Google Cloud need other optional deps), but we should maybe mention in general that additional dependencies might be needed and link to one of the other places where this is explained?
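As a minimal sketch of the point above (the object paths are hypothetical), each remote URL scheme that read_csv accepts pulls in its own optional dependency:

import pandas as pd

# Hypothetical paths, for illustration only. Each scheme needs a separate
# optional package installed alongside pandas:
#   s3://  -> s3fs   (pip install s3fs)
#   gs://  -> gcsfs  (pip install gcsfs)
df_s3 = pd.read_csv("s3://my-bucket/data.csv")
df_gcs = pd.read_csv("gs://my-bucket/data.csv")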
I wasn't aware that there are even more 😱
Sounds good! Should I make a PR?
Yes, PR very welcome!
Hello, can I make a PR? It seems nobody has made one yet.
@abdoulayegk Oops, sorry, I forgot. Please go ahead if you want to take care of that :-)
Sorry if these are beyond the scope of this issue! They seemed closely related, so I thought that I would note these gaps here rather than create a new issue.
Location of the documentation
pandas.read_csv

Documentation problem
I've just noticed that s3fs is required when you read a URL from S3. While it is documented that you can read from S3, the fact that you need to install an extra dependency is not documented. Also, it would be nice if this were a pandas extra in setup.py (e.g. s3).
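As a rough illustration of the documentation gap described above (the bucket path is made up), a user who has pandas installed but not s3fs only discovers the extra requirement when the call itself fails with an ImportError:

import pandas as pd

try:
    df = pd.read_csv("s3://example-bucket/data.csv")
except ImportError as err:
    # Raised when the optional S3 backend (s3fs) is missing;
    # installing it, e.g. with "pip install s3fs", resolves this.
    print(f"Missing optional dependency for S3 URLs: {err}")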