Code Sample, a copy-pastable example if possible
Problem description
I am trying to upload a pandas DataFrame as a new file in S3, but I am getting a FileNotFoundError because the S3 file is opened for read instead of write.

Flow:

It looks as if s3fs tried to read the file instead of writing to it.
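A minimal sketch of the kind of call that hits this path (this is an assumption about which writer was used; the bucket and key names are placeholders):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# Writing to a key that does not exist yet; pandas hands the s3:// URL to s3fs.
# Observed behavior: FileNotFoundError, as if the object were opened for reading.
df.to_csv("s3://my-bucket/new-file.csv")
```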
Another issue is that I can't specify which AWS profile to use. See https://stackoverflow.com/questions/48486209/specify-aws-profile-name-to-use-when-uploading-pandas-dataframe-to-s3
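A workaround sketch for the profile problem, bypassing the pandas/s3fs write path entirely by serializing in memory and uploading with a boto3 session bound to a named profile (the profile, bucket, and key names below are placeholders, not part of the report):

```python
import io

import boto3
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# Serialize to an in-memory buffer instead of letting pandas open the S3 path itself.
buf = io.StringIO()
df.to_csv(buf, index=False)

# boto3 resolves "my-profile" from the local AWS credentials/config files.
session = boto3.Session(profile_name="my-profile")
session.client("s3").put_object(
    Bucket="my-bucket",
    Key="new-file.csv",
    Body=buf.getvalue().encode("utf-8"),
)
```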
Expected Output
A new file should be created.
Output of pd.show_versions()
INSTALLED VERSIONS
commit: None
python: 3.6.2.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.22.0
pytest: 3.3.2
pip: 9.0.1
setuptools: 28.8.0
Cython: 0.27.3
numpy: 1.14.0
scipy: 1.0.0
pyarrow: 0.8.0
xarray: None
IPython: None
sphinx: None
patsy: 0.5.0
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: 1.2.1
pymysql: None
psycopg2: None
jinja2: None
s3fs: 0.1.2
fastparquet: 0.1.3
pandas_gbq: None
pandas_datareader: None