read_csv and others derived from _read close user-provided filehandles #14418
As context for why it would be nice not to close a user-provided filehandle, we have a function like this, which reads the first line to check the columns before reading the entire file:

```python
import functools

import pandas as pd


def _parse_blast_data(fh, columns, error, error_message, comment=None,
                      skiprows=None):
    read_csv = functools.partial(pd.read_csv, na_values='N/A', sep='\t',
                                 header=None, keep_default_na=False,
                                 comment=comment, skiprows=skiprows)
    lineone = read_csv(fh, nrows=1)
    if len(lineone.columns) != len(columns):
        raise error(error_message % (len(columns), len(lineone.columns)))
    fh.seek(0)
    return read_csv(fh, names=columns, dtype=_possible_columns)
```
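The reason closing matters there: once read_csv has closed fh, the subsequent fh.seek(0) call fails. A minimal illustration of that failure mode (an illustrative snippet, not code from the original report):

```python
import io

fh = io.StringIO('a,b\n1,2')
fh.close()   # stand-in for read_csv having closed the user's handle
fh.seek(0)   # raises ValueError: I/O operation on closed file
```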
We need to close the file handles that pandas itself opens. These include things like re-encoding streams and compression streams (i.e. cases where pandas needs to open a new handle). In theory we shouldn't be closing a user stream (though it certainly is possible); it's not tested that well. The changes above were to fix NOT closing things in the test suite, since PY3 reports unclosed handles much better than PY2. So if you can avoid closing things that are not supposed to be closed, that would be great.
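As an aside on why the re-encoding case is tricky: wrapping a user's binary handle in a text wrapper ties their lifetimes together, so closing only the wrapper is not straightforward. A minimal sketch using plain io calls (not pandas internals):

```python
import io

# Closing a re-encoding wrapper also closes the buffer underneath it.
raw = io.BytesIO(b'a,b\n1,2')
wrapper = io.TextIOWrapper(raw, encoding='utf-8')
wrapper.close()
print(raw.closed)   # True -- the user's handle is gone too

# detach() releases the underlying buffer so only the wrapper is discarded.
raw2 = io.BytesIO(b'a,b\n1,2')
wrapper2 = io.TextIOWrapper(raw2, encoding='utf-8')
wrapper2.detach()
print(raw2.closed)  # False -- the user's handle survives
```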
@jreback agree completely, and thank you guys for working to clean up resources opened by pandas. The line I noted, in combination with this one, is what I believe is causing user sources to be closed. After poking through that, it occurred to me that this may be specific to the C parser engine, and testing confirms it:

```python
In [1]: import pandas as pd

In [2]: import io

In [3]: fh = io.StringIO('a,b\n1,2')

In [4]: pd.read_csv(fh, engine='python')
Out[4]:
   a  b
0  1  2

In [5]: fh.closed
Out[5]: False
```
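For contrast, the same session with the C engine shows the handle being closed. This is a sketch of the behavior reported in this issue against 0.19.0, not output copied from the thread:

```python
In [6]: fh = io.StringIO('a,b\n1,2')

In [7]: pd.read_csv(fh, engine='c')
Out[7]:
   a  b
0  1  2

In [8]: fh.closed
Out[8]: True
```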
@ebolyen yes, the C engine got a facelift w.r.t. handles in 0.19.0. Yeah, we may need to keep some kind of state about whether to close a handle.
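A minimal sketch of what that bookkeeping could look like, assuming a helper that distinguishes paths from file-like objects (the names here are illustrative, not pandas' actual internals):

```python
def get_handle(path_or_buf, mode='r', encoding=None):
    """Return (handle, should_close): open paths ourselves, pass buffers through."""
    if isinstance(path_or_buf, str):
        # We opened this handle, so we are responsible for closing it.
        return open(path_or_buf, mode, encoding=encoding), True
    # User-provided file-like object: its lifetime belongs to the caller.
    return path_or_buf, False


def read_table(path_or_buf):
    handle, should_close = get_handle(path_or_buf)
    try:
        return handle.read()
    finally:
        if should_close:
            handle.close()
```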
That seems logical. Perhaps close if it is an actual file, BUT keep open any streams (e.g. StringIO).
Could make an argument either way, but if we want to follow what seems to be the stdlib convention, we should leave handles to actual files open too:

```python
In [48]: %%file tmp.json
    ...: {"a": "22"}
Overwriting tmp.json

In [49]: import json

In [50]: fh = open('tmp.json'); json.load(fh); fh.closed
Out[50]: False
```
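The csv module in the stdlib follows the same convention; a quick check (an illustrative snippet, not from the thread):

```python
import csv

with open('tmp.csv', 'w', newline='') as out:
    out.write('a,b\n1,2\n')

fh = open('tmp.csv', newline='')
rows = list(csv.reader(fh))   # csv.reader consumes the handle but never closes it
print(rows)        # [['a', 'b'], ['1', '2']]
print(fh.closed)   # False -- the caller still owns the handle
fh.close()
```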
But what about the resource warnings in our tests? I presume we would then just do an assert_produces_warning check?
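One way such a check could look, written as a hedged pytest-style sketch rather than pandas' actual test code (the test name and structure are illustrative): capture warnings, force a collection so unclosed handles surface, and assert that no ResourceWarning escaped.

```python
import gc
import warnings

import pandas as pd


def test_read_csv_leaves_user_handle_open(tmp_path):
    path = tmp_path / 'data.csv'
    path.write_text('a,b\n1,2\n')

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        fh = open(path)
        df = pd.read_csv(fh)
        assert not fh.closed   # the caller's handle should still be usable
        fh.close()             # the test owns this handle, so it closes it
        del fh, df
        gc.collect()           # genuinely unclosed files surface as ResourceWarning here

    assert not any(issubclass(w.category, ResourceWarning) for w in caught)
```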
This still seems to be happening. Script to reproduce:

```python
import sys
import pandas
from io import BytesIO, StringIO

print(f'Python version: {sys.version}')
print(f'pandas version: {pandas.__version__}')

string_io = StringIO('a,b\n1,2')
bytes_io_1 = BytesIO(b'a,b\n1,2')
bytes_io_2 = BytesIO(b'a,b\n1,2')

pandas.read_csv(string_io)
print(f'Was StringIO closed? - {string_io.closed}')

pandas.read_csv(bytes_io_1)
print(f'Was BytesIO closed when encoding is NOT passed? - {bytes_io_1.closed}')

pandas.read_csv(bytes_io_2, encoding='utf-8')
print(f'Was BytesIO closed when encoding is passed? - {bytes_io_2.closed}')
```

which prints whether each handle ended up closed.
@tpllaha this is a very old issue. Can you open a new issue with your reproducible example?
@jorisvandenbossche done in #36980 |
I believe the "regression" was introduced on this line. That being said, tracking which filehandles a library owns vs what a user provided is hard, and I can't fault you guys if this is considered correct behavior from now on. Just wanted to bring it to your attention. Thanks!
Output of pd.show_versions():
commit: None
python: 3.4.4.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-96-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.19.0
nose: 1.3.7
pip: 8.1.2
setuptools: 20.1.1
Cython: None
numpy: 1.11.2
scipy: 0.18.1
statsmodels: None
xarray: None
IPython: 5.1.0
sphinx: 1.5a2
patsy: None
dateutil: 2.5.3
pytz: 2016.4
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: 1.5.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.8
boto: None
pandas_datareader: None