Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import pandas as pd
from io import StringIO

# With a single row of data, raises a ParserError:
pd.read_csv(StringIO("1,2\n"), names=['a', 'b', 'c'])

# With two rows of data, does NOT raise a ParserError ("c" column values are NaN):
pd.read_csv(StringIO("1,2\n4,5\n"), names=['a', 'b', 'c'])
Issue Description
In pandas 1.3.4, the single-row example above parses without issue into a one-row dataframe, with the value in column c as NaN.
In pandas 1.5.1, the same call raises ParserError: Too many columns specified: expected 3 and found 2, which is an understandable exception. However, also in pandas 1.5.1, the two-row example does not raise a ParserError and instead fills every value in column c with NaN (as pandas 1.3.4 did).
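To make the inconsistency concrete, here is a small script (not part of the original report) with the behaviour observed on 1.5.1 sketched in comments:

import pandas as pd
from io import StringIO

names = ['a', 'b', 'c']

# One data row: 1.5.1 raises; 1.3.4 returned a single row with c = NaN.
try:
    print(pd.read_csv(StringIO("1,2\n"), names=names))
except pd.errors.ParserError as exc:
    print(f"ParserError: {exc}")

# Two data rows: parses on both versions, column c is all NaN.
print(pd.read_csv(StringIO("1,2\n4,5\n"), names=names))
#    a  b   c
# 0  1  2 NaN
# 1  4  5 NaN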
Expected Behavior
read_csv should behave consistently when the length of names exceeds the number of parsed columns, regardless of the number of rows being parsed: either treat the missing columns as NaN/null consistently (like in pandas 1.3.4), or raise a ParserError consistently.
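In the meantime, a possible workaround (a minimal sketch, not a pandas API; the helper name here is made up for illustration) is to read without names and reindex afterwards, which gives the 1.3.4-style NaN padding for any row count:

import pandas as pd
from io import StringIO

def read_with_padded_columns(buf, names):
    # Let read_csv infer the column count, then pad out to the full
    # name list so missing trailing columns become NaN.
    df = pd.read_csv(buf, header=None)
    df.columns = names[: df.shape[1]]
    return df.reindex(columns=names)

print(read_with_padded_columns(StringIO("1,2\n"), names=['a', 'b', 'c']))
print(read_with_padded_columns(StringIO("1,2\n4,5\n"), names=['a', 'b', 'c']))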
Installed Versions
INSTALLED VERSIONS
commit : 91111fd
python : 3.10.6.final.0
python-bits : 64
OS : Darwin
OS-release : 21.6.0
Version : Darwin Kernel Version 21.6.0: Mon Aug 22 20:17:10 PDT 2022; root:xnu-8020.140.49~2/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.5.1
numpy : 1.23.4
pytz : 2020.5
dateutil : 2.8.2
setuptools : 59.8.0
pip : 22.3
Cython : None
pytest : 7.1.3
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.9.3
jinja2 : 3.1.2
IPython : 8.5.0
pandas_datareader: None
bs4 : 4.11.1
bottleneck : 1.3.5
brotli :
fastparquet : 0.8.3
fsspec : 2022.10.0
gcsfs : None
matplotlib : 3.5.1
numba : 0.56.3
numexpr : 2.8.3
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.9.1
snappy : None
sqlalchemy : 1.4.32
tables : None
tabulate : 0.9.0
xarray : 2022.10.0
xlrd : None
xlwt : None
zstandard : None
tzdata : None