Pandas Version: 0.10.0.dev-e80d24e
I get the following error when reading a CSV file that was previously read correctly by the old Pandas 0.09 read_csv() code:
CParserError: Error tokenizing data. C error: Expected 41 fields in line 7123, saw 40
It looks like the tokenizer fails when it encounters a line with missing data at the end, if there are no trailing commas for the last missing column(s):
A,B,C,D
1,2,3,4 <-OK
1,3,3, <-OK
1,4,5 <-Fail :(
I know this is not clean CSV, but supporting it would save some headaches when reading files, especially since some CSV writers do not add the trailing commas.
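For reference, a minimal sketch of the reported case: in current pandas releases this file parses without error, and rows that are shorter than the header are padded with NaN (the `data` string below is reconstructed from the example above, not from the original file):

```python
import io

import pandas as pd

# The reporter's example: the last row omits its final field entirely
# instead of ending with a trailing comma.
data = "A,B,C,D\n" \
       "1,2,3,4\n" \
       "1,3,3,\n" \
       "1,4,5\n"

df = pd.read_csv(io.StringIO(data))
print(df)
# Both short rows get NaN in column D; no tokenizer error is raised.
```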
Thanks,
-Gagi
import numpy as np
import pandas
from io import StringIO

data = ('a:b\n'
        'c:d:e\n'
        'f:g:h:i\n'
        'j:k\n'
        'l:m:n\n')

# If the first row is shorter than the longest line, this errors out:
print(pandas.read_csv(StringIO(data), header=None, sep=':'))

# ...even if you specify names:
print(pandas.read_csv(StringIO(data), header=None, sep=':',
                      names=list(np.arange(4))))
It would be really awesome if there was a way to force the parser to expect a fixed number of columns.
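In current pandas this is effectively what passing an explicit `names` list does: a `names` list wide enough for the longest row fixes the column count up front, and shorter rows are padded with NaN. A sketch using the ragged colon-separated data above (the `c0`..`c3` column names are arbitrary labels chosen for illustration):

```python
import io

import pandas as pd

data = ("a:b\n"
        "c:d:e\n"
        "f:g:h:i\n"
        "j:k\n"
        "l:m:n\n")

# Supplying names for four columns forces the parser to expect that
# width; rows with fewer fields are filled with NaN on the right.
df = pd.read_csv(io.StringIO(data), sep=":", header=None,
                 names=["c0", "c1", "c2", "c3"])
print(df)
```

Rows *longer* than the supplied `names` list would still raise a parser error, so the list must cover the widest line in the file.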