BUG: parsing dates for multiple columns in read_csv() doesn't return DatetimeIndex if values are missing #56401

Closed
mosc9575 opened this issue Dec 8, 2023 · 2 comments
Labels: Bug, Needs Triage

Comments

@mosc9575
Contributor

mosc9575 commented Dec 8, 2023

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

Example with two date columns, producing the unexpected result.

from io import StringIO

import pandas as pd

data = '''day,time,value
2020-01-01,14:00:00+01:00,1
2020-01-01,14:15:00+01:00,1
2020-01-01,14:30:00+01:00,1
2020-01-01,14:45:00+01:00,1
2020-01-01,15:00:00+01:00,1
,,
'''

df = pd.read_csv(
    StringIO(data),
    sep=',',
    parse_dates=[[0,1]],
    index_col=0
)
print(df)
>>>                          value
day_time                        
2020-01-01 14:00:00+01:00    1.0
2020-01-01 14:15:00+01:00    1.0
2020-01-01 14:30:00+01:00    1.0
2020-01-01 14:45:00+01:00    1.0
2020-01-01 15:00:00+01:00    1.0
nan nan                      NaN

print(df.index)
>>> Index(['2020-01-01 14:00:00+01:00', '2020-01-01 14:15:00+01:00',
       '2020-01-01 14:30:00+01:00', '2020-01-01 14:45:00+01:00',
       '2020-01-01 15:00:00+01:00', 'nan nan'],
      dtype='object', name='day_time')

Example with a single date column, producing the expected result.

data = '''day_time,value
2020-01-01 14:00:00+01:00,1
2020-01-01 14:15:00+01:00,1
2020-01-01 14:30:00+01:00,1
2020-01-01 14:45:00+01:00,1
2020-01-01 15:00:00+01:00,1
,
'''

df = pd.read_csv(
    StringIO(data),
    sep=',',
    parse_dates=[0],
    index_col=0
)
print(df)

>>>                          value
day_time                        
2020-01-01 14:00:00+01:00    1.0
2020-01-01 14:15:00+01:00    1.0
2020-01-01 14:30:00+01:00    1.0
2020-01-01 14:45:00+01:00    1.0
2020-01-01 15:00:00+01:00    1.0
NaT                          NaN

print(df.index)
>>> DatetimeIndex(['2020-01-01 14:00:00+01:00', '2020-01-01 14:15:00+01:00',
               '2020-01-01 14:30:00+01:00', '2020-01-01 14:45:00+01:00',
               '2020-01-01 15:00:00+01:00',                       'NaT'],
              dtype='datetime64[ns, UTC+01:00]', name='day_time', freq=None)

Issue Description

I expect the DataFrame returned by pd.read_csv() to have a pd.DatetimeIndex when I parse dates and the date column contains missing values; those missing entries should become NaT.

Expected Behavior

What I see is that this expectation is met if I parse only one column.

If I parse more than one column, the missing values are not returned as NaT, and the resulting index is of type Index rather than DatetimeIndex.

Installed Versions

INSTALLED VERSIONS
------------------
commit : 2a953cf
python : 3.10.9.final.0
python-bits : 64
OS : Linux
OS-release : 4.19.0-21-amd64
Version : #1 SMP Debian 4.19.249-2 (2022-06-30)
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 2.1.3
numpy : 1.26.2
pytz : 2022.7.1
dateutil : 2.8.2
setuptools : 67.6.0
pip : 23.0.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.3
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.11.0
pandas_datareader : None
bs4 : 4.11.2
bottleneck : None
dataframe-api-compat: None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.11.4
sqlalchemy : 2.0.7
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : 0.19.0
tzdata : 2023.3
qtpy : None
pyqt5 : None

mosc9575 added the Bug and Needs Triage labels on Dec 8, 2023
@charizard-knows

charizard-knows commented Dec 29, 2023

The internal call to pd.to_datetime is made with errors='ignore'. When it received 'nan nan' (note that this is the literal string 'nan nan'), it found that the value did not match the format of the other dates, "%Y-%m-%d %H:%M:%S%z", so it gave up parsing and left everything as strings. You can confirm this by checking type(df.index[0]).
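
A minimal illustration of that behaviour, calling pd.to_datetime directly rather than through the parser (errors='ignore' is still accepted on pandas 2.1.x):

import pandas as pd

s = pd.Series(['2020-01-01 14:00:00+01:00', 'nan nan'])

# With errors='ignore', one unparseable value makes to_datetime return the
# input unchanged, so the dtype stays object.
print(pd.to_datetime(s, errors='ignore').dtype)

# With errors='coerce', the unparseable value becomes NaT and the rest is
# parsed as expected.
print(pd.to_datetime(s, errors='coerce'))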

The issue does not occur if you build the datetimes yourself after reading the CSV without parse_dates, e.g. pd.to_datetime(df['day'] + ' ' + df['time'], errors='ignore'), because there pandas knows that NaN + NaN = NaN.

The root cause is that when parse_dates is given a list of columns to combine, pandas converts those columns to strings, which coerces NaN to the string 'nan'.
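
A tiny illustration of that coercion (not the parser code itself, just the string conversion being described):

# Converting two missing values to strings and joining them yields the
# literal 'nan nan', which no date format will ever match.
day = time = float('nan')
print(f'{day} {time}')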

The right way would be to fix the parser code that coerces the two columns to strings. I have pinned down the call strs = parsing.concat_date_cols(date_cols) in .venv\Lib\site-packages\pandas\io\parsers\base_parser.py as the place where two columns containing NaN come back as a single column containing 'nan nan'. The parsing module ships as a compiled extension (pandas\_libs\parsers.cp311-win_amd64.pyd in my environment), so I could not inspect its source in that form.

A simple fix would be to change the internal call result = tools.to_datetime in the same base_parser.py to use errors='coerce'.

The workaround is to call pd.to_datetime yourself after the DataFrame is created.
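
For example, a sketch of that workaround using the data from the report:

from io import StringIO

import pandas as pd

data = '''day,time,value
2020-01-01,14:00:00+01:00,1
2020-01-01,14:15:00+01:00,1
,,
'''

# Read both date columns as plain strings, then combine and parse them
# manually; errors='coerce' turns the incomplete row into NaT.
df = pd.read_csv(StringIO(data), sep=',')
df.index = pd.to_datetime(df['day'] + ' ' + df['time'], errors='coerce')
df = df.drop(columns=['day', 'time'])
print(df.index)  # DatetimeIndex with NaT for the missing row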

If any maintainer can comment on this, I'd be happy to coordinate and create a pull request for it.

@mroeschke
Member

Thanks for the report, but the functionality of specifying multiple columns to combine into a date was deprecated and will be removed in pandas 3.0 (#56569), so going to close as won't fix.
