
BUG: datetime total_seconds inaccurate #35158


Closed
2 of 3 tasks
frd-glovo opened this issue Jul 7, 2020 · 5 comments
Labels: Duplicate Report

Comments

@frd-glovo

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • (optional) I have confirmed this bug exists on the master branch of pandas.


Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.

Code Sample, a copy-pastable example

import pandas as pd

df = pd.DataFrame({
    'x': ['2020-07-07T10:33:14.971260986', '2020-07-07T10:17:14.971260986',
          '2020-07-07T10:11:14.971260986', '2020-07-07T09:59:13.971260986'],
    'y': ['2020-07-07T10:35:14.971260986', '2020-07-07T10:24:14.971260986',
          '2020-07-07T10:15:14.971260986', '2020-07-07T10:04:14.971260986'],
})

df.x = pd.to_datetime(df.x)
df.y = pd.to_datetime(df.y)

print((df.y - df.x).dt.total_seconds()[0])                  # 120.00000000000001
print((df.y - df.x).apply(lambda x: x.total_seconds())[0])  # 120.0

Problem description

There seems to be a loss of precision when subtracting datetime columns and calling the total_seconds method on the result.

As a reference, using apply with the standard library's total_seconds yields the correct value.
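For context, the difference is reproducible with plain float arithmetic. The sketch below assumes the vectorized total_seconds scales the underlying int64 nanoseconds by 1e-9 in float64 (I have not checked the internals), while the standard library's timedelta.total_seconds() divides integer microseconds by 10**6:

from datetime import timedelta

ns = 120_000_000_000  # the 2-minute difference as an integer number of nanoseconds

# Assumed vectorized path: multiply the nanosecond count by 1e-9 in float64.
# 1e-9 has no exact float64 representation, so the product lands one ulp above 120.
print(ns * 1e-9)  # 120.00000000000001

# Stdlib path: integer microseconds divided by 10**6, which is exact for these values.
print(timedelta(microseconds=ns // 1000).total_seconds())  # 120.0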

Expected Output

Value should be 120.0

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.7.5.final.0
python-bits : 64
OS : Darwin
OS-release : 19.0.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8

pandas : 1.0.5
numpy : 1.17.4
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 42.0.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.14.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.11
tables : None
tabulate : 0.8.6
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None

frd-glovo added the Bug and Needs Triage labels on Jul 7, 2020
frd-glovo changed the title from "BUG: datetime total_seconds not precise" to "BUG: datetime total_seconds inaccurate" on Jul 7, 2020
@gimseng
Contributor

gimseng commented Jul 7, 2020

I can confirm I get the same issue: 120.00000000000001 vs 120.0.

Output of pd.show_versions()


INSTALLED VERSIONS

commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.19.112+
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : C.UTF-8
LANG : C.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.0.3
numpy : 1.18.5
pytz : 2019.3
dateutil : 2.8.1
pip : 20.1.1
setuptools : 46.1.3.post20200325
Cython : 0.29.20
pytest : 5.4.1
hypothesis : 5.10.0
sphinx : 3.0.2
blosc : None
feather : 0.4.1
xlsxwriter : 1.2.8
lxml.etree : 4.5.0
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.13.0
pandas_datareader: 0.8.1
bs4 : 4.9.0
bottleneck : 1.3.2
fastparquet : None
gcsfs : 0.6.1
lxml.etree : 4.5.0
matplotlib : 3.2.1
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : 0.16.0
pytables : None
pytest : 5.4.1
pyxlsb : None
s3fs : 0.4.2
scipy : 1.4.1
sqlalchemy : 1.3.16
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.15.1
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : 1.2.8
numba : 0.48.0


@gimseng
Contributor

gimseng commented Jul 7, 2020

I think a float64 in Python stores around 15 significant decimal digits. I counted the digits in the result, and the discrepancy shows up right around there, so I think it's an artifact of float64.

As to why the apply version gives the rounded value, I have no idea.
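For reference, a quick way to see the "about 15 digits" bound and that the reported discrepancy is exactly one float64 step (ulp) at this magnitude; np.spacing is used here because math.ulp only exists from Python 3.9:

import sys
import numpy as np

print(sys.float_info.mant_dig)  # 53 mantissa bits in a float64
print(sys.float_info.dig)       # 15 decimal digits guaranteed to survive a round-trip
print(sys.float_info.epsilon)   # 2.220446049250313e-16

# The observed discrepancy is exactly one float64 spacing around 120.0
print(np.spacing(120.0))             # 1.4210854715202004e-14
print(120.00000000000001 - 120.0)    # 1.4210854715202004e-14

As far as I can tell, the apply path comes out exact because CPython's timedelta.total_seconds() divides the integer microsecond count by 10**6, and 120000000 / 10**6 has no rounding error in float64.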

@gimseng
Contributor

gimseng commented Jul 9, 2020

After some searching, I realized this is a duplicate of #34290, so maybe close this one.

@frd-glovo
Author

Apologies, I didn't find that issue. Yes, let's close this.

@simonjayhawkins
Member

> Apologies, I didn't find that issue. Yes, let's close this.

closing as duplicate of #34290

simonjayhawkins added the Duplicate Report label and removed the Bug and Needs Triage labels on Jul 9, 2020