BUG: numpy>1.24 triggers RuntimeWarning when converting NaN via pd.to_datetime() #50702


Closed
2 of 3 tasks
fpavogt opened this issue Jan 12, 2023 · 9 comments
Labels
Bug, Datetime (Datetime data dtype)

Comments

@fpavogt

fpavogt commented Jan 12, 2023

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import numpy as np
import pandas as pd


pdf = pd.DataFrame([1., 2., np.nan])
# The next line triggers a RuntimeWarning
pd.to_timedelta(pdf[0], unit='s')

Issue Description

A change in numpy>1.24 (casting NaN to an integer dtype now emits a RuntimeWarning) is triggering a RuntimeWarning when trying to convert np.nan to a pandas datetime object.

The cause of the problem (numpy change) is similar to #50681, but the fix on the pandas side is distinct. Hence this new issue.
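
For context, a minimal standalone sketch (not part of the original report, and assuming numpy >= 1.24) of the underlying numpy behaviour that surfaces through pd.to_timedelta:

import warnings

import numpy as np

data = np.array([1.0, 2.0, np.nan])

# With numpy >= 1.24, casting NaN floats to an integer dtype emits
# "RuntimeWarning: invalid value encountered in cast"; older versions cast silently.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    data.astype(np.int64)

print([str(w.message) for w in caught])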

Expected Behavior

No warning should be issued. I believe this could be fixed by using .round(0) instead of .astype(np.int64) in the relevant part of the code (lines 902-916 in core/arrays/timedeltas.py), unless this would lead to precision issues, as warned in the comments?

Edit: The .view() call leads to floating point errors at times (with my proposed code change). Using .astype() seems to work better ... but more thought is likely needed to find a solid solution.

    elif is_float_dtype(data.dtype):
        # cast the unit, multiply base/frac separately
        # to avoid precision issues from float -> int
        mask = np.isnan(data)
        # The next few lines are effectively a vectorized 'cast_from_unit'
        m, p = precision_from_unit(unit or "ns")
        #base = data.astype(np.int64) # original
        base = data.round(0)  # My suggestion  
        frac = data - base
        if p:
            frac = np.round(frac, p)
        #data = (base * m + (frac * m).astype(np.int64)).view("timedelta64[ns]")  # original
        data = (base * m + (frac * m).round(0)).astype("timedelta64[ns]")  # My suggestion
        data[mask] = iNaT
        copy = False

Installed Versions

INSTALLED VERSIONS

commit : 8dab54d
python : 3.10.6.final.0
python-bits : 64
OS : Darwin
OS-release : 22.1.0
Version : Darwin Kernel Version 22.1.0: Sun Oct 9 20:14:54 PDT 2022; root:xnu-8792.41.9~2/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8

pandas : 1.5.2
numpy : 1.24.1
pytz : 2022.7
dateutil : 2.8.2
setuptools : 65.6.3
pip : 22.3.1
Cython : None
pytest : 7.2.0
hypothesis : None
sphinx : 5.3.0
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.6.0
pandas_datareader: None
bs4 : None
bottleneck : None
brotli :
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.6.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.9.3
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None

@fpavogt fpavogt added the Bug and Needs Triage (Issue that has not been reviewed by a pandas team member) labels Jan 12, 2023
@MarcoGorelli
Member

thanks for the report @fpavogt

@MarcoGorelli MarcoGorelli added the Datetime (Datetime data dtype) label and removed the Needs Triage (Issue that has not been reviewed by a pandas team member) label Jan 12, 2023
@MarcoGorelli MarcoGorelli added this to the 2.0 milestone Jan 12, 2023
@mroeschke mroeschke modified the milestones: 2.0, 3.0 Feb 8, 2023
@fpavogt
Author

fpavogt commented Feb 14, 2023

Just noticed the push to milestone 3.0 ... I guess this is driven by the fact that 2.0 is overdue, and there is no formal work on this issue just yet? Since this issue is making a mess of my code, I might be able to find the time to submit a PR for it (rather than patch it hastily on my end). But I'll admit that the possibility of floating point errors dampens my enthusiasm a bit for having this be my first contribution to pandas ...

pandas gurus: is there any future in the work-around suggested above?

@MarcoGorelli
Member

Hey,

I don't think anyone has worked on this yet, and it's not a blocker for the 2.0 release. This doesn't mean you'll need to wait until 3.0; if someone suggests a fix, it could land in 2.1 / 2.2 / etc...

Pinging @jbrockmendel, but also @rebecca-palmer, who's done some seriously careful and high-quality work in this space.

@rebecca-palmer
Contributor

I think this was recently fixed in #51006 ... with warnings.catch_warnings(), which we might not want for a different reason: it's not thread-safe.

It might be better to use with np.errstate(all='ignore'), but I don't know whether that's thread-safe or not.

I think @fpavogt 's .round(0) version would work, possibly with less rounding error since it rounds instead of truncating, but I'm not sure of that.

data = data.copy(); data[mask] = 0 (before the base/frac split) probably works (because we fill in data[mask] = iNaT afterwards), but with a speed penalty.
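
A rough standalone sketch of the two alternatives mentioned above (not pandas source; it assumes numpy >= 1.24, unit seconds, and uses numpy's int64 minimum as the iNaT sentinel):

import numpy as np

data = np.array([1.0, 2.0, np.nan])
m = 10**9  # nanoseconds per second
iNaT = np.iinfo(np.int64).min  # pandas' sentinel value for NaT

# Option 1: silence the floating point error state around the offending cast.
with np.errstate(all="ignore"):
    base = data.astype(np.int64)

# Option 2: zero out the NaNs before casting, then restore them as iNaT afterwards
# (the real code would still do the base/frac split; it is collapsed here for brevity).
mask = np.isnan(data)
filled = data.copy()
filled[mask] = 0
result = (filled * m).astype(np.int64)
result[mask] = iNaT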

@fpavogt
Author

fpavogt commented Feb 23, 2023

Is there a way to easily test for rounding errors that is platform independent? I suppose I could find an example on my machine, but would it really make a useful addition to the test suite?

As for warnings.catch_warnings(), I can't imagine this would make sense in the long term, since this numpy change is here to stay, right? Burying this behind a catch_warnings feels a bit like sweeping the dust under the carpet to me ... ;)

@jbrockmendel
Member

I suppose I could find an example on my machine, but would it really make a useful addition to the test suite ?

If there's a bug on a particular platform I'd rather know about it than not, even if we don't have it in the CI. So go for it.

The .round(0) solution seems like a hack to me. Best guess is long-term we're going to need a vectorized version of cast_from_unit in the cython code.
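
For illustration only (not the actual pandas implementation, where the fix would live in the cython _libs), a vectorized cast_from_unit could look roughly like this in plain numpy:

import numpy as np

NS_PER_UNIT = {"s": 10**9, "ms": 10**6, "us": 10**3, "ns": 1}  # illustrative subset
iNaT = np.iinfo(np.int64).min  # pandas' NaT sentinel


def cast_from_unit_vectorized(values, unit="s"):
    # Convert a float array in the given unit to int64 nanoseconds, without
    # ever casting NaN to an integer (and hence without the RuntimeWarning).
    m = NS_PER_UNIT[unit]
    mask = np.isnan(values)
    filled = np.where(mask, 0.0, values)
    base = filled.astype(np.int64)  # whole units
    frac = filled - base  # fractional part, scaled separately to limit precision loss
    out = base * m + np.rint(frac * m).astype(np.int64)
    out[mask] = iNaT
    return out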

@fpavogt
Author

fpavogt commented Feb 28, 2023

Hey @jbrockmendel,

To avoid any misunderstandings down the line: I am not aware of any rounding bug on my platform at this point ... I merely meant I could try to find one, if it is of concern.

@mroeschke
Member

I am not seeing this on the main branch of pandas with numpy 1.25:

In [1]: import numpy as np
   ...: import pandas as pd
   ...: 
   ...: 
   ...: pdf = pd.DataFrame([1., 2., np.nan])
   ...: # The next line triggers a RuntimeWarning
   ...: pd.to_timedelta(pdf[0], unit='s')
Out[1]: 
0   0 days 00:00:01
1   0 days 00:00:02
2               NaT
Name: 0, dtype: timedelta64[ns]

In [2]: np.__version__
Out[2]: '1.25.1'

@mroeschke mroeschke removed this from the 3.0 milestone Jul 19, 2023
@fpavogt
Author

fpavogt commented Aug 11, 2023

I confirm that I am also no longer seeing this with pandas 2.0.3 and numpy 1.25.1.

This ticket can thus probably be closed, unless the questions raised by @rebecca-palmer regarding the use of warnings.catch_warnings() to fix the problem need to be kept visible ... but I suppose that would be best done in a dedicated issue?
