Inconsistent behaviour in Timestamp.round #22591

Closed
miccoli opened this issue Sep 4, 2018 · 9 comments · Fixed by #22802
Labels: Datetime

miccoli commented Sep 4, 2018

Code Sample

>>> import pandas as pd
>>> a = pd.to_datetime('2000-01-01 0:0:0.124999360')
>>> b = pd.to_datetime('2018-01-01 0:0:0.124999360')
>>> a.round('us')
Timestamp('2000-01-01 00:00:00.124999')
>>> b.round('us')
Timestamp('2018-01-01 00:00:00.125000')

Problem description

According to the standard definition of rounding to a multiple, b.round('us') in the example above gives the wrong answer.

I suspect that there is loss of precision due to a floating point conversion.

>>> round(a.value / 1e9, 6)
946684800.124999
>>> round(b.value / 1e9, 6)
1514764800.125

Expected Output

>>> b.round('us')
Timestamp('2018-01-01 00:00:00.124999')

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.7.0.final.0
python-bits: 64
OS: Darwin
OS-release: 17.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.23.4
pytest: None
pip: 18.0
setuptools: 39.0.1
Cython: None
numpy: 1.15.1
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

TomAugspurger commented Sep 4, 2018

This is in

def round_ns(values, rounder, freq):

if you want to take a look. There's a comment indicating that float precision has been a problem in the past.

TomAugspurger added the Datetime label Sep 4, 2018
miccoli commented Sep 4, 2018

By inspecting the code I see further problems. For example,

if unit < 1000:
    # for nano rounding, work with the last 6 digits separately
    # due to float precision
    buff = 1000000

assumes that unit is a divisor of 1_000_000.

Therefore

>>> a.round('27ns')
Timestamp('2000-01-01 00:00:00.124999351')

is wrong. In fact I would expect a.round('27ns').value % 27 == 0, and not merely
(a.round('27ns').value % 1_000_000) % 27 == 0 as per the current implementation.

Of course the semantics of a.round('27ns') are questionable, but the current implementation is at best arbitrary.

I don't know whether floating-point arithmetic is used because it is considered faster than integer arithmetic, but retaining full precision in all possible edge cases with floats seems quite complex.
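
To make the expected invariant concrete, here is a hypothetical check (not an existing test) run on 0.23.4:

>>> a.round('27ns').value % 27 == 0               # should hold when rounding to a multiple of 27 ns
False
>>> a.round('27ns').value % 1_000_000 % 27 == 0   # what the code actually guarantees
True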

miccoli commented Sep 5, 2018

I've just noticed that one costly bit of integer arithmetic is already present:

mask = values % unit == 0

In fact, once you compute values % unit, floor and ceil are trivial:

floor = lambda values, unit: values - ( values % unit)
ceil = lambda values, unit: values + (-values % unit)

plus some code for handling under/overflow.

Rounding is a little bit more complicated, mainly because we first need to define the rounding semantics exactly. If the maintainers confirm that the present behaviour is a bug (and not a feature), I can submit a PR.
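
For concreteness, here is a minimal int64-only sketch; round-half-up is chosen purely for illustration (the tie-breaking rule is exactly what needs to be decided) and overflow guards are omitted:

import numpy as np

def round_i64(values, unit):
    # exact integer remainder; numpy's % with a positive unit is always
    # non-negative, so this also behaves sensibly for dates before 1970
    r = values % unit
    floor = values - r
    # half-up tie-breaking, purely illustrative
    return np.where(2 * r >= unit, floor + unit, floor)

>>> round_i64(np.array([1514764800124999360], dtype='i8'), 1000)
array([1514764800124999000])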

miccoli commented Sep 5, 2018

The same problem exists for Timedelta:

def _round(self, freq, rounder):
    cdef:
        int64_t result, unit

    unit = to_offset(freq).nanos
    result = unit * rounder(self.value / float(unit))
    return Timedelta(result, unit='ns')

but of course precision loss is very unlikely in real-world applications.

miccoli commented Sep 5, 2018

This issue is exactly the same as #19206, only showing up in a different edge case. Taking into account that the current code also patches #21262, I would suggest an entirely new implementation based on int64 that drops floating point entirely.

A similar rounding bug due to floating point arithmetic is numpy/numpy#11881

miccoli commented Sep 9, 2018

I have a proof of concept implementation in miccoli/pandas@1adfd41

Before issuing a PR I would like to receive some comments on the following aspects:

  • pure int64 implementation (drops floating point and the recourse to np.round, np.floor, np.ceil)
  • tie-breaking rule for halfway numbers
    • I propose to break compatibility with the current floating-point based rule (round half to even) and to opt for rounding towards -∞
  • should time deltas also use the same int64 implementation?
  • general comments on the coding style

The current code seems badly broken to me, but everything concerning rounding is really subtle, so a review of the proposed new semantics is necessary.

jreback commented Sep 9, 2018

So does your solution break any tests?

It seems a lot simpler, though please don't change the actual rounding semantics; follow numpy exactly.

Whether you use numpy functions or not doesn't matter.

miccoli commented Sep 9, 2018

Yes, a few tests are broken, e.g.

_____________ TestTimestampUnaryOps.test_round_frequencies[20130104 12:00:00-D-20130105] _____________

self = 
timestamp = '20130104 12:00:00', freq = 'D', expected = Timestamp('2013-01-05 00:00:00')

    @pytest.mark.parametrize('timestamp, freq, expected', [
        ('20130101 09:10:11', 'D', '20130101'),
        ('20130101 19:10:11', 'D', '20130102'),
        ('20130201 12:00:00', 'D', '20130202'),
        ('20130104 12:00:00', 'D', '20130105'),
        ('2000-01-05 05:09:15.13', 'D', '2000-01-05 00:00:00'),
        ('2000-01-05 05:09:15.13', 'H', '2000-01-05 05:00:00'),
        ('2000-01-05 05:09:15.13', 'S', '2000-01-05 05:09:15')
    ])
    def test_round_frequencies(self, timestamp, freq, expected):
        dt = Timestamp(timestamp)
        result = dt.round(freq)
        expected = Timestamp(expected)
>       assert result == expected
E       AssertionError: assert Timestamp('2013-01-04 00:00:00') == Timestamp('2013-01-05 00:00:00')

pandas/tests/scalar/timestamp/test_unary_ops.py:35: AssertionError

The reason for dropping numpy's default rounding mode (which follows IEEE 754) is that IMHO "round half to even" does not make much sense in the Timestamp context.

  • Current implementation:
>>> pd.__version__
'0.23.4'
>>> pd.to_datetime('2018-01-01 12:00:00').round('1d')
Timestamp('2018-01-01 00:00:00')
>>> pd.to_datetime('2018-01-02 12:00:00').round('1d')
Timestamp('2018-01-03 00:00:00')
>>> pd.to_datetime('2018-01-03 12:00:00').round('1d')
Timestamp('2018-01-03 00:00:00')
>>> pd.to_datetime('2019-01-01 12:00:00').round('1d')
Timestamp('2019-01-02 00:00:00')
>>> pd.to_datetime('2019-01-02 12:00:00').round('1d')
Timestamp('2019-01-02 00:00:00')
>>> pd.to_datetime('2019-01-03 12:00:00').round('1d')
Timestamp('2019-01-04 00:00:00')
>>> d = pd.DatetimeIndex(start='2018-01-01 00:00:00', end='2018-12-31 23:59:59.999999999', freq='1h')
>>> (d.round('1d') - d).values.sum() / np.timedelta64(1, 'D')
-0.5
  • Proposed implementation:
>>> pd.__version__
'0.24.0.dev0+564.g1adfd4155'
>>> pd.to_datetime('2018-01-01 12:00:00').round('1d')
Timestamp('2018-01-01 00:00:00')
>>> pd.to_datetime('2018-01-02 12:00:00').round('1d')
Timestamp('2018-01-02 00:00:00')
>>> pd.to_datetime('2018-01-03 12:00:00').round('1d')
Timestamp('2018-01-03 00:00:00')
>>> pd.to_datetime('2019-01-01 12:00:00').round('1d')
Timestamp('2019-01-01 00:00:00')
>>> pd.to_datetime('2019-01-02 12:00:00').round('1d')
Timestamp('2019-01-02 00:00:00')
>>> pd.to_datetime('2019-01-03 12:00:00').round('1d')
Timestamp('2019-01-03 00:00:00')
>>> d = pd.DatetimeIndex(start='2018-01-01 00:00:00', end='2018-12-31 23:59:59.999999999', freq='1h')
>>> (d.round('1d') - d).values.sum() / np.timedelta64(1, 'D')
-182.5

Here you can see that noon is rounded up or down to the midnight of the same or the following day, depending on whether the day number (counted from the Unix epoch, 1970-01-01) is odd or even. By contrast, with my proposed implementation noon is always rounded down to midnight of the same day, which is more intuitive.
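
The parity effect can be checked directly (the day arithmetic below is my own illustration):

>>> NS_PER_DAY = 86_400_000_000_000
>>> pd.Timestamp('2018-01-01 12:00:00').value // NS_PER_DAY
17532
>>> pd.Timestamp('2018-01-02 12:00:00').value // NS_PER_DAY
17533

Day 17532 is even, so under round-half-even the tie at noon stays on 2018-01-01, while the odd day 17533 is pushed up to 2018-01-03.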

Of course, a rigorous assessment of the merits of both rounding modes would require an analysis of all the possible settings in which Timestamp.round will be used. I would tend to favour an intuitive and predictable result over a reduction of the bias toward -∞. It is also debatable whether, in this context, a bias towards even is preferable to a bias toward -∞; frankly, I do not have an answer here, and it is (for now) just a matter of taste.

I understand the need not to break compatibility with the previous behaviour, so I will also implement a RoundTo.NEAREST_HALF_EVEN mode, to make a more informed decision possible.

miccoli commented Sep 21, 2018

I have miccoli/pandas@0416b06 almost ready for a PR, implementing a rounding mode compatible with the current (flawed) implementation, which is based on floating-point rounding semantics and numpy.around.

There is just one point I'm unable to resolve: I'm using numpy.divmod, which was introduced in numpy 1.13.0, while the current pandas requirement is

min_numpy_ver = '1.9.0'

Can I keep the faster numpy.divmod, or should I stick to operations available in 1.9.0?
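
For reference, a 1.9-compatible fallback is straightforward, at the cost of a second pass over the data (my own sketch, not code from the branch):

import numpy as np

def divmod_compat(values, unit):
    # numpy >= 1.13: np.divmod computes quotient and remainder in one call
    if hasattr(np, 'divmod'):
        return np.divmod(values, unit)
    # older numpy: two separate operations give the same result
    return values // unit, values % unit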
