groupby with daily frequency fails with NonExistentTimeError on clock change day in Brazil #23742

Closed
pierremoulinier opened this issue Nov 16, 2018 · 8 comments · Fixed by #23756
Labels
Datetime (Datetime data dtype); Reshaping (Concat, Merge/Join, Stack/Unstack, Explode); Testing (pandas testing functions or related to the test suite)

Comments

@pierremoulinier

Code Sample

import pandas as pd
idx = pd.date_range(start='2017-10-10', end='2017-10-20', freq='1H')
idx = idx.tz_localize('UTC').tz_convert('America/Sao_Paulo')
df = pd.DataFrame(data=list(range(len(idx))), index=idx)
groups = df.groupby(pd.Grouper(freq='1D'))

Problem description

This code raises NonExistentTimeError

The root cause of the problem seems to be that pandas assumes that all days start at midnight local time, which is not the case in Sao Paulo on the short clock-change day: the local clock jumps from 23:59 on Oct 14 straight to 01:00 on Oct 15, so midnight does not exist. The grouper tries to build a tz-aware object for 2017-10-15 00:00, which is not a valid local time in Sao Paulo, and this results in an exception being raised.
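For illustration, the nonexistent local midnight can be reproduced on its own (a minimal standalone example, not part of the original report):

import pandas as pd
# Midnight on the clock-change day does not exist as a local wall-clock time
# in Sao Paulo, so localizing it raises by default.
pd.Timestamp('2017-10-15 00:00').tz_localize('America/Sao_Paulo')
# pytz.exceptions.NonExistentTimeError: 2017-10-15 00:00:00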

This is of course a problem when I want to aggregate daily values based on 'Sao Paulo days'.

A possible workaround is to write a custom grouper that builds the first timestamp of each day using helper functions such as pytz's normalize() or dateutil.tz.resolve_imaginary (see the sketch below).

However, using a custom Python function for grouping significantly degrades grouping performance.
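A minimal sketch of such a workaround, assuming pytz is used for the timezone handling (the helper name local_day_start is illustrative, not part of pandas):

import pandas as pd
import pytz

SP = pytz.timezone('America/Sao_Paulo')

def local_day_start(ts):
    # Naive local midnight for the timestamp's date.
    naive = ts.to_pydatetime().replace(hour=0, minute=0, second=0,
                                       microsecond=0, tzinfo=None)
    # localize() with is_dst=False still returns an aware object for times in
    # the DST gap; normalize() then shifts it onto a valid local time, so
    # midnight on 2017-10-15 becomes 01:00-02:00.
    return pd.Timestamp(SP.normalize(SP.localize(naive, is_dst=False)))

idx = pd.date_range(start='2017-10-10', end='2017-10-20', freq='1H')
idx = idx.tz_localize('UTC').tz_convert('America/Sao_Paulo')
df = pd.DataFrame(data=list(range(len(idx))), index=idx)

# Grouping with a Python callable works, but it is applied label by label and
# is noticeably slower than pd.Grouper(freq='1D').
groups = df.groupby(local_day_start)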

Expected Output

This should not raise an exception, it should return valid pandas group by objects. On the clock-change day, all timestamps from 2017-10-15 01:00 to 2017-10-15 23:59 local time should be grouped together and the timestamp associated with that group should be a tz-aware object representing 2017-10-15 01:00 local time.

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-17134-Microsoft
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.22.0
pytest: None
pip: None
setuptools: 39.0.1
Cython: None
numpy: 1.13.3
scipy: 0.19.1
pyarrow: None
xarray: None
IPython: 5.5.0
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: None
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

@pierremoulinier pierremoulinier changed the title groupby with daily frequency fails with NonExistentTimeError on clock change day in Brasil groupby with daily frequency fails with NonExistentTimeError on clock change day in Brazil Nov 16, 2018
@mroeschke
Member

I get the same error as you in the latest release, v0.23.4, but this does not raise on master (a lot of timezone handling was fixed recently), and the fix will be available in the upcoming v0.24.0 release.

In [3]: pd.__version__
Out[3]: '0.24.0.dev0+1028.gdb2066b7d'

In [4]: import pandas as pd
   ...: idx = pd.date_range(start='2017-10-10', end='2017-10-20', freq='1H')
   ...: idx = idx.tz_localize('UTC').tz_convert('America/Sao_Paulo')
   ...: df = pd.DataFrame(data=list(range(len(idx))), index=idx)
   ...: groups = df.groupby(pd.Grouper(freq='1D'))
   ...:

In [5]: groups
Out[5]: <pandas.core.groupby.generic.DataFrameGroupBy object at 0x117b940b8>

Canonically though, df.groupby(pd.Grouper(freq='1D')) is equivalent to df.resample('D')
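For example, with the df from the report above, both spellings build the same daily bins (a minimal sketch):

df.groupby(pd.Grouper(freq='1D')).mean()
df.resample('D').mean()  # same result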

@gfyoung gfyoung added the Datetime (Datetime data dtype) and Reshaping (Concat, Merge/Join, Stack/Unstack, Explode) labels Nov 16, 2018
@gfyoung
Member

gfyoung commented Nov 16, 2018

@mroeschke: Add a test to close this? Or is this sufficiently covered?

@gfyoung gfyoung added the Testing (pandas testing functions or related to the test suite) label Nov 16, 2018
@mroeschke
Member

I think this test covers a similar resample case, but I don't think we have a test exactly like this for groupby(Grouper(...)):

def test_resample_nonexistent_time_bin_edge(self):

@gfyoung
Member

gfyoung commented Nov 16, 2018

Hmm...might as well add a test specifically for this case then. Doesn't hurt 🙂

@pierremoulinier
Author

Sorry for the late reply; I only just had a look at the behavior in 0.24.0. Indeed it no longer raises, but I am not sure the new behavior is correct.

It looks like, if I call groups.aggregate(pd.np.mean), the timestamps in the resulting dataframe will be the following:

2017-10-10 00:00:00-03:00
2017-10-11 00:00:00-03:00
...
2017-10-14 00:00:00-03:00
2017-10-15 01:00:00-02:00
2017-10-16 01:00:00-02:00
2017-10-17 01:00:00-02:00
...

In other words all the days get 'shifted' by the clock change, and timestamps on or after the 16th of October no longer represent the start of the local day.

I guess this is correct if we consider that 1D actually means "24 hours"; however, this is not the behavior I was expecting when using localized timestamps in a timezone with DST (I would use a fixed UTC offset timezone such as "Etc/GMT+3" to achieve that result).
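A small sketch of the fixed-offset alternative mentioned above; with a constant UTC offset there is no DST transition, so every day really spans 24 hours (output abbreviated):

import pandas as pd
# Note: 'Etc/GMT+3' is a fixed UTC-03:00 offset (the sign is inverted in the
# Etc zones), so no clock change ever occurs.
pd.date_range(start='2017-10-14', periods=3, freq='D', tz='Etc/GMT+3')
# DatetimeIndex(['2017-10-14 00:00:00-03:00', '2017-10-15 00:00:00-03:00',
#                '2017-10-16 00:00:00-03:00'],
#               dtype='datetime64[ns, Etc/GMT+3]', freq='D')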

Note that pandas 0.23 does have the behavior I expect for New York timezone, for instance:

pd.date_range(start='2018-03-10', end='2018-03-13', freq='D', tz="America/New_York")

DatetimeIndex(['2018-03-10 00:00:00-05:00', '2018-03-11 00:00:00-05:00',
               '2018-03-12 00:00:00-04:00', '2018-03-13 00:00:00-04:00'],
              dtype='datetime64[ns, America/New_York]', freq='D')

In the example above, 1D is not a synonym for 24 hours, as there is only a 23 hour interval between the timestamp on the 11th and the one on the 12th.

Is the new behavior a conscious change or did this go unnoticed?

@mroeschke
Member

On current master, the 'CD' offset was introduced to preserve the "1 calendar day" behavior and 'D' was changed to always mean 24 hours; however, this is in the process of being reversed in #22867.

@pierremoulinier
Author

Hi Matthew, thanks for your reply above, this makes things clearer. I tried using 'CD' on 0.24 and got the same NonExistentTimeError I mentioned initially.

> date_range(start='2017-10-09', end='2017-10-20', freq='CD', tz="America/Sao_Paulo")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/github/pandas/pandas/core/indexes/datetimes.py", line 1521, in date_range
    closed=closed, **kwargs)
  File "/Users/github/pandas/pandas/core/indexes/datetimes.py", line 229, in __new__
    closed=closed, ambiguous=ambiguous)
  File "/Users/github/pandas/pandas/core/arrays/datetimes.py", line 310, in _generate_range
    tz, ambiguous=ambiguous)
  File "pandas/_libs/tslibs/conversion.pyx", line 1044, in pandas._libs.tslibs.conversion.tz_localize_to_utc
    raise pytz.NonExistentTimeError(stamp)
pytz.exceptions.NonExistentTimeError: 2017-10-15 00:00:00 

I suspect the original problem was not fixed in 0.24 but only hidden by the 'Day' vs 'Calendar Day' behavior change. I think the issue will reappear when this behavior change is reverted, unless it is addressed in PR #22867.

What's the best way to track this? Should we re-open this issue?

@mroeschke
Member

Thanks for the example. A nonexistent keyword argument was recently added to tz_localize that can help solve this problem:

date_range(start='2017-10-09', end='2017-10-20', freq='CD').tz_localize("America/Sao_Paulo", nonexistent='shift')

The tz keyword in date_range is similar to tz_localize, except that it lacks the error-handling keywords for ambiguous or nonexistent times and therefore makes no assumptions about how to handle them.
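For illustration, a minimal sketch of how the shifted daily edges could be used to group the original hourly data; the searchsorted binning here is just one possible approach, and newer pandas versions spell the keyword nonexistent='shift_forward':

import numpy as np
import pandas as pd

idx = pd.date_range(start='2017-10-10', end='2017-10-20', freq='1H')
idx = idx.tz_localize('UTC').tz_convert('America/Sao_Paulo')
df = pd.DataFrame(data=list(range(len(idx))), index=idx)

# Valid local-day edges; the nonexistent midnight of 2017-10-15 is shifted
# forward to 01:00-02:00.
edges = pd.date_range(start='2017-10-09', end='2017-10-20', freq='D')
edges = edges.tz_localize('America/Sao_Paulo', nonexistent='shift')

# Assign each hourly timestamp to the most recent day edge and group on that.
labels = edges[np.searchsorted(edges.asi8, df.index.asi8, side='right') - 1]
daily_mean = df.groupby(labels).mean()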
