test_time failure in CI logs #25875


Closed

WillAyd opened this issue Mar 26, 2019 · 7 comments
Labels: CI (Continuous Integration), Unreliable Test (Unit tests that occasionally fail)
Milestone: 0.25.0

Comments

WillAyd (Member) commented Mar 26, 2019

The following failure has shown up a couple of times in CI today. I haven't looked at it in detail, but I assume it's an unreliable test.

https://dev.azure.com/pandas-dev/pandas/_build/results?buildId=9836&view=logs&jobId=a69e7846-138e-5465-0656-921e8964615b&taskId=56da51de-fd5a-5466-5244-b5f65d252624&lineStart=42&lineEnd=95&colStart=1&colEnd=64

=================================== FAILURES ===================================
_____________________________ TestTSPlot.test_time _____________________________
[gw0] linux -- Python 3.6.6 /home/vsts/miniconda3/envs/pandas-dev/bin/python

self = <pandas.tests.plotting.test_datetimelike.TestTSPlot object at 0x7f2889f69978>

    @pytest.mark.slow
    def test_time(self):
        t = datetime(1, 1, 1, 3, 30, 0)
        deltas = np.random.randint(1, 20, 3).cumsum()
        ts = np.array([(t + timedelta(minutes=int(x))).time() for x in deltas])
        df = DataFrame({'a': np.random.randn(len(ts)),
                        'b': np.random.randn(len(ts))},
                       index=ts)
        fig, ax = self.plt.subplots()
        df.plot(ax=ax)
    
        # verify tick labels
        fig.canvas.draw()
        ticks = ax.get_xticks()
        labels = ax.get_xticklabels()
        for t, l in zip(ticks, labels):
            m, s = divmod(int(t), 60)
            h, m = divmod(m, 60)
            rs = l.get_text()
            if len(rs) > 0:
                if s != 0:
                    xp = time(h, m, s).strftime('%H:%M:%S')
                else:
                    xp = time(h, m, s).strftime('%H:%M')
                assert xp == rs
    
        # change xlim
        ax.set_xlim('1:30', '5:00')
    
        # check tick labels again
        fig.canvas.draw()
        ticks = ax.get_xticks()
        labels = ax.get_xticklabels()
        for t, l in zip(ticks, labels):
            m, s = divmod(int(t), 60)
            h, m = divmod(m, 60)
            rs = l.get_text()
            if len(rs) > 0:
                if s != 0:
                    xp = time(h, m, s).strftime('%H:%M:%S')
                else:
                    xp = time(h, m, s).strftime('%H:%M')
>               assert xp == rs
E               AssertionError: assert '01:06:40' == '03:40'
E                 - 01:06:40
E                 + 03:40
WillAyd added the CI (Continuous Integration) and Unreliable Test (Unit tests that occasionally fail) labels on Mar 26, 2019
gfyoung added this to the 0.25.0 milestone on Mar 26, 2019
gfyoung (Member) commented Mar 26, 2019

Yeah...it seems to fail on and off.

WillAyd (Member, Author) commented Mar 27, 2019

@gfyoung can you reproduce locally by chance? I tried a few times on macOS with no luck; curious if you fared any better.
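For reference, a minimal sketch of re-running just this test in a loop (the node ID is taken from the traceback above; everything else here is an assumption, not a command from this thread):

    import pytest

    # Hypothetical repro loop: re-run the single failing test node and stop
    # at the first failure. pytest.main returns 0 when the run passes.
    NODE = "pandas/tests/plotting/test_datetimelike.py::TestTSPlot::test_time"

    for attempt in range(1, 51):
        if pytest.main(["-q", NODE]) != 0:
            print(f"failed on attempt {attempt}")
            break
    else:
        print("no failure in 50 attempts")

(Calling pytest.main repeatedly in one process can hit import-caching quirks; an equivalent shell loop over the pytest CLI does the same job.)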

gfyoung (Member) commented Mar 27, 2019

Of course not. Why would flaky CI make our lives easy? 😂

gfyoung (Member) commented Mar 27, 2019

But actually, unless we can figure something out now, I think it's best to skip this test (for now) so that we can merge other PRs and return to it before the 0.25.0 release.
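A minimal sketch of what quarantining it might look like (the marker name and reason string are made up for illustration, not a patch from this thread):

    import pytest

    # Hypothetical quarantine marker: the test still runs and shows up in
    # reports as XFAIL/XPASS, but an occasional failure no longer breaks CI.
    flaky_gh25875 = pytest.mark.xfail(
        reason="flaky tick-label assertion after set_xlim; see GH#25875",
        strict=False,
    )

Applied as @flaky_gh25875 on TestTSPlot.test_time, this keeps the test visible in reports instead of silently dropping the coverage.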

WillAyd (Member, Author) commented Mar 27, 2019

Hmm, not sure about skipping the whole thing. I think we could at least salvage everything up until the xlim change and maybe skip everything thereafter for the time being.

Could also be explicit about the data to use rather than relying on random values.
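A minimal sketch of what pinning the data might look like (the fixed offsets and column values are placeholders, not a patch from this thread):

    import numpy as np
    from datetime import datetime, timedelta
    from pandas import DataFrame

    # Hypothetical deterministic setup: fixed minute offsets replace
    # np.random.randint, so every CI run plots exactly the same time index.
    t = datetime(1, 1, 1, 3, 30, 0)
    deltas = [5, 12, 19]  # placeholder offsets; any increasing values work
    ts = np.array([(t + timedelta(minutes=m)).time() for m in deltas])
    df = DataFrame({'a': np.arange(len(ts), dtype='float64'),
                    'b': np.arange(len(ts), dtype='float64')},
                   index=ts)

This would at least rule out the random deltas as the source of the flakiness; the tick-label comparison after set_xlim could still misbehave independently.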

gfyoung (Member) commented Mar 27, 2019

> I think we could at least salvage everything up until the xlim change and maybe skip everything thereafter for the time being.

That's fair.

> Could also be explicit about the data to use rather than relying on random values.

Possibly, could try that and see what happens.

mroeschke (Member) commented

Looks like this test is no longer xfailed (and possibly the CI has recovered?). We can reopen if we see it again.
