Use !shouldfail rather than !mayfail on unit tests #2256


Conversation

@thk123 (Contributor) commented May 30, 2018:

[!mayfail] runs the test and prints the output if it fails. [!shouldfail] runs the test and only prints the output if it unexpectedly passes. This makes it easier, when looking at build output, to determine which unit test has actually failed.
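The distinction between the two tags can be sketched as follows. This is a hypothetical model of the reporting behaviour, written in Python for illustration; `reports_failure` is an invented helper, not part of Catch:

```python
def reports_failure(tag: str, test_passed: bool) -> bool:
    """Return True when the test runner should flag this test as a failure
    in the build output (illustrative model, not Catch's implementation)."""
    if tag == "!mayfail":
        # [!mayfail]: the test never breaks the build, whether it
        # passes or fails.
        return False
    if tag == "!shouldfail":
        # [!shouldfail]: the test is expected to fail, so an
        # unexpected *pass* is what gets flagged.
        return test_passed
    # An untagged test breaks the build when it does not pass.
    return not test_passed

print(reports_failure("!mayfail", False))     # False
print(reports_failure("!shouldfail", False))  # False
print(reports_failure("!shouldfail", True))   # True
```

Under this model, a known-broken test marked [!shouldfail] stays silent while it fails, and only surfaces once someone fixes the underlying bug.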

@@ -7,7 +7,7 @@

 #ifdef SHARING

-SCENARIO("irept_sharing_trade_offs", "[!mayfail][core][utils][irept]")
+SCENARIO("irept_sharing_trade_offs", "[!shouldfail][core][utils][irept]")
Collaborator (review comment on the diff):

I'm not sure I agree with this one: it should actually not fail, it's just a currently known bug in the implementation... (the same is true of the other change in this file). I'd suggest @NathanJPhillips comment on this.

@thk123 (Author):

(Nathan is AL atm, though this isn't critical so can wait if you don't agree with below)

I personally think if it starts not failing, we should turn it on (i.e. remove the flag). This current flag allows it to fluctuate between working and not working without anyone ever noticing. By changing it to !shouldfail, if someone fixes it, the tests will fail and they will know to turn them on (and not accidentally write a bunch more!)

Collaborator:

You might want to take a quick look at #1980 for the related discussion. I'm fine with your approach.

@antlechner (Contributor):

I'm happy with this change (after finding it difficult to figure out which test was actually failing on CI), but could it also be documented that [!shouldfail] does not mean "this test should fail" at least in some cases? Either in a comment above every scenario line or somewhere central. Otherwise it might be hard to tell the difference at some point between known bugs and tests that we want to fail for some reason.

@thk123 (Author) commented May 30, 2018:

The Travis output still seems to be outputting failing unit tests in spite of the docs, so I'll need to look into why that is, as otherwise this serves no purpose 🙃

@peterschrammel (Member):

@thk123, why can't we negate the assertion logic to make the tests actually pass (unless they are known bugs)?

@thk123 (Author) commented May 30, 2018:

@peterschrammel Well I think they are known bugs, so when they're fixed we just want to turn them on (rather than having to flip all the failing conditions).

@owen-mc-diffblue (Contributor):

I think the name "shouldfail" is unfortunate (I prefer "xfail", short for "expected to fail", which is what pytest uses). Still, I think the way that @thk123 is using it is good: in the long term no tests should be marked [shouldfail]; it should be used as a temporary solution when a test is failing and it can't be fixed immediately.
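For comparison, pytest's xfail and Python's stdlib unittest.expectedFailure implement the same workflow: a known-broken test is silenced while it fails, but an unexpected pass is surfaced so the marker can be removed. A minimal sketch using the stdlib decorator (KnownBug and its test method are invented for illustration, not from this repository):

```python
import unittest

class KnownBug(unittest.TestCase):
    @unittest.expectedFailure
    def test_sharing_trade_off(self):
        # Stand-in for a known bug: this assertion currently fails, so the
        # run is still reported as successful. Once the bug is fixed, the
        # runner flags an "unexpected success", prompting removal of the
        # decorator.
        self.assertEqual(1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(KnownBug)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.expectedFailures))  # 1
print(result.wasSuccessful())        # True
```

The "unexpected success" signal is exactly the property argued for above: a fixed bug makes itself visible instead of silently fluctuating.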

@owen-mc-diffblue (Contributor) left a review:

Approved assuming it does what @thk123 originally thought it did in terms of reducing output and making it easier to see what is really failing.

@owen-mc-diffblue (Contributor):

@thk123 I cherry-picked this commit and I also see that it still prints the failed assertions for [!shouldfail] tests. I don't see anything in the documentation that says it shouldn't do that (though I wish it did).

I still think that it is worth merging this PR so that if a bug gets inadvertently fixed we notice.

@thk123 (Author) commented Jul 10, 2018:

@owen-jones-diffblue yup, you're right; I must have misread it. I agree this is still better, so I shall rebase and merge it (to ensure no shouldfail unit tests have been fixed in the meantime).

@owen-mc-diffblue (Contributor):

And I guess you should check there aren't any new mayfail tests.

@thk123 force-pushed the improvement/mark-failing-unit-tests-as-should-fail branch from 83d6c43 to e93bf20 on August 8, 2018
@thk123 (Author) commented Aug 8, 2018:

So the JBMC unit tests are failing. I note that both the AWS build and the make builds don't fail, which makes me think they are not running the JBMC unit tests. @peterschrammel, are you aware of this?

@antlechner (Contributor):

@thk123 @peterschrammel Has the AWS / make build / unit test issue been resolved?

@thk123 (Author) commented Mar 4, 2020:

Replaced by #5248

@thk123 closed this on Mar 4, 2020
@thk123 deleted the improvement/mark-failing-unit-tests-as-should-fail branch on March 4, 2020