Consider lowering `runTest` default timeout #3270
The reason for adding a timeout at all was to make sure that the CI doesn't hang when it's obvious that the test won't be passing. If someone runs a test manually and observes it hanging, they can just interrupt it. Note also that this is not a timeout for the whole test, it's only a timeout for nothing happening in dispatchers that can be observed by the test framework. We won't throw, for example, on something like this:

```kotlin
runTest {
    while (true) {
        delay(1000)
    }
}
```

So, this timeout is not an indication of the test taking too long, it's an indication of no observable activity, possibly indicating a deadlock. So, you're right that a whole minute is like an eternity, and this was the intention: if nothing observable happens a whole eternity later, the CI should fail. 10 seconds are also like an eternity, but only if we're CPU-bound. However, the things forked off to …
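For contrast with the loop above, here is a minimal sketch (not from the thread) of the kind of test the dispatch timeout *does* catch: the coroutine suspends and no further activity happens on any dispatcher the framework can observe. It assumes the pre-1.7.0 kotlinx-coroutines-test API discussed here.

```kotlin
import kotlin.test.Test
import kotlinx.coroutines.CompletableDeferred
import kotlinx.coroutines.test.runTest

class DeadlockedTest {
    @Test
    fun hangsForever() = runTest {
        // This deferred is never completed, so the test coroutine suspends
        // and no dispatcher activity is ever observed again. Once the
        // dispatch timeout elapses in real time, runTest fails the test.
        CompletableDeferred<Unit>().await()
    }
}
```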
I'm sympathetic to the idea that there exists someone writing a unit test which does >10 seconds of wall-clock work, but isn't that why the parameter exists? That outlier can specify a longer timeout, since they know they're doing a huge batch DB operation.
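For reference, a sketch of opting a single slow test out of the default. In the 1.6.x releases under discussion the parameter is `dispatchTimeoutMs`; 1.7.0 later replaced it with a `timeout: Duration` parameter. The helper function here is hypothetical, for illustration only.

```kotlin
import kotlin.test.Test
import kotlinx.coroutines.test.runTest

class SlowBatchTest {
    @Test
    fun hugeBatchDbOperation() = runTest(dispatchTimeoutMs = 120_000L) {
        // This one known outlier does real wall-clock work, so it
        // declares a generous timeout instead of relying on the default.
        runHugeBatchDbOperation() // hypothetical helper
    }

    private suspend fun runHugeBatchDbOperation() { /* ... */ }
}
```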
The worry was that this is an occasional outlier, which will fit in 10 seconds easily 99% of the time, but then will flake due to some connection being surprisingly slow. We expect 60 seconds to be enough for everyone. This cutoff is necessarily arbitrary, so this is not a hard stance, of course. A question, though: who would benefit from tests failing in 10 seconds instead of 60?
Now that I think about it, I was wrong about the "manually running a test batch" scenario: the test will fail and crash, which will be easy to notice. Is this the scenario that you think suffers due to a large timeout?
Manual testing is not only about running the test, but also about compilation, which may take a few tens of seconds (especially on a Gradle configuration or build cache miss). So I usually switch context and wait for the IDE notification with the test result. This has happened to me multiple times.
I generally prefer a lower timeout for our CI. When I inadvertently break something in a way that omits an emission, each test will hang until the timeout. Turbine used to have a 1-second timeout, but we removed it in the latest version in favor of the one in `runTest`.
These are all good points: if we treat the timeout in `runTest` as a general guard against tests hanging, a lower value makes sense. However, our timeout is for a very specific case: when there are no emissions for a long time. For example, it will not catch this typical bug:

```kotlin
runTest {
    var x = 0
    launch {
        delay(1000)
        x = 0 // bug: presumably meant to be non-zero, so the loop below never exits
    }
    while (x == 0) {
        delay(100)
    }
}
```

Since the test always does something (each `delay` resumes and advances the virtual clock), the timeout never fires. What worries me is that, if we make the timeout low, people may get the impression that it protects against hanging tests in general, which it does not.
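As an aside, one way to bound the buggy loop above is `withTimeout` from kotlinx.coroutines itself: inside `runTest` it counts virtual time, so it trips after 50 loop iterations even though the scheduler stays busy. A sketch, not from the thread:

```kotlin
import kotlin.test.Test
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.test.runTest
import kotlinx.coroutines.withTimeout

class BoundedTest {
    @Test
    fun loopIsBounded() = runTest {
        withTimeout(5_000L) { // 5 seconds of *virtual* time
            var x = 0
            launch {
                delay(1000)
                x = 0 // same bug as above: the loop never exits...
            }
            while (x == 0) {
                delay(100) // ...but each delay advances virtual time,
                           // so withTimeout fires at virtual t = 5000
            }
        }
    }
}
```

The test still fails, but with a `TimeoutCancellationException` within milliseconds of wall-clock time, instead of hanging until the dispatch timeout.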
Yeah, that makes sense. Given how …
I would be happy if I could configure it from a property or something similar. I'm moving from … And once I have that report, I need to go to each one, lower their timeout, and start working on them so that, once they are fixed, I can remove the timeout parameter, because it just adds verbosity to my code. I can understand that 60 seconds is a really safe default value. But I don't want that default value, because it slows down my development flow a lot. And I don't want to add the timeout value to each `runTest` call.
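One way to get a project-wide, property-configurable default without touching every call site is a small wrapper. This is a hypothetical helper, not an API of kotlinx-coroutines-test; the property name and fallback value are made up:

```kotlin
import kotlinx.coroutines.test.TestResult
import kotlinx.coroutines.test.TestScope
import kotlinx.coroutines.test.runTest

// Hypothetical: reads a JVM system property (set from Gradle via
// `systemProperty("test.dispatchTimeoutMs", "10000")` on the test task)
// and falls back to a project-chosen default when it is absent.
fun runProjectTest(testBody: suspend TestScope.() -> Unit): TestResult =
    runTest(
        dispatchTimeoutMs = System.getProperty("test.dispatchTimeoutMs")
            ?.toLongOrNull() ?: 10_000L,
        testBody = testBody,
    )
```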
@BraisGabin, maybe https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-debug/kotlinx.coroutines.debug.junit4/-coroutines-timeout/index.html would help?
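For JUnit 4 users, using the linked rule from the kotlinx-coroutines-debug module looks roughly like this; note that it measures wall-clock time and dumps the live coroutines on timeout. A sketch assuming the debug module is on the test classpath:

```kotlin
import kotlinx.coroutines.debug.junit4.CoroutinesTimeout
import kotlinx.coroutines.test.runTest
import org.junit.Rule
import org.junit.Test

class TimedTest {
    // Fails any test in this class that runs longer than 10 real seconds,
    // printing a dump of all live coroutines for diagnosis.
    @get:Rule
    val timeout = CoroutinesTimeout.seconds(10)

    @Test
    fun finishesQuickly() = runTest {
        // ...
    }
}
```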
@dkhalanskyjb, @BraisGabin, note that …
Will be fixed with 3b22c27.

The fix is available in 1.7.0-Beta.
Hello, probably quite late to the conversation, but 10 seconds is an extremely low value. We have a mid-size project consisting of 20+ Gradle modules. When Gradle parallelism is turned on, the CPU is 100% busy compiling and running tests (CPU threads * number of tasks). Our setup uses Koin and MockK.

When we run the tests of just a single module, bootstrapping (loading all the MockK classes) takes around 2 seconds for the first test. We have many modules with roughly 0-20 tests in each of them. When we run multiple tests across multiple modules, the bootstrap time extends to as much as 10-25 seconds! To answer the obvious question: the rest of the tests take under 500 ms. Note that the test report HTMLs are not in the order the tests ran, so the second test in the report is in reality the first test to run, and its 12 seconds include the bootstrapping (setting up the testing environment) under very heavy CPU load.

We observe the same problem on our local dev machines (Intel i9 12th gen, or Mac M2) as well as on the Azure CI environment. We recently updated to … and had to create a custom … I am interested in your view.

Examples: a simple test, with MockK and without; results with no CPU load vs. 100% CPU load (using a stress test from s-tui).
I've had the same issue pop up since this change occurred, but more flaky (i.e., it wouldn't happen on local dev machines, where the first test would only take maybe 5 seconds, but it would occasionally take longer than 10 seconds on CI). There seems to be an active discussion about the slowness of ByteBuddy installation here: mockk/mockk#13. I'm electing to perform my mock setup outside of the call to `runTest`.
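A sketch of that workaround, with hypothetical `Api`/`RepositoryTest` names: the point is that MockK's expensive one-time instrumentation is paid in `@Before`, before `runTest` starts its dispatch-timeout clock.

```kotlin
import io.mockk.coEvery
import io.mockk.mockk
import kotlinx.coroutines.test.runTest
import org.junit.Before
import org.junit.Test

// Hypothetical collaborator, for illustration only.
interface Api { suspend fun fetch(): String }

class RepositoryTest {
    private lateinit var api: Api

    @Before
    fun setUp() {
        // MockK's first mockk() call triggers slow one-time class
        // instrumentation; doing it here keeps that cost outside the
        // window that runTest's dispatch timeout measures.
        api = mockk()
        coEvery { api.fetch() } returns "ok"
    }

    @Test
    fun fetchSucceeds() = runTest {
        check(api.fetch() == "ok")
    }
}
```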
I created a follow-up ticket to solve the issues this brings, here: #3800
60 seconds is a really long time for nothing to happen. It's like an eternity for a CPU, especially one which is skipping delays. I would personally prefer something low like 2s (the default of JS's Mocha runner), but something like 10s seems a reasonable default that covers 99.9% of real-world usage without being a whole minute.
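To ground the "skipping delays" point: under `runTest`, `delay` advances a virtual clock instead of sleeping, so even an hour-long delay completes in milliseconds of wall-clock time. A minimal sketch, not from the thread:

```kotlin
import kotlin.test.Test
import kotlinx.coroutines.delay
import kotlinx.coroutines.test.currentTime
import kotlinx.coroutines.test.runTest

class VirtualTimeTest {
    @Test
    fun delaysAreSkipped() = runTest {
        delay(3_600_000L) // one hour of virtual time
        // The scheduler fast-forwarded; almost no real time elapsed.
        check(currentTime == 3_600_000L)
    }
}
```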