Run tests asynchronously #69
No. Running 2 tests on the same loop concurrently breaks the test isolation principle (*testA* can be broken by a side effect of *testB*, and asyncio and pytest-asyncio itself cannot detect the situation). |
Pytest-xdist perhaps can help here? Not sure it is compatible, but thought I would mention it.

On Sun, 24 Dec 2017 at 09:07, Andrew Svetlov <[email protected]> wrote:

> No. Running 2 tests on the same loop concurrently breaks the test isolation principle (*testA* can be broken by a side effect of *testB*, and asyncio and pytest-asyncio itself cannot detect the situation).
|
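For reference, pytest-xdist parallelises by spreading tests across worker processes rather than sharing one event loop, so each test still runs in isolation. A typical invocation, assuming the plugin is installed:

```console
$ pip install pytest-xdist
$ pytest -n auto  # one worker process per CPU core
```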
I think this would be a very useful feature. I often use […]. Is there any chance that such a feature could be added? |
I too would like a limited feature like this. In fact, I'll try to hack up an ugly demo and see how far this takes me... |
As a starting point, you could override `pytest_runtestloop` (this is pytest's default implementation):

```python
def pytest_runtestloop(session):
    if session.testsfailed and not session.config.option.continue_on_collection_errors:
        raise session.Interrupted("%d errors during collection" % session.testsfailed)

    if session.config.option.collectonly:
        return True

    for i, item in enumerate(session.items):
        nextitem = session.items[i + 1] if i + 1 < len(session.items) else None
        item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
        if session.shouldfail:
            raise session.Failed(session.shouldfail)
        if session.shouldstop:
            raise session.Interrupted(session.shouldstop)
    return True
```

You would separate the items into two lists: […] (see the sketch after this comment). This is just to give you an initial idea; I'm sure there are a lot more details you will encounter as you dig into this. 😁 |
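A minimal sketch of that idea, under heavy assumptions: the `concurrent` marker name is invented here, and the concurrent branch naively awaits the bare test coroutines on one loop, bypassing fixtures, parametrization, and reporting. It illustrates the partitioning, not a working plugin.

```python
import asyncio

def pytest_runtestloop(session):
    if session.config.option.collectonly:
        return True

    # Partition collected items; "concurrent" is an assumed marker name,
    # not something pytest or pytest-asyncio provide out of the box.
    concurrent = [i for i in session.items if i.get_closest_marker("concurrent")]
    serial = [i for i in session.items if not i.get_closest_marker("concurrent")]

    # Run the serial tests through the normal pytest protocol.
    for i, item in enumerate(serial):
        nextitem = serial[i + 1] if i + 1 < len(serial) else None
        item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)

    # Naively gather the bare coroutines of the concurrent tests on one
    # loop (assumes the test functions take no arguments).
    async def run_concurrent():
        await asyncio.gather(*(item.function() for item in concurrent))

    loop = asyncio.new_event_loop()
    try:
        loop.run_until_complete(run_concurrent())
    finally:
        loop.close()
    return True
```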
pytest-yield runs tests concurrently, though via generators rather than asyncio (it seems to be incompatible with asyncio, and it looks like the removal of yield tests breaks it). Notably, it doesn't try to address the isolation problem, so it's better suited to running a batch of integration tests. |
@Tinche Is there any feedback from your end on whether or not this could be something exposed by pytest-asyncio? I am asking because if one were to prototype a solution that would not be accepted on grounds of philosophy, then forking to something like pytest-asyncio-concurrent could be a viable strategy. It seems like a very useful feature, especially for longer-running tests. As the author of the tests, I don't mind ensuring isolation across tests by enforcing it in fixtures. |
Is there any progress on this? The side-effect argument doesn't convince me. One could add a `pure_function` marker that allows such tests to run in tandem. Also, tests really shouldn't have side-effects in the first place... The contract for most testing software doesn't say that the order of the tests will be respected. |
Wow, Elijah, tone it down :)

**Not so easy**

I was thinking of hacking something up, but I realised that my tests […]. Likewise, most tests share fixtures; there needs to be a way to specify that a given fixture is […]. This first problem is harder than it appears at first: I'm still considering tests with non-overlapping fixtures.

**Regarding side-effects**

I think there are two kinds of common tests. One is a unit test on a pure function: input is sent to the code under test and the output is examined. Such a test can be parallelised. Then again, the code under test is typically small and synchronous (or, if technically async, doesn't actually wait), so it would hardly benefit from concurrent async testing. The other is a functionality test: typically the global environment is set up, the code under test is called, and the global environment is examined afterwards. Such a test is built on side-effects, and such a test is slow if async (database, network, ...); this, I think, is where concurrent testing would shine.

**Edit: re: test execution order**

Let's not conflate test order with concurrency. Here's a simple example:

```python
from unittest.mock import patch

import pytest

@pytest.mark.parametrize("now", (42, 99))
async def test_request_start_time(now):
    with patch("time.time", return_value=now):
        assert (await create_request()).start_time == now
```

These two tests are order-independent, yet they are not safe to run concurrently: each patches the global `time.time`, so one test's patch would leak into the other. |
Sorry for the tone! I had not had my coffee yet when I posted. Will post less tone-y in the future -- I understand that you folks have thought a lot more about this than me :)

Thanks for the detailed response. You have a very good point about side-effects -- if you're testing for them specifically, then running in parallel might be a little messy. I can also see that order can be specified (https://stackoverflow.com/questions/17571438/test-case-execution-order-in-pytest). I was just making the assumption that most tests were pure unit tests (as you described), but that won't be the case for all users.

To be honest I'm not familiar with the mocking you're doing, but it seems to me that passing the burden of determining "function purity" to the user as much as possible is the best way to solve this. From an API perspective, one could easily imagine: […] That way you have fairly trivial backwards compatibility (it defaults to […]).

If you have a stateful service and you're looking at side-effects, I think it should only matter in the case in which the ordering of queries matters. So if that's not the case, then you can pass […].

Cheers, |
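As a hypothetical sketch of such an API (the `concurrent` argument below is invented for illustration; pytest-asyncio's marker accepts no such flag):

```python
import pytest

# Invented opt-in flag, defaulting to False for backwards compatibility.
# pytest-asyncio does not actually accept this argument.
@pytest.mark.asyncio(concurrent=True)
async def test_pure_computation():
    ...

@pytest.mark.asyncio  # default: runs serially, as today
async def test_stateful_service():
    ...
```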
I've got the same problem. So, is there any approach for this? |
HERE! This is what you wanted! Much simpler~ |
Use […]. After that, run several […]. Open-source example: https://github.com/micktwomey/pytest-circleci/blob/master/pytest_circleci/plugin.py |
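Roughly, the linked plugin works by having each worker process deselect all but its share of the collected tests, keyed off environment variables that CircleCI sets. A condensed sketch of that pattern:

```python
import os

def pytest_collection_modifyitems(session, config, items):
    # Round-robin split across workers: CIRCLE_NODE_INDEX and
    # CIRCLE_NODE_TOTAL are provided by CircleCI for each container.
    index = int(os.environ.get("CIRCLE_NODE_INDEX", 0))
    total = int(os.environ.get("CIRCLE_NODE_TOTAL", 1))

    deselected = [item for i, item in enumerate(items) if i % total != index]
    config.hook.pytest_deselected(items=deselected)
    items[:] = [item for i, item in enumerate(items) if i % total == index]
```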
For anyone who needs this, I got cooperative multitasking working here: […] |
Ah, I want this feature!!! |
This issue as proposed should be closed; it's simply not sensible to have concurrent SetupState in a pytest session. Having pytest itself be async-supportive is a different can of beans, and starts with making pluggy async-native (which would also be appreciated by projects like datasette). |
@RonnyPfannschmidt Thanks for getting involved :) Could you elaborate on your comment? What problems do you see with running tests concurrently in general? Could they not share a single SetupState instance? What's the connection between running async tests concurrently and making pluggy async-native? |
Async pluggy is needed to manage function coloring within pytest. Concurrent pytest would have to create multiple sessions that each maintain a distinct SetupState. Technically it wouldn't even matter whether threads or async tasks were used. |
Distinct sessions / collections are necessary, as SetupStates currently taint nodes (although it can kind of work in trivial cases). |
I see, thanks for the explanation. As far as I understand, it's pretty much impossible to run tests concurrently with the current way pluggy and pytest work. This is not something we can solve in pytest-asyncio. If someone would like to push this feature forward, please get in touch with https://github.com/pytest-dev/pluggy to discuss ways to make pluggy async-native. I'm closing this issue for now. As always, feel free to add to the discussion anyway if any new information pops up. |
Hi,
I'm not sure this is the purpose of this library, but I want to run pytest tests asynchronously.
Consider this example:
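(The original snippet is not preserved here; a suite like the following, with two pytest-asyncio tests that each sleep for two seconds, would be an assumption consistent with the timings below.)

```python
import asyncio

import pytest

@pytest.mark.asyncio
async def test_a():
    await asyncio.sleep(2)

@pytest.mark.asyncio
async def test_b():
    await asyncio.sleep(2)
```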
```
$ py.test -q
..
2 passed in 4.01 seconds
```
It would be nice to run the test suite in ~2 seconds instead of 4. Is it currently possible with pytest-asyncio or with another library? I guess we would need to `asyncio.gather()` all the async tests and run them in the same event loop. Thanks!
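For what it's worth, the `asyncio.gather()` idea can be demonstrated outside pytest: two coroutines that each sleep for two seconds finish together in about two seconds when awaited on the same loop. A minimal sketch:

```python
import asyncio
import time

async def task_a():
    await asyncio.sleep(2)

async def task_b():
    await asyncio.sleep(2)

async def main():
    # Both coroutines wait concurrently on one event loop.
    await asyncio.gather(task_a(), task_b())

start = time.monotonic()
asyncio.run(main())
print(f"took {time.monotonic() - start:.2f}s")  # ~2 seconds, not ~4
```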