Adopt a more robust testing system #309
Comments
I found this nice test runner: https://github.com/CleanCut/green |
Hi @AnshulMalik, I am interested in contributing. I noticed that different algorithms have different test structures. Some have tests in the docstring. Some print out values when you run the module (although they make no assertions on what should be printed). One has it the way I think it should be done: Python/data_structures/union_find/tests_union_find.py. I propose switching all current tests (docstrings + print statements) to this format; a sketch of the layout follows below. Once they are switched, running green -vvv from the root directory will run all of the tests and show any failure messages. It may be possible to use green as the runner in .travis.yml, although I am not certain. We should also make a setup.py that installs green if you want that to be the test runner. Let me know if you have any ideas about this. This will be my first attempted open source contribution. |
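For concreteness, here is a minimal sketch of the proposed layout, modeled on tests_union_find.py; the module and function names are illustrative, not actual files from the repository:

```python
import unittest


class TestBubbleSort(unittest.TestCase):
    """Example tests for a hypothetical sorts/bubble_sort.py module."""

    def test_sorts_numbers(self):
        # Stand-in for the real import, e.g.
        # from sorts.bubble_sort import bubble_sort
        bubble_sort = sorted  # placeholder so this sketch runs as-is

        self.assertEqual(bubble_sort([3, 1, 2]), [1, 2, 3])
        self.assertEqual(bubble_sort([]), [])


if __name__ == "__main__":
    unittest.main()
```

Since green is a runner for unittest-style tests, it discovers modules like this the same way unittest does, so `green -vvv` from the repository root would pick them up.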
Thanks for looking into it. I think that's exactly what we want: unit tests written for the algorithms, consistency across the repository, and the ability to use Travis to run everything. |
Hi @AnshulMalik and @abebinder, I am interested in contributing and helping on this issue if that's okay. Just to clarify: our goal is to use the green test runner with Travis to test all the present data structure and algorithm implementations, while setting a standard for writing tests for the project? |
I believe that's correct, @nicholasraphael. School got the best of me, so it's all you. |
Hi, open source newbie here. Adding unit tests seems quite important for the future growth of this amazing project. I was wondering how we find out who the administrator is and ask for his/her attention on this issue? I would love to contribute to this unit testing if others are busy with school. :) |
Hello! Since nothing has happened here for several months, my group and I thought we'd have a stab at it. We're doing a project for uni where we need to put roughly 100 man-hours into an open source project by Wednesday the 27th. In other words, we can get a lot done in the coming week, but we need to start immediately. These are our goals, off the top of our heads:
We are primarily looking to make the test suite more manageable, apart from adding tests. @SandersLin Hope we're not undercutting you here; have you done any work on this? We couldn't find evidence that anyone had actually done anything to this end, and we're kind of in a hurry to start. @AnshulMalik Do you have any objections to this proposal? I can also add that the stakes are low for us: as this is a mandatory university project, we need to put this time into something, and we found this to be a fun use of that time. If you like what we accomplish, you can merge it. If you don't, you simply reject it and there are no hard feelings. |
@ashwek @poyea Thanks for the quick thumbs up, we have started planning! We would like to present an alternative to unittest:

```python
import unittest


class TrivialTests(unittest.TestCase):
    def test_inequality(self):
        a = 2
        b = a + 1
        self.assertNotEqual(a, b)  # note the use of a bulky method
```

and here is the equivalent with pytest:

```python
def test_inequality():
    a = 2
    b = a + 1
    assert a != b  # note the use of a plain assert
```

Compared to unittest, pytest is a third-party package that has to be installed. Our view is that you may get inexperienced contributors who don't really know how to install Python packages. Even though it's not hard, and we would of course provide documentation for how to set up a development environment, this might be a barrier to entry that you guys don't want. Using just unittest would avoid that. Unless we hear back from you telling us otherwise, we'll proceed with pytest. |
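(A brief usage note on the pytest style shown above: after `pip install pytest`, running `pytest` from the repository root automatically discovers plain test functions in files matching `test_*.py` or `*_test.py`, with no runner boilerplate required. This is pytest's default collection behavior, not anything project-specific.)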
Very nice. I also found this comparison, but I believe it is completely fine to go with pytest. |
Yes, pytest. |
If you take the initiative, then it's up to you! Therefore, if you prefer pytest, go with it! |
Alright, thanks for the quick responses! We may consider converting everything to pytest. We'll open issues for reasonably sized and independent subtasks; doing all of this in one PR would be a bit... drastic. |
Why is this not resolved after one year? Why is there no automated testing from Travis, CircleCI, Azure, etc.? |
I think, for this kind of project, the best option is doctests. Doctests are built into Python but are also supported by pytest, so pytest can be installed as an optional dependency. Doctests would serve as documentation of how an algorithm works and what results it gives. There is no need to keep tests in separate files, because the algorithm implementations are usually quite small, one-function pieces of code. A sketch of this style follows below. |
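As a minimal sketch of the doctest style described above (the fibonacci function here is illustrative, not a specific file from the repository):

```python
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number.

    The examples below double as tests; they run under
    `python -m doctest` or `pytest --doctest-modules`.

    >>> fibonacci(0)
    0
    >>> fibonacci(10)
    55
    """
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```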
Doctests are good, but we need a test runner that can kick off pytest to run all those doctests. Travis, CircleCI, Azure, etc. could run pytest, and pytest could run all the doctests. That way we would know that our tests pass before each pull request is reviewed. |
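One way to wire pytest to the doctests (a sketch, not necessarily how the project actually configured it) is to enable doctest collection in the pytest config, e.g. in setup.cfg:

```ini
[tool:pytest]
addopts = --doctest-modules
```

With this in place, a bare `pytest` run also collects and executes every doctest in the repository's modules.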
I see that there are already some implementations that have doctests. |
If someone with admin rights can kick off a Travis build at https://travis-ci.org/TheAlgorithms/Python, then I can configure pytest to run the doctests. |
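A minimal .travis.yml along these lines might look like the following; this is a sketch assuming a single Python 3 build, not the configuration that was actually merged:

```yaml
language: python
python:
  - "3.7"
install:
  - pip install pytest
script:
  - pytest --doctest-modules  # collect and run every doctest in the repo
```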
I find that the current tests are very limited in capability, and there are very few tests for each algorithm.
We should have more tests and an efficient testing mechanism.