Adopt a more robust testing system #309

Closed
AnshulMalik opened this issue Jun 2, 2018 · 18 comments · Fixed by #964

Comments

@AnshulMalik
Member

I find that the tests are currently very limited in capability, and there are very few tests for each algorithm.

We should have more tests and a more efficient testing mechanism.

@AnshulMalik
Member Author

I found this nice test runner https://github.com/CleanCut/green

@abebinder

abebinder commented Oct 11, 2018

Hi @AnshulMalik, I am interested in contributing. I noticed that different algorithms have different test structures. Some have tests in the docstring. Some print out values when you run the module (although they make no assertions on what should be printed). One has it the way I think it should be done:

Python/data_structures/union_find/tests_union_find.py

I propose that I switch all of the current tests (docstrings + print statements) to this format. Once they are switched, running green -vvv from the root directory will run all of the tests and show any failure messages.

It is possible that green can be used as the runner in .travis.yml, although I am not certain.

We should also add a setup.py that installs green if you want that to be the test runner.

Let me know if you have any ideas about this. This will be my first attempted open source contribution.
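For concreteness, here is a minimal sketch of that proposed format. binary_search is a hypothetical stand-in for one of the repository's algorithms; a real test module would import the actual implementation instead of defining it inline:

```python
import unittest


def binary_search(arr, target):
    """Hypothetical stand-in; a real test module would import this."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


class TestBinarySearch(unittest.TestCase):
    def test_finds_present_element(self):
        self.assertEqual(binary_search([1, 3, 5, 7], 5), 2)

    def test_returns_minus_one_for_missing_element(self):
        self.assertEqual(binary_search([1, 3, 5, 7], 4), -1)


if __name__ == "__main__":
    # argv/exit keep this safe to run outside a plain CLI invocation.
    unittest.main(argv=["test"], exit=False)
```

A runner such as green (or python -m unittest) can then discover every module that follows this naming convention.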

@AnshulMalik
Member Author

Thanks for looking into it. I think that's exactly what we want: unit tests written for the algorithms, consistent in style, and runnable from Travis.
I think we can make Travis do anything we want :) we can install stuff there.

@nicholasraphael

Hi, @AnshulMalik and @abebinder, I am interested in contributing and helping on this issue if that's okay. Just to clarify: our goal is to use the green test runner with Travis to test all of the present data structure and algorithm implementations, while setting a standard for writing tests for the project?

@abebinder

I believe that's correct, @nicholasraphael. School got the best of me, so it's all yours.

@SandersLin
Contributor

Hi, open source newbie here. Adding unit tests seems quite important for the future growth of this amazing project. I was wondering how we find out who the administrator is and ask for their attention on this issue? I would love to contribute to this unit testing effort if others are busy with school. :)

@slarse

slarse commented Feb 20, 2019

Hello! Since nothing has happened here for several months, my group and I thought we'd have a stab at it. We're doing a project for uni where we need to put roughly 100 man-hours into an open source project by Wednesday the 27th. In other words, we can get a lot done in the coming week, but we need to start immediately.

These are our goals, off the top of our heads:

  • Package the project (with the current structure, each top-level directory will become a package).
  • Move the tests spread about main-functions, test classes and docstrings in the project into a separate test suite
    • We will not remove any main functions, as many of them seem to have interactive functionality
  • Convert manual inspection print-tests into unit tests
  • Fix any broken tests
  • Use coverage metrics
  • Update .travis.yml to run the test suite
  • Document how to expand the test suite with tests for new modules
  • All the while, maintain compatibility with both Python 2.7 and Python 3.6+
  • Use unittest such that we can use any standard test runner (e.g. pytest, green, or unittest itself)

We are primarily looking to make the test suite more manageable. Apart from adding __init__.py files in the different packages and moving the actual unit tests out to a unified test suite, there should be minimal intrusion into the current project.
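As an illustration of that structure, a hypothetical layout (directory and file names are examples, not a final proposal):

```text
Python/
├── sorts/
│   ├── __init__.py
│   └── bubble_sort.py
├── searches/
│   ├── __init__.py
│   └── binary_search.py
└── tests/
    ├── test_sorts.py
    └── test_searches.py
```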

@SandersLin Hope we're not undercutting you here, have you performed any work on this? We couldn't find evidence anyone had actually done anything to this end, and we're kind of in a hurry to start.

@AnshulMalik Do you have any objections to this proposal?

I can also add that the stakes are low for us as this is a mandatory university project, we need to put this time into something and found this to be a fun use of that time. If you like what we accomplish, you can merge it. If you don't like it, you simply reject it and there are no hard feelings.

@slarse

slarse commented Feb 20, 2019

@ashwek @poyea Thanks for the quick thumbs up, we have started planning!

We would like to present an alternative to unittest: pytest (link) is (in our humble opinion) a much better testing framework, in which tests are a lot easier to write (but it comes with a higher barrier to entry for contributors, see below). For example, here is a trivial unittest test class:

import unittest

class TrivialTests(unittest.TestCase):
    def test_inequality(self):
        a = 2
        b = a + 1
        self.assertNotEqual(a, b) # note the use of a bulky method

and here is the equivalent with pytest:

def test_inequality():
    a = 2
    b = a + 1
    assert a != b # note the use of a plain assert

Compared to unittest, however, there are two major downsides for contributors looking to develop code here:

  1. pytest is an external dependency that must be installed.
  2. pytest cannot find the production code unless the project (this one) is installed properly.
    • This could be a problem for inexperienced contributors

Our view is that you may get inexperienced contributors who don't really know how to install Python packages. Even though it's not hard, and we would of course provide documentation for how to set up a development environment, this might be a barrier to entry that you guys don't want.

Using just unittest, we can have a separate test suite (i.e. in a separate top-level directory called tests) that can find the production code without a need for installing anything. It sets the absolute lowest barrier to entry for contributing to the project. We think this outweighs the benefit of using a more modern framework like pytest, but we wanted to present the option of using a more modern tool, as maybe our view of the project's target audience is not correct.

Unless we hear back from you telling us otherwise, we'll proceed with unittest as the framework.
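As a sketch of why nothing would need installing with this choice: test discovery ships with the standard library, so the whole suite can be collected and run with stock Python (the pattern below is an assumption; in the real project the start directory would be the top-level tests folder):

```python
import unittest

# Collect every test_*.py module below the current directory and run
# the resulting suite, using nothing outside the standard library.
loader = unittest.TestLoader()
suite = loader.discover(start_dir=".", pattern="test_*.py")
unittest.TextTestRunner(verbosity=2).run(suite)
```

The same thing is available from the command line as python -m unittest discover.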

@poyea
Member

poyea commented Feb 20, 2019

Very nice. I also found this comparison, but I believe it is completely fine to go with unittest since, as you've said, it's built in. 😉🤞

@slarse

slarse commented Feb 20, 2019

Yes, pytest is superior in every respect in my opinion, but it does add a slight amount of complexity to the project as a whole. We don't mind doing unittest, and we don't mind doing pytest (we love pytest :D), it is entirely up to you!

@poyea
Member

poyea commented Feb 20, 2019

If you take the initiative, then it's up to you! Therefore if you prefer pytest, please go ahead!!! Once you're set, you may consider opening a new issue (for implementation/technical issues) to indicate that you're working on it or part of it. Anyway, feel free to ask for any help!😂

@slarse

slarse commented Feb 20, 2019

Alright, thanks for the quick responses! We may consider converting everything to pytest then, it's a fantastic framework.

We'll open issues for reasonably sized and independent subtasks, doing all of this in one PR would be a bit... drastic.

@cclauss
Member

cclauss commented Jun 4, 2019

Why is this not resolved after one year? Why no automated testing from Travis or CircleCI or Azure, etc?

@sirex
Contributor

sirex commented Jun 6, 2019

I think, for this kind of project, the best option is doctests. Doctests are built into Python, but are also supported by pytest, so pytest can be installed as an optional dependency.

Doctests would serve as documentation of how an algorithm works and what results it gives. There is no need to have tests in separate files, because algorithm implementations are usually quite small, one-function pieces of code.
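A sketch of what that could look like; fibonacci here is an illustrative example rather than an existing module:

```python
def fibonacci(n):
    """Return the n-th Fibonacci number.

    The examples below are executable documentation: doctest runs
    each >>> line and compares the output with the line beneath it.

    >>> fibonacci(0)
    0
    >>> fibonacci(1)
    1
    >>> fibonacci(10)
    55
    """
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


if __name__ == "__main__":
    import doctest

    doctest.testmod()
```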

@cclauss
Member

cclauss commented Jun 6, 2019

Doctests are good, but we need a runner to kick them off. Travis or CircleCI or Azure, etc. could run pytest, and pytest could run all the doctests. That way we would know that our tests pass before each pull request is reviewed.
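A hypothetical .travis.yml along those lines (the Python versions listed are assumptions; --doctest-modules is the pytest option that collects doctests from ordinary modules):

```yaml
language: python
python:
  - "2.7"
  - "3.6"
install:
  - pip install pytest
script:
  # Run regular tests plus every doctest found in the modules.
  - pytest --doctest-modules
```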

@sirex
Contributor

sirex commented Jun 6, 2019

I see that there are already some tests that have doctests:

> rg --count '>>>' . | wc -l
29

> rg --count 'doctest' . | wc -l
18

> rg --count 'unittest' . | wc -l
7

@cclauss
Member

cclauss commented Jun 6, 2019

If someone with Admin rights can kick off a Travis build at https://travis-ci.org/TheAlgorithms/Python then I can configure pytest to run the doctests.

@cclauss
Member

cclauss commented Jul 6, 2019

#964
