numpy tests #10


Closed
ev-br opened this issue Jan 2, 2023 · 1 comment

Comments

@ev-br
Collaborator

ev-br commented Jan 2, 2023

The approach of gh-8 and earlier was to manually port tests from numpy on a per-function basis: grep the numpy checkout for TestFunction and test_function, and copy-paste the results. The result is more consistent than the numpy test suite itself, where related tests can be spread across core/tests, lib/tests, and various test files and classes.

However, the resulting test suite is not maintainable. gh-8 mentioned the sources of the various tests in the commit messages, but this is of limited help if at some point we want to sync the test suite to a new numpy version.

So maybe the right thing to do is to

  • copy the whole numpy test suite over, with all its quirks.
  • remove things we do not want to support (datetimes, record arrays, etc.)
  • xfail/skip the rest
  • then start undoing the xfails function by function.
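The xfail/skip steps above could be sketched with pytest markers, assuming pytest is the test runner (as in numpy itself); the reason strings and the module-level layout here are illustrative, not from this issue:

```python
import pytest

# Blanket marker for a freshly copied-over numpy test file: every test in the
# module is expected to fail until its function is actually ported. Un-xfailing
# then means deleting (or narrowing) this marker, file by file and function by
# function.
pytestmark = pytest.mark.xfail(reason="not yet ported")

# Features we do not want to support at all (datetimes, record arrays, ...)
# would instead get a skip marker, so they never count as "to be ported":
skip_unsupported = pytest.mark.skip(reason="datetimes not supported")
```

A module-level `pytestmark` keeps the diff against upstream numpy minimal, which matches the "minimize the churn" guideline below: the test bodies themselves stay byte-for-byte identical.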

This will introduce some duplication, but this is likely OK.

If the initial phase (remove extras + xfail everything) is confined to a few commits, or perhaps one commit per test file, working through a sync of the numpy test suite sounds just about doable.

Caveats:

  • some numpy tests have dozens of assertions, with a mix of wanted and unwanted stuff. So some splitting will be needed. Probably the guideline is to minimize the churn. No rewrites, no moving tests around. Everything stays where it was.
  • there are compiled test modules (ufuncs, mainly). These need to be replicated.
  • to minimize the churn, will need to first port numpy.testing (easy if tedious) and numpy.random.
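To give a flavor of what porting numpy.testing involves, here is a hypothetical, deliberately minimal stand-in for one helper in the spirit of numpy.testing.assert_allclose; the real helper additionally handles arrays of any shape, broadcasting, NaN/inf policy, and rich error reporting, and uses an asymmetric tolerance check rather than math.isclose's symmetric one:

```python
import math

def assert_allclose(actual, desired, rtol=1e-7, atol=0.0):
    """Minimal sketch: element-wise closeness check for flat sequences.

    Defaults mirror numpy.testing.assert_allclose (rtol=1e-7, atol=0);
    everything else is simplified for illustration.
    """
    if len(actual) != len(desired):
        raise AssertionError(f"length mismatch: {len(actual)} != {len(desired)}")
    for a, d in zip(actual, desired):
        # math.isclose is symmetric in a and d; numpy's check is not.
        if not math.isclose(a, d, rel_tol=rtol, abs_tol=atol):
            raise AssertionError(f"{a} != {d} (rtol={rtol}, atol={atol})")
```

Even this toy version shows why porting numpy.testing first pays off: the copied-over numpy tests can then run unmodified against the ported helpers.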

Automating the selection of wanted vs unwanted stuff does not sound realistic. So it will need to be manual no matter what.

This was referenced Jan 2, 2023
@lezcano
Collaborator

lezcano commented Jan 3, 2023

Also a nit: moving forward, it'd probably be better to first put up a PR that just adds the tests (no real review needed) and merge it. After that, put up a second PR that exercises those tests and contains the modifications needed for them to pass. That would make the PRs much easier to review.
