The approach of gh-8 and earlier was to manually port tests from numpy on a per-function basis: grep the numpy checkout for `TestFunction` and `test_function`, and copy-paste the results. The result is more consistent than the numpy test suite itself, where related tests can be spread over `core/tests`, `lib/tests`, and various test files and classes.
However, the resulting test suite is not maintainable. gh-8 mentioned the sources of various tests in the commit messages, but this is of limited help if at some point we want to sync the test suite to a new numpy version.
So maybe the right thing to do is to:

- copy the whole numpy test suite over, with all its quirks;
- remove things we do not want to support (datetimes, record arrays, etc.);
- xfail/skip the rest (see the sketch after this list);
- then start undoing the xfails function by function.
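
As a rough illustration of the bulk-xfail phase (the test contents, names, and reasons here are invented, not taken from an actual numpy file), a copied test file right after the sweep might look something like:

```python
import pytest
import numpy as np  # placeholder import: in practice the compat layer under test


# Option 1: blanket-xfail the whole copied file, then peel it back later.
# pytestmark = pytest.mark.xfail(reason="TODO: not ported yet")

# Option 2: xfail per class/function, so tests can be un-xfailed one by one.
@pytest.mark.xfail(reason="TODO: clip not ported yet")
class TestClip:
    def test_basic(self):
        a = np.array([1, 2, 3, 4])
        assert (np.clip(a, 2, 3) == [2, 2, 3, 3]).all()


# Tests that need a compiled helper module are skipped rather than xfailed.
@pytest.mark.skip(reason="needs the compiled _umath_tests helper")
class TestUfuncSignature:
    def test_signature(self):
        ...
```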
This will introduce some duplication, but this is likely OK.
If the initial phase (remove extras + xfail everything) is confined to a few commits --- or maybe a commit per test file --- working through a sync of the numpy test suite sounds just about doable.
Caveats:
- some numpy tests have dozens of assertions, with a mix of wanted and unwanted stuff, so some splitting will be needed (a sketch follows after this list). The guideline is probably to minimize the churn: no rewrites, no moving tests around, everything stays where it was.
- there are compiled test modules (ufuncs, mainly). These need to be replicated.
- to minimize the churn, we will need to first port `numpy.testing` (easy if tedious) and `numpy.random`; a possible shim for the former is also sketched below.
- automating the selection of wanted vs unwanted stuff does not sound realistic, so it will need to be manual no matter what.
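
To make the splitting caveat concrete, here is an invented example (not an actual numpy test): the supported assertions keep the original test name and location, and the unsupported ones move to an adjacent xfailed sibling instead of being rewritten or relocated.

```python
import pytest
import numpy as np
from numpy.testing import assert_equal


class TestMean:
    def test_mean(self):
        # wanted assertions stay under the original name, in the original class
        a = np.arange(6).reshape(2, 3)
        assert_equal(np.mean(a), 2.5)

    @pytest.mark.xfail(reason="object dtype not supported")
    def test_mean_object(self):
        # unwanted assertions split out into an adjacent sibling
        a = np.array([1.0, 2.0, 3.0], dtype=object)
        assert_equal(np.mean(a), 2.0)
```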
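
On the `numpy.testing` point, a first pass could be thin wrappers around the upstream assertion helpers that convert whatever array type the compat layer returns before delegating. The conversion hook and the choice of helpers below are assumptions; this is only a sketch, not a description of what the port actually does.

```python
# hypothetical numpy.testing shim: convert, then delegate to the upstream helpers
import numpy as np
import numpy.testing as npt


def _unwrap(x):
    # placeholder: a real port would turn the compat layer's array type
    # back into a plain ndarray here
    return np.asarray(x)


def assert_equal(actual, desired, err_msg="", verbose=True):
    return npt.assert_equal(_unwrap(actual), _unwrap(desired), err_msg, verbose)


def assert_allclose(actual, desired, rtol=1e-7, atol=0, err_msg="", verbose=True):
    return npt.assert_allclose(_unwrap(actual), _unwrap(desired),
                               rtol=rtol, atol=atol, err_msg=err_msg, verbose=verbose)
```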
Also a nit: moving forward, it would probably be better to first put up a PR that just adds the tests (no need to review that PR) and merge it. After that, put up a second PR that exercises those tests and contains the modifications needed for those tests to pass. That would make those PRs much easier to review.