
pymc3/tests/test_step.py::TestMLDA::test_acceptance_rate_against_coarseness fails for unknown reason #4267


Closed
michaelosthege opened this issue Nov 27, 2020 · 5 comments · Fixed by #6007


@michaelosthege
Member

Description of your problem

The CI of #4261 failed because of a test failure (see the log) that is probably unrelated to the changes from that PR.

The PR was already merged, but the test remains suspicious. Is it maybe not deterministic?

@gmingas can you take a look, or delegate to someone else familiar with the MLDA test suite?

Versions and main components

  • PyMC3 Version: master
@gmingas
Contributor

gmingas commented Nov 30, 2020

This must be because no seed has been set, but I'm looking at it now. @mikkelbue could you have a look too?

@mikkelbue
Contributor

@michaelosthege I cannot replicate this error on my machine, even without setting a random seed. But the test is not deterministic, as you suspected, and it is possible, albeit unlikely, for a coarser model to get a better acceptance rate.
We could do something like:

```python
random_seeds = [1, 2, 3]
for coarse_model, seed in zip(possible_coarse_models, random_seeds):
    ...
```

or just run each sampler with the same random seed, since the samplers will diverge with different coarse models anyway.
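A minimal sketch of the `zip`-based option, with placeholder model names and a dummy "sampler" standing in for the real MLDA run (none of these names are from the PyMC3 test suite):

```python
import numpy as np

# Hypothetical stand-ins for the coarse models exercised by the test.
possible_coarse_models = ["coarse_0", "coarse_1", "coarse_2"]
random_seeds = [1, 2, 3]  # one pre-set seed per coarse model

def pseudo_acceptance_rate(seed):
    # Placeholder for running a sampler: with a fixed seed, the
    # "acceptance rate" it returns is reproducible across runs.
    return np.random.default_rng(seed).uniform()

rates = [
    pseudo_acceptance_rate(seed)
    for coarse_model, seed in zip(possible_coarse_models, random_seeds)
]
```

Because each model is paired with a fixed seed, rerunning the loop yields the same `rates` every time, which is the property the flaky test was missing.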

@MarcoGorelli
Contributor

This looks like it has resolved itself since we stopped running tests in parallel; I guess it couldn't converge otherwise?

I think we're OK to close anyway

@gmingas
Contributor

gmingas commented Dec 7, 2020

@mikkelbue I think it is best to use the same random seed for all samplers, to avoid non-deterministic outcomes.

@mikkelbue
Contributor

@MarcoGorelli Thanks for letting us know. The master branch on our repository is up to date with pymc-devs, and I have added random seeds to three tests where they were missing: test_nonparallelized_chains_are_random, test_parallelized_chains_are_random and test_acceptance_rate_against_coarseness. Just in case. See alan-turing-institute@b883250

I'll set up a PR, in case you want to merge this.

@gmingas Maybe I didn't explain my intention very well. The samplers belonging to each coarse model would be initialised from different (but pre-set) random seeds. That wouldn't result in any non-deterministic behaviour, as far as I can tell.
But I have used the same seed for all of them, anyway. No sense in complicating things.
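To illustrate the shared-seed approach (again with a hypothetical stand-in for the sampler, not the actual PyMC3 API): mixing the model identity into the seed keeps the chains distinct across coarse models, while the fixed shared seed makes every run reproducible.

```python
import zlib
import numpy as np

SEED = 2020  # single seed shared by all samplers; value is illustrative

def run_sampler(coarse_model, seed):
    # Stand-in for a seeded sampler run. Seeding with (seed, crc32(name))
    # gives each coarse model its own deterministic stream.
    rng = np.random.default_rng([seed, zlib.crc32(coarse_model.encode())])
    return rng.uniform()

first = [run_sampler(m, SEED) for m in ("c0", "c1", "c2")]
second = [run_sampler(m, SEED) for m in ("c0", "c1", "c2")]
```

`first` and `second` are identical (deterministic test), while the three entries within each list still differ (the samplers diverge per model).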
