From 104ca4d4791808e94517a017c30fff8f3ba84340 Mon Sep 17 00:00:00 2001
From: reshamas
Date: Sat, 9 Jul 2022 13:37:21 -0400
Subject: [PATCH 1/2] update notebook rendering

---
 .../Bayes_factor.ipynb | 27 ++++++++++++++++---
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/examples/diagnostics_and_criticism/Bayes_factor.ipynb b/examples/diagnostics_and_criticism/Bayes_factor.ipynb
index b771b2d6f..87c6b981c 100644
--- a/examples/diagnostics_and_criticism/Bayes_factor.ipynb
+++ b/examples/diagnostics_and_criticism/Bayes_factor.ipynb
@@ -1103,13 +1103,17 @@
     "In this example the observed data $y$ is more consistent with `model_1` (because the prior is concentrated around the correct value of $\\theta$) than `model_0` (which assigns equal probability to every possible value of $\\theta$), and this difference is captured by the Bayes factor. We could say Bayes factors are measuring which model, as a whole, is better, including details of the prior that may be irrelevant for parameter inference. In fact, in this example we can also see that it is possible to have two different models, with different Bayes factors, but nevertheless get very similar predictions. The reason is that the data is informative enough to reduce the effect of the prior to the point of inducing very similar posteriors. As predictions are computed from the posterior, we also get very similar predictions. In most scenarios when comparing models what we really care about is the predictive accuracy of the models; if two models have similar predictive accuracy, we consider both models similar. To estimate the predictive accuracy we can use tools like PSIS-LOO-CV (`az.loo`), WAIC (`az.waic`), or cross-validation."
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": []
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Authors\n",
-    "* Authored by Osvaldo Martin\n",
-    "* Updated by Osvaldo Martin in May, 2022"
+    "* Authored by Osvaldo Martin ([pymc#xxxx](https://github.com/pymc-devs/pymc/pull/ ))\n",
+    "* Updated by Osvaldo Martin in May, 2022 ([pymc#xxxx](https://github.com/pymc-devs/pymc/pull/ ))\n"
    ]
   },
   {
@@ -1150,6 +1154,21 @@
    "%load_ext watermark\n",
    "%watermark -n -u -v -iv -w"
   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    ":::{include} ../page_footer.md\n",
+    ":::"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
   }
  ],
 "metadata": {
@@ -1157,7 +1176,7 @@
    "hash": "d4ca51fc2fdee62b1a00ff5126f64ae66836e25d3ba6f45d8551026256283997"
   },
   "kernelspec": {
-   "display_name": "Python 3.9.7 ('base')",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
    "name": "python3"
   },

From 3f821746b577874bbe973b3f931ead2cebc2af07 Mon Sep 17 00:00:00 2001
From: reshamas
Date: Wed, 20 Jul 2022 12:28:12 -0400
Subject: [PATCH 2/2] run pre-commit; add myst file

---
 .../diagnostics_and_criticism/Bayes_factor.ipynb   |  7 +------
 .../diagnostics_and_criticism/Bayes_factor.myst.md | 14 ++++++++++----
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/examples/diagnostics_and_criticism/Bayes_factor.ipynb b/examples/diagnostics_and_criticism/Bayes_factor.ipynb
index 87c6b981c..1d96b2442 100644
--- a/examples/diagnostics_and_criticism/Bayes_factor.ipynb
+++ b/examples/diagnostics_and_criticism/Bayes_factor.ipynb
@@ -1103,17 +1103,12 @@
     "In this example the observed data $y$ is more consistent with `model_1` (because the prior is concentrated around the correct value of $\\theta$) than
`model_0` (which assigns equal probability to every possible value of $\\theta$), and this difference is captured by the Bayes factor. We could say Bayes factors are measuring which model, as a whole, is better, including details of the prior that may be irrelevant for parameter inference. In fact, in this example we can also see that it is possible to have two different models, with different Bayes factors, but nevertheless get very similar predictions. The reason is that the data is informative enough to reduce the effect of the prior to the point of inducing very similar posteriors. As predictions are computed from the posterior, we also get very similar predictions. In most scenarios when comparing models what we really care about is the predictive accuracy of the models; if two models have similar predictive accuracy, we consider both models similar. To estimate the predictive accuracy we can use tools like PSIS-LOO-CV (`az.loo`), WAIC (`az.waic`), or cross-validation."
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": []
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "* Authored by Osvaldo Martin ([pymc#xxxx](https://github.com/pymc-devs/pymc/pull/ ))\n",
-    "* Updated by Osvaldo Martin in May, 2022 ([pymc#xxxx](https://github.com/pymc-devs/pymc/pull/ ))\n"
+    "* Updated by Osvaldo Martin in May, 2022 ([pymc#xxxx](https://github.com/pymc-devs/pymc/pull/ ))"
    ]
   },
   {
diff --git a/myst_nbs/diagnostics_and_criticism/Bayes_factor.myst.md b/myst_nbs/diagnostics_and_criticism/Bayes_factor.myst.md
index 095c49353..d44d1fdc6 100644
--- a/myst_nbs/diagnostics_and_criticism/Bayes_factor.myst.md
+++ b/myst_nbs/diagnostics_and_criticism/Bayes_factor.myst.md
@@ -6,7 +6,7 @@ jupytext:
     format_version: 0.13
     jupytext_version: 1.13.7
 kernelspec:
-  display_name: Python 3.9.7 ('base')
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
@@ -283,9 +283,8 @@ In this example the observed data $y$ is more consistent with `model_1` (because
 
 +++
 
-## Authors
-* Authored by Osvaldo Martin
-* Updated by Osvaldo Martin in May, 2022
+* Authored by Osvaldo Martin ([pymc#xxxx](https://github.com/pymc-devs/pymc/pull/ ))
+* Updated by Osvaldo Martin in May, 2022 ([pymc#xxxx](https://github.com/pymc-devs/pymc/pull/ ))
 
 +++
 
@@ -295,3 +294,10 @@ In this example the observed data $y$ is more consistent with `model_1` (because
 ```{code-cell} ipython3
 %load_ext watermark
 %watermark -n -u -v -iv -w
 ```
+
+:::{include} ../page_footer.md
+:::
+
+```{code-cell} ipython3
+
+```
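
A note on the predictive-accuracy point in the notebook cell quoted above: the comparison it recommends (PSIS-LOO-CV via `az.loo`, WAIC via `az.waic`) takes only a few lines once both models store pointwise log-likelihoods. The sketch below is illustrative only and is not part of the patch: the Beta priors and the placeholder data `n_d`/`y_d` are assumptions standing in for the notebook's actual setup, while `pm.sample`, `az.loo`, and `az.compare` are the standard PyMC/ArviZ calls.

```python
import arviz as az
import pymc as pm

# Placeholder data: 60 heads in 100 tosses (assumed values, not the notebook's).
n_d, y_d = 100, 60

# model_0: uniform prior, i.e. every value of theta equally likely a priori.
with pm.Model() as model_0:
    theta = pm.Beta("theta", alpha=1.0, beta=1.0)
    pm.Binomial("y", n=n_d, p=theta, observed=y_d)
    # Store pointwise log-likelihoods so az.loo/az.compare can use them.
    idata_0 = pm.sample(idata_kwargs={"log_likelihood": True}, random_seed=42)

# model_1: prior concentrated around theta = 0.5.
with pm.Model() as model_1:
    theta = pm.Beta("theta", alpha=30.0, beta=30.0)
    pm.Binomial("y", n=n_d, p=theta, observed=y_d)
    idata_1 = pm.sample(idata_kwargs={"log_likelihood": True}, random_seed=42)

# PSIS-LOO-CV per model; close elpd_loo values indicate similar predictive
# accuracy even when a Bayes factor clearly separates the models.
print(az.loo(idata_0))
print(az.loo(idata_1))

# Ranked side-by-side comparison table (uses LOO by default).
print(az.compare({"model_0": idata_0, "model_1": idata_1}))
```

If the two `elpd_loo` estimates land within roughly a standard error of each other, that supports the cell's closing point: two models that a Bayes factor distinguishes can still be practically interchangeable for prediction.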