Commit 9fad19c

Update model averaging to book style (#414)
* notebook: model_averaging (header,footer updates)
* model_averaging: adding myst file
* add watermark header; add footer
* rerunning myst file
* update notebook tags
* recreate myst file after updating notebook tags
* minor text fixes
* adding myst file
* add References section at the end
* add myst file
* add 2 more urls to list in pre-commit yaml file
* update yaml, bibtext, citation references
* adding myst file
* remove end of |
* revert unwanted updates to moderation_analysis + add excluded nb link
1 parent aa1ee22 commit 9fad19c

File tree: 4 files changed, +328 −143 lines


.pre-commit-config.yaml (+5 −2)
@@ -82,7 +82,8 @@ repos:
           examples/samplers/SMC-ABC_Lotka-Volterra_example.ipynb|
           examples/splines/spline.ipynb|
           examples/survival_analysis/censored_data.ipynb|
-          examples/survival_analysis/weibull_aft.ipynb)
+          examples/survival_analysis/weibull_aft.ipynb|
+          examples/howto/custom_distribution.ipynb)
       entry: >
         (?x)(arviz-devs.github.io|
           aesara.readthedocs.io|
@@ -94,7 +95,9 @@ repos:
           docs.python.org|
           xarray.pydata.org
           python.arviz.org|
-          docs.xarray.dev)
+          docs.xarray.dev|
+          www.pymc.io|
+          docs.scipy.org/doc)
       language: pygrep
       types_or: [markdown, rst, jupyter]
   - repo: https://github.com/mwouts/jupytext
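For intuition, here is a rough sketch of what this pygrep hook checks (the hook's exact invocation and exclude handling differ; the pattern below is assembled by hand from the post-change domain list). In verbose mode (`(?x)`) whitespace inside the pattern is ignored, so the multi-line layout in the YAML is purely cosmetic; any text containing one of the listed documentation domains is flagged.

```python
import re

# Simplified stand-in for the hook's pattern: flag hard-coded documentation
# domains that should be replaced by intersphinx-style references.
pattern = re.compile(
    r"""(?x)(arviz-devs.github.io|
             aesara.readthedocs.io|
             docs.python.org|
             xarray.pydata.org|
             python.arviz.org|
             docs.xarray.dev|
             www.pymc.io|
             docs.scipy.org/doc)"""
)

hit = pattern.search("see https://www.pymc.io/projects/examples/")
clean = pattern.search("use intersphinx references instead")
```

Here `hit` is a match object and `clean` is `None`, which is exactly the signal pygrep uses to fail a file.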

examples/diagnostics_and_criticism/model_averaging.ipynb (+243 −120) — large diffs are not rendered by default.

examples/references.bib (+12 −0)
@@ -481,6 +481,18 @@ @book{wilkinson2005grammar
   issn = {1431-8784},
   isbn = {978-0-387-24544-7}
 }
+@article{Yao_2018,
+  doi = {10.1214/17-ba1091},
+  url = {https://doi.org/10.1214\%2F17-ba1091},
+  year = 2018,
+  month = {sep},
+  publisher = {Institute of Mathematical Statistics},
+  volume = {13},
+  number = {3},
+  author = {Yuling Yao and Aki Vehtari and Daniel Simpson and Andrew Gelman},
+  title = {Using Stacking to Average Bayesian Predictive Distributions (with Discussion)},
+  journal = {Bayesian Analysis}
+}
 @article{yuan2009bayesian,
   title = {Bayesian mediation analysis.},
   author = {Yuan, Ying and MacKinnon, David P},

myst_nbs/diagnostics_and_criticism/model_averaging.myst.md (+68 −21)
@@ -6,11 +6,20 @@ jupytext:
     format_version: 0.13
     jupytext_version: 1.13.7
 kernelspec:
-  display_name: Python PyMC3 (Dev)
+  display_name: Python 3 (ipykernel)
   language: python
-  name: pymc3-dev-py38
+  name: python3
 ---
 
+(model_averaging)=
+# Model Averaging
+
+:::{post} Aug 2022
+:tags: model comparison, model averaging
+:category: intermediate
+:author: Osvaldo Martin
+:::
+
 ```{code-cell} ipython3
 ---
 papermill:
@@ -27,7 +36,7 @@ import numpy as np
 import pandas as pd
 import pymc3 as pm
 
-print(f"Running on PyMC3 v{pm.__version__}")
+print(f"Running on PyMC3 v{pm.__version__}")
 ```
 
 ```{code-cell} ipython3
@@ -47,17 +56,16 @@ az.style.use("arviz-darkgrid")
 
 +++ {"papermill": {"duration": 0.068882, "end_time": "2020-11-29T12:13:08.020372", "exception": false, "start_time": "2020-11-29T12:13:07.951490", "status": "completed"}, "tags": []}
 
-# Model averaging
-
-When confronted with more than one model we have several options. One of them is to perform model selection, using for example a given Information Criterion as exemplified [in this notebook](model_comparison.ipynb) and this other [example](GLM-model-selection.ipynb). Model selection is appealing for its simplicity, but we are discarding information about the uncertainty in our models. This is somehow similar to computing the full posterior and then just keep a point-estimate like the posterior mean; we may become overconfident of what we really know.
+When confronted with more than one model we have several options. One of them is to perform model selection, using for example a given Information Criterion, as exemplified in the PyMC examples {ref}`pymc:model_comparison` and {ref}`GLM-model-selection`. Model selection is appealing for its simplicity, but we are discarding information about the uncertainty in our models. This is somewhat similar to computing the full posterior and then just keeping a point estimate like the posterior mean; we may become overconfident about what we really know. You can also browse the {doc}`blog/tag/model-comparison` tag to find related posts.
 
 One alternative is to perform model selection but discuss all the different models together with the computed values of a given Information Criterion. It is important to put all these numbers and tests in the context of our problem so that we and our audience can have a better feeling of the possible limitations and shortcomings of our methods. If you are in the academic world you can use this approach to add elements to the discussion section of a paper, presentation, thesis, and so on.
 
-Yet another approach is to perform model averaging. The idea now is to generate a meta-model (and meta-predictions) using a weighted average of the models. There are several ways to do this and PyMC3 includes 3 of them that we are going to briefly discuss, you will find a more thorough explanation in the work by [Yuling Yao et. al.](https://arxiv.org/abs/1704.02030)
+Yet another approach is to perform model averaging. The idea now is to generate a meta-model (and meta-predictions) using a weighted average of the models. There are several ways to do this and PyMC includes 3 of them that we are going to briefly discuss; you will find a more thorough explanation in the work by {cite:t}`Yao_2018`. PyMC integrates with ArviZ for model comparison.
+
 
 ## Pseudo Bayesian model averaging
 
-Bayesian models can be weighted by their marginal likelihood, this is known as Bayesian Model Averaging. While this is theoretically appealing, is problematic in practice: on the one hand the marginal likelihood is highly sensible to the specification of the prior, in a way that parameter estimation is not, and on the other computing the marginal likelihood is usually a challenging task. An alternative route is to use the values of WAIC (Widely Applicable Information Criterion) or LOO (pareto-smoothed importance sampling Leave-One-Out cross-validation), which we will call generically IC, to estimate weights. We can do this by using the following formula:
+Bayesian models can be weighted by their marginal likelihood; this is known as Bayesian Model Averaging. While this is theoretically appealing, it is problematic in practice: on the one hand the marginal likelihood is highly sensitive to the specification of the prior, in a way that parameter estimation is not, and on the other hand, computing the marginal likelihood is usually a challenging task. An alternative route is to use the values of WAIC (Widely Applicable Information Criterion) or LOO (Pareto-smoothed importance sampling Leave-One-Out cross-validation), which we will generically call IC, to estimate weights. We can do this by using the following formula:
 
 $$w_i = \frac {e^{ - \frac{1}{2} dIC_i }} {\sum_j^M e^{ - \frac{1}{2} dIC_j }}$$
 
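As a quick illustration of the weight formula in the notebook text above (a sketch, not part of the committed files; the IC values are made up), the pseudo-BMA weights can be computed directly from a list of IC values on the deviance scale:

```python
import numpy as np

def pseudo_bma_weights(ics):
    """Pseudo-BMA weights from information-criterion values (deviance scale).

    Lower IC is better; dIC_i is each model's IC minus the best (lowest) IC.
    """
    ics = np.asarray(ics, dtype=float)
    d_ic = ics - ics.min()   # dIC_i >= 0, best model has dIC = 0
    w = np.exp(-0.5 * d_ic)  # relative support e^{-dIC_i / 2}
    return w / w.sum()       # normalize so the weights sum to 1

# Hypothetical WAIC values for 3 models
weights = pseudo_bma_weights([210.0, 212.0, 220.0])
```

Note how quickly the exponential penalizes a model a few IC units behind the best one.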
@@ -71,7 +79,7 @@ The above formula for computing weights is a very nice and simple approach, but
 
 ## Stacking
 
-The third approach implemented in PyMC3 is know as _stacking of predictive distributions_ and it has been recently [proposed](https://arxiv.org/abs/1704.02030). We want to combine several models in a metamodel in order to minimize the diverge between the meta-model and the _true_ generating model, when using a logarithmic scoring rule this is equivalently to:
+The third approach implemented in PyMC is known as _stacking of predictive distributions_, proposed by {cite:t}`Yao_2018`. We want to combine several models in a meta-model in order to minimize the divergence between the meta-model and the _true_ generating model; when using a logarithmic scoring rule this is equivalent to:
 
 $$\max_{w} \frac{1}{n} \sum_{i=1}^{n} \log \sum_{k=1}^{K} w_k p(y_i|y_{-i}, M_k)$$
 
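To make the stacking objective concrete, here is a toy sketch (not from the notebook; the `lpd` numbers are made up) that maximizes the average log score over simplex weights for two models:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical leave-one-out predictive densities p(y_i | y_{-i}, M_k):
# one row per observation, one column per model (made-up numbers).
lpd = np.array(
    [
        [0.9, 0.4],
        [0.8, 0.5],
        [0.2, 0.7],
        [0.3, 0.6],
    ]
)

def neg_log_score(v):
    # With K = 2 models only one weight is free; the other is 1 - v[0],
    # which keeps the weights on the simplex.
    w = np.array([v[0], 1.0 - v[0]])
    return -np.mean(np.log(lpd @ w))

res = minimize(neg_log_score, x0=[0.5], bounds=[(0.0, 1.0)])
stack_w = np.array([res.x[0], 1.0 - res.x[0]])  # stacking weights
```

Unlike pseudo-BMA, the stacking weights are chosen jointly, so a model that duplicates another's predictions gets little extra weight.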
@@ -81,11 +89,11 @@ The quantity $p(y_i|y_{-i}, M_k)$ is the leave-one-out predictive distribution f
 
 ## Weighted posterior predictive samples
 
-Once we have computed the weights, using any of the above 3 methods, we can use them to get a weighted posterior predictive samples. PyMC3 offers functions to perform these steps in a simple way, so let see them in action using an example.
+Once we have computed the weights, using any of the above 3 methods, we can use them to get weighted posterior predictive samples. PyMC offers functions to perform these steps in a simple way, so let's see them in action using an example.
 
-The following example is taken from the superb book [Statistical Rethinking](http://xcelab.net/rm/statistical-rethinking/) by Richard McElreath. You will find more PyMC3 examples from this book in this [repository](https://github.com/aloctavodia/Statistical-Rethinking-with-Python-and-PyMC3). We are going to explore a simplified version of it. Check the book for the whole example and a more thorough discussion of both, the biological motivation for this problem and a theoretical/practical discussion of using Information Criteria to compare, select and average models.
+The following example is taken from the superb book {cite:t}`mcelreath2018statistical` by Richard McElreath. You will find more PyMC examples from this book in the repository [Statistical-Rethinking-with-Python-and-PyMC](https://github.com/pymc-devs/pymc-resources/tree/main/Rethinking_2). We are going to explore a simplified version of it. Check the book for the whole example and a more thorough discussion of both the biological motivation for this problem and the theoretical/practical aspects of using Information Criteria to compare, select and average models.
 
-Briefly, our problem is as follows: We want to explore the composition of milk across several primate species, it is hypothesized that females from species of primates with larger brains produce more _nutritious_ milk (loosely speaking this is done _in order to_ support the development of such big brains). This is an important question for evolutionary biologists and try to give and answer we will use 3 variables, two predictor variables: the proportion of neocortex compare to the total mass of the brain and the logarithm of the body mass of the mothers. And for predicted variable, the kilocalories per gram of milk. With these variables we are going to build 3 different linear models:
+Briefly, our problem is as follows: we want to explore the composition of milk across several primate species. It is hypothesized that females from species of primates with larger brains produce more _nutritious_ milk (loosely speaking, this is done _in order to_ support the development of such big brains). This is an important question for evolutionary biologists, and to try to give an answer we will use 3 variables: two predictor variables, the proportion of neocortex compared to the total mass of the brain and the logarithm of the body mass of the mothers; and a predicted variable, the kilocalories per gram of milk. With these variables we are going to build 3 different linear models:
 
 1. A model using only the neocortex variable
 2. A model using only the logarithm of the mass variable
@@ -211,7 +219,7 @@ az.plot_forest(traces, figsize=(10, 5));
 
 +++ {"papermill": {"duration": 0.052958, "end_time": "2020-11-29T12:14:55.196722", "exception": false, "start_time": "2020-11-29T12:14:55.143764", "status": "completed"}, "tags": []}
 
-Another option is to plot several traces in a same plot is to use `densityplot`. This plot is somehow similar to a forestplot, but we get truncated KDE plots (by default 95% credible intervals) grouped by variable names together with a point estimate (by default the mean).
+Another option to plot several traces in the same plot is to use `plot_density`. This plot is somewhat similar to a forest plot, but we get truncated KDE (kernel density estimation) plots (by default 95% credible intervals) grouped by variable names together with a point estimate (by default the mean).
 
 ```{code-cell} ipython3
 ---
@@ -223,12 +231,25 @@ papermill:
   status: completed
 tags: []
 ---
-az.plot_density(traces, var_names=["alpha", "sigma"]);
+ax = az.plot_density(
+    traces,
+    var_names=["alpha", "sigma"],
+    shade=0.1,
+    data_labels=["Model 0 (neocortex)", "Model 1 (log_mass)", "Model 2 (neocortex+log_mass)"],
+)
+
+ax[0, 0].set_xlabel("Density")
+ax[0, 0].set_ylabel("")
+ax[0, 0].set_title("95% Credible Intervals: alpha")
+
+ax[0, 1].set_xlabel("Density")
+ax[0, 1].set_ylabel("")
+ax[0, 1].set_title("95% Credible Intervals: sigma")
 ```
 
 +++ {"papermill": {"duration": 0.055089, "end_time": "2020-11-29T12:14:57.977616", "exception": false, "start_time": "2020-11-29T12:14:57.922527", "status": "completed"}, "tags": []}
 
-Now that we have sampled the posterior for the 3 models, we are going to use WAIC (Widely applicable information criterion) to compare the 3 models. We can do this using the `compare` function included with PyMC3.
+Now that we have sampled the posterior for the 3 models, we are going to use WAIC (Widely Applicable Information Criterion) to compare them. We can do this using the `compare` function included with ArviZ.
 
 ```{code-cell} ipython3
 ---
@@ -247,11 +268,11 @@ comp
 
 +++ {"papermill": {"duration": 0.056609, "end_time": "2020-11-29T12:14:58.387481", "exception": false, "start_time": "2020-11-29T12:14:58.330872", "status": "completed"}, "tags": []}
 
-We can see that the best model is `model_2`, the one with both predictor variables. Notice the DataFrame is ordered from lowest to highest WAIC (_i.e_ from _better_ to _worst_ model). Check [this notebook](model_comparison.ipynb) for a more detailed discussing on model comparison.
+We can see that the best model is `model_2`, the one with both predictor variables. Notice the DataFrame is ordered from lowest to highest WAIC (_i.e._ from _best_ to _worst_ model). Check {ref}`pymc:model_comparison` for a more detailed discussion on model comparison.
 
-We can also see that we get a column with the relative `weight` for each model (according to the first equation at the beginning of this notebook). This weights can be _vaguely_ interpreted as the probability that each model will make the correct predictions on future data. Of course this interpretation is conditional on the models used to compute the weights, if we add or remove models the weights will change. And also is dependent on the assumptions behind WAIC (or any other Information Criterion used). So try to do not overinterpret these `weights`.
+We can also see that we get a column with the relative `weight` for each model (according to the first equation at the beginning of this notebook). These weights can be _vaguely_ interpreted as the probability that each model will make the correct predictions on future data. Of course this interpretation is conditional on the models used to compute the weights; if we add or remove models the weights will change. It is also dependent on the assumptions behind WAIC (or any other Information Criterion used). So try not to overinterpret these `weights`.
 
-Now we are going to use copmuted `weights` to generate predictions based not on a single model but on the weighted set of models. This is one way to perform model averaging. Using PyMC3 we can call the `sample_posterior_predictive_w` function as follows:
+Now we are going to use the computed `weights` to generate predictions based not on a single model, but on the weighted set of models. This is one way to perform model averaging. Using PyMC we can call the `sample_posterior_predictive_w` function as follows:
 
 ```{code-cell} ipython3
 ---
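Although `sample_posterior_predictive_w` handles this for us, the underlying idea can be sketched with plain NumPy (hypothetical weights and draws, not the notebook's actual data): pick a model for each draw with probability equal to its weight, then take a draw from that model's posterior predictive samples.

```python
import numpy as np

rng = np.random.default_rng(42)

weights = np.array([0.6, 0.3, 0.1])  # hypothetical model weights (sum to 1)
n_draws = 1000

# Hypothetical posterior predictive draws from each of 3 models
ppc_models = [rng.normal(loc=mu, scale=1.0, size=n_draws) for mu in (0.0, 0.5, 1.0)]

# Pick a model for each draw in proportion to its weight...
model_idx = rng.choice(len(weights), size=n_draws, p=weights)
# ...then take one draw from that model's posterior predictive samples.
ppc_avg = np.array([rng.choice(ppc_models[k]) for k in model_idx])
```

The resulting `ppc_avg` is a mixture of the per-model predictive distributions, with mixture proportions given by the weights.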
@@ -275,7 +296,7 @@ ppc_w = pm.sample_posterior_predictive_w(
 
 Notice that we are passing the weights ordered by their index. We are doing this because we pass `traces` and `models` ordered from model 0 to 2, but the computed weights are ordered from lowest to highest WAIC (or equivalently from largest to lowest weight). In summary, we must be sure that we are correctly pairing the weights and models.
 
-We are also going to compute PPCs for the lowest-WAIC model
+We are also going to compute PPCs for the lowest-WAIC model.
 
 ```{code-cell} ipython3
 ---
@@ -292,7 +313,7 @@ ppc_2 = pm.sample_posterior_predictive(trace=trace_2, model=model_2, progressbar
 
 +++ {"papermill": {"duration": 0.058214, "end_time": "2020-11-29T12:15:55.404271", "exception": false, "start_time": "2020-11-29T12:15:55.346057", "status": "completed"}, "tags": []}
 
-A simple way to compare both kind of predictions is to plot their mean and hpd interval
+A simple way to compare both kinds of predictions is to plot their mean and HPD interval.
 
 ```{code-cell} ipython3
 ---
@@ -329,7 +350,30 @@ As we can see the mean value is almost the same for both predictions but the unc
 
 There are other ways to average models such as, for example, explicitly building a meta-model that includes all the models we have. We then perform parameter inference while jumping between the models. One problem with this approach is that jumping between models could hamper the proper sampling of the posterior.
 
-Besides averaging discrete models we can sometimes think of continuous versions of them. A toy example is to imagine that we have a coin and we want to estimated it's degree of bias, a number between 0 and 1 being 0.5 equal chance of head and tails. We could think of two separated models one with a prior biased towards heads and one towards tails. We could fit both separate models and then average them using, for example, IC-derived weights. An alternative, is to build a hierarchical model to estimate the prior distribution, instead of contemplating two discrete models we will be computing a continuous model that includes these the discrete ones as particular cases. Which approach is better? That depends on our concrete problem. Do we have good reasons to think about two discrete models, or is our problem better represented with a continuous bigger model?
+Besides averaging discrete models we can sometimes think of continuous versions of them. A toy example is to imagine that we have a coin and we want to estimate its degree of bias, a number between 0 and 1, where 0.5 means an equal chance of heads and tails (a fair coin). We could think of two separate models, one with a prior biased towards heads and one towards tails. We could fit both separate models and then average them using, for example, IC-derived weights. An alternative is to build a hierarchical model to estimate the prior distribution: instead of contemplating two discrete models we will be computing a continuous model that includes the discrete ones as particular cases. Which approach is better? That depends on our concrete problem. Do we have good reasons to think about two discrete models, or is our problem better represented with a bigger continuous model?
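The two-discrete-models route for the coin can be sketched in closed form (a toy illustration; the flip counts and the two Beta priors are made up). Because the Beta prior is conjugate to the binomial likelihood, each posterior is `Beta(a + heads, b + tails)` and each marginal likelihood is a beta-binomial probability, so BMA weights are available without sampling:

```python
import numpy as np
from scipy import stats

heads, tails = 9, 3            # hypothetical coin-flip data
n = heads + tails

# Two discrete models: a prior biased towards tails and one towards heads.
priors = [(2, 5), (5, 2)]      # assumed Beta(a, b) hyperparameters

def log_marginal(a, b):
    # Marginal likelihood of the data under a Beta(a, b) prior (beta-binomial).
    return stats.betabinom(n, a, b).logpmf(heads)

logml = np.array([log_marginal(a, b) for a, b in priors])
w = np.exp(logml - logml.max())
w /= w.sum()                   # BMA weights from the marginal likelihoods

# Posterior mean bias under each model, then the model-averaged mean.
post_means = [(a + heads) / (a + b + n) for a, b in priors]
averaged_mean = np.dot(w, post_means)
```

With data favoring heads, the heads-biased prior receives most of the weight, pulling the averaged posterior mean towards it; a hierarchical prior on `(a, b)` would instead interpolate between the two models continuously.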
+
++++
+
+## Authors
+
+* Authored by Osvaldo Martin in June 2017 ([pymc#2273](https://github.com/pymc-devs/pymc/pull/2273))
+* Updated by Osvaldo Martin in December 2017 ([pymc#2741](https://github.com/pymc-devs/pymc/pull/2741))
+* Updated by Marco Gorelli in November 2020 ([pymc#4271](https://github.com/pymc-devs/pymc/pull/4271))
+* Moved from pymc to pymc-examples repo in December 2020 ([pymc-examples#8](https://github.com/pymc-devs/pymc-examples/pull/8))
+* Updated by Raul Maldonado in February 2021 ([pymc-examples#25](https://github.com/pymc-devs/pymc-examples/pull/25))
+* Updated Markdown and styling by @reshamas in August 2022 ([pymc-examples#414](https://github.com/pymc-devs/pymc-examples/pull/414))
+
++++
+
+## References
+
+:::{bibliography}
+:filter: docname in docnames
+:::
+
++++
+
+## Watermark
 
 ```{code-cell} ipython3
 ---
@@ -344,3 +388,6 @@ tags: []
 %load_ext watermark
 %watermark -n -u -v -iv -w
 ```
+
+:::{include} ../page_footer.md
+:::
