
Commit 991d934 (parent 97d267c)

2 files changed: +2,384 −1,373 lines


examples/mixture_models/dependent_density_regression.ipynb: +2,374 −1,369 lines changed (large diff not rendered)

examples/mixture_models/dependent_density_regression.myst.md: +10 −4 lines changed
@@ -5,7 +5,7 @@ jupytext:
     format_name: myst
     format_version: 0.13
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
@@ -286,7 +286,7 @@ pm.model_to_graphviz(model)
 
 +++ {"id": "gUPThEEEg8LF"}
 
-We now sample from the dependent density regression model using a Metropolis sampler. The default NUTS sampler has a difficult time sampling from this model, and the traceplots show poor convergence.
+We now sample from the dependent density regression model. The default NUTS sampler has a difficult time sampling the stick-breaking model, so we employ a compound step (PyMC's `CompoundStep`), using a slice sampler for `alpha` and `beta` while leaving NUTS for the rest of the parameters.
 
 ```{code-cell} ipython3
 ---
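Aside: PyMC builds a compound step automatically whenever an explicit step method covers only a subset of the model's free variables; the remaining variables default to NUTS. A minimal sketch of the pattern on a toy model (the model and variable names are illustrative stand-ins, not the notebook's stick-breaking model):

```python
import pymc as pm

with pm.Model() as toy_model:
    # Stand-ins for the notebook's `alpha` and `beta`.
    a = pm.Normal("a", 0.0, 1.0)
    b = pm.Normal("b", 0.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", a + b, sigma, observed=[0.1, -0.2, 0.3])

    # Slice sampling covers `a` and `b`; pm.sample pairs it with NUTS
    # for `sigma`, forming a CompoundStep under the hood.
    trace = pm.sample(step=pm.Slice([a, b]), tune=1_000, draws=1_000, chains=2)
```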
@@ -298,7 +298,13 @@ id: FSYdNHFUg8LF
 outputId: 829d4ee8-c971-4962-aa71-265f93eeb356
 ---
 with model:
-    trace = pm.sample(random_seed=SEED, step=pm.Metropolis(), draws=10_000, tune=10_000, cores=2)
+    trace = pm.sample(random_seed=SEED, step=pm.Slice([alpha, beta]), tune=5_000, cores=2)
+```
+
+We can see from the R-hat diagnostics below (all near 1.0) that the chains have converged reasonably well.
+
+```{code-cell} ipython3
+az.summary(trace, var_names=["beta"])
 ```
 
 +++ {"id": "io6KXPdgg8LF"}
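The convergence claim in the added prose can be checked programmatically: `az.summary` returns a DataFrame with an `r_hat` column that can be filtered against the usual 1.01 rule of thumb. A sketch, assuming `trace` is the result of the sampling cell above:

```python
import arviz as az

summary = az.summary(trace)                 # one row per parameter, with an "r_hat" column
suspect = summary[summary["r_hat"] > 1.01]  # common threshold for flagging non-convergence
print("parameters with high R-hat:", list(suspect.index))
```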
@@ -327,7 +333,7 @@ ax.set_ylabel("Largest posterior expected\nmixture weight");
 
 +++ {"id": "6Pq0WqBbg8LF"}
 
-Since only three mixture components have appreciable posterior expected weight for any data point, we can be fairly certain that truncation did not unduly influence our results. (If most components had appreciable posterior expected weight, truncation may have influenced the results, and we would have increased the number of components and sampled again.)
+Since only six mixture components have appreciable posterior expected weight for any data point, we can be fairly certain that truncation did not unduly influence our results. (If most components had appreciable posterior expected weight, truncation may have influenced the results, and we would have increased the number of components and sampled again.)
 
 Visually, it is reasonable that the LIDAR data has three linear components, so these posterior expected weights seem to have identified the structure of the data well. We now sample from the posterior predictive distribution to better understand the model's performance.
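The truncation check described in the changed paragraph amounts to counting components whose largest posterior expected weight over the data exceeds a small cutoff. A sketch, assuming the weights have been extracted into an array `w_post` of shape `(n_samples, n_points, n_components)` (the name and the 0.05 cutoff are illustrative, not from the notebook):

```python
import numpy as np

expected_w = w_post.mean(axis=0)                 # (n_points, n_components): posterior mean weights
largest_per_component = expected_w.max(axis=0)   # each component's peak weight over data points
n_active = int((largest_per_component > 0.05).sum())
print(f"{n_active} components carry appreciable weight somewhere in the data")
```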
