`examples/mixture_models/dependent_density_regression.myst.md` (+10 −4)
```diff
@@ -5,7 +5,7 @@ jupytext:
     format_name: myst
     format_version: 0.13
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
```
```diff
@@ -286,7 +286,7 @@ pm.model_to_graphviz(model)
 
 +++ {"id": "gUPThEEEg8LF"}
 
-We now sample from the dependent density regression model using a Metropolis sampler. The default NUTS sampler has a difficult time sampling from this model, and the traceplots show poor convergence.
+We now sample from the dependent density regression model. The default NUTS sampler has a difficult time sampling the stick-breaking model, so we will employ a `CompoundStep`, using a slice sampler for `alpha` and `beta` while leaving NUTS for the rest of the parameters.
```
```diff
-Since only three mixture components have appreciable posterior expected weight for any data point, we can be fairly certain that truncation did not unduly influence our results. (If most components had appreciable posterior expected weight, truncation may have influenced the results, and we would have increased the number of components and sampled again.)
+Since only six mixture components have appreciable posterior expected weight for any data point, we can be fairly certain that truncation did not unduly influence our results. (If most components had appreciable posterior expected weight, truncation may have influenced the results, and we would have increased the number of components and sampled again.)
 
 Visually, it is reasonable that the LIDAR data has three linear components, so these posterior expected weights seem to have identified the structure of the data well. We now sample from the posterior predictive distribution to better understand the model's performance.
```