
Commit dd60b0f

Move LKJ to howto. (#606)
* Move LKJ to howto.
* Move longitudinal to timeseries.
* Reorder sections.
* Rename sections.
1 parent 8e67a67 commit dd60b0f

File tree

5 files changed: +9 −9 lines changed


examples/case_studies/LKJ.ipynb renamed to examples/howto/LKJ.ipynb

+2 −2
@@ -161,7 +161,7 @@
 "\n",
 "The LKJ distribution provides a prior on the correlation matrix, $\\mathbf{C} = \\textrm{Corr}(x_i, x_j)$, which, combined with priors on the standard deviations of each component, [induces](http://www3.stat.sinica.edu.tw/statistica/oldpdf/A10n416.pdf) a prior on the covariance matrix, $\\Sigma$. Since inverting $\\Sigma$ is numerically unstable and inefficient, it is computationally advantageous to use the [Cholesky decomposition](https://en.wikipedia.org/wiki/Cholesky_decomposition) of $\\Sigma$, $\\Sigma = \\mathbf{L} \\mathbf{L}^{\\top}$, where $\\mathbf{L}$ is a lower-triangular matrix. This decomposition allows computation of the term $(\\mathbf{x} - \\mu)^{\\top} \\Sigma^{-1} (\\mathbf{x} - \\mu)$ using back-substitution, which is more numerically stable and efficient than direct matrix inversion.\n",
 "\n",
-"PyMC supports LKJ priors for the Cholesky decomposition of the covariance matrix via the [LKJCholeskyCov](https://docs.pymc.io/en/latest/api/distributions/generated/pymc.LKJCholeskyCov.html) distribution. This distribution has parameters `n` and `sd_dist`, which are the dimension of the observations, $\\mathbf{x}$, and the PyMC distribution of the component standard deviations, respectively. It also has a hyperparameter `eta`, which controls the amount of correlation between components of $\\mathbf{x}$. The LKJ distribution has the density $f(\\mathbf{C}\\ |\\ \\eta) \\propto |\\mathbf{C}|^{\\eta - 1}$, so $\\eta = 1$ leads to a uniform distribution on correlation matrices, while the magnitude of correlations between components decreases as $\\eta \\to \\infty$.\n",
+"PyMC supports LKJ priors for the Cholesky decomposition of the covariance matrix via the {class}`pymc.LKJCholeskyCov` distribution. This distribution has parameters `n` and `sd_dist`, which are the dimension of the observations, $\\mathbf{x}$, and the PyMC distribution of the component standard deviations, respectively. It also has a hyperparameter `eta`, which controls the amount of correlation between components of $\\mathbf{x}$. The LKJ distribution has the density $f(\\mathbf{C}\\ |\\ \\eta) \\propto |\\mathbf{C}|^{\\eta - 1}$, so $\\eta = 1$ leads to a uniform distribution on correlation matrices, while the magnitude of correlations between components decreases as $\\eta \\to \\infty$.\n",
 "\n",
 "In this example, we model the standard deviations with $\\textrm{Exponential}(1.0)$ priors, and the correlation matrix as $\\mathbf{C} \\sim \\textrm{LKJ}(\\eta = 2)$."
 ]
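
Taken together with the Exponential(1.0) priors it mentions, the changed paragraph maps directly onto a model definition. Below is a minimal sketch of that pattern, not code from the notebook; the synthetic `data`, the dimension `n=2`, and all variable names are illustrative assumptions:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 2))  # hypothetical observations with 2 components

with pm.Model() as model:
    # LKJ prior on the Cholesky factor of the covariance matrix:
    # `n` is the dimension of x, `eta=2.0` pulls correlations away from +/-1,
    # and `sd_dist` places Exponential(1.0) priors on the component std. devs.
    chol, corr, stds = pm.LKJCholeskyCov(
        "chol_cov", n=2, eta=2.0, sd_dist=pm.Exponential.dist(1.0)
    )
    mu = pm.Normal("mu", mu=0.0, sigma=1.0, shape=2)
    pm.MvNormal("obs", mu=mu, chol=chol, observed=data)
```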
@@ -308,7 +308,7 @@
 "id": "QOCi1RKvr2Ph"
 },
 "source": [
-"We sample from this model using NUTS and give the trace to [ArviZ](https://arviz-devs.github.io/arviz/) for summarization:"
+"We sample from this model using NUTS and give the trace to {ref}`arviz` for summarization:"
 ]
 },
 {
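
The sentence changed in this hunk corresponds to a one-line sampling call. A hedged sketch, assuming the `model` context from the previous sketch is in scope (`idata` is an illustrative name):

```python
import arviz as az

with model:
    # NUTS is PyMC's default sampler for continuous models
    idata = pm.sample(random_seed=42)

# Posterior means, credible intervals, and convergence diagnostics
print(az.summary(idata))
```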

examples/case_studies/LKJ.myst.md renamed to examples/howto/LKJ.myst.md

+2 −2
@@ -101,7 +101,7 @@ $$f(\mathbf{x}\ |\ \mu, \Sigma^{-1}) = (2 \pi)^{-\frac{k}{2}} |\Sigma|^{-\frac{1
 
 The LKJ distribution provides a prior on the correlation matrix, $\mathbf{C} = \textrm{Corr}(x_i, x_j)$, which, combined with priors on the standard deviations of each component, [induces](http://www3.stat.sinica.edu.tw/statistica/oldpdf/A10n416.pdf) a prior on the covariance matrix, $\Sigma$. Since inverting $\Sigma$ is numerically unstable and inefficient, it is computationally advantageous to use the [Cholesky decomposition](https://en.wikipedia.org/wiki/Cholesky_decomposition) of $\Sigma$, $\Sigma = \mathbf{L} \mathbf{L}^{\top}$, where $\mathbf{L}$ is a lower-triangular matrix. This decomposition allows computation of the term $(\mathbf{x} - \mu)^{\top} \Sigma^{-1} (\mathbf{x} - \mu)$ using back-substitution, which is more numerically stable and efficient than direct matrix inversion.
 
-PyMC supports LKJ priors for the Cholesky decomposition of the covariance matrix via the [LKJCholeskyCov](https://docs.pymc.io/en/latest/api/distributions/generated/pymc.LKJCholeskyCov.html) distribution. This distribution has parameters `n` and `sd_dist`, which are the dimension of the observations, $\mathbf{x}$, and the PyMC distribution of the component standard deviations, respectively. It also has a hyperparameter `eta`, which controls the amount of correlation between components of $\mathbf{x}$. The LKJ distribution has the density $f(\mathbf{C}\ |\ \eta) \propto |\mathbf{C}|^{\eta - 1}$, so $\eta = 1$ leads to a uniform distribution on correlation matrices, while the magnitude of correlations between components decreases as $\eta \to \infty$.
+PyMC supports LKJ priors for the Cholesky decomposition of the covariance matrix via the {class}`pymc.LKJCholeskyCov` distribution. This distribution has parameters `n` and `sd_dist`, which are the dimension of the observations, $\mathbf{x}$, and the PyMC distribution of the component standard deviations, respectively. It also has a hyperparameter `eta`, which controls the amount of correlation between components of $\mathbf{x}$. The LKJ distribution has the density $f(\mathbf{C}\ |\ \eta) \propto |\mathbf{C}|^{\eta - 1}$, so $\eta = 1$ leads to a uniform distribution on correlation matrices, while the magnitude of correlations between components decreases as $\eta \to \infty$.
 
 In this example, we model the standard deviations with $\textrm{Exponential}(1.0)$ priors, and the correlation matrix as $\mathbf{C} \sim \textrm{LKJ}(\eta = 2)$.
 
@@ -175,7 +175,7 @@ with model:
 
 +++ {"id": "QOCi1RKvr2Ph"}
 
-We sample from this model using NUTS and give the trace to [ArviZ](https://arviz-devs.github.io/arviz/) for summarization:
+We sample from this model using NUTS and give the trace to {ref}`arviz` for summarization:
 
 ```{code-cell} ipython3
 ---

sphinxext/thumbnail_extractor.py

+5 −5
@@ -105,17 +105,17 @@
     "introductory": "Introductory",
     "fundamentals": "Library Fundamentals",
     "howto": "How to",
-    "generalized_linear_models": "(Generalized) Linear and Hierarchical Linear Models",
+    "generalized_linear_models": "Generalized Linear Models",
     "case_studies": "Case Studies",
     "causal_inference": "Causal Inference",
-    "diagnostics_and_criticism": "Diagnostics and Model Criticism",
     "gaussian_processes": "Gaussian Processes",
+    "time_series": "Time Series",
+    "spatial": "Spatial Analysis",
+    "diagnostics_and_criticism": "Diagnostics and Model Criticism",
     "bart": "Bayesian Additive Regressive Trees",
     "mixture_models": "Mixture Models",
     "survival_analysis": "Survival Analysis",
-    "time_series": "Time Series",
-    "spatial": "Spatial Analysis",
-    "ode_models": "Inference in ODE models",
+    "ode_models": "ODE models",
     "samplers": "MCMC",
     "variational_inference": "Variational Inference",
 }
