Commit dbec349

Tom's Dec 20 edits of two lectures
1 parent 4941d8f commit dbec349

File tree

2 files changed (+34 -3 lines)

lectures/bayes_nonconj.md

Lines changed: 3 additions & 3 deletions
@@ -13,7 +13,7 @@ kernelspec:
 
 # Non-Conjugate Priors
 
-This lecture is a sequel to the QuantEcon lecture <https://python.quantecon.org/prob_meaning.html>
+This lecture is a sequel to the {doc}`quantecon lecture <prob_meaning>`.
 
 That lecture offers a Bayesian interpretation of probability in a setting in which the likelihood function and the prior distribution
 over parameters just happened to form a **conjugate** pair in which
@@ -88,7 +88,7 @@ from numpyro.optim import Adam as nAdam
 
 ## Unleashing MCMC on a Binomial Likelihood
 
-This lecture begins with the binomial example in the QuantEcon lecture <https://python.quantecon.org/prob_meaning.html>
+This lecture begins with the binomial example in the {doc}`quantecon lecture <prob_meaning>`.
 
 That lecture computed a posterior
 
@@ -103,7 +103,7 @@ We use both the packages `pyro` and `numpyro` with assistance from `jax` to app
 
 We use several alternative prior distributions
 
-We compare computed posteriors with ones associated with a conjugate prior as described in QuantEcon lecture https://python.quantecon.org/prob_meaning.html
+We compare computed posteriors with ones associated with a conjugate prior as described in {doc}`the quantecon lecture <prob_meaning>`
 
 
 ### Analytical Posterior

lectures/prob_meaning.md

Lines changed: 31 additions & 0 deletions
@@ -693,3 +693,34 @@ Thus, the Bayesian statistician comes to believe that $\theta$ is near $.4$.
 As shown in the figure above, as the number of observations grows, the Bayesian coverage intervals (BCIs) become narrower and narrower around $0.4$.
 
 However, if you take a closer look, you will find that the centers of the BCIs are not exactly $0.4$, due to the persistent influence of the prior distribution and the randomness of the simulation path.
+
+
+## Role of a Conjugate Prior
+
+We have made assumptions that link the functional forms of our likelihood function and our prior in a way that has eased our calculations considerably.
+
+In particular, our assumptions that the likelihood function is **binomial** and that the prior distribution is a **beta distribution** have the consequence that the posterior distribution implied by Bayes' Law is also a **beta distribution**.
+
+So posterior and prior are both beta distributions, albeit ones with different parameters.
+
+When a likelihood function and prior fit together like hand and glove in this way, we can say that the prior and posterior are **conjugate distributions**.
+
+In this situation, we also sometimes say that we have a **conjugate prior** for the likelihood function $\textrm{Prob}(X | \theta)$.
+
+Typically, the functional form of the likelihood function determines the functional form of a **conjugate prior**.
+
+A natural question to ask is why should a person's personal prior about a parameter $\theta$ be restricted to be described by a conjugate prior?
+
+Why not some other functional form that more sincerely describes the person's beliefs?
+
+To be argumentative, one could ask, why should the form of the likelihood function have *anything* to say about my
+personal beliefs about $\theta$?
+
+A dignified response to that question is, well, it shouldn't, but if you want to compute a posterior easily you'll just be happier if your prior is conjugate to your likelihood.
+
+Otherwise, your posterior won't have a convenient analytical form and you'll be in the situation of wanting to
+apply the Markov chain Monte Carlo techniques deployed in {doc}`this quantecon lecture <bayes_nonconj>`.
+
+We also apply these powerful methods to approximating Bayesian posteriors for non-conjugate priors in
+{doc}`this quantecon lecture <ar1_bayes>` and {doc}`this quantecon lecture <ar1_turningpts>`.
+

0 commit comments
