@@ -1,10 +1,10 @@
 ---
 jupytext:
   text_representation:
-    extension: .myst
+    extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.13.8
+    jupytext_version: 1.16.4
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
@@ -43,7 +43,6 @@ The two Python modules are
 
 As usual, we begin by importing some Python code.
 
-
 ```{code-cell} ipython3
 :tags: [hide-output]
 
@@ -80,10 +79,8 @@ from numpyro.infer import SVI as nSVI
 from numpyro.infer import ELBO as nELBO
 from numpyro.infer import Trace_ELBO as nTrace_ELBO
 from numpyro.optim import Adam as nAdam
-
 ```
 
-
 ## Unleashing MCMC on a Binomial Likelihood
 
 This lecture begins with the binomial example in the {doc}`quantecon lecture <prob_meaning>`.
@@ -252,7 +249,6 @@ We will use the following priors:
 
 - The truncated Laplace can be created using `Numpyro`'s `TruncatedDistribution` class.
 
-
 ```{code-cell} ipython3
 # used by Numpyro
 def TruncatedLogNormal_trans(loc, scale):
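For reference, `TruncatedDistribution` wraps a base distribution and restricts its support. A minimal sketch of the truncated Laplace mentioned above is given below; the `loc` and `scale` values are illustrative assumptions, not the lecture's parameters.

```python
# Hedged sketch: a Laplace distribution truncated to [0, 1] with
# numpyro's TruncatedDistribution (loc/scale values are assumed).
import jax
import numpyro.distributions as ndist

truncated_laplace = ndist.TruncatedDistribution(
    ndist.Laplace(loc=0.5, scale=0.1),  # base distribution on the real line
    low=0.0, high=1.0,                  # restrict support to [0, 1]
)
draws = truncated_laplace.sample(jax.random.PRNGKey(0), (5,))
print(draws)
```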
@@ -560,19 +556,17 @@ class BayesianInference:
         Computes numerically the posterior distribution with beta prior parametrized by (alpha0, beta0)
         given data using MCMC
         """
-        # tensorize
-        data = torch.tensor(data)
-
         # use pyro
         if self.solver=='pyro':
-
+            # tensorize
+            data = torch.tensor(data)
             nuts_kernel = NUTS(self.model)
             mcmc = MCMC(nuts_kernel, num_samples=num_samples, warmup_steps=num_warmup, disable_progbar=True)
             mcmc.run(data)
 
         # use numpyro
         elif self.solver=='numpyro':
-
+            data = np.array(data, dtype=float)
             nuts_kernel = nNUTS(self.model)
             mcmc = nMCMC(nuts_kernel, num_samples=num_samples, num_warmup=num_warmup, progress_bar=False)
             mcmc.run(self.rng_key, data=data)
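The NumPyro branch above is the standard NUTS-inside-MCMC pattern: Pyro's sampler consumes torch tensors while NumPyro's consumes NumPy/JAX arrays, which is why the conversion now lives inside each branch. A self-contained sketch of that pattern, with a toy Beta-Bernoulli model standing in for the lecture's model (the model, data, and sample counts here are illustrative assumptions):

```python
# Hedged sketch of the NumPyro NUTS/MCMC pattern used in the branch above.
import jax
import numpy as np
import numpyro
import numpyro.distributions as ndist
from numpyro.infer import MCMC, NUTS

def model(data):
    # toy Beta-Bernoulli model (assumed, analogous in shape to the lecture's)
    theta = numpyro.sample('theta', ndist.Beta(5., 5.))
    with numpyro.plate('N', len(data)):
        numpyro.sample('obs', ndist.Bernoulli(theta), obs=data)

data = np.array([1., 0., 1., 1.], dtype=float)  # numpyro wants arrays, not tensors
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000, progress_bar=False)
mcmc.run(jax.random.PRNGKey(0), data)
print(mcmc.get_samples()['theta'].mean())
```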
@@ -655,15 +649,15 @@ class BayesianInference:
             params : the learned parameters for guide
             losses : a vector of loss at each step
         """
-        # tensorize data
-        if not torch.is_tensor(data):
-            data = torch.tensor(data)
 
         # initiate SVI
         svi = self.SVI_init(guide_dist=guide_dist)
 
         # do gradient steps
         if self.solver=='pyro':
+            # tensorize data
+            if not torch.is_tensor(data):
+                data = torch.tensor(data)
             # store loss vector
             losses = np.zeros(n_steps)
             for step in range(n_steps):
@@ -676,6 +670,7 @@ class BayesianInference:
             }
 
         elif self.solver=='numpyro':
+            data = np.array(data, dtype=float)
             result = svi.run(self.rng_key, n_steps, data, progress_bar=False)
             params = dict(
                 (key, np.asarray(value)) for key, value in result.params.items()
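The `svi.run(self.rng_key, n_steps, data, progress_bar=False)` call returns a result object whose `.params` holds the learned guide parameters, which the code then converts with `np.asarray`. A hedged, self-contained sketch of that pattern (the toy model, guide, and parameter names below are assumptions, not the lecture's code):

```python
# Hedged sketch of the NumPyro SVI pattern: result.params holds the
# learned guide parameters, extracted with np.asarray as above.
import jax
import numpy as np
import numpyro
import numpyro.distributions as ndist
from numpyro.infer import SVI, Trace_ELBO
from numpyro.optim import Adam

def model(data):
    # toy Beta-Bernoulli model (assumed)
    theta = numpyro.sample('theta', ndist.Beta(5., 5.))
    with numpyro.plate('N', len(data)):
        numpyro.sample('obs', ndist.Bernoulli(theta), obs=data)

def beta_guide(data):
    # variational Beta family; parameter names and init values are assumed
    alpha_q = numpyro.param('alpha_q', 5., constraint=ndist.constraints.positive)
    beta_q = numpyro.param('beta_q', 5., constraint=ndist.constraints.positive)
    numpyro.sample('theta', ndist.Beta(alpha_q, beta_q))

data = np.array([1., 0., 1., 1.], dtype=float)
svi = SVI(model, beta_guide, Adam(0.05), Trace_ELBO())
result = svi.run(jax.random.PRNGKey(0), 2000, data, progress_bar=False)
params = {key: np.asarray(value) for key, value in result.params.items()}
print(params)
```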
@@ -898,7 +893,6 @@ For the same Beta prior, we shall
 
 Let's start with the analytical method that we described in this quantecon lecture <https://python.quantecon.org/prob_meaning.html>
 
-
 ```{code-cell} ipython3
 # First examine Beta priors
 BETA_pyro = BayesianInference(param=(5,5), name_dist='beta', solver='pyro')
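For the analytical benchmark being invoked here: with a Beta(alpha0, beta0) prior and k successes in N Bernoulli trials, the posterior is Beta(alpha0 + k, beta0 + N - k). A short sketch of that conjugate update, under assumed data values:

```python
# Hedged sketch of the conjugate Beta-binomial update used as the
# analytical benchmark; N and k below are illustrative, not lecture data.
from scipy.stats import beta as beta_dist

alpha0, beta0 = 5, 5                       # Beta prior, as in param=(5,5) above
N, k = 20, 12                              # assumed sample size and successes
posterior = beta_dist(alpha0 + k, beta0 + N - k)
print(posterior.mean(), posterior.interval(0.95))
```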
@@ -952,12 +946,10 @@ will be more accurate, as we shall see next.
 
 (Increasing the number of steps increases computation time though.)
 
-
 ```{code-cell} ipython3
 BayesianInferencePlot(true_theta, num_list, BETA_numpyro).SVI_plot(guide_dist='beta', n_steps=100000)
 ```
 
-
 ## Non-conjugate Prior Distributions
 
 Having assured ourselves that our MCMC and VI methods can work well when we have a conjugate prior and so can also compute the posterior analytically, we
@@ -1052,7 +1044,6 @@ SVI_num_steps = 50000
 example_CLASS = BayesianInference(param=(0,1), name_dist='uniform', solver='numpyro')
 print(f'=======INFO=======\nParameters: {example_CLASS.param}\nPrior Dist: {example_CLASS.name_dist}\nSolver: {example_CLASS.solver}')
 BayesianInferencePlot(true_theta, num_list, example_CLASS).SVI_plot(guide_dist='normal', n_steps=SVI_num_steps)
-
 ```
 
 ```{code-cell} ipython3