class SMC_KERNEL(ABC):
-    """Base class for the Sequential Monte Carlo kernels
+    """Base class for the Sequential Monte Carlo kernels.

    To create a new SMC kernel you should subclass from this.

@@ -53,73 +53,73 @@ class SMC_KERNEL(ABC):
        to sampling from the prior distribution. This method is only called
        if `start` is not specified.

-    _initialize_kernel: default
+    _initialize_kernel : default
        Creates initial population of particles in the variable
        `self.tempered_posterior` and populates the `self.var_info` dictionary
        with information about model variables shape and size as
-        {var.name : (var.shape, var.size)
+        {var.name : (var.shape, var.size)}.

-        The functions self.prior_logp_func and self.likelihood_logp_func are
+        The functions `self.prior_logp_func` and `self.likelihood_logp_func` are
        created in this step. These expect a 1D numpy array with the summed
        sizes of each raveled model variable (in the order specified in
-        model.inial_point).
+        :meth:`pymc.Model.initial_point`).

        Finally, this method computes the log prior and log likelihood for
-        the initial particles, and saves them in self.prior_logp and
-        self.likelihood_logp.
+        the initial particles, and saves them in `self.prior_logp` and
+        `self.likelihood_logp`.

        This method should not be modified.

-    setup_kernel: optional
+    setup_kernel : optional
        May include any logic that should be performed before sampling
        starts.

    During each sampling stage the following methods are called in order:

-    update_beta_and_weights: default
-        The inverse temperature self.beta is updated based on the self.likelihood_logp
-        and `threshold` parameter
+    update_beta_and_weights : default
+        The inverse temperature self.beta is updated based on the `self.likelihood_logp`
+        and `threshold` parameter.

-        The importance self.weights of each particle are computed from the old and newly
-        selected inverse temperature
+        The importance `self.weights` of each particle are computed from the old and newly
+        selected inverse temperature.

        The iteration number stored in `self.iteration` is updated by this method.

-        Finally the model log_marginal_likelihood of the tempered posterior
-        is updated from these weights
+        Finally, the model `log_marginal_likelihood` of the tempered posterior
+        is updated from these weights.

-    resample: default
-        The particles in self.posterior are sampled with replacement based
-        on self.weights, and the used resampling indexes are saved in
+    resample : default
+        The particles in `self.posterior` are sampled with replacement based
+        on `self.weights`, and the used resampling indexes are saved in
        `self.resampling_indexes`.

-        The arrays self.prior_logp, self.likelihood_logp are rearranged according
-        to the order of the resampled particles. self.tempered_posterior_logp
-        is computed from these and the current self.beta
+        The arrays `self.prior_logp` and `self.likelihood_logp` are rearranged according
+        to the order of the resampled particles. `self.tempered_posterior_logp`
+        is computed from these and the current `self.beta`.

-    tune: optional
-        May include logic that should be performed before every mutation step
+    tune : optional
+        May include logic that should be performed before every mutation step.

-    mutate: REQUIRED
-        Mutate particles in self.tempered_posterior
+    mutate : REQUIRED
+        Mutate particles in `self.tempered_posterior`.

-        This method is further responsible to update the self.prior_logp,
-        self.likelihod_logp and self.tempered_posterior_logp, corresponding
-        to each mutated particle
+        This method is further responsible for updating `self.prior_logp`,
+        `self.likelihood_logp` and `self.tempered_posterior_logp`, corresponding
+        to each mutated particle.

-    sample_stats: default
+    sample_stats : default
        Returns important sampling_stats at the end of each stage in a dictionary
-        format. This will be saved in the final InferenceData objcet under `sample_stats`.
+        format. This will be saved in the final InferenceData object under `sample_stats`.

    Finally, at the end of sampling the following methods are called:

-    _posterior_to_trace: default
+    _posterior_to_trace : default
        Convert final population of particles to a posterior trace object.
        This method should not be modified.

-    sample_settings: default:
+    sample_settings : default
        Returns important sample_settings at the end of sampling in a dictionary
-        format. This will be saved in the final InferenceData objcet under `sample_stats`.
+        format. This will be saved in the final InferenceData object under `sample_stats`.

    """

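For concreteness, here is a minimal sketch of a custom kernel built on the lifecycle described above. The class name, fixed proposal scale, and local random generator are illustrative choices and not part of PyMC; only the hooks and attributes named in the docstring (`setup_kernel`, `mutate`, `self.tempered_posterior`, `self.prior_logp_func`, `self.likelihood_logp_func`, `self.beta`, ...) come from the source. `tune`, `sample_stats`, and `sample_settings` keep their defaults.

import numpy as np

# Hypothetical subclass for illustration only; SMC_KERNEL is the base class above.
class RandomWalkKernel(SMC_KERNEL):
    def setup_kernel(self):
        # Optional hook, run once before sampling starts.
        self.scale = 0.1                            # illustrative fixed proposal scale
        self.rng_local = np.random.default_rng()    # local generator for the sketch

    def mutate(self):
        # REQUIRED: perturb particles and refresh the cached log-densities.
        proposal = self.tempered_posterior + self.rng_local.normal(
            scale=self.scale, size=self.tempered_posterior.shape
        )
        prior_logp = np.array([self.prior_logp_func(x) for x in proposal])
        likelihood_logp = np.array([self.likelihood_logp_func(x) for x in proposal])
        proposal_logp = prior_logp + likelihood_logp * self.beta

        # Metropolis accept/reject against the current tempered posterior.
        accepted = (
            np.log(self.rng_local.uniform(size=self.draws))
            < proposal_logp - self.tempered_posterior_logp
        )
        self.tempered_posterior[accepted] = proposal[accepted]
        self.prior_logp[accepted] = prior_logp[accepted]
        self.likelihood_logp[accepted] = likelihood_logp[accepted]
        self.tempered_posterior_logp[accepted] = proposal_logp[accepted]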
@@ -132,23 +132,29 @@ def __init__(
        threshold=0.5,
    ):
        """
+        Initialize the SMC_kernel class.

        Parameters
        ----------
-        draws: int
-            The number of samples to draw from the posterior (i.e. last stage). And also the number of
+        draws : int, default 2000
+            The number of samples to draw from the posterior (i.e. last stage). Also the number of
            independent chains. Defaults to 2000.
-        start: dict, or array of dict
+        start : dict, or array of dict, default None
            Starting point in parameter space. It should be a list of dict with length `chains`.
            When None (default) the starting point is sampled from the prior distribution.
-        model: Model (optional if in ``with`` context) ).
-        random_seed: int
+        model : Model (optional if in ``with`` context).
+        random_seed : int, array_like of int, RandomState or Generator, optional
            Value used to initialize the random number generator.
-        threshold: float
+        threshold : float, default 0.5
            Determines the change of beta from stage to stage, i.e. indirectly the number of stages,
            the higher the value of `threshold` the higher the number of stages. Defaults to 0.5.
            It should be between 0 and 1.

+        Attributes
+        ----------
+        self.var_info : dict
+            Dictionary that contains information about model variables shape and size.
+
        """

        self.draws = draws
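As a usage reference for these parameters, the sketch below assumes the kernel is driven through `pm.sample_smc`, which constructs the kernel internally and forwards extra keyword arguments such as `threshold` to it; the toy model is purely illustrative.

import pymc as pm

with pm.Model():
    mu = pm.Normal("mu", 0, 1)
    pm.Normal("obs", mu, 1, observed=[0.1, -0.3, 0.2])

    idata = pm.sample_smc(
        draws=2000,      # particles per chain = posterior samples per chain
        threshold=0.5,   # assumed to be forwarded to the kernel; controls how fast beta grows
        random_seed=42,  # seeds the kernel's random number generator
    )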
@@ -199,7 +205,7 @@ def initialize_population(self) -> Dict[str, np.ndarray]:
        return cast(Dict[str, np.ndarray], dict_prior)

    def _initialize_kernel(self):
-        """Create variables and logp function necessary to run kernel
+        """Create variables and logp function necessary to run SMC kernel

        This method should not be overwritten. If needed, use `setup_kernel`
        instead.
@@ -301,17 +307,17 @@ def mutate(self):
    def sample_stats(self) -> Dict:
        """Stats to be saved at the end of each stage

-        These stats will be saved under `sample_stats` in the final InferenceData.
+        These stats will be saved under `sample_stats` in the final InferenceData object.
        """
        return {
            "log_marginal_likelihood": self.log_marginal_likelihood if self.beta == 1 else np.nan,
            "beta": self.beta,
        }

    def sample_settings(self) -> Dict:
-        """Kernel settings to be saved once at the end of sampling
+        """SMC_kernel settings to be saved once at the end of sampling.

-        These stats will be saved under `sample_stats` in the final InferenceData.
+        These stats will be saved under `sample_stats` in the final InferenceData object.

        """
        return {
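Because `sample_stats` and `sample_settings` are default hooks, a custom kernel (which must also implement `mutate`, as described above) can extend them rather than replace them. A minimal sketch, where the `acc_rate` attribute is a hypothetical statistic the subclass would have to track itself:

from typing import Dict


class MyKernel(SMC_KERNEL):  # SMC_KERNEL is the base class defined above
    def sample_stats(self) -> Dict:
        stats = super().sample_stats()      # keep log_marginal_likelihood and beta
        stats["acc_rate"] = self.acc_rate   # hypothetical extra per-stage statistic
        return stats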
@@ -347,15 +353,19 @@ def _posterior_to_trace(self, chain=0) -> NDArray:


class IMH(SMC_KERNEL):
-    """Independent Metropolis-Hastings SMC kernel"""
+    """Independent Metropolis-Hastings SMC_kernel"""

    def __init__(self, *args, correlation_threshold=0.01, **kwargs):
        """
        Parameters
        ----------
-        correlation_threshold: float
-            The lower the value the higher the number of IMH steps computed automatically.
+        correlation_threshold : float, default 0.01
+            The lower the value, the higher the number of IMH steps computed automatically.
            Defaults to 0.01. It should be between 0 and 1.
+        **kwargs : dict, optional
+            Keyword arguments passed to the SMC_kernel. Refer to SMC_kernel documentation for a
+            list of all possible arguments.
+
        """
        super().__init__(*args, **kwargs)
        self.correlation_threshold = correlation_threshold
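Complementing the earlier `sample_smc` sketch, kernel selection and kernel-specific arguments such as `correlation_threshold` are assumed to be passed the same way, via `kernel=` and forwarded keyword arguments; the import path for `IMH` may differ between PyMC versions.

import pymc as pm
from pymc.smc import IMH  # import location may vary across PyMC versions

with pm.Model():
    x = pm.Normal("x", 0, 1)
    pm.Normal("y", x, 1, observed=[0.2, -0.1, 0.4])
    # correlation_threshold is assumed to be forwarded to IMH.__init__
    idata = pm.sample_smc(kernel=IMH, correlation_threshold=0.05)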
@@ -449,15 +459,19 @@ def get(self, b):


class MH(SMC_KERNEL):
-    """Metropolis-Hastings SMC kernel"""
+    """Metropolis-Hastings SMC_kernel"""

    def __init__(self, *args, correlation_threshold=0.01, **kwargs):
        """
        Parameters
        ----------
-        correlation_threshold: float
-            The lower the value the higher the number of MH steps computed automatically.
+        correlation_threshold : float, default 0.01
+            The lower the value, the higher the number of MH steps computed automatically.
            Defaults to 0.01. It should be between 0 and 1.
+        **kwargs : dict, optional
+            Keyword arguments passed to the SMC_kernel. Refer to SMC_kernel documentation for a
+            list of all possible arguments.
+
        """
        super().__init__(*args, **kwargs)
        self.correlation_threshold = correlation_threshold
@@ -468,7 +482,7 @@ def __init__(self, *args, correlation_threshold=0.01, **kwargs):

    def setup_kernel(self):
        """Proposal dist is just a Multivariate Normal with unit identity covariance.
-        Dimension specific scaling is provided by self.proposal_scales and set in self.tune()
+        Dimension-specific scaling is provided by `self.proposal_scales` and set in `self.tune()`.
        """
        ndim = self.tempered_posterior.shape[1]
        self.proposal_scales = np.full(self.draws, min(1, 2.38**2 / ndim))
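The `2.38**2 / ndim` factor is the classic optimal random-walk Metropolis scaling of Roberts, Gelman and Gilks, capped here at 1. A rough sketch of how such per-particle scales would enter a unit-covariance Gaussian proposal (the sizes are illustrative and the real `mutate` implementation may differ in details):

import numpy as np

rng = np.random.default_rng()
draws, ndim = 2000, 10                                     # illustrative sizes
proposal_scales = np.full(draws, min(1, 2.38**2 / ndim))   # optimal-scaling rule, capped at 1
current = np.zeros((draws, ndim))                          # stand-in for self.tempered_posterior
# One proposal per particle: unit-covariance noise times that particle's scale.
proposal = current + rng.normal(size=(draws, ndim)) * proposal_scales[:, None]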
@@ -586,11 +600,11 @@ def _logp_forw(point, out_vars, in_vars, shared):
    Parameters
    ----------
    out_vars : list
-        containing :class:`pymc.Distribution` for the output variables
+        Containing Distribution for the output variables
    in_vars : list
-        containing :class:`pymc.Distribution` for the input variables
+        Containing Distribution for the input variables
    shared : list
-        containing :class:`aesara.tensor.Tensor` for depended shared data
+        Containing TensorVariable for dependent shared data
    """

    # Replace integer inputs with rounded float inputs