(diagnosing_with_divergences)=
# Diagnosing Biased Inference with Divergences

:::{post} Feb, 2018
:tags: hierarchical model, diagnostics
:category: intermediate
:author: Agustina Arroyuelo
:::
from collections import defaultdict
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
print(f"Running on PyMC3 v{pm.__version__}")
%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")
SEED = [20100420, 20134234]
This notebook is a PyMC3 port of Michael Betancourt's post on mc-stan.org. For a detailed explanation of the underlying mechanism please check the original post, Diagnosing Biased Inference with Divergences, and Betancourt's excellent paper, A Conceptual Introduction to Hamiltonian Monte Carlo.
+++
Bayesian statistics is all about building a model and estimating the parameters in that model. However, a naive or direct parameterization of our probability model can sometimes be ineffective; see, for example, Thomas Wiecki's blog post, Why hierarchical models are awesome, tricky, and Bayesian, on the same issue in PyMC3. Suboptimal parameterization often leads to slow sampling and, more problematically, biased MCMC estimators.
More formally, as explained in the original post, Diagnosing Biased Inference with Divergences:
Markov chain Monte Carlo (MCMC) approximates expectations with respect to a given target distribution, $\pi$,

$$\mathbb{E}_{\pi}[f] = \int \mathrm{d}q \, \pi(q) \, f(q),$$

using the states of a Markov chain, $\{q_{0}, \ldots, q_{N}\}$,

$$\mathbb{E}_{\pi}[f] \approx \hat{f}_{N} = \frac{1}{N + 1} \sum_{n = 0}^{N} f(q_{n}).$$

These estimators, however, are guaranteed to be accurate only asymptotically, as the chain grows to be infinitely long,

$$\lim_{N \rightarrow \infty} \hat{f}_{N} = \mathbb{E}_{\pi}[f].$$
To be useful in applied analyses, we need MCMC estimators to converge to the true expectation values sufficiently quickly that they are reasonably accurate before we exhaust our finite computational resources. This fast convergence requires strong ergodicity conditions to hold, in particular geometric ergodicity between a Markov transition and a target distribution. Geometric ergodicity is usually the necessary condition for MCMC estimators to follow a central limit theorem, which ensures not only that they are unbiased even after only a finite number of iterations but also that we can empirically quantify their precision using the MCMC standard error.
Unfortunately, proving geometric ergodicity is infeasible for any nontrivial problem. Instead we must rely on empirical diagnostics that identify obstructions to geometric ergodicity, and hence well-behaved MCMC estimators. For a general Markov transition and target distribution, the best known diagnostic is the split $\hat{R}$ statistic over an ensemble of Markov chains initialized from diffuse points in parameter space.
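To make the role of the central limit theorem concrete (a standard formulation, stated here for orientation rather than quoted from the original post): when geometric ergodicity holds, the estimator $\hat{f}_{N}$ is approximately normally distributed around the true expectation,

$$
\hat{f}_{N} \;\dot\sim\; \mathcal{N}\!\left(\mathbb{E}_{\pi}[f],\; \mathrm{SE}^{2}\right),
\qquad
\mathrm{SE} = \sqrt{\frac{\mathrm{Var}_{\pi}[f]}{\mathrm{ESS}}},
$$

where ESS denotes the effective sample size of the chain. It is exactly this error quantification that becomes untrustworthy when geometric ergodicity fails.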
Hamiltonian Monte Carlo, for example, is especially powerful in this regard as its failures to be geometrically ergodic with respect to any target distribution manifest in distinct behaviors that have been developed into sensitive diagnostics. One of these behaviors is the appearance of divergences that indicate the Hamiltonian Markov chain has encountered regions of high curvature in the target distribution which it cannot adequately explore.
In this notebook we aim to identify divergences and the underlying pathologies in PyMC3.
+++
The hierarchical model of the Eight Schools dataset (Rubin 1981) as seen in Stan:

$$\mu \sim \mathcal{N}(0, 5)$$
$$\tau \sim \text{Half-Cauchy}(0, 5)$$
$$\theta_{n} \sim \mathcal{N}(\mu, \tau)$$
$$y_{n} \sim \mathcal{N}(\theta_{n}, \sigma_{n}),$$

where $n \in \{1, \ldots, 8\}$ and the $\{y_{n}, \sigma_{n}\}$ are given as data.

Inferring the hierarchical hyperparameters, $\mu$ and $\tau$, together with the group-level parameters, $\theta_{1}, \ldots, \theta_{8}$, allows the model to pool data across the groups and reduce their posterior variance. Unfortunately, this direct, *centered* parameterization also squeezes the posterior distribution into a particularly challenging geometry that obstructs geometric ergodicity and hence biases MCMC estimation.
# Data of the Eight Schools Model
J = 8
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])
# tau = 25.
Stan model:
data {
int<lower=0> J;
real y[J];
real<lower=0> sigma[J];
}
parameters {
real mu;
real<lower=0> tau;
real theta[J];
}
model {
mu ~ normal(0, 5);
tau ~ cauchy(0, 5);
theta ~ normal(mu, tau);
y ~ normal(theta, sigma);
}
Similarly, we can easily implement it in PyMC3:
with pm.Model() as Centered_eight:
mu = pm.Normal("mu", mu=0, sigma=5)
tau = pm.HalfCauchy("tau", beta=5)
theta = pm.Normal("theta", mu=mu, sigma=tau, shape=J)
obs = pm.Normal("obs", mu=theta, sigma=sigma, observed=y)
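Before sampling, it can be useful to double-check that the implementation matches the statistical model written above. One optional way to do this (not part of the original post; it assumes the `graphviz` package is installed) is to render the model graph:

```python
# Visualize the dependency structure of the centered model:
# mu and tau feed into theta, which together with sigma generates obs.
pm.model_to_graphviz(Centered_eight)
```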
Unfortunately, this direct implementation of the model exhibits a pathological geometry that frustrates geometric ergodicity. Even more worrisome, the resulting bias is subtle and may not be obvious upon inspection of the Markov chain alone. To understand this bias, let's consider first a short Markov chain, commonly used when computational expediency is a motivating factor, and only afterwards a longer Markov chain.
+++
with Centered_eight:
short_trace = pm.sample(600, chains=2, random_seed=SEED)
In the original post a single chain of 1200 samples is used. However, since split $\hat{R}$ requires multiple chains in PyMC3, we fit 2 chains with 600 samples each instead.
The Gelman-Rubin diagnostic $\hat{R}$ doesn't indicate any problem (values are all close to 1):
az.summary(short_trace).round(2)
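Beyond the summary table, we can also compute the split $\hat{R}$ and effective sample size directly with ArviZ. A minimal sketch (the `short_idata` name is just for illustration; exact numbers vary from run to run):

```python
# Convert the PyMC3 MultiTrace into ArviZ InferenceData so that the
# diagnostics can be computed per variable.
with Centered_eight:
    short_idata = az.from_pymc3(short_trace)

# Split R-hat values close to 1 and non-tiny effective sample sizes would
# not, on their own, raise any alarm for this short run.
print(az.rhat(short_idata))
print(az.ess(short_idata))
```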
Moreover, the trace plots all look fine. Let's consider, for example, the hierarchical standard deviation $\tau$, or more specifically, its logarithm, $log(\tau)$. Because $\tau$ is constrained to be positive, its logarithm allows us to better resolve behavior for small values. Indeed, the chains appear to be exploring both small and large values reasonably well.
---
mystnb:
figure:
caption: Trace plot of log(tau)
name: nb-divergence-traceplot
image:
alt: log-tau
---
# plot the trace of log(tau)
ax = az.plot_trace(
{"log(tau)": short_trace.get_values(varname="tau_log__", combine=False)}, legend=True
)
ax[0, 1].set_xlabel("Draw")
ax[0, 1].set_ylabel("log(tau)")
ax[0, 1].set_title("")
ax[0, 0].set_xlabel("log(tau)")
ax[0, 0].set_title("Probability density function of log(tau)");
Unfortunately, the resulting estimate for the mean of $log(\tau)$ is strongly biased away from the true value, here shown in gray.
# plot the estimate for the mean of log(τ) cumulating mean
logtau = np.log(short_trace["tau"])
mlogtau = [np.mean(logtau[:i]) for i in np.arange(1, len(logtau))]
plt.figure(figsize=(15, 4))
plt.axhline(0.7657852, lw=2.5, color="gray")
plt.plot(mlogtau, lw=2.5)
plt.ylim(0, 2)
plt.xlabel("Iteration")
plt.ylabel("MCMC mean of log(tau)")
plt.title("MCMC estimation of log(tau)");
Hamiltonian Monte Carlo, however, is not so oblivious to these issues, as a non-negligible fraction of the iterations in our Markov chains end with a divergence.
# display the total number and percentage of divergent
divergent = short_trace["diverging"]
print("Number of Divergent %d" % divergent.nonzero()[0].size)
divperc = divergent.nonzero()[0].size / len(divergent) * 100  # divide by total draws across chains
print("Percentage of Divergent %.1f" % divperc)
Even with a few short chains these divergences are able to identify the bias and advise skepticism of any resulting MCMC estimators.

Additionally, because the divergent transitions, here shown in green, tend to be located near the pathologies, we can use them to identify the location of the problematic neighborhoods in parameter space.
def pairplot_divergence(trace, ax=None, divergence=True, color="C3", divergence_color="C2"):
theta = trace.get_values(varname="theta", combine=True)[:, 0]
logtau = trace.get_values(varname="tau_log__", combine=True)
if not ax:
_, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.plot(theta, logtau, "o", color=color, alpha=0.5)
if divergence:
divergent = trace["diverging"]
ax.plot(theta[divergent], logtau[divergent], "o", color=divergence_color)
ax.set_xlabel("theta[0]")
ax.set_ylabel("log(tau)")
ax.set_title("scatter plot between log(tau) and theta[0]")
return ax
pairplot_divergence(short_trace);
It is important to point out that the pathological samples from the trace are not necessarily concentrated at the funnel: when a divergence is encountered, the subtree being constructed is rejected and the transition samples uniformly from the existing discrete trajectory. Consequently, divergent samples will not be located exactly in the region of high curvature.
In PyMC3, we recently implemented a warning system that also saves the information of where the divergence occurred, so you can visualize these points directly. To be more precise, the divergence point included in the warning is the point where the problematic leapfrog step started; strictly speaking, the divergence happens somewhere along the leapfrog trajectory rather than at a single point. Nonetheless, visualizing these points should give a closer approximation of where the funnel is.

Notice that only the first 100 divergences are stored, so that we don't eat up all the memory.
divergent_point = defaultdict(list)
chain_warn = short_trace.report._chain_warnings
for i in range(len(chain_warn)):
for warning_ in chain_warn[i]:
if warning_.step is not None and warning_.extra is not None:
for RV in Centered_eight.free_RVs:
para_name = RV.name
divergent_point[para_name].append(warning_.extra[para_name])
for RV in Centered_eight.free_RVs:
para_name = RV.name
divergent_point[para_name] = np.asarray(divergent_point[para_name])
tau_log_d = divergent_point["tau_log__"]
theta0_d = divergent_point["theta"][:, 0]  # theta of the first school at each recorded divergence
Ndiv_recorded = len(tau_log_d)
_, ax = plt.subplots(1, 2, figsize=(15, 6), sharex=True, sharey=True)
pairplot_divergence(short_trace, ax=ax[0], color="C7", divergence_color="C2")
plt.title("scatter plot between log(tau) and theta[0]")
pairplot_divergence(short_trace, ax=ax[1], color="C7", divergence_color="C2")
theta_trace = short_trace["theta"]
theta0 = theta_trace[:, 0]
ax[1].plot(
[theta0[divergent == 1][:Ndiv_recorded], theta0_d],
[logtau[divergent == 1][:Ndiv_recorded], tau_log_d],
"k-",
alpha=0.5,
)
ax[1].scatter(
theta0_d, tau_log_d, color="C3", label="Location of Energy error (start location of leapfrog)"
)
plt.title("scatter plot between log(tau) and theta[0]")
plt.legend();
There are many other ways to explore and visualize the pathological region in the parameter space. For example, we can reproduce Figure 5b in Visualization in Bayesian workflow:
tracedf = pm.trace_to_dataframe(short_trace)
plotorder = [
"mu",
"tau",
"theta__0",
"theta__1",
"theta__2",
"theta__3",
"theta__4",
"theta__5",
"theta__6",
"theta__7",
]
tracedf = tracedf[plotorder]
_, ax = plt.subplots(1, 2, figsize=(15, 4), sharex=True, sharey=True)
ax[0].plot(tracedf.values[divergent == 0].T, color="k", alpha=0.025)
ax[0].plot(tracedf.values[divergent == 1].T, color="C2", lw=0.5)
ax[1].plot(tracedf.values[divergent == 0].T, color="k", alpha=0.025)
ax[1].plot(tracedf.values[divergent == 1].T, color="C2", lw=0.5)
# stack the recorded divergence points into an (n_divergences, 10) array,
# in the same variable order as `plotorder` (mu, tau, theta[0..7])
divsp = np.column_stack(
    [
        divergent_point["mu"],
        np.exp(divergent_point["tau_log__"]),
        divergent_point["theta"],
    ]
)
ax[1].plot(divsp.T, "C3", lw=0.5)
plt.ylim([-20, 40])
plt.xticks(range(10), plotorder)
plt.tight_layout()
# A small wrapper function for displaying the MCMC sampler diagnostics as above
def report_trace(trace):
# plot the trace of log(tau)
az.plot_trace({"log(tau)": trace.get_values(varname="tau_log__", combine=False)})
# plot the estimate for the mean of log(τ) cumulating mean
logtau = np.log(trace["tau"])
mlogtau = [np.mean(logtau[:i]) for i in np.arange(1, len(logtau))]
plt.figure(figsize=(15, 4))
plt.axhline(0.7657852, lw=2.5, color="gray")
plt.plot(mlogtau, lw=2.5)
plt.ylim(0, 2)
plt.xlabel("Iteration")
plt.ylabel("MCMC mean of log(tau)")
plt.title("MCMC estimation of log(tau)")
plt.show()
# display the total number and percentage of divergent
divergent = trace["diverging"]
print("Number of Divergent %d" % divergent.nonzero()[0].size)
    divperc = divergent.nonzero()[0].size / len(divergent) * 100  # divide by total draws across chains
print("Percentage of Divergent %.1f" % divperc)
# scatter plot between log(tau) and theta[0]
# for the identification of the problematic neighborhoods in parameter space
pairplot_divergence(trace);
Given the potential insensitivity of split $\hat{R}$ on single short chains, Stan recommends always running multiple chains as long as possible to have the best chance to observe any obstructions to geometric ergodicity. Because it is not always possible to run long chains for complex models, however, divergences are an incredibly powerful diagnostic for biased MCMC estimation.
with Centered_eight:
longer_trace = pm.sample(4000, chains=2, tune=1000, random_seed=SEED)
report_trace(longer_trace)
az.summary(longer_trace).round(2)
Similar to the result in Stan, $\hat{R}$ does not indicate any serious problem. However, the effective sample size per iteration has drastically fallen, and the trace plot shows the chain occasionally "sticking" as it approaches small values of $\tau$, exactly where we saw the divergences concentrating. These sticky intervals induce severe oscillations in the MCMC estimators early on, until they seem to finally settle into biased values.
In fact the sticky intervals are the Markov chain trying to correct the biased exploration. If we ran the chain even longer then it would eventually get stuck again and drag the MCMC estimator down towards the true value. Given an infinite number of iterations this delicate balance asymptotes to the true expectation as we’d expect given the consistency guarantee of MCMC. Stopping after any finite number of iterations, however, destroys this balance and leaves us with a significant bias.
More details can be found in Betancourt's recent paper.
+++
Divergences in Hamiltonian Monte Carlo arise when the Hamiltonian transition encounters regions of extremely large curvature, such as the opening of the hierarchical funnel. Unable to accurately resolve these regions, the transition malfunctions and flies off towards infinity. With the transitions unable to completely explore these regions of extreme curvature, we lose geometric ergodicity and our MCMC estimators become biased.
The algorithm implemented in Stan uses a heuristic to quickly identify these misbehaving trajectories, and hence label divergences, without having to wait for them to run all the way to infinity. This heuristic can be a bit aggressive, however, and sometimes labels transitions as divergent even when we have not lost geometric ergodicity.
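Roughly speaking, the heuristic flags a trajectory as divergent when the numerical error in the Hamiltonian accumulated along the simulated trajectory exceeds a large threshold. Schematically,

$$
\left| H(q, p) - H(q_{0}, p_{0}) \right| > \Delta_{\max},
$$

where $(q_{0}, p_{0})$ is the initial point of the trajectory, $H$ is the Hamiltonian, and $\Delta_{\max}$ is an implementation-dependent constant (in PyMC3 this corresponds to the `Emax` argument of `pm.NUTS`, with a default of 1000).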
To resolve this potential ambiguity we can adjust the step size, $\epsilon$, of the Hamiltonian transition. The smaller the step size, the more accurate the trajectory and the less likely it is to be mislabeled as a divergence. In other words, if we have geometric ergodicity between the Hamiltonian transition and the target distribution, then decreasing the step size will reduce and ultimately remove the divergences entirely. If we do not have geometric ergodicity, however, then decreasing the step size will not completely remove the divergences.
Like Stan, the step size in PyMC3 is tuned automatically during warm-up, but we can coerce smaller step sizes by tweaking the configuration of PyMC3's adaptation routine. In particular, we can increase the `target_accept` parameter from its default value of 0.8 closer to its maximum value of 1.
+++
with Centered_eight:
fit_cp85 = pm.sample(5000, chains=2, tune=2000, target_accept=0.85)
with Centered_eight:
fit_cp90 = pm.sample(5000, chains=2, tune=2000, target_accept=0.90)
with Centered_eight:
fit_cp95 = pm.sample(5000, chains=2, tune=2000, target_accept=0.95)
with Centered_eight:
fit_cp99 = pm.sample(5000, chains=2, tune=2000, target_accept=0.99)
df = pd.DataFrame(
[
longer_trace["step_size"].mean(),
fit_cp85["step_size"].mean(),
fit_cp90["step_size"].mean(),
fit_cp95["step_size"].mean(),
fit_cp99["step_size"].mean(),
],
columns=["Step_size"],
)
df["Divergent"] = pd.Series(
[
longer_trace["diverging"].sum(),
fit_cp85["diverging"].sum(),
fit_cp90["diverging"].sum(),
fit_cp95["diverging"].sum(),
fit_cp99["diverging"].sum(),
]
)
df["delta_target"] = pd.Series([".80", ".85", ".90", ".95", ".99"])
df
Here, the number of divergent transitions dropped dramatically when delta was increased to 0.99.
This behavior also has a nice geometric intuition. The more we decrease the step size, the deeper the Hamiltonian Markov chain can explore the neck of the funnel. Consequently, the marginal posterior distribution for $log(\tau)$ stretches further and further towards negative values as the step size decreases.
Since after tuning PyMC3 uses a smaller step size than Stan, the geometry is better explored.
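To see this stretching directly, we can (as an optional check that is not part of the original post) overlay the marginal distribution of $log(\tau)$ for the different `target_accept` settings. A minimal matplotlib sketch:

```python
# Overlay the marginal of log(tau) for increasing target_accept values.
# Smaller step sizes let the chain reach further into the neck of the funnel,
# so the left tail of log(tau) should stretch towards more negative values.
plt.figure(figsize=(10, 4))
for fit, label in [
    (longer_trace, "Centered, delta=0.80"),
    (fit_cp90, "Centered, delta=0.90"),
    (fit_cp99, "Centered, delta=0.99"),
]:
    plt.hist(np.log(fit["tau"]), bins=100, density=True, histtype="step", label=label)
plt.xlabel("log(tau)")
plt.ylabel("Density")
plt.legend();
```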
However, the Hamiltonian transition is still not geometrically ergodic with respect to the centered implementation of the Eight Schools model. Indeed, this is expected given the observed bias.
_, ax = plt.subplots(1, 1, figsize=(10, 6))
pairplot_divergence(fit_cp99, ax=ax, color="C3", divergence=False)
pairplot_divergence(longer_trace, ax=ax, color="C1", divergence=False)
ax.legend(["Centered, delta=0.99", "Centered, delta=0.80"]);
logtau0 = longer_trace["tau_log__"]
logtau2 = np.log(fit_cp90["tau"])
logtau1 = fit_cp99["tau_log__"]
plt.figure(figsize=(15, 4))
plt.axhline(0.7657852, lw=2.5, color="gray")
mlogtau0 = [np.mean(logtau0[:i]) for i in np.arange(1, len(logtau0))]
plt.plot(mlogtau0, label="Centered, delta=0.80", lw=2.5)
mlogtau2 = [np.mean(logtau2[:i]) for i in np.arange(1, len(logtau2))]
plt.plot(mlogtau2, label="Centered, delta=0.90", lw=2.5)
mlogtau1 = [np.mean(logtau1[:i]) for i in np.arange(1, len(logtau1))]
plt.plot(mlogtau1, label="Centered, delta=0.99", lw=2.5)
plt.ylim(0, 2)
plt.xlabel("Iteration")
plt.ylabel("MCMC mean of log(tau)")
plt.title("MCMC estimation of log(tau)")
plt.legend();
Although reducing the step size improves exploration, ultimately it only reveals the true extent of the pathology in the centered implementation. Fortunately, there is another way to implement hierarchical models that does not suffer from the same pathologies.
In a non-centered parameterization we do not try to fit the group-level parameters directly, rather we fit a latent Gaussian variable from which we can recover the group-level parameters with a scaling and a translation.
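Written out for the Eight Schools model (this mirrors the Stan and PyMC3 code below), the non-centered parameterization replaces the centered prior on $\theta_{n}$ with

$$\mu \sim \mathcal{N}(0, 5)$$
$$\tau \sim \text{Half-Cauchy}(0, 5)$$
$$\tilde{\theta}_{n} \sim \mathcal{N}(0, 1)$$
$$\theta_{n} = \mu + \tau \, \tilde{\theta}_{n},$$

so the sampled latent variables $\tilde{\theta}_{n}$ are a priori independent of $\mu$ and $\tau$, and the funnel-inducing dependence is moved into a deterministic transformation.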
Stan model:
data {
int<lower=0> J;
real y[J];
real<lower=0> sigma[J];
}
parameters {
real mu;
real<lower=0> tau;
real theta_tilde[J];
}
transformed parameters {
real theta[J];
for (j in 1:J)
theta[j] = mu + tau * theta_tilde[j];
}
model {
mu ~ normal(0, 5);
tau ~ cauchy(0, 5);
theta_tilde ~ normal(0, 1);
y ~ normal(theta, sigma);
}
with pm.Model() as NonCentered_eight:
mu = pm.Normal("mu", mu=0, sigma=5)
tau = pm.HalfCauchy("tau", beta=5)
theta_tilde = pm.Normal("theta_t", mu=0, sigma=1, shape=J)
theta = pm.Deterministic("theta", mu + tau * theta_tilde)
obs = pm.Normal("obs", mu=theta, sigma=sigma, observed=y)
with NonCentered_eight:
fit_ncp80 = pm.sample(5000, chains=2, tune=1000, random_seed=SEED, target_accept=0.80)
az.summary(fit_ncp80).round(2)
As shown above, the effective sample size per iteration has drastically improved, and the trace plots no longer show any "stickiness". However, we do still see the rare divergence. These infrequent divergences do not seem to concentrate anywhere in parameter space, which is indicative of the divergences being false positives.
report_trace(fit_ncp80)
As expected of false positives, we can remove the divergences entirely by decreasing the step size.
with NonCentered_eight:
fit_ncp90 = pm.sample(5000, chains=2, tune=1000, random_seed=SEED, target_accept=0.90)
# display the total number and percentage of divergent
divergent = fit_ncp90["diverging"]
print("Number of Divergent %d" % divergent.nonzero()[0].size)
The more agreeable geometry of the non-centered implementation allows the Markov chain to explore deep into the neck of the funnel, capturing even the smallest values of $\tau$ that are consistent with the measurements. Consequently, MCMC estimators from the non-centered chain rapidly converge towards their true expectation values.
_, ax = plt.subplots(1, 1, figsize=(10, 6))
pairplot_divergence(fit_ncp80, ax=ax, color="C0", divergence=False)
pairplot_divergence(fit_cp99, ax=ax, color="C3", divergence=False)
pairplot_divergence(fit_cp90, ax=ax, color="C1", divergence=False)
ax.legend(["Non-Centered, delta=0.80", "Centered, delta=0.99", "Centered, delta=0.90"]);
logtaun = fit_ncp80["tau_log__"]
plt.figure(figsize=(15, 4))
plt.axhline(0.7657852, lw=2.5, color="gray")
mlogtaun = [np.mean(logtaun[:i]) for i in np.arange(1, len(logtaun))]
plt.plot(mlogtaun, color="C0", lw=2.5, label="Non-Centered, delta=0.80")
mlogtau1 = [np.mean(logtau1[:i]) for i in np.arange(1, len(logtau1))]
plt.plot(mlogtau1, color="C3", lw=2.5, label="Centered, delta=0.99")
mlogtau0 = [np.mean(logtau0[:i]) for i in np.arange(1, len(logtau0))]
plt.plot(mlogtau0, color="C1", lw=2.5, label="Centered, delta=0.80")
plt.ylim(0, 2)
plt.xlabel("Iteration")
plt.ylabel("MCMC mean of log(tau)")
plt.title("MCMC estimation of log(tau)")
plt.legend();
- Adapted from Michael Betancourt's post January 2017, Diagnosing Biased Inference with Divergences
- Updated by Agustina Arroyuelo in February 2018, (pymc#2861)
- Updated by @CloudChaoszero in January 2021, (pymc-examples#25)
- Updated Markdown and styling by @reshamas in August 2022, (pymc-examples#402)
%load_ext watermark
%watermark -n -u -v -iv -w
:::{include} ../page_footer.md
:::