
Small typo corrections in notebooks SARMA and VARMAX examples #440

Open
wants to merge 1 commit into
base: main
28 changes: 14 additions & 14 deletions notebooks/SARMA Example.ipynb
@@ -80,7 +80,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "827516e4",
"metadata": {},
"outputs": [],
@@ -100,7 +100,7 @@
"# Hidden state noise coefficients\n",
"R = np.array([[1.0], [MA_params[0]]])\n",
"\n",
"# Hidden state covaraince matrix\n",
"# Hidden state covariance matrix\n",
"Q = np.array([[0.8]])\n",
"\n",
"# Observation matrix\n",
@@ -666,9 +666,9 @@
"source": [
"# Estimation\n",
"\n",
"Finally on to actually fitting the model. There are ways to speed things up a bit. We are using stationary covariance initialization, which saves us from estimating the intial covaraince matrix. PyMC Statespace also offers different flavors of Kalman Filter, some with some built in speedups (`steady_state`, but it's not JAX compatible). \n",
"Finally on to actually fitting the model. There are ways to speed things up a bit. We are using stationary covariance initialization, which saves us from estimating the initial covariance matrix. PyMC Statespace also offers different flavors of Kalman Filter, some with some built in speedups (`steady_state`, but it's not JAX compatible). \n",
"\n",
"For now we'll just go with the standard filter. There is a separate notbook explaining and comparing the different filtering choices."
"For now we'll just go with the standard filter. There is a separate notebook explaining and comparing the different filtering choices."
]
},
{
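For context, a hedged sketch of how the choices described in this cell (stationary covariance initialization, the filter flavor) are typically made when constructing the model. The import path, class name `BayesianSARIMA`, and keyword names below are assumptions rather than confirmed API; check the pymc-statespace documentation.

```python
# A hedged sketch, not the notebook's code. The import path, class name, and
# keyword names are assumptions -- consult the pymc-statespace docs.
from pymc_experimental.statespace import BayesianSARIMA

ss_mod = BayesianSARIMA(
    order=(1, 0, 1),                 # ARMA(1, 1) on the observed series
    stationary_initialization=True,  # skip estimating the initial covariance
    filter_type="standard",          # "steady_state" is faster but not JAX compatible
)
```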
@@ -1141,7 +1141,7 @@
"\\Sigma^\\star &= T \\Sigma^\\star T^T − T \\Sigma^\\star Z^T F{-1} Z \\Sigma^\\star T^T + R Q R^T\n",
"\\end{align}$$\n",
"\n",
"The first equation assumes that $T$ is invertable, which means that the system needs to be stationary. These results thus don't apply to, say, a Gaussian Random Walk model. The second equation is a Discrete Matrix Riccati Equation. A `pytensor` solver is available, but is not JAX compaible."
"The first equation assumes that $T$ is invertable, which means that the system needs to be stationary. These results thus don't apply to, say, a Gaussian Random Walk model. The second equation is a Discrete Matrix Riccati Equation. A `pytensor` solver is available, but is not JAX compatible."
]
},
{
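As a reference for the Riccati equation discussed in this cell, the steady-state covariance can also be checked outside the model with SciPy's discrete algebraic Riccati solver. A minimal sketch, assuming the `T`, `Z`, `R`, `Q` matrices from the earlier cell (`T` is not shown in this diff), with a small jitter standing in for the measurement-noise covariance `H`, which the solver needs to be nonsingular:

```python
# Sketch: solve the filtering Riccati equation for the steady-state covariance.
# SciPy's DARE form X = A^T X A - (A^T X B)(R + B^T X B)^-1 (B^T X A) + Q maps
# onto the equation above with A = T^T, B = Z^T, Q = R Q R^T, R = H.
import numpy as np
from scipy.linalg import solve_discrete_are

H = 1e-8 * np.eye(Z.shape[0])  # jitter in place of measurement noise
P_star = solve_discrete_are(a=T.T, b=Z.T, q=R @ Q @ R.T, r=H)
```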
@@ -1311,7 +1311,7 @@
"source": [
"Comparing the posterior outputs, you can see that only the Kalman predictions have posterior errors. This is because there is no measurement error in this model, nor unobserved shocks to the hidden states. In the context of the Kalman Filter, this has the effect of noiselessly encoding the data during the filter step (and the smoother is based on the filtered distribution). \n",
"\n",
"In all three posterior predicitive distributions, we can see that there is noise in the hidden state estimation. This amounts to estimation of the innovation series, scaled by the MA parameter $\\theta$ (refer to the algebra above). This amounts to draws from a fixed Normal distribution in the filtered and predictive distributions, but more closely follows the data in the smoothed distirbution, since this takes into account both past and future observations. "
"In all three posterior predicitive distributions, we can see that there is noise in the hidden state estimation. This amounts to estimation of the innovation series, scaled by the MA parameter $\\theta$ (refer to the algebra above). This amounts to draws from a fixed Normal distribution in the filtered and predictive distributions, but more closely follows the data in the smoothed distribution, since this takes into account both past and future observations. "
]
},
{
@@ -1438,7 +1438,7 @@
"source": [
"## Forecasting\n",
"\n",
"Forcasting is also easy, just use the `forecast` method."
"Forecasting is also easy, just use the `forecast` method."
]
},
{
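As a usage sketch of the `forecast` method mentioned in this cell (the argument names below are assumptions, not confirmed by this diff):

```python
# Hedged sketch -- exact argument names may differ in pymc-statespace.
forecast_idata = ss_mod.forecast(idata, start=data.index[-1], periods=36)
```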
@@ -1529,7 +1529,7 @@
"source": [
"## Porcupine Graph\n",
"\n",
"A \"procupine graph\" shows model forcasts at different periods in the time series. The name comes from the fact that forecasts lines poke out from the data like porcupine quills. We make one for this data to show the flexibility of the forecast method.\n",
"A \"porcupine graph\" shows model forecasts at different periods in the time series. The name comes from the fact that forecasts lines poke out from the data like porcupine quills. We make one for this data to show the flexibility of the forecast method.\n",
"\n",
"This isn't a true porcupine graph though, because the \"forecasts\" for the in-sample period are generated using parameters fit on data from the future. As noted, it's just a nice demonstration of the forecast method."
]
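A sketch of how such a plot could be assembled by looping forecasts over several in-sample start dates; the `forecast` arguments and the layout of the returned object are assumptions here:

```python
# Illustrative only: re-uses the full-sample posterior, so these are not true
# out-of-sample forecasts, and the returned variable/coord names are assumed.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(data.index, data.values, color="k", lw=1)

for start in data.index[::24]:
    fc = ss_mod.forecast(idata, start=start, periods=12)
    # Assumed layout: an xarray object with chain/draw dims and a "time" coord.
    path = fc["forecast_observed"].mean(dim=["chain", "draw"]).squeeze()
    ax.plot(path["time"], path, alpha=0.8)
```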
@@ -1776,7 +1776,7 @@
"source": [
"## Interpretable ARMA\n",
"\n",
"In addition to the usual formulation, there's also an \"intrepretable\" formulation of the ARMA model. It has significantly more states, which makes it a bad choice if speed is critical. But for a smallish model on smallish data, it lets us directly recover the innovation trajectory.\n",
"In addition to the usual formulation, there's also an \"interpretable\" formulation of the ARMA model. It has significantly more states, which makes it a bad choice if speed is critical. But for a smallish model on smallish data, it lets us directly recover the innovation trajectory.\n",
"\n",
"We can also add measurement error to the model, in case we have some reason to doubt the data. We'll also show off the missing data interpolation ability of the state space model."
]
@@ -2299,7 +2299,7 @@
"source": [
"# Seasonal Terms and Differences\n",
"\n",
"Next, consider the airline passanger dataset. This dataset is interesting for several reasons. It has a non-stationary trends, and exhibits a strong seasonal pattern. "
"Next, consider the airline passenger dataset. This dataset is interesting for several reasons. It has a non-stationary trends, and exhibits a strong seasonal pattern. "
]
},
{
@@ -2350,7 +2350,7 @@
"\n",
"The second source of non-stationarity is the growing autocovariance. It is clear that the size of the seasonal pattern is increasing over time, which is a tell-tale sign of a multiplicative timeseries, wherein the level of the series gets into the variance. Unfortunately, linear state space models can only handle linear models. So we will need to convert the multiplicative errors to linear errors by taking logs of the data. This cannot be done inside the model, and needs to be done by hand.\n",
"\n",
"Let's take a look at what the log differences look like. This is just for illustration -- as noted, we don't acutally need to difference by hand"
"Let's take a look at what the log differences look like. This is just for illustration -- as noted, we don't actually need to difference by hand"
]
},
{
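For illustration, the by-hand version of what this cell describes might look like the following, assuming the series lives in a pandas Series named `passengers`:

```python
# Minimal sketch: log to turn multiplicative seasonality into additive, then
# difference purely for plotting. The statespace model handles differencing
# itself, so none of this is fed back into the model.
import numpy as np

log_passengers = np.log(passengers)   # multiplicative -> additive
d_log = log_passengers.diff()         # first difference removes the trend
sd_log = d_log.diff(12)               # seasonal difference at lag 12
```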
@@ -2442,7 +2442,7 @@
},
{
"cell_type": "code",
"execution_count": 41,
"execution_count": null,
"id": "9d0e15bf-2f88-4c50-ba4b-900a9bc66beb",
"metadata": {},
"outputs": [
@@ -2704,7 +2704,7 @@
" \"x0\", pt.concatenate([intercept, pt.zeros(ss_mod.k_states - 1)]), dims=[\"state\"]\n",
" )\n",
"\n",
" # Give State 0 (the non-zero one) it's own sigma for the initial covariance, while all the stationary states can share a single\n",
" # Give State 0 (the non-zero one) its own sigma for the initial covariance, while all the stationary states can share a single\n",
" # sigma\n",
" sigma_P0 = pm.Gamma(\"sigma_P0\", alpha=2, beta=10, shape=(2,))\n",
" P0 = pt.eye(ss_mod.k_states) * sigma_P0[1]\n",
@@ -2843,7 +2843,7 @@
"id": "23ff69b1-e41e-4183-a392-b472a43d7e48",
"metadata": {},
"source": [
"### Forcasts"
"### Forecasts"
]
},
{
18 changes: 9 additions & 9 deletions notebooks/VARMAX Example.ipynb
@@ -214,10 +214,10 @@
"source": [
"# PyMC Statespace\n",
"\n",
"To fit a VAR model with `pymc-statespace`, use the `BayesianVARMAX` model. We will recieve a message giving the names of the parameters we need to set priors for. In this case, we need:\n",
"To fit a VAR model with `pymc-statespace`, use the `BayesianVARMAX` model. We will receive a message giving the names of the parameters we need to set priors for. In this case, we need:\n",
"\n",
"- `x0`, a guess at the initial state of the system. \n",
"- `P0`, a guess at the initial covaraince of the system\n",
"- `P0`, a guess at the initial covariance of the system\n",
"- `ar_params`, the autoregressive parameters\n",
"- `state_cov`, the covariance matrix of the shocks to the system\n",
"\n",
@@ -348,7 +348,7 @@
"id": "16f65990-6343-4cff-ae14-8ba6ea91dbb7",
"metadata": {},
"source": [
"For readibility in the PyMC model block, we can unpack them into separate variables"
"For readability in the PyMC model block, we can unpack them into separate variables"
]
},
{
@@ -409,7 +409,7 @@
"2. Ravel all these variables and concatenate them together into a single vector called \"theta\".\n",
"3. One variable at a time, read \"theta\" like tape. For each chunk of variables, reshape them into the expected shape and put them in the correct location in the correct matrix.\n",
"\n",
"We are worried about where the `ar_coefs` end up, so let's look at an example. We start with a 3d tensor, because PyMC makes it really easy to decalare the variables that way. It will look like a matrix of three 3-by-2 matrices:\n",
"We are worried about where the `ar_coefs` end up, so let's look at an example. We start with a 3d tensor, because PyMC makes it really easy to declare the variables that way. It will look like a matrix of three 3-by-2 matrices:\n",
"\n",
"$\\begin{bmatrix}\n",
"\\begin{bmatrix} a_{x,1,x} & a_{x,1,y} & a_{x,1,z} \\\\ a_{x,2,x} & a_{x,2,y} & a_{x,2,z} \\end{bmatrix} \\\\\n",
@@ -951,12 +951,12 @@
"source": [
"# Stability Analysis\n",
"\n",
"We can get posterior eigenvalues of the transition matrix to better understand the range of dynamics our model believes are possible. Unforunately we can't use `sample_posterior_predictive` to help us do that, because `pt.linalg.eig` discards the imaginary eigenvalues. Instead, we have to use `xarray.apply_ufunc`. "
"We can get posterior eigenvalues of the transition matrix to better understand the range of dynamics our model believes are possible. Unfortunately we can't use `sample_posterior_predictive` to help us do that, because `pt.linalg.eig` discards the imaginary eigenvalues. Instead, we have to use `xarray.apply_ufunc`. "
]
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "81362229",
"metadata": {},
"outputs": [
@@ -993,7 +993,7 @@
}
],
"source": [
"# By deafult we don't store the statespace matrices in idata at sampling time, so we need to go back and sample them.\n",
"# By default we don't store the statespace matrices in idata at sampling time, so we need to go back and sample them.\n",
"# The sample_statespace_matrices class is a helper to do this.\n",
"\n",
"matrix_idata = bvar_mod.sample_statespace_matrices(idata, [\"T\"])"
@@ -1178,7 +1178,7 @@
"source": [
"Another interesting thing to do is look at the dynamics of samples where the eigenvalues are maximally imaginary. First of all, it's a sanity check on the `xr.apply_ufunc`, which is non-trivial to use. But secondly, it's interesting to see periodic dynamics. Some macroeconomic models seek imaginary eigenvalues as a way to generating business cycles. \n",
"\n",
"For this sample, we can see some convergant osciallating behavior in the IRFs."
"For this sample, we can see some convergent oscillating behavior in the IRFs."
]
},
{
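Continuing that sketch, one way to pull out the draw whose eigenvalues are most strongly imaginary before passing its chain/draw indices on to the IRF plot:

```python
# Continues the eigenvalue sketch above; purely illustrative.
flat = eigvals.stack(sample=("chain", "draw"))
most_imag = int(np.abs(flat.imag).max("eigenvalue").argmax("sample"))
sel = flat.isel(sample=most_imag)
chain, draw = sel["chain"].item(), sel["draw"].item()
```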
@@ -1324,7 +1324,7 @@
"id": "dd0ed7f8",
"metadata": {},
"source": [
"## Forcasting"
"## Forecasting"
]
},
{