From 5373e46d33cea3b52c40d3d03358def957625938 Mon Sep 17 00:00:00 2001
From: Andrea Catelli
Date: Tue, 25 Mar 2025 11:40:38 +0100
Subject: [PATCH] Small typo corrections in notebooks SARMA and VARMAX examples
---
notebooks/SARMA Example.ipynb | 28 ++++++++++++++--------------
notebooks/VARMAX Example.ipynb | 18 +++++++++---------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/notebooks/SARMA Example.ipynb b/notebooks/SARMA Example.ipynb
index d65fe718..51a36214 100644
--- a/notebooks/SARMA Example.ipynb
+++ b/notebooks/SARMA Example.ipynb
@@ -80,7 +80,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": null, "id": "827516e4", "metadata": {}, "outputs": [],
@@ -100,7 +100,7 @@ "# Hidden state noise coefficients\n", "R = np.array([[1.0], [MA_params[0]]])\n", "\n", - "# Hidden state covaraince matrix\n", + "# Hidden state covariance matrix\n", "Q = np.array([[0.8]])\n", "\n", "# Observation matrix\n",
@@ -666,9 +666,9 @@ "source": [ "# Estimation\n", "\n", - "Finally on to actually fitting the model. There are ways to speed things up a bit. We are using stationary covariance initialization, which saves us from estimating the intial covaraince matrix. PyMC Statespace also offers different flavors of Kalman Filter, some with some built in speedups (`steady_state`, but it's not JAX compatible). \n", + "Finally on to actually fitting the model. There are ways to speed things up a bit. We are using stationary covariance initialization, which saves us from estimating the initial covariance matrix. PyMC Statespace also offers different flavors of Kalman Filter, some with built-in speedups (`steady_state`, but it's not JAX compatible). \n", "\n", - "For now we'll just go with the standard filter. There is a separate notbook explaining and comparing the different filtering choices." + "For now we'll just go with the standard filter. There is a separate notebook explaining and comparing the different filtering choices." ] }, {
@@ -1141,7 +1141,7 @@ "\\Sigma^\\star &= T \\Sigma^\\star T^T − T \\Sigma^\\star Z^T F{-1} Z \\Sigma^\\star T^T + R Q R^T\n", "\\end{align}$$\n", "\n", - "The first equation assumes that $T$ is invertable, which means that the system needs to be stationary. These results thus don't apply to, say, a Gaussian Random Walk model. The second equation is a Discrete Matrix Riccati Equation. A `pytensor` solver is available, but is not JAX compaible." + "The first equation assumes that $T$ is invertible, which means that the system needs to be stationary. These results thus don't apply to, say, a Gaussian Random Walk model. The second equation is a Discrete Matrix Riccati Equation. A `pytensor` solver is available, but is not JAX compatible." ] }, {
@@ -1311,7 +1311,7 @@ "source": [ "Comparing the posterior outputs, you can see that only the Kalman predictions have posterior errors. This is because there is no measurement error in this model, nor unobserved shocks to the hidden states. In the context of the Kalman Filter, this has the effect of noiselessly encoding the data during the filter step (and the smoother is based on the filtered distribution). \n", "\n", - "In all three posterior predicitive distributions, we can see that there is noise in the hidden state estimation. This amounts to estimation of the innovation series, scaled by the MA parameter $\\theta$ (refer to the algebra above). This amounts to draws from a fixed Normal distribution in the filtered and predictive distributions, but more closely follows the data in the smoothed distirbution, since this takes into account both past and future observations. " + "In all three posterior predictive distributions, we can see that there is noise in the hidden state estimation. This amounts to estimation of the innovation series, scaled by the MA parameter $\\theta$ (refer to the algebra above). This amounts to draws from a fixed Normal distribution in the filtered and predictive distributions, but more closely follows the data in the smoothed distribution, since this takes into account both past and future observations. " ] }, {
@@ -1438,7 +1438,7 @@ "source": [ "## Forecasting\n", "\n", - "Forcasting is also easy, just use the `forecast` method." + "Forecasting is also easy, just use the `forecast` method." ] }, {
@@ -1529,7 +1529,7 @@ "source": [ "## Porcupine Graph\n", "\n", - "A \"procupine graph\" shows model forcasts at different periods in the time series. The name comes from the fact that forecasts lines poke out from the data like porcupine quills. We make one for this data to show the flexibility of the forecast method.\n", + "A \"porcupine graph\" shows model forecasts at different periods in the time series. The name comes from the fact that forecast lines poke out from the data like porcupine quills. We make one for this data to show the flexibility of the forecast method.\n", "\n", "This isn't a true porcupine graph though, because the \"forecasts\" for the in-sample period are generated using parameters fit on data from the future. As noted, it's just a nice demonstration of the forecast method." ] }, {
@@ -1776,7 +1776,7 @@ "source": [ "## Interpretable ARMA\n", "\n", - "In addition to the usual formulation, there's also an \"intrepretable\" formulation of the ARMA model. It has significantly more states, which makes it a bad choice if speed is critical. But for a smallish model on smallish data, it lets us directly recover the innovation trajectory.\n", + "In addition to the usual formulation, there's also an \"interpretable\" formulation of the ARMA model. It has significantly more states, which makes it a bad choice if speed is critical. But for a smallish model on smallish data, it lets us directly recover the innovation trajectory.\n", "\n", "We can also add measurement error to the model, in case we have some reason to doubt the data. We'll also show off the missing data interpolation ability of the state space model." ] }, {
@@ -2299,7 +2299,7 @@ "source": [ "# Seasonal Terms and Differences\n", "\n", - "Next, consider the airline passanger dataset. This dataset is interesting for several reasons. It has a non-stationary trends, and exhibits a strong seasonal pattern. " + "Next, consider the airline passenger dataset. This dataset is interesting for several reasons. It has a non-stationary trend, and exhibits a strong seasonal pattern. " ] }, {
@@ -2350,7 +2350,7 @@ "\n", "The second source of non-stationarity is the growing autocovariance. It is clear that the size of the seasonal pattern is increasing over time, which is a tell-tale sign of a multiplicative timeseries, wherein the level of the series gets into the variance. Unfortunately, linear state space models can only handle linear models. So we will need to convert the multiplicative errors to linear errors by taking logs of the data.
This cannot be done inside the model, and needs to be done by hand.\n", "\n", - "Let's take a look at what the log differences look like. This is just for illustration -- as noted, we don't acutally need to difference by hand" + "Let's take a look at what the log differences look like. This is just for illustration -- as noted, we don't actually need to difference by hand" ] }, {
@@ -2442,7 +2442,7 @@ }, { "cell_type": "code", - "execution_count": 41, + "execution_count": null, "id": "9d0e15bf-2f88-4c50-ba4b-900a9bc66beb", "metadata": {}, "outputs": [
@@ -2704,7 +2704,7 @@ " \"x0\", pt.concatenate([intercept, pt.zeros(ss_mod.k_states - 1)]), dims=[\"state\"]\n", " )\n", "\n", - " # Give State 0 (the non-zero one) it's own sigma for the initial covariance, while all the stationary states can share a single\n", + " # Give State 0 (the non-zero one) its own sigma for the initial covariance, while all the stationary states can share a single\n", " # sigma\n", " sigma_P0 = pm.Gamma(\"sigma_P0\", alpha=2, beta=10, shape=(2,))\n", " P0 = pt.eye(ss_mod.k_states) * sigma_P0[1]\n",
@@ -2843,7 +2843,7 @@ "id": "23ff69b1-e41e-4183-a392-b472a43d7e48", "metadata": {}, "source": [ - "### Forcasts" + "### Forecasts" ] }, {
diff --git a/notebooks/VARMAX Example.ipynb b/notebooks/VARMAX Example.ipynb
index cffeca7e..0c3243d6 100644
--- a/notebooks/VARMAX Example.ipynb
+++ b/notebooks/VARMAX Example.ipynb
@@ -214,10 +214,10 @@ "source": [ "# PyMC Statespace\n", "\n", - "To fit a VAR model with `pymc-statespace`, use the `BayesianVARMAX` model. We will recieve a message giving the names of the parameters we need to set priors for. In this case, we need:\n", + "To fit a VAR model with `pymc-statespace`, use the `BayesianVARMAX` model. We will receive a message giving the names of the parameters we need to set priors for. In this case, we need:\n", "\n", "- `x0`, a guess at the initial state of the system. \n", - "- `P0`, a guess at the initial covaraince of the system\n", + "- `P0`, a guess at the initial covariance of the system\n", "- `ar_params`, the autoregressive parameters\n", "- `state_cov`, the covariance matrix of the shocks to the system\n", "\n",
@@ -348,7 +348,7 @@ "id": "16f65990-6343-4cff-ae14-8ba6ea91dbb7", "metadata": {}, "source": [ - "For readibility in the PyMC model block, we can unpack them into separate variables" + "For readability in the PyMC model block, we can unpack them into separate variables" ] }, {
@@ -409,7 +409,7 @@ "2. Ravel all these variables and concatenate them together into a single vector called \"theta\".\n", "3. One variable at a time, read \"theta\" like tape. For each chunk of variables, reshape them into the expected shape and put them in the correct location in the correct matrix.\n", "\n", - "We are worried about where the `ar_coefs` end up, so let's look at an example. We start with a 3d tensor, because PyMC makes it really easy to decalare the variables that way. It will look like a matrix of three 3-by-2 matrices:\n", + "We are worried about where the `ar_coefs` end up, so let's look at an example. We start with a 3d tensor, because PyMC makes it really easy to declare the variables that way. It will look like a matrix of three 3-by-2 matrices:\n", "\n", "$\\begin{bmatrix}\n", "\\begin{bmatrix} a_{x,1,x} & a_{x,1,y} & a_{x,1,z} \\\\ a_{x,2,x} & a_{x,2,y} & a_{x,2,z} \\end{bmatrix} \\\\\n",
@@ -951,12 +951,12 @@ "source": [ "# Stability Analysis\n", "\n", - "We can get posterior eigenvalues of the transition matrix to better understand the range of dynamics our model believes are possible. Unforunately we can't use `sample_posterior_predictive` to help us do that, because `pt.linalg.eig` discards the imaginary eigenvalues. Instead, we have to use `xarray.apply_ufunc`. " + "We can get posterior eigenvalues of the transition matrix to better understand the range of dynamics our model believes are possible. Unfortunately we can't use `sample_posterior_predictive` to help us do that, because `pt.linalg.eig` discards the imaginary eigenvalues. Instead, we have to use `xarray.apply_ufunc`. " ] }, { "cell_type": "code", - "execution_count": 18, + "execution_count": null, "id": "81362229", "metadata": {}, "outputs": [
@@ -993,7 +993,7 @@ } ], "source": [ - "# By deafult we don't store the statespace matrices in idata at sampling time, so we need to go back and sample them.\n", + "# By default we don't store the statespace matrices in idata at sampling time, so we need to go back and sample them.\n", "# The sample_statespace_matrices class is a helper to do this.\n", "\n", "matrix_idata = bvar_mod.sample_statespace_matrices(idata, [\"T\"])" ] }, {
@@ -1178,7 +1178,7 @@ "source": [ "Another interesting thing to do is look at the dynamics of samples where the eigenvalues are maximally imaginary. First of all, it's a sanity check on the `xr.apply_ufunc`, which is non-trivial to use. But secondly, it's interesting to see periodic dynamics. Some macroeconomic models seek imaginary eigenvalues as a way to generating business cycles. \n", "\n", - "For this sample, we can see some convergant osciallating behavior in the IRFs." + "For this sample, we can see some convergent oscillating behavior in the IRFs." ] }, {
@@ -1324,7 +1324,7 @@ "id": "dd0ed7f8", "metadata": {}, "source": [ - "## Forcasting" + "## Forecasting" ] }, {