diff --git a/lectures/BCG_complete_mkts.md b/lectures/BCG_complete_mkts.md index b5a87189..0e77e277 100644 --- a/lectures/BCG_complete_mkts.md +++ b/lectures/BCG_complete_mkts.md @@ -802,44 +802,44 @@ It consists of 4 functions that do the following things: - First, create a grid for capital. - Then for each value of capital stock in the grid, compute the left side of the planner's first-order necessary condition for $k$, that is, - + $$ \beta \alpha A K^{\alpha -1} \int \left( \frac{w_1(\epsilon) + A K^\alpha e^\epsilon}{w_0 - K } \right)^{-\gamma} e^\epsilon g(\epsilon) d \epsilon - 1 =0 $$ - + - Find $k$ that solves this equation. * `q` computes Arrow security prices as a function of the productivity shock $\epsilon$ and capital $K$: - + $$ q(\epsilon;K) = \beta \left( \frac{u'\left( w_1(\epsilon) + A K^\alpha e^\epsilon\right)} {u'(w_0 - K )} \right) $$ - + * `V` solves for the firm value given capital $k$: - + $$ V = - k + \int A k^\alpha e^\epsilon q(\epsilon; K) d \epsilon $$ - + * `opt_c` computes optimal consumptions $c^i_0$, and $c^i(\epsilon)$: - The function first computes weight $\eta$ using the budget constraint for agent 1: - + $$ w_0^1 + \theta_0^1 V + \int w_1^1(\epsilon) q(\epsilon) d \epsilon = c_0^1 + \int c_1^1(\epsilon) q(\epsilon) d \epsilon = \eta \left( C_0 + \int C_1(\epsilon) q(\epsilon) d \epsilon \right) $$ where - + $$ \begin{aligned} C_0 & = w_0 - K \cr C_1(\epsilon) & = w_1(\epsilon) + A K^\alpha e^\epsilon \cr \end{aligned} $$ - + - It computes consumption for each agent as - + $$ \begin{aligned} c_0^1 & = \eta C_0 \cr @@ -848,7 +848,7 @@ It consists of 4 functions that do the following things: c_1^2 (\epsilon) & = (1 - \eta) C_1(\epsilon) \end{aligned} $$ - + The list of parameters includes: @@ -868,7 +868,6 @@ The list of parameters includes: Gauss-Hermite quadrature: default value is 10 ```{code-cell} ipython -import pandas as pd import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm @@ -1205,4 +1204,3 @@ Image(fig.to_image(format="png")) # fig.show() will provide interactive plot when running # notebook locally ``` - diff --git a/lectures/BCG_incomplete_mkts.md b/lectures/BCG_incomplete_mkts.md index 84c92a0e..78cb5578 100644 --- a/lectures/BCG_incomplete_mkts.md +++ b/lectures/BCG_incomplete_mkts.md @@ -580,39 +580,39 @@ Here goes: $|\theta^1_h - \theta^1_l|$ is large: * Compute agent 1’s valuation of the equity claim with a fixed-point iteration: - + $q_1 = \beta \int \frac{u^\prime(c^1_1(\epsilon))}{u^\prime(c^1_0)} d^e(k,b;\epsilon) g(\epsilon) \ d\epsilon$ - + where - + $c^1_1(\epsilon) = w^1_1(\epsilon) + \theta^1 d^e(k,b;\epsilon)$ - + and - + $c^1_0 = w^1_0 + \theta^1_0V - q_1\theta^1$ * Compute agent 2’s valuation of the bond claim with a fixed-point iteration: - + $p = \beta \int \frac{u^\prime(c^2_1(\epsilon))}{u^\prime(c^2_0)} d^b(k,b;\epsilon) g(\epsilon) \ d\epsilon$ - + where - + $c^2_1(\epsilon) = w^2_1(\epsilon) + \theta^2 d^e(k,b;\epsilon) + b$ - + and - + $c^2_0 = w^2_0 + \theta^2_0 V - q_1 \theta^2 - pb$ * Compute agent 2’s valuation of the equity claim with a fixed-point iteration: - + $q_2 = \beta \int \frac{u^\prime(c^2_1(\epsilon))}{u^\prime(c^2_0)} d^e(k,b;\epsilon) g(\epsilon) \ d\epsilon$ - + where - + $c^2_1(\epsilon) = w^2_1(\epsilon) + \theta^2 d^e(k,b;\epsilon) + b$ - + and - + $c^2_0 = w^2_0 + \theta^2_0 V - q_2 \theta^2 - pb$ * If $q_1 > q_2$, Set $\theta_l = \theta^1$; otherwise, set $\theta_h = \theta^1$. @@ -620,7 +620,7 @@ Here goes: $|\theta^1_h - \theta^1_l|$ is small. 1. 
Set bond price as $p$ and equity price as $q = \max(q_1,q_2)$.
1. Compute optimal choices of consumption:

   $$
   \begin{aligned}
   c^1_0 &= w^1_0 + \theta^1_0V - q\theta^1 \\
@@ -629,7 +629,7 @@ Here goes:
   c^2_1(\epsilon) &= w^2_1(\epsilon) + \theta^2 d^e(k,b;\epsilon) + b
   \end{aligned}
   $$

1. (Here we confess to abusing notation again, but now in a different
   way. In step 7, we interpret frozen $c^i$s as Big $C^i$. We do this to solve the firm’s problem.) Fixing the
@@ -637,13 +637,13 @@ Here goes:
   choices of capital $k$ and debt level $b$ using the firm’s first
   order necessary conditions.
1. Compute deviations from the firm’s FONC for capital $k$ as:

   $kfoc = \beta \alpha A k^{\alpha - 1} \left( \int \frac{u^\prime(c^2_1(\epsilon))}{u^\prime(c^2_0)} e^\epsilon g(\epsilon) \ d\epsilon \right) - 1$

   - If $kfoc > 0$, set $k_l = k$; otherwise, set $k_h = k$.
   - Repeat steps 4 through 7A until $|k_h-k_l|$ is small.
1. Compute deviations from the firm’s FONC for debt level $b$ as:

   $bfoc = \beta \left[ \int_{\epsilon^*}^\infty \left( \frac{u^\prime(c^1_1(\epsilon))}{u^\prime(c^1_0)} \right) g(\epsilon) \ d\epsilon - \int_{\epsilon^*}^\infty \left( \frac{u^\prime(c^2_1(\epsilon))}{u^\prime(c^2_0)} \right) g(\epsilon) \ d\epsilon \right]$

   - If $bfoc > 0$, set $b_h = b$; otherwise, set $b_l = b$.
@@ -651,7 +651,7 @@ Here goes:
1. Given prices $q$ and $p$ from step 6, and the firm choices of $k$ and $b$ from step 7, compute the synthetic firm value:

   $V_x = -k + q + pb$

   - If $V_x > V$, then set $V_l = V$; otherwise, set $V_h = V$.
@@ -704,12 +704,9 @@ Parameters include:
- bound: Bound for truncated normal distribution. Default value is 3.

```{code-cell} ipython
-import pandas as pd
 import numpy as np
-from scipy.stats import norm
 from scipy.stats import truncnorm
 from scipy.integrate import quad
-from scipy.optimize import bisect
 from numba import njit
 from interpolation import interp
```
@@ -1946,4 +1943,3 @@ Agents of type 2 value bonds more highly (they want more hedging).

Taken together with our earlier plot of equity holdings, these graphs confirm our earlier conjecture that while both types of agents hold equities, only agents of type 2 hold bonds.

diff --git a/lectures/additive_functionals.md b/lectures/additive_functionals.md
index 5d71be79..aff8e5dc 100644
--- a/lectures/additive_functionals.md
+++ b/lectures/additive_functionals.md
@@ -75,7 +75,6 @@ Let's start with some imports:

```{code-cell} ipython3
 import numpy as np
-import scipy as sp
 import scipy.linalg as la
 import quantecon as qe
 import matplotlib.pyplot as plt

diff --git a/lectures/amss.md b/lectures/amss.md
index 9e02beec..7e3117fa 100644
--- a/lectures/amss.md
+++ b/lectures/amss.md
@@ -46,8 +46,6 @@ from interpolation.splines import eval_linear, UCGrid, nodes
 from quantecon import optimize, MarkovChain
 from numba import njit, prange, float64
 from numba.experimental import jitclass
-
-%matplotlib inline
```

In {doc}`an earlier lecture `, we described a model of
@@ -1033,4 +1031,3 @@ problem, there exists another realization $\tilde s^t$ with the same
history up until the previous period, i.e., $\tilde s^{t-1}= s^{t-1}$,
but where the multiplier on constraint {eq}`AMSS_46` takes a positive value, so
$\gamma_t(\tilde s^t)>0$.
- diff --git a/lectures/arellano.md b/lectures/arellano.md index df2fe239..4de172f3 100644 --- a/lectures/arellano.md +++ b/lectures/arellano.md @@ -78,10 +78,8 @@ Let's start with some imports: import matplotlib.pyplot as plt import numpy as np import quantecon as qe -import random -from numba import njit, int64, float64, prange -from numba.experimental import jitclass +from numba import njit, prange %matplotlib inline ``` diff --git a/lectures/asset_pricing_lph.md b/lectures/asset_pricing_lph.md index e212a8c1..f24f543d 100644 --- a/lectures/asset_pricing_lph.md +++ b/lectures/asset_pricing_lph.md @@ -26,7 +26,7 @@ kernelspec: This lecture is about some implications of asset-pricing theories that are based on the equation $E m R = 1,$ where $R$ is the gross return on an asset, $m$ is a stochastic discount factor, and $E$ is a mathematical expectation with respect to a joint probability distribution of $R$ and $m$. -Instances of this equation occur in many models. +Instances of this equation occur in many models. ```{note} Chapter 1 of {cite}`Ljungqvist2012` describes the role that this equation plays in a diverse set of @@ -34,19 +34,19 @@ models in macroeconomics, monetary economics, and public finance. ``` -We aim to convey insights about empirical implications of this equation brought out in the work of Lars Peter Hansen {cite}`HansenRichard1987` and Lars Peter Hansen and Ravi Jagannathan {cite}`Hansen_Jagannathan_1991`. +We aim to convey insights about empirical implications of this equation brought out in the work of Lars Peter Hansen {cite}`HansenRichard1987` and Lars Peter Hansen and Ravi Jagannathan {cite}`Hansen_Jagannathan_1991`. -By following their footsteps, from that single equation we'll derive +By following their footsteps, from that single equation we'll derive -* a mean-variance frontier +* a mean-variance frontier -* a single-factor model of excess returns +* a single-factor model of excess returns To do this, we use two ideas: * the equation $E m R =1 $ that is implied by an application of a *law of one price* - + * a Cauchy-Schwartz inequality In particular, we'll apply a Cauchy-Schwartz inequality to a population linear least squares regression equation that is @@ -68,11 +68,11 @@ We begin with a **key asset pricing equation**: $$ -E m R^i = 1 +E m R^i = 1 $$ (eq:EMR1) for $i=1, \ldots, I$ and where - + $$ \begin{aligned} m &=\text { stochastic discount factor } \\ @@ -81,18 +81,18 @@ E &\sim \text { mathematical expectation } \end{aligned} $$ -The random gross return $R^i$ for every asset $i$ and the scalar stochastic discount factor $m$ -live in a common probability space. +The random gross return $R^i$ for every asset $i$ and the scalar stochastic discount factor $m$ +live in a common probability space. {cite}`HansenRichard1987` and {cite}`Hansen_Jagannathan_1991` explain how **existence** of a scalar stochastic discount factor that verifies equation -{eq}`eq:EMR1` is implied by a __law of one price__ that requires that all portfolios of assets +{eq}`eq:EMR1` is implied by a __law of one price__ that requires that all portfolios of assets that bring the same payouts have the same price. They also explain how the __absence of an arbitrage__ opportunity implies that the stochastic discount factor $m \geq 0$. In order to say something about the **uniqueness** of a stochastic discount factor, we would have to impose more theoretical structure than we do in this -lecture. +lecture. 
For example, in **complete markets** models like those illustrated in this lecture [equilibrium capital structures with incomplete markets](https://python-advanced.quantecon.org/BCG_incomplete_mkts.html), the stochastic discount factor is unique. @@ -118,34 +118,34 @@ This remark of Lars Hansen refers to the fact that interesting restrictions can Let's do this step by step. First note that the definition of a -covariance +covariance $\operatorname{cov}\left(m, R^{i}\right) = E (m - E m)(R^i - E R^i) $ implies that -$$ +$$ E m R^i = E m E R^{i}+\operatorname{cov}\left(m, R^{i}\right) $$ -Substituting this result into +Substituting this result into equation {eq}`eq:EMR1` gives $$ -1 = E m E R^{i}+\operatorname{cov}\left(m, R^{i}\right) -$$ (eq:EMR2) - +1 = E m E R^{i}+\operatorname{cov}\left(m, R^{i}\right) +$$ (eq:EMR2) + Next note that for a risk-free asset with non-random gross return $R^f$, equation -{eq}`eq:EMR1` becomes +{eq}`eq:EMR1` becomes $$ 1 = E R^f m = R^f E m. $$ -This is true because we can pull the constant $R^f$ outside the mathematical expectation. +This is true because we can pull the constant $R^f$ outside the mathematical expectation. It follows that the gross return on a risk-free asset is -$$ -R^{f} = 1 / E(m) +$$ +R^{f} = 1 / E(m) $$ Using this formula for $R^f$ in equation {eq}`eq:EMR2` and rearranging, it follows that @@ -157,15 +157,15 @@ $$ which can be rearranged to become $$ -E R^i = R^{f}-\operatorname{cov}\left(m, R^{i}\right) R^{f} . +E R^i = R^{f}-\operatorname{cov}\left(m, R^{i}\right) R^{f} . $$ It follows that we can express an **excess return** $E R^{i}-R^{f}$ on asset $i$ relative to the risk-free rate as -$$ -E R^{i}-R^{f} = -\operatorname{cov}\left(m, R^{i}\right) R^{f} +$$ +E R^{i}-R^{f} = -\operatorname{cov}\left(m, R^{i}\right) R^{f} $$ (eq:EMR3) - + Equation {eq}`eq:EMR3` can be rearranged to display important parts of asset pricing theory. @@ -175,23 +175,23 @@ Equation {eq}`eq:EMR3` can be rearranged to display important parts of asset pri We can obtain the celebrated **expected-return-Beta -representation** for gross return $R^i$ by simply rearranging excess return equation {eq}`eq:EMR3` to become $$ -E R^{i}=R^{f}+\left(\underbrace{\frac{\operatorname{cov}\left(R^{i}, m\right)}{\operatorname{var}(m)}}_{\quad\quad\beta_{i,m} = \text{regression coefficient}}\right)\left(\underbrace{-\frac{\operatorname{var}(m)}{E(m)}}_{\quad\lambda_{m} = \text{price of risk}}\right) +E R^{i}=R^{f}+\left(\underbrace{\frac{\operatorname{cov}\left(R^{i}, m\right)}{\operatorname{var}(m)}}_{\quad\quad\beta_{i,m} = \text{regression coefficient}}\right)\left(\underbrace{-\frac{\operatorname{var}(m)}{E(m)}}_{\quad\lambda_{m} = \text{price of risk}}\right) $$ - + or $$ -E R^{i}=R^{f}+\beta_{i, m} \lambda_{m} +E R^{i}=R^{f}+\beta_{i, m} \lambda_{m} $$ (eq:ERbetarep) -Here +Here * $\beta_{i,m}$ is a (population) least squares regression coefficient of gross return $R^i$ on stochastic discount factor $m$ - + * $\lambda_m$ is minus the variance of $m$ divided by the mean of $m$, an object that is sometimes called a **price of risk**. 
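To see {eq}`eq:ERbetarep` at work numerically, here is a small simulation sketch that is not part of the original lecture code; the joint lognormal distribution of $m$ and the payoff is a made-up example. It prices an arbitrary positive payoff with the SDF so that $E m R^i = 1$ holds by construction, then checks that $R^f + \beta_{i,m}\lambda_m$ reproduces $E R^i$:

```{code-cell} ipython3
import numpy as np

np.random.seed(0)
n = 1_000_000
eps = np.random.randn(n)

# A made-up lognormal SDF and a payoff that moves together with it
m = np.exp(-0.02 - 0.1 * eps)
X = np.exp(0.05 - 0.2 * eps + 0.1 * np.random.randn(n))

# Price the payoff with the SDF so that E m R = 1 holds by construction
R = X / np.mean(m * X)

Rf = 1 / np.mean(m)                                # risk-free rate 1/E(m)
cov_Rm = np.mean((R - R.mean()) * (m - m.mean()))  # population-style covariance
beta_im = cov_Rm / np.var(m)                       # regression coefficient of R on m
lam_m = -np.var(m) / np.mean(m)                    # price of risk

print(np.mean(R), Rf + beta_im * lam_m)            # these agree up to floating point
```

Because the payoff here is high exactly when $m$ is high, $\beta_{i,m} > 0$ and the expected return lands below $R^f$, in line with the first bullet point below.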
Because $\lambda_m < 0$, equation {eq}`eq:ERbetarep` asserts that

* assets whose returns are **positively** correlated with the stochastic discount factor (SDF) $m$ have expected returns **lower** than the risk-free rate $R^f$
* assets whose returns are **negatively** correlated with the SDF $m$ have expected returns **higher** than the risk-free rate $R^f$

@@ -205,13 +205,13 @@ status:

Before we dive into that more, we'll pause to look at an example of an SDF.

To interpret representation {eq}`eq:ERbetarep`, the following widely used example helps.

**Example**

Let $c_t$ be the logarithm of the consumption of a _representative consumer_ or just a single consumer for whom we have consumption data.

@@ -230,53 +230,53 @@ function $U'(C) = C^{-\gamma}$.

In this case, letting $c_t = \log(C_t)$, we can write $m_{t+1}$ as

$$
m_{t+1} = \exp(-\rho) \exp(- \gamma(c_{t+1} - c_t))
$$

where $\rho > 0$, $\gamma > 0$.

A popular model for the growth of log consumption is

$$
c_{t+1} - c_t = \mu + \sigma_c \epsilon_{t+1}
$$

where $\epsilon_{t+1} \sim {\mathcal N}(0,1)$.

Here $\{c_t\}$ is a random walk with drift $\mu$, a good approximation to US per capita consumption growth.

Again here

* $\gamma >0$ is a coefficient of relative risk aversion

* $\rho >0 $ is a fixed intertemporal discount rate

So we have

$$
m_{t+1} = \exp(-\rho) \exp( - \gamma \mu - \gamma \sigma_c \epsilon_{t+1})
$$

In this case

$$
E m_{t+1} = \exp(-\rho) \exp \left( - \gamma \mu + \frac{\sigma_c^2 \gamma^2}{2} \right)
$$

and

$$
\operatorname{var}(m_{t+1}) = \left[E(m)\right]^2 \left[ \exp(\sigma_c^2 \gamma^2) - 1 \right]
$$

When $\gamma >0$, it is true that

* when consumption growth is **high**, $m$ is **low**

* when consumption growth is **low**, $m$ is **high**

According to representation {eq}`eq:ERbetarep`, an asset with a gross return $R^i$ that is expected to be **high** when consumption growth is **low** has $\beta_{i,m}$ positive and a **low** expected return.

* because it has a high gross return when consumption growth is low, it is a good hedge against consumption risk. That justifies its low average return.

An asset with an $R^i$ that is **low** when consumption growth is **low** has $\beta_{i,m}$ negative and a **high** expected return.

* because it has a low gross return when consumption growth is low, it is a poor hedge against consumption risk. That justifies its high average return.

## Mean-Variance Frontier

Now we'll derive the celebrated **mean-variance frontier**.

We do this using a method deployed by Lars Peter Hansen and Scott Richard {cite}`HansenRichard1987`.
```{note} Methods of Hansen and Richard are described and used extensively by {cite}`Cochrane_2005`. @@ -307,14 +307,14 @@ A convenient way to remember the Cauchy-Schwartz inequality in our context is th Let's apply that idea to deduce -$$ -1= E\left(m R^{i}\right)=E(m) E\left(R^{i}\right)+\rho_{m, R^{i}}\frac{\sigma(m)}{E(m)} \sigma\left(R^{i}\right) -$$ (eq:EMR5) +$$ +1= E\left(m R^{i}\right)=E(m) E\left(R^{i}\right)+\rho_{m, R^{i}}\frac{\sigma(m)}{E(m)} \sigma\left(R^{i}\right) +$$ (eq:EMR5) where the correlation coefficient $\rho_{m, R^i}$ is defined as -$$ -\rho_{m, R^i} \equiv \frac{\operatorname{cov}\left(m, R^{i}\right)}{\sigma(m) \sigma\left(R^{i}\right)} +$$ +\rho_{m, R^i} \equiv \frac{\operatorname{cov}\left(m, R^{i}\right)}{\sigma(m) \sigma\left(R^{i}\right)} $$ @@ -329,7 +329,7 @@ $$ Because $\rho_{m, R^i} \in [-1,1]$, it follows that $|\rho_{m, R^i}| \leq 1$ and that $$ -\left|E R^i-R^{f}\right| \leqslant \frac{\sigma(m)}{E(m)} \sigma\left(R^{i}\right) +\left|E R^i-R^{f}\right| \leqslant \frac{\sigma(m)}{E(m)} \sigma\left(R^{i}\right) $$ (eq:ERM6) Inequality {eq}`eq:ERM6` delineates a **mean-variance frontier** @@ -388,7 +388,7 @@ ax.plot(x, z_values, label=r'$R^f - \frac{\sigma(m)}{E(m)} \sigma(R^i)$') plt.title('mean standard deviation frontier') plt.xlabel(r"$\sigma(R^i)$") plt.ylabel(r"$E (R^i) $") -plt.text(.053, 1.015, "(.05,1.015)") +plt.text(.053, 1.015, "(.05,1.015)") ax.plot(.05, 1.015, 'o', label="$(\sigma(R^j), E R^j)$") # Add a legend and show the plot ax.legend() @@ -396,21 +396,21 @@ plt.show() ``` -The figure shows two straight lines, the blue upper one being the locus of $( \sigma(R^i), E(R^i)$ pairs that are on -the **mean-variance frontier** or **mean-standard-deviation frontier**. +The figure shows two straight lines, the blue upper one being the locus of $( \sigma(R^i), E(R^i)$ pairs that are on +the **mean-variance frontier** or **mean-standard-deviation frontier**. The green dot refers to a return $R^j$ that is **not** on the frontier and that has moments -$(\sigma(R^j), E R^j) = (.05, 1.015)$. +$(\sigma(R^j), E R^j) = (.05, 1.015)$. It is described by the statistical model -$$ +$$ R^j = R^i + \epsilon^j $$ where $R^i$ is a return that is on the frontier and $\epsilon^j$ is a random variable that has mean zero and that is orthogonal to $R^i$. -Then $ E R^j = E R^i$ and, as a consequence of $R^j$ not being on the frontier, +Then $ E R^j = E R^i$ and, as a consequence of $R^j$ not being on the frontier, $$ \sigma^2(R^j) = \sigma^2(R^i) + \sigma^2(\epsilon^j) @@ -431,14 +431,14 @@ This is a measure of the part of the risk in $R^j$ that is not priced because it An asset's **Sharpe ratio** is defined as $$ - \frac{E(R^i) - R^f}{\sigma(R^i)} + \frac{E(R^i) - R^f}{\sigma(R^i)} $$ The above figure reminds us that all assets $R^i$ whose returns are on the mean-standard deviation frontier satisfy $$ -\frac{E(R^i) - R^f}{\sigma(R^i)} = \frac{\sigma(m)}{E m} +\frac{E(R^i) - R^f}{\sigma(R^i)} = \frac{\sigma(m)}{E m} $$ The ratio $\frac{\sigma(m)}{E m} $ is often called the **market price of risk**. @@ -448,45 +448,45 @@ Evidently it equals the maximum Sharpe ratio for any asset or portfolio of asset ## Mathematical Structure of Frontier -The mathematical structure of the mean-variance frontier described by inequality {eq}`eq:ERM6` implies +The mathematical structure of the mean-variance frontier described by inequality {eq}`eq:ERM6` implies that - all returns on the frontier are perfectly correlated. Thus, - - * Let $R^m, R^{mv}$ be two returns on the frontier. 
- + + * Let $R^m, R^{mv}$ be two returns on the frontier. + * Then for some scalar $a$, a return $R^{m v}$ on the mean-variance frontier satisfies the affine equation $R^{m v}=R^{f}+a\left(R^{m}-R^{f}\right)$ . This is an **exact** equation with no **residual**. - - -- each return $R^{mv}$ that is on the mean-variance frontier is perfectly (negatively) correlated with $m$ - - * $\left(\rho_{m, R^{mv}}=-1\right) \Rightarrow \begin{cases} m=a+b R^{m v} \\ R^{m v}=e+d m \end{cases}$ for some scalars $a, b, e, d$, - + + +- each return $R^{mv}$ that is on the mean-variance frontier is perfectly (negatively) correlated with $m$ + + * $\left(\rho_{m, R^{mv}}=-1\right) \Rightarrow \begin{cases} m=a+b R^{m v} \\ R^{m v}=e+d m \end{cases}$ for some scalars $a, b, e, d$, + Therefore, **any return on the mean-variance frontier is a legitimate stochastic discount factor** - for any mean-variance-efficient return $R^{m v}$ that is on the frontier but that is **not** $R^{f}$, there exists a **single-beta representation** for any return $R^i$ that takes the form: -$$ -E R^{i}=R^{f}+\beta_{i, R^{m v}}\left[E\left(R^{m v}\right)-R^{f}\right] -$$ (eq:EMR7) +$$ +E R^{i}=R^{f}+\beta_{i, R^{m v}}\left[E\left(R^{m v}\right)-R^{f}\right] +$$ (eq:EMR7) - the regression coefficient $\beta_{i, R^{m v}}$ is often called asset $i$'s **beta** - -- The special case of a single-beta representation {eq}`eq:EMR7` with $ R^{i}=R^{m v}$ is - + +- The special case of a single-beta representation {eq}`eq:EMR7` with $ R^{i}=R^{m v}$ is + $E R^{m v}=R^{f}+1 \cdot\left[E\left(R^{m v}\right)-R^{f}\right] $ - + +++ ## Multi-factor Models -The single-beta representation {eq}`eq:EMR7` is a special case of the multi-factor model +The single-beta representation {eq}`eq:EMR7` is a special case of the multi-factor model $$ @@ -495,7 +495,7 @@ $$ where $\lambda_j$ is the price of being exposed to risk factor $f_t^j$ and $\beta_{i,j}$ is asset $i$'s exposure to that -risk factor. +risk factor. To uncover the $\beta_{i,j}$'s, one takes data on time series of the risk factors $f_t^j$ that are being priced and specifies the following least squares regression @@ -503,32 +503,32 @@ and specifies the following least squares regression $$ R_{t}^{i}=a_{i}+\beta_{i, a} f_{t}^{a}+\beta_{i, b} f_{t}^{b}+\ldots+\epsilon_{t}^{i}, \quad t=1,2, \ldots, T\\ -\epsilon_{t}^{i} \perp f_{t}^{j}, i=1,2, \ldots, I; j = a, b, \ldots +\epsilon_{t}^{i} \perp f_{t}^{j}, i=1,2, \ldots, I; j = a, b, \ldots $$ (eq:timeseriesrep) Special cases are: - * a popular **single-factor** model specifies the single factor $f_t$ to be the return on the market portfolio - + * a popular **single-factor** model specifies the single factor $f_t$ to be the return on the market portfolio + * another popular **single-factor** model called the **consumption-based model** specifies the factor to be $ m_{t+1} = \beta \frac{u^{\prime}\left(c_{t+1}\right)}{u^{\prime}\left(c_{t}\right)}$, where $c_t$ is a representative consumer's time $t$ consumption. 
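As a concrete illustration of the time series regression {eq}`eq:timeseriesrep`, here is a minimal sketch with artificial data; all parameter values are made up for illustration. It simulates a single factor and a return exposed to it, then recovers the exposure $\beta_{i,a}$ by least squares:

```{code-cell} ipython3
import numpy as np
import statsmodels.api as sm

np.random.seed(1)
T = 5_000

f = 0.04 + 0.1 * np.random.randn(T)                 # an artificial risk factor f_t^a
a_i, beta_ia = 0.02, 1.5                            # made-up intercept and exposure
R = a_i + beta_ia * f + 0.05 * np.random.randn(T)   # artificial return series R_t^i

ols = sm.OLS(R, sm.add_constant(f)).fit()           # time series regression of R on f
print(ols.params)                                   # estimates of (a_i, beta_{i,a})
```

With several factors, one simply stacks them as columns of the regressor matrix; the fitted slopes are the exposures $\beta_{i,j}$ used in the cross-section step described next.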
As a reminder, model objects are interpreted as follows:

* $\beta_{i,a}$ is the exposure of return $R^i$ to risk factor $f_a$

* $\lambda_{a}$ is the price of exposure to risk factor $f_a$

## Empirical Implementations

We briefly describe empirical implementations of multi-factor generalizations of the single-factor model described above.

Two representations of a multi-factor model play important roles in empirical applications.

One is the time series regression {eq}`eq:timeseriesrep`.

The other representation entails a **cross-section regression** of **average returns** $E R^i$ for assets
$i =1, 2, \ldots, I$ on **prices of risk** $\lambda_j$ for $j =a, b, c, \ldots$

Here is the cross-section regression specification for a multi-factor model:

@@ -575,22 +575,20 @@ E R^{e i}=\beta_{i, a} \lambda_{a}+\beta_{i, b} \lambda_{b}+\cdots+\alpha_{i}, i=1,2, \ldots, I
$$

In the following exercises, we illustrate aspects of these empirical strategies on artificial data.

Our basic tools are a random number generator that we shall use to create artificial samples that conform to the theory and least squares regressions that let us watch aspects of the theory at work.

These exercises will further convince us that asset pricing theory is mostly about covariances and least squares regressions.

## Exercises

Let's start with some imports.

```{code-cell} ipython3
import numpy as np
-from scipy.stats import stats
import statsmodels.api as sm
-from statsmodels.sandbox.regression.gmm import GMM
import matplotlib.pyplot as plt
%matplotlib inline
```

@@ -618,7 +616,7 @@ def simple_ols(X, Y, constant=False):

:label: apl_ex1
```

Look at the equation,

$$
R^i_t - R^f = \beta_{i, R^m} (R^m_t - R^f) + \sigma_i \varepsilon_{i, t}.
$$

@@ -858,7 +856,7 @@ The system of two linear equations is shown below:

$$
\begin{aligned}
a (E(R^f) + \xi) + b ((E(R^f) + \xi)^2 + \lambda^2 + \sigma_f^2) & = 1 \cr
a E(R^f) + b (E(R^f)^2 + \xi E(R^f) + \sigma_f ^ 2) & = 1
\end{aligned}
$$

@@ -916,4 +914,4 @@ a_hat, b_hat, M_hat
```

```{solution-end}
-```
\ No newline at end of file
+```

diff --git a/lectures/black_litterman.md b/lectures/black_litterman.md
index 4fbade9b..35d62504 100644
--- a/lectures/black_litterman.md
+++ b/lectures/black_litterman.md
@@ -33,9 +33,9 @@ This lecture describes extensions to the classical mean-variance portfolio theor

The classic theory described there assumes that a decision maker completely trusts the statistical model that he posits to govern the joint distribution of returns on a list of available assets.

Both extensions described here put distrust of that statistical model into the mind of the decision maker.
-One is a model of Black and Litterman {cite}`black1992global` that imputes to the decision maker distrust of historically estimated mean returns but still complete trust of estimated covariances of returns. +One is a model of Black and Litterman {cite}`black1992global` that imputes to the decision maker distrust of historically estimated mean returns but still complete trust of estimated covariances of returns. The second model also imputes to the decision maker doubts about his statistical model, but now by saying that, because of that distrust, the decision maker uses a version of robust control theory described in this lecture [Robustness](https://python-advanced.quantecon.org/robustness.html). @@ -85,7 +85,6 @@ Let's start with some imports: ```{code-cell} ipython import numpy as np -import scipy as sp import scipy.stats as stat import matplotlib.pyplot as plt %matplotlib inline @@ -447,7 +446,7 @@ def BL_plot(τ): plt.show() ``` -## Bayesian Interpretation +## Bayesian Interpretation Consider the following Bayesian interpretation of the Black-Litterman recommendation. @@ -1159,7 +1158,7 @@ precision of the mean estimate than for our variance estimate. ## Special Case -- IID Sample -We start our analysis with the benchmark case of IID data. +We start our analysis with the benchmark case of IID data. Consider a sample of size $N$ generated by the following IID process, @@ -1250,7 +1249,7 @@ The following figure illustrates how the dependence between the observations is related to the sampling frequency - For any given $h$, the autocorrelation converges to zero as we increase the distance -- $n$-- between the observations. This represents the "weak dependence" of the $X$ process. - + - Moreover, for a fixed lag length, $n$, the dependence vanishes as the sampling frequency goes to infinity. In fact, letting $h$ go to $\infty$ gives back the case of IID data. ```{code-cell} python3 @@ -1304,11 +1303,11 @@ $$ It is explicit in the above equation that time dependence in the data inflates the variance of the mean estimator through the covariance -terms. +terms. Moreover, as we can see, a higher sampling frequency---smaller $h$---makes all the covariance terms larger, everything else being -fixed. +fixed. This implies a relatively slower rate of convergence of the sample average for high-frequency data. @@ -1422,4 +1421,3 @@ relative MSEs and the sampling frequency dependence gets more pronounced, the rate of convergence of the mean estimator's MSE deteriorates more than that of the variance estimator. - diff --git a/lectures/cattle_cycles.md b/lectures/cattle_cycles.md index a010cb9c..656d2a29 100644 --- a/lectures/cattle_cycles.md +++ b/lectures/cattle_cycles.md @@ -50,7 +50,6 @@ We make the following imports: ```{code-cell} ipython import numpy as np import matplotlib.pyplot as plt -from quantecon import LQ from collections import namedtuple from quantecon import DLE from math import sqrt @@ -406,4 +405,3 @@ The fact that $y_t$ is a weighted moving average of $x_t$ creates a humped shape response of the total stock in response to demand shocks, contributing to the cyclicality seen in the first graph of this lecture. 
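To see that moving-average mechanism in isolation, here is a toy numpy sketch with made-up weights and decay rate (not the lecture's calibrated DLE model): a monotonically decaying impulse response for $x_t$ turns into a hump-shaped one for $y_t$ once $y_t$ averages several lags of $x_t$:

```{code-cell} ipython3
import numpy as np
import matplotlib.pyplot as plt

horizon = 25
x_irf = 0.6 ** np.arange(horizon)              # a monotone AR(1)-style response for x
weights = np.ones(3)                           # made-up moving-average weights
y_irf = np.convolve(x_irf, weights)[:horizon]  # y as a weighted moving average of x

plt.plot(x_irf, label='response of $x_t$')
plt.plot(y_irf, label='response of $y_t$ (moving average)')
plt.legend()
plt.show()
```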
- diff --git a/lectures/chang_credible.md b/lectures/chang_credible.md index 4d1311c1..0a527522 100644 --- a/lectures/chang_credible.md +++ b/lectures/chang_credible.md @@ -30,7 +30,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie --- tags: [hide-output] --- -!pip install polytope quantecon +!pip install polytope ``` ## Overview @@ -86,7 +86,6 @@ Let's start with some standard imports: ```{code-cell} ipython import numpy as np -import quantecon as qe import polytope import matplotlib.pyplot as plt %matplotlib inline @@ -363,7 +362,7 @@ Chang works with be a value associated with a particular competitive equilibrium. * A recursive representation of a credible government policy is a pair of initial conditions $(w_0, \theta_0)$ and a five-tuple of functions - + $$ h(w_t, \theta_t), m(h_t, w_t, \theta_t), x(h_t, w_t, \theta_t), \chi(h_t, w_t, \theta_t),\Psi(h_t, w_t, \theta_t) $$ @@ -372,10 +371,10 @@ Chang works with * Starting from an initial condition $(w_0, \theta_0)$, a credible government policy can be constructed by iterating on these functions in the following order that respects the within-period timing: - + ```{math} :label: chang501 - + \begin{aligned} \hat h_t & = h(w_t,\theta_t) \\ m_t & = m(h_t, w_t,\theta_t) \\ @@ -384,7 +383,7 @@ Chang works with \theta_{t+1} & = \Psi(h_t, w_t,\theta_t) \end{aligned} ``` - + * Here it is to be understood that $\hat h_t$ is the action that the government policy instructs the government to take, while $h_t$ possibly not equal to $\hat h_t$ is some other action that the @@ -888,4 +887,3 @@ plot_equilibria(ch2) ``` Evidently, the Ramsey plan is now sustainable. - diff --git a/lectures/chang_ramsey.md b/lectures/chang_ramsey.md index 80105575..595289b6 100644 --- a/lectures/chang_ramsey.md +++ b/lectures/chang_ramsey.md @@ -30,7 +30,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie --- tags: [hide-output] --- -!pip install polytope quantecon +!pip install polytope ``` ## Overview @@ -75,7 +75,6 @@ We'll start with some standard imports: ```{code-cell} ipython import numpy as np import polytope -import quantecon as qe import matplotlib.pyplot as plt %matplotlib inline ``` @@ -373,10 +372,10 @@ Chang constructs the following objects mapping $\theta$ into this period’s $(h, m, x)$ and next period’s $\theta$, respectively. * A competitive equilibrium can be represented recursively by iterating on - + ```{math} :label: Chang500 - + \begin{split} h_t & = h(\theta_t) \\ m_t & = m(\theta_t) \\ @@ -385,7 +384,7 @@ Chang constructs the following objects \end{split} ``` starting from $\theta_0$ - + The range and domain of $\Psi(\cdot)$ are both $\Omega$ 1. A recursive representation of a Ramsey plan * A recursive representation of a Ramsey plan is a recursive @@ -488,11 +487,11 @@ two-step procedure to find at least the *value* of the Ramsey outcome to the representative household 1. Find the indirect value function $w(\theta)$ defined as - + $$ w(\theta) = \max_{(\vec m, \vec x, \vec h) \in \Gamma(\theta)} \sum_{t=0}^\infty \beta^t \left[ u(f(x_t)) + v(m_t) \right] $$ - + 1. Compute the value of the Ramsey outcome by solving $\max_{\theta \in \Omega} w(\theta)$. Thus, Chang states the following @@ -1104,4 +1103,3 @@ sequentially, rather than once and for all at time $0$ will choose to implement In the process of constructing them, we shall construct another, smaller set of competitive equilibria. 
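Looking back at the recursive representation {eq}`Chang500`, the mechanics of iterating it are simple once the objects $h(\cdot), m(\cdot), x(\cdot), \Psi(\cdot)$ are in hand. The following schematic sketch uses hypothetical stand-in functions, purely to illustrate the order of operations; the lecture computes the true objects with the machinery above:

```{code-cell} ipython3
# Hypothetical stand-in policy functions, for illustration only
h = lambda θ: 0.5 * θ          # money growth h(θ)
m = lambda θ: 1.0 + θ          # real balances m(θ)
x = lambda θ: θ - 0.1          # x(θ)
Ψ = lambda θ: 0.9 * θ          # law of motion θ' = Ψ(θ)

θ = 0.2                        # an arbitrary initial condition θ_0
for t in range(5):
    print(f"t={t}: h={h(θ):.3f}, m={m(θ):.3f}, x={x(θ):.3f}, θ={θ:.3f}")
    θ = Ψ(θ)                   # update θ_{t+1} = Ψ(θ_t)
```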
- diff --git a/lectures/growth_in_dles.md b/lectures/growth_in_dles.md index 37c3a0ba..29fa5f87 100644 --- a/lectures/growth_in_dles.md +++ b/lectures/growth_in_dles.md @@ -53,7 +53,7 @@ We require the following imports import numpy as np import matplotlib.pyplot as plt %matplotlib inline -from quantecon import LQ, DLE +from quantecon import DLE ``` ## Common Structure @@ -62,24 +62,24 @@ Our example economies have the following features - Information flows are governed by an exogenous stochastic process $z_t$ that follows - + $$ z_{t+1} = A_{22}z_t + C_2w_{t+1} $$ where $w_{t+1}$ is a martingale difference sequence. - Preference shocks $b_t$ and technology shocks $d_t$ are linear functions of $z_t$ - + $$ b_t = U_bz_t $$ $$ d_t = U_dz_t $$ - + - Consumption and physical investment goods are produced using the following technology - + $$ \Phi_c c_t + \Phi_g g_t + \Phi_i i_t = \Gamma k_{t-1} + d_t $$ @@ -95,7 +95,7 @@ Our example economies have the following features $l_t$ is the amount of labor supplied by the representative household. - Preferences of a representative household are described by - + $$ -\frac{1}{2}\mathbb{E}\sum_{t=0}^\infty \beta^t [(s_t-b_t)\cdot(s_t - b_t) + l_t^2], 0 < \beta < 1 $$ @@ -105,7 +105,7 @@ Our example economies have the following features $$ h_t = \Delta_h h_{t-1} + \Theta_h c_t $$ - + where $s_t$ is a vector of consumption services, and $h_t$ is a vector of household capital stocks. @@ -128,7 +128,7 @@ Choose $\{c_t, s_t, i_t, h_t, k_t, g_t\}_{t=0}^\infty$ to maximize $$ -\frac{1}{2}\mathbb{E}\sum_{t=0}^\infty \beta^t [(s_t-b_t)\cdot(s_t - b_t) + g_t \cdot g_t] -$$ +$$ subject to the linear constraints @@ -602,4 +602,3 @@ being larger than one. ```{code-cell} python3 econ5.endo, econ5.exo ``` - diff --git a/lectures/hs_invertibility_example.md b/lectures/hs_invertibility_example.md index 83a574bc..74d16443 100644 --- a/lectures/hs_invertibility_example.md +++ b/lectures/hs_invertibility_example.md @@ -47,7 +47,6 @@ We'll make these imports: import numpy as np import quantecon as qe import matplotlib.pyplot as plt -from quantecon import LQ from quantecon import DLE from math import sqrt %matplotlib inline @@ -285,7 +284,7 @@ plt.show() ``` The above figure displays the impulse response of consumption and the -net-of-interest deficit to the innovations $w_t$ to the consumer's non-financial income +net-of-interest deficit to the innovations $w_t$ to the consumer's non-financial income or endowment process. Consumption displays the characteristic "random walk" response with @@ -396,4 +395,3 @@ response to $w_{1t}$). Thus, the innovations to $(c_t - d_t)$ as revealed by the vector autoregression depend on what the economic agent views as "old news". - diff --git a/lectures/irfs_in_hall_model.md b/lectures/irfs_in_hall_model.md index d204bad6..f96ed3bd 100644 --- a/lectures/irfs_in_hall_model.md +++ b/lectures/irfs_in_hall_model.md @@ -45,7 +45,6 @@ We'll make these imports: import numpy as np import matplotlib.pyplot as plt %matplotlib inline -from quantecon import LQ from quantecon import DLE ``` @@ -300,4 +299,3 @@ Example 1. As in Example 2, the endowment shock has permanent effects on neither variable. 
- diff --git a/lectures/knowing_forecasts_of_others.md b/lectures/knowing_forecasts_of_others.md index 1b44f329..5089c3c2 100644 --- a/lectures/knowing_forecasts_of_others.md +++ b/lectures/knowing_forecasts_of_others.md @@ -69,7 +69,7 @@ Therefore, he constructed a more manageable approximating model in which a hidde a demand shock is revealed to all firms after a fixed, finite number of periods. -In this lecture, we illustrate again the theme that **finding the state is an art** by +In this lecture, we illustrate again the theme that **finding the state is an art** by showing how to formulate Townsend's original model in terms of a low-dimensional state space. We show that Townsend's model shares equilibrium prices and quantities with those that @@ -89,8 +89,8 @@ Rather than directly deploying the {cite}`PCL` machinery here, we shall instead of the good produced by the other industry. * We compute a population linear least squares regression of the noisy signal at time $t$ that firms in the other industry would receive in a pooling equilibrium on time $t$ information that a firm receives in Townsend's - original model. -* The $R^2$ in this regression equals $1$. + original model. +* The $R^2$ in this regression equals $1$. * That verifies that a firm's information set in Townsend's original model equals its information set in a pooling equilibrium. * Therefore, equilibrium @@ -99,7 +99,7 @@ Rather than directly deploying the {cite}`PCL` machinery here, we shall instead ### A Sequence of Models We proceed by describing a sequence of models of two industries that are linked in a -single way: +single way: * shocks to the demand curves for their products have a common component. @@ -248,7 +248,7 @@ This provides the equilibrium when $\theta_t$ is observed at $t$ but future $\theta_{t+j}$ and $\epsilon_{t+j}^i$ are not observed. -To find an equilibrium when a history $w^t$ observations of a **single** noise-ridden +To find an equilibrium when a history $w^t$ observations of a **single** noise-ridden $\theta_t$ is observed, we again apply a certainty equivalence principle and replace future values of the random variables $\theta_s, \epsilon_{s}^i, s \geq t$ with their @@ -291,7 +291,7 @@ their counterparts in a pooling equilibrium because firms in industry $i$ are able to infer the noisy signal about the demand shock received by firms in industry $-i$. -We shall verify this assertion by using a guess and verify tactic that involves running a least +We shall verify this assertion by using a guess and verify tactic that involves running a least squares regression and inspecting its $R^2$. [^footnote0] ## Equilibrium Conditions @@ -790,7 +790,7 @@ on $\theta_t$ and perform the following steps: using `quantecon.solve_discrete_riccati` - Add a *measurement equation* for $P_t^i = b k_t^i + \theta_t + e_t$, $\theta_t + e_t$, - and $e_t$ to system {eq}`sol0a`. + and $e_t$ to system {eq}`sol0a`. 
- Write the resulting system in state-space form and encode it using `quantecon.LinearStateSpace` - Use methods of the `quantecon.LinearStateSpace` to compute impulse response @@ -920,9 +920,7 @@ components of the state vector (step 5 above) by using the `stationary_distribut ```{code-cell} ipython import numpy as np import quantecon as qe -from plotly.subplots import make_subplots import plotly.graph_objects as go -import plotly.express as px import plotly.offline as pyo from statsmodels.regression.linear_model import OLS from IPython.display import display, Latex, Image @@ -1339,7 +1337,7 @@ R_squared ## Key Step Now we come to the key step for verifying that equilibrium outcomes for prices and quantities are identical -in the pooling equilibrium original model that led Townsend to deduce an infinite-dimensional state space. +in the pooling equilibrium original model that led Townsend to deduce an infinite-dimensional state space. We accomplish this by computing a population linear least squares regression of the noisy signal that firms in the other industry receive in a pooling equilibrium on time $t$ information that a firm would receive in Townsend's @@ -1370,7 +1368,7 @@ set in Townsend's original model equals its information set in a pooling equilib Therefore, equilibrium prices and quantities in Townsend's original model equal those in a pooling equilibrium. -+++ + ## An observed common shock benchmark @@ -1385,24 +1383,24 @@ Thus, consider a version of our model in which histories of both $\epsilon_t^i$ In this case, the firm's optimal decision rule is described by $$ -k_{t+1}^i = \tilde \lambda k_t^i + \frac{1}{\lambda - \rho} \hat \theta_{t+1} +k_{t+1}^i = \tilde \lambda k_t^i + \frac{1}{\lambda - \rho} \hat \theta_{t+1} $$ where $\hat \theta_{t+1} = E_t \theta_{t+1}$ is given by $$ -\hat \theta_{t+1} = \rho \theta_t +\hat \theta_{t+1} = \rho \theta_t $$ -Thus, the firm's decision rule can be expressed +Thus, the firm's decision rule can be expressed $$ -k_{t+1}^i = \tilde \lambda k_t^i + \frac{\rho}{\lambda - \rho} \theta_t +k_{t+1}^i = \tilde \lambda k_t^i + \frac{\rho}{\lambda - \rho} \theta_t $$ -Consequently, when a history $\theta_s, s \leq t$ is observed without noise, +Consequently, when a history $\theta_s, s \leq t$ is observed without noise, the following state space system prevails: $$ @@ -1410,10 +1408,10 @@ $$ \begin{bmatrix} \theta_{t+1} \cr k_{t+1}^i \end{bmatrix} & = \begin{bmatrix} \rho & 0 \cr \frac{\rho}{\lambda -\rho} & \tilde \lambda \end{bmatrix} - \begin{bmatrix} \theta_t \cr k_t^i \end{bmatrix} + \begin{bmatrix} \theta_t \cr k_t^i \end{bmatrix} + \begin{bmatrix} \sigma_v \cr 0 \end{bmatrix} z_{1,t+1} \cr \begin{bmatrix} \theta_t \cr k_t^i \end{bmatrix} & = \begin{bmatrix} 1 & 0 \cr 0 & 1 \end{bmatrix} -\begin{bmatrix} \theta_t \cr k_t^i \end{bmatrix} + +\begin{bmatrix} \theta_t \cr k_t^i \end{bmatrix} + \begin{bmatrix} 0 \cr 0 \end{bmatrix} z_{1,t+1} \end{aligned} $$ @@ -1425,11 +1423,11 @@ As usual, the system can be written as $$ \begin{aligned} x_{t+1} & = A x_t + C z_{t+1} \cr -y_t & = G x_t + H w_{t+1} +y_t & = G x_t + H w_{t+1} \end{aligned} $$ -In order once again to use the quantecon class `quantecon.LinearStateSpace`, let's form pertinent state-space matrices +In order once again to use the quantecon class `quantecon.LinearStateSpace`, let's form pertinent state-space matrices ```{code-cell} ipython3 Ao_lss = np.array([[ρ, 0.], @@ -1493,19 +1491,19 @@ Image(fig_comb.to_image(format="png")) The three panels in the graph above show that - 
responses of $ k_t^i $ to shocks $ v_t $ to the hidden Markov demand state $ \theta_t $ process are **largest** in the no-noisy-signal structure in which the firm observes $\theta_t$ at time $t$ -- responses of $ k_t^i $ to shocks $ v_t $ to the hidden Markov demand state $ \theta_t $ process are **smaller** in the two-noisy-signal structure -- responses of $ k_t^i $ to shocks $ v_t $ to the hidden Markov demand state $ \theta_t $ process are **smallest** in the one-noisy-signal structure +- responses of $ k_t^i $ to shocks $ v_t $ to the hidden Markov demand state $ \theta_t $ process are **smaller** in the two-noisy-signal structure +- responses of $ k_t^i $ to shocks $ v_t $ to the hidden Markov demand state $ \theta_t $ process are **smallest** in the one-noisy-signal structure -With respect to the iid demand shocks $e_t$ the graphs show that +With respect to the iid demand shocks $e_t$ the graphs show that - responses of $ k_t^i $ to shocks $ e_t $ to the hidden Markov demand state $ \theta_t $ process are **smallest** (i.e., nonexistent) in the no-noisy-signal structure in which the firm observes $\theta_t$ at time $t$ -- responses of $ k_t^i $ to shocks $ e_t $ to the hidden Markov demand state $ \theta_t $ process are **larger** in the two-noisy-signal structure -- responses of $ k_t^i $ to idiosyncratic *own-market* noise-shocks $ e_t $ are **largest** in the one-noisy-signal structure +- responses of $ k_t^i $ to shocks $ e_t $ to the hidden Markov demand state $ \theta_t $ process are **larger** in the two-noisy-signal structure +- responses of $ k_t^i $ to idiosyncratic *own-market* noise-shocks $ e_t $ are **largest** in the one-noisy-signal structure -Among other things, these findings indicate that time series correlations and coherences between outputs in the two industries are higher in the two-noisy-signals or **pooling** model than they are in the one-noisy signal model. +Among other things, these findings indicate that time series correlations and coherences between outputs in the two industries are higher in the two-noisy-signals or **pooling** model than they are in the one-noisy signal model. The enhanced influence of the shocks $ v_t $ to the hidden Markov demand state $ \theta_t $ process that emerges from the two-noisy-signal model relative to the one-noisy-signal model is a symptom of a lower @@ -1531,9 +1529,9 @@ display(Latex(f'Two noisy-signals structure: {round(κ_two, 6)}')) ``` Another lesson that comes from the preceding three-panel graph is that the presence of iid noise -$\epsilon_t^i$ in industry $i$ generates a response in $k_t^{-i}$ in the two-noisy-signal structure, but not in the one-noisy-signal structure. +$\epsilon_t^i$ in industry $i$ generates a response in $k_t^{-i}$ in the two-noisy-signal structure, but not in the one-noisy-signal structure. + -+++ ## Notes on History of the Problem @@ -1545,11 +1543,11 @@ Thus, - Townsend wanted to assume that at time $ t $ firms in industry $ i $ observe $ k_t^i, Y_t^i, P_t^i, (P^{-i})^t $, where $ (P^{-i})^t $ is the history of prices in - the other market up to time $ t $. + the other market up to time $ t $. - Because that turned out to be too challenging, Townsend made a sensible alternative assumption that eased his calculations: that after a large number $ S $ of periods, firms in industry $ i $ observe the - hidden Markov component of the demand shock $ \theta_{t-S} $. + hidden Markov component of the demand shock $ \theta_{t-S} $. 
Townsend argued that the more manageable model could do a good job of @@ -1589,7 +1587,7 @@ forecasting the forecasts of others. Because those forecasts are the same as their own, they know them. -+++ + ### Further historical remarks @@ -1652,4 +1650,3 @@ to read our findings in light of {cite}`ams` is that, relative to the number of signals agents observe, Townsend's section 8 model has too few random shocks to get higher order beliefs to play a role. - diff --git a/lectures/lucas_asset_pricing_dles.md b/lectures/lucas_asset_pricing_dles.md index 1de088a4..19bffcbc 100644 --- a/lectures/lucas_asset_pricing_dles.md +++ b/lectures/lucas_asset_pricing_dles.md @@ -55,7 +55,6 @@ We'll also need the following imports ```{code-cell} ipython import numpy as np import matplotlib.pyplot as plt -from quantecon import LQ from quantecon import DLE %matplotlib inline ``` @@ -298,4 +297,3 @@ plt.show() We can see the tendency of the term structure to slope up when rates are low (and down when rates are high) has been accentuated relative to the first instance of our economy. - diff --git a/lectures/matsuyama.md b/lectures/matsuyama.md index ea963ae8..73af4b07 100644 --- a/lectures/matsuyama.md +++ b/lectures/matsuyama.md @@ -46,8 +46,7 @@ Let's start with some imports: import numpy as np import matplotlib.pyplot as plt %matplotlib inline -import seaborn as sns -from numba import jit, vectorize +from numba import jit from ipywidgets import interact ``` diff --git a/lectures/muth_kalman.md b/lectures/muth_kalman.md index 7f4ce448..67364167 100644 --- a/lectures/muth_kalman.md +++ b/lectures/muth_kalman.md @@ -42,11 +42,9 @@ We'll also need the following imports: import matplotlib.pyplot as plt %matplotlib inline import numpy as np -import scipy.linalg as la from quantecon import Kalman from quantecon import LinearStateSpace -from scipy.stats import norm np.set_printoptions(linewidth=120, precision=4, suppress=True) ``` @@ -372,4 +370,3 @@ engineer ```{code-cell} python3 print(f'decay parameter 1 - K1 = {1 - K1}') ``` - diff --git a/lectures/opt_tax_recur.md b/lectures/opt_tax_recur.md index 96f91c6e..8c8815ec 100644 --- a/lectures/opt_tax_recur.md +++ b/lectures/opt_tax_recur.md @@ -70,7 +70,6 @@ Let's start with some standard imports: ```{code-cell} ipython import numpy as np import matplotlib.pyplot as plt -%matplotlib inline from scipy.optimize import root from quantecon import MarkovChain from quantecon.optimize.nelder_mead import nelder_mead @@ -603,10 +602,10 @@ Here is a computational algorithm: of $\vec x$. * these depend on $\Phi$. 1. Find a $\Phi$ that satisfies - + ```{math} :label: Bellman2cons - + u_{c,0} b_0 = u_{c,0} (n_0 - g_0) - u_{l,0} n_0 + \beta \sum_{s=1}^S \Pi(s | s_0) x(s) ``` by gradually raising $\Phi$ if the left side of {eq}`Bellman2cons` @@ -1420,4 +1419,3 @@ By comparing these recursive formulations, we shall glean a sense in which the dimension of the state is lower in the Lucas Stokey model. Accompanying that difference in dimension will be different dynamics of government debt. - diff --git a/lectures/permanent_income_dles.md b/lectures/permanent_income_dles.md index 3f3d6110..0c73e810 100644 --- a/lectures/permanent_income_dles.md +++ b/lectures/permanent_income_dles.md @@ -54,9 +54,7 @@ Models of Dynamic Linear Economies" {cite}`HS2013`. 
We'll also require the following imports ```{code-cell} ipython -import quantecon as qe import numpy as np -import scipy.linalg as la import matplotlib.pyplot as plt %matplotlib inline from quantecon import DLE @@ -298,4 +296,3 @@ ax2.plot(econ1.k[0], label='Debt', c='r') ax2.legend() plt.show() ``` - diff --git a/lectures/rosen_schooling_model.md b/lectures/rosen_schooling_model.md index 71629969..8610b755 100644 --- a/lectures/rosen_schooling_model.md +++ b/lectures/rosen_schooling_model.md @@ -44,10 +44,8 @@ We'll also need the following imports: ```{code-cell} ipython import numpy as np import matplotlib.pyplot as plt -from quantecon import LQ from collections import namedtuple from quantecon import DLE -from math import sqrt %matplotlib inline ``` @@ -337,4 +335,3 @@ Increasing the number of periods of schooling lowers the number of new students in response to a demand shock. This occurs because with longer required schooling, new students ultimately benefit less from the impact of that shock on wages. - diff --git a/lectures/smoothing_tax.md b/lectures/smoothing_tax.md index 3bcb2499..c9e49bd2 100644 --- a/lectures/smoothing_tax.md +++ b/lectures/smoothing_tax.md @@ -98,7 +98,6 @@ import numpy as np import quantecon as qe import matplotlib.pyplot as plt %matplotlib inline -import scipy.linalg as la ``` To exploit the isomorphism between consumption-smoothing and tax-smoothing models, we simply use code from {doc}`Consumption Smoothing with Complete and Incomplete Markets ` @@ -936,4 +935,3 @@ In both {doc}`Optimal Taxation in an LQ Economy ` and {doc}`Optimal Ta In {doc}`optimal taxation with incomplete markets `, we study an **incomplete-markets** model in which the government also manipulates prices of government debt. - diff --git a/lectures/un_insure.md b/lectures/un_insure.md index 72c0511d..e441b6aa 100644 --- a/lectures/un_insure.md +++ b/lectures/un_insure.md @@ -15,10 +15,10 @@ kernelspec: This lecture describes a model of optimal unemployment insurance created by Shavell and Weiss (1979) {cite}`Shavell_Weiss_79`. - + We use recursive techniques of -Hopenhayn and Nicolini (1997) {cite}`Hopenhayn_Nicolini_97` to +Hopenhayn and Nicolini (1997) {cite}`Hopenhayn_Nicolini_97` to compute optimal insurance plans for Shavell and Weiss's model. @@ -30,28 +30,28 @@ An unemployed worker orders stochastic processes of consumption and search effort $\{c_t , a_t\}_{t=0}^\infty$ according to -$$ -E \sum_{t=0}^\infty \beta^t \left[ u(c_t) - a_t \right] +$$ +E \sum_{t=0}^\infty \beta^t \left[ u(c_t) - a_t \right] $$ (eq:hugo1) %\EQN hugo1 where $\beta \in (0,1)$ and $u(c)$ is strictly increasing, twice differentiable, -and strictly concave. +and strictly concave. We assume that $u(0)$ is well defined. We require that $c_t \geq 0$ and $ a_t \geq 0$. All jobs are alike and pay wage -$w >0$ units of the consumption good each period forever. +$w >0$ units of the consumption good each period forever. An unemployed worker searches with effort $a$ and with probability $p(a)$ receives a permanent job at the beginning -of the next period. +of the next period. Furthermore, $a=0$ when the worker is -employed. +employed. The probability of finding a job is $p(a)$ where $p$ is an increasing, strictly concave, @@ -71,7 +71,7 @@ Once a worker has found a job, he is beyond the planner's grasp. * This is Shavell and Weiss's assumption, but not Hopenhayn and Nicolini's. * Hopenhayn and Nicolini allow the unemployment insurance agency to -impose history-dependent taxes on previously unemployed workers. 
+impose history-dependent taxes on previously unemployed workers. * Since there is no incentive problem after the worker has found a job, it is optimal for the agency to provide an employed worker with a constant level of consumption. @@ -80,10 +80,10 @@ a permanent per-period history-dependent tax on a previously unemployed but presently employed worker. -### Autarky +### Autarky As a benchmark, we first study the fate of an unemployed worker -who has no access to unemployment insurance. +who has no access to unemployment insurance. Because employment is an absorbing state for the worker, we work backward from that @@ -96,50 +96,50 @@ be $u(c)-a = u(w)$ forever. Therefore, -$$ -V^e = {u(w) \over (1-\beta)} . +$$ +V^e = {u(w) \over (1-\beta)} . $$ (eq:hugo2) Now let $V^u$ be the expected discounted present value of utility for an unemployed worker who chooses consumption, effort pair $(c,a)$ -optimally. +optimally. -It satisfies the Bellman equation +It satisfies the Bellman equation -$$ +$$ V^u = \max_{a \geq 0} \biggl\{ u(0) - a + \beta \left[ - p(a) V^e + (1-p(a)) V^u \right] \biggr\} . + p(a) V^e + (1-p(a)) V^u \right] \biggr\} . $$ (eq:hugo3) - + The first-order condition for a maximum is -$$ -\beta p'(a) \left[V^e - V^u \right] \leq 1 , +$$ +\beta p'(a) \left[V^e - V^u \right] \leq 1 , $$ (eq:hugo4) -with equality if $a>0$. +with equality if $a>0$. Since there is no state variable in this infinite horizon problem, there is a time-invariant optimal search intensity $a$ and an associated value of being unemployed $V^u$. -Let $V_{\rm aut} = V^u$ solve Bellman equation {eq}`eq:hugo3`. +Let $V_{\rm aut} = V^u$ solve Bellman equation {eq}`eq:hugo3`. -Equations {eq}`eq:hugo3` - and {eq}`eq:hugo4` +Equations {eq}`eq:hugo3` + and {eq}`eq:hugo4` form the basis for -an iterative algorithm for computing $V^u = V_{\rm aut}$. +an iterative algorithm for computing $V^u = V_{\rm aut}$. * Let $V^u_j$ be -the estimate of $V_{\rm aut}$ at the $j$th iteration. +the estimate of $V_{\rm aut}$ at the $j$th iteration. * Use this value in equation {eq}`eq:hugo4` and solve -for an estimate of effort $a_j$. +for an estimate of effort $a_j$. * Use this value in a version of equation {eq}`eq:hugo3` with $V^u_j$ on the right side -to compute $V^u_{j+1}$. +to compute $V^u_{j+1}$. * Iterate to convergence. @@ -150,10 +150,10 @@ Another benchmark model helps set the stage for the model with private informati In this model, the unemployment agency has full information about the unemployed work. We study optimal provision of insurance with -full information. +full information. -An insurance agency can set both -the consumption and search effort of an unemployed person. +An insurance agency can set both +the consumption and search effort of an unemployed person. The agency wants to design an unemployment insurance contract to give @@ -162,9 +162,9 @@ the unemployed worker expected discounted utility $V > V_{\rm aut}$. The planner wants to deliver value $V$ efficiently, meaning in a way that minimizes expected discounted cost, using $\beta$ as the discount factor. - + We formulate the optimal insurance problem -recursively. +recursively. Let $C(V)$ be the expected discounted cost of giving the worker expected discounted utility @@ -181,9 +181,9 @@ continuation value $V^u$, should the worker be unlucky and not find a job. 
$(c, a, V^u)$ are chosen to be functions of $V$ and to -satisfy the Bellman equation +satisfy the Bellman equation -$$ +$$ C(V) = \min_{c, a, V^u} \biggl\{ c + \beta [1 - p(a)] C(V^u) \biggr\} , $$ (eq:hugo5) @@ -204,12 +204,12 @@ The right side of Bellman equation {eq}`eq:hugo5` is attained by policy functions $c=c(V), a=a(V)$, and $V^u=V^u(V)$. The promise-keeping constraint, - equation {eq}`eq:hugo6`, + equation {eq}`eq:hugo6`, asserts that the 3-tuple $(c, a, V^u)$ attains -at least $V$. +at least $V$. Let $\theta$ be a Lagrange multiplier -on constraint {eq}`eq:hugo6`. +on constraint {eq}`eq:hugo6`. At an interior solution, the first-order conditions with @@ -219,12 +219,12 @@ $$ \begin{aligned} \theta & = {1 \over u'(c)}\,, \cr C(V^u) & = \theta \left[ {1 \over \beta p'(a)} - (V^e - V^u) \right]\,, \cr - C'(V^u) & = \theta\,. + C'(V^u) & = \theta\,. \end{aligned} $$ (eq:hugo7) The envelope condition $C'(V) = \theta$ and the third equation -of {eq}`eq:hugo7` imply that $C'(V^u) =C'(V)$. +of {eq}`eq:hugo7` imply that $C'(V^u) =C'(V)$. Strict convexity of $C$ then implies that $V^u =V$ @@ -232,15 +232,15 @@ implies that $V^u =V$ Applied repeatedly over time, $V^u=V$ makes the continuation value remain constant during the entire -spell of unemployment. +spell of unemployment. The first equation of {eq}`eq:hugo7` determines $c$, and the second equation of {eq}`eq:hugo7` determines -$a$, both as functions of promised value $V$. +$a$, both as functions of promised value $V$. That $V^u = V$ then implies that $c$ and $a$ are held constant during the unemployment -spell. +spell. Thus, the unemployed worker's consumption $c$ and search effort $a$ are both fully smoothed during the unemployment spell. @@ -251,7 +251,7 @@ employment and unemployment unless $V=V^e$. ### Incentive Problem The preceding efficient insurance scheme requires that the insurance agency -control both $c$ and $a$. +control both $c$ and $a$. It will not do for the insurance agency simply to announce $c$ and then allow the worker to choose $a$. @@ -285,7 +285,7 @@ insurance arrangement. If he were free to choose $a$, the worker would therefore want to fulfill {eq}`eq:hugo4`, either at equality so long as $a >0$, or by setting -$a=0$ otherwise. +$a=0$ otherwise. Starting from the $a$ associated with the social insurance scheme, @@ -293,7 +293,7 @@ he would establish the desired equality in {eq}`eq:hugo4` by *lowering* $a$, thereby decreasing the term $[ \beta p'(a) ]^{-1}$ (which also lowers $(V^e - V^u)$ when the value of being -unemployed $V^u$ increases). +unemployed $V^u$ increases). If an equality can be established before @@ -324,7 +324,7 @@ completely characterized by the first-order condition {eq}`eq:hugo4`, an instance of the so-called first-order approach to incentive problems. Given a contract, the individual will choose search effort according to -first-order condition {eq}`eq:hugo4`. +first-order condition {eq}`eq:hugo4`. This fact leads the insurance agency to design the unemployment insurance contract to respect this restriction. 
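Before turning to the agency's first-order conditions, it is worth seeing how the worker's best response behaves. Under the functional form $p(a) = 1 - e^{-ra}$ adopted later in the lecture, the first-order condition {eq}`eq:hugo4` at equality gives $a = \max\{0, \log[r\beta(V^e - V^u)]/r\}$, which is equation {eq}`eq:hugo22` below. A small sketch, with made-up numbers for $V^e$, $V^u$, and $r$, shows effort shrinking as $V^u$ rises toward $V^e$:

```{code-cell} ipython3
import numpy as np

def best_effort(Ve, Vu, β=0.999, r=0.1):
    """Worker's best response from β p'(a)(Ve - Vu) = 1 with p(a) = 1 - exp(-r a).

    The value r=0.1 is a made-up illustration, not the lecture's calibration."""
    gap = r * β * (Ve - Vu)
    return np.log(gap) / r if gap > 1 else 0.0   # corner at zero effort

for Vu in (60.0, 80.0, 90.0, 99.0):
    print(Vu, best_effort(Ve=100.0, Vu=Vu))      # effort falls as V^u rises
```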
@@ -361,8 +361,8 @@ $$
 C(V^u) & = \theta \left[ {1 \over \beta p'(a)} - (V^e - V^u) \right]
        \,-\, \eta {p''(a) \over p'(a)} (V^e - V^u) \cr
  & = \,- \eta {p''(a) \over p'(a)} (V^e - V^u) \,, \cr
- C'(V^u) & = \theta \,-\, \eta {p'(a) \over 1-p(a)}\, , 
-\end{aligned} 
+ C'(V^u) & = \theta \,-\, \eta {p'(a) \over 1-p(a)}\, ,
+\end{aligned}
$$ (eq:hugo8)

where the second equality in the second equation in {eq}`eq:hugo8` follows from strict equality
@@ -371,11 +371,11 @@ of the incentive constraint {eq}`eq:hugo4` when $a>0$.

As long as the insurance scheme is associated with costs, so that
$C(V^u)>0$, the first-order condition in the second equation of {eq}`eq:hugo8` implies
that the multiplier $\eta$ is strictly
-positive. 
+positive.

The first-order condition in the third equation of {eq}`eq:hugo8` and the envelope
condition $C'(V) = \theta$ together allow us to conclude that
-$C'(V^u) < C'(V)$. 
+$C'(V^u) < C'(V)$.

Convexity of $C$ then implies that $V^u < V$.

@@ -383,23 +383,23 @@ Convexity of $C$ then implies that $V^u < V$.
After we have also used the first equation of {eq}`eq:hugo8`, it follows
that in order to provide the proper incentives, the consumption of the
unemployed worker must decrease as the duration of the unemployment
-spell lengthens. 
+spell lengthens.

It also follows from {eq}`eq:hugo4` at equality that search effort $a$
rises as $V^u$ falls, i.e., it rises with the duration of unemployment.

The duration dependence of benefits is designed to provide
-incentives to search. 
+incentives to search.

To see this, from the third equation of {eq}`eq:hugo8`, notice how the
conclusion that consumption falls with the duration of unemployment
depends on the assumption that more search effort
-raises the prospect of finding a job, i.e., that $p'(a) > 0$. 
+raises the prospect of finding a job, i.e., that $p'(a) > 0$.

If $p'(a) =0$, then the third equation of {eq}`eq:hugo8` and the
strict convexity of $C$ imply that
-$V^u =V$. 
+$V^u =V$.

Thus, when $p'(a) =0$, there is
no reason for the planner to make consumption fall with the duration of
@@ -416,11 +416,11 @@ unemployment.

It is useful to note that there are natural lower and upper bounds to the set
-of continuation values $V^u$. 
+of continuation values $V^u$.

The lower bound is the expected lifetime utility in autarky,
-$V_{\rm aut}$. 
+$V_{\rm aut}$.

To compute the upper bound, represent condition {eq}`eq:hugo4` as

@@ -431,15 +431,15 @@ $$

with equality if $ a > 0$.

-If there is zero search effort, then $V^u \geq V^e -[\beta p'(0)]^{-1}$. 
+If there is zero search effort, then $V^u \geq V^e -[\beta p'(0)]^{-1}$.

Therefore, to rule out zero search effort we require

$$
V^u < V^e - [\beta p'(0)]^{-1} .
-$$ 
+$$

-(Remember that $p''(a) < 0$.) 
+(Remember that $p''(a) < 0$.)

This step gives our upper bound for $V^u$.

@@ -451,15 +451,15 @@ a minimization over the one choice variable $V^u$.

First express the promise-keeping constraint {eq}`eq:hugo6` at equality as

-$$ 
-u(c) = V + a - \beta \{p(a) V^e +[1-p(a)] V^u \} 
-$$ 
+$$
+u(c) = V + a - \beta \{p(a) V^e +[1-p(a)] V^u \}
+$$

-so that consumption is 
+so that consumption is

-$$ 
+$$
c = u^{-1}\left(
-     V+a -\beta [p(a)V^e + (1-p(a))V^u] \right). 
+     V+a -\beta [p(a)V^e + (1-p(a))V^u] \right).
$$ (eq:hugo21)

Similarly, solving the inequality {eq}`eq:hugo4` for $a$ leads to

@@ -477,13 +477,13 @@
a = \max\left\{0, {\log[r \beta (V^e - V^u)] \over r } \right\}.
$$ (eq:hugo22)

Formulas {eq}`eq:hugo21` and {eq}`eq:hugo22` express $(c,a)$ as functions
-of $V$ and the continuation value $V^u$. 
+of $V$ and the continuation value $V^u$.

Using these functions allows us to write the Bellman equation in $C(V)$ as

-$$ 
-C(V) = \min_{V^u} \left\{ c + \beta [1 - p(a)] C(V^u) \right\} 
+$$
+C(V) = \min_{V^u} \left\{ c + \beta [1 - p(a)] C(V^u) \right\}
$$ (eq:hugo23)

where $c$ and $a$ are given by equations {eq}`eq:hugo21` and {eq}`eq:hugo22`.

@@ -500,16 +500,14 @@ To do this, we'll load some useful modules

```{code-cell} ipython3
import numpy as np
import scipy as sp
-from scipy import optimize
import matplotlib.pyplot as plt
-from scipy.interpolate import interp1d
```

-We first create a class to set up a particular parametrization. 
+We first create a class to set up a particular parametrization.

```{code-cell} ipython3
class params_instance:
-    
+
    def __init__(self,
                r,
                β = 0.999,
@@ -523,7 +521,7 @@ class params_instance:
        self.Ve = uw/(1-β)
```

-### Parameter Values 
+### Parameter Values

For the other parameters we have just loaded in the above Python code, we'll calibrate the net interest rate $r$ to match the hazard rate -- the probability of finding a job in one period -- in US data.
@@ -546,29 +544,29 @@ def p_prime(a,r):
    return r*np.exp(-r*a)

# The utility function
-def u(self,c): 
+def u(self,c):
    return (c**(1-self.σ))/(1-self.σ)

def u_inv(self,x):
-    return ((1-self.σ)*x)**(1/(1-self.σ)) 
+    return ((1-self.σ)*x)**(1/(1-self.σ))
```

Recall that under autarky the value for an unemployed worker
-satisfies the Bellman equation 
+satisfies the Bellman equation

$$
-V^u = \max_{a} \{u(0) - a + \beta\left[p_{r}(a)V^e + (1-p_{r}(a))V^u\right]\} 
+V^u = \max_{a} \{u(0) - a + \beta\left[p_{r}(a)V^e + (1-p_{r}(a))V^u\right]\}
$$ (eq:yad1)

-At the optimal choice of $a$, we have the first order condition for this problem as: 
+At the optimal choice of $a$, the first-order condition for this problem is:

$$
-\beta p_{r}'(a)[V^e - V^u] \leq 1 
+\beta p_{r}'(a)[V^e - V^u] \leq 1
$$ (eq:yad2)

with equality when $a > 0$.

-Given an interest rate $\bar{r}$, we can solve the autarky problem as follows: 
+Given an interest rate $\bar{r}$, we can solve the autarky problem as follows:

1. Guess $V^u \in \mathbb{R}^{+}$
2. Given $V^u$, use the FOC {eq}`eq:yad2` to calculate the implied optimal search effort $a$
@@ -576,12 +574,12 @@ Given an interest rate $\bar{r}$, we can solve the autarky problem as follows:
4. Update the guess for $V^u$ accordingly, then return to step 2 and repeat until the Bellman equation is satisfied.

For a given $r$ and guess $V^u$,
-the function `Vu_error` calculates the error in the Bellman equation under the optimal search intensity. 
+the function `Vu_error` calculates the error in the Bellman equation under the optimal search intensity.

We'll soon use this as an input to computing $V^u$.

```{code-cell} ipython3
-# The error in the Bellman equation that requires equality at 
+# The error in the Bellman equation that requires equality at
# the optimal choices.
def Vu_error(self,Vu,r):
    β= self.β
@@ -592,9 +590,9 @@ def Vu_error(self,Vu,r):
    return error
```

-Since the calibration exercise is to match the hazard rate under autarky to the data, we must find an interest rate $r$ to match `p(a,r) = 0.1`. 
+Since the calibration exercise is to match the hazard rate under autarky to the data, we must find an interest rate $r$ to match `p(a,r) = 0.1`.
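Before doing so, it may help to see the inner step in isolation: for any trial $r$, the autarky value $V^u$ is the zero of `Vu_error`. Here is a minimal sketch (the trial rate and starting guess below are illustrative assumptions, not part of the lecture's calibration):

```{code-cell} ipython3
# Sketch: solve the autarky Bellman equation for Vu at a trial interest
# rate by finding the zero of the Vu_error function defined above
trial = params_instance(r=1e-2)   # a trial r, not yet calibrated
Vu_guess = trial.Ve - 100         # illustrative starting guess below Ve
Vu_trial = sp.optimize.fsolve(lambda Vu: Vu_error(trial, Vu, trial.r),
                              x0=Vu_guess)[0]
Vu_trial
```

Embedding this root-finding step inside a search over $r$ is exactly what the calibration below does.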
-The function below `r_error` calculates, for a given guess of $r$ the difference between the model implied equilibrium hazard rate and 0.1. 
+The function `r_error` below calculates, for a given guess of $r$, the difference between the model-implied equilibrium hazard rate and 0.1.

This will be used to solve for a calibrated $r^*$.

@@ -612,7 +610,7 @@ def r_error(self,r):

Now, let us create an instance of the model with our parametrization.

```{code-cell} ipython3
-params = params_instance(r = 1e-2) 
+params = params_instance(r = 1e-2)
# Create some lambda functions useful for fsolve function
Vu_error_Λ = lambda Vu,r: Vu_error(params,Vu,r)
r_error_Λ = lambda r: r_error(params,r)
@@ -620,7 +618,7 @@ r_error_Λ = lambda r: r_error(params,r)

We want to compute an $r$ that is consistent with the hazard rate 0.1 in autarky.

-To do so, we will use a bisection strategy. 
+To do so, we will use a bisection strategy.

```{code-cell} ipython3
r_calibrated = sp.optimize.brentq(r_error_Λ,1e-10,1-1e-10)
@@ -640,19 +638,19 @@ Now that we have calibrated our interest rate $r$, we can continue with solving

+++

-Our approach to solving the full model is a variant on Judd (1998) {cite}`Judd1998`, who uses a polynomial to approximate the value function and a numerical optimizer to perform the optimization at each iteration. 
+Our approach to solving the full model is a variant of Judd (1998) {cite}`Judd1998`, who uses a polynomial to approximate the value function and a numerical optimizer to perform the optimization at each iteration.

In contrast, we will use cubic splines to interpolate across a pre-set grid of points to approximate the value function. For further details of the Judd (1998) {cite}`Judd1998` method, see {cite}`Ljungqvist2012`, Section 5.7.

+++

Our strategy involves finding a function $C(V)$ -- the expected cost of giving the worker value $V$ -- that satisfies the Bellman equation:
- 
+
$$
C(V) = \min_{c,a,V^u} \{c + \beta\left[1-p(a)\right]C(V^u)\}
$$ (eq:yad3)

-To solve this model, notice that in equations {eq}`eq:hugo21` and {eq}`eq:hugo22`, we have analytical solutions of $c$ and $a$ in terms of (at most) promised value $V$ and $V^u$ (and other parameters). 
+To solve this model, notice that in equations {eq}`eq:hugo21` and {eq}`eq:hugo22`, we have analytical solutions for $c$ and $a$ in terms of (at most) the promised value $V$ and $V^u$ (and other parameters).

We can substitute these equations for $c$ and $a$ and obtain the functional equation {eq}`eq:hugo23` that we want to solve.

@@ -660,9 +658,9 @@ We can substitute these equations for $c$ and $a$ and obtain the functional equa

```{code-cell} ipython3
def calc_c(self,Vu,V,a):
    '''
-    Calculates the optimal consumption choice coming from the constraint of the insurer's problem 
+    Calculates the optimal consumption choice coming from the constraint of the insurer's problem
    (which is also a Bellman equation)
-    ''' 
+    '''
    β,Ve,r = self.β,self.Ve,self.r

    c = u_inv(self,V + a - β*(p(a,r)*Ve + (1-p(a,r))*Vu))
@@ -674,7 +672,7 @@ def calc_a(self,Vu):
    '''

    r,β,Ve = self.r,self.β,self.Ve
-    
+
    a_temp = np.log(r*β*(Ve - Vu))/r
    a = max(0,a_temp)
    return a
```

@@ -685,16 +683,16 @@
$V^u$.

With this in hand, we have our algorithm.

-### Algorithm 
+### Algorithm

-1. Fix a set of grid points $grid_V$ for $V$ and $Vu_{grid}$ for $V^u$ 
+1. Fix a set of grid points $grid_V$ for $V$ and $Vu_{grid}$ for $V^u$.
2. Guess a function $C_0(V)$ that is evaluated at a grid $grid_V$.
3.
For each point in $grid_V$, find the $V^u$ that minimizes the expression on the right side of {eq}`eq:hugo23`. We find the minimum by evaluating the right side of {eq}`eq:hugo23` at each point in $Vu_{grid}$ and then locating the minimum using cubic splines.
-4. Evaluating the minimum across all points in $grid_V$ gives you another function $C_1(V)$. 
+4. Evaluating the minimum across all points in $grid_V$ gives another function $C_1(V)$.
5. If $C_0(V)$ and $C_1(V)$ are sufficiently different, then repeat steps 3-4. Otherwise, we are done.
6. Thus, the iterations are $C_{j+1}(V) = \min_{c,a, V^u} \{c + \beta [1 - p(a) ] C_j(V^u)\} $.

-The function `iterate_C` below executes step 3 in the above algorithm. 
+The function `iterate_C` below executes steps 3 and 4 in the above algorithm.

```{code-cell} ipython3
# Operator iterate_C that calculates the next iteration of the cost function.
@@ -711,7 +709,7 @@ def iterate_C(self,C_old,Vu_grid):
    V_star = np.zeros(n_grid)
    C_new2 = np.zeros(n_grid)
-    V_star2 = np.zeros(n_grid) 
+    V_star2 = np.zeros(n_grid)

    for V_i in range(n_grid):
        C_Vi_temp = np.zeros(n_grid)
@@ -722,7 +720,7 @@ def iterate_C(self,C_old,Vu_grid):
            a_i = calc_a(self,Vu_grid[Vu_i])
            c_i = calc_c(self,Vu_grid[Vu_i],Vu_grid[V_i],a_i)

-            C_Vi_temp[Vu_i] = c_i + β*(1-p(a_i,r))*C_old[Vu_i] 
+            C_Vi_temp[Vu_i] = c_i + β*(1-p(a_i,r))*C_old[Vu_i]
            cons_Vi_temp[Vu_i] = c_i
            a_Vi_temp[Vu_i] = a_i

@@ -734,10 +732,10 @@
        res = sp.optimize.minimize_scalar(C_Vi_temp_interp,method='bounded',bounds = (Vu_min,Vu_max))
        V_star[V_i] = res.x
        C_new[V_i] = res.fun
-        
+
        # Save the associated consumption and search policy functions as well
        cons_star[V_i] = cons_Vi_temp_interp(V_star[V_i])
-        a_star[V_i] = a_Vi_temp_interp(V_star[V_i]) 
+        a_star[V_i] = a_Vi_temp_interp(V_star[V_i])

    return C_new,V_star,cons_star,a_star
```

@@ -755,7 +753,7 @@ def solve_incomplete_info_model(self,Vu_grid,Vu_aut,tol = 1e-6,max_iter = 10000)
    while iter < max_iter and error > tol:
        C_new,V_new,cons_star,a_star = iterate_C(self,C_old,Vu_grid)
        error = np.max(np.abs(C_new - C_old))
-        
+
        #Only print the iterations every 50 steps
        if iter % 50 ==0:
            print(f"Iteration: {iter}, error:{error}")
@@ -773,7 +771,7 @@ Using the above functions, we create another instance of the parameters with the

```{code-cell} ipython3
# Create another instance with the correct r now
-params = params_instance(r = r_calibrated) 
+params = params_instance(r = r_calibrated)

#Set up grid
Vu_min = Vu_aut
@@ -814,7 +812,7 @@ Vu_0_hold = np.array([Vu_aut,16942,17000])

```{code-cell} ipython3
for i, Vu_0 in enumerate(Vu_0_hold):
-    Vu_t[0,i] = Vu_0 
+    Vu_t[0,i] = Vu_0
    for t in range(1,T_max):
        cons_t[t-1,i] = cons_star_interp(Vu_t[t-1,i])
        a_t[t-1,i] = a_star_interp(Vu_t[t-1,i])
@@ -853,51 +851,51 @@ plt.show()

For an initial promised value $V^u = V_{\rm aut}$, the planner chooses the autarky level of $0$ for the replacement ratio and instructs the worker to search at the autarky search intensity, regardless of the duration of unemployment.
-But for $V^u > V_{\rm aut}$, the planner makes the replacement ratio decline and search effort increase with the duration of unemployment. 
+But for $V^u > V_{\rm aut}$, the planner makes the replacement ratio decline and search effort increase with the duration of unemployment.

### Interpretations

-The downward slope of the replacement ratio when $V^u > V_{\rm aut}$ is a consequence of the
- the planner's limited information about the worker's search effort.
+The downward slope of the replacement ratio when $V^u > V_{\rm aut}$ is a consequence of
+the planner's limited information about the worker's search effort.

By providing the worker with a duration-dependent schedule of replacement
ratios, the planner induces the worker, in effect, to reveal
-his/her search effort to the planner. 
+his/her search effort to the planner.

We saw earlier that with full information, the planner would smooth
consumption over an unemployment spell by
-keeping the replacement ratio constant. 
+keeping the replacement ratio constant.

With private information, the planner can't observe the worker's search
effort and therefore makes the replacement ratio fall.

Evidently, search effort rises as the duration of unemployment increases, especially
-early in an unemployment spell. 
+early in an unemployment spell.

There is a **carrot-and-stick** aspect to the replacement rate and search
effort schedules:

* the **carrot** occurs in the form of high compensation and low search
-effort early in an unemployment spell. 
+effort early in an unemployment spell.

* the **stick** occurs in the form of low compensation and high search effort later in
-the spell. 
+the spell.

We shall encounter a related carrot-and-stick feature in our other lectures about dynamic programming squared.

The planner offers declining benefits and induces increased search effort as
the duration of an unemployment spell rises in order to provide an unemployed
worker with proper incentives, not to punish an unlucky worker
-who has been unemployed for a long time. 
+who has been unemployed for a long time.

The planner believes that a worker who has been unemployed a long time is unlucky, not that he has
-done anything wrong (i.e., has not lived up to the contract). 
+done anything wrong (i.e., has not lived up to the contract).

Indeed, the contract is designed to induce unemployed workers to search in
-the way the planner expects. 
+the way the planner expects.

The falling consumption and rising search effort of the unlucky ones with long unemployment spells are