lectures/lake_model.md (6 additions, 13 deletions)
@@ -218,7 +218,7 @@ This class will
 1. store the primitives $\alpha, \lambda, b, d$
 1. compute and store the implied objects $g, A, \hat A$
 1. provide methods to simulate dynamics of the stocks and rates
-1. provide a method to compute the steady state of the rate
+2. provide a method to compute the steady state vector $\bar x$ of employment and unemployment rates using {ref}`a technique <dynamics_workers>` we previously introduced for computing stationary distributions of Markov chains

 Please be careful because the implied objects $g, A, \hat A$ will not change
 if you only change the primitives.
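
For context on the closed form used by the replacement code in the next hunk, here is a short derivation (a sketch, assuming as in the lecture that the rate dynamics are $x_{t+1} = \hat A x_t$ with $\hat A$ a $2 \times 2$ column-stochastic matrix; $p$ and $q$ below are just labels for its off-diagonal entries):

$$
\hat A = \begin{pmatrix} 1 - p & q \\ p & 1 - q \end{pmatrix},
\qquad
\hat A \bar x = \bar x
\;\Rightarrow\;
p \, \bar x_1 = q \, \bar x_2
\;\Rightarrow\;
\bar x \propto \begin{pmatrix} q \\ p \end{pmatrix}
= \begin{pmatrix} \hat A_{12} \\ \hat A_{21} \end{pmatrix}
$$

Normalizing so the entries sum to one gives the steady state directly, which is exactly what `x = np.array([self.A_hat[0, 1], self.A_hat[1, 0]])` followed by `x /= x.sum()` computes, in place of the fixed-point iteration.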
@@ -265,12 +265,8 @@ class LakeModel:
         --------
         xbar : steady state vector of employment and unemployment rates
         """
-        x = np.full(2, 0.5)
-        error = tol + 1
-        while error > tol:
-            new_x = self.A_hat @ x
-            error = np.max(np.abs(new_x - x))
-            x = new_x
+        x = np.array([self.A_hat[0, 1], self.A_hat[1, 0]])
+        x /= x.sum()
         return x

     def simulate_stock_path(self, X0, T):
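
As a quick sanity check that the closed form agrees with the fixed-point iteration it replaces, here is a minimal standalone sketch (the transition rates `p` and `q` below are illustrative placeholders, not the lecture's calibration):

```python
import numpy as np

# Illustrative 2-state column-stochastic matrix standing in for A_hat;
# p and q are hypothetical transition rates, not the lecture's values.
p, q = 0.283, 0.013
A_hat = np.array([[1 - p, q],
                  [p,     1 - q]])

# New closed form: off-diagonal entries, normalized to sum to one
x_closed = np.array([A_hat[0, 1], A_hat[1, 0]])
x_closed /= x_closed.sum()

# Old approach: iterate x_{t+1} = A_hat @ x_t until convergence
x, error, tol = np.full(2, 0.5), 1.0, 1e-6
while error > tol:
    new_x = A_hat @ x
    error = np.max(np.abs(new_x - x))
    x = new_x

print(x_closed, x, np.allclose(x_closed, x, atol=1e-5))  # expect True
```

Both approaches return the same stationary vector; the closed form simply avoids the loop and the tolerance parameter.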
@@ -415,6 +411,7 @@ plt.tight_layout()
 plt.show()
 ```

+(dynamics_workers)=
 ## Dynamics of an Individual Worker

 An individual worker's employment dynamics are governed by a {doc}`finite state Markov process <finite_markov>`.
@@ -1016,12 +1013,8 @@ class LakeModelModified:
         --------
         xbar : steady state vector of employment and unemployment rates
         """
-        x = np.full(2, 0.5)
-        error = tol + 1
-        while error > tol:
-            new_x = self.A_hat @ x
-            error = np.max(np.abs(new_x - x))
-            x = new_x
+        x = np.array([self.A_hat[0, 1], self.A_hat[1, 0]])
+        x /= x.sum()