lectures/eigen_II.md (28 additions & 28 deletions)
@@ -34,12 +34,12 @@ In addition to what's in Anaconda, this lecture will need the following libraries:
:class: warning
If you are running this lecture locally it requires [graphviz](https://www.graphviz.org)
to be installed on your computer. Installation instructions for graphviz can be found
[here](https://www.graphviz.org/download/)
```
In this lecture we will begin with the foundational concepts in spectral theory.

Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks.

We will use the following imports:
@@ -119,7 +119,7 @@ In other words, if $w$ is a left eigenvector of matrix $A$, then $A^T w = \lambda w$.
This hints at how to compute left eigenvectors

```{code-cell} ipython3
A = np.array([[3, 2],
              [1, 4]])

# Compute right eigenvectors and eigenvalues
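A hedged sketch of that hint: left eigenvectors of $A$ are the right eigenvectors of $A^T$, so one pass of `np.linalg.eig` on the transpose recovers them (the matrix is the one above; the verification loop is our addition, not code from the lecture).

```python
import numpy as np

A = np.array([[3, 2],
              [1, 4]])

# Left eigenvectors of A are the right eigenvectors of A^T
eigvals, left_vecs = np.linalg.eig(A.T)

# Verify the defining property w^T A = lambda w^T for each pair
for lam, w in zip(eigvals, left_vecs.T):
    assert np.allclose(w @ A, lam * w)
```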
@@ -174,7 +174,7 @@ $A$ is a nonnegative square matrix.
If a matrix $A \geq 0$ then,

1. the dominant eigenvalue of $A$, $r(A)$, is real-valued and nonnegative.
2. for any other eigenvalue (possibly complex) $\lambda$ of $A$, $|\lambda| \leq r(A)$.
3. we can find a nonnegative and nonzero eigenvector $v$ such that $Av = r(A)v$.
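The three claims above can be checked numerically; a minimal sketch, using a hypothetical nonnegative matrix of our own choosing:

```python
import numpy as np

A = np.array([[0.5, 0.4],
              [0.8, 0.3]])   # a hypothetical nonnegative matrix

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(np.abs(eigvals))

# Claims 1 and 2: the dominant eigenvalue r(A) is real and nonnegative,
# and it bounds the modulus of every other eigenvalue
r = eigvals[i]
assert np.isclose(r.imag, 0) and r.real >= 0
assert all(abs(lam) <= abs(r) + 1e-12 for lam in eigvals)

# Claim 3: the associated eigenvector can be chosen nonnegative and nonzero
v = eigvecs[:, i].real
v = v if (v >= 0).all() else -v
assert (v >= 0).all() and (v != 0).any()
assert np.allclose(A @ v, r.real * v)
```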
@@ -204,8 +204,8 @@ Now let's consider examples for each case.
Consider the following irreducible matrix A:

```{code-cell} ipython3
A = np.array([[0, 1, 0],
              [.5, 0, .5],
              [0, 1, 0]])
```
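Irreducibility can itself be verified numerically. One standard test (a sketch, not part of the lecture) uses the fact that a nonnegative $n \times n$ matrix $A$ is irreducible exactly when $(I + A)^{n-1}$ is strictly positive:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [.5, 0, .5],
              [0, 1, 0]])

# Standard irreducibility test for a nonnegative n x n matrix:
# A is irreducible iff (I + A)^(n-1) has no zero entries
n = A.shape[0]
irreducible = (np.linalg.matrix_power(np.eye(n) + A, n - 1) > 0).all()
```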
@@ -228,8 +228,8 @@ Now we can go through our checklist to verify the claims of the Perron-Frobenius Theorem.
Consider the following primitive matrix B:

```{code-cell} ipython3
B = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

np.linalg.matrix_power(B, 2)
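The `matrix_power` call above is the primitivity check: a nonnegative matrix is primitive when some power of it is strictly positive. A self-contained sketch of that check:

```python
import numpy as np

B = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# B is primitive: its square is already strictly positive
B2 = np.linalg.matrix_power(B, 2)
assert (B2 > 0).all()
```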
@@ -298,7 +298,7 @@ def check_convergence(M):
    n_list = [1, 10, 100, 1000, 10000]

    for n in n_list:

        # Compute (A/r)^n
        M_n = np.linalg.matrix_power(M/r, n)
@@ -313,8 +313,8 @@ def check_convergence(M):
A1 = np.array([[1, 2],
               [1, 4]])

A2 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

A3 = np.array([[0.971, 0.029, 0.1, 1],
@@ -336,8 +336,8 @@ The convergence is not observed in cases of non-primitive matrices.
Let's go through an example

```{code-cell} ipython3
B = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

# This shows that the matrix is not primitive
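Why it fails: this $B$ is irreducible but periodic (paths alternate between node 1 and nodes 2 and 3), so no power of it is strictly positive. A sketch of that check, which is our addition:

```python
import numpy as np

B = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

# B is irreducible but has period 2, so every power of B keeps
# zero entries and B is therefore not primitive
for n in range(1, 12):
    assert (np.linalg.matrix_power(B, n) == 0).any()
```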
@@ -358,7 +358,7 @@ In fact we have already seen the theorem in action before in {ref}`the markov ch
(spec_markov)=
#### Example 3: Connection to Markov chains

We are now prepared to bridge the languages spoken in the two lectures.

A primitive matrix is both irreducible (or strongly connected in the language of graphs) and aperiodic.
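That bridge can be checked directly: for a stochastic matrix, irreducible plus aperiodic means some power is strictly positive. A sketch with a hypothetical stochastic matrix of our own choosing:

```python
import numpy as np

# A hypothetical stochastic matrix that is irreducible and aperiodic
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

# Rows sum to one, and the square of P is strictly positive,
# so P is primitive
assert np.allclose(P.sum(axis=1), 1)
assert (np.linalg.matrix_power(P, 2) > 0).all()
```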
@@ -410,22 +410,22 @@ $$
This is proven in {cite}`sargent2023economic` and a nice discussion can be found [here](https://math.stackexchange.com/questions/2433997/can-all-matrices-be-decomposed-as-product-of-right-and-left-eigenvector).

In the formula $\lambda_i$ is an eigenvalue of $P$, and $v_i$ and $w_i$ are the right and left eigenvectors corresponding to $\lambda_i$.

Premultiplying $P^t$ by arbitrary $\psi \in \mathscr{D}(S)$ and rearranging now gives

Recall that eigenvalues are ordered from smallest to largest from $i = 1 ... n$.

As we have seen, the largest eigenvalue for a primitive stochastic matrix is one.

This can be proven using the [Gershgorin Circle Theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem), but it is outside the scope of this lecture.

So by statement (6) of the Perron-Frobenius Theorem, $\lambda_i<1$ for all $i<n$, and $\lambda_n=1$ when $P$ is primitive (strongly connected and aperiodic).

Hence, after taking the Euclidean norm deviation, we obtain
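The resulting claim, that the second largest eigenvalue modulus governs how fast $\psi P^t$ approaches the stationary distribution, can be illustrated numerically (the matrix $P$ below is a hypothetical example, not from the lecture):

```python
import numpy as np

# Hypothetical primitive stochastic matrix with eigenvalues 1 and 0.7
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

lam2 = sorted(np.abs(np.linalg.eigvals(P)))[-2]   # second largest modulus

# Stationary distribution (left eigenvector for eigenvalue 1, normalized)
psi_star = np.array([2/3, 1/3])
assert np.allclose(psi_star @ P, psi_star)

# Deviation of psi P^t from psi_star shrinks by a factor lam2 per step
psi = np.array([1.0, 0.0])
d = [np.linalg.norm(psi @ np.linalg.matrix_power(P, t) - psi_star)
     for t in (5, 6)]
assert np.isclose(d[1] / d[0], lam2)
```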
@@ -438,7 +438,7 @@ Thus, the rate of convergence is governed by the modulus of the second largest eigenvalue.
(la_neumann)=
## The Neumann Series Lemma

```{index} single: Neumann's Lemma
```
@@ -450,12 +450,12 @@ many applications in economics.
Here's a fundamental result about series that you surely know:
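As a preview, the matrix version of that result is what the lemma delivers: when the spectral radius of $A$ is below one, $I - A$ is invertible and $(I - A)^{-1} = \sum_{k \geq 0} A^k$. A small numerical sketch, with an arbitrary matrix of our own choosing:

```python
import numpy as np

A = np.array([[0.1, 0.2],
              [0.3, 0.4]])   # arbitrary matrix with spectral radius below one

assert max(abs(np.linalg.eigvals(A))) < 1

# A truncated Neumann series approximates (I - A)^(-1)
S = sum(np.linalg.matrix_power(A, k) for k in range(200))
assert np.allclose(S, np.linalg.inv(np.eye(2) - A))
```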