
Commit f5b17cc

ignore: removes whitespace
1 parent 264cb80 commit f5b17cc

File tree: 1 file changed (+28, -28 lines)


lectures/eigen_II.md

Lines changed: 28 additions & 28 deletions
@@ -34,12 +34,12 @@ In addition to what's in Anaconda, this lecture will need the following libraries
 :class: warning
 If you are running this lecture locally it requires [graphviz](https://www.graphviz.org)
 to be installed on your computer. Installation instructions for graphviz can be found
-[here](https://www.graphviz.org/download/)
+[here](https://www.graphviz.org/download/)
 ```

 In this lecture we will begin with the foundational concepts in spectral theory.

-Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks.
+Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks.

 We will use the following imports:

@@ -119,7 +119,7 @@ In other words, if $w$ is a left eigenvector of matrix A, then $A^T w = \lambda w$
 This hints at how to compute left eigenvectors

 ```{code-cell} ipython3
-A = np.array([[3, 2],
+A = np.array([[3, 2],
               [1, 4]])

 # Compute right eigenvectors and eigenvalues
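
As an aside (not part of this commit), the changed lines above describe how to compute left eigenvectors. A minimal sketch of that idea, assuming NumPy and reusing the same 2 x 2 matrix:

```python
import numpy as np

A = np.array([[3, 2],
              [1, 4]])

# A left eigenvector w satisfies w A = λ w, i.e. A.T w = λ w,
# so the left eigenvectors of A are the right eigenvectors of A.T.
eigvals, left_vecs = np.linalg.eig(A.T)

# Check the defining identity for the first eigenpair
w = left_vecs[:, 0]
print(np.allclose(w @ A, eigvals[0] * w))  # expected: True
```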
@@ -174,7 +174,7 @@ $A$ is a nonnegative square matrix.

 If a matrix $A \geq 0$ then,

-1. the dominant eigenvalue of $A$, $r(A)$, is real-valued and nonnegative.
+1. the dominant eigenvalue of $A$, $r(A)$, is real-valued and nonnegative.
 2. for any other eigenvalue (possibly complex) $\lambda$ of $A$, $|\lambda| \leq r(A)$.
 3. we can find a nonnegative and nonzero eigenvector $v$ such that $Av = r(A)v$.

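As an aside (not part of this commit), the three numbered claims above can be sanity-checked numerically. A small sketch, assuming NumPy and reusing the nonnegative matrix A1 that appears later in this diff:

```python
import numpy as np

A = np.array([[1, 2],
              [1, 4]])   # a nonnegative matrix (A1 further down in the diff)

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(np.abs(eigvals))               # index of the dominant eigenvalue
r = eigvals[i]

print(np.isreal(r) and r.real >= 0)          # claim 1: r(A) is real and nonnegative
print(np.all(np.abs(eigvals) <= np.abs(r)))  # claim 2: |λ| <= r(A) for every eigenvalue

v = eigvecs[:, i].real
v = v if v.sum() >= 0 else -v                # fix the arbitrary sign returned by eig
print(np.all(v >= -1e-12))                   # claim 3: a nonnegative eigenvector exists
```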
@@ -204,8 +204,8 @@ Now let's consider examples for each case.
 Consider the following irreducible matrix A:

 ```{code-cell} ipython3
-A = np.array([[0, 1, 0],
-              [.5, 0, .5],
+A = np.array([[0, 1, 0],
+              [.5, 0, .5],
               [0, 1, 0]])
 ```

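As an aside (not part of this commit), irreducibility of a nonnegative $n \times n$ matrix can be checked with the standard criterion that $(I + A)^{n-1}$ is strictly positive. A minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [.5, 0, .5],
              [0, 1, 0]])

n = A.shape[0]
# A nonnegative matrix A is irreducible iff (I + A)^(n-1) has all entries positive.
M = np.linalg.matrix_power(np.eye(n) + A, n - 1)
print((M > 0).all())  # expected: True, so A is irreducible
```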
@@ -228,8 +228,8 @@ Now we can go through our checklist to verify the claims of the Perron-Frobenius theorem
 Consider the following primitive matrix B:

 ```{code-cell} ipython3
-B = np.array([[0, 1, 1],
-              [1, 0, 1],
+B = np.array([[0, 1, 1],
+              [1, 0, 1],
               [1, 1, 0]])

 np.linalg.matrix_power(B, 2)
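
As an aside (not part of this commit), the `np.linalg.matrix_power(B, 2)` call above is what lets one confirm primitivity: a nonnegative matrix is primitive when some power of it is strictly positive. A small sketch of the explicit check, assuming NumPy:

```python
import numpy as np

B = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# B is primitive because B^2 already has every entry strictly positive.
B2 = np.linalg.matrix_power(B, 2)
print(B2)
print((B2 > 0).all())  # expected: True
```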
@@ -298,7 +298,7 @@ def check_convergence(M):
     n_list = [1, 10, 100, 1000, 10000]

     for n in n_list:
-
+
         # Compute (A/r)^n
         M_n = np.linalg.matrix_power(M/r, n)

@@ -313,8 +313,8 @@ def check_convergence(M):
 A1 = np.array([[1, 2],
                [1, 4]])

-A2 = np.array([[0, 1, 1],
-               [1, 0, 1],
+A2 = np.array([[0, 1, 1],
+               [1, 0, 1],
                [1, 1, 0]])

 A3 = np.array([[0.971, 0.029, 0.1, 1],

@@ -336,8 +336,8 @@ The convergence is not observed in cases of non-primitive matrices.
 Let's go through an example

 ```{code-cell} ipython3
-B = np.array([[0, 1, 1],
-              [1, 0, 0],
+B = np.array([[0, 1, 1],
+              [1, 0, 0],
               [1, 0, 0]])

 # This shows that the matrix is not primitive
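
As an aside (not part of this commit), one way to see why this B is not primitive is that no power of it is strictly positive; its zero pattern simply alternates as the power grows. A small sketch, assuming NumPy:

```python
import numpy as np

B = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

# A nonnegative matrix is primitive only if B^k is strictly positive for some k.
for k in range(1, 7):
    Bk = np.linalg.matrix_power(B, k)
    print(k, (Bk > 0).all())  # expected: False for every k shown
```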
@@ -358,7 +358,7 @@ In fact we have already seen the theorem in action before in {ref}`the markov ch
 (spec_markov)=
 #### Example 3: Connection to Markov chains

-We are now prepared to bridge the languages spoken in the two lectures.
+We are now prepared to bridge the languages spoken in the two lectures.

 A primitive matrix is both irreducible (or strongly connected in the language of graphs) and aperiodic.

@@ -410,22 +410,22 @@ $$

 This is proven in {cite}`sargent2023economic` and a nice discussion can be found [here](https://math.stackexchange.com/questions/2433997/can-all-matrices-be-decomposed-as-product-of-right-and-left-eigenvector).

-In the formula $\lambda_i$ is an eigenvalue of $P$, and $v_i$ and $w_i$ are the right and left eigenvectors corresponding to $\lambda_i$.
+In the formula $\lambda_i$ is an eigenvalue of $P$, and $v_i$ and $w_i$ are the right and left eigenvectors corresponding to $\lambda_i$.

 Premultiplying $P^t$ by arbitrary $\psi \in \mathscr{D}(S)$ and rearranging now gives

 $$
 \psi P^t-\psi^*=\sum_{i=1}^{n-1} \lambda_i^t \psi v_i w_i^{\top}
 $$

-Recall that eigenvalues are ordered from smallest to largest for $i = 1, \ldots, n$.
+Recall that eigenvalues are ordered from smallest to largest for $i = 1, \ldots, n$.

 As we have seen, the largest eigenvalue for a primitive stochastic matrix is one.

-This can be proven using the [Gershgorin Circle Theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem),
+This can be proven using the [Gershgorin Circle Theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem),
 but it is out of the scope of this lecture.

-So by statement (6) of the Perron-Frobenius Theorem, $\lambda_i<1$ for all $i<n$, and $\lambda_n=1$ when $P$ is primitive (strongly connected and aperiodic).
+So by statement (6) of the Perron-Frobenius Theorem, $\lambda_i<1$ for all $i<n$, and $\lambda_n=1$ when $P$ is primitive (strongly connected and aperiodic).


 Hence, after taking the Euclidean norm deviation, we obtain
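
As an aside (not part of this commit), the claim that the deviation shrinks at a rate governed by the second-largest eigenvalue modulus can be eyeballed numerically. A rough sketch, assuming NumPy; the 2 x 2 stochastic matrix and the initial distribution below are chosen purely for illustration:

```python
import numpy as np

# A primitive (irreducible and aperiodic) stochastic matrix, chosen for illustration
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution psi* = left eigenvector of P associated with eigenvalue 1
eigvals, left_vecs = np.linalg.eig(P.T)
psi_star = left_vecs[:, np.argmin(np.abs(eigvals - 1))].real
psi_star /= psi_star.sum()

lam2 = sorted(np.abs(eigvals))[-2]   # modulus of the second-largest eigenvalue (0.7 here)

psi = np.array([1.0, 0.0])           # an arbitrary initial distribution
for t in [1, 5, 10, 20]:
    dev = np.linalg.norm(psi @ np.linalg.matrix_power(P, t) - psi_star)
    print(t, dev, lam2 ** t)         # the deviation decays roughly like |λ2|^t
```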
@@ -438,7 +438,7 @@ Thus, the rate of convergence is governed by the modulus of the second largest eigenvalue.


 (la_neumann)=
-## The Neumann Series Lemma
+## The Neumann Series Lemma

 ```{index} single: Neumann's Lemma
 ```
@@ -450,12 +450,12 @@ many applications in economics.

 Here's a fundamental result about series that you surely know:

-If $a$ is a number and $|a| < 1$, then
+If $a$ is a number and $|a| < 1$, then

 ```{math}
 :label: gp_sum
-
-\sum_{k=0}^{\infty} a^k =\frac{1}{1-a} = (1 - a)^{-1}
+
+\sum_{k=0}^{\infty} a^k =\frac{1}{1-a} = (1 - a)^{-1}

 ```

@@ -476,7 +476,7 @@ Using matrix algebra we can conclude that the solution to this system of equations

 ```{math}
 :label: neumann_eqn
-
+
 x^{*} = (I-A)^{-1}b

 ```
@@ -493,7 +493,7 @@ The following is a fundamental result in functional analysis that generalizes

 Let $A$ be a square matrix and let $A^k$ be the $k$-th power of $A$.

-Let $r(A)$ be the dominant eigenvalue or, as it is commonly called, the *spectral radius*, defined as $\max_i |\lambda_i|$, where
+Let $r(A)$ be the dominant eigenvalue or, as it is commonly called, the *spectral radius*, defined as $\max_i |\lambda_i|$, where

 * $\{\lambda_i\}_i$ is the set of eigenvalues of $A$ and
 * $|\lambda_i|$ is the modulus of the complex number $\lambda_i$
@@ -517,7 +517,7 @@ r = max(abs(λ) for λ in evals) # compute spectral radius
 print(r)
 ```

-The spectral radius $r(A)$ obtained is less than 1.
+The spectral radius $r(A)$ obtained is less than 1.

 Thus, we can apply the Neumann Series lemma to find $(I-A)^{-1}$.

@@ -541,7 +541,7 @@ for i in range(50):

 Let's check equality between the sum and the inverse methods.
 ```{code-cell} ipython3
-np.allclose(A_sum, B_inverse)
+np.allclose(A_sum, B_inverse)
 ```

 Although we truncate the infinite sum at $k = 50$, both methods give us the same
@@ -566,11 +566,11 @@ The following table describes how output is distributed within the economy:
 | Industry | $x_2$ | 0.2$x_1$ | 0.4$x_2$ | 0.3$x_3$ | 5 |
 | Service | $x_3$ | 0.2$x_1$ | 0.5$x_2$ | 0.1$x_3$ | 12 |

-The first row depicts how agriculture's total output $x_1$ is distributed
+The first row depicts how agriculture's total output $x_1$ is distributed

 * $0.3x_1$ is used as inputs within agriculture itself,
 * $0.2x_2$ is used as inputs by the industry sector to produce $x_2$ units,
-* $0.3x_3$ is used as inputs by the service sector to produce $x_3$ units and
+* $0.3x_3$ is used as inputs by the service sector to produce $x_3$ units and
 * 4 units is the external demand by consumers.

 We can transform this into a system of linear equations for the 3 sectors as
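
The diff ends before the equations themselves appear. As a hedged aside (not part of this commit), reading the table rows and the bullets as a Leontief-style system $x = Ax + d$ suggests coefficients like the ones below; treat the exact matrix and demand vector as assumptions inferred from this fragment rather than as the lecture's own values:

```python
import numpy as np

# Coefficients inferred from the table fragment and bullets above (assumptions)
A = np.array([[0.3, 0.2, 0.3],   # how agriculture's output x1 is used
              [0.2, 0.4, 0.3],   # how industry's output x2 is used
              [0.2, 0.5, 0.1]])  # how the service sector's output x3 is used
d = np.array([4, 5, 12])         # external demand by consumers

# Solve x = A x + d, i.e. (I - A) x = d
x = np.linalg.solve(np.eye(3) - A, d)
print(x)
```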
