
Commit 0f73d25

Tom's March 13 edits of var_dmd.md lecture on plane back to SF
1 parent 52a9195 commit 0f73d25


lectures/var_dmd.md

Lines changed: 35 additions & 23 deletions
@@ -262,9 +262,9 @@ $$ (eq:AhatSVDformula)
-We turn to the **tall and skinny** case associated with **Dynamic Mode Decomposition**, the case in which $ m >>n $.
+We turn to the $ m \gg n $ **tall and skinny** case associated with **Dynamic Mode Decomposition**.

-Here an $ m \times n $ data matrix $ \tilde X $ contains many more attributes $ m $ than individuals $ n $.
+Here an $ m \times (n+1) $ data matrix $ \tilde X $ contains many more attributes (or variables) $ m $ than time periods $ n+1 $.
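
As a concrete illustration of the tall-and-skinny setup in the edited lines above, here is a minimal NumPy sketch; the sizes `m`, `n`, the random data, and the names `X_tilde`, `X`, `X_prime` are our own illustrative assumptions, not part of the lecture.

```python
import numpy as np

m, n = 500, 20                             # assumed sizes with m >> n
rng = np.random.default_rng(0)

X_tilde = rng.standard_normal((m, n + 1))  # m x (n+1) matrix of n + 1 snapshots

X = X_tilde[:, :-1]                        # columns X_1, ..., X_n
X_prime = X_tilde[:, 1:]                   # columns X_2, ..., X_{n+1}
print(X.shape, X_prime.shape)              # (500, 20) (500, 20)
```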

Dynamic mode decomposition was introduced by {cite}`schmid2010`,
@@ -502,7 +502,9 @@ $$
$$ (eq:Ahatwithtildes)

-Paralleling a step used to construct Representation 1, define a transition matrix for a rotated $p \times 1$ state $\tilde b_t$ by
+**Computing Dominant Eigenvectors of $\hat A$**
+
+We begin by paralleling a step used to construct Representation 1: we define a transition matrix for a rotated $p \times 1$ state $\tilde b_t$ by

$$
\tilde A = \tilde U^\top \hat A \tilde U
@@ -521,6 +523,7 @@ $$
= \tilde U^\top X' \tilde V \tilde \Sigma^{-1} \tilde U^\top
$$ (eq:tildeAverify)

Next, we'll just compute the regression coefficients in a projection of $\hat A$ on $\tilde U$ using a standard least-squares formula
@@ -530,10 +533,12 @@ $$
\tilde U^\top X' \tilde V \tilde \Sigma^{-1} \tilde U^\top = \tilde A .
$$

+Thus, we have verified that $\tilde A$ is a least-squares projection of $\hat A$ onto $\tilde U$.
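
The verification is easy to confirm numerically; here is a minimal sketch on invented random data (the truncation level `p` and names such as `U_t`, `S_t`, `V_t`, `A_hat`, `A_tilde` are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 500, 20, 5                      # assumed illustrative sizes
X = rng.standard_normal((m, n))
X_prime = rng.standard_normal((m, n))

# reduced (rank-p) SVD of X: U_t is m x p, S_t holds p singular values, V_t is n x p
U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_t, S_t, V_t = U[:, :p], S[:p], Vh[:p, :].T

A_hat = X_prime @ V_t @ np.diag(1 / S_t) @ U_t.T    # A_hat built from the reduced SVD
A_tilde = U_t.T @ X_prime @ V_t @ np.diag(1 / S_t)  # rotated transition matrix

# A_tilde equals the least-squares projection U_tilde' A_hat U_tilde
print(np.allclose(A_tilde, U_t.T @ A_hat @ U_t))    # True
```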
+**An Inverse Challenge**

-Note that because we are using a reduced SVD, $\tilde U \tilde U^\top \neq I$.
+Because we are using a reduced SVD, $\tilde U \tilde U^\top \neq I$.

Consequently,
@@ -543,14 +548,17 @@ $$
so we can't simply recover $\hat A$ from $\tilde A$ and $\tilde U$.

+**A Blind Alley**

-Nevertheless, we hope for the best and proceed to construct an eigendecomposition of the
-$p \times p$ matrix $\tilde A$:
+We can start by hoping for the best and proceeding to construct an eigendecomposition of the $p \times p$ matrix $\tilde A$:

$$
-\tilde A = \tilde W \Lambda \tilde W^{-1} .
+\tilde A = \tilde W \Lambda \tilde W^{-1}
$$ (eq:tildeAeigenred)

+where $\Lambda$ is a diagonal matrix of $p$ eigenvalues and the columns of $\tilde W$
+are corresponding eigenvectors.

Mimicking our procedure in Representation 2, we cross our fingers and compute an $m \times p$ matrix
@@ -575,7 +583,9 @@ That
$ \hat A \tilde \Phi_s \neq \tilde \Phi_s \Lambda $ means that, unlike the corresponding situation in Representation 2, columns of $\tilde \Phi_s = \tilde U \tilde W$
are **not** eigenvectors of $\hat A$ corresponding to eigenvalues on the diagonal of matrix $\Lambda$.
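
The failure is easy to see numerically; a minimal sketch on invented random data (sizes and names assumed as in the sketches above):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 500, 20, 5
X = rng.standard_normal((m, n))
X_prime = rng.standard_normal((m, n))

U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_t, S_t, V_t = U[:, :p], S[:p], Vh[:p, :].T
A_hat = X_prime @ V_t @ np.diag(1 / S_t) @ U_t.T
A_tilde = U_t.T @ A_hat @ U_t

Lam, W_t = np.linalg.eig(A_tilde)   # A_tilde = W_t @ diag(Lam) @ inv(W_t)
Phi_s = U_t @ W_t                   # the "crossed fingers" m x p candidate

# columns of Phi_s fail the eigenvector equation for A_hat
print(np.allclose(A_hat @ Phi_s, Phi_s @ np.diag(Lam)))  # False
```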
-But in a quest for eigenvectors of $\hat A$ that we **can** compute with a reduced SVD, let's define the $m \times p$ matrix
+**An Approach That Works**
+
+Continuing our quest for eigenvectors of $\hat A$ that we **can** compute with a reduced SVD, let's define an $m \times p$ matrix
$\Phi$ as

$$
@@ -584,7 +594,7 @@ $$ (eq:Phiformula)
It turns out that columns of $\Phi$ **are** eigenvectors of $\hat A$.

-This is a consequence of a result established by Tu et al. {cite}`tu_Rowley`, which we now present.
+This is a consequence of a result established by Tu et al. {cite}`tu_Rowley` that we now present.
@@ -603,12 +613,14 @@ $$
\end{aligned}
$$

-Thus, we have deduced that
+so that

$$
-\hat A \Phi = \Phi \Lambda
+\hat A \Phi = \Phi \Lambda .
$$ (eq:APhiLambda)

Let $\phi_i$ be the $i$th column of $\Phi$ and $\lambda_i$ be the corresponding $i$th eigenvalue of $\tilde A$ from decomposition {eq}`eq:tildeAeigenred`.

Equating the $m \times 1$ vectors that appear on the two sides of equation {eq}`eq:APhiLambda` gives
@@ -625,7 +637,7 @@ This concludes the proof.
Also see {cite}`DDSE_book` (p. 238).
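
A numerical spot-check of the Tu et al. result on invented random data (sizes and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 500, 20, 5
X = rng.standard_normal((m, n))
X_prime = rng.standard_normal((m, n))

U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_t, S_t, V_t = U[:, :p], S[:p], Vh[:p, :].T
A_hat = X_prime @ V_t @ np.diag(1 / S_t) @ U_t.T
A_tilde = U_t.T @ A_hat @ U_t
Lam, W_t = np.linalg.eig(A_tilde)

Phi = X_prime @ V_t @ np.diag(1 / S_t) @ W_t   # cf. formula {eq}`eq:Phiformula`

# columns of Phi ARE eigenvectors of A_hat, as the proof asserts
print(np.allclose(A_hat @ Phi, Phi @ np.diag(Lam)))   # True
```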
-### Decoder of $X$ as a linear projection
+### Decoder of $\check b$ as a linear projection
@@ -639,7 +651,7 @@ $$
$$ (eq:Aform12)

-From formula {eq}`eq:Aform12` we can deduce the reduced dimension dynamics
+From formula {eq}`eq:Aform12` we can deduce dynamics of the $p \times 1$ vector $\check b_t$:

$$
\check b_{t+1} = \Lambda \check b_t
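
In code, these reduced dynamics amount to encoding a snapshot with the pseudoinverse of $\Phi$, scaling by $\Lambda$, and decoding; a minimal sketch on invented data (all names and sizes assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 500, 20, 5
X = rng.standard_normal((m, n))
X_prime = rng.standard_normal((m, n))

U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_t, S_t, V_t = U[:, :p], S[:p], Vh[:p, :].T
A_tilde = U_t.T @ X_prime @ V_t @ np.diag(1 / S_t)
Lam, W_t = np.linalg.eig(A_tilde)
Phi = X_prime @ V_t @ np.diag(1 / S_t) @ W_t

b_check_1 = np.linalg.pinv(Phi) @ X[:, 0]  # encode the time-1 snapshot
b_check_2 = Lam * b_check_1                # b_check_{t+1} = Lambda b_check_t
X_check_2 = Phi @ b_check_2                # decode back to an m x 1 vector
print(X_check_2.shape)                     # (500,)
```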
@@ -673,7 +685,7 @@ $$ (eq:Xcheck_)
is an $m \times n$ matrix of least squares projections of $X$ on $\Phi$.

+**Variance Decomposition of $X$**

By virtue of the least-squares projection theory discussed in this quantecon lecture <https://python-advanced.quantecon.org/orth_proj.html>, we can represent $X$ as the sum of the projection $\check X$ of $X$ on $\Phi$ plus a matrix of errors.
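
A minimal numerical illustration of this decomposition on invented data: the projection errors are orthogonal to the columns of $\Phi$ (names and sizes assumed as above).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 500, 20, 5
X = rng.standard_normal((m, n))
X_prime = rng.standard_normal((m, n))

U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_t, S_t, V_t = U[:, :p], S[:p], Vh[:p, :].T
A_tilde = U_t.T @ X_prime @ V_t @ np.diag(1 / S_t)
Lam, W_t = np.linalg.eig(A_tilde)
Phi = X_prime @ V_t @ np.diag(1 / S_t) @ W_t

X_check = Phi @ np.linalg.pinv(Phi) @ X   # least-squares projection of X on Phi
eps = X - X_check                         # matrix of projection errors

# orthogonality conditions: errors are orthogonal to the columns of Phi
print(np.allclose(Phi.conj().T @ eps, 0, atol=1e-8))   # True
```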
@@ -703,15 +715,15 @@ Rearranging the orthogonality conditions {eq}`eq:orthls` gives $X^\top \Phi =
-### A useful approximation
+### An Approximation

-There is a useful way to approximate the $p \times 1$ vector $\check b_t$ instead of using formula {eq}`eq:decoder102`.
+We now describe a way to approximate the $p \times 1$ vector $\check b_t$ instead of using formula {eq}`eq:decoder102`.

In particular, the following argument adapted from {cite}`DDSE_book` (page 240) provides a computationally efficient way to approximate $\check b_t$.

-For convenience, we'll do this first for time $t=1$.
+For convenience, we'll apply the method at time $t=1$.
@@ -723,7 +735,7 @@ $$ (eq:X1proj)
where $\check b_1$ is a $p \times 1$ vector.

-Recall from representation 1 above that $X_1 = U \tilde b_1$, where $\tilde b_1$ is a time $1$ basis vector for representation 1 and $U$ is from a full SVD of $X$.
+Recall from representation 1 above that $X_1 = U \tilde b_1$, where $\tilde b_1$ is a time $1$ basis vector for representation 1 and $U$ is from the full SVD $X = U \Sigma V^\top$.

It then follows from equation {eq}`eq:Xbcheck` that
@@ -741,7 +753,7 @@ $$
$$

-Replacing the error term $U^\top \epsilon_1$ by zero, and replacing $U$ from a full SVD of $X$ with $\tilde U$ from a reduced SVD, we obtain an approximation $\hat b_1$ to $\tilde b_1$:
+Replacing the error term $U^\top \epsilon_1$ by zero, and replacing $U$ from a **full** SVD of $X$ with $\tilde U$ from a **reduced** SVD, we obtain an approximation $\hat b_1$ to $\tilde b_1$:
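
Using the formula $\hat b_t = (\tilde W \Lambda)^{-1} \tilde U^\top X_t$ that appears a few lines below, a minimal sketch on invented data compares the cheap approximation $\hat b_1$ with the exact $\check b_1 = \Phi^{+} X_1$ (names and sizes assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 500, 20, 5
X = rng.standard_normal((m, n))
X_prime = rng.standard_normal((m, n))

U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_t, S_t, V_t = U[:, :p], S[:p], Vh[:p, :].T
A_tilde = U_t.T @ X_prime @ V_t @ np.diag(1 / S_t)
Lam, W_t = np.linalg.eig(A_tilde)
Phi = X_prime @ V_t @ np.diag(1 / S_t) @ W_t

b_check_1 = np.linalg.pinv(Phi) @ X[:, 0]                       # exact
b_hat_1 = np.linalg.solve(W_t @ np.diag(Lam), U_t.T @ X[:, 0])  # approximation

# the two coincide only approximately; print the size of the gap
print(np.linalg.norm(b_check_1 - b_hat_1))
```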
@@ -785,22 +797,22 @@ $$
$$ (eq:bphieqn)

-(To highlight that {eq}`eq:beqnsmall` is an approximation, users of DMD sometimes call components of the basis vector $\check b_t = \Phi^+ X_t $ the **exact** DMD modes.)
+(To highlight that {eq}`eq:beqnsmall` is an approximation, users of DMD sometimes call components of the basis vector $\check b_t = \Phi^+ X_t $ the **exact** DMD modes and components of $\hat b_t = ( \tilde W \Lambda)^{-1} \tilde U^\top X_t$ the **approximate** modes.)

-Conditional on $X_t$, we can compute our decoded $\check X_{t+j}, j = 1, 2, \ldots $ from either
+Conditional on $X_t$, we can compute a decoded $\check X_{t+j}, j = 1, 2, \ldots $ from the exact modes via

$$
\check X_{t+j} = \Phi \Lambda^j \Phi^{+} X_t
$$ (eq:checkXevoln)

-or use the approximation
+or compute a decoded $\hat X_{t+j}$ from the approximate modes via

$$
\hat X_{t+j} = \Phi \Lambda^j (\tilde W \Lambda)^{-1} \tilde U^\top X_t .
$$ (eq:checkXevoln2)
-We can then use $\check X_{t+j}$ or $\hat X_{t+j}$ to forecast $X_{t+j}$.
+We can then use a decoded $\check X_{t+j}$ or $\hat X_{t+j}$ to forecast $X_{t+j}$.
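
A minimal forecasting sketch on invented data, comparing the exact-mode and approximate-mode decoders $j$ steps ahead (all names, sizes, and the horizon are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 500, 20, 5
X = rng.standard_normal((m, n))
X_prime = rng.standard_normal((m, n))

U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_t, S_t, V_t = U[:, :p], S[:p], Vh[:p, :].T
A_tilde = U_t.T @ X_prime @ V_t @ np.diag(1 / S_t)
Lam, W_t = np.linalg.eig(A_tilde)
Phi = X_prime @ V_t @ np.diag(1 / S_t) @ W_t

j, X_t = 3, X[:, 0]                        # assumed horizon and conditioning snapshot
Lam_j = np.diag(Lam**j)

X_check_fore = Phi @ Lam_j @ np.linalg.pinv(Phi) @ X_t                       # exact modes
X_hat_fore = Phi @ Lam_j @ np.linalg.solve(W_t @ np.diag(Lam), U_t.T @ X_t)  # approximate
print(np.linalg.norm(X_check_fore - X_hat_fore))
```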
