Commit 8aacf8a

Tom's Dec 21 edit of svd lecture
1 parent a6fd5d3 commit 8aacf8a


lectures/svd_intro.md

Lines changed: 9 additions & 13 deletions
@@ -670,7 +670,6 @@ We turn to the **tall and skinny** case associated with **Dynamic Mode Decompos

 Here an $ m \times n $ data matrix $ \tilde X $ contains many more attributes $ m $ than individuals $ n $.

-This

 Dynamic mode decomposition was introduced by {cite}`schmid2010`,

@@ -684,9 +683,7 @@ X_{t+1} = A X_t + C \epsilon_{t+1}
 $$ (eq:VARfirstorder)

 where $\epsilon_{t+1}$ is the time $t+1$ instance of an i.i.d. $m \times 1$ random vector with mean vector
-zero and identity covariance matrix and
-
-where
+zero and identity covariance matrix and where
 the $ m \times 1 $ vector $ X_t $ is

 $$
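
To make the system concrete, here is a minimal NumPy sketch of the first-order system in {eq}`eq:VARfirstorder`; the sizes, seed, and the particular `A` and `C` are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Simulate X_{t+1} = A X_t + C eps_{t+1}, with eps_{t+1} i.i.d.,
# mean zero, identity covariance; m variables over n periods.
m, n = 3, 50                      # illustrative sizes
rng = np.random.default_rng(0)
A = 0.9 * np.eye(m)               # assumed stable transition matrix
C = 0.1 * np.eye(m)               # assumed volatility matrix

X = np.empty((m, n))
X[:, 0] = rng.standard_normal(m)
for t in range(n - 1):
    eps = rng.standard_normal(m)
    X[:, t + 1] = A @ X[:, t] + C @ eps
```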
@@ -741,9 +738,9 @@ Two possible cases are
 * $ n > > m$, so that we have many more time series observations $n$ than variables $m$
 * $m > > n$, so that we have many more variables $m $ than time series observations $n$

-At a general level that includes both of these special cases, a common formula describes the least squares estimator $\hat A$ of $A$ for both cases.
+At a general level that includes both of these special cases, a common formula describes the least squares estimator $\hat A$ of $A$.

-But some important details differ.
+But important details differ.

 The common formula is

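The formula itself lies just past the end of this hunk. As a hedged sketch, a pseudoinverse-based least squares estimator can be computed as follows, reusing the simulated `X` above; the snapshot-matrix names `X1` and `X2` are illustrative.

```python
# Predecessor and successor snapshot matrices.
X1 = X[:, :-1]                    # columns X_1, ..., X_{n-1}
X2 = X[:, 1:]                     # columns X_2, ..., X_n

# Least squares estimator of A: minimizes ||X2 - A X1||_F.
A_hat = X2 @ np.linalg.pinv(X1)
```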
@@ -907,7 +904,7 @@ where $ r < p $.

 Next, we describe alternative representations of our first-order linear dynamic system.

-**Guide to three representations:** In practice, we'll be interested in Representation 3. We present the first 2 in order to set the stage for some intermediate steps that might help us understand what is under the hood of Representation 3. In applications, we'll use only a small subset of the DMD to approximate dynamics. To to that, we'll want to be using the reduced SVD's affiliated with representation 3, not the full SVD's affiliated with Representations 1 and 2.
+**Guide to three representations:** In practice, we'll be interested in Representation 3. We present the first 2 in order to set the stage for some intermediate steps that might help us understand what is under the hood of Representation 3. In applications, we'll use only a small subset of the DMD to approximate dynamics. To do that, we'll want to use the reduced SVD's affiliated with representation 3, not the full SVD's affiliated with representations 1 and 2.

 +++

@@ -979,7 +976,7 @@ where we use $\overline X_{t+1}, t \geq 1 $ to denote a forecast.

 This representation is related to one originally proposed by {cite}`schmid2010`.

-It can be regarded as an intermediate step to a related representation 3 to be presented later
+It can be regarded as an intermediate step on the way to obtaining a related representation 3 to be presented later


 As with Representation 1, we continue to
@@ -994,7 +991,7 @@ As we observed and illustrated earlier in this lecture

 * (b) for a reduced SVD of $X$, $U^T U $ is not an identity matrix.

-As we shall see later, a full SVD is too confining for what we ultimately want to do, namely, situations in which $U^T U$ is **not** an identity matrix because we use a reduced SVD of $X$.
+As we shall see later, a full SVD is too confining for what we ultimately want to do, namely, cope with situations in which $U^T U$ is **not** an identity matrix because we use a reduced SVD of $X$.

 But for now, let's proceed under the assumption that we are using a full SVD so that both of the preceding two requirements (a) and (b) are satisfied.

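Properties (a) and (b) are easy to check numerically. A minimal check under NumPy's shape conventions, with illustrative sizes for a tall matrix; the output shows which of the two products equals an identity in each case.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 3))                        # tall matrix, illustrative sizes

U_full, _, _ = np.linalg.svd(Y, full_matrices=True)    # U_full is 6 x 6
U_red, _, _ = np.linalg.svd(Y, full_matrices=False)    # U_red is 6 x 3

print(np.allclose(U_full.T @ U_full, np.eye(6)))       # True for a full SVD
print(np.allclose(U_full @ U_full.T, np.eye(6)))       # True for a full SVD
print(np.allclose(U_red.T @ U_red, np.eye(3)))         # True: orthonormal columns
print(np.allclose(U_red @ U_red.T, np.eye(6)))         # False: the 6 x 6 product has rank 3
```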
@@ -1101,7 +1098,7 @@ We'll say more about this interpretation in a related context when we discuss re

 We turn next to an alternative representation suggested by Tu et al. {cite}`tu_Rowley`.

-It is more appropriate to use this alternative representation when, as in practice is typically the case, we use a reduced SVD.
+It is more appropriate to use this alternative representation when, as is typically the case in practice, we use a reduced SVD.



@@ -1302,8 +1299,7 @@ is an $m \times n$ matrix of least squares projections of $X$ on $\Phi$.



-By virtue of least-squares projection theory discussed here <https://python-advanced.quantecon.org/orth_proj.html>,
-we can represent $X$ as the sum of the projection $\check X$ of $X$ on $\Phi$ plus a matrix of errors.
+By virtue of least-squares projection theory discussed in this quantecon lecture <https://python-advanced.quantecon.org/orth_proj.html>, we can represent $X$ as the sum of the projection $\check X$ of $X$ on $\Phi$ plus a matrix of errors.


 To verify this, note that the least squares projection $\check X$ is related to $X$ by
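The verifying formula falls outside this hunk. As a sketch, the projection and its error matrix can be formed with the Moore-Penrose pseudoinverse; `Phi` and `X` below are random placeholders, with `Phi` assumed to have full column rank.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 6, 10, 3                          # illustrative sizes
Phi = rng.standard_normal((m, r))           # placeholder basis matrix
X = rng.standard_normal((m, n))             # placeholder data matrix

X_check = Phi @ np.linalg.pinv(Phi) @ X     # least squares projection of X on Phi
E = X - X_check                             # matrix of errors

# The errors are orthogonal to the columns of Phi.
print(np.allclose(Phi.T @ E, 0))            # True
```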
@@ -1411,7 +1407,7 @@ $$ (eq:beqnsmall)



-which is computationally efficient approximation to the following instance of equation {eq}`eq:decoder102` for the initial vector $\check b_1$:
+which is a computationally efficient approximation to the following instance of equation {eq}`eq:decoder102` for the initial vector $\check b_1$:

 $$
 \check b_1= \Phi^{+} X_1
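
Continuing the placeholder sketch above, the exact instance $\check b_1 = \Phi^{+} X_1$ is one line; the cheaper approximation in {eq}`eq:beqnsmall` itself lies outside this hunk.

```python
# Exact projection coefficients for the initial snapshot, via the pseudoinverse.
b1_check = np.linalg.pinv(Phi) @ X[:, 0]
```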
