
Commit 07b3a97

Tom's second July 17 edits of svd lecture
1 parent eb5e1aa


lectures/svd_intro.md

Lines changed: 70 additions & 31 deletions
```diff
@@ -1099,7 +1099,7 @@ $$
 X = \tilde U \tilde \Sigma \tilde V^T,
 $$
 
-where now $\tilde U$ is $m \times p$, $\tilde \Sigma$ is $ p \times p$ and $\tilde V^T$ is $p \times n$.
+where now $\tilde U$ is $m \times p$, $\tilde \Sigma$ is $ p \times p$, and $\tilde V^T$ is $p \times n$.
 
 Our minimum-norm least-squares estimator approximator of $A$ now has representation
```
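To make the reduced SVD shapes in the hunk above concrete, here is a minimal numpy sketch; the sizes `m, n, p` and the random matrix `X` are illustrative assumptions, not objects from the lecture.

```python
import numpy as np

# Illustrative sizes (assumptions): m variables, n snapshots, p retained modes
m, n, p = 10, 8, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((m, n))

# Reduced SVD: keep only the first p singular values and vectors
U, s, Vh = np.linalg.svd(X, full_matrices=False)
U_t, Sigma_t, Vh_t = U[:, :p], np.diag(s[:p]), Vh[:p, :]

print(U_t.shape, Sigma_t.shape, Vh_t.shape)    # (10, 3) (3, 3) (3, 8)

# U~^T U~ = I_p holds, but U~ U~^T is a rank-p projection, not I_m
print(np.allclose(U_t.T @ U_t, np.eye(p)))     # True
print(np.allclose(U_t @ U_t.T, np.eye(m)))     # False

# The three factors reproduce X exactly only when p equals the rank of X
print(np.allclose(U_t @ Sigma_t @ Vh_t, X))    # False here, since p < rank(X)
```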
```diff
@@ -1114,26 +1114,52 @@ $$
 \tilde A =\tilde U^T \hat A \tilde U
 $$ (eq:Atildered)
 
-Because we are now working with a reduced SVD, $\tilde U \tilde U^T \neq I$.
-
-Since
+**Interpretation as projection coefficients**
+
+{cite}`DDSE_book` remark that $\tilde A$ can be interpreted in terms of a projection of $\hat A$ onto the $p$ modes in $\tilde U$.
+
+To verify this, first note that, because $ \tilde U^T \tilde U = I$, it follows that
+
+$$
+\tilde A = \tilde U^T \hat A \tilde U = \tilde U^T X' \tilde V \tilde \Sigma^{-1} \tilde U^T \tilde U
+= \tilde U^T X' \tilde V \tilde \Sigma^{-1}
+$$ (eq:tildeAverify)
+
+Next, we'll just compute the regression coefficients in a projection of $\hat A \tilde U$ on $\tilde U$ using the
+standard least-squares formula
+
+$$
+(\tilde U^T \tilde U)^{-1} \tilde U^T \hat A \tilde U = (\tilde U^T \tilde U)^{-1} \tilde U^T X' \tilde V \tilde \Sigma^{-1} =
+\tilde U^T X' \tilde V \tilde \Sigma^{-1} = \tilde A .
+$$
+
+Note that because we are now working with a reduced SVD, $\tilde U \tilde U^T \neq I$.
+
+Consequently,
 
 $$
 \hat A \neq \tilde U \tilde A \tilde U^T,
 $$
 
-we can't simply recover $\hat A$ from $\tilde A$ and $\tilde U$.
+and we can't simply recover $\hat A$ from $\tilde A$ and $\tilde U$.
 
-Nevertheless, we hope for the best and construct an eigendecomposition of the
+Nevertheless, we hope for the best and proceed to construct an eigendecomposition of the
 $p \times p$ matrix $\tilde A$:
 
 $$
 \tilde A = \tilde W \Lambda \tilde W^{-1} .
 $$ (eq:tildeAeigenred)
 
-Mimicking our procedure in Representation 2, we cross our fingers and compute the $m \times p$ matrix
+Mimicking our procedure in Representation 2, we cross our fingers and compute an $m \times p$ matrix
 
 $$
 \tilde \Phi_s = \tilde U \tilde W
```
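The projection-coefficient interpretation in this hunk is easy to check numerically. The sketch below uses stand-in random snapshot data (an assumption, not the lecture's data) and verifies both {eq}`eq:tildeAverify` and the least-squares reading of $\tilde A$.

```python
import numpy as np

# Stand-in snapshot data (assumption): columns are X_1, ..., X_{n+1}
m, n, p = 10, 8, 3
rng = np.random.default_rng(0)
data = rng.standard_normal((m, n + 1))
X, Xp = data[:, :-1], data[:, 1:]              # X and X' in the lecture's notation

U, s, Vh = np.linalg.svd(X, full_matrices=False)
U_t, Sigma_t, V_t = U[:, :p], np.diag(s[:p]), Vh[:p, :].T

A_hat = Xp @ V_t @ np.linalg.inv(Sigma_t) @ U_t.T   # reduced-SVD estimate of A
A_tilde = U_t.T @ A_hat @ U_t                       # eq (eq:Atildered)

# eq (eq:tildeAverify): the trailing U~^T U~ = I collapses
print(np.allclose(A_tilde, U_t.T @ Xp @ V_t @ np.linalg.inv(Sigma_t)))   # True

# Least-squares coefficients of a regression of A_hat U~ on U~ recover A_tilde
coeffs, *_ = np.linalg.lstsq(U_t, A_hat @ U_t, rcond=None)
print(np.allclose(coeffs, A_tilde))                 # True
```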
```diff
@@ -1200,7 +1226,7 @@ $$
 \hat A \phi_i = \lambda_i \phi_i .
 $$
 
-Evidently, $\phi_i$ is an eigenvector of $\hat A$ that corresponds to eigenvalue $\lambda_i$ of both $\tilde A$ and $\hat A$.
+This equation confirms that $\phi_i$ is an eigenvector of $\hat A$ that corresponds to eigenvalue $\lambda_i$ of both $\tilde A$ and $\hat A$.
 
 This concludes the proof.
```
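The eigenvector claim the proof establishes can likewise be verified numerically. The sketch below builds $\Phi = X' \tilde V \tilde \Sigma^{-1} \tilde W$, as in the lecture's {eq}`eq:Phiformula`, and checks $\hat A \phi_i = \lambda_i \phi_i$ column by column, again on stand-in data.

```python
import numpy as np

# Same stand-in snapshot data as in the sketch above (assumption)
m, n, p = 10, 8, 3
rng = np.random.default_rng(0)
data = rng.standard_normal((m, n + 1))
X, Xp = data[:, :-1], data[:, 1:]

U, s, Vh = np.linalg.svd(X, full_matrices=False)
U_t, Sigma_t, V_t = U[:, :p], np.diag(s[:p]), Vh[:p, :].T

A_hat = Xp @ V_t @ np.linalg.inv(Sigma_t) @ U_t.T
A_tilde = U_t.T @ Xp @ V_t @ np.linalg.inv(Sigma_t)

lam, W_t = np.linalg.eig(A_tilde)                 # eq (eq:tildeAeigenred)
Phi = Xp @ V_t @ np.linalg.inv(Sigma_t) @ W_t     # modes, as in eq:Phiformula

# Each column phi_i of Phi is an eigenvector of A_hat with eigenvalue lambda_i
for i in range(p):
    print(np.allclose(A_hat @ Phi[:, i], lam[i] * Phi[:, i]))   # True, p times
```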
```diff
@@ -1246,8 +1272,8 @@ $$
 \check b = (\Phi^T \Phi)^{-1} \Phi^T X
 $$ (eq:checkbform)
 
-The $p \times n$ matrix $\check b$ is recognizable as the matrix of least squares regression coefficients of the $m \times n$ matrix
-$X$ on the $m \times p$ matrix $\Phi$ and
+The $p \times n$ matrix $\check b$ is recognizable as a matrix of least squares regression coefficients of the $m \times n$ matrix
+$X$ on the $m \times p$ matrix $\Phi$ and consequently
 
 $$
 \check X = \Phi \check b
```
```diff
@@ -1272,7 +1298,7 @@ or
 
 $$
 X = \Phi \check b + \epsilon
-$$
+$$ (eq:Xbcheck)
 
 where $\epsilon$ is an $m \times n$ matrix of least squares errors satisfying the least squares
 orthogonality conditions $\epsilon^T \Phi =0 $ or
```
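A quick check of {eq}`eq:checkbform` and the orthogonality conditions on the same stand-in data; note that `np.linalg.pinv` applies $(\Phi^T \Phi)^{-1} \Phi^T$ with conjugate transposes, the appropriate generalization when the eigendecomposition makes $\Phi$ complex.

```python
import numpy as np

# Same stand-in data and DMD modes as in the sketches above (assumption)
m, n, p = 10, 8, 3
rng = np.random.default_rng(0)
data = rng.standard_normal((m, n + 1))
X, Xp = data[:, :-1], data[:, 1:]

U, s, Vh = np.linalg.svd(X, full_matrices=False)
U_t, Sigma_t, V_t = U[:, :p], np.diag(s[:p]), Vh[:p, :].T
lam, W_t = np.linalg.eig(U_t.T @ Xp @ V_t @ np.linalg.inv(Sigma_t))
Phi = Xp @ V_t @ np.linalg.inv(Sigma_t) @ W_t

# eq (eq:checkbform): least-squares regression of X on Phi
b_check = np.linalg.pinv(Phi) @ X
X_check = Phi @ b_check                      # fitted values

# Orthogonality conditions: the residuals are orthogonal to the modes
eps = X - X_check
print(np.allclose(Phi.conj().T @ eps, 0))    # True up to round-off
```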
```diff
@@ -1288,79 +1314,92 @@ which implies formula {eq}`eq:checkbform`.
 
-### Alternative algorithm
+### A useful approximation
 
 There is a useful way to approximate the $p \times 1$ vector $\check b_t$ instead of using formula
 {eq}`eq:decoder102`.
 
-In particular, the following argument from {cite}`DDSE_book` (page 240) provides a computationally efficient way
-to compute $\check b_t$.
+In particular, the following argument adapted from {cite}`DDSE_book` (page 240) provides a computationally efficient way
+to approximate $\check b_t$.
 
 For convenience, we'll do this first for time $t=1$.
 
-For $t=1$, we have
+For $t=1$, from equation {eq}`eq:Xbcheck` we have
 
 $$
-X_1 = \Phi \check b_1
+\check X_1 = \Phi \check b_1
 $$ (eq:X1proj)
 
 where $\check b_1$ is a $p \times 1$ vector.
 
-Recall from representation 1 above that $X_1 = U \tilde b_1$, where $\tilde b_1$ is a time $1$ basis vector for representation 1.
-
-It then follows from equation {eq}`eq:Phiformula` that
+Recall from representation 1 above that $X_1 = U \tilde b_1$, where $\tilde b_1$ is a time $1$ basis vector for representation 1 and $U$ is from a full SVD of $X$.
+
+It then follows from equation {eq}`eq:Xbcheck` that
 
 $$
-U \tilde b_1 = X' V \Sigma^{-1} W \check b_1
+U \tilde b_1 = X' \tilde V \tilde \Sigma^{-1} \tilde W \check b_1 + \epsilon_1
+$$
+
+where $\epsilon_1$ is a least-squares error vector from equation {eq}`eq:Xbcheck`.
+
+It follows that
+
+$$
+\tilde b_1 = U^T X' \tilde V \tilde \Sigma^{-1} \tilde W \check b_1 + U^T \epsilon_1
 $$
 
-and consequently
+Replacing the error term $U^T \epsilon_1$ by zero, and replacing $U$ from a full SVD of $X$ with
+$\tilde U$ from a reduced SVD, we obtain an approximation $\hat b_1$ to $\tilde b_1$:
 
 $$
-\tilde b_1 = U^T X' V \Sigma^{-1} W \check b_1
+\hat b_1 = \tilde U^T X' \tilde V \tilde \Sigma^{-1} \tilde W \check b_1
 $$
 
-Recall that from equation {eq}`eq:AhatSVDformula`, $ \tilde A = U^T X' V \Sigma^{-1}$.
+Recall that from equation {eq}`eq:tildeAverify`, $ \tilde A = \tilde U^T X' \tilde V \tilde \Sigma^{-1}$.
 
 It then follows that
 
 $$
-\tilde b_1 = \tilde A W \check b_1
+\hat b_1 = \tilde A \tilde W \check b_1
 $$
 
-and therefore, by the eigendecomposition {eq}`eq:tildeAeigen` of $\tilde A$, we have
+and therefore, by the eigendecomposition {eq}`eq:tildeAeigenred` of $\tilde A$, we have
 
 $$
-\tilde b_1 = W \Lambda \check b_1
+\hat b_1 = \tilde W \Lambda \check b_1
 $$
 
 Consequently,
 
 $$
-\check b_1 = ( W \Lambda)^{-1} \tilde b_1
+\hat b_1 = ( \tilde W \Lambda)^{-1} \tilde b_1
 $$
+
+where we now use $\hat b_1$ to denote the implied approximation to $\check b_1$, obtained by solving the previous equation for $\check b_1$ and replacing $\hat b_1$ there by its target $\tilde b_1$,
 
 or
 
 $$
-\check b_1 = ( W \Lambda)^{-1} U^T X_1 ,
+\hat b_1 = ( \tilde W \Lambda)^{-1} \tilde U^T X_1 ,
 $$ (eq:beqnsmall)
 
-which is computationally more efficient than the following instance of equation {eq}`eq:decoder102` for approximating the initial vector $\check b_1$:
+which is a computationally efficient approximation to the following instance of equation {eq}`eq:decoder102` for the initial vector $\check b_1$:
 
 $$
 \check b_1= \Phi^{+} X_1
 $$ (eq:bphieqn)
 
-Users of DMD sometimes call components of the basis vector $\check b_t = \Phi^+ X_t \equiv (W \Lambda)^{-1} U^T X_t$ the **exact** DMD modes.
+(To highlight that {eq}`eq:beqnsmall` is an approximation, users of DMD sometimes call components of the basis vector $\check b_t = \Phi^+ X_t $ the **exact** DMD modes.)
 
 Conditional on $X_t$, we can compute our decoded $\check X_{t+j}, j = 1, 2, \ldots $ from
 either
```
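A sketch comparing the cheap formula {eq}`eq:beqnsmall` against the pseudoinverse formula {eq}`eq:bphieqn`. On the synthetic full-rank data used here truncation error keeps the two apart, so the sketch only reports the gap rather than asserting agreement.

```python
import numpy as np

# Same stand-in data and DMD objects as in the sketches above (assumption)
m, n, p = 10, 8, 3
rng = np.random.default_rng(0)
data = rng.standard_normal((m, n + 1))
X, Xp = data[:, :-1], data[:, 1:]

U, s, Vh = np.linalg.svd(X, full_matrices=False)
U_t, Sigma_t, V_t = U[:, :p], np.diag(s[:p]), Vh[:p, :].T
lam, W_t = np.linalg.eig(U_t.T @ Xp @ V_t @ np.linalg.inv(Sigma_t))
Phi = Xp @ V_t @ np.linalg.inv(Sigma_t) @ W_t

X1 = X[:, 0]

# eq (eq:bphieqn): coefficients via the m x p pseudoinverse
b_check_1 = np.linalg.pinv(Phi) @ X1

# eq (eq:beqnsmall): the cheaper route through the p x p matrix W~ Lambda
b_hat_1 = np.linalg.inv(W_t @ np.diag(lam)) @ U_t.T @ X1

# Report the gap; it shrinks as the p retained modes capture more of the data
print(np.linalg.norm(b_hat_1 - b_check_1))
```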
```diff
@@ -1370,13 +1409,13 @@ $$
 $$ (eq:checkXevoln)
 
-or
+or use the approximation
 
 $$
-\check X_{t+j} = \Phi \Lambda^j (W \Lambda)^{-1} U^T X_t .
+\hat X_{t+j} = \Phi \Lambda^j (\tilde W \Lambda)^{-1} \tilde U^T X_t .
 $$ (eq:checkXevoln2)
 
-We can then use $\check X_{t+j}$ to forcast $X_{t+j}$.
+We can then use $\check X_{t+j}$ or $\hat X_{t+j}$ to forecast $X_{t+j}$.
```
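Finally, a sketch of the forecasting rule {eq}`eq:checkXevoln2`; the helper name `forecast` and the stand-in data are assumptions for illustration.

```python
import numpy as np

# Same stand-in data and DMD objects as in the sketches above (assumption)
m, n, p = 10, 8, 3
rng = np.random.default_rng(0)
data = rng.standard_normal((m, n + 1))
X, Xp = data[:, :-1], data[:, 1:]

U, s, Vh = np.linalg.svd(X, full_matrices=False)
U_t, Sigma_t, V_t = U[:, :p], np.diag(s[:p]), Vh[:p, :].T
lam, W_t = np.linalg.eig(U_t.T @ Xp @ V_t @ np.linalg.inv(Sigma_t))
Phi = Xp @ V_t @ np.linalg.inv(Sigma_t) @ W_t

def forecast(X_t, j):
    """j-step-ahead forecast of X_{t+j} via eq (eq:checkXevoln2)."""
    b_hat_t = np.linalg.inv(W_t @ np.diag(lam)) @ U_t.T @ X_t
    # Imaginary parts cancel in theory, since Phi Lambda^j (W~ Lambda)^{-1} U~^T
    # is a real matrix for integer j >= 1; .real drops numerical round-off
    return (Phi @ np.diag(lam**j) @ b_hat_t).real

print(forecast(X[:, 0], j=1).shape)   # (10,): a forecast of the X_2 snapshot
```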
