Commit 09c3897

FIX: tidy up and fix align issue for LaTeX (#384)

1 parent dd9fd01 commit 09c3897

File tree: 1 file changed, +24 −49 lines changed

lectures/svd_intro.md — 24 additions & 49 deletions
@@ -13,8 +13,6 @@ kernelspec:
 
 # Singular Value Decomposition (SVD)
 
-
-
 ## Overview
 
 The **singular value decomposition** (SVD) is a work-horse in applications of least squares projection that
@@ -38,16 +36,16 @@ Necessarily, $p \leq \min(m,n)$.
 
 In much of this lecture, we'll think of $X$ as a matrix of **data** in which
 
-* each column is an **individual** -- a time period or person, depending on the application
+* each column is an **individual** -- a time period or person, depending on the application
 
-* each row is a **random variable** describing an attribute of a time period or a person, depending on the application
+* each row is a **random variable** describing an attribute of a time period or a person, depending on the application
 
 
 We'll be interested in two situations
 
-* A **short and fat** case in which $m << n$, so that there are many more columns (individuals) than rows (attributes).
+* A **short and fat** case in which $m << n$, so that there are many more columns (individuals) than rows (attributes).
 
-* A **tall and skinny** case in which $m >> n$, so that there are many more rows (attributes) than columns (individuals).
+* A **tall and skinny** case in which $m >> n$, so that there are many more rows (attributes) than columns (individuals).
 
 
 We'll apply a **singular value decomposition** of $X$ in both situations.
@@ -116,29 +114,20 @@ Thus,
 
 We'll apply this circle of ideas later in this lecture when we study Dynamic Mode Decomposition.
 
-
-
-
-
 **Road Ahead**
 
 What we have described above is called a **full** SVD.
 
-
-
 In a **full** SVD, the shapes of $U$, $\Sigma$, and $V$ are $\left(m, m\right)$, $\left(m, n\right)$, $\left(n, n\right)$, respectively.
 
 Later we'll also describe an **economy** or **reduced** SVD.
 
 Before we study a **reduced** SVD we'll say a little more about properties of a **full** SVD.
 
-
 ## Four Fundamental Subspaces
 
-
 Let ${\mathcal C}$ denote a column space, ${\mathcal N}$ denote a null space, and ${\mathcal R}$ denote a row space.
 
-
 Let's start by recalling the four fundamental subspaces of an $m \times n$
 matrix $X$ of rank $p$.
 
@@ -263,14 +252,12 @@ $$
 \end{aligned}
 $$ (eq:fourspaceSVD)
 
-
-
 Since $U$ and $V$ are both orthonormal matrices, collection {eq}`eq:fourspaceSVD` asserts that
 
-* $U_L$ is an orthonormal basis for the column space of $X$
-* $U_R$ is an orthonormal basis for the null space of $X^\top $
-* $V_L$ is an orthonormal basis for the row space of $X$
-* $V_R$ is an orthonormal basis for the null space of $X$
+* $U_L$ is an orthonormal basis for the column space of $X$
+* $U_R$ is an orthonormal basis for the null space of $X^\top $
+* $V_L$ is an orthonormal basis for the row space of $X$
+* $V_R$ is an orthonormal basis for the null space of $X$
 
 
 We have verified the four claims in {eq}`eq:fourspaceSVD` simply by performing the multiplications called for by the right side of {eq}`eq:fullSVDpartition` and reading them.
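As a quick numerical aside (not part of this commit): the four orthonormal-basis claims shown in this hunk can be checked with NumPy's full SVD. The test matrix below is an arbitrary rank-2 example chosen only for illustration.

```python
import numpy as np

# Build a 4x3 matrix of rank p = 2 by multiplying rank-2 factors
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))
p = np.linalg.matrix_rank(X)

# Full SVD: U is (4, 4), V^T is (3, 3)
U, s, Vt = np.linalg.svd(X, full_matrices=True)
U_L, U_R = U[:, :p], U[:, p:]          # bases for C(X) and N(X^T)
V_L, V_R = Vt[:p, :].T, Vt[p:, :].T    # bases for R(X) and N(X)

# U_R spans the null space of X^T, V_R the null space of X
assert np.allclose(X.T @ U_R, 0)
assert np.allclose(X @ V_R, 0)
```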
@@ -286,8 +273,6 @@ Sometimes these properties are described with the following two pairs of orthogo
 * ${\mathcal C}(X)$ is the orthogonal complement of $ {\mathcal N}(X^\top )$
 * ${\mathcal R}(X)$ is the orthogonal complement ${\mathcal N}(X)$
 
-
-
 Let's do an example.
 
 
@@ -340,13 +325,13 @@ Suppose that we want to construct the best rank $r$ approximation of an $m \tim
 
 By best, we mean a matrix $X_r$ of rank $r < p$ that, among all rank $r$ matrices, minimizes
 
-$$ || X - X_r || $$
+$$
+|| X - X_r ||
+$$
 
 where $ || \cdot || $ denotes a norm of a matrix $X$ and where $X_r$ belongs to the space of all rank $r$ matrices
 of dimension $m \times n$.
 
-
-
 Three popular **matrix norms** of an $m \times n$ matrix $X$ can be expressed in terms of the singular values of $X$
 
 * the **spectral** or $l^2$ norm $|| X ||_2 = \max_{||y|| \neq 0} \frac{||X y ||}{||y||} = \sigma_1$
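The best rank-$r$ approximation discussed in this hunk (the Eckart-Young theorem) can be illustrated by truncating an SVD in NumPy: for the Frobenius norm, the minimized error equals $\sqrt{\sigma_{r+1}^2 + \cdots + \sigma_p^2}$. The example below is illustrative only and not part of the commit.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

r = 2
# Keep the r largest singular values: X_r = U_r diag(s_r) V_r^T
X_r = U[:, :r] * s[:r] @ Vt[:r, :]

# Frobenius error of the best rank-r approximation equals the
# root-sum-of-squares of the discarded singular values
err = np.linalg.norm(X - X_r, "fro")
assert np.isclose(err, np.sqrt(np.sum(s[r:] ** 2)))
```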
@@ -369,12 +354,6 @@ You can read about the Eckart-Young theorem and some of its uses [here](https://
 
 We'll make use of this theorem when we discuss principal components analysis (PCA) and also dynamic mode decomposition (DMD).
 
-
-
-
-
-
-
 ## Full and Reduced SVD's
 
 Up to now we have described properties of a **full** SVD in which shapes of $U$, $\Sigma$, and $V$ are $\left(m, m\right)$, $\left(m, n\right)$, $\left(n, n\right)$, respectively.
@@ -385,7 +364,6 @@ Thus, note that because we assume that $X$ has rank $p$, there are only $p$ nonz
 
 A **reduced** SVD uses this fact to express $U$, $\Sigma$, and $V$ as matrices with shapes $\left(m, p\right)$, $\left(p, p\right)$, $\left( n, p\right)$.
 
-
 You can read about reduced and full SVD here
 <https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html>
 
@@ -411,7 +389,7 @@ VV^\top & = I & \quad V^\top V = I
 \end{aligned}
 $$
 
-* In a **short-fat** case in which $m < < n$, for a **reduced** SVD
+* In a **short-fat** case in which $m < < n$, for a **reduced** SVD
 
 
 $$
 \begin{aligned}
@@ -428,20 +406,17 @@ Let's do an exercise to compare **full** and **reduced** SVD's.
 To review,
 
 
-* in a **full** SVD
-
-- $U$ is $m \times m$
-- $\Sigma$ is $m \times n$
-- $V$ is $n \times n$
-
-* in a **reduced** SVD
-
-- $U$ is $m \times p$
-- $\Sigma$ is $p\times p$
-- $V$ is $n \times p$
+* in a **full** SVD
 
+- $U$ is $m \times m$
+- $\Sigma$ is $m \times n$
+- $V$ is $n \times n$
 
+* in a **reduced** SVD
 
+- $U$ is $m \times p$
+- $\Sigma$ is $p\times p$
+- $V$ is $n \times p$
 
 First, let's study a case in which $m = 5 > n = 2$.
 
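The shape bookkeeping reviewed in this hunk maps directly onto `numpy.linalg.svd`'s `full_matrices` flag; note that NumPy's reduced SVD truncates to $\min(m, n)$ rather than to the rank $p$. A quick check, not part of the commit:

```python
import numpy as np

m, n = 5, 2  # tall-skinny: the case the lecture studies first
X = np.random.default_rng(2).standard_normal((m, n))

# Full SVD: U is (m, m), V^T is (n, n); NumPy returns the singular
# values as a 1-D array (Sigma itself would be (m, n))
U, s, Vt = np.linalg.svd(X, full_matrices=True)
assert U.shape == (m, m) and Vt.shape == (n, n)

# Reduced SVD: NumPy keeps min(m, n) columns, so U is (m, p),
# the singular-value array has length p, and V^T is (p, n)
Uhat, shat, Vhatt = np.linalg.svd(X, full_matrices=False)
p = min(m, n)
assert Uhat.shape == (m, p) and Vhatt.shape == (p, n)
```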
@@ -618,10 +593,10 @@ If the matrix $C$ is diagonalizable, we can eigendecompose it, find its eigenval
 If $C$ is not diagonalizable, we can perform an SVD of $C$:
 
 $$
-\begin{align}
+\begin{aligned}
 B^T B &= V \Sigma^\top U^\top U \Sigma V^\top \cr
 &= V \Sigma^\top \Sigma V^\top
-\end{align}
+\end{aligned}
 $$
 
 $$
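As an aside, the identity $B^\top B = V \Sigma^\top \Sigma V^\top$ appearing in this hunk is easy to verify numerically; the matrix `B` below is a random stand-in for the de-meaned data matrix, used only for illustration and not part of the commit:

```python
import numpy as np

B = np.random.default_rng(3).standard_normal((6, 3))

U, s, Vt = np.linalg.svd(B, full_matrices=False)

# B^T B = V Sigma^T Sigma V^T, so V diagonalizes B^T B
# with eigenvalues equal to the squared singular values
lhs = B.T @ B
rhs = Vt.T @ np.diag(s**2) @ Vt
assert np.allclose(lhs, rhs)
```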
@@ -644,11 +619,11 @@ $$
 **Step 5: Create the Score Matrix:**
 
 $$
-\begin{align}
+\begin{aligned}
 T&= BV \cr
 &= U\Sigma V^\top \cr
 &= U\Sigma
-\end{align}
+\end{aligned}
 $$
 
 
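The score-matrix identity $T = BV = U\Sigma$ in this final hunk can also be checked numerically (a sketch with a random stand-in for $B$; not part of the commit):

```python
import numpy as np

B = np.random.default_rng(4).standard_normal((6, 3))

# Reduced SVD: B = U Sigma V^T with V orthonormal, so B V = U Sigma
U, s, Vt = np.linalg.svd(B, full_matrices=False)
T = B @ Vt.T                  # scores: data projected onto principal directions
assert np.allclose(T, U * s)  # U * s broadcasts to U @ diag(s)
```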
0 commit comments