You can read about the Eckart-Young theorem and some of its uses here <https://en.wikipedia.org/wiki/Low-rank_approximation>.
We'll make use of this theorem when we discuss principal components analysis (PCA) and also dynamic mode decomposition (DMD).
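As a quick illustration of the theorem — a sketch using an arbitrary random matrix and an arbitrary truncation rank, not tied to any particular application below — truncating the SVD after $r$ singular values yields the best rank-$r$ approximation in the Frobenius norm:

```{code-cell} ipython3
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))

# reduced SVD, truncated after r singular values
U, S, VT = np.linalg.svd(X, full_matrices=False)
r = 2
Xr = U[:, :r] @ np.diag(S[:r]) @ VT[:r, :]

# Eckart-Young: the truncation error equals the norm of the discarded
# singular values, and no other rank-r matrix can do better
err_opt = np.linalg.norm(X - Xr, 'fro')
Y = rng.standard_normal((8, r)) @ rng.standard_normal((r, 5))  # an arbitrary rank-r matrix
err_opt, np.sqrt(np.sum(S[r:]**2)), np.linalg.norm(X - Y, 'fro')
```

The first two numbers coincide, and the third — the error of an arbitrary competing rank-$r$ matrix — is larger.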
The cells above illustrate applications of the `full_matrices=True` and `full_matrices=False` options.
Using `full_matrices=False` returns a reduced singular value decomposition.
The **full** and **reduced** SVD's both accurately decompose an $m \times n$ matrix $X$.
When we study Dynamic Mode Decompositions below, it will be important for us to remember the preceding properties of full and reduced SVD's in such tall-skinny cases.
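For instance — a small sketch with an arbitrary $5 \times 2$ random matrix — in a tall-skinny case $\hat U$ has orthonormal columns, so $\hat U^\top \hat U = I$, while $\hat U \hat U^\top$ is only an orthogonal projection, not the identity:

```{code-cell} ipython3
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 2                       # tall-skinny: m > n
X = rng.standard_normal((m, n))

U, S, VT = np.linalg.svd(X, full_matrices=True)            # U is m x m
Uhat, Shat, VhatT = np.linalg.svd(X, full_matrices=False)  # Uhat is m x n

# Uhat'Uhat is the n x n identity, but UhatUhat' is only a rank-n projection
np.allclose(Uhat.T @ Uhat, np.eye(n)), np.allclose(Uhat @ Uhat.T, np.eye(m))
```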
Now let's turn to a short-fat case.
To illustrate this case, we'll set $m = 2 < 5 = n$ and compute both full and reduced SVD's.
```{code-cell} ipython3
import numpy as np

m, n = 2, 5
X = np.random.rand(m, n)

# full SVD
U, S, V = np.linalg.svd(X, full_matrices=True)

# reduced SVD
Uhat, Shat, Vhat = np.linalg.svd(X, full_matrices=False)

print('U, S, V = ')
U, S, V
```

```{code-cell} ipython3
print('Uhat, Shat, Vhat = ')
Uhat, Shat, Vhat
```
Let's verify that our reduced SVD accurately represents $X$
```{code-cell} ipython3
SShat = np.diag(Shat)
np.allclose(X, Uhat @ SShat @ Vhat)
```
## Polar Decomposition
A **reduced** singular value decomposition (SVD) of $X$ is related to a **polar decomposition** of $X$
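As a preview, here is a sketch — assuming a short-fat matrix with $m \leq n$, so that $\hat U$ is square and orthogonal — of how polar factors can be built from the reduced SVD: $S = \hat U \hat \Sigma \hat U^\top$ is symmetric positive semidefinite, $Q = \hat U \hat V$ has orthonormal rows, and $X = S Q$:

```{code-cell} ipython3
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((2, 5))      # short-fat: m = 2 <= n = 5

Uhat, Shat, Vhat = np.linalg.svd(X, full_matrices=False)

S = Uhat @ np.diag(Shat) @ Uhat.T    # symmetric positive semidefinite, equals (X X')^{1/2}
Q = Uhat @ Vhat                      # orthonormal rows: Q Q' = I

np.allclose(X, S @ Q)
```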