Random Matrix Lecture 05

[[lecture-data]]


Class Notes

Pages 23-

Important

2. Rectangular Matrices

Geometric method: later, not at all, or in the notes.

2.4 Application: Compressed Sensing

eigenvalues for Gaussian random matrices

Let $G \in \mathbb{R}^{d \times m}$. The goal of compressed sensing is to recover $x \in \mathbb{R}^m$ from the measurement(s) / observation(s) $y = Gx \in \mathbb{R}^d$.
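A minimal numpy sketch of this measurement model (the sizes `d`, `m`, `k` below are assumptions for illustration, not from the lecture; only the shapes matter):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 40, 200, 5  # illustrative sizes: d measurements, ambient dimension m

# Sensing matrix with iid N(0, 1/d) entries, matching the scaling used later
G = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, m))

# A k-sparse signal: only k randomly chosen coordinates are nonzero
x = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x[support] = rng.normal(size=k)

y = G @ x  # the d measurements, with d << m
```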

Example

Medical imaging (eg, MRI). In this case, we get to choose G.

Question

How many measurements ($d$) are needed to recover $x \in \mathbb{R}^m$?

Answer

In general, we need $G$ to be injective (i.e., $d \geq m$).

BUT if $x$ has some sort of structure, then compressed sensing aims to make $d$ much smaller than $m$.

Question

What kind of structure do we need for x?

Answer

We want $x$ to be sparse.

Remark

Note that knowing that I can design $G$ (called the sensing matrix) to recover sparse $x$ is equivalent to knowing that there is a basis $A$ in which $x$ is sparse (thus, WLOG, we assume $x$ is sparse).

Demonstration

- Assume that $x$ is sparse in some known basis (that we choose), i.e., there is some invertible $A$ such that $x = A\tilde{x}$ with $\tilde{x}$ sparse. (It is important that $A$ is known, as it encodes our prior assumptions about $x$.)
- Then we have $GA\tilde{x} = Gx$: measuring $x$ with $G$ is the same as measuring the sparse vector $\tilde{x}$ with $GA$.
- Assuming I have designed my $G$ to recover sparse vectors, I can then recover $\tilde{x} = A^{-1}x$, and hence $x = A\tilde{x}$.

In short: suppose I have some $G$ that recovers sparse vectors. Then I can also recover any $x$ that is sparse in a known basis $A$.
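A small numpy sketch of this change-of-basis argument (the dimensions are assumptions for illustration; a Gaussian $A$ is invertible almost surely):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k = 40, 200, 5

G = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, m))
A = rng.normal(size=(m, m))  # known basis; invertible with probability 1

# x_tilde is sparse; x = A @ x_tilde is dense but sparse in the basis A
x_tilde = np.zeros(m)
x_tilde[rng.choice(m, size=k, replace=False)] = rng.normal(size=k)
x = A @ x_tilde

# Measuring x with G is the same as measuring the sparse x_tilde with GA:
assert np.allclose(G @ x, (G @ A) @ x_tilde)
```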

2.4.1 Null Space and Restricted Isometry Properties

Question

1. When does $Gx$ uniquely determine $x$ for $x$ sparse? (When is $G$ injective on sparse vectors?)
2. How do we actually recover $x$ (quickly)?

Sparsity

We denote the sparsity of a vector $x$ by

$$\lVert x \rVert_0 := \#\{i : x_i \neq 0\} = \lvert \{i : x_i \neq 0\} \rvert$$

If $\lVert x \rVert_0 \leq k$, we say that $x$ is $k$-sparse.

Note

This is not a norm; it is not homogeneous, since $\lVert \lambda x \rVert_0 = \lVert x \rVert_0$ for every $\lambda \neq 0$.

see sparsity
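A quick numpy illustration of $\lVert \cdot \rVert_0$ and the failure of homogeneity noted above (the helper name `l0` is made up for this sketch):

```python
import numpy as np

def l0(x: np.ndarray) -> int:
    """Sparsity ||x||_0: the number of nonzero entries of x."""
    return int(np.count_nonzero(x))

x = np.array([0.0, 3.0, 0.0, -1.5, 0.0])
assert l0(x) == 2           # x is 2-sparse
assert l0(7 * x) == l0(x)   # not homogeneous: ||7x||_0 = ||x||_0, not 7*||x||_0
```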

Distinguishes k-sparse vectors

$G$ distinguishes $k$-sparse vectors if

$$\lVert x \rVert_0, \lVert x' \rVert_0 \leq k,\quad Gx = Gx' \implies x = x'$$

i.e., $G$ is injective on $\{x : \lVert x \rVert_0 \leq k\} \subseteq \mathbb{R}^m$.

see distinguishes sparse vectors

If such a $G$ exists, then yes, a way to recover $x$ exists! Now, we want to find one with $d \ll m$.

k-nullspace property

$G$ has the $k$-nullspace property (NSP) if for all $x$ with $\lVert x \rVert_0 \leq k$,

$$x \neq 0 \implies Gx \neq 0$$

see nullspace property

Proposition

$G$ distinguishes $k$-sparse vectors if and only if $G$ has the $2k$-NSP.

Proof

($\Leftarrow$) (via contrapositive)

Suppose $G$ does not distinguish $k$-sparse vectors. Then there is some $x \neq x'$ with $\lVert x \rVert_0, \lVert x' \rVert_0 \leq k$ such that $Gx = Gx'$. Note that $\lVert x - x' \rVert_0 \leq 2k$ and $x - x' \neq 0$, but

$$Gx - Gx' = G(x - x') = 0$$

Thus $G$ does not have the $2k$-NSP.
($\Rightarrow$)

Now, suppose $G$ does distinguish $k$-sparse vectors. Suppose $\lVert x \rVert_0 \leq 2k$ and $x \neq 0$. Splitting the support of $x$ into two sets of size at most $k$, there exist $k$-sparse $x', x''$ with $x' \neq x''$ such that $x = x' - x''$. Since $G$ distinguishes $k$-sparse vectors, $Gx' \neq Gx''$, so

$$0 \neq Gx' - Gx'' = G(x' - x'') = Gx$$

i.e., $G$ has the $2k$-NSP.

see distinguishing is equivalent to double nullspace property
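A toy numpy example of this equivalence with $k = 1$ and an assumed size $m = 4$: a $G$ whose null space contains a nonzero $2$-sparse vector cannot distinguish $1$-sparse vectors:

```python
import numpy as np

# G has the 2-sparse vector (1, 1, 0, 0) in its null space, so it fails the 2-NSP
G = np.array([[1.0, -1.0, 0.0, 0.0]])

# Split that null vector as x1 - x2 with x1, x2 both 1-sparse
x1 = np.array([1.0, 0.0, 0.0, 0.0])
x2 = np.array([0.0, -1.0, 0.0, 0.0])  # x1 - x2 = (1, 1, 0, 0)

# G maps both to the same measurement, so it fails to distinguish 1-sparse vectors
assert np.allclose(G @ x1, G @ x2) and not np.allclose(x1, x2)
```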

$(k,\delta)$ restricted isometry property

$G$ has the $(k,\delta)$ restricted isometry property (RIP) if for all $x \neq 0$ with $\lVert x \rVert_0 \leq k$ we have

$$(1-\delta)\lVert x \rVert^2 \leq \lVert Gx \rVert^2 \leq (1+\delta)\lVert x \rVert^2$$

see restricted isometry property

Restricted to sparse vectors, G is approximately an isometry.

Theorem

If $G$ has the $(k,\delta)$-RIP with $\delta < 1$, then $G$ has the $k$-NSP.

This follows immediately from the definition: if $x \neq 0$ and $\lVert x \rVert_0 \leq k$, then $\lVert Gx \rVert^2 \geq (1-\delta)\lVert x \rVert^2 > 0$, so $Gx \neq 0$.

See restricted isometry implies nullspace property

Note

The RIP is more useful than the NSP for algorithms, especially when noise is added to the measurements: $y = Gx + \varepsilon$ in place of $y = Gx$.

Theorem

Suppose $k \geq 1$, $\delta \in (0,1)$, and $d \geq \frac{64 k \log m}{\delta^2}$. Let $G \sim N\left(0, \frac{1}{d}\right)^{d \times m}$. Then

$$P[G \text{ has the } (k,\delta)\text{-RIP}] \geq 1 - 2m^{-k}$$

(We want to reduce the number of measurements required: here $d \asymp k \log m$, which can be far smaller than $m$.)

Proof

Suppose $S \subseteq [m]$ with $|S| = k$, and $x \in \mathbb{R}^m$. Define

1. $x^{(S)} \in \mathbb{R}^k$ as the restriction of $x$ to the indices in $S$,
2. $G^{(S)} \in \mathbb{R}^{d \times k}$ as the restriction of $G$ to the columns with indices in $S$.

If $\lVert x \rVert_0 \leq k$ and all non-zero indices are in $S$, i.e., $\operatorname{supp}(x) \subseteq S$, then we have

$$Gx = G^{(S)}x^{(S)} \quad \text{and} \quad \lVert x \rVert = \lVert x^{(S)} \rVert$$

In this case, the $(k,\delta)$-RIP is equivalent to requiring that for all $S \subseteq [m]$ with $|S| = k$ we have

$$(1-\delta)\lVert x^{(S)} \rVert^2 \leq \lVert G^{(S)}x^{(S)} \rVert^2 \leq (1+\delta)\lVert x^{(S)} \rVert^2$$

Now, let $\hat{x}^{(S)} := x^{(S)} / \lVert x^{(S)} \rVert$. Since the supremum of the quadratic form over unit vectors is the operator norm of the symmetric matrix $(G^{(S)})^T G^{(S)} - I_k$, the condition above is equivalent to

$$-\delta \leq \left\langle \hat{x}^{(S)}, \left((G^{(S)})^T G^{(S)} - I_k\right)\hat{x}^{(S)} \right\rangle \leq \delta \iff \lVert (G^{(S)})^T G^{(S)} - I_k \rVert \leq \delta$$

So, applying the union bound over supports, we can see

$$P[G \text{ not } (k,\delta)\text{-RIP}] \overset{\text{UB}}{\leq} \sum_{S \subseteq [m],\, |S| = k} P\left[\lVert (G^{(S)})^T G^{(S)} - I_k \rVert > \delta\right]$$

Now, $\operatorname{Law}(G^{(S)}) = N\left(0, \frac{1}{d}\right)^{d \times k}$. So introduce $H \sim N(0,1)^{k \times d}$, so that $(G^{(S)})^T G^{(S)}$ has the same law as $\frac{1}{d}HH^T$. Then we have

$$P[G \text{ not } (k,\delta)\text{-RIP}] \leq \binom{m}{k} P\left[\lVert HH^T - d I_k \rVert > \delta d\right] \overset{(*)}{\leq} \binom{m}{k} \cdot 2\exp\left(-\frac{1}{8}\min\left\{\frac{\delta^2 d^2}{4d}, \frac{\delta d}{2}\right\}\right)$$

where $(*)$ is the high-probability bound on the operator norm of the deviation of a Gaussian covariance matrix that we saw last time. Continuing the calculation, using $\binom{m}{k} \leq m^k$ and noting that $\delta < 1$ makes $\frac{\delta^2 d}{4}$ the smaller term in the minimum, we see

$$P[G \text{ not } (k,\delta)\text{-RIP}] \leq m^k \cdot 2\exp\left(-\frac{\delta^2}{32}d\right) = 2\exp\left(k\log m - \frac{\delta^2}{32}d\right) \leq 2\exp(-k\log m) = 2m^{-k}$$

where the last inequality uses $d \geq \frac{64 k \log m}{\delta^2}$, i.e., $\frac{\delta^2}{32}d \geq 2k\log m$.

see dimension requirements for gaussian random matrix to be a restricted isometry with high probability
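A Monte Carlo sanity check of this statement (not a proof: it samples a few random supports $S$ rather than all $\binom{m}{k}$, and the parameter values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, delta = 500, 3, 0.5
d = int(np.ceil(64 * k * np.log(m) / delta**2))  # the theorem's requirement on d

G = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, m))  # iid N(0, 1/d) entries

# Check ||G^(S)^T G^(S) - I_k|| <= delta on randomly sampled supports S
worst = 0.0
for _ in range(200):
    S = rng.choice(m, size=k, replace=False)
    GS = G[:, S]
    dev = np.linalg.norm(GS.T @ GS - np.eye(k), ord=2)  # operator norm
    worst = max(worst, dev)

print(f"d = {d}, worst sampled deviation = {worst:.3f} (want <= delta = {delta})")
```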

Summary

To recover $x \in \mathbb{R}^m$ (information-theoretically): if $x$ is $k$-sparse, then a Gaussian $G \sim N(0, \frac{1}{d})^{d \times m}$ with $d \gtrsim \frac{k \log m}{\delta^2}$ has the $(2k,\delta)$-RIP with high probability, hence the $2k$-NSP, hence distinguishes $k$-sparse vectors.


Next Time:
Singular vectors of $G \sim N(0,1)^{d \times m}$ (singular value decomposition):

$$G = U\Sigma V^T = \sum_{i=1}^{d} \sigma_i u_i v_i^T$$

with singular vectors $u_i(G), v_i(G)$ and singular values $\sigma_d \leq \dots \leq \sigma_1$.

These are only well-defined (up to sign) if the singular values are distinct.

Theorem
The singular values of G are almost surely distinct.
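A quick numpy check consistent with this theorem (one random draw; numpy returns the singular values in decreasing order):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 5, 8

G = rng.normal(size=(d, m))  # iid N(0, 1) entries
U, sigma, Vt = np.linalg.svd(G, full_matrices=False)

print(sigma)  # decreasing order
# Almost surely, every gap is strictly positive, i.e. the values are distinct:
assert np.all(np.diff(sigma) < 0)
```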

Review

#flashcards/math/rmt

Remark

Note that knowing that I can design $G$ (called the sensing matrix) to recover sparse $x$ is equivalent to {1||knowing that there is a basis $A$ in which $x$ is sparse} (thus, WLOG, we assume $x$ is {1||sparse})

Demonstration

TODO

Created 2025-09-09 ֍ Last Modified 2025-09-11