Random Matrix Lecture 05

[[lecture-data]]

Info

```dataviewjs
let current = dv.current();
let previous = current.previous_lecture;

// get notes that list this lecture as their previous lecture
let pages = dv.pages("")
	.where(p => p.previous_lecture)
	.where(p => String(p.previous_lecture)
		.includes(dv.current().file.link));

let output = `:LiArrowBigLeftDash: ${previous || "No Previous Lecture"} | `;

if (pages.length > 0) {
	output += pages
		.map(p => p.file.link).join(", ");
} else {
	output += 'No Next Lecture';
}

output += ' :LiArrowBigRightDash:';

dv.el('div', output);
```
 

Class Notes

Pages 27-30

Important

  • HW 1 posted online
  • project topics to be chosen with HW 2
  • This week only: office hours Wednesday at 3pm

2. Rectangular Matrices

The geometric method for rectangular matrices will come later, in the notes, or not at all.

2.5 Application: Compressed Sensing

An application of the eigenvalue bounds for Gaussian random matrices.

Let $x \in \mathbb{R}^m$ and let $A \in \mathbb{R}^{d \times m}$. The goal of compressed sensing is to recover $x$ from the measurement(s) / observation(s) $y = Ax \in \mathbb{R}^d$.

  • $y$ is some linear projection/measurement of the vector $x$

Example

Medical imaging (eg, MRI). In this case, we get to choose $A$.

Question

How many measurements ($d$) are needed to recover $x$?

Answer

In general, we need $A$ to be injective (ie, $d \geq m$).

BUT if $x$ has some sort of structure, then compressed sensing wants to make $d$ much smaller.

Question

What kind of structure do we need for $x$?

We need $x$ to be sparse.

We want to exploit this sparsity to take $d \ll m$.

Remark

Note that knowing that I can design $A$ (called the sensing matrix) to recover sparse $x$ is equivalent to knowing that I can do so whenever $x$ is sparse in some known basis (thus, WLOG, we assume $x$ is sparse).

Demonstration

  • Assume that $x$ is sparse in some known basis (that we choose). ie, there is some invertible $B$ such that $z = Bx$ is sparse. (it is important that $B$ is known, as it encodes our prior assumptions about $x$)
  • Then we have $y = (AB)x = A(Bx) = Az$.
  • Assuming I have designed my $A$ to recover (sparse) $z$, I can then recover $x = B^{-1}z$

Suppose I have some $A$ that recovers sparse vectors. Then, using $AB$ as the sensing matrix, I can also recover any $x$ that is sparse in the basis $B$.
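A minimal numerical sketch of this reduction, assuming numpy and a Gaussian sensing matrix; the names `A`, `B`, `z` mirror the demonstration above and are illustrative rather than prescribed by the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, k = 50, 20, 3

# hypothetical known invertible basis change B: z = Bx is sparse
B = rng.standard_normal((m, m))
z = np.zeros(m)
z[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
x = np.linalg.solve(B, z)  # x = B^{-1} z: dense in coordinates, sparse in basis B

A = rng.standard_normal((d, m)) / np.sqrt(d)  # sensing matrix designed for sparse inputs
y = (A @ B) @ x                               # measure x with the sensing matrix AB
assert np.allclose(y, A @ z)                  # y = (AB)x = A(Bx) = Az
# any procedure that recovers sparse z from y = Az then yields x = B^{-1} z
```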

2.5.1 Null Space and Restricted Isometry Properties

Question

  1. When does $y = Ax$ uniquely determine $x$ for sparse $x$? (when is $A$ injective on sparse vectors?)
  • This is an existence question: is there a reverse mapping?
  2. How do we actually recover $x$ (quickly)?
  • This is an algorithm question: what algorithm do we want?
  • Not on topic for this class / we will not discuss

Sparsity

We denote the sparsity of a vector $x$ by $\lVert x \rVert_{0} := \lvert \{ i : x_{i} \neq 0 \} \rvert$. If $\lVert x \rVert_{0} \leq k$, we say that $x$ is $k$-sparse.

NOTE

This is not a norm; it is not homogeneous: $\lVert \lambda x \rVert_{0} = \lVert x \rVert_{0}$ for any $\lambda \neq 0$.

see [[sparsity]]
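A small numpy illustration of both points; the helper `sparsity` is a hypothetical name, not course notation.

```python
import numpy as np

def sparsity(x):
    """The "l0 norm" ||x||_0: the number of nonzero entries of x."""
    return int(np.count_nonzero(x))

x = np.array([0.0, 3.0, 0.0, -1.0])
print(sparsity(x))       # 2, so x is 2-sparse
print(sparsity(5 * x))   # still 2: ||5x||_0 != 5 ||x||_0, so ||.||_0 is not homogeneous
```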

Distinguishes $k$-sparse vectors

$A$ distinguishes $k$-sparse vectors if $Ax_{1} \neq Ax_{2}$ whenever $x_{1} \neq x_{2}$ are both $k$-sparse; ie $A$ is injective on $\{ x : \lVert x \rVert_{0} \leq k \}$.

see [[distinguishes sparse vectors]]

If such an $A$ exists, then yes, a way to recover $x$ exists! Now, we want a checkable criterion for such an $A$.
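To make the existence point concrete: if $A$ distinguishes $k$-sparse vectors, exhaustive search over all size-$k$ supports is already a reverse mapping, just an exponentially slow one. A numpy sketch; `brute_force_recover` is a hypothetical helper, not one of the fast algorithms alluded to above.

```python
import numpy as np
from itertools import combinations

def brute_force_recover(A, y, k, tol=1e-8):
    """Find a k-sparse x with Ax = y by trying every size-k support."""
    d, m = A.shape
    for S in combinations(range(m), k):
        idx = list(S)
        z, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)  # best fit on support S
        if np.linalg.norm(A[:, idx] @ z - y) < tol:        # consistent with y
            x = np.zeros(m)
            x[idx] = z
            return x  # unique if A distinguishes k-sparse vectors
    return None

rng = np.random.default_rng(1)
d, m, k = 10, 20, 2
A = rng.standard_normal((d, m)) / np.sqrt(d)
x = np.zeros(m)
x[[3, 11]] = [1.0, -2.0]
assert np.allclose(brute_force_recover(A, A @ x, k), x)
```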

$k$-nullspace property

$A$ has the $k$-nullspace property (NSP) if for all $v \in \ker(A) \setminus \{ 0 \}$, $\lVert v \rVert_{0} > k$.

see [[nullspace property]]

Proposition

$A$ distinguishes $k$-sparse vectors if and only if $A$ has the $2k$-NSP

Proof

(via contrapositive) Suppose $A$ does not distinguish $k$-sparse vectors. Then there are $k$-sparse $x_{1} \neq x_{2}$ such that $Ax_{1} = Ax_{2}$. Note that $x_{1} - x_{2} \neq 0$ and $A(x_{1} - x_{2}) = 0$, but $\lVert x_{1} - x_{2} \rVert_{0} \leq 2k$. Thus $A$ does not have the $2k$-NSP.

Now, suppose $A$ does distinguish $k$-sparse vectors. Suppose $Av = 0$ and $\lVert v \rVert_{0} \leq 2k$. Then there exist $k$-sparse $x_{1}, x_{2}$ such that $v = x_{1} - x_{2}$. Then $Ax_{1} = Ax_{2}$, so $x_{1} = x_{2}$, so $v = 0$; ie $A$ has the $2k$-NSP.

$$\tag*{$\blacksquare$}$$

see [[distinguishing is equivalent to double nullspace property]]

restricted isometry property

$A$ has the $(k, \delta)$-restricted isometry property (RIP) if for all $x$ with $\lVert x \rVert_{0} \leq k$ we have $$(1-\delta)\lVert x \rVert^2 \leq \lVert Ax \rVert^2 \leq (1+\delta)\lVert x \rVert^2$$

see [[restricted isometry property]]

Restricted to $k$-sparse vectors, $A$ is approximately an isometry.
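A quick numerical probe of this, as a numpy sketch; sampling random $k$-sparse unit vectors only gives evidence for the RIP, not a certificate, since the definition quantifies over all $k$-sparse $x$.

```python
import numpy as np

def worst_rip_violation(A, k, trials=5000, seed=0):
    """Max of | ||Ax||^2 - 1 | over sampled k-sparse unit vectors x."""
    rng = np.random.default_rng(seed)
    d, m = A.shape
    worst = 0.0
    for _ in range(trials):
        S = rng.choice(m, size=k, replace=False)  # random support
        x = np.zeros(m)
        x[S] = rng.standard_normal(k)
        x /= np.linalg.norm(x)                    # k-sparse unit vector
        worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return worst  # the smallest delta consistent with the sampled vectors

rng = np.random.default_rng(1)
d, m, k = 200, 400, 5
G = rng.standard_normal((d, m)) / np.sqrt(d)  # the Gaussian model of the next subsection
print(worst_rip_violation(G, k))              # typically well below 1
```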

Theorem

If $A$ has the $(k, \delta)$-RIP with $\delta < 1$, then $A$ has the $k$-NSP

This follows immediately from the definition: if $x \neq 0$ is $k$-sparse, then $\lVert Ax \rVert^2 \geq (1-\delta)\lVert x \rVert^2 > 0$, so $x \notin \ker(A)$.

See [[restricted isometry implies nullspace property]]

NOTE

The RIP is more useful than the NSP for algorithms, especially when noise is added to $y$.

2.5.2 Random Sensing Matrices

Theorem

Suppose $\delta \in (0, 1)$, $k \in \mathbb{N}$, and $d \geq \frac{64}{\delta^2} k \log m$. Let $G \sim {\cal N}\left( 0, \frac{1}{d} \right)^{\otimes d \times m}$. Then $$\mathbb{P}[G \text{ is } (k, \delta)\text{-RIP}] \geq 1 - \frac{2}{m^k}$$

(We want to reduce the number of measurements required: $d \asymp k \log m$ rather than $d \geq m$.)

Proof

Suppose $S \subseteq [m]$ with $\lvert S \rvert = k$. Define

  1. $\hat{x}^{(S)} \in \mathbb{R}^k$ as the restriction of $x$ to indices in $S$

  2. $G^{(S)} \in \mathbb{R}^{d \times k}$ as the restriction of $G$ to columns with indices in $S$

If $\lVert x \rVert_{0} \leq k$ and all non-zero indices are in $S$, ie $\operatorname{supp}(x) \subseteq S$, then we have $Gx = G^{(S)}\hat{x}^{(S)}$. In this case, the RIP is equivalent to requiring that (for all $S$ and all $x$ with $\operatorname{supp}(x) \subseteq S$) we have $$(1-\delta)\lVert \hat{x}^{(S)} \rVert^2 \leq \lVert G^{(S)}\hat{x}^{(S)} \rVert^2 \leq (1+\delta)\lVert \hat{x}^{(S)} \rVert^2$$ Now, let $\lVert \hat{x}^{(S)} \rVert = 1$. Then $$\begin{align} (1-\delta) \leq \lVert G^{(S)}\hat{x}^{(S)} \rVert^2 \leq (1+\delta) &\iff -\delta \leq \hat{x}^{(S)T}(G^{(S)T}G^{(S)} - I_{k})\hat{x}^{(S)} \leq \delta \\ &\iff \lVert G^{(S)T}G^{(S)} - I_{k} \rVert \leq \delta \end{align}$$ where the last equivalence takes the supremum over unit vectors $\hat{x}^{(S)}$ (valid since $G^{(S)T}G^{(S)} - I_{k}$ is symmetric).

So, applying the union bound, we can see

$$\begin{align} \mathbb{P}[G \text{ not }(k,\delta)\text{-RIP}] &\leq_{\text{UB}} \sum_{S \subseteq [m], \lvert S \rvert = k} \mathbb{P}[\lVert G^{(S)T} G^{(S)} - I_{k} \rVert > \delta] \end{align}$$ Now, $\text{Law}(G^{(S)}) = {\cal N}\left( 0, \frac{1}{d} \right)^{\otimes d \times k}$. So introduce $H \sim {\cal N}(0, 1)^{\otimes k \times d}$, so that $G^{(S)T}G^{(S)}$ has the same law as $\frac{1}{d}HH^T$. Then we have $$\begin{align} \mathbb{P}[G \text{ not }(k,\delta)\text{-RIP}] &\leq {{m}\choose{k}} \mathbb{P}[\lVert HH^T - dI_{k} \rVert > \delta d] \\ (*)&\leq {m\choose{k}} \cdot 2 \exp\left( -\frac{1}{8}\min \left\{ \frac{\delta^2d^2}{4d}, \frac{\delta d}{2} \right\} \right) \end{align}$$ where we get $(*)$ from the [[high probability bound for operator norm of difference for Gaussian covariance matrix]] we saw [[Random Matrix Lecture 04#non-constructive-existence-of-nets|last time]]. And continuing the calculation, using ${m \choose k} \leq m^k$, $\min\left\{ \frac{\delta^2 d}{4}, \frac{\delta d}{2} \right\} = \frac{\delta^2 d}{4}$ (since $\delta < 1$), and $d \geq \frac{64}{\delta^2} k \log m$, we see $$\begin{align} \mathbb{P}[G \text{ not }(k,\delta)\text{-RIP}] &\leq m^k \cdot 2 \exp\left( -\frac{\delta^2}{32}d \right) \\ &= 2\exp\left( k\log m - \frac{\delta^2}{32}d \right) \\ &\leq 2\exp(-k \log m) \\ &= \frac{2}{m^k} \end{align}$$ $$\tag*{$\blacksquare$}$$

see [[dimension requirements for gaussian random matrix to be a restricted isometry with high probability]]
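A Monte Carlo sanity check of the theorem, as a numpy sketch: it spot-checks random supports rather than all ${m \choose k}$ of them, and since the constant $\frac{64}{\delta^2}$ is pessimistic, the resulting $d$ comes out large.

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, delta = 100, 2, 0.5
d = int(np.ceil(64 / delta**2 * k * np.log(m)))  # theorem's requirement (d = 2358 here)
G = rng.standard_normal((d, m)) / np.sqrt(d)     # G ~ N(0, 1/d)^{d x m}

# check ||G^(S)T G^(S) - I_k|| <= delta on random supports S
worst = 0.0
for _ in range(500):
    S = rng.choice(m, size=k, replace=False)
    GS = G[:, S]
    worst = max(worst, np.linalg.norm(GS.T @ GS - np.eye(k), ord=2))
print(worst, "should be <=", delta)  # holds w.p. >= 1 - 2/m^k over the draw of G
```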

Summary

To recover $x \in \mathbb{R}^m$ from $y = Ax$ (information theoretically)

  • in general, we need $d \geq m$
  • if $x$ is $k$-sparse, $d \gtrsim k \log m$ suffices (with high probability, for Gaussian $G$)

Next Time: Singular Vectors of $G$ (singular value decomposition)

$G = U \Sigma V^{T}$ with $U, V$ orthogonal and $\Sigma$ diagonal, with diagonal entries $\sigma_{1} \geq \sigma_{2} \geq \dots \geq 0$ (the singular values)

The singular vectors are only well-defined (up to sign) if the singular values are distinct.

Theorem

The singular values of $G$ are almost surely distinct.
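A quick numerical illustration, as a numpy sketch; distinctness shows up as a strictly positive minimal gap between consecutive singular values.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((6, 9))          # a Gaussian rectangular matrix
s = np.linalg.svd(G, compute_uv=False)   # singular values, sorted descending
print(s)
print(np.min(-np.diff(s)))               # minimal gap: positive almost surely
```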

Review

rmt

Remark

Note that knowing that I can design $A$ (called the sensing matrix) to recover sparse $x$ is equivalent to {==1||knowing that I can do so whenever $x$ is sparse in some known basis==} (thus, WLOG, we assume $x$ is {1||sparse})

Demonstration

  • Assume that $x$ is sparse in some known basis (that we choose). ie, there is some invertible $B$ such that $z = Bx$ is sparse. (it is important that $B$ is known, as it encodes our prior assumptions about $x$)
  • Then we have $y = (AB)x = A(Bx) = Az$.
  • Assuming I have designed my $A$ to recover (sparse) $z$, I can then recover $x = B^{-1}z$

TODO

  • Finish cleaning ⏳ 2025-09-10 ✅ 2025-09-10
  • Finish linking ⏳ 2025-09-10 ✅ 2025-09-10
  • [-] finish adding flashcards clean ⏳ 2025-09-12 ❌ 2025-12-08
    • Flashcards up to RIP so far (non inclusive)
```datacorejsx
const { dateTime } = await cJS()

return function View() {
	const file = dc.useCurrentFile();
	return <p class="dv-modified">Created {dateTime.getCreated(file)}     ֍     Last Modified {dateTime.getLastMod(file)}</p>
}
```