Lecture 11

[[lecture-data]]

2024-09-20

Readings

  • a
 

# 2. Chapter 2

Claim: $A$ is unitary if and only if there exists a skew Hermitian $B$ such that $A = e^B$.
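Both directions of this claim are easy to sanity-check numerically. A sketch using `scipy.linalg.expm`; the random $3\times 3$ matrix and seed here are arbitrary examples, not from the lecture:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Forward direction: build a skew-Hermitian B (B* = -B) and check e^B is unitary.
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = M - M.conj().T                             # (M - M*)* = -(M - M*): skew-Hermitian
A = expm(B)
assert np.allclose(A @ A.conj().T, np.eye(3))  # A A* = I, so A is unitary

# Reverse direction: diagonalize the unitary (hence normal) A = W Omega W*,
# read off the angles of its unimodular eigenvalues, and rebuild a
# skew-Hermitian logarithm B2 = W diag(i*delta_i) W*.
omega, W = np.linalg.eig(A)
assert np.allclose(np.abs(omega), 1)           # eigenvalues have modulus 1
Sigma = np.diag(1j * np.angle(omega))          # Sigma = diag(i*delta_i)
B2 = W @ Sigma @ W.conj().T
assert np.allclose(B2.conj().T, -B2)           # B2 is skew-Hermitian
assert np.allclose(expm(B2), A)                # e^{B2} recovers A
```

(With distinct eigenvalues, `np.linalg.eig` of a normal matrix returns numerically orthonormal eigenvectors, so `W` plays the role of the unitary in the proof.)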

Skew Hermitian means that $B^* = -B$. In particular, $B$ is normal. Let's say that we unitarily diagonalize $B$, so we can write $B = UDU^*$ where $U$ is unitary and $D$ is diagonal. Then

$$\begin{aligned} B^* &= -B \\ (UDU^*)^* &= -UDU^* \\ UD^*U^* &= -UDU^* \\ \implies D^* &= -D \end{aligned}$$
And this implies that the elements of $D$ are purely imaginary.

>[!def] Matrix exponential
>
>The **matrix exponential** is
>$$e^B = \sum_{i=0}^\infty \frac{1}{i!} B^i$$

>[!theorem]
>
>$A\in M_{n}$ is [[Concept Wiki/unitary matrices\|unitary]] if and only if there exists a $B \in M_{n}$ [[Concept Wiki/skew hermitian]] such that $A = e^B$.
>
>(see [[Concept Wiki/a matrix is unitary if and only if it equals the exponential of a skew hermitian matrix]])
>
>>[!proof]+
>>$\impliedby$ Suppose $B$ is [[Concept Wiki/skew hermitian]] and $A = e^B$. $B$ is [[Concept Wiki/normal matrix\|normal]], so we can write $B=UDU^*$ where $U$ is [[Concept Wiki/unitary matrices\|unitary]] and $D$ is diagonal. Recall that the elements of the diagonal of $D$ are purely imaginary; that is, there exist $\theta_{1},\dots,\theta_{n} \in \mathbb{R}$ such that
>>$$D = \begin{bmatrix} i\theta_{1} & 0 & \dots & 0 \\ 0 & i\theta_{2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \dots & 0 & i\theta_{n} \end{bmatrix}$$
>>Then we have
>>$$\begin{aligned} A &= e^B \\ &= e^{UDU^*} \\ &= U e^D U^* \\ &= U\begin{bmatrix} e^{i\theta_{1}} & 0 & \dots & 0 \\ 0 & e^{i\theta_{2}} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \dots & 0 & e^{i\theta_{n}}\end{bmatrix}U^* \end{aligned}$$
>>where each of the matrices in the last product is unitary! So $A$ is unitary, since [[Concept Wiki/the product of unitary matrices is unitary]].
>>
>>$\implies$ Now, suppose $A$ is [[Concept Wiki/unitary matrices\|unitary]]. So $A$ is [[Concept Wiki/normal matrix\|normal]], say $A = W\Omega W^*$.
>>Then
>>$$\Omega = \begin{bmatrix} \omega_{1} & 0 & \dots & 0 \\ 0 & \omega_{2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \dots & 0 & \omega_{n} \end{bmatrix}$$
>>And since $A$ is unitary, it is an [[isometry]], which implies that each of the [[Concept Wiki/eigenvalue\|eigenvalues]] of $A$ has modulus 1 ($\lvert \omega_{i} \rvert = 1 \;\;\forall\;i$). Thus, we can express each $\omega_{i}$ by the angle $\delta_{i} \in \mathbb{R}$ it makes with the real axis: $\omega_{i} = e^{i\delta_{i}}$. So
>>$$\Omega = \begin{bmatrix} e^{i\delta_{1}} & 0 & \dots & 0 \\ 0 & e^{i\delta_{2}} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \dots & 0 & e^{i\delta_{n}} \end{bmatrix}$$
>>Define
>>$$\Sigma = \begin{bmatrix} i\delta_{1} & 0 & \dots & 0 \\ 0 & i\delta_{2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \dots & 0 & i\delta_{n} \end{bmatrix}$$
>>Then $B = W\Sigma W^*$ is [[Concept Wiki/skew hermitian]]! Then
>>$$e^B = e^{W\Sigma W^*} = We^\Sigma W^* = W\Omega W^* = A$$

# 3. Chapter 3 : Jordan Decomposition

Classifying matrices according to equivalence classes of [[Concept Wiki/similar matrices\|similarity]].

>[!def] Jordan Block
>
>A $k\times k$ **Jordan block** (denoted $J_{k}(\lambda)$) is
>$$J_{k}(\lambda) = \begin{bmatrix} \lambda & 1 & 0 & \dots & 0 \\ 0 & \lambda & 1 & \ddots & \vdots \\ \vdots & \ddots & \lambda & 1 & 0 \\ \vdots & & \ddots & \lambda & 1 \\ 0 & \dots & \dots & 0 & \lambda \end{bmatrix}$$
>This has [[Concept Wiki/eigenvalue]] $\lambda$ with [[Concept Wiki/spectrum\|algebraic multiplicity]] $k$. The eigenvectors $x : [\lambda I - J_{k}(\lambda)]x = 0$ can be found to be $x_{1} = \text{anything}$ and $x_{2}=\dots=x_{k}=0$, so the [[Concept Wiki/eigenvalue\|eigenvalue]] has [[Concept Wiki/spectrum\|geometric multiplicity]] 1.
>
>(see [[Concept Wiki/Jordan block]])

When the eigenvalue is zero, raising the block to the power $k$ (the order of the block) gives the zero matrix: $J_{k}(0)^k = 0$, so $J_{k}(0)$ is nilpotent.
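This nilpotency is easy to check numerically. A minimal sketch: `np.eye(k, k=1)` builds $J_k(0)$, which is zero except for ones on the first superdiagonal (the block size $k=4$ is an arbitrary example):

```python
import numpy as np

k = 4
J = np.eye(k, k=1)                 # J_4(0): ones on the first superdiagonal
P = np.linalg.matrix_power

# Each power shifts the band of ones one diagonal further toward the corner...
assert np.allclose(P(J, 2), np.eye(k, k=2))
assert np.allclose(P(J, 3), np.eye(k, k=3))
# ...until the k-th power is the zero matrix: J_k(0) is nilpotent of index k.
assert np.allclose(P(J, k), np.zeros((k, k)))
```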
We can see that
$$[J_{k}(0)^\ell]_{ij} = \begin{cases} 1 & \text{if } \ell = j-i \\ 0 & \text{otherwise} \end{cases}$$
and $\text{rank}[J_{k}(0)^\ell] = \max(k-\ell,\, 0)$.

>[!def] Jordan Matrix
>
>A **Jordan Matrix** is the direct sum of [[Concept Wiki/Jordan block]]s:
>$$J = J_{n_{1}}(\lambda_{1}) \oplus J_{n_{2}}(\lambda_{2}) \oplus \dots \oplus J_{n_{s}}(\lambda_{s}) = \begin{bmatrix} J_{n_{1}}(\lambda_{1}) & 0 & \dots & 0 \\ 0 & J_{n_{2}}(\lambda_{2}) & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \dots & 0 & J_{n_{s}}(\lambda_{s}) \end{bmatrix}$$
>"$= \oplus_{i=1}^s J_{n_{i}}(\lambda_{i})$", with the $\lambda_{i}$ not necessarily distinct.
>
>This is a block diagonal matrix with the Jordan blocks along the diagonal.
>
>(see [[Concept Wiki/Jordan Matrix]])

>[!theorem]
>
>For each $A \in M_{n}$ there exists a [[Concept Wiki/Jordan Matrix]] $J$ such that $A$ is similar to $J$:
>$$A = SJS^{-1}$$
>for some invertible $S \in M_{n}$. Further, $J$ is unique up to reordering its [[Concept Wiki/Jordan block]]s.
>
>>[!example]
>>$A \in M_{3}$ with all eigenvalues $\pi$ has 3 possible similarity classes, one for each way of partitioning 3 into Jordan block sizes: $J_{3}(\pi)$, $J_{2}(\pi) \oplus J_{1}(\pi)$, and $J_{1}(\pi) \oplus J_{1}(\pi) \oplus J_{1}(\pi)$.
>
>(see [[Concept Wiki/Jordan's Theorem]])

>[!note]
>
>A matrix is [[Concept Wiki/diagonalizable]] if and only if each of its Jordan blocks is of size 1.
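The rank formula and the block-counting view of geometric multiplicity can both be checked numerically. A sketch, where the block size $k=5$ and the eigenvalue $7$ are arbitrary examples:

```python
import numpy as np
from scipy.linalg import block_diag

k = 5
J = np.eye(k, k=1)                 # J_5(0)

# rank[J_k(0)^ell] = max(k - ell, 0), including past the nilpotency index
for ell in range(k + 2):
    r = np.linalg.matrix_rank(np.linalg.matrix_power(J, ell))
    assert r == max(k - ell, 0)

# A Jordan matrix J_2(7) (+) J_1(7): eigenvalue 7 has algebraic multiplicity 3
# but geometric multiplicity 2 (one eigenvector per block), so it is NOT
# diagonalizable -- its blocks are not all of size 1.
Jm = block_diag(np.array([[7.0, 1.0], [0.0, 7.0]]), np.array([[7.0]]))
geo = 3 - np.linalg.matrix_rank(7.0 * np.eye(3) - Jm)   # dim of the eigenspace
assert geo == 2
```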