Lecture 05

[[lecture-data]]

2024-09-06

Readings

  • a
 

1. Eigenvalues and Similarity

Last time, we talked about diagonalization $A = PDP^{-1}$: $A$ has the same eigenvalues as the diagonal matrix $D$, which are easy to find. Not only are the diagonal entries of $D$ the eigenvalues, but the columns of $P$ are the eigenvectors!

In particular, the change of basis from $A$ to $D$ is through the eigenvectors. From the perspective of the eigenvector basis, $A$ is just a diagonal operator (it breaks a vector into individual coordinates and scales along those dimensions).

We can call diagonalization an eigenvalue-eigenvector decomposition.
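As a quick sanity check on the decomposition, here is a minimal numpy sketch (the $2\times 2$ matrix is made up for illustration and assumed diagonalizable):

```python
import numpy as np

# An example matrix (chosen arbitrarily; happens to be diagonalizable).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors; entries of w are the eigenvalues.
w, P = np.linalg.eig(A)
D = np.diag(w)

# A = P D P^{-1}: in eigenvector coordinates, A acts diagonally.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```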

Suppose we have a system of differential equations like $\mathbf{x}' = A\mathbf{x}$, which might be tricky to solve if each derivative is in terms of the other functions. But if $A = PDP^{-1}$ is diagonalizable, it may be easier to solve this system by changing variables to $\mathbf{y} = P^{-1}\mathbf{x}$, which satisfies $\mathbf{y}' = D\mathbf{y}$ and is much easier to deal with.
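A hedged numpy sketch of this change of variables (the matrix and initial condition are made up): in eigenvector coordinates the system decouples into scalar equations $y_i' = \lambda_i y_i$, each solved by an exponential.

```python
import numpy as np

# Coupled system x' = A x, with a made-up diagonalizable matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
x0 = np.array([1.0, 0.0])

# In eigenvector coordinates y = P^{-1} x, the system decouples:
# y_i' = lambda_i y_i, so y_i(t) = e^{lambda_i t} y_i(0).
w, P = np.linalg.eig(A)
y0 = np.linalg.solve(P, x0)

def x_of(t):
    # Solve the decoupled system, then map back to x coordinates.
    return (P @ (np.exp(w * t) * y0)).real

# Sanity check via finite differences: x(t) really satisfies x' = A x.
t, h = 0.5, 1e-6
deriv = (x_of(t + h) - x_of(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x_of(t), rtol=1e-4)
assert np.allclose(x_of(0.0), x0)
```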

Let $A \in M_n$ have eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$. For $1 \le k \le n$, define

$$S_{k} = \sum_{1 \le i_1 < i_2 < \dots < i_k \le n} \lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}$$

Take all ways to multiply $k$ of the eigenvalues and then add them.

$E_k$ and Principal Submatrices

Submatrices: destroy some columns and rows. We call it a principal submatrix if we get rid of the same rows as columns (for each deleted index $i$, we delete both the $i$th row and the $i$th column). Define $E_k$ to be the sum of the determinants of all $k \times k$ principal submatrices of $A$.

Proposition

If $A \in M_n$ has eigenvalues $\lambda_1, \dots, \lambda_n$, then

$$\begin{aligned} P_{A} &= \lambda^n -S_{1}\lambda^{n-1}+S_{2}\lambda^{n-2}-\dots\pm S_{n}\lambda^0 \\ &=\lambda^n -E_{1}\lambda^{n-1}+E_{2}\lambda^{n-2}-\dots\pm E_{n}\lambda^0 \end{aligned}$$

It’s easy to see why the first equality holds. If we factor the polynomial into $(\lambda-\lambda_1)(\lambda-\lambda_2)\cdots(\lambda-\lambda_n)$ and expand out, we get exactly the $\pm S_k$ as the coefficients. (When we select $k$ of the $-\lambda_i$ terms to get a $\lambda^{n-k}$ term, we get exactly $(-1)^k S_k$.)

Exercise

Prove the second equality via induction with the Laplace expansion.
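Not a proof, but the proposition can be spot-checked numerically. A sketch (numpy plus itertools; the symmetric $3\times 3$ matrix is arbitrary): compute the $S_k$ from the eigenvalues, the $E_k$ from principal minors, and compare both against the characteristic polynomial's coefficients.

```python
from itertools import combinations

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
n = A.shape[0]
eigs = np.linalg.eigvals(A)

# S_k: sum over all ways to multiply k of the eigenvalues.
S = [sum(np.prod([eigs[i] for i in idx]) for idx in combinations(range(n), k))
     for k in range(1, n + 1)]

# E_k: sum of determinants of all k x k principal submatrices
# (keep the same k rows and columns, delete the rest).
E = [sum(np.linalg.det(A[np.ix_(idx, idx)]) for idx in combinations(range(n), k))
     for k in range(1, n + 1)]

# np.poly gives the monic characteristic polynomial's coefficients,
# highest degree first: [1, -S_1, +S_2, -S_3, ...].
coeffs = np.poly(A)

for k in range(1, n + 1):
    assert np.isclose(S[k - 1], E[k - 1])
    assert np.isclose(coeffs[k], (-1) ** k * E[k - 1])
```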

Consequences: comparing constant terms gives $\det A = S_n = \lambda_1\lambda_2\cdots\lambda_n$. For a diagonalizable matrix, this is easy to see, but it is useful to know that it also holds for matrices that are not diagonalizable. Another perspective: when there is a 0 eigenvalue, the determinant collapses to zero and the matrix is singular. When there is no 0 eigenvalue, the matrix is invertible, which we saw last time. (see determinant)

Comparing the $\lambda^{n-1}$ coefficients gives $\operatorname{tr} A = S_1 = \lambda_1 + \dots + \lambda_n$. This implies that the traces of similar matrices are the same! So the trace is a property of the transformation, not of the particular matrix representing it, which is not necessarily obvious. (see trace)
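A quick numpy sketch of this invariance (random matrices with a fixed seed; $P$ is generically invertible): a similar matrix $B = PAP^{-1}$ shares trace, determinant, and spectrum with $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Any invertible P gives a similar matrix B = P A P^{-1}.
P = rng.standard_normal((4, 4))  # invertible with probability 1
B = P @ A @ np.linalg.inv(P)

# Similar matrices share trace, determinant, and eigenvalues.
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(B)))
```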

Coming up:

  • Seeing a relationship between the spectra of $AB$ and $BA$: they have the same nonzero eigenvalues! (even for rectangular matrices $A$ and $B$, as long as both products are defined)
  • Commutativity of $A$ and $B$ if and only if they are simultaneously diagonalizable. To get there: some background.

Multiplying partitioned matrices. Suppose $A$ is partitioned, not necessarily regularly. Suppose $B$ is partitioned conformally (its rows are partitioned in the same way as the columns of $A$).

The $(i,j)$th block of $AB$ is the submatrix

$$(AB)_{ij} = \sum_{k} A_{ik}B_{kj}$$

where $A_{ik}$ is $m_i \times n_k$ and $B_{kj}$ is $n_k \times p_j$. This is suspiciously like multiplying regular matrices with single entries! And each term in the sum is of size $m_i \times p_j$. So why does this work?

Say I want to take a single row and a single column and compute their inner product, like doing it for a “normal” matrix with single entries. We are doing the same thing, but chunk by chunk.
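The blockwise formula can be checked directly. A sketch (numpy; the sizes and the irregular partition points are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
# A is 5x5, B is 5x4; B's rows are split the same way as A's columns.
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 4))
row_cuts, mid_cuts, col_cuts = [2], [3], [1]  # irregular partition

A_blocks = [np.hsplit(r, mid_cuts) for r in np.vsplit(A, row_cuts)]
B_blocks = [np.hsplit(r, col_cuts) for r in np.vsplit(B, mid_cuts)]

# (AB)_{ij} = sum_k A_{ik} B_{kj}, just like scalar entries.
C_blocks = [[sum(A_blocks[i][k] @ B_blocks[k][j]
                 for k in range(len(B_blocks)))
             for j in range(len(B_blocks[0]))]
            for i in range(len(A_blocks))]

# Reassembling the blocks recovers the ordinary product.
assert np.allclose(np.block(C_blocks), A @ B)
```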

Permutation Matrices

Permutation Matrix

A permutation matrix is a square matrix with exactly one 1 in every row, exactly one 1 in every column, and zeroes everywhere else.

(Multiplying $PA$ reorders the rows of $A$, and multiplying $AP$ reorders the columns of $A$, according to the pattern of the rows or columns of $P$ respectively.)

Note that $P^{-1} = P^T$, and $PAP^T$ also reorders the diagonal entries (it permutes the rows and columns of $A$ simultaneously).

$PAP^T = PAP^{-1}$ is also a similarity transformation.

(see permutation matrix)
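A small numpy sketch of these facts (the permutation and matrix are made up):

```python
import numpy as np

# A permutation matrix: exactly one 1 in each row and column.
perm = [2, 0, 1]            # made-up permutation
P = np.eye(3)[perm]         # rows of the identity, reordered

A = np.arange(9.0).reshape(3, 3)

# P A reorders the rows of A according to perm.
assert np.allclose(P @ A, A[perm])

# P A P^T is a similarity (since P^{-1} = P^T), and it reorders
# the diagonal entries of A.
assert np.allclose(P @ P.T, np.eye(3))
assert np.allclose(np.diag(P @ A @ P.T), np.diag(A)[perm])
```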

Suppose $A$ is an $n \times n$ matrix and is block triangular (the diagonal blocks $A_{ii}$ are square, and the blocks on one side of them are all zero). Then

$$\det A = \prod_{i} \det A_{ii}$$

and applying this to $\lambda I - A$, which is block triangular in the same way, gives us the characteristic polynomial as the product $P_A = \prod_i P_{A_{ii}}$.
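A sketch checking both claims numerically (numpy; the block sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
A11 = rng.standard_normal((2, 2))
A22 = rng.standard_normal((3, 3))
A12 = rng.standard_normal((2, 3))

# Block upper triangular: zero block below the diagonal blocks.
A = np.block([[A11, A12],
              [np.zeros((3, 2)), A22]])

# det A is the product of the diagonal blocks' determinants ...
assert np.isclose(np.linalg.det(A),
                  np.linalg.det(A11) * np.linalg.det(A22))

# ... and the spectrum is the union of the blocks' spectra.
eigs = np.sort_complex(np.linalg.eigvals(A))
block_eigs = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(A11), np.linalg.eigvals(A22)]))
assert np.allclose(eigs, block_eigs)
```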

Next time: the eigenspaces of $AB$ and $BA$ for each nonzero eigenvalue have the same dimensions; $AB$ and $BA$ have the same nonzero eigenvalues.