[[lecture-data]] 2024-08-28
Readings
- a
0. Chapter 0
Suppose we have $A \in \mathbb{R}^{m \times n}$, viewed as a linear map $A : \mathbb{R}^{n} \to \mathbb{R}^{m}$.
Nullspace (Kernel)
$$\ker(A) = \{x \in \mathbb{R}^{n} : Ax = 0\}$$
Range (of $A$)
The range of $A$ is $\operatorname{range}(A) = \{Ax : x \in \mathbb{R}^{n}\}$.
This is also a subspace (of $\mathbb{R}^{m}$)! (think about it), and its dimension is called the rank
Note - often people confuse the range with the co-domain (the codomain is the space where function values live, but the function values do not necessarily take on all values possible in the codomain)
This can be thought of as the image of $A$
(see range of a matrix)
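A quick sanity check of these definitions with sympy; the matrix here is an assumed example, not one from the lecture:

```python
# Sketch: computing ker(A) and rank(A) for a hypothetical example matrix.
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])  # assumed 2x3 example; second row = 2 * first row

null_basis = A.nullspace()  # basis for ker(A) = {x : Ax = 0}, a subspace of R^3
rank = A.rank()             # dim(range(A)), the rank

# rank-nullity: rank + dim(ker(A)) = number of columns
assert rank + len(null_basis) == A.cols
```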
Proposition
Suppose that $A$ is as above; $A \in \mathbb{R}^{m \times n}$. The columns of $A$ are linearly independent if and only if $\ker(A) = \{0\}$, if and only if $A$ is one-to-one.
(see linearly independent columns equivalencies)
Proof (informal): Suppose $Ax = 0$. By looking at the multiplication as a linear combination of the columns,
$$Ax = x_{1}a_{1} + x_{2}a_{2} + \dots + x_{n}a_{n}$$
If $Ax = 0$, this implies that we must have $x = 0$ if and only if the columns are linearly independent! (think about it)
(1-1) If the nullspace is only the zero vector, then there are no other vectors that get sent to the zero vector. From the first result, if the columns of $A$ are linearly independent, then every linear combination of them (multiplying by some $x$!) has unique coefficients, so distinct inputs give distinct outputs.
Suppose $Ax = Ay$, i.e. $A(x - y) = 0$. Since the nullspace is just the zero vector, $x = y$, which shows that $A$ is one-to-one.
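The proposition's equivalent conditions can be checked numerically; a sketch with sympy, using an assumed example matrix:

```python
import sympy as sp

# Hypothetical 3x2 matrix with linearly independent columns (an assumed example)
A = sp.Matrix([[1, 0],
               [0, 1],
               [1, 1]])

# The proposition's equivalent conditions:
cols_independent = (A.rank() == A.cols)    # columns linearly independent
trivial_nullspace = (A.nullspace() == [])  # ker(A) = {0}
assert cols_independent and trivial_nullspace

# One-to-one: distinct inputs give distinct outputs when the nullspace is trivial
x = sp.Matrix([1, 2])
y = sp.Matrix([3, -1])
assert A * x != A * y
```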
Linear systems Suppose we have a system of equations
$$\begin{aligned} a_{1,1}x_{1}+a_{1,2}x_{2}+\dots+a_{1,n}x_{n} &= b_{1} \\ & \vdots \\ a_{m,1}x_{1}+a_{m,2}x_{2}+\dots+a_{m,n}x_{n} &= b_{m} \end{aligned}$$
We can write this as $Ax = b$ where $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^{n}$, $b \in \mathbb{R}^{m}$. We can also write it as the augmented matrix $[A \mid b]$.
We can solve this system by performing row operations!
Row operations
- swap two rows
- multiply a row by a nonzero scalar
- add a multiple of one row to another
These operations do not affect the solution set.
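To illustrate that row operations preserve the solution set, here is a sketch using sympy's `elementary_row_op` on an assumed 2×2 system:

```python
import sympy as sp

# Hypothetical system: 2x + y = 1, x + 3y = 2
A = sp.Matrix([[2, 1], [1, 3]])
b = sp.Matrix([1, 2])
x_before = A.solve(b)  # unique solution (A is invertible)

# apply each row operation to the augmented matrix [A | b]
aug = A.row_join(b)
aug = aug.elementary_row_op('n<->m', row1=0, row2=1)        # swap rows
aug = aug.elementary_row_op('n->kn', row=0, k=3)            # scale a row by 3
aug = aug.elementary_row_op('n->n+km', row=1, k=1, row2=0)  # add a row to another

x_after = aug[:, :2].solve(aug[:, 2])
assert x_before == x_after  # the solution is unchanged
```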
Reduced Row Echelon Form (rref)
This is the (unique!) row-equivalent matrix such that
The leading entry of every row is $1$ (the pivot), unless the row is all zeroes
Every pivot has zeroes above and below it in its column
Pivots move strictly to the right as you go down, and rows of zeroes are at the bottom
Example
$$\begin{bmatrix} 1 & 5 & 0 & 4 \\ 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
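Assuming the example above is the 3×4 matrix with rows $(1, 5, 0, 4)$, $(0, 0, 1, 5)$, $(0, 0, 0, 0)$, sympy's `rref()` confirms it is already in reduced row echelon form:

```python
import sympy as sp

A = sp.Matrix([[1, 5, 0, 4],
               [0, 0, 1, 5],
               [0, 0, 0, 0]])

# rref() returns the reduced matrix and the pivot column indices
R, pivots = A.rref()
assert R == A            # already in rref
assert pivots == (0, 2)  # pivots in columns 1 and 3; columns 2 and 4 are free
```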
Remark
Matrices form equivalence classes under row reduction, and every equivalence class has a special member that is the rref form for all those matrices. (proof later, for now just think about it)
We can read off solutions in terms of pivot and free variables: set each free variable to $1$ (and the other free variables to $0$) in turn, and solve for the pivot variables.
Special case: Suppose that $A$ is a square matrix. Then we can row reduce the augmented matrix $[A \mid b]$. And since row reduction gives an equivalence relation between matrices, the solution set is the same for the original and the reduced system! So this is very nice to have.
This is also very nice for solving systems in different scenarios. Suppose we want to find the (non-simultaneous) solutions to the systems $Ax = b_{1}, Ax = b_{2}, \dots$; then we can simply row reduce the single augmented matrix $[A \mid b_{1} \; b_{2} \; \cdots]$ to find all of the solutions at once.
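A sketch of solving several systems with one reduction (the matrix and right-hand sides are assumed examples):

```python
import sympy as sp

A = sp.Matrix([[2, 1], [1, 3]])  # hypothetical invertible system matrix
b1 = sp.Matrix([1, 0])
b2 = sp.Matrix([0, 1])

# augment with both right-hand sides and row reduce once
aug = A.row_join(b1).row_join(b2)
R, _ = aug.rref()        # R = [I | x1 x2] since A is invertible

x1 = R[:, 2]             # solution of A x = b1
x2 = R[:, 3]             # solution of A x = b2
assert A * x1 == b1 and A * x2 == b2
```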
So how do we know when we are in this nice situation (a unique solution)?
Determinant
For $A \in \mathbb{R}^{n \times n}$, define the determinant of $A$ as $$\det(A) = \sum_{\sigma \in S_{n}} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}$$ Here $\sigma \in S_{n}$ are permutations of $\{1, \dots, n\}$ (that's what the $S_{n}$ notation means).
Transversal - a set of $n$ entries in a matrix such that no two entries are in the same row or column. The transversal product is the product of those entries. Generally, there are $n!$ transversals in a matrix of order $n$. This is what the definition above is doing: it is summing all of the (signed) transversal products.
Note
This is an AWFUL way to calculate the determinant, but it is a nice definition
(see determinant)
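Despite being awful to compute with, the definition translates directly into code. A sketch of the permutation-sum formula in plain Python (function names are mine):

```python
# Direct (factorial-time) implementation of the permutation-sum definition.
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation given as a tuple, by counting inversions."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    n = len(A)
    # Sum over all n! permutations sigma of sgn(sigma) * prod_i A[i][sigma(i)]:
    # each term is a signed transversal product.
    return sum(sign(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# det [[1, 2], [3, 4]] = 1*4 - 2*3 = -2
assert det_leibniz([[1, 2], [3, 4]]) == -2
```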
Laplace Expansion
it turns out that, expanding along any row $i$ (or column $j$),
$$\begin{aligned} \det(A) &= \sum_{k=1}^n (-1)^{i+k} a_{ik} M_{ik} \\ &= \sum_{k=1}^n (-1)^{j+k} a_{kj} M_{kj} \end{aligned}$$
Where $M_{ik}$ is the minor: the determinant of the matrix without the $i$th row and without the $k$th column (deleted simultaneously).
This is also a bad way to calculate because it is also factorial time!
(see laplace expansion)
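The Laplace expansion along the first row, as a direct recursive sketch (still factorial time, as noted; the function name is mine):

```python
def det_laplace(A):
    """Cofactor expansion along the first row (row i = 0, so (-1)^(i+k) = (-1)^k)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # Minor M_{0k}: delete row 0 and column k simultaneously
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        total += (-1) ** k * A[0][k] * det_laplace(minor)
    return total

assert det_laplace([[1, 2], [3, 4]]) == -2
```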
Facts
if we multiply a single row of $A$ by a scalar $c$, then $\det(A)$ is multiplied by $c$ (can be done algebraically from the definition)
Also, $\det(T) = t_{1,1} t_{2,2} \cdots t_{n,n}$, where $T$ indicates some triangular matrix, and the $t_{i,i}$ are the diagonal entries.
Thus if there is a row (or column) of zeros, then the determinant is $0$.
(see determinant)
So how do we find the determinant, if the first two ways were bad?
Note
Suppose we have $\det(A)$. What is the effect of each row operation on the determinant?
- if we swap two rows, we multiply the determinant by $-1$
- if we multiply a row by a scalar $c$, we multiply the determinant by $c$
- if we add one row to another, we multiply by $1$ (no change)
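These three effects can be verified with sympy on an assumed 2×2 example:

```python
import sympy as sp

A = sp.Matrix([[2, 1], [1, 3]])  # hypothetical example; det(A) = 5
d = A.det()

# swap two rows: determinant flips sign
S = sp.Matrix.vstack(A.row(1), A.row(0))
assert S.det() == -d

# multiply a row by a scalar c: determinant is multiplied by c
M = sp.Matrix.vstack(3 * A.row(0), A.row(1))
assert M.det() == 3 * d

# add one row to another: determinant unchanged
E = sp.Matrix.vstack(A.row(0) + A.row(1), A.row(1))
assert E.det() == d
```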
Theorem
This means that we can row reduce our matrix $A$ and keep track of the changes we have made in a multiplied constant $c$, so that $\det(A) = c \cdot \det(\operatorname{rref}(A))$. For square $A$, the rref is either $I$ or has a row of zeros.
- Case 1: rref is $I$, which has $\det(I) = 1$, and we multiply by the constant we have collected to get $\det(A) = c$
- Case 2: rref has a row (or column) of zeros; then $\det(A) = 0$
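The theorem gives a cubic-time algorithm: reduce to triangular form while tracking the sign changes from swaps. A sketch in plain Python with exact fractions (names are mine), covering both cases:

```python
from fractions import Fraction

def det_by_elimination(A):
    """O(n^3) determinant: reduce to upper triangular, tracking row swaps."""
    M = [[Fraction(x) for x in row] for row in A]
    n = len(M)
    det = Fraction(1)
    for col in range(n):
        # find a pivot; if none exists, we are in Case 2 and det = 0
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det  # each row swap flips the sign
        det *= M[col][col]
        # eliminate below the pivot (adding multiples of a row: no change)
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return det

assert det_by_elimination([[2, 1], [1, 3]]) == 5
assert det_by_elimination([[1, 2], [2, 4]]) == 0  # Case 2: singular
```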