Lecture 02

[[lecture-data]]

2024-08-28

Readings

0. Chapter 0

Suppose we have $A \in M_{m,n}(F)$.

Nullspace (Kernel)

The nullspace of $A$ is $\{x \in F^n : Ax = 0\}$

This is a subspace of $F^n$, and its dimension is called the nullity

(see nullspace)

Range (of A)

The range of $A$ is $\{Ax : x \in F^n\}$.

This is also a subspace, but of $F^m$ (think about it!), and its dimension is called the rank

Note

The range can be thought of as the image of $A$. People sometimes conflate the range with the codomain, but they differ: the codomain is the space where the function values live ($F^m$ here), while the function values do not necessarily take on every value possible in the codomain.

(see range of a matrix)
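A quick numerical sketch of rank and nullity (the matrix here is an arbitrary example of mine, using NumPy's `matrix_rank`):

```python
import numpy as np

# A 3x4 matrix over R; the third row is row1 + row2, so the rank is < 3
A = np.array([[1.0, 5.0, 0.0, 4.0],
              [0.0, 0.0, 1.0, 5.0],
              [1.0, 5.0, 1.0, 9.0]])

rank = np.linalg.matrix_rank(A)   # dimension of the range
nullity = A.shape[1] - rank       # rank-nullity: rank + nullity = n
print(rank, nullity)              # 2 2
```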

Proposition

Suppose that $A$ is as above, $A \in M_{m,n}(F)$. The columns of $A$ are linearly independent if and only if $\operatorname{null}(A) = \{0\}$, if and only if $A$ is one-to-one.

(see linearly independent columns equivalencies)

Proof (informal)

If $\operatorname{null}(A) = \{0\}$, then $Ax = 0 \implies x = 0$. By writing the multiplication as

$$Ax = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix} \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}^T = x_1 A_1 + \cdots + x_n A_n = 0$$

we see that $x_1 = \cdots = x_n = 0$ is forced if and only if the columns are linearly independent! (think about it)

(1-1)
If the nullspace is only the zero vector, then no other vector gets sent to the zero vector. Combined with the first result, that the columns of $A$ are linearly independent, every linear combination of the columns (multiplying by some $x$!) has unique coefficients, so distinct inputs give distinct outputs.

Suppose $Ax = Ay$, i.e. $A(x - y) = 0$. Since the nullspace is just the zero vector, this implies that $x = y$, so $A$ is one-to-one.

Linear systems
Suppose we have a system of equations

$$\begin{aligned} a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,n}x_n &= b_1 \\ &\;\;\vdots \\ a_{m,1}x_1 + a_{m,2}x_2 + \cdots + a_{m,n}x_n &= b_m \end{aligned}$$

We can write this as $Ax = b$ where $A = [a_{ij}]$, $x \in F^n$, $b \in F^m$. We can also write it as the augmented matrix $[A \mid b]$.

We can solve this system by performing row operations!

Row operations

  1. swap two rows
  2. multiply a row by a nonzero scalar
  3. add a scalar multiple of one row to another

These operations do not affect the solution set.
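A small sanity check of this fact (the example system and the row operations are choices of mine), using NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)

# Apply each elementary row operation to the augmented matrix [A | b]
M = np.column_stack([A, b])
M[[0, 1]] = M[[1, 0]]        # 1. swap two rows
M[0] *= 4.0                  # 2. scale a row by a nonzero scalar
M[1] += 0.5 * M[0]           # 3. add a multiple of one row to another

x2 = np.linalg.solve(M[:, :2], M[:, 2])
print(np.allclose(x, x2))    # True: the solution set is unchanged
```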

Row Reduced Echelon Form

This is the (unique!) row-equivalent matrix such that

  1. The leading entry of every row is 1 (pivot) unless the row is all zeroes
  2. Every pivot has zeroes above and below
  3. Pivots move strictly to the right, and rows of zeros are on the bottom
Example

$$\begin{bmatrix} 1 & 5 & 0 & 4 \\ 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

Remark

Matrices form equivalence classes under row reduction, and every equivalence class has a special member that is the rref form for all those matrices.
(proof later, for now just think about it)

We can read off solutions from the pivot and free variables: solve for the pivot variables in terms of the free variables, then set each free variable to 1 (and the rest to 0) in turn and solve for the others.

(see reduced row echelon form)

Special case: Suppose that $A$ is a square, invertible matrix. Then we can row reduce $[A \mid b] \to [I \mid b']$. And since row reduction preserves the solution set, the solution of the original system is just $x = b'$! So this is very nice to have.

This is also very nice for solving systems in different scenarios. Suppose we want to find the (non-simultaneous) solutions to the systems $Ax = b_1$, $Ax = b_2$, $Ax = b_3$; then we can simply row reduce $[A \mid b_1 \mid b_2 \mid b_3] \to [I \mid b_1' \mid b_2' \mid b_3']$ to read off all three solutions.
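The multiple right-hand-side idea can be sketched with NumPy, whose `solve` accepts all the columns at once (the specific $A$ and $b_i$ here are made-up examples):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
# Stack the right-hand sides b1, b2, b3 as columns of one matrix
B = np.column_stack([[5.0, 10.0], [1.0, 0.0], [0.0, 1.0]])

X = np.linalg.solve(A, B)   # solves all three systems in one call
print(X[:, 0])              # solution of A x = b1 -> [1. 3.]
```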

So how do we know when we have this?

Determinant

For AMn(F), define the determinant of A as

$$\det(A) = \sum_{\substack{\sigma : \{1,\dots,n\} \to \{1,\dots,n\} \\ \text{bijections}}} \operatorname{sign}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}$$

Here the $\sigma$ are permutations (that's what the bijection notation means).

Transversal - a set of $n$ entries in a matrix such that no two entries are in the same row or column. A transversal product is the product of those entries. There are $n!$ transversals in a matrix of order $n$. This is what the definition above is doing: it is summing all of the signed transversal products.

Note

This is an AWFUL way to calculate the determinant, but it is a nice definition

(see determinant)
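The definition transcribes almost literally into code. This brute-force sketch (the helper name `det_by_definition` is mine) sums the signed transversal products over all $n!$ permutations, so it is only practical for tiny matrices:

```python
import itertools
import math

def det_by_definition(A):
    """Sum of signed transversal products over all n! permutations."""
    n = len(A)
    total = 0
    for sigma in itertools.permutations(range(n)):
        # sign(sigma) = (-1)^(number of inversions)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        prod = math.prod(A[i][sigma[i]] for i in range(n))
        total += (-1) ** inversions * prod
    return total

print(det_by_definition([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
```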

Laplace Expansion

For any fixed row $i$ (or column $j$), it turns out that

$$\det(A) = \sum_{k=1}^{n} (-1)^{i+k} a_{i,k} M_{i,k} = \sum_{k=1}^{n} (-1)^{j+k} a_{k,j} M_{k,j}$$

where $M_{s,t} = \det(A(s,t))$, the determinant of the matrix $A$ without row $s$ and without column $t$ (deleted simultaneously).

This is also a bad way to calculate the determinant, because it too takes factorial time!

(see laplace expansion)
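A minimal sketch of the expansion (along the first row; with 0-based indexing the sign is $(-1)^k$; the helper names are mine):

```python
def minor(A, s, t):
    """The matrix A with row s and column t deleted simultaneously."""
    return [row[:t] + row[t + 1:] for i, row in enumerate(A) if i != s]

def det_laplace(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] * det_laplace(minor(A, 0, k))
               for k in range(n))

print(det_laplace([[1, 2], [3, 4]]))   # -2
```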

Facts

  1. if $A, B \in M_n(F)$, then $\det(AB) = \det(A)\det(B)$
    (can be done algebraically)

  2. Also, $\det(U) = \prod_k \lambda_k$, where $U$ indicates some triangular matrix and the $\lambda_k$ are its diagonal entries.

  3. Also, if $A$ has a row (or column) of zeros, then the determinant is 0 (every transversal product contains an entry from that row).

(see determinant)
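These facts are easy to spot-check numerically with NumPy (random matrices, seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Fact 1: det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Fact 2: for a triangular matrix, det = product of the diagonal entries
U = np.triu(A)
assert np.isclose(np.linalg.det(U), np.prod(np.diag(U)))

# Fact 3: a row of zeros forces det = 0
A[2] = 0.0
assert np.isclose(np.linalg.det(A), 0.0)
print("all facts verified")
```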

So how do we find the determinant, if the methods above are so bad?

Note

Suppose we have $A \in M_n(F)$. What is the effect on the determinant of each row operation we can do?

  1. if we swap two rows, the determinant is multiplied by $-1$
  2. if we multiply a row by a nonzero scalar, the determinant is multiplied by that scalar
  3. if we add a multiple of one row to another, the determinant is multiplied by $1$ (no change)
Theorem

This means that we can row reduce our matrix $A$ to its rref, keeping track of the total factor $K$ by which our row operations have multiplied the determinant, so that $\det(\text{rref}) = K \det A$. Since $A$ is square, the rref is either $I$ or has a row of zeros.

  • Case 1: the rref is $I$, which has $\det I = 1$, so undoing the collected factor gives $\det A = 1/K$
  • Case 2: the rref has a row (or column) of zeros; then $\det A = 0$
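The theorem suggests an $O(n^3)$ algorithm. A sketch (the function name and the partial-pivoting choice are mine): reduce to triangular form, flip the sign on each swap, and multiply the diagonal at the end:

```python
def det_by_elimination(A):
    """Gaussian elimination tracking row-op effects on the determinant:
    a swap flips the sign; adding a multiple of a row changes nothing."""
    M = [row[:] for row in A]      # work on a copy
    n = len(M)
    sign = 1.0
    for col in range(n):
        # pick the largest pivot in this column (partial pivoting)
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            return 0.0             # no pivot available: det = 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign           # row swap multiplies det by -1
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # now triangular: det = sign * product of the diagonal entries
    prod = 1.0
    for i in range(n):
        prod *= M[i][i]
    return sign * prod

print(det_by_elimination([[1.0, 2.0], [3.0, 4.0]]))   # ≈ -2.0
```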