Linear Algebra Quick Review & Cheatsheet: Matrix Basics (Transpose, Inverse, Determinant, Rank), Gaussian Elimination, LU Decomposition, Orthogonality, QR Decomposition, Eigenvalues and Eigenvectors, Symmetric Matrices, Singular Value Decomposition.
Here is a quick review of linear algebra; the examples and some definitions are from MIT 18.06 Linear Algebra.
1. Basic Concepts and Terminology
- Matrix Multiplication: multiplying matrix \(A\in R^{m\times n}\) by \(B\in R^{n\times p}\) gives matrix \(C\in R^{m\times p}\), where
\[
C_{ij} = \sum^{n}_{k=1}A_{ik}B_{kj}~.
\]
- Matrix Transpose: the transpose of matrix \(A\in R^{m\times n}\) is \(A^{\top}\in R^{n\times m}\), obtained by flipping rows and columns: \(A_{ij} = A^{\top}_{ji}\).
\[
\begin{aligned}
&\left(A^{\top}\right)^{\top} = A \\
&\left(AB\right)^\top = B^\top A^\top \\
&(A + B)^\top = A^\top + B^\top~.
\end{aligned}
\]
- Matrix Trace: the trace of matrix \(A\in R^{n\times n}\) (square matrix) is the sum of the diagonal elements of the matrix:
\[
\text{tr}(A) = \sum^{n}_{i=1}A_{ii}~.
\]
For matrices \(A\) and \(B\) such that both products are defined, \(\text{tr}(AB) = \text{tr}(BA)\).
- Matrix Inverse: the inverse of matrix \(A\in R^{n\times n}\) (square matrix) is denoted as \(A^{-1}\) and is the unique matrix such that
\[
A^{-1}A = I = AA^{-1},
\]
where \(I\) is the identity matrix. Matrices that are not invertible are called singular. Suppose \(A\) and \(B\) are non-singular and of equal size:
\[
\begin{aligned}
& \left(A^{-1}\right)^{-1} = A \\
& \left(AB\right)^{-1} = B^{-1}A^{-1} \\
& \left(A^{-1}\right)^\top = \left(A^{\top}\right)^{-1}~.
\end{aligned}
\]
- Determinant: the determinant of a square matrix \(A\in R^{n\times n}\) is a function:
\[
|A|: R^{n\times n}\rightarrow R~.
\]
It satisfies
\[
\begin{aligned}
& |A| = |A^{\top}| \\
& |AB| = |A||B| \\
& |A^{-1}| = (|A|)^{-1}~.
\end{aligned}
\]
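As a sanity check, here is a minimal NumPy sketch (NumPy is an assumed choice on my side, not something the 18.06 notes use) verifying the transpose, trace, inverse, and determinant identities above on small random matrices:
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (AB)^T = B^T A^T
assert np.allclose((A @ B).T, B.T @ A.T)
# tr(AB) = tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# (AB)^{-1} = B^{-1} A^{-1}  (assuming A and B are non-singular)
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
# |AB| = |A||B| and |A^{-1}| = 1/|A|
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
```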
2. Gaussian Elimination
- Row Operations
To solve linear equations, we usually perform row operations to reach upper triangular form and reduce the unknowns. For example, subtract 3 times row 1 from row 2 of matrix \(A\), i.e. \(r_{2} - 3r_{1}\):
\[
\begin{bmatrix}
1 & 2 & 1 \\
3 & 8 & 1 \\
0 & 4 & 7 \\
\end{bmatrix} \longrightarrow r_{2} - 3r_{1} \longrightarrow
\begin{bmatrix}
1 & 2 & 1 \\
0 & 2 & -2 \\
0 & 4 & 7 \\
\end{bmatrix}
\]
This step is equivalent to multiplying \(A\) on the left by an elimination matrix; the multiplier appears at position (2, 1) of the elimination matrix:
\[
E_{21}A =
\begin{bmatrix}
1 & ~ & ~ \\
-3 & 1 & ~ \\
~ & ~ & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & 2 & 1 \\
3 & 8 & 1 \\
0 & 4 & 7 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 2 & 1 \\
0 & 2 & -2 \\
0 & 4 & 7 \\
\end{bmatrix}
\]
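Below is a small NumPy sketch (an assumed illustration of mine, not from the lecture) showing that left-multiplying by \(E_{21}\) performs exactly the row operation \(r_{2} - 3r_{1}\):
```python
import numpy as np

A = np.array([[1., 2., 1.],
              [3., 8., 1.],
              [0., 4., 7.]])
E21 = np.array([[ 1., 0., 0.],
                [-3., 1., 0.],
                [ 0., 0., 1.]])
print(E21 @ A)
# [[ 1.  2.  1.]
#  [ 0.  2. -2.]
#  [ 0.  4.  7.]]
```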
- LU Decomposition
Continuing the elimination gives the upper triangular matrix:
\[
E_{32}E_{21}A =
\begin{bmatrix}
1 & ~ & ~ \\
~ & 1 & ~ \\
~ & -2 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & ~ & ~ \\
-3 & 1 & ~ \\
~ & ~ & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & 2 & 1 \\
3 & 8 & 1 \\
0 & 4 & 7 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 2 & 1 \\
~ & 2 & -2 \\
~ & ~ & 11 \\
\end{bmatrix} = U,
\]
and moving the elimination matrices to the right-hand side (inverting them reverses their order) gives
\[
A = (E_{32}E_{21})^{-1}U = E_{21}^{-1}E_{32}^{-1}U = \begin{bmatrix}
1 & ~ & ~ \\
3 & 1 & ~ \\
~ & ~ & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & ~ & ~ \\
~ & 1 & ~ \\
~ & 2 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & 2 & 1 \\
~ & 2 & -2 \\
~ & ~ & 11 \\
\end{bmatrix} =
\begin{bmatrix}
1 & ~ & ~ \\
3 & 1 & ~ \\
0 & 2 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & 2 & 1 \\
~ & 2 & -2 \\
~ & ~ & 11 \\
\end{bmatrix} = LU,
\]
therefore \(A\) is factorized into a lower triangular matrix times an upper triangular matrix, which is \(A=LU\).
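A quick check of this factorization in NumPy/SciPy (assumed tooling on my side): verify that the hand-computed \(L\) and \(U\) reproduce \(A\), and compare with `scipy.linalg.lu`, which uses partial pivoting and may therefore return factors that differ by a row permutation:
```python
import numpy as np
from scipy.linalg import lu

A = np.array([[1., 2., 1.],
              [3., 8., 1.],
              [0., 4., 7.]])
L = np.array([[1., 0., 0.],
              [3., 1., 0.],
              [0., 2., 1.]])
U = np.array([[1., 2.,  1.],
              [0., 2., -2.],
              [0., 0., 11.]])
assert np.allclose(L @ U, A)       # the hand-computed factors reproduce A

P, L2, U2 = lu(A)                  # SciPy's pivoted factorization: A = P @ L2 @ U2
assert np.allclose(P @ L2 @ U2, A)
```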
- Solve for the Inverse (Gauss-Jordan Elimination)
Finding the inverse matrix means solving the equation \(AA^{-1} = I\), which can be done column by column: solve \(AA^{-1}_{. i} = I_{. i}\) for each column index \(i\) using Gaussian elimination. Gauss-Jordan elimination solves all columns at once: the elimination transforms \(\left[~A~|~I~\right]\) into \(\left[~I~|~A^{-1}\right]\), continuing past \(U\) all the way to \(I\) on the left.
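Here is a rough Gauss-Jordan sketch (my own illustration; `gauss_jordan_inverse` is a hypothetical helper, not a library function) that reduces \(\left[~A~|~I~\right]\) to \(\left[~I~|~A^{-1}\right]\) and compares the result with `np.linalg.inv`:
```python
import numpy as np

def gauss_jordan_inverse(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # augmented matrix [A | I]
    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))        # partial pivoting for stability
        M[[i, p]] = M[[p, i]]
        M[i] /= M[i, i]                            # scale pivot row so the pivot is 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]             # eliminate column i in every other row
    return M[:, n:]                                # right half is now A^{-1}

A = np.array([[1., 2., 1.],
              [3., 8., 1.],
              [0., 4., 7.]])
assert np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A))
```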
3. Solving Linear Equations
This involves solving \(Ax = b\) and \(Ax = 0\) (the homogeneous case).
- Vector Space: a set of vectors closed under linear combination; if vectors \(v\) and \(w\) are in a vector space, then every linear combination of them is also in that space.
- Subspace: a subset of a vector space that is itself a vector space, e.g. a plane through the origin in 3D space \(R^{3}\) is a subspace.
- Column Space: column vectors of matrix \(A\) span its column space \(C(A)\).
- Null Space: all vectors \(x\) satisfying \(Ax=0\) form its null space \(N(A)\).
- Linear Independence: if no vector in \(x_{1}, x_{2}, \cdots, x_{n}\) can be represented as a linear combination of the remaining vectors, then this group of vectors is linearly independent.
- Rank: the maximal number of linearly independent columns of \(A\) is the column rank; the maximal number of linearly independent rows of \(A\) is the row rank; they are always equal and their common value is \(\text{rank}(A)\) (a NumPy sketch follows this list). \(A\) is full rank if \(\text{rank}(A) = \min(m, n)\); also, for \(A\in R^{m\times n}\),
\[
\begin{aligned}
& \text{rank}(A) \leq \min(m, n) \\
& \text{rank}(A) = \text{rank}(A^{\top}) \\
& \text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B)) \\
& \text{rank}(A + B) \leq \text{rank}(A) + \text{rank}(B)
\end{aligned}
\]
- Basis of a Vector Space: a group of vectors forms a basis if (1) they are linearly independent; (2) they span the space.
- Dimension of a Vector Space: the number of vectors in any basis.
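For a concrete feel of rank and null space, here is a short NumPy/SciPy sketch (assumed tooling, not from the notes) on a rank-deficient matrix whose third column is the sum of the first two:
```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])
print(np.linalg.matrix_rank(A))   # 2: only two independent columns
N = null_space(A)                 # orthonormal basis of N(A)
print(A @ N)                      # ~0: every null-space vector solves Ax = 0
```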
4. Orthogonality
- Orthogonal Matrices: a square matrix \(Q\) is orthogonal if its columns are orthonormal, i.e.
\[
Q^{\top}Q = I = QQ^{\top}~.
\]
Therefore \(Q\) is invertible with \(Q^{-1} = Q^{\top}\).
- Projection: suppose \(x\) and \(y\) are vectors with \(y\neq 0\); the projection of \(x\) onto \(y\) is \(p=cy\) for some scalar \(c\), chosen so that the error \(x - p\) is orthogonal to \(y\):
\[
(x - p)^{\top}y = (x - cy)^{\top}y = x^{\top}y - cy^{\top}y = 0 \rightarrow c = \frac{x^{\top}y}{y^{\top}y}~.
\]
- Gram-Schmidt Process: a method to find an orthogonal basis from given linearly independent vectors. The idea is to subtract from every new vector its projections onto the directions already fixed (a code sketch follows the QR item below). For example, given 3 independent vectors \(a, b, c\), the orthogonal basis \(q_{1}, q_{2}, q_{3}\) is obtained by:
\[
\begin{aligned}
& q_{1} = a \\
& q_{2} = b - \frac{q_{1}^{\top} b}{q_{1}^{\top}q_{1}}q_{1} \\
& q_{3} = c - \frac{q_{1}^{\top} c}{q_{1}^{\top}q_{1}}q_{1} - \frac{q_{2}^{\top} c}{q_{2}^{\top}q_{2}}q_{2}~.
\end{aligned}
\]
- QR Decomposition: a square matrix \(A\) with independent columns can be decomposed as \(A=QR\), where \(Q\) is an orthogonal matrix whose columns form an orthonormal basis of the column space of \(A\), and \(R\) is an upper triangular matrix.
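Both Gram-Schmidt and QR can be sketched in a few lines of NumPy (assumed tooling; `gram_schmidt` below is my own helper). The hand-rolled version returns orthogonal but unnormalized vectors, matching the formulas above, while `np.linalg.qr` normalizes the columns of \(Q\):
```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent 1-D arrays (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        q = v.astype(float)
        for b in basis:
            q = q - (b @ v) / (b @ b) * b   # subtract the projection of v onto b
        basis.append(q)
    return basis

a = np.array([1., 1., 0.])
b = np.array([1., 0., 1.])
c = np.array([0., 1., 1.])
q1, q2, q3 = gram_schmidt([a, b, c])
print(q1 @ q2, q1 @ q3, q2 @ q3)            # all ~0: pairwise orthogonal

A = np.column_stack([a, b, c])
Q, R = np.linalg.qr(A)                      # Q has orthonormal columns, R is upper triangular
assert np.allclose(Q @ R, A)
```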
5. Eigenvalues and Eigenvectors
- Eigenvalues and Eigenvectors: for a square matrix \(A\), if a nonzero vector \(v\) and a scalar \(\lambda\) satisfy
\[
Av = \lambda v~,
\]
which means applying the transformation \(A\) to \(v\) only stretches it without changing its direction, then \(\lambda\) is an eigenvalue and \(v\) an eigenvector of \(A\). Eigenvalues can be found by solving
\[
|\lambda I - A| = 0
\]
In matrix form the definition is
\[
AS = S\Lambda
\]
where the columns of \(S\) are the eigenvectors and \(\Lambda\) is a diagonal matrix with the eigenvalues on its diagonal. The eigenvalues also relate to the trace and determinant of \(A\):
\[
\begin{aligned}
\text{tr}(A) &= \sum^{n}_{i=1}\lambda_{i} \\
|A| &= \prod^{n}_{i=1}\lambda_{i}~.
\end{aligned}
\]
- Diagonalization: if \(A\) has \(n\) linearly independent eigenvectors, then \(S\) is invertible and
\[
A = S\Lambda S^{-1}~.
\]
with \(S\) and \(\Lambda\) as defined above.
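A short NumPy check (assumed tooling) of the facts above: \(AS = S\Lambda\), the diagonalization, and the trace/determinant relations:
```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
lam, S = np.linalg.eig(A)       # eigenvalues and eigenvectors (as columns of S)
Lam = np.diag(lam)

assert np.allclose(A @ S, S @ Lam)                    # AS = S Lambda
assert np.allclose(A, S @ Lam @ np.linalg.inv(S))     # A = S Lambda S^{-1}
assert np.isclose(np.trace(A), lam.sum())             # tr(A) = sum of eigenvalues
assert np.isclose(np.linalg.det(A), lam.prod())       # |A| = product of eigenvalues
```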
6. Symmetric Matrices
- Eigendecomposition: symmetric matrices are defined by \(A = A^{\top}\), so they have to be square. A symmetric matrix can be decomposed into:
\[
A = Q\Lambda Q^{\top},
\]
where \(Q\) is an orthogonal matrix, \(Q^{\top}=Q^{-1}\), and \(\Lambda\) is diagonal with the (real) eigenvalues.
- Quadratic Form: the scalar value \(x^{\top}Ax\) is the quadratic form, with explicit form:
\[
x^{\top}Ax = \sum^{n}_{i=1}\sum^{n}_{j=1}a_{ij}x_{i}x_{j}.
\]
- Positive Semi-Definite Matrices: a symmetric matrix \(A\) is positive semi-definite if \(x^{\top}Ax\geq 0\) for all vectors \(x\).
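For symmetric matrices, `np.linalg.eigh` (NumPy again as an assumed tool) returns exactly the \(Q\) and \(\Lambda\) of the eigendecomposition above, and the signs of the eigenvalues decide positive semi-definiteness:
```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])             # symmetric
lam, Q = np.linalg.eigh(A)           # eigh is specialized for symmetric/Hermitian matrices
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)   # A = Q Lambda Q^T
assert np.allclose(Q.T @ Q, np.eye(2))          # Q is orthogonal

# A is positive semi-definite iff all eigenvalues are >= 0
print(np.all(lam >= 0))              # True here (eigenvalues 1 and 3)
```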
7. Singular Value Decomposition (SVD)
For a matrix \(A\) of any size \(m\times n\), there is a decomposition:
\[
A = U\Sigma V^{\top},
\]
where \(U\) and \(V\) are both orthogonal matrices, and \(\Sigma\) is a (rectangular) diagonal matrix with the singular values on its diagonal. This decomposition can be solved analytically from:
\[
\begin{aligned}
& A^{\top}A = V\Sigma^{\top}\Sigma V^{\top} \\
& AA^{\top} = U\Sigma\Sigma^{\top} U^{\top},
\end{aligned}
\]
where \(A^{\top}A\) and \(AA^{\top}\) are both square, symmetric, and positive semi-definite; the diagonal matrices \(\Sigma^{\top}\Sigma\) and \(\Sigma\Sigma^{\top}\) contain their eigenvalues, which are the squares of the singular values (the columns of \(V\) are eigenvectors of \(A^{\top}A\), and the columns of \(U\) are eigenvectors of \(AA^{\top}\)).
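Finally, a short SVD sketch in NumPy (assumed tooling), checking that \(A = U\Sigma V^{\top}\) and that the squared singular values equal the eigenvalues of \(A^{\top}A\):
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))                     # a rectangular matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)    # s holds the singular values
assert np.allclose(U @ np.diag(s) @ Vt, A)          # A = U Sigma V^T

# eigenvalues of A^T A (sorted descending) = singular values squared
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(eigvals, s**2)
```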