Determinants

Definition and expansion formulas

Definition. Let \(A\) be an \(m\times n\)-matrix, and \(1\le i\le m\), \(1\le j\le n\). Then the \((i,j)\)-th submatrix \(A_{ij}\) is the \((m-1)\times(n-1)\) matrix you get from \(A\) by deleting its \(i\)-th row and \(j\)-th column.

Example. Let \[ A=\begin{pmatrix} 5 & -2 & 0 & 1 \\ 2 & -1 & 3 & 1 \\ 0 & 1 & -2 & 4 \\ 0 & 2 & 1 & 8 \end{pmatrix}. \] Then we have \[ A_{12}=\begin{pmatrix} 2 & 3 & 1 \\ 0 & -2 & 4 \\ 0 & 1 & 8 \end{pmatrix}\text{, and } A_{31}=\begin{pmatrix} -2 & 0 & 1 \\ -1 & 3 & 1 \\ 2 & 1 & 8 \end{pmatrix}. \]
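The row and column deletion is easy to mechanize. The following is a minimal Python sketch (the helper name `submatrix` is my own, not part of the text) that reproduces \(A_{12}\) and \(A_{31}\) from the example above.

```python
import numpy as np

def submatrix(A, i, j):
    """A_{ij}: delete the i-th row and j-th column of A (i, j are 1-indexed as in the text)."""
    return np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)

A = np.array([[5, -2, 0, 1],
              [2, -1, 3, 1],
              [0, 1, -2, 4],
              [0, 2, 1, 8]])

print(submatrix(A, 1, 2))   # [[ 2  3  1], [ 0 -2  4], [ 0  1  8]]
print(submatrix(A, 3, 1))   # [[-2  0  1], [-1  3  1], [ 2  1  8]]
```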

Definition. Let \(A=\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\) be a \(2\times2\) matrix. Then its determinant is \[ \det(A)=|A|=ad-bc. \] Let \(A\) be an \(n\times n\) matrix for \(n>2\). Its determinant is defined recursively. This means that, supposing that the determinants of \((n-1)\times(n-1)\) matrices are already defined, we can define \[ \det(A)=|A|=\sum_{i=1}^n(-1)^{i+1}a_{i1}\det(A_{i1})=a_{11}\det(A_{11})-a_{21}\det(A_{21})+\dotsb+(-1)^{n+1}a_{n1}\det(A_{n1}). \] Let \(1\le i,j\le n\). Then the \((i,j)\) minor of \(A\) is \(\det(A_{ij})\), and the \((i,j)\) cofactor of \(A\) is \((-1)^{i+j}\det(A_{ij})\). Therefore, \(\det(A)\) is the sum of \(a_{i1}\) times the \((i,1)\) cofactor of \(A\) over \(1\le i\le n\); this is called expansion along the first column.

Example. We have \[ \begin{vmatrix} 3 & 1 & -2 \\ 0 & 2 & 1 \\ 7 & -1 & 3 \end{vmatrix}= 3\begin{vmatrix} 2 & 1 \\ -1 & 3 \end{vmatrix} - 0\begin{vmatrix} 1 & -2 \\ -1 & 3 \end{vmatrix} + 7\begin{vmatrix} 1 & -2 \\ 2 & 1 \end{vmatrix}=3(2\cdot 3-1(-1))+7(1\cdot 1-(-2)2)=56. \]
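To make the recursive definition concrete, here is a small Python sketch (function names are my own) that expands along the first column; on the \(3\times3\) matrix of the example it returns \(56\).

```python
import numpy as np

def submatrix(A, i, j):
    """A_{ij}: delete the i-th row and j-th column (1-indexed)."""
    return np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)

def det(A):
    """Determinant via the recursive definition: expansion along the first column."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    if n == 2:
        return A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    return sum((-1) ** (i + 1) * A[i - 1, 0] * det(submatrix(A, i, 1))
               for i in range(1, n + 1))

M = np.array([[3, 1, -2],
              [0, 2, 1],
              [7, -1, 3]])
print(det(M))   # 56
```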

Proposition. Let \(A\) be an \(n\times n\) matrix, and let \(1\le i\le n\). Then by expansion along the \(i\)-th row, we get \[ \det(A)=\sum_{j=1}^n(-1)^{i+j}a_{ij}\det(A_{ij}). \] Let \(1\le j\le n\). By expansion along the \(j\)-th column, we get \[ \det(A)=\sum_{i=1}^n(-1)^{i+j}a_{ij}\det(A_{ij}). \] That is, we can compute the determinant by adding up the entries times their cofactors along any row or column. This gives \(2n\) formulas for the determinant. Out of these \(2n\) choices, it is best to expand along the row or column with the most zeros, since a zero entry makes the corresponding term zero, and thus we don't have to calculate the determinant of the corresponding submatrix.

Example. Let \[ A=\begin{pmatrix} 2 & 0 & -1 \\ 1 & -2 & 2 \\ 3 & 0 & 8 \end{pmatrix}. \] Using the definition, and expanding along the first column, we get \[ \det(A)=2\begin{vmatrix}-2 & 2 \\ 0 & 8 \end{vmatrix}-1\begin{vmatrix} 0 & -1 \\ 0 & 8 \end{vmatrix} + 3 \begin{vmatrix} 0 & -1 \\ -2 & 2 \end{vmatrix} = 2((-2)8-2\cdot0)-1(0\cdot 8-(-1)0)+3(0\cdot 2-(-1)(-2))=-38. \] Noticing that the second column has two zeros, we get \[ \det(A)=-0\det(A_{12})+(-2)\begin{vmatrix}2 & -1 \\ 3 & 8 \end{vmatrix}-0\det(A_{32})=(-2)(2\cdot 8-(-1)3)=-38. \]
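The freedom to expand along any row or column is easy to verify numerically. Here is a minimal sketch (helper names are my own) that expands the matrix of this example along the 1st and along the 2nd column and gets \(-38\) both times.

```python
import numpy as np

def minor(A, i, j):
    """The (i, j) minor det(A_{ij}), computed with numpy (1-indexed i, j)."""
    return np.linalg.det(np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1))

def expand_column(A, j):
    """Cofactor expansion along the j-th column."""
    n = A.shape[0]
    return sum((-1) ** (i + j) * A[i - 1, j - 1] * minor(A, i, j) for i in range(1, n + 1))

A = np.array([[2, 0, -1],
              [1, -2, 2],
              [3, 0, 8]])
print(round(expand_column(A, 1)), round(expand_column(A, 2)))   # -38 -38
```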

Properties of determinants

Example. Let \[ A=\begin{pmatrix} 7 & 2 & 1 \\ 0 & 3 & -2 \\ 0 & 0 & 9 \end{pmatrix}. \] Expanding along the first column, we get \[ |A|=7\begin{vmatrix}3 & -2 \\ 0 & 9 \end{vmatrix}-0\det(A_{21})+0\det(A_{31})=7(3\cdot9-(-2)0)=7\cdot3\cdot9. \] Note that the determinant can be calculated by multiplying the entries in the main diagonal.

Definition. Let \(A\) be an \(n\times n\) matrix. Then \(A\) is an upper triangular matrix, if \(a_{ij}=0\) whenever \(i>j\). \(A\) is a lower triangular matrix, if \(a_{ij}=0\) whenever \(i<j\).

Example. Suppose that the \(n\times n\) matrix \(A\) is in echelon form. Then \(A\) is an upper triangular matrix.

Example. Suppose that \(A\) is both an upper triangular and a lower triangular matrix. Then \(a_{ij}=0\) whenever \(i>j\) or \(i<j\). This means that in this case, \(A\) is a diagonal matrix.

Proposition. Let \(A\) be an upper triangular or a lower triangular matrix. Then the determinant of \(A\) is the product of the entries in the main diagonal: \[ \det(A)=\prod_{i=1}^na_{ii}=a_{11}a_{22}\dotsb a_{nn}. \]
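A quick numerical check of this proposition, using numpy's determinant on the upper triangular matrix from the example above:

```python
import numpy as np

A = np.array([[7, 2, 1],
              [0, 3, -2],
              [0, 0, 9]])

# The determinant of a triangular matrix is the product of its diagonal entries.
print(round(np.linalg.det(A)))      # 189
print(int(np.prod(np.diag(A))))     # 7 * 3 * 9 = 189
```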

If \(A\) does not have many zeros, then we might use elementary row or column operations to make it have more zeros. For this, we need to see how elementary row or column operations change the determinant.

Definition. Let \(A\) be an \(m\times n\) matrix. The elementary column operations are defined in a similar way to the elementary row operations.

Let \(1\le j\le n\) and \(c\) a nonzero scalar. Then \(cC_j\) denotes the operation where we multiply the entries in the \(j\)-th column by \(c\).

Let \(1\le j\ne j'\le n\). Then \(C_j\leftrightarrow C_{j'}\) denotes the operation where we swap the \(j\)-th column and the \(j'\)-th column.

Let \(1\le j\ne j'\le n\) and \(b\) a scalar. Then \(C_j+bC_{j'}\) denotes the operation where we add \(b\) times the \(j'\)-th column to the \(j\)-th column.

Proposition. Let \(A\) be an \(n\times n\) matrix.

Let \(1\le i\le n\) and \(c\) a nonzero scalar. Let \(B\) be the matrix you get by applying \(cR_i\) to \(A\). Then we have \(\det(B)=c\det(A)\).

Let \(1\le j\le n\) and \(c\) a nonzero scalar. Let \(B\) be the matrix you get by applying \(cC_j\) to \(A\). Then we have \(\det(B)=c\det(A)\).

Let \(1\le i\ne i'\le n\). Let \(B\) be the matrix you get by applying \(R_i\leftrightarrow R_{i'}\) to \(A\). Then we have \(\det(B)=-\det(A)\).

Let \(1\le j\ne j'\le n\). Let \(B\) be the matrix you get by applying \(C_j\leftrightarrow C_{j'}\) to \(A\). Then we have \(\det(B)=-\det(A)\).

Let \(1\le i\ne i'\le n\) and \(b\) a scalar. Let \(B\) be the matrix you get by applying \(R_i+bR_{i'}\) to \(A\). Then we have \(\det(B)=\det(A)\).

Let \(1\le j\ne j'\le n\) and \(b\) a scalar. Let \(B\) be the matrix you get by applying \(C_j+bC_{j'}\) to \(A\). Then we have \(\det(B)=\det(A)\).
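The effect of each elementary operation on the determinant can be checked numerically. The sketch below (my own sample matrix, not part of the text) applies one operation of each kind and compares determinants.

```python
import numpy as np

A = np.array([[2., 1., 3.],
              [-2., 3., 1.],
              [1., -1., 2.]])
d = np.linalg.det(A)

B = A.copy(); B[0, :] *= 5                        # 5R_1: determinant is multiplied by 5
print(np.isclose(np.linalg.det(B), 5 * d))        # True

B = A.copy(); B[[0, 2], :] = B[[2, 0], :]         # R_1 <-> R_3: determinant changes sign
print(np.isclose(np.linalg.det(B), -d))           # True

B = A.copy(); B[:, 1] += 4 * B[:, 2]              # C_2 + 4C_3: determinant is unchanged
print(np.isclose(np.linalg.det(B), d))            # True
```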

Example. Let \[ A=\begin{pmatrix} 2 & 1 & 3 \\ -2 & 3 & 1 \\ 1 & -1 & 2 \end{pmatrix}. \] There are no zeros in \(A\), but there are some entries equal to \(1\). We can use one of these together with elementary operations to get a matrix with more zeros. For example, we can take \(a_{31}=1\) and clear out the other two entries in the 3rd row by \(C_2+C_1\) and \(C_3-2C_1\). We get \[ |A|=\begin{vmatrix} 2 & 3 & -1 \\ -2 & 1 & 5 \\ 1 & 0 & 0 \end{vmatrix}. \] We can calculate this determinant easily by expanding along the 3rd row: \[ \begin{vmatrix} 2 & 3 & -1 \\ -2 & 1 & 5 \\ 1 & 0 & 0 \end{vmatrix}=\begin{vmatrix}3 & -1 \\ 1 & 5 \end{vmatrix}=16. \]

Example. Let \[ A=\begin{pmatrix} 2 & 3 & -2 \\ 3 & 2 & 5 \\ -2 & 4 & 3 \end{pmatrix}. \] This matrix doesn't even have \(1\) as an entry, but we can produce one by performing \(R_2+R_3\). We get \[ |A|=\begin{vmatrix} 2 & 3 & -2\\ 1 & 6 & 8 \\ -2 & 4 & 3 \end{vmatrix}. \] Now we can use the \((2,1)\)-entry \(1\) to clear out the other entries in the 1st column by \(R_1-2R_2\) and \(R_3+2R_2\). We get \[ \begin{vmatrix} 2 & 3 & -2\\ 1 & 6 & 8 \\ -2 & 4 & 3 \end{vmatrix}= \begin{vmatrix} 0 & -9 & -18 \\ 1 & 6 & 8 \\ 0 & 16 & 19 \end{vmatrix}. \] We can calculate this determinant by expanding along the 1st column: \[ \begin{vmatrix} 0 & -9 & -18 \\ 1 & 6 & 8 \\ 0 & 16 & 19 \end{vmatrix}=-\begin{vmatrix} -9 & -18 \\ 16 & 19 \end{vmatrix}= -\bigl((-9)19-(-18)16\bigr)=9(19-2\cdot16)=9(-13)=-117. \]
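Since it is easy to slip on signs in these reductions, a quick numerical check (not part of the text) is reassuring:

```python
import numpy as np

A = np.array([[2, 3, -2],
              [3, 2, 5],
              [-2, 4, 3]])
print(round(np.linalg.det(A)))   # -117
```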

Example. Let \[ A=\begin{pmatrix} 2 & 1 & 3 \\ -2 & 3 & 1 \\ 1 & -1 & 2 \end{pmatrix}. \] We already know that \(\det(A)=16\). Let's see what this tells us about the determinant of \[ B=\begin{pmatrix} 2 & 1 & -6\\ -2 & 3 & -2 \\ 1 & -1 & -4 \end{pmatrix}. \] We can see that we have gotten \(B\) from \(A\) by \((-2)C_3\). Therefore, we have \[ |B|=(-2)|A|=-32. \] Let's find the determinant of \[ C=\begin{pmatrix} -4 & 6 & 2 \\ 2 & 1 & 3 \\ 1 & -1 & 2 \end{pmatrix}. \] We can get \(C\) from \(A\) by \(R_1\leftrightarrow R_2\) and then \(2R_1\). Therefore, we have \[ |C|=2(-1)|A|=-32. \]

Applications

One can prove that if, in the expansion formula, the entries are taken from a different row or column than the cofactors, then the resulting sum is zero.

Proposition. Let \(A\) be an \(n\times n\) matrix. Let \(1\le i\ne i'\le n\). Then we have \[ \sum_{j=1}^n(-1)^{i+j}a_{i'j}\det(A_{ij})=0. \] Let \(1\le j\ne j'\le n\). Then we have \[ \sum_{i=1}^n(-1)^{i+j}a_{ij'}\det(A_{ij})=0. \]
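This "alien cofactor" identity can also be checked numerically. A minimal sketch (helper names are my own) sums the entries of row \(i'\) against the cofactors of row \(i\ne i'\) and gets \(0\).

```python
import numpy as np

def minor(A, i, j):
    """det(A_{ij}) (1-indexed i, j)."""
    return np.linalg.det(np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1))

def alien_expansion(A, i, i_prime):
    """Entries of row i' combined with the cofactors of row i."""
    n = A.shape[0]
    return sum((-1) ** (i + j) * A[i_prime - 1, j - 1] * minor(A, i, j) for j in range(1, n + 1))

A = np.array([[2, 1, 3],
              [-2, 3, 1],
              [1, -1, 2]])
print(round(alien_expansion(A, 1, 2)))   # 0
print(round(alien_expansion(A, 3, 1)))   # 0
```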

Definition. Let \(A\) be an \(n\times n\) matrix. Then its cofactor matrix is the \(n\times n\) matrix with \((i,j)\) entry the \((i,j)\) cofactor \((-1)^{i+j}\det(A_{ij})\). The classical adjoint matrix \(\mathrm{Ad}(A)\) of \(A\) is the transpose of the cofactor matrix. That is, its \((i,j)\) entry is the \((j,i)\) cofactor.

Corollary. We have \[ A\cdot\mathrm{Ad}(A)=\det(A)I_n. \]

Corollary. Suppose that \(\det(A)\ne0\). Then \(A\) is invertible with \(A^{-1}=\frac{1}{\det(A)}\mathrm{Ad}(A)\).

Example. Let \[ A=\begin{pmatrix} 2 & 0 & -1 \\ 1 & -2 & 3 \\ 0 & 2 & 1 \end{pmatrix}. \] Then expanding along the first column, we get \[ \det(A)=2\begin{vmatrix} -2 & 3 \\ 2 & 1 \end{vmatrix}-\begin{vmatrix} 0 & -1 \\ 2 & 1 \end{vmatrix}=-18. \] Therefore, \(A\) is invertible, with inverse \[ A^{-1}=\frac{1}{\det(A)}\mathrm{Ad}(A)=-\frac{1}{18}\begin{pmatrix} -8 & -2 & -2 \\ -1 & 2 & -7 \\ 2 & -4 & -4 \end{pmatrix}. \]
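The adjugate formulas from the two corollaries can be checked numerically for this example. The sketch below builds the cofactor matrix directly (the helper name `adjugate` is my own) and verifies both \(A\cdot\mathrm{Ad}(A)=\det(A)I_3\) and the formula for \(A^{-1}\).

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            Aij = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(Aij)
    return C.T

A = np.array([[2., 0., -1.],
              [1., -2., 3.],
              [0., 2., 1.]])
d = np.linalg.det(A)                                    # -18
print(np.allclose(A @ adjugate(A), d * np.eye(3)))      # True
print(np.allclose(np.linalg.inv(A), adjugate(A) / d))   # True
```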

Proposition. Let \(A\) and \(B\) be \(n\times n\) matrices. Then we have \[ |A|\cdot|B|=|AB|. \]
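A quick numerical sanity check of this multiplicativity on random matrices (my own example, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) * det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))   # True
```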

Corollary. \(A\) is invertible precisely when \(\det(A)\ne0\). If \(A\) is invertible, then we get \(|A^{-1}|=|A|^{-1}\).

Proof. We have seen in the classical adjoint formula that if \(\det(A)\ne0\), then \(A\) is invertible. To show the implication the other way, suppose that \(A\) is invertible. Then we have \[ A\cdot A^{-1}=I. \] This implies \[ |A|\cdot|A^{-1}|=|A\cdot A^{-1}|=|I|=1. \] This shows that \(|A|\ne0\).

Example. Let \[ A=\begin{pmatrix} 2 & -1 & 0 \\ -1 & 3 & 2 \\ 1 & 2 & 2 \end{pmatrix}. \] Expanding along the first row, we get \[ \det(A)=2\begin{vmatrix} 3 & 2 \\ 2 & 2 \end{vmatrix}+\begin{vmatrix} -1 & 2 \\ 1 & 2 \end{vmatrix}=0. \] Therefore, \(A\) is not invertible.

Proposition (Cramer's rule). Let \(A\) be an \(n\times n\) matrix and \(\mathbf b\) an \(n\)-vector. Suppose that \(\det(A)\ne0\). Then the equation \(A\mathbf x=\mathbf b\) has a unique solution, where \(\mathbf x\) is the \(n\)-vector with \(j\)-th component \(\frac{1}{\det(A)}\) times the determinant of the matrix you get if you replace the \(j\)-th column of \(A\) by \(\mathbf b\).

Proof. We have seen that \[ \mathbf x=A^{-1}\mathbf b=\frac{1}{\det(A)}\mathrm{Ad}(A)\mathbf b. \] By construction, the \(j\)-th component of the \(n\)-vector \(\mathrm{Ad}(A)\mathbf b\) is \(\sum_{i=1}^n(-1)^{i+j}\det(A_{ij})b_i\), which is exactly the expansion along the \(j\)-th column of the matrix you get when you replace the \(j\)-th column of \(A\) by \(\mathbf b\) (deleting the \(j\)-th column removes the replaced column, so the submatrices \(A_{ij}\) are unchanged).

Example. Consider the equation \[ \begin{pmatrix} 2 & 3 \\ -1 & 5 \end{pmatrix}\mathbf x=\begin{pmatrix} 4 \\ -7 \end{pmatrix}. \] We have \[ \begin{vmatrix} 2 & 3 \\ -1 & 5 \end{vmatrix}=13\ne0. \] Therefore, the equation has a unique solution with \[ x_1=\frac{\begin{vmatrix}4 & 3 \\ -7 & 5 \end{vmatrix}}{\begin{vmatrix}2 & 3 \\ -1 & 5 \end{vmatrix}}=\frac{41}{13} \] and \[ x_2=\frac{\begin{vmatrix}2 & 4 \\ -1 & -7 \end{vmatrix}}{\begin{vmatrix}2 & 3 \\ -1 & 5 \end{vmatrix}}=\frac{-10}{13}. \]
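Cramer's rule is straightforward to implement. A minimal Python sketch (the function name `cramer` is my own) reproduces \(x_1=41/13\) and \(x_2=-10/13\) for this example.

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule; assumes det(A) != 0."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.astype(float).copy()
        Aj[:, j] = b                      # replace the j-th column of A by b
        x[j] = np.linalg.det(Aj) / d
    return x

A = np.array([[2., 3.],
              [-1., 5.]])
b = np.array([4., -7.])
print(cramer(A, b))               # [ 3.1538... -0.7692...], i.e. (41/13, -10/13)
print(41 / 13, -10 / 13)          # for comparison
```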