Applications of Determinants
Examination of the Linear Dependence of Vectors
Theorem 5.
The determinant of a matrix vanishes if, and only if, its columns are linearly dependent.
Thus, if \(\ \boldsymbol{A}\,=\, [\,\boldsymbol{A}_1\,|\;\boldsymbol{A}_2\,|\,\dots\,|\, \boldsymbol{A}_n\,]\,\in\,M_n(K),\ \,\) then
\[\det\boldsymbol{A}\,=\,0 \qquad\Leftrightarrow\qquad \boldsymbol{A}_1,\,\boldsymbol{A}_2,\,\dots,\,\boldsymbol{A}_n\ \ \text{are linearly dependent.}\]
Proof. \(\,\) Let \(\ \boldsymbol{A}\,=\, [\,\boldsymbol{A}_1\,|\;\boldsymbol{A}_2\,|\,\dots\,|\, \boldsymbol{A}_n\,]\,\in\,M_n(K).\)
\(\Leftarrow\ :\ \ \) We assume that the columns \(\ \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\ \) are linearly dependent.
Then one of the columns is a linear combination of the remaining ones. Let, for example,
\[\boldsymbol{A}_n\ =\ \sum_{k\,=\,1}^{n-1}\,c_k\,\boldsymbol{A}_k\,,\qquad c_1,\,c_2,\,\dots,\,c_{n-1}\in K\,.\]
On the basis of Postulates \(\,\) 1. \(\,\) and \(\,\) 2. \(\,\) of the axiomatic definition, we obtain
\[\det\boldsymbol{A}\ =\ \det\,\Big[\,\boldsymbol{A}_1\,|\,\dots\,|\,\boldsymbol{A}_{n-1}\,\Big|\,\sum_{k\,=\,1}^{n-1}c_k\,\boldsymbol{A}_k\,\Big]\ =\ \sum_{k\,=\,1}^{n-1}\,c_k\,\det\,\big[\,\boldsymbol{A}_1\,|\,\dots\,|\,\boldsymbol{A}_{n-1}\,|\,\boldsymbol{A}_k\,\big]\,.\]
Each of the \(\,n-1\,\) terms in the last sum is proportional to a determinant with two identical columns. Thus, recalling Property IIIa., we infer that \(\ \det\boldsymbol{A} = 0.\)
\(\Rightarrow\ :\ \ \) We assume that the columns \(\ \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\ \) of the matrix \(\ \boldsymbol{A}\,\) are linearly independent.
Since the number \(\,n\,\) of linearly independent columns equals the dimension of the vector space \(\,K^n\ \) they belong to, these columns form a basis of that space. Thus every vector in \(\,K^n\ \) can be uniquely represented as a linear combination of \(\ \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\,.\ \)
In particular, the vectors \(\,\boldsymbol{e}_j\ \) of the standard basis of the space \(\,K^n\ \) may be written as
\[\boldsymbol{e}_j\ =\ \sum_{i\,=\,1}^{n}\ b_{ij}\,\boldsymbol{A}_i\,,\qquad j=1,2,\ldots,n.\tag{1}\]
Equations (1) assert that the \(\,j\)-th column of the identity matrix \(\,\boldsymbol{I}_n = [\,\boldsymbol{e}_1\,|\;\boldsymbol{e}_2\,|\,\dots\,|\,\boldsymbol{e}_n\,]\ \) is a linear combination of columns of matrix \(\,\boldsymbol{A},\ \) with coefficients taken from the \(\,j\)-th column of matrix \(\,\boldsymbol{B}=[b_{ij}]_{n\times n}.\ \) According to the Column Rule of Matrix Multiplication, this means that \(\ \boldsymbol{I}_n = \boldsymbol{A}\boldsymbol{B}.\ \)
Using the theorem on the determinant of a product of matrices, we may write
\[1\ =\ \det\boldsymbol{I}_n\ =\ \det\,(\boldsymbol{A}\boldsymbol{B})\ =\ \det\boldsymbol{A}\,\cdot\,\det\boldsymbol{B}\,.\]
Hence \(\ \det{\boldsymbol{A}}\ne 0:\ \) if \(\ \det\boldsymbol{A}\ \) were \(\,0,\ \) then \(\ \det\boldsymbol{A}\,\cdot\,\det\boldsymbol{B}\,=\,0\ne 1.\)
So we have proved that
\[\boldsymbol{A}_1,\,\boldsymbol{A}_2,\,\dots,\,\boldsymbol{A}_n\ \ \text{linearly independent}\qquad\Rightarrow\qquad \det\boldsymbol{A}\ne 0\,,\]
which is equivalent, by contraposition, to the statement
\[\det\boldsymbol{A}\,=\,0\qquad\Rightarrow\qquad \boldsymbol{A}_1,\,\boldsymbol{A}_2,\,\dots,\,\boldsymbol{A}_n\ \ \text{linearly dependent.}\quad\bullet\]
Notes and Comments.
In view of Theorem 3. on the invariance of the determinant under transposition, Theorem 5. may be rewritten in the row version:
The determinant of a matrix vanishes if, and only if, its rows are linearly dependent.
Equations (1) pertain to the situation where in the vector space \(\,K^n\,\) there are two bases: the basis \(\ \,\mathcal{B}\,=\, (\boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n),\ \,\) composed of linearly independent columns of matrix \(\,\boldsymbol{A},\ \) and the standard basis \(\ \mathcal{E}\,=\, (\boldsymbol{e}_1,\boldsymbol{e}_2,\dots,\boldsymbol{e}_n)\,.\)
The matrix \(\,\boldsymbol{B}=[\,b_{ij}\,]_{n\times n}\,,\ \) whose \(\,j\)-th column is composed of the coordinates of the \(\,j\)-th vector of basis \(\ \mathcal{E}\ \) in the basis \(\ \mathcal{B}\ \ \ (j=1,2,\ldots,n),\ \,\) is called the transition matrix from basis \(\,\mathcal{B}\ \) to basis \(\ \mathcal{E}.\)
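Since Equations (1) give \(\,\boldsymbol{I}_n=\boldsymbol{A}\boldsymbol{B},\ \) the transition matrix may be computed simply as \(\,\boldsymbol{B}=\boldsymbol{A}^{-1}.\ \) A minimal Sage sketch, with an assumed example matrix:

```python
# Sage: transition matrix from basis B (the columns of A) to the standard basis E.
# By Equations (1), I_n = A*B, so the transition matrix is B = A^(-1).
A = matrix(QQ, [[1, 1, 0],
                [0, 1, 1],
                [0, 0, 1]])          # assumed example: columns form a basis of QQ^3
assert A.det() != 0                  # Theorem 5: nonzero det <=> independent columns
B = A.inverse()                      # transition matrix from basis B to basis E
print(B)
print(A * B == identity_matrix(QQ, 3))   # True, in accordance with Equations (1)
```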
Example 7. \(\,\) It is to be confirmed that the vectors \(\,\boldsymbol{f}_1,\,\boldsymbol{f}_2,\,\ldots,\,\boldsymbol{f}_n\,\) form a basis in the vector space \(\ K^n.\)
Solution.
In an \(\,n\)-dimensional vector space every set of \(\,n\,\) linearly independent vectors is a basis. \(\\\) Since \(\,\text{dim}\,K^n=n,\ \) it is enough to verify the linear independence of vectors \(\,\boldsymbol{f}_1,\,\boldsymbol{f}_2,\,\ldots,\,\boldsymbol{f}_n.\)
Using Theorem 5., we check whether the determinant of the matrix composed of these \(\,n\,\) column vectors is different from zero. The matrix being upper triangular, the calculation is trivial and yields a nonzero determinant, which gives the positive answer.
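For instance, for vectors of the form \(\,\boldsymbol{f}_j = \boldsymbol{e}_1+\boldsymbol{e}_2+\dots+\boldsymbol{e}_j\ \) (an assumed illustration, not necessarily the vectors of the original example), the check takes a few lines in Sage:

```python
# Sage: verify linear independence via the determinant (Theorem 5).
# Assumed illustration: f_j = e_1 + e_2 + ... + e_j for j = 1, ..., n.
n = 4
F = matrix(QQ, n, n, lambda i, j: 1 if i <= j else 0)  # j-th column is f_j
print(F)
print(F.det())        # upper triangular: det = product of the diagonal = 1
print(F.det() != 0)   # True, so f_1, ..., f_n form a basis of QQ^4
```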
Calculation of the Inverse of a Matrix
Theorem 6. \(\,\) Generalized Laplace Expansion.
The following relations hold true for a matrix \(\ \boldsymbol{A}=[\,a_{ij}\,]_{n\times n}\in M_n(K):\)
\[\sum_{k\,=\,1}^{n}\ a_{ik}\,A_{jk}\ =\ \left\{\begin{array}{cl} \det\boldsymbol{A} & \text{for}\ \ i=j \\[2pt] 0 & \text{for}\ \ i\ne j \end{array}\right.\,, \qquad \sum_{k\,=\,1}^{n}\ a_{ki}\,A_{kj}\ =\ \left\{\begin{array}{cl} \det\boldsymbol{A} & \text{for}\ \ i=j \\[2pt] 0 & \text{for}\ \ i\ne j\,. \end{array}\right.\]
This may be rewritten more succinctly as
\[\sum_{k\,=\,1}^{n}\ a_{ik}\,A_{jk}\ =\ \delta_{ij}\,\det\boldsymbol{A}\,, \qquad \sum_{k\,=\,1}^{n}\ a_{ki}\,A_{kj}\ =\ \delta_{ij}\,\det\boldsymbol{A}\,, \qquad i,j=1,2,\ldots,n.\tag{2}\]
Interpretation (row version):
\(\ i=j:\ \) The consecutive elements of a selected row of the matrix are multiplied by their cofactors; \(\,\) the sum of all such products is equal to the determinant of the matrix.
\(\ i\ne j:\ \) The consecutive elements of a selected row are multiplied by the cofactors of the corresponding elements in another row; \(\,\) the sum of all such products is equal to zero.
The column version may be interpreted in an analogous way.
Proof. \(\,\) For \(\,i=j\ \) Equation (2) becomes the Laplace expansion with respect to the \(\ i\)-th row. So, it is enough to consider the case \(\ i\ne j\ \) only.
Starting from the matrix \(\ \boldsymbol{A}=[\,a_{ij}\,]_{n\times n}\,,\ \) we create an auxiliary matrix \(\ \boldsymbol{B}=[\,b_{ij}\,]_{n\times n}\,.\ \) \(\boldsymbol{B}\ \) differs from \(\ \boldsymbol{A}\ \) only in the \(\,j\)-th row, which is a repetition of the \(\,i\)-th one:
\[b_{lk}\ =\ \left\{\begin{array}{cl} a_{lk} & \text{for}\ \ l\ne j \\[2pt] a_{ik} & \text{for}\ \ l=j \end{array}\right.\,, \qquad l,k=1,2,\ldots,n.\]
The elements \(\,b_{jk}\,\) and cofactors \(\,B_{jk}\,\) of matrix \(\,\boldsymbol{B}\,\) fulfill the relations
\[b_{jk}\,=\,a_{ik}\,,\qquad B_{jk}\,=\,A_{jk}\,,\qquad k=1,2,\ldots,n,\tag{3}\]
the latter because the \(\,j\)-th row, the only row in which \(\,\boldsymbol{B}\,\) and \(\,\boldsymbol{A}\,\) may differ, is deleted when the cofactors \(\,B_{jk}\,\) and \(\,A_{jk}\,\) are computed.
Because of two identical rows, the determinant of matrix \(\,\boldsymbol{B}\,\) equals zero. Taking into account equalities (3) and the expansion of \(\,\det\boldsymbol{B}\ \) with respect to the \(\,j\)-th row, we obtain
\[0\ =\ \det\boldsymbol{B}\ =\ \sum_{k\,=\,1}^{n}\ b_{jk}\,B_{jk}\ =\ \sum_{k\,=\,1}^{n}\ a_{ik}\,A_{jk}\,.\quad\bullet\]
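A quick numeric sanity check of Equation (2) in Sage; the matrix below is an arbitrary assumed example, and the helper cofactor() is ours, not a built-in:

```python
# Sage: check the generalized Laplace expansion (2) on a sample matrix.
A = matrix(QQ, [[2, 2, 3], [1, -1, 0], [-1, 2, 1]])
n = A.nrows()

def cofactor(A, i, j):
    # A_ij = (-1)^(i+j) times the minor with row i and column j deleted
    rows = [r for r in range(n) if r != i]
    cols = [c for c in range(n) if c != j]
    return (-1)**(i + j) * A.matrix_from_rows_and_columns(rows, cols).det()

for i in range(n):
    for j in range(n):
        s = sum(A[i, k] * cofactor(A, j, k) for k in range(n))
        assert s == (A.det() if i == j else 0)   # delta_ij * det(A)
print("Equation (2) verified for all i, j")
```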
Definition.
Let \(\,\boldsymbol{A}\in M_n(K)\,.\ \,\) If \(\ \det{\boldsymbol{A}}=0,\ \,\) then \(\ \boldsymbol{A}\ \,\) is called a singular matrix. Otherwise, when \(\ \det{\boldsymbol{A}}\ne 0,\ \) \(\ \boldsymbol{A}\ \) is a non-singular matrix.
Theorem 7.
A matrix \(\ \boldsymbol{A}\in M_n(K)\ \,\) is invertible \(\,\) if, and only if, \(\,\) it is non-singular.
Proof.
\(\Rightarrow\ :\ \) We assume that there exists the inverse \(\,\boldsymbol{A}^{-1}.\ \,\) Then
\[\det\boldsymbol{A}\,\cdot\,\det\boldsymbol{A}^{-1}\ =\ \det\,(\boldsymbol{A}\boldsymbol{A}^{-1})\ =\ \det\boldsymbol{I}_n\ =\ 1\,.\]
Hence \(\ \det\boldsymbol{A}\ne 0:\ \) if \(\ \det\boldsymbol{A}\ \) were \(\,0,\ \) then \(\ \det\boldsymbol{A}\,\cdot\,\det\boldsymbol{A}^{-1}\,=\,0\ne 1.\)
Corollary.
If a matrix \(\,\boldsymbol{A}\in M_n(K)\ \) is invertible, \(\,\) then \(\ \,\det\boldsymbol{A}^{-1}\,=\ (\det\boldsymbol{A})^{-1}\,.\)
\(\Leftarrow\ :\ \) We assume that the matrix \(\ \boldsymbol{A}=[\,a_{ij}\,]_{n\times n}\ \) is non-singular: \(\ \det{\boldsymbol{A}}\ne 0.\ \) Then the matrix
\[\boldsymbol{B}\ =\ \frac{1}{\det\boldsymbol{A}}\ \left[\,A_{ij}\,\right]_{n\times n}^{\,T}\,,\tag{4}\]
where \(\ A_{ij}\ \) is the cofactor of the element \(\ a_{ij}\,,\ \,\) is the inverse of matrix \(\,\boldsymbol{A}\,.\)
Indeed, elements \(\ b_{ij}\ \) of matrix \(\ \boldsymbol{B}\ \) are given by
\[b_{ij}\ =\ \frac{1}{\det\boldsymbol{A}}\ A_{ji}\,,\qquad i,j=1,2,\ldots,n.\]
Let \(\ \boldsymbol{A}\boldsymbol{B}=\boldsymbol{C}=[c_{ij}]_{n\times n}\,,\ \) \(\ \boldsymbol{B}\boldsymbol{A}=\boldsymbol{C'}=[c_{ij}']_{n\times n}\,.\ \) Using (2), we get
\[c_{ij}\ =\ \sum_{k\,=\,1}^{n}\,a_{ik}\,b_{kj}\ =\ \frac{1}{\det\boldsymbol{A}}\ \sum_{k\,=\,1}^{n}\,a_{ik}\,A_{jk}\ =\ \frac{\delta_{ij}\,\det\boldsymbol{A}}{\det\boldsymbol{A}}\ =\ \delta_{ij}\,, \qquad c_{ij}'\ =\ \sum_{k\,=\,1}^{n}\,b_{ik}\,a_{kj}\ =\ \frac{1}{\det\boldsymbol{A}}\ \sum_{k\,=\,1}^{n}\,a_{kj}\,A_{ki}\ =\ \delta_{ij}\,,\]
where \(\ i,j=1,2,\ldots,n.\ \,\) A matrix whose elements are Kronecker deltas \(\ \delta_{ij}\ \) is the identity matrix. \(\,\) Thus \(\ \boldsymbol{A}\boldsymbol{B}=\boldsymbol{B}\boldsymbol{A}= \boldsymbol{I}_n\,,\ \) that is \(\ \boldsymbol{B}=\boldsymbol{A}^{-1}\,. \quad\bullet\)
Definition.
For \(\,\boldsymbol{A}\in M_n(K),\ \) the transpose of the cofactor matrix is the adjugate matrix \(\,\boldsymbol{A}^D:\)
\[\boldsymbol{A}^D\ =\ \left[\,A_{ij}\,\right]_{n\times n}^{\,T}\ =\ \left[\,A_{ji}\,\right]_{n\times n}\,.\]
Including the adjugate matrix as an intermediate step, the procedure of calculating the inverse of a matrix \(\,\boldsymbol{A}=[a_{ij}]_{n\times n}\in M_n(K)\ \) may be divided into four stages (see the Sage sketch after the list):
1. Calculate \(\ \det{\boldsymbol{A}}\ \,\) and check whether \(\ \det{\boldsymbol{A}}\ne 0\,.\)
2. Determine the cofactor matrix \(\,\boldsymbol{C}=[\,A_{ij}\,]_{n\times n}\ \) by replacing \(\,a_{ij}\rightarrow A_{ij}\ \) in \(\,\boldsymbol{A}.\)
3. Transpose the cofactor matrix to obtain the adjugate matrix: \(\,\boldsymbol{A}^D=\boldsymbol{C}^{\,T}.\)
4. Divide the adjugate matrix by the determinant of \(\,\boldsymbol{A}:\ \) \(\ \boldsymbol{A}^{-1}\ =\ \frac{1}{\det{\boldsymbol{A}}}\ \boldsymbol{A}^D.\)
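The four stages translate directly into Sage. A minimal sketch, deliberately using only basic matrix operations instead of the built-in inverse:

```python
# Sage: invert a matrix via the adjugate, following the four stages above.
def inverse_via_adjugate(A):
    n = A.nrows()
    d = A.det()                          # stage 1: the determinant
    assert d != 0, "matrix is singular"
    def cof(i, j):                       # cofactor A_ij
        rows = [r for r in range(n) if r != i]
        cols = [c for c in range(n) if c != j]
        return (-1)**(i + j) * A.matrix_from_rows_and_columns(rows, cols).det()
    C = matrix(A.base_ring(), n, n, lambda i, j: cof(i, j))  # stage 2: cofactor matrix
    AD = C.transpose()                   # stage 3: adjugate matrix
    return AD / d                        # stage 4: divide by the determinant

A = matrix(QQ, [[2, 2, 3], [1, -1, 0], [-1, 2, 1]])   # the matrix of Example 8
print(inverse_via_adjugate(A))
print(inverse_via_adjugate(A) == A.inverse())          # True
```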
Example 8. \(\,\) We shall calculate the inverse of the matrix \(\ \ \boldsymbol{A}\ =\ \left[\begin{array}{rrr} 2 & 2 & 3 \\ 1 & -1 & 0 \\ -1 & 2 & 1 \end{array}\right]\ \in M_3(Q)\,.\)
\(\ \det{\boldsymbol{A}}\ =\ \left|\begin{array}{rrr} 2 & 2 & 3 \\ 1 & -1 & 0 \\ -1 & 2 & 1 \end{array}\right|\ =\ \left|\begin{array}{rrr} 2 & 4 & 3 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{array}\right|\ =\ -\ \left|\begin{array}{cc} 4 & 3 \\ 1 & 1 \end{array}\right|\ =\ -1\,.\)
The cofactor matrix of \(\,\boldsymbol{A}\ \) is
\[\boldsymbol{C}\ =\ \left[\begin{array}{rrr} -1 & -1 & 1 \\ 4 & 5 & -6 \\ 3 & 3 & -4 \end{array}\right]\,,\]
so the adjugate matrix and the inverse read
\[\boldsymbol{A}^D\ =\ \boldsymbol{C}^{\,T}\ =\ \left[\begin{array}{rrr} -1 & 4 & 3 \\ -1 & 5 & 3 \\ 1 & -6 & -4 \end{array}\right]\,, \qquad \boldsymbol{A}^{-1}\ =\ \frac{1}{\det{\boldsymbol{A}}}\ \boldsymbol{A}^D\ =\ \left[\begin{array}{rrr} 1 & -4 & -3 \\ 1 & -5 & -3 \\ -1 & 6 & 4 \end{array}\right]\,.\]
The method \(\,\)inverse()\(\,\) of Sage returns the inverse of a given non-singular square matrix. It may be applied to numeric as well as symbolic matrices.
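For the matrix of Example 8 this reads:

```python
# Sage: the built-in inverse() applied to the matrix of Example 8
A = matrix(QQ, [[2, 2, 3], [1, -1, 0], [-1, 2, 1]])
print(A.inverse())
# [ 1 -4 -3]
# [ 1 -5 -3]
# [-1  6  4]
```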
Experiment with Sage:
Given the matrix size \(\,n\), the program displays, in symbolic form, a square matrix \(\,\boldsymbol{A}=[a_{ij}]_{n\times n}\ \) and its inverse. According to the general formulas, the denominators of the elements of the inverse matrix contain the determinant of \(\,\boldsymbol{A},\) whereas the numerators are the appropriate cofactors.
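A minimal sketch of such a program; the variable naming and display details are our assumptions:

```python
# Sage: a symbolic matrix A = [a_ij] of size n and its inverse
n = 2   # matrix size; try n = 2, 3, ...
A = matrix(SR, n, n,
           lambda i, j: var('a_%d%d' % (i + 1, j + 1)))  # symbolic entries a_ij
show(A)
show(A.inverse().simplify_full())  # denominators: det(A); numerators: cofactors
```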
Cramer’s Rule to Solve Systems of Linear Equations
We shall consider a system of \(\,n\,\) linear equations in \(\,n\,\) unknowns over a field \(\,K\):
\[\left\{\begin{array}{c} a_{11}\,x_1\,+\,a_{12}\,x_2\,+\,\ldots\,+\,a_{1n}\,x_n\ =\ b_1 \\ a_{21}\,x_1\,+\,a_{22}\,x_2\,+\,\ldots\,+\,a_{2n}\,x_n\ =\ b_2 \\ \ldots \\ a_{n1}\,x_1\,+\,a_{n2}\,x_2\,+\,\ldots\,+\,a_{nn}\,x_n\ =\ b_n \end{array}\right.\tag{5}\]
with a non-singular (square) coefficient matrix \(\ \boldsymbol{A}=[a_{ij}]_{n\times n}:\ \) \(\ \det{\boldsymbol{A}}\ne 0.\)
Rewriting the system (5) in the matrix form
\[\boldsymbol{A}\,\boldsymbol{x}\ =\ \boldsymbol{b}\,, \qquad \boldsymbol{x}\,=\,\left[\begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array}\right], \qquad \boldsymbol{b}\,=\,\left[\begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_n \end{array}\right],\]
and pre-multiplying both sides by \(\ \boldsymbol{A}^{-1},\ \) we get at once the solution:
\[\boldsymbol{x}\ =\ \boldsymbol{A}^{-1}\,\boldsymbol{b}\,.\]
To derive a practical formula for the particular unknowns, we shall make use of the expression (4) for the inverse matrix:
\[\boldsymbol{x}\ =\ \boldsymbol{A}^{-1}\,\boldsymbol{b}\ =\ \frac{1}{\det\boldsymbol{A}}\ \left[\,A_{ij}\,\right]_{n\times n}^{\,T}\ \boldsymbol{b}\,.\]
Equating the respective coordinates of the column vectors on both sides of the equation, we come up with the explicit formula for \(\,x_j,\ \ j=1,2,\ldots,n:\)
\[x_j\ =\ \frac{1}{\det\boldsymbol{A}}\ \sum_{i\,=\,1}^{n}\ b_i\,A_{ij}\,.\]
The sum \(\ \sum_{i=1}^{n} b_i\,A_{ij}\ \) is precisely the Laplace expansion, with respect to the \(\,j\)-th column, of the determinant of the matrix obtained from \(\,\boldsymbol{A}\ \) by replacing its \(\,j\)-th column with the column \(\,\boldsymbol{b}.\)
Theorem 8. \(\,\) Cramer’s Rule.
The linear system (5) has the unique solution given by
\[x_j\ =\ \frac{D_j}{D}\,,\qquad j=1,2,\ldots,n,\]
where \(\,D\,\) is the determinant of the coefficient matrix \(\,\boldsymbol{A},\ \) and \(\,D_j\,\) is the determinant of the matrix obtained from \(\,\boldsymbol{A}\ \) by replacing the \(\,j\)-th column with the column of constants \(\,\boldsymbol{b}.\ \) Using the column notation of matrices, this may be written as
\[D\,=\,\det\,[\,\boldsymbol{A}_1\,|\,\dots\,|\,\boldsymbol{A}_j\,|\,\dots\,|\,\boldsymbol{A}_n\,]\,, \qquad D_j\,=\,\det\,[\,\boldsymbol{A}_1\,|\,\dots\,|\,\boldsymbol{b}\,|\,\dots\,|\,\boldsymbol{A}_n\,]\,,\]
where in \(\,D_j\,\) the column \(\,\boldsymbol{b}\,\) occupies the \(\,j\)-th position.
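A minimal Sage sketch of the rule; the function name cramer_solve is our assumption:

```python
# Sage: solve A*x = b by Cramer's rule, x_j = D_j / D
def cramer_solve(A, b):
    D = A.det()
    assert D != 0, "Cramer's rule requires a non-singular coefficient matrix"
    cols = A.columns()
    x = []
    for j in range(A.ncols()):
        cj = list(cols)
        cj[j] = b                               # replace the j-th column by b
        x.append(column_matrix(cj).det() / D)   # x_j = D_j / D
    return vector(x)
```

Applied to the system of Example 9 below, cramer_solve(A, b) reproduces the solution obtained there.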
Example 9. \(\,\) Consider the system of 3 equations in 3 unknowns over the rational field \(\,Q:\)
\[\left\{\begin{array}{rcrcrcr} 2\,x_1 & - & x_2 & - & x_3 & = & 4 \\ 3\,x_1 & + & 4\,x_2 & - & 2\,x_3 & = & 11 \\ 3\,x_1 & - & 2\,x_2 & + & 4\,x_3 & = & 11 \end{array}\right.\]
When in a given system the number of equations equals the number of unknowns (the coefficient matrix \(\,\boldsymbol{A}\,\) is square), we begin with the calculation of \(\ D=\det\boldsymbol{A}.\ \) In this case
\[D\ =\ \left|\begin{array}{rrr} 2 & -1 & -1 \\ 3 & 4 & -2 \\ 3 & -2 & 4 \end{array}\right|\ =\ 2\,(16-4)\,+\,1\,(12+6)\,-\,1\,(-6-12)\ =\ 24+18+18\ =\ 60\,.\]
Since \(\,D\ne 0,\ \) we calculate the determinants \(\,D_1,\,D_2\,\) and \(\,D_3\,\) appearing in Cramer’s rule:
\(D_1\ =\ \left|\begin{array}{rrr} 4 & -1 & -1 \\ 11 & 4 & -2 \\ 11 & -2 & 4 \end{array}\right|\ =\ \left|\begin{array}{rrr} 0 & 0 & -1 \\ 3 & 6 & -2 \\ 27 & -6 & 4 \end{array}\right|\ =\ -\ \left|\begin{array}{rr} 3 & 6 \\ 27 & -6 \end{array}\right|\ =\ 18\ \left|\begin{array}{rr} 1 & -1 \\ 9 & 1 \end{array}\right|\ =\ 180\,,\)
\(D_2\ =\ \left|\begin{array}{rrr} 2 & 4 & -1 \\ 3 & 11 & -2 \\ 3 & 11 & 4 \end{array}\right|\ =\ \left|\begin{array}{rrr} 0 & 0 & -1 \\ -1 & 3 & -2 \\ 11 & 27 & 4 \end{array}\right|\ =\ -\ \left|\begin{array}{rr} -1 & 3 \\ 11 & 27 \end{array}\right|\ =\ 3\ \left|\begin{array}{rr} 1 & 1 \\ -11 & 9 \end{array}\right|\ =\ 60\,,\)
\(D_3\ =\ \left|\begin{array}{rrr} 2 & -1 & 4 \\ 3 & 4 & 11 \\ 3 & -2 & 11 \end{array}\right|\ =\ \left|\begin{array}{rrr} 0 & -1 & 0 \\ 11 & 4 & 27 \\ -1 & -2 & 3 \end{array}\right|\ =\ \left|\begin{array}{rr} 11 & 27 \\ -1 & 3 \end{array}\right|\ =\ 3\ \left|\begin{array}{rr} 11 & 9 \\ -1 & 1 \end{array}\right|\ =\ 60\,.\)
Finally, the system has the unique solution:
\[x_1\,=\,\frac{D_1}{D}\,=\,\frac{180}{60}\,=\,3\,, \qquad x_2\,=\,\frac{D_2}{D}\,=\,\frac{60}{60}\,=\,1\,, \qquad x_3\,=\,\frac{D_3}{D}\,=\,\frac{60}{60}\,=\,1\,.\]
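The result is easy to verify in Sage, here with the built-in solver rather than the determinants:

```python
# Sage: verify the solution of Example 9
A = matrix(QQ, [[2, -1, -1], [3, 4, -2], [3, -2, 4]])
b = vector(QQ, [4, 11, 11])
print(A.solve_right(b))   # (3, 1, 1)
```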
In Sage, the formulas of Cramer’s rule may also be obtained in symbolic form for any size \(\,n=2,3,\ldots\ \) of matrix \(\,\boldsymbol{A}.\ \) Namely, the solution of the system is given by the last column of the augmented matrix \(\,\boldsymbol{B}=[\,\boldsymbol{A}\,|\,\boldsymbol{b}\,]\ \) transformed to the reduced row echelon form.
Experiment with Sage:
Given the size \(\,n\,\) of the coefficient matrix \(\,\boldsymbol{A},\ \) the following program displays the augmented matrix \(\,\boldsymbol{B}\ \) in its original and row-reduced echelon form. Elements of the last column of the latter matrix, which provide the solution, are additionally displayed enlarged, for better readability.
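A minimal sketch of such a program; the display details are our assumptions, and the row reduction is generically valid (it presupposes \(\,\det\boldsymbol{A}\ne 0\)):

```python
# Sage: symbolic Cramer formulas from the rref of the augmented matrix [A | b]
n = 2   # size of the coefficient matrix; try n = 2, 3, ...
A = matrix(SR, n, n, lambda i, j: var('a_%d%d' % (i + 1, j + 1)))
b = vector(SR, [var('b_%d' % (i + 1)) for i in range(n)])
B = A.augment(b)            # the augmented matrix [A | b]
show(B)
R = B.rref()                # reduced row echelon form
show(R)
for xj in R.column(n):      # the last column holds the solution
    show(xj.simplify_full())   # x_j = D_j / D in symbolic form
```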