Applications of Determinants

Examination of the Linear Dependence of Vectors

Theorem 5. \(\\\)

The determinant of a matrix vanishes if, and only if, its columns are linearly dependent.

Thus, if \(\ \boldsymbol{A}\,=\, [\,\boldsymbol{A}_1\,|\;\boldsymbol{A}_2\,|\,\dots\,|\, \boldsymbol{A}_n\,]\,\in\,M_n(K),\ \,\) then

\[\det{\boldsymbol{A}}\,=\,0\qquad\Leftrightarrow\qquad \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\ \ \text{are linearly dependent}.\]

Proof. \(\,\) Let \(\ \boldsymbol{A}\,=\, [\,\boldsymbol{A}_1\,|\;\boldsymbol{A}_2\,|\,\dots\,|\, \boldsymbol{A}_n\,]\,\in\,M_n(K).\)

\(\Leftarrow\ :\ \ \) We assume that the columns \(\ \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\ \) are linearly dependent.

Then one of the columns is a linear combination of the remaining ones. Assume, for example, that

\[\boldsymbol{A}_n\ =\ \lambda_1\,\boldsymbol{A}_1\,+\;\lambda_2\,\boldsymbol{A}_2\,+\;\ldots\,+ \lambda_{n-1}\,\boldsymbol{A}_{n-1}\,.\]

On the basis of Postulates \(\,\) 1. \(\,\) and \(\,\) 2. \(\,\) of the axiomatic definition, we obtain

\[ \begin{align}\begin{aligned}\det{\boldsymbol{A}}\ \ =\ \ \det{\,[\, \boldsymbol{A}_1\,|\;\boldsymbol{A}_2\,|\,\dots\,|\,\boldsymbol{A}_{n-1}\,|\; \lambda_1\,\boldsymbol{A}_1\,+\,\lambda_2\,\boldsymbol{A}_2\,+\,\ldots\,+\, \lambda_{n-1}\,\boldsymbol{A}_{n-1}\,]}\ \ =\\=\ \ \lambda_1\,\det{\,[\,\boldsymbol{A}_1\,|\;\boldsymbol{A}_2\,|\,\dots\,|\, \boldsymbol{A}_{n-1}\,|\,\boldsymbol{A}_1\,]}\ \ +\\+\ \ \lambda_2\,\det{\,[\,\boldsymbol{A}_1\,|\;\boldsymbol{A}_2\,|\,\dots\,|\, \boldsymbol{A}_{n-1}\,|\,\boldsymbol{A}_2\,]}\ \ +\\\ldots\\+\ \ \lambda_{n-1}\,\det{\,[\,\boldsymbol{A}_1\,|\;\boldsymbol{A}_2\,|\,\dots\,|\, \boldsymbol{A}_{n-1}\,|\,\boldsymbol{A}_{n-1}\,]}\,.\end{aligned}\end{align} \]

Each of the \(\,n-1\,\) terms in the last sum is proportional to a determinant with two identical columns. Thus, recalling Property IIIa., we infer that \(\ \det\boldsymbol{A} = 0.\)

\(\,\)

\(\Rightarrow\ :\ \ \) We prove this implication by contraposition and assume that the columns \(\ \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\ \) of the matrix \(\ \boldsymbol{A}\,\) are linearly independent.

Since the number \(\,n\,\) of linearly independent columns equals the dimension of the vector space \(\,K^n\ \) to which they belong, these columns form a basis of that space. Thus every vector in \(\,K^n\ \) can be uniquely represented as a linear combination of \(\ \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\,.\ \)

In particular, the vectors \(\,\boldsymbol{e}_j\ \) of the standard basis of the space \(\,K^n\ \) may be written as

(1)\[\begin{split}\boldsymbol{e}_j\ \ =\ \ \sum_{s\,=\,1}^n\ b_{sj}\,\boldsymbol{A}_s\,, \qquad\text{where}\quad\boldsymbol{e}_j\ =\ \left[\begin{array}{c} 0 \\ \dots \\ 1 \\ \dots \\ 0 \end{array}\right] \leftarrow j\,,\qquad j=1,2,\ldots,n.\end{split}\]

Equations (1) assert that the \(\,j\)-th column of the identity matrix \(\,\boldsymbol{I}_n = [\,\boldsymbol{e}_1\,|\;\boldsymbol{e}_2\,|\,\dots\,|\,\boldsymbol{e}_n\,]\ \) is a linear combination of columns of matrix \(\,\boldsymbol{A},\ \) with coefficients taken from the \(\,j\)-th column of matrix \(\,\boldsymbol{B}=[b_{ij}]_{n\times n}.\ \) According to the Column Rule of Matrix Multiplication, this means that \(\ \boldsymbol{I}_n = \boldsymbol{A}\boldsymbol{B}.\ \)

Using the theorem on the determinant of a product of matrices, we may write

\[\det{\boldsymbol{A}}\,\cdot\,\det{\boldsymbol{B}}\ \ =\ \ \det{\,(\boldsymbol{A}\boldsymbol{B})}\ \ =\ \ \det{\boldsymbol{I}_n}\ =\ 1\,.\]

Hence \(\ \det{\boldsymbol{A}}\ne 0,\ \,\) because if \(\ \det\boldsymbol{A}\ \) were \(\,0,\ \) then \(\ \det\boldsymbol{A}\,\cdot\,\det\boldsymbol{B}\,\) would also be \(\,0.\)

So we have proved that

\[\text{columns}\ \ \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\ \ \text{are linearly independent} \quad\Rightarrow\quad \det{\boldsymbol{A}}\ne 0\,,\]

which is equivalent, by contraposition, to the statement

\[\det\boldsymbol{A}\ =\ 0 \quad\Rightarrow\quad \text{columns}\ \ \boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n\ \ \text{are linearly dependent}\,.\quad\bullet\]

Notes and Comments.

  • In view of Theorem 3. on the invariance of the determinant under transposition, \(\\\) Theorem 5. may be restated in a row version:

    The determinant of a matrix vanishes if, and only if, its rows are linearly dependent.

  • Equations (1) pertain to the situation where in the vector space \(\,K^n\,\) there are two bases: the basis \(\ \,\mathcal{B}\,=\, (\boldsymbol{A}_1,\boldsymbol{A}_2,\dots,\boldsymbol{A}_n),\ \,\) composed of linearly independent columns of matrix \(\,\boldsymbol{A},\ \) and the standard basis \(\ \mathcal{E}\,=\, (\boldsymbol{e}_1,\boldsymbol{e}_2,\dots,\boldsymbol{e}_n)\,.\)

    The matrix \(\,\boldsymbol{B}=[\,b_{ij}\,]_{n\times n}\,,\ \) whose \(\,j\)-th column is composed of the coordinates of the \(\,j\)-th vector of basis \(\ \mathcal{E}\ \) in the basis \(\ \mathcal{B}\ \ \ (j=1,2,\ldots,n),\ \,\) is called the transition matrix from basis \(\,\mathcal{B}\ \) to basis \(\ \mathcal{E}.\)

\(\;\)

Example 7. \(\,\) We shall confirm that the vectors

\[\begin{split}\boldsymbol{f}_1\ =\ \left[\begin{array}{c} 1 \\ 0 \\ 0 \\ \dots \\ 0 \end{array}\right]\,,\quad \boldsymbol{f}_2\ =\ \left[\begin{array}{c} 1 \\ 1 \\ 0 \\ \dots \\ 0 \end{array}\right]\,,\quad \boldsymbol{f}_3\ =\ \left[\begin{array}{c} 1 \\ 1 \\ 1 \\ \dots \\ 0 \end{array}\right]\,,\quad \dots,\quad \boldsymbol{f}_n\ =\ \left[\begin{array}{c} 1 \\ 1 \\ 1 \\ \dots \\ 1 \end{array}\right]\end{split}\]

form a basis in the vector space \(\ K^n.\)

Solution.

In an \(\,n\)-dimensional vector space every set of \(\,n\,\) linearly independent vectors is a basis. \(\\\) Since \(\,\text{dim}\,K^n=n,\ \) it is enough to verify the linear independence of vectors \(\,\boldsymbol{f}_1,\,\boldsymbol{f}_2,\,\ldots,\,\boldsymbol{f}_n.\)

Using Theorem 5., we check whether the determinant of the matrix composed of these \(\,n\,\) column vectors is different from zero. Since the matrix is upper triangular, the calculation is immediate and yields a nonzero result:

\[\begin{split}\det{\ [\, \boldsymbol{f}_1\,|\;\boldsymbol{f}_2\,|\,\ldots\,|\, \boldsymbol{f}_n\,]}\ \ =\ \ \left| \begin{array}{ccccc} 1 & 1 & 1 & \dots & 1 \\ 0 & 1 & 1 & \dots & 1 \\ 0 & 0 & 1 & \dots & 1 \\ \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & \dots & 1 \end{array} \right| \ \ =\ \ 1\ne 0\,.\end{split}\]
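The same verification can be done in Sage. Below is a minimal sketch (the size \(\,n=5\,\) is an arbitrary illustrative choice): it builds the upper triangular matrix \(\,[\, \boldsymbol{f}_1\,|\;\boldsymbol{f}_2\,|\,\ldots\,|\, \boldsymbol{f}_n\,]\,\) and confirms that its determinant is nonzero.

    # Sage sketch: linear independence of f_1, ..., f_n  (n = 5 chosen for illustration)
    n = 5
    # column j (0-based) has ones in rows 0, ..., j and zeros below
    F = matrix(QQ, n, n, lambda i, j: 1 if i <= j else 0)
    print(F)
    print(F.det())   # expected: 1, so by Theorem 5 the columns form a basis of QQ^n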

Calculation of the Inverse of a Matrix

Theorem 6. \(\,\) Generalized Laplace Expansion. \(\\\)

The following relations hold true for a matrix \(\ \boldsymbol{A}=[\,a_{ij}\,]_{n\times n}\in M_n(K):\)

\[ \begin{align}\begin{aligned}a_{i1}\,A_{j1}\ +\ a_{i2}\,A_{j2}\ +\ \dots\ +\ a_{in}\,A_{jn}\ \ =\ \ \delta_{ij}\,\cdot\,\det\boldsymbol{A}\,,\qquad i,j=1,2,\ldots,n;\\a_{1k}\,A_{1l}\ +\ a_{2k}\,A_{2l}\ +\ \dots\ +\ a_{nk}\,A_{nl}\ \ =\ \ \delta_{kl}\,\cdot\,\det\boldsymbol{A}\,,\qquad k,l=1,2,\ldots,n.\\\begin{split}\text{where}\quad\delta_{pq}\ \,=\ \, \left\{\ \begin{array}{cc} 1 & \text{for}\ \ p=q, \\ 0 & \text{for}\ \ p\ne q; \end{array}\right.\qquad p,q=1,2,\ldots,n.\qquad \text{(the Kronecker delta)}\end{split}\end{aligned}\end{align} \]

This may be rewritten more succinctly as

(2)\[ \begin{align}\begin{aligned}\sum_{k\,=\,1}^n\ a_{ik}\ A_{jk}\ \ =\ \ \delta_{ij}\,\cdot\,\det\boldsymbol{A}\,,\qquad i,j=1,2,\ldots,n;\qquad \text{(row version)}\\\sum_{i\,=\,1}^n\ a_{ik}\ A_{il}\ \ =\ \ \delta_{kl}\,\cdot\,\det\boldsymbol{A}\,,\qquad k,l=1,2,\ldots,n;\qquad \text{(column version)}\end{aligned}\end{align} \]

Interpretation (row version):

  • \(\ i=j:\ \) The consecutive elements of a selected row of the matrix are multiplied by their cofactors; \(\,\) the sum of all such products is equal to the determinant of the matrix.

  • \(\ i\ne j:\ \) The consecutive elements of a selected row are multiplied by the cofactors of the corresponding elements in another row; \(\,\) the sum of all such products is equal to zero.

The column version may be interpreted in an analogous way.

Proof. \(\,\) For \(\,i=j\ \) Equation (2) becomes the Laplace expansion with respect to the \(\ i\)-th row. So it is enough to consider only the case \(\ i\ne j.\)

Starting from the matrix \(\ \boldsymbol{A}=[\,a_{ij}\,]_{n\times n}\,,\ \) we create an auxiliary matrix \(\ \boldsymbol{B}=[\,b_{ij}\,]_{n\times n}\,.\) \(\\\) \(\ \boldsymbol{B}\ \) differs from \(\ \boldsymbol{A}\ \) only in the \(\,j\)-th row, which is a repetition of the \(\,i\)-th one:

\[\begin{split}\boldsymbol{A}\ \ =\ \ \left[\begin{array}{c} \boldsymbol{A}_1 \\ \dots \\ \boldsymbol{A}_i \\ \dots \\ \boldsymbol{A}_j \\ \dots \\ \boldsymbol{A}_n \end{array} \right] \begin{array}{c} \; \\ \; \\ \leftarrow i \\ \; \\ \leftarrow j \\ \; \\ \; \end{array} \qquad\qquad \boldsymbol{B}\ \ =\ \ \left[\begin{array}{c} \boldsymbol{A}_1 \\ \dots \\ \boldsymbol{A}_i \\ \dots \\ \boldsymbol{A}_i \\ \dots \\ \boldsymbol{A}_n \end{array} \right] \begin{array}{c} \; \\ \; \\ \leftarrow i \\ \; \\ \leftarrow j \\ \; \\ \; \end{array}\end{split}\]

The elements \(\,b_{jk}\,\) and cofactors \(\,B_{jk}\,\) of matrix \(\,\boldsymbol{B}\,\) fulfill the relations

(3)\[b_{jk}\,=\,b_{ik}\,=\,a_{ik}\,, \qquad B_{jk}\,=\,A_{jk}\,, \qquad k=1,2,\ldots,n.\]

Because of two identical rows, the determinant of matrix \(\,\boldsymbol{B}\,\) equals zero. Taking into account equalities (3) and expansion of \(\,\det\boldsymbol{B}\ \) with respect to the \(\,j\)-th row, we obtain

\[\sum_{k\,=\,1}^n\ a_{ik}\,A_{jk}\ \ =\ \ \sum_{k\,=\,1}^n\ b_{jk}\,B_{jk}\ \ =\ \ \det\boldsymbol{B}\ \ =\ \ 0\,. \quad\bullet\]
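The identities (2) are easy to test numerically. The following minimal Sage sketch (a random rational matrix, chosen purely for illustration) checks the row version by computing each cofactor from the corresponding minor:

    # Sage sketch: numerical check of the row version of the identities (2)
    n = 4
    A = random_matrix(QQ, n, n)

    def cofactor(A, i, j):
        # cofactor A_{ij} (zero-based): signed determinant of the minor
        # obtained by deleting row i and column j
        rows = [r for r in range(A.nrows()) if r != i]
        cols = [c for c in range(A.ncols()) if c != j]
        return (-1)^(i + j) * A.matrix_from_rows_and_columns(rows, cols).det()

    for i in range(n):
        for j in range(n):
            s = sum(A[i, k] * cofactor(A, j, k) for k in range(n))
            assert s == (A.det() if i == j else 0)
    print("row version of (2) verified")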

Definition.

Let \(\,\boldsymbol{A}\in M_n(K)\,.\ \,\) If \(\ \det{\boldsymbol{A}}=0,\ \,\) then \(\ \boldsymbol{A}\ \,\) is called \(\,\) a \(\,\) singular matrix. \(\\\) Otherwise, \(\,\) when \(\ \det{\boldsymbol{A}}\ne 0,\ \) \(\ \boldsymbol{A}\ \) is \(\,\) a \(\,\) non-singular matrix.

Theorem 7.

A matrix \(\ \boldsymbol{A}\in M_n(K)\ \,\) is invertible \(\,\) if, and only if, \(\,\) it is non-singular.

Proof.

\(\Rightarrow\ :\ \) We assume that the inverse \(\,\boldsymbol{A}^{-1}\,\) exists. \(\,\) Then

\[\det\boldsymbol{A}\,\cdot\,\det\boldsymbol{A}^{-1}\ \,=\ \, \det\,(\boldsymbol{A}\boldsymbol{A}^{-1})\ \,=\ \, \det\boldsymbol{I}_n\ \,=\ \,1\,.\]

Hence \(\ \det\boldsymbol{A}\ne 0,\ \,\) because if \(\ \det\boldsymbol{A}\ \) were \(\,0,\ \,\) then \(\ \det\boldsymbol{A}\,\cdot\,\det\boldsymbol{A}^{-1}\,\) would also be \(\,0.\)

Corollary.

If a matrix \(\,\boldsymbol{A}\in M_n(K)\ \) is invertible, \(\,\) then \(\ \,\det\boldsymbol{A}^{-1}\,=\ (\det\boldsymbol{A})^{-1}\,.\)

\(\Leftarrow\ :\ \) We assume that the matrix \(\ \boldsymbol{A}=[\,a_{ij}\,]_{n\times n}\ \) is non-singular: \(\ \det{\boldsymbol{A}}\ne 0.\ \) Then the matrix

(4)\[\begin{split}\boldsymbol{B}\ \,:\,=\ \, \frac{1}{\det\boldsymbol{A}}\ \left[\begin{array}{cccc} A_{11} & A_{12} & \dots & A_{1n} \\ A_{21} & A_{22} & \dots & A_{2n} \\ \dots & \dots & \dots & \dots \\ A_{n1} & A_{n2} & \dots & A_{nn} \end{array} \right]^{\,T}=\ \ \, \frac{1}{\det\boldsymbol{A}}\ \left[\begin{array}{cccc} A_{11} & A_{21} & \dots & A_{n1} \\ A_{12} & A_{22} & \dots & A_{n2} \\ \dots & \dots & \dots & \dots \\ A_{1n} & A_{2n} & \dots & A_{nn} \end{array} \right],\end{split}\]

where \(\ A_{ij}\ \) is the cofactor of the element \(\ a_{ij}\,,\ \,\) is the inverse of matrix \(\,\boldsymbol{A}\,.\)

Indeed, elements \(\ b_{ij}\ \) of matrix \(\ \boldsymbol{B}\ \) are given by

\[b_{ij}\ \ =\ \ \frac{1}{\det{\boldsymbol{A}}}\ \ A_{ji}\,,\qquad i,j=1,2,\ldots,n.\]

Let \(\ \boldsymbol{A}\boldsymbol{B}=\boldsymbol{C}=[c_{ij}]_{n\times n}\,,\ \) \(\ \boldsymbol{B}\boldsymbol{A}=\boldsymbol{C'}=[c_{ij}']_{n\times n}\,.\ \) Using (2) we get

\[ \begin{align}\begin{aligned}c_{ij}\ \,=\ \ \sum_{s\,=\,1}^n\ a_{is}\,b_{sj} \ \,=\ \ \frac{1}{\det\boldsymbol{A}}\ \ \sum_{s\,=\,1}^n\ a_{is}\,A_{js} \ \,=\ \ \frac{1}{\det\boldsymbol{A}}\ \cdot\ \delta_{ij}\,\cdot\ \det\boldsymbol{A} \ \,=\ \,\delta_{ij}\,,\\c_{ij}'\ \,=\ \ \sum_{s\,=\,1}^n\ b_{is}\,a_{sj} \ \,=\ \ \frac{1}{\det\boldsymbol{A}}\ \ \sum_{s\,=\,1}^n\ a_{sj}\,A_{si} \ \,=\ \ \frac{1}{\det\boldsymbol{A}}\ \cdot\ \delta_{ji}\,\cdot\ \det\boldsymbol{A} \ \,=\ \,\delta_{ij}\,,\end{aligned}\end{align} \]

where \(\ i,j=1,2,\ldots,n.\ \,\) A matrix whose elements are Kronecker deltas \(\ \delta_{ij}\ \) is the identity matrix. \(\,\) Thus \(\ \boldsymbol{A}\boldsymbol{B}=\boldsymbol{B}\boldsymbol{A}= \boldsymbol{I}_n\,,\ \) that is \(\ \boldsymbol{B}=\boldsymbol{A}^{-1}\,. \quad\bullet\)

Definition.

For \(\,\boldsymbol{A}\in M_n(K),\ \) the transpose of the cofactor matrix is called the adjugate matrix \(\,\boldsymbol{A}^D:\)

\[\begin{split}\boldsymbol{A}^D\ \,:\,=\ \ \, \left[\begin{array}{cccc} A_{11} & A_{12} & \dots & A_{1n} \\ A_{21} & A_{22} & \dots & A_{2n} \\ \dots & \dots & \dots & \dots \\ A_{n1} & A_{n2} & \dots & A_{nn} \end{array} \right]^{\,T}\,=\ \ \left[\begin{array}{cccc} A_{11} & A_{21} & \dots & A_{n1} \\ A_{12} & A_{22} & \dots & A_{n2} \\ \dots & \dots & \dots & \dots \\ A_{1n} & A_{2n} & \dots & A_{nn} \end{array} \right]\,.\end{split}\]

Including the adjugate matrix as an intermediate step, the procedure of calculating the inverse of a matrix \(\,\boldsymbol{A}=[a_{ij}]_{n\times n}\in M_n(K)\ \) may be divided into four stages (a Sage sketch follows the list):

  1. \(\,\) Calculate \(\ \det{\boldsymbol{A}}\ \,\) and \(\,\) check whether \(\ \det{\boldsymbol{A}}\ne 0\,.\)

  2. \(\,\) Determine the cofactor matrix \(\,\boldsymbol{C}=[\,A_{ij}\,]_{n\times n}\ \) by replacing \(\,a_{ij}\rightarrow A_{ij}\ \) in \(\,\boldsymbol{A}.\)

  3. \(\,\) Transpose the cofactor matrix to obtain the adjugate matrix: \(\,\boldsymbol{A}^D=\boldsymbol{C}^{\,T}.\)

  4. \(\,\) Divide the adjugate matrix by the determinant of \(\,\boldsymbol{A}:\ \) \(\ \boldsymbol{A}^{-1}\ =\ \, \frac{1}{\det{\boldsymbol{A}}}\ \ \boldsymbol{A}^D.\) \(\\\)
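A minimal Sage sketch of these four stages is given below; the helper names cofactor and inverse_by_adjugate are introduced here only for illustration, and the result is compared with Sage's built-in inverse() on the matrix of Example 8 below.

    # Sage sketch: inverse via the adjugate matrix, following stages 1-4 above
    def cofactor(A, i, j):
        # cofactor A_{ij} (zero-based indices)
        rows = [r for r in range(A.nrows()) if r != i]
        cols = [c for c in range(A.ncols()) if c != j]
        return (-1)^(i + j) * A.matrix_from_rows_and_columns(rows, cols).det()

    def inverse_by_adjugate(A):
        d = A.det()                                   # stage 1: the determinant
        if d == 0:
            raise ValueError("the matrix is singular")
        n = A.nrows()
        C = matrix([[cofactor(A, i, j) for j in range(n)] for i in range(n)])   # stage 2
        AD = C.transpose()                            # stage 3: the adjugate matrix
        return AD / d                                 # stage 4: divide by det(A)

    A = matrix(QQ, [[2, 2, 3], [1, -1, 0], [-1, 2, 1]])
    print(inverse_by_adjugate(A) == A.inverse())      # expected: True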

Example 8. \(\,\) We shall calculate the inverse of the matrix \(\ \ \boldsymbol{A}\ =\ \left[\begin{array}{rrr} 2 & 2 & 3 \\ 1 & -1 & 0 \\ -1 & 2 & 1 \end{array}\right]\ \in M_3(Q)\,.\)

\(\ \det{\boldsymbol{A}}\ =\ \left|\begin{array}{rrr} 2 & 2 & 3 \\ 1 & -1 & 0 \\ -1 & 2 & 1 \end{array}\right|\ =\ \left|\begin{array}{rrr} 2 & 4 & 3 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{array}\right|\ =\ -\ \left|\begin{array}{cc} 4 & 3 \\ 1 & 1 \end{array}\right|\ =\ -1\,.\)

\[\begin{split}\begin{array}{lll} A_{11}=+\left|\begin{array}{rr} -1 & 0 \\ 2 & 1 \end{array}\right|\ =\ -1\,; & A_{12}=-\left|\begin{array}{rr} 1 & 0 \\ -1 & 1 \end{array}\right|\ =\ -1\,; & A_{13}=+\left|\begin{array}{rr} 1 & -1 \\ -1 & 2 \end{array}\right|\ =\ 1\,; \\ \\ A_{21}=-\left|\begin{array}{rr} 2 & 3 \\ 2 & 1 \end{array}\right|\ =\ 4\,; & A_{22}=+\left|\begin{array}{rr} 2 & 3 \\ -1 & 1 \end{array}\right|\ =\ 5\,; & A_{23}=-\left|\begin{array}{rr} 2 & 2 \\ -1 & 2 \end{array}\right|\ =\ -6\,; \\ \\ A_{31}=+\left|\begin{array}{rr} 2 & 3 \\ -1 & 0 \end{array}\right|\ =\ 3\,; & A_{32}=-\left|\begin{array}{rr} 2 & 3 \\ 1 & 0 \end{array}\right|\ =\ 3\,; & A_{33}=+\left|\begin{array}{rr} 2 & 2 \\ 1 & -1 \end{array}\right|\ =\ -4\,. \end{array}\end{split}\]
\[ \begin{align}\begin{aligned}\begin{split}\begin{array}{l} \boldsymbol{A}^D\ \ =\ \ \left[\begin{array}{rrr} -1 & -1 & 1 \\ 4 & 5 & -6 \\ 3 & 3 & -4 \end{array} \right]^{\,T}=\ \ \, \left[\begin{array}{rrr} -1 & 4 & 3 \\ -1 & 5 & 3 \\ 1 & -6 & -4 \end{array} \right]\,; \\ \\ \displaystyle \boldsymbol{A}^{-1}\ \ =\ \ \, \frac{1}{(-1)}\ \left[\begin{array}{rrr} -1 & 4 & 3 \\ -1 & 5 & 3 \\ 1 & -6 & -4 \end{array} \right]\ \ =\ \ \left[\begin{array}{rrr} 1 & -4 & -3 \\ 1 & -5 & -3 \\ -1 & 6 & 4 \end{array} \right]\,. \end{array}\end{split}\\\;\end{aligned}\end{align} \]

The Sage method \(\,\) inverse() \(\,\) returns the inverse of a given non-singular square matrix. It may be applied to both numeric and symbolic matrices.
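For instance, applied to the matrix of Example 8:

    # Sage: the built-in method applied to the matrix of Example 8
    A = matrix(QQ, [[2, 2, 3], [1, -1, 0], [-1, 2, 1]])
    print(A.inverse())
    # expected output:
    # [ 1 -4 -3]
    # [ 1 -5 -3]
    # [-1  6  4]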

Experiment with Sage:

Given the matrix size \(\,n\), the program displays, in symbolic form, a square matrix \(\,\boldsymbol{A}=[a_{ij}]_{n\times n}\ \) and its inverse. In accordance with the general formulas, the denominators of the entries of the inverse matrix contain the determinant of \(\,\boldsymbol{A},\) whereas the numerators are the appropriate cofactors.
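The interactive cell itself is not reproduced here; a minimal non-interactive sketch along these lines (with the size fixed at \(\,n=3\,\) for illustration) might read:

    # Sage sketch: a symbolic n x n matrix and its inverse (n = 3 fixed for illustration)
    n = 3
    A = matrix(SR, n, n, lambda i, j: var('a_%d%d' % (i + 1, j + 1)))
    show(A)
    show(A.inverse())   # each entry: a cofactor of A divided by det(A)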

Cramer’s Rule to Solve Systems of Linear Equations

We shall consider a system of \(\,n\,\) linear equations in \(\,n\,\) unknowns over a field \(\,K\):

(5)\[\begin{split}\begin{array}{c} a_{11}\,x_1\; + \ \,a_{12}\,x_2\; + \ \,\ldots\ + \ \;a_{1n}\,x_n \ \, = \ \ b_1 \\ a_{21}\,x_1\; + \ \,a_{22}\,x_2\; + \ \,\ldots\ + \ \;a_{2n}\,x_n \ \, = \ \ b_2 \\ \quad\,\ldots\qquad\quad\ldots\qquad\ \,\ldots\qquad\ \ \ldots\qquad\ \ \,\ldots \\ a_{n1}\,x_1\; + \ \,a_{n2}\,x_2\; + \ \,\ldots\ + \ \;a_{nn}\,x_n \ \, = \ \ b_n \end{array}\end{split}\]

with a non-singular (square) coefficient matrix \(\ \boldsymbol{A}=[a_{ij}]_{n\times n}:\ \) \(\ \det{\boldsymbol{A}}\ne 0.\)

Rewriting the system (5) in the matrix form

\[\boldsymbol{A}\,\boldsymbol{x}\ =\ \boldsymbol{b}\,,\]

and pre-multiplying the both sides by \(\ \boldsymbol{A}^{-1},\ \) we get at once the solution:

\[\boldsymbol{x}\ =\ \boldsymbol{A}^{-1}\,\boldsymbol{b}\,.\]

To derive a practical formula for particular unknowns, we shall make use of the expression (4) for the inverse matrix:

\begin{eqnarray*} \left[\begin{array}{c} x_1 \\ x_2 \\ \dots \\ x_n \end{array}\right] & = & \frac{1}{\det\boldsymbol{A}}\ \left[\begin{array}{cccc} A_{11} & A_{21} & \dots & A_{n1} \\ A_{12} & A_{22} & \dots & A_{n2} \\ \dots & \dots & \dots & \dots \\ A_{1n} & A_{2n} & \dots & A_{nn} \end{array} \right]\ \left[\begin{array}{c} b_1 \\ b_2 \\ \dots \\ b_n \end{array}\right] \\ \\ & = & \frac{1}{\det\boldsymbol{A}}\ \left[\begin{array}{c} A_{11}\,b_1\ +\ A_{21}\,b_2\ +\ \dots\ +\ A_{n1}\,b_n \\ A_{12}\,b_1\ +\ A_{22}\,b_2\ +\ \dots\ +\ A_{n2}\,b_n \\ \dots\qquad\ \ \dots\qquad\ \dots\qquad\dots \\ A_{1n}\,b_1\ +\ A_{2n}\,b_2\ +\ \dots\ +\ A_{nn}\,b_n \end{array} \right]\,. \end{eqnarray*}

Equating the respective coordinates of the column vectors on both sides of the equation, \(\\\) we come up with the explicit formula for \(\,x_j,\ \ j=1,2,\ldots,n:\)

\begin{eqnarray*} x_j & = & \frac{1}{\det\boldsymbol{A}}\ \ (b_1\,A_{1j}\ +\ b_2\,A_{2j}\ +\ \dots\ +\ b_n\,A_{nj}) \\ & = & \frac{1}{\det\boldsymbol{A}}\ \ \left|\begin{array}{ccccccc} a_{11} & \dots & a_{1,j-1} & b_1 & a_{1,j+1} & \dots & a_{1n} \\ a_{21} & \dots & a_{2,j-1} & b_2 & a_{2,j+1} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ a_{n1} & \dots & a_{n,j-1} & b_n & a_{n,j+1} & \dots & a_{nn} \end{array} \right|\,,\qquad j=1,2,\ldots,n. \end{eqnarray*}

Theorem 8. \(\,\) Cramer’s Rule.

The linear system (5) has the unique solution given by

\[x_j\ \ =\ \ \frac{D_j}{D}\,,\qquad j=1,2,\ldots,n,\]

where \(\,D\,\) is the determinant of the coefficient matrix \(\,\boldsymbol{A},\ \) and \(\,D_j\,\) is the determinant of the matrix obtained from \(\,\boldsymbol{A}\ \) by replacing the \(\,j\)-th column with the column of constants \(\,\boldsymbol{b}.\ \) Using the column notation of matrices, this may be written as

\[ \begin{align}\begin{aligned}D\ \,=\ \,\det\;[\;\boldsymbol{A}_1\,|\,\dots\,|\, \boldsymbol{A}_j\,|\,\dots\,|\,\boldsymbol{A}_n\,]\,,\\D_j\ =\ \,\det\;[\;\boldsymbol{A}_1\,|\,\dots\,|\ \boldsymbol{b}\,|\ \dots\,|\,\boldsymbol{A}_n\,]\,.\end{aligned}\end{align} \]
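A minimal Sage sketch of this rule is given below; the function name cramer_solve is introduced here only for illustration, and the code assumes \(\,\det\boldsymbol{A}\ne 0.\ \) Applied to the system of Example 9 below, it should return \(\,(3,\,1,\,1)\).

    # Sage sketch: Cramer's rule, assuming det(A) != 0
    def cramer_solve(A, b):
        n = A.ncols()
        D = A.det()
        cols = A.columns()
        xs = []
        for j in range(n):
            # D_j: determinant of A with its j-th column replaced by b
            Dj = matrix([cols[k] if k != j else b for k in range(n)]).transpose().det()
            xs.append(Dj / D)
        return vector(xs)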

Example 9. \(\,\) Consider the system of 3 equations in 3 unknowns over the rational field \(\,Q:\)

\begin{alignat*}{4} 2\,x_1 & {\,} - {\,} & x_2 & {\,} - {\,} & x_3 & {\;} = {} & 4 \\ 3\,x_1 & {\,} + {\,} & 4\,x_2 & {\,} - {\,} & 2\,x_3 & {\;} = {} & 11 \\ 3\,x_1 & {\,} - {\,} & 2\,x_2 & {\,} + {\,} & 4\,x_3 & {\;} = {} & 11 \end{alignat*}

When the number of equations in a given system equals the number of unknowns (i.e. the coefficient matrix \(\,\boldsymbol{A}\,\) is square), we begin by calculating \(\ D=\det\boldsymbol{A}.\ \) In this case

\[\begin{split}D\ =\ \left|\begin{array}{rrr} 2 & -1 & -1 \\ 3 & 4 & -2 \\ 3 & -2 & 4 \end{array}\right|\ =\ \left|\begin{array}{rrr} 0 & 0 & -1 \\ -1 & 6 & -2 \\ 11 & -6 & 4 \end{array}\right|\ =\ -\ \left|\begin{array}{rr} -1 & 6 \\ 11 & -6 \end{array}\right|\ =\ 6\ \left|\begin{array}{rr} 1 & 1 \\ -11 & -1 \end{array}\right|\ =\ 60\,.\end{split}\]

Since \(\,D\ne 0,\ \) we calculate the determinants \(\,D_1,\,D_2\,\) and \(\,D_3\,\) appearing in Cramer’s rule:

\(D_1\ =\ \left|\begin{array}{rrr} 4 & -1 & -1 \\ 11 & 4 & -2 \\ 11 & -2 & 4 \end{array}\right|\ =\ \left|\begin{array}{rrr} 0 & 0 & -1 \\ 3 & 6 & -2 \\ 27 & -6 & 4 \end{array}\right|\ =\ -\ \left|\begin{array}{rr} 3 & 6 \\ 27 & -6 \end{array}\right|\ =\ 18\ \left|\begin{array}{rr} 1 & -1 \\ 9 & 1 \end{array}\right|\ =\ 180\,,\)

\(D_2\ =\ \left|\begin{array}{rrr} 2 & 4 & -1 \\ 3 & 11 & -2 \\ 3 & 11 & 4 \end{array}\right|\ =\ \left|\begin{array}{rrr} 0 & 0 & -1 \\ -1 & 3 & -2 \\ 11 & 27 & 4 \end{array}\right|\ =\ -\ \left|\begin{array}{rr} -1 & 3 \\ 11 & 27 \end{array}\right|\ =\ 3\ \left|\begin{array}{rr} 1 & 1 \\ -11 & 9 \end{array}\right|\ =\ 60\,,\)

\(D_3\ =\ \left|\begin{array}{rrr} 2 & -1 & 4 \\ 3 & 4 & 11 \\ 3 & -2 & 11 \end{array}\right|\ =\ \left|\begin{array}{rrr} 0 & -1 & 0 \\ 11 & 4 & 27 \\ -1 & -2 & 3 \end{array}\right|\ =\ \left|\begin{array}{rr} 11 & 27 \\ -1 & 3 \end{array}\right|\ =\ 3\ \left|\begin{array}{rr} 11 & 9 \\ -1 & 1 \end{array}\right|\ =\ 60\,.\)

Finally, the system has the unique solution:

\[x_1\ =\ \textstyle{180\over 60}\ =\ 3\,,\quad x_2\ =\ \textstyle{60\over 60}\ =\ 1\,,\quad x_3\ =\ \textstyle{60\over 60}\ =\ 1\,.\]

In Sage, the formulas of Cramer’s rule may also be obtained in symbolic form for any size \(\,n=2,3,\ldots\ \) of the matrix \(\,\boldsymbol{A}.\ \) Namely, the solution of the system is given by the last column of the augmented matrix \(\,\boldsymbol{B}=[\,\boldsymbol{A}\,|\,\boldsymbol{b}\,]\ \) transformed to reduced row echelon form.

Experiment with Sage:

Given the size \(\,n\,\) of the coefficient matrix \(\,\boldsymbol{A},\ \) the following program displays the augmented matrix \(\,\boldsymbol{B}\ \) in its original and reduced row echelon form. The elements of the last column of the latter matrix, which provide the solution, are additionally enlarged for better readability.
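The interactive cell is not included here; a minimal sketch of such a program, applied to the system of Example 9 (that particular system is chosen only for illustration), might read:

    # Sage sketch: solving the system of Example 9 via the reduced row echelon form
    A = matrix(QQ, [[2, -1, -1], [3, 4, -2], [3, -2, 4]])
    b = vector(QQ, [4, 11, 11])
    B = A.augment(b)            # the augmented matrix [ A | b ]
    print(B)
    R = B.rref()                # reduced row echelon form
    print(R)
    print(R.columns()[-1])      # the last column: the solution (3, 1, 1)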