Eigenvalues and Eigenvectors
The set of eigenvalues of a linear operator is called the spectrum of this linear operator. We will show that eigenvectors of a linear operator associated with distinct eigenvalues are linearly independent.
Theorem 1.
Let \(\,F\in\text{End}(V)\,,\ \ V=V(K)\,,\ \ \dim V=n\geq 2.\ \,\) If the spectrum of the operator \(\,F\ \) contains \(\,k\leq n\ \) distinct eigenvalues \(\ \lambda_1,\,\lambda_2,\,\ldots,\,\lambda_k\,,\ \) then the associated eigenvectors \(\,v_1,\,v_2,\,\ldots,\,v_k\ \) are linearly independent.
Proof \(\,\) (induction on \(\,k\,\)).
\(\ k=2.\ \ \) Let \(\quad Fv_1=\lambda_1\,v_1\,,\quad Fv_2=\lambda_2\,v_2\,,\quad v_1,v_2\in V\!\smallsetminus\!\{\theta\}\,.\)
Assume that the vectors \(\ v_1,\,v_2\ \) are linearly dependent, that is, there exists a non-trivial linear combination of these vectors which is equal to the zero vector:
\[\alpha_1\,v_1\,+\;\alpha_2\,v_2\ =\ \theta\,,\qquad\alpha_1\neq 0\ \ \lor\ \ \alpha_2\neq 0\,.\]If \(\ \alpha_2 = 0\,,\ \) then \(\ \alpha_1\,v_1=\theta\ \,\) with \(\ \alpha_1\neq 0\,,\ \,\) which would imply \(\ v_1=\theta\,,\ \) contradicting the fact that an eigenvector is nonzero. Hence \(\ \alpha_2\neq 0\,,\ \) and thus we can write:
\[v_2\,=\;\beta\,v_1\,,\qquad\beta\ =\ -\ \frac{\alpha_1}{\alpha_2}\,,\]where \(\ \beta\neq 0\ \) because \(\ v_2\neq\theta\,.\ \) Computing in two ways the action of the operator \(\,F\ \) on the vector \(\,v_2\,,\ \) we obtain
\[\begin{array}{l} Fv_2\ =\ \lambda_2\,v_2\ =\ \lambda_2\,(\beta\,v_1)\ =\ (\beta\,\lambda_2)\ v_1\,, \\ Fv_2\ =\ F(\beta\,v_1)\ =\ \beta\ Fv_1\ =\ \beta\,(\lambda_1\,v_1)\ =\ (\beta\,\lambda_1)\ v_1\,. \end{array}\]\[\text{Hence}\qquad(\beta\,\lambda_2)\ v_1\ =\ (\beta\,\lambda_1)\ v_1\,,\qquad \text{and thus}\qquad\beta\,(\lambda_2-\lambda_1)\ v_1\ =\ \theta\,.\]Since \(\ \beta\neq 0\ \) and \(\ v_1\neq\theta\,,\ \) we must have \(\,\lambda_1=\lambda_2\,.\ \) Thus linear dependence of the eigenvectors \(\,v_1,\ v_2\ \) implies equality of the associated eigenvalues \(\,\lambda_1,\,\lambda_2\,.\) By contraposition, if the eigenvalues \(\,\lambda_1,\,\lambda_2\ \) are distinct, then the associated eigenvectors \(\ v_1,\ v_2\ \) are linearly independent, which was to be demonstrated.
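As a brief numeric aside before the inductive step, the following minimal Python sketch illustrates the \(\,k=2\,\) case; the \(2\times 2\) matrix below is an ad hoc example chosen for illustration, not anything taken from the text.

```python
# Minimal numeric sketch of the k = 2 case; the matrix is an ad hoc example.
import numpy as np

F = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # upper triangular: eigenvalues 2 and 3

eigenvalues, eigenvectors = np.linalg.eig(F)
v1, v2 = eigenvectors[:, 0], eigenvectors[:, 1]

# v1 and v2 are linearly independent iff det [v1 | v2] is nonzero
print(np.linalg.det(np.column_stack([v1, v2])))   # approx. 0.707, nonzero
```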
Assume that the theorem holds for some \(\ k<n\,.\ \) We will show that it also holds for \(\ k+1\,.\)
Assume that the linear operator \(\,F\ \) has \(\,k+1\ \) distinct eigenvalues \(\ \lambda_1,\,\ldots,\,\lambda_k,\,\lambda_{k+1}\,,\) with the associated eigenvectors \(\ v_1,\,\ldots,\,v_k,\,v_{k+1}\,.\) By the inductive assumption, the vectors \(\ v_1,\ldots,v_k\ \) are linearly independent. We will show that the whole set \(\ (v_1,\ldots,v_k,v_{k+1})\ \) is linearly independent.
Assume that the vectors \(\ v_1,\ldots,v_k,v_{k+1}\ \) are linearly dependent, that is, there exists a non-trivial linear combination of these vectors which is equal to the zero vector:
(1)\[\alpha_1\,v_1\,+\,\ldots\,+\,\alpha_k\,v_k\,+\,\alpha_{k+1}\,v_{k+1}\ =\ \theta\,,\]where not all \(\ \alpha_i\,,\ \ i=1,\ldots,k+1\,,\ \) are zero. If \(\,\alpha_{k+1}=0\,,\ \) then equation (1) implies
\[\alpha_1\,v_1\,+\,\ldots\,+\,\alpha_k\,v_k\ =\ \theta\]with not all \(\ \alpha_i\,,\ \ i=1,\ldots,k\,,\ \) equal to zero. This contradicts the assumption that the vectors \(\ v_1,\ldots,v_k\ \) are linearly independent.
Hence, \(\ \alpha_{k+1}\neq 0\,,\ \,\) and thus the equality (1) may be written as:
\[v_{k+1}\ =\ \beta_1\,v_1\,+\,\ldots\,+\,\beta_k\,v_k\,,\qquad \beta_i\ =\ -\ \frac{\alpha_i}{\alpha_{k+1}}\ ,\quad i=1,\ldots,k\,.\]Since \(\ v_{k+1}\neq\theta\,,\ \) not all the coefficients \(\ \beta_i\ \) are equal to zero: there exists (at least one) index \(\ i_0\in\{\,1,\ldots,k\,\}\ \) for which \(\ \beta_{i_0}\neq 0\,.\)
We compute in two ways the action of the operator \(\,F\ \) on the vector \(\,v_{k+1}:\)
\[\begin{array}{l} F\,v_{k+1}\ =\ \lambda_{k+1}\,v_{k+1}\ =\ \lambda_{k+1}\ \displaystyle\sum_{i\,=\,1}^k\ \beta_i\,v_i\ =\ \displaystyle\sum_{i\,=\,1}^k\ (\beta_i\,\lambda_{k+1})\ v_i\ , \\ F\,v_{k+1}\ =\ F\,\left(\:\displaystyle\sum_{i\,=\,1}^k\ \beta_i\,v_i\right)\ =\ \displaystyle\sum_{i\,=\,1}^k\ \beta_i\ Fv_i\ =\ \displaystyle\sum_{i\,=\,1}^k\ (\beta_i\,\lambda_i)\ v_i\ . \end{array}\]\[\text{Hence}\qquad \displaystyle\sum_{i\,=\,1}^k\ (\beta_i\,\lambda_{k+1})\ v_i\ =\ \displaystyle\sum_{i\,=\,1}^k\ (\beta_i\,\lambda_i)\ v_i\,,\qquad \text{so that}\qquad \displaystyle\sum_{i\,=\,1}^k\ \beta_i\ (\lambda_{k+1}-\lambda_i)\ v_i\ =\ \theta\,.\]Linear independence of the vectors \(\ v_1,\ldots,v_k\ \) implies vanishing of all the coefficients of the above linear combination:
\[\beta_i\ (\lambda_{k+1}-\lambda_i)\ =\ 0\qquad\text{for}\quad i=1,\ldots,k\,.\]In particular, for \(\,i=i_0\ \) we obtain \(\ \beta_{i_0}\,(\lambda_{k+1}-\lambda_{i_0})\ =\ 0\,,\ \) which, since \(\ \beta_{i_0}\neq 0\,,\ \) means that \(\ \lambda_{k+1}=\lambda_{i_0}\,.\)
Hence, the assumption that the vectors \(\ v_1,\ldots,v_k,\,v_{k+1}\ \) are linearly dependent leads to a contradiction with the assumption that all the eigenvalues \(\ \lambda_1,\,\ldots,\,\lambda_k,\,\lambda_{k+1}\ \) are distinct. This means that the above vectors are linearly independent, which finishes the proof.
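Theorem 1 is also easy to check numerically. The sketch below is a hedged illustration, not part of the original argument: the similarity transform, the eigenvalues \(1,\ldots,k\,,\) and the random seed are all arbitrary choices. It builds an operator with \(\,k\,\) distinct eigenvalues and verifies that the matrix whose columns are the computed eigenvectors has full rank \(\,k\,.\)

```python
# Numeric illustration of Theorem 1: k distinct eigenvalues give
# k linearly independent eigenvectors (full-rank eigenvector matrix).
import numpy as np

rng = np.random.default_rng(0)
k = 5
P = rng.standard_normal((k, k))    # random change of basis (invertible with prob. 1)
# F = P diag(1, ..., k) P^{-1} has the distinct eigenvalues 1, 2, ..., k
F = P @ np.diag(np.arange(1.0, k + 1)) @ np.linalg.inv(P)

_, V = np.linalg.eig(F)            # columns of V are eigenvectors of F
print(np.linalg.matrix_rank(V))    # prints 5: the eigenvectors are independent
```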
As we showed earlier, in an \(\,n\)-dimensional vector space every linearly independent set of \(\,n\ \) vectors forms a basis. Hence, if a linear operator \(\,F\in\text{End}(V),\ \) where \(\,\dim V=n,\ \,\) has \(\,n\ \) distinct eigenvalues, then the space \(\,V\ \) has a basis which consists of eigenvectors of \(\,F.\,\) More precisely, we may state the following
Corollary.
If a linear operator \(\,F\ \) defined on an \(\,n\)-dimensional vector space \(\,V(K)\ \,\) has \(\ \,n\ \,\) distinct eigenvalues \(\ \,\lambda_1,\,\lambda_2,\,\ldots,\,\lambda_n\,,\ \,\) and \(\ v_1,\,v_2,\,\ldots,\,v_n\,\) are the associated eigenvectors:
\[F\,v_i\ =\ \lambda_i\,v_i\,,\qquad v_i\neq\theta\,,\qquad i=1,2,\ldots,n\,,\]
then \(\,\) the set \(\ \,\mathcal{B}=(v_1,v_2,\ldots,v_n)\ \,\) is a basis of the space \(\,V.\)
If \(\,V\ \) is a unitary or Euclidean vector space, and \(\,F\ \) a normal operator (e.g. Hermitian or unitary), then eigenvectors associated with distinct eigenvalues are orthogonal, and thus the set \(\,\mathcal{B}\ \,\) is an orthogonal basis.
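The orthogonality claim for normal operators can be illustrated in the same spirit. In the sketch below the symmetric matrix is an ad hoc example; `numpy.linalg.eigh` is NumPy's eigensolver for symmetric/Hermitian input.

```python
# Sketch for the normal-operator remark: a real symmetric matrix with
# distinct eigenvalues has mutually orthogonal eigenvectors.
import numpy as np

F = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])    # symmetric; eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)

eigenvalues, V = np.linalg.eigh(F) # eigh handles symmetric/Hermitian matrices
print(eigenvalues)                 # three distinct eigenvalues
print(np.round(V.T @ V, 10))       # identity matrix: the columns are orthonormal
```

Since the three eigenvalues are distinct, the columns of `V` form an orthogonal (here even orthonormal) basis of \(\,\mathbb{R}^3\,,\) as the remark above asserts.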
The above Corollary may also be proved independently as the following
Theorem 2.
If a linear operator \(\,F\ \) defined on an \(\,n\)-dimensional vector space \(\,V(K)\ \,\) has \(\ \,n\ \,\) distinct eigenvalues, then the associated eigenvectors are linearly independent and thus form a basis of the space \(\ V.\)
Proof. \(\,\) Assume that \(\ \ F\in\text{End}(V)\,,\ \ \) where \(\ \ V=V(K)\,,\ \ \dim V=n\,,\ \ \) and \(\,\) that
\[F\,v_i\ =\ \lambda_i\,v_i\,,\qquad v_i\neq\theta\,,\qquad i=1,2,\ldots,n\,,\]where \(\ \lambda_i\neq\lambda_j\ \) for \(\ i\neq j\,.\)
To prove linear independence of the set of vectors \(\ (v_1,v_2,\ldots,v_n)\ \) we show that every linear combination of these vectors which gives the zero vector must be the trivial combination.
Assume that \(\quad\alpha_1\,v_1+\alpha_2\,v_2+\ldots+\alpha_n\,v_n=\theta\,,\quad \alpha_i\in K\,,\ \ i=1,2,\ldots,n\,.\)
Applying the operator \(\,F\ \) to both sides of the equality \(\,n-1\ \) times, and using \(\ F^{\,j}v_i=\lambda_i^{\,j}\,v_i\,,\ \) we obtain:
\[\alpha_1\,\lambda_1^{\,j}\,v_1\,+\,\alpha_2\,\lambda_2^{\,j}\,v_2\,+\,\ldots\,+\,\alpha_n\,\lambda_n^{\,j}\,v_n\ =\ \theta\,,\qquad j\,=\,0,1,\ldots,n-1\,.\]
The resulting set of equalities may be written in the form of a matrix equation:
\[\begin{pmatrix} 1 & 1 & \ldots & 1 \\ \lambda_1 & \lambda_2 & \ldots & \lambda_n \\ \lambda_1^{\,2} & \lambda_2^{\,2} & \ldots & \lambda_n^{\,2} \\ \vdots & \vdots & & \vdots \\ \lambda_1^{\,n-1} & \lambda_2^{\,n-1} & \ldots & \lambda_n^{\,n-1} \end{pmatrix} \begin{pmatrix} \alpha_1\,v_1 \\ \alpha_2\,v_2 \\ \vdots \\ \alpha_n\,v_n \end{pmatrix}\ =\ \begin{pmatrix} \theta \\ \theta \\ \vdots \\ \theta \end{pmatrix}.\]
If the eigenvalues \(\ \lambda_1,\,\lambda_2,\,\dots,\lambda_n\ \) are distinct, then the square matrix on the left-hand side is nonsingular, as follows from the formula for the Vandermonde determinant:
\[\det\begin{pmatrix} 1 & 1 & \ldots & 1 \\ \lambda_1 & \lambda_2 & \ldots & \lambda_n \\ \vdots & \vdots & & \vdots \\ \lambda_1^{\,n-1} & \lambda_2^{\,n-1} & \ldots & \lambda_n^{\,n-1} \end{pmatrix}\ =\ \prod_{1\,\leq\, i\,<\,j\,\leq\, n}\,(\lambda_j-\lambda_i)\ \neq\ 0\,.\]
Therefore, since the inverse matrix exists, we obtain
\[\begin{pmatrix} \alpha_1\,v_1 \\ \alpha_2\,v_2 \\ \vdots \\ \alpha_n\,v_n \end{pmatrix}\ =\ \begin{pmatrix} \theta \\ \theta \\ \vdots \\ \theta \end{pmatrix}.\]
In this way \(\ \alpha_i\,v_i=\theta\,,\ \ \) and because \(\ \ v_i\neq\theta\,,\ \ \) it follows that \(\ \alpha_i=0\,,\quad i=1,2,\dots,n\,.\ \,\) That is, we proved the implication
\[\alpha_1\,v_1+\alpha_2\,v_2+\ldots+\alpha_n\,v_n\ =\ \theta\qquad\Rightarrow\qquad \alpha_1=\alpha_2=\ldots=\alpha_n=0\,,\]
which establishes the linear independence of the vectors \(\ v_1,\,v_2,\,\ldots,\,v_n\,.\)
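The Vandermonde step of the proof can be checked directly. In the sketch below the eigenvalues are chosen arbitrarily (just pairwise distinct); it compares the determinant computed by NumPy with the product formula.

```python
# Check of the Vandermonde determinant formula used in the proof of Theorem 2.
import numpy as np
from itertools import combinations
from math import prod

lam = [1.0, 2.0, 4.0, 7.0]                # arbitrary pairwise distinct eigenvalues
V = np.vander(lam, increasing=True).T     # row j holds lam_1^j, ..., lam_n^j
                                          # (transposition does not change det)
det_direct = np.linalg.det(V)
det_formula = prod(lam[j] - lam[i]
                   for i, j in combinations(range(len(lam)), 2))
print(det_direct, det_formula)            # both equal 540 (up to rounding)
```

A nonzero determinant confirms that the matrix of the system is invertible, which is exactly what forces all the products \(\ \alpha_i\,v_i\ \) to vanish.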