Application to Systems of Linear Differential Equations

Consider a first-order system of linear differential equations with constant coefficients:

(1)\[\begin{split}\begin{array}{l} \dot{x}_1\ =\ a_{11}\,x_1\,+\,a_{12}\,x_2\,+\,\ldots\,+\,a_{1n}\,x_n \\ \dot{x}_2\ =\ a_{21}\,x_1\,+\,a_{22}\,x_2\,+\,\ldots\,+\,a_{2n}\,x_n \\ \ \ldots\qquad\ldots\qquad\ \ \ldots\qquad\ldots\qquad\ldots\qquad \\ \dot{x}_n\ =\ a_{n1}\,x_1\,+\,a_{n2}\,x_2\,+\,\ldots\,+\,a_{nn}\,x_n \end{array}\end{split}\]

where \(\ \ x_i=x_i(t)\,,\ \ \dot{x}_i\,=\,\frac{d}{dt}\ x_i(t)\,,\ \ a_{ij}\in R\,,\ \ i,j=1,2,\ldots,n.\ \) By introducing the notation

\[\begin{split}\boldsymbol{A}\ =\ \left[\begin{array}{cccc} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{array}\right]\,,\qquad \boldsymbol{x}\ =\ \left[\begin{array}{c} x_1 \\ x_2 \\ \ldots \\ x_n \end{array}\right]\,,\qquad \boldsymbol{\dot{x}}\ =\ \left[\begin{array}{c} \dot{x}_1 \\ \dot{x}_2 \\ \ldots \\ \dot{x}_n \end{array}\right]\,,\end{split}\]

we can write the system (1) in a compact matrix form:

(2)\[\boldsymbol{\dot{x}}\ =\ \boldsymbol{A}\,\boldsymbol{x}\,.\]

We look for solutions of the form

(3)\[\boldsymbol{x}(t)\,=\,\boldsymbol{v}\,e^{\,\lambda\,t}\,,\qquad \lambda\in C\,,\quad\boldsymbol{v}=[\,\beta_i\,]_n\in C^n\,.\]

Then \(\ \,\boldsymbol{\dot{x}}(t)=\lambda\,\boldsymbol{v}\,e^{\,\lambda\,t}\ \) and substitution into (2) gives

\[\lambda\,\boldsymbol{v}\,e^{\,\lambda\,t}\ =\ \boldsymbol{A}\,\boldsymbol{v}\,e^{\,\lambda\,t}\,,\]

which, after dividing by \(\ e^{\,\lambda\,t}\neq 0\,,\ \) reads

(4)\[\boldsymbol{A}\,\boldsymbol{v}\ =\ \lambda\,\boldsymbol{v}\,.\]

Equation (4) is the eigenproblem of the matrix \(\,\boldsymbol{A}\,,\ \) viewed as a linear operator on the space \(\,C^n\ \) (the operator acts on a vector \(\,\boldsymbol{x}\in C^n\,\) by multiplying it on the left by \(\boldsymbol{A}\)).

Hence, the function (3) is a solution of the equation (2) if and only if \(\lambda\ \) is an eigenvalue of the matrix \(\,\boldsymbol{A}\,,\ \) and \(\ \,\boldsymbol{v}\ \) is an eigenvector associated with this eigenvalue.
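This equivalence can be checked numerically. The sketch below (NumPy and the sample matrix are assumptions of the illustration, not part of the text) takes an eigenpair \((\lambda,\boldsymbol{v})\) and verifies that \(\,\boldsymbol{x}(t)=\boldsymbol{v}\,e^{\,\lambda\,t}\,\) satisfies \(\,\boldsymbol{\dot{x}}=\boldsymbol{A}\,\boldsymbol{x}\,:\)

```python
import numpy as np

# Illustrative check (NumPy and this sample matrix are assumptions of the
# sketch): for an eigenpair (lam, v) of A, the function x(t) = v e^{lam t}
# satisfies x'(t) = A x(t), because A v = lam v.
A = np.array([[2.0, -1.0],
              [4.0, -3.0]])
eigvals, V = np.linalg.eig(A)
lam, v = eigvals[0], V[:, 0]          # one eigenpair of A

t = 0.7                               # an arbitrary time
x = v * np.exp(lam * t)               # candidate solution x(t) = v e^{lam t}
xdot = lam * v * np.exp(lam * t)      # its exact derivative

assert np.allclose(A @ v, lam * v)    # the eigenproblem (4): A v = lam v
assert np.allclose(xdot, A @ x)       # hence (2): x'(t) = A x(t)
```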

We compute the eigenvalues \(\ \lambda\ \) from the characteristic equation

(5)\[\begin{split}\det\,(\boldsymbol{A}-\lambda\,\boldsymbol{I}_n)\ \ =\ \ \left| \begin{array}{cccc} a_{11}-\lambda & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22}-\lambda & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ a_{n1} & a_{n2} & \dots & a_{nn}-\lambda \end{array} \right|\ \ =\ \ 0\,,\end{split}\]

and the associated eigenvectors are found by solving the linear problem (4) for each eigenvalue \(\,\lambda:\)

(6)\[\begin{split}\begin{array}{l} (a_{11}-\lambda)\ \beta_1\,+\,a_{12}\ \beta_2\,+\,\ldots\,+\,a_{1n}\ \beta_n\ =\ 0 \\ a_{21}\ \beta_1\,+\,(a_{22}-\lambda)\ \beta_2\,+\,\ldots\,+\,a_{2n}\ \beta_n\ =\ 0 \\ \ \ \ldots\qquad\ldots\qquad\ldots\qquad\ldots\qquad\ldots \\ a_{n1}\ \beta_1\,+\,a_{n2}\ \beta_2\,+\,\ldots\,+\,(a_{nn}-\lambda)\ \beta_n\ =\ 0 \end{array}\end{split}\]

Because the system (1), and thus also the matrix equation (2), is homogeneous, every linear combination of solutions is again a solution of the system. We now discuss the different situations that may occur, depending on the roots of the characteristic equation.


Case 1.

The equation (5) has \(\,n\ \) different real roots \(\ \lambda_1,\,\lambda_2,\,\ldots,\,\lambda_n\,.\ \)

Then the real eigenvectors \(\ \boldsymbol{v}_1,\,\boldsymbol{v}_2,\,\ldots,\,\boldsymbol{v}_n\,\) associated with these eigenvalues, and also the corresponding particular solutions

(7)\[\boldsymbol{x}^1(t)=e^{\,\lambda_1\,t}\,\boldsymbol{v}_1\,,\quad \boldsymbol{x}^2(t)=e^{\,\lambda_2\,t}\,\boldsymbol{v}_2\,,\quad\ldots\,,\quad \boldsymbol{x}^n(t)=e^{\,\lambda_n\,t}\,\boldsymbol{v}_n\]

are linearly independent.

The general solution is a linear combination of the particular solutions:

(8)\[\boldsymbol{x}(t)\ =\ c_1\ \boldsymbol{x}^1(t)\,+\,c_2\ \boldsymbol{x}^2(t)\,+\,\ldots\,+\, c_n\ \boldsymbol{x}^n(t)\,,\qquad c_1,\,c_2,\,\ldots,\,c_n\in R\,.\]
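A minimal sketch of Case 1 in code (NumPy assumed; the matrix and the initial condition are arbitrary illustrations): the constants \(c_1,\ldots,c_n\) matching an initial condition \(\boldsymbol{x}(0)=\boldsymbol{x}_0\) in (8) are obtained by solving \(V\boldsymbol{c}=\boldsymbol{x}_0\), where the columns of \(V\) are the eigenvectors \(\boldsymbol{v}_1,\ldots,\boldsymbol{v}_n\).

```python
import numpy as np

# Sketch of Case 1 (NumPy assumed; matrix and initial condition are
# arbitrary illustrations): with n distinct real eigenvalues, matching
# x(0) = x0 in (8) means solving V c = x0, columns of V = eigenvectors.
A = np.array([[2.0, -1.0],
              [4.0, -3.0]])           # eigenvalues 1 and -2: distinct, real
lam, V = np.linalg.eig(A)
x0 = np.array([3.0, 6.0])
c = np.linalg.solve(V, x0)            # x(0) = c_1 v_1 + c_2 v_2

def x(t):
    # general solution (8): x(t) = sum_i c_i e^{lam_i t} v_i
    return V @ (c * np.exp(lam * t))

def xdot(t):
    # exact derivative, differentiating each exponential term
    return V @ (c * lam * np.exp(lam * t))

assert np.allclose(x(0.0), x0)        # the initial condition is met
for t in (0.0, 0.5, 1.3):
    assert np.allclose(xdot(t), A @ x(t))   # x solves (2) at sample times
```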

Example 1. \(\,\) We determine the general solution of the system of equations

\begin{alignat*}{3} \dot{x}_1 & {\ } = {\ } & 2\,x_1 & {\ } - {\ } & x_2 \\ \dot{x}_2 & {\ } = {\ } & 4\,x_1 & {\ } - {\ } & 3\,x_2 \end{alignat*}

The characteristic equation (5) for the matrix \(\,\boldsymbol{A}\ =\ \left[\begin{array}{rr} 2 & -1 \\ 4 & -3 \end{array}\right]:\)

\[\begin{split}\left|\begin{array}{cc} 2-\lambda & -1 \\ 4 & -3-\lambda \end{array}\right|\ \,=\ \, \lambda^2+\lambda-2\ \,=\ \, (\lambda-1)(\lambda+2)\ \,=\ \,0\end{split}\]

has two different real roots: \(\ \,\lambda_1=1\,,\ \,\lambda_2=-2\,.\)

The eigenvectors \(\ \boldsymbol{v}_1\,,\ \boldsymbol{v}_2\ \,\) associated with the eigenvalues \(\ \lambda_1\,,\ \,\lambda_2\ \,\) may be determined from the equations (6):

\[\begin{split}\begin{array}{llll} \left[\begin{array}{cc} 1 & -1 \\ 4 & -4 \end{array}\right]\ \left[\begin{array}{c} \beta_1 \\ \beta_2 \end{array}\right]\ =\ \left[\begin{array}{c} 0 \\ 0 \end{array}\right]\,: & \beta_1=\beta_2=\beta\,, & \boldsymbol{v}_1\,=\,\beta\ \left[\begin{array}{c} 1 \\ 1 \end{array}\right]\,, & \beta\in R\!\smallsetminus\!\{0\}\,; \\ \\ \left[\begin{array}{cc} 4 & -1 \\ 4 & -1 \end{array}\right]\ \left[\begin{array}{c} \beta_1 \\ \beta_2 \end{array}\right]\ =\ \left[\begin{array}{c} 0 \\ 0 \end{array}\right]\,: & \beta_2=4\,\beta_1=4\,\beta\,, & \boldsymbol{v}_2\,=\,\beta\ \left[\begin{array}{c} 1 \\ 4 \end{array}\right]\,, & \beta\in R\!\smallsetminus\!\{0\}\,. \end{array}\end{split}\]

Taking \(\,\beta=1\ \) we obtain two linearly independent particular solutions:

\[\begin{split}\boldsymbol{x}^1(t)\ \,=\ \, e^{\;t}\ \boldsymbol{v}_1\ \,=\ \, e^{\;t}\ \left[\begin{array}{c} 1 \\ 1 \end{array}\right]\,,\qquad \boldsymbol{x}^2(t)\ \,=\ \, e^{\,-2\,t}\ \,\boldsymbol{v}_2\ \,=\ \, e^{\,-2\,t}\ \left[\begin{array}{c} 1 \\ 4 \end{array}\right]\,,\end{split}\]

which comprise the general solution:

\[\begin{split}\begin{array}{c} \boldsymbol{x}(t)\,=\,c_1\ \boldsymbol{x}^1(t)\,+\,c_2\ \boldsymbol{x}^2(t)\ : \\ \\ \left[\begin{array}{c} x_1(t) \\ x_2(t) \end{array}\right]\ =\ c_1\ e^{\;t}\ \left[\begin{array}{c} 1 \\ 1 \end{array}\right]\ +\ c_2\ e^{\,-2\,t}\ \left[\begin{array}{c} 1 \\ 4 \end{array}\right]\,, \\ \\ \qquad \begin{cases}\ \begin{array}{l} x_1(t)\ =\ c_1\ e^{\;t}\,+\,c_2\ e^{\,-2\,t} \\ x_2(t)\ =\ c_1\ e^{\;t}\,+\,4\,c_2\ e^{\,-2\,t} \end{array}\end{cases} \qquad c_1,c_2\in R\,. \end{array}\end{split}\]
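The result of Example 1 can be cross-checked numerically (NumPy assumed; the constants \(c_1,c_2\) are arbitrary):

```python
import numpy as np

# Numerical cross-check of Example 1 (NumPy assumed; c1, c2 arbitrary):
# the eigenpairs (1, v1), (-2, v2) and the closed-form general solution.
A = np.array([[2.0, -1.0],
              [4.0, -3.0]])
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, 4.0])
assert np.allclose(A @ v1,  1.0 * v1)       # eigenpair for lambda_1 = 1
assert np.allclose(A @ v2, -2.0 * v2)       # eigenpair for lambda_2 = -2

c1, c2 = 2.0, -1.0
def x(t):
    # x1 = c1 e^t + c2 e^{-2t},  x2 = c1 e^t + 4 c2 e^{-2t}
    return np.array([c1*np.exp(t) +     c2*np.exp(-2*t),
                     c1*np.exp(t) + 4.0*c2*np.exp(-2*t)])
def xdot(t):
    return np.array([c1*np.exp(t) - 2.0*c2*np.exp(-2*t),
                     c1*np.exp(t) - 8.0*c2*np.exp(-2*t)])
for t in (0.0, 0.4, 1.0):
    assert np.allclose(xdot(t), A @ x(t))   # the solution satisfies the system
```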

Case 2.

The equation (5) has \(\,n\ \) different roots \(\ \lambda_1,\,\lambda_2,\,\ldots,\,\lambda_n\,,\) including complex non-real roots.

The discussion and the formulae (7) and (8) from Case 1. remain valid, but now the particular solutions corresponding to the non-real roots are also non-real. However, by suitably combining these solutions one may obtain a system of \(\,n\,\) linearly independent real solutions.

First note that, since the matrix \(\,\boldsymbol{A}\ \) is real, the complex non-real roots of the characteristic equation occur in conjugate pairs: if \(\,\lambda\in C\!\smallsetminus\! R\ \) is a root then so is \(\,\lambda^*\,,\ \) and if \(\,\boldsymbol{v}\in C^n\ \) is an eigenvector of the matrix \(\,\boldsymbol{A}\ \) for the eigenvalue \(\ \lambda,\ \,\) then \(\ \boldsymbol{v}^*\ \) is an eigenvector for the eigenvalue \(\ \lambda^*:\)

\[\boldsymbol{A}\,\boldsymbol{v}\ =\ \lambda\,\boldsymbol{v} \qquad\Leftrightarrow\qquad \boldsymbol{A}\,\boldsymbol{v}^*\ =\ \lambda^*\,\boldsymbol{v}^*\,.\]

The particular solutions corresponding to the roots \(\ \lambda\ \,\) and \(\ \,\lambda^*\ \) are conjugate to each other:

\[e^{\,\lambda^*\,t}\;\boldsymbol{v}^*\ =\ \left[\,e^{\,\lambda\,t}\;\boldsymbol{v}\,\right]^*\,.\]

We write the solution \(\ \,\boldsymbol{x}(t)\,=\,e^{\,\lambda\,t}\,\boldsymbol{v}\,\ \) corresponding to the root \(\,\lambda\,\ \) as

\[\boldsymbol{x}(t)\,=\,\boldsymbol{x}_1(t)+i\ \boldsymbol{x}_2(t)\,,\]

where \(\ \,\boldsymbol{x}_1(t)\,=\,\text{re}\ \,\boldsymbol{x}(t)\,,\ \, \boldsymbol{x}_2(t)\,=\,\text{im}\ \,\boldsymbol{x}(t)\ \,\) are functions with values in \(\,R^n\,.\)

Then the solution \(\ \,\boldsymbol{x}^*(t)\,=\,e^{\,\lambda^*\,t}\,\boldsymbol{v}^*\,\ \) corresponding to the root \(\,\lambda^*\,\ \) is given by

\[\boldsymbol{x}^*(t)\,=\,\boldsymbol{x}_1(t)-i\ \boldsymbol{x}_2(t)\,.\]

In fact, the real part \(\ \boldsymbol{x}_1(t)\ \,\) and \(\,\) the imaginary part \(\ \boldsymbol{x}_2(t)\ \,\) of the solution \(\ \boldsymbol{x}(t)\ \,\) are also solutions of the equation (2). \(\,\) Indeed,

\[\boldsymbol{\dot{x}}_1(t)+i\ \boldsymbol{\dot{x}}_2(t)\ =\ \boldsymbol{\dot{x}}(t)\ =\ \boldsymbol{A}\ \boldsymbol{x}(t)\ =\ \boldsymbol{A}\ [\,\boldsymbol{x}_1(t)+i\ \boldsymbol{x}_2(t)\,]\ =\ \boldsymbol{A}\ \boldsymbol{x}_1(t)+i\ \boldsymbol{A}\ \boldsymbol{x}_2(t)\]

and comparing the real and imaginary parts of both sides gives

\[\boldsymbol{\dot{x}}_1(t)\ =\ \boldsymbol{A}\ \boldsymbol{x}_1(t)\,,\qquad \boldsymbol{\dot{x}}_2(t)\ =\ \boldsymbol{A}\ \boldsymbol{x}_2(t)\,.\]

Note also that linear independence of the solutions \(\ \boldsymbol{x}(t)\,,\ \boldsymbol{x}^*(t)\ \) is equivalent to the linear independence of the solutions \(\ \boldsymbol{x}_1(t)\,,\ \boldsymbol{x}_2(t)\,.\ \) Hence, describing the general solution of the system (1) in the expression (8), we can replace a linear combination of complex solutions \(\ \boldsymbol{x}(t)\,,\ \boldsymbol{x}^*(t)\ \) by a linear combination of real solutions \(\ \boldsymbol{x}_1(t)\,,\ \boldsymbol{x}_2(t)\,,\ \) so that the general solution is real.
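The construction above can be illustrated numerically (NumPy assumed; the matrix is that of Example 2 below): for a real matrix with a complex eigenpair, the real and imaginary parts of the complex solution satisfy the system separately.

```python
import numpy as np

# Sketch of Case 2 (NumPy assumed): for a real matrix with the complex
# eigenpair (3+i, (1, -i)), the real and imaginary parts of the complex
# solution x(t) = v e^{lam t} are themselves real solutions of the system.
A = np.array([[3.0, -1.0],
              [1.0,  3.0]])
lam = 3.0 + 1.0j
v = np.array([1.0, -1.0j])
assert np.allclose(A @ v, lam * v)           # complex eigenpair of a real A

def x(t):
    return v * np.exp(lam * t)               # complex solution
def xdot(t):
    return lam * v * np.exp(lam * t)         # its exact derivative

for t in (0.0, 0.3, 1.1):
    assert np.allclose(xdot(t).real, A @ x(t).real)   # re x(t) solves (2)
    assert np.allclose(xdot(t).imag, A @ x(t).imag)   # im x(t) solves (2)
```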

Exercise. \(\,\) To complete the discussion of Cases \(\,\) 1. \(\,\) and \(\,\) 2. \(\,\) prove that:

  1. If the vectors \(\ \boldsymbol{v}_1,\,\boldsymbol{v}_2,\,\ldots,\,\boldsymbol{v}_n\in C^n\ \) are linearly independent, then for \(\ \alpha_i\in C\!\smallsetminus\!\{0\}\,,\ \) \(i=1,2,\ldots,n\,,\ \,\) the vectors \(\ \ \alpha_1\,\boldsymbol{v}_1,\ \ \alpha_2\,\boldsymbol{v}_2,\ \ldots,\ \alpha_n\,\boldsymbol{v}_n\) are also linearly independent (in the expressions (7) for particular solutions \(\ \alpha_i=\exp{(\lambda_i\,t)}\,,\ i=1,2,\ldots,n\)).

  2. If the vector \(\ \boldsymbol{x}\in C^n\ \) is of the form \(\ \boldsymbol{x}=\boldsymbol{x}_1+i\ \boldsymbol{x}_2\,,\ \ \boldsymbol{x}_1,\boldsymbol{x}_2\in R^n\,,\ \) then the linear independence of the vectors \(\ \boldsymbol{x},\,\boldsymbol{x}^*\ \) is equivalent to the linear independence of the vectors \(\ \boldsymbol{x}_1,\boldsymbol{x}_2\,.\)

Example 2. \(\,\) We solve a linear system of equations:

\begin{alignat*}{3} \dot{x}_1 & {\ } = {\ } & 3\,x_1 & {\ } - {\ } & x_2 \\ \dot{x}_2 & {\ } = {\ } & x_1 & {\ } + {\ } & 3\,x_2 \end{alignat*}

The characteristic equation (5) of the matrix \(\ \,\boldsymbol{A}\ =\ \left[\begin{array}{rr} 3 & -1 \\ 1 & 3 \end{array}\right]:\)

\[\begin{split}\left|\begin{array}{cc} 3-\lambda & -1 \\ 1 & 3-\lambda \end{array}\right|\ \,=\ \, \lambda^2-6\,\lambda+10\ \,=\ \,0\end{split}\]

has two different complex roots, conjugate to each other:

\[\lambda_1\,=\,3+i\,,\qquad\lambda_2\,=\,3-i\,.\]

The eigenvectors \(\ \boldsymbol{v}_1\ \) associated with the eigenvalue \(\ \lambda_1\ \) may be determined from the equation (6):

\[\begin{split}\left[\begin{array}{rr} -i & -1 \\ 1 & -i \end{array}\right] \left[\begin{array}{c} \beta_1 \\ \beta_2 \end{array}\right] \ =\ \left[\begin{array}{c} 0 \\ 0 \end{array}\right]\,, \quad\text{so}\quad\ \begin{cases}\begin{array}{r} -i\ \beta_1 - \beta_2 = 0 \\ \beta_1 - i\ \beta_2 = 0 \end{array}\end{cases}:\quad \beta_2=-i\ \beta_1\,.\end{split}\]

The solution is \(\ \ \beta_1=\beta\,,\ \ \beta_2=-i\ \beta\,,\ \ \beta\in C\,,\ \ \) so \(\ \ \boldsymbol{v}_1=\beta\ \left[\begin{array}{r} 1 \\ -i \end{array}\right]\,,\ \ \beta\in C\!\smallsetminus\!\{0\}\,.\)

The eigenvectors associated with the eigenvalue \(\,\lambda_2=\lambda_1^*\ \ \) are \(\ \ \boldsymbol{v}_2=\beta\ \left[\begin{array}{r} 1 \\ -i \end{array}\right]^* = \beta\ \left[\begin{array}{r} 1 \\ i \end{array}\right]\,,\ \ \beta\in C\!\smallsetminus\!\{0\}\,.\)

If \(\,\beta=1\,,\ \) a particular solution associated with the eigenvalue \(\ \lambda_1\,:\)

\[\begin{split}\begin{array}{rcl} \boldsymbol{x}^1(t) & = & e^{\,\lambda_1\,t}\ \boldsymbol{v}_1\ =\ e^{\,(3+i)\,t}\ \left[\begin{array}{r} 1 \\ -i \end{array}\right]\ =\ e^{\,3\,t}\ e^{\,i\,t}\ \left[\begin{array}{r} 1 \\ -i \end{array}\right]\ = \\ \\ & = & e^{\,3\,t}\ (\cos{t}+i\ \sin{t})\ \left[\begin{array}{r} 1 \\ -i \end{array}\right]\ =\ e^{\,3\,t}\ \left[\begin{array}{c} \cos{t}+i\ \sin{t} \\ \sin{t}-i\ \cos{t} \end{array}\right]\ = \\ \\ & = & e^{\,3\,t}\ \left[\begin{array}{c} \cos{t} \\ \sin{t} \end{array}\right]\ +\ i\ e^{\,3\,t}\ \left[\begin{array}{r} \sin{t} \\ -\cos{t} \end{array}\right] \end{array}\end{split}\]

is of the form \(\ \boldsymbol{x}^1(t)=\boldsymbol{x}_1(t)+i\ \boldsymbol{x}_2(t)\,,\ \) where \(\ \boldsymbol{x}_1(t)\,,\ \boldsymbol{x}_2(t)\ \) are functions with values in \(\ R^2\,.\)

Because both the real and the imaginary part of the complex solution are solutions of the system, the general solution is given by their arbitrary linear combination:

\[\begin{split}\begin{array}{c} \boldsymbol{x}(t)\ =\ c_1\ \boldsymbol{x}_1(t)\ +\ c_2\ \boldsymbol{x}_2(t)\ : \\ \\ \left[\begin{array}{c} x_1(t) \\ x_2(t) \end{array}\right]\ \ =\ \ e^{\,3\,t}\ \left(\ c_1\ \left[\begin{array}{c} \cos{t} \\ \sin{t} \end{array}\right]\ \,+\ \, c_2\ \left[\begin{array}{r} \sin{t} \\ -\cos{t} \end{array}\right]\ \,\right) \\ \\ \begin{cases}\begin{array}{c} \ x_1(t)\ \,=\ \,e^{\,3\,t}\ (c_1\,\cos{t}\,+\,c_2\,\sin{t}) \\ \ x_2(t)\ \,=\ \,e^{\,3\,t}\ (c_1\,\sin{t}\,-\,c_2\,\cos{t}) \end{array}\end{cases}\qquad c_1,c_2\in R\,. \end{array}\end{split}\]
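A numerical cross-check of this general solution (NumPy assumed; the constants \(c_1,c_2\) are arbitrary):

```python
import numpy as np

# Numerical cross-check of Example 2 (NumPy assumed; c1, c2 arbitrary):
# x1 = e^{3t}(c1 cos t + c2 sin t),  x2 = e^{3t}(c1 sin t - c2 cos t).
A = np.array([[3.0, -1.0],
              [1.0,  3.0]])
c1, c2 = 1.5, -0.5

def x(t):
    return np.exp(3*t) * np.array([c1*np.cos(t) + c2*np.sin(t),
                                   c1*np.sin(t) - c2*np.cos(t)])
def xdot(t):
    # product rule: 3 e^{3t}(...) + e^{3t} d/dt(...)
    return 3*x(t) + np.exp(3*t) * np.array([-c1*np.sin(t) + c2*np.cos(t),
                                             c1*np.cos(t) + c2*np.sin(t)])
for t in (0.0, 0.7, 2.0):
    assert np.allclose(xdot(t), A @ x(t))    # the real solution solves the system
```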

Case 3.

Some of the eigenvalues of the matrix \(\,\boldsymbol{A}\ \) are multiple roots of the characteristic polynomial, but their geometric and algebraic multiplicities are equal. This means that for each root of the characteristic polynomial with multiplicity \(\,k\ \) there are \(\,k\ \) linearly independent eigenvectors of the matrix \(\,\boldsymbol{A}\,.\)

In this situation we can apply the method described in Cases \(\,\) 1. \(\,\) and \(\,\) 2. \(\,\) without any change.

Example 3. \(\,\) We determine the general solution of the system

\begin{alignat*}{4} \dot{x}_1 & {\ } = {\ } & -8\ x_1 & {\ } + {\ } & 18\ x_2 & {\ } + {\ } & 9\ x_3 \\ \dot{x}_2 & {\ } = {\ } & -9\ x_1 & {\ } + {\ } & 19\ x_2 & {\ } + {\ } & 9\ x_3 \\ \dot{x}_3 & {\ } = {\ } & 12\ x_1 & {\ } - {\ } & 24\ x_2 & {\ } - {\ } & 11\ x_3 \end{alignat*}

The characteristic equation of the matrix \(\,\boldsymbol{A}:\)

\[\begin{split}\left|\begin{array}{ccc} -8-\lambda & 18 & 9 \\ -9 & 19-\lambda & 9 \\ 12 & -24 & -11-\lambda \end{array}\right|\ =\ -\,\lambda^3+3\,\lambda-2\ =\ -\,(\lambda-1)^2\,(\lambda+2)\ =\ 0\end{split}\]

has a double root \(\,\lambda_{1,2}=1\ \) and a single root \(\,\lambda_3=-2\,.\)

For the eigenvalue \(\,\lambda_{1,2}\ \) the system of equations (6) reduces to

\[\beta_1-2\,\beta_2-\beta_3\ =\ 0\,,\qquad\text{so that}\qquad \beta_3\ =\ \beta_1-2\,\beta_2\,,\quad\beta_1,\beta_2\in R\,.\]

The geometric multiplicity of the eigenvalue \(\,\lambda_{1,2}\ \) equals its algebraic multiplicity, namely 2, because the associated eigenvectors, which are of the form

\[\begin{split}\boldsymbol{v}_{1,2}\ =\ \left[\begin{array}{c} \beta_1 \\ \beta_2 \\ \beta_1-2\,\beta_2 \end{array}\right]\ =\ \beta_1\ \left[\begin{array}{r} 1 \\ 0 \\ 1 \end{array}\right]\ +\ \beta_2\ \left[\begin{array}{r} 0 \\ 1 \\ -2 \end{array}\right]\,,\qquad \begin{array}{c} \beta_1,\,\beta_2\in R\,, \\ \beta_1^2+\beta_2^2>0 \end{array}\end{split}\]

comprise (together with the zero vector) a 2-dimensional subspace.

Hence, the eigenvalue \(\,\lambda_{1,2}=1\ \) gives rise to two linearly independent particular solutions:

(9)\[\begin{split}\boldsymbol{x}^1(t)\ \,=\ \,e^{\,t}\ \left[\begin{array}{r} 1 \\ 0 \\ 1 \end{array}\right] \qquad\text{and}\qquad \boldsymbol{x}^2(t)\ \,=\ \,e^{\,t}\ \left[\begin{array}{r} 0 \\ 1 \\ -2 \end{array}\right]\,.\end{split}\]

The eigenvectors of the matrix \(\,\boldsymbol{A}\ \) associated with the eigenvalue \(\,\lambda_3=-2\ \) are of the form

(10)\[\begin{split}\boldsymbol{v}_3\ =\ \beta\ \left[\begin{array}{r} 3 \\ 3 \\ -4 \end{array}\right]\,,\quad \beta\in R\!\smallsetminus\!\{0\}\,, \qquad\text{so that}\qquad \boldsymbol{x}^3(t)\ \,=\ \,e^{\,-2\,t}\ \left[\begin{array}{r} 3 \\ 3 \\ -4 \end{array}\right]\,.\end{split}\]

The general solution of the system is an arbitrary linear combination of the solutions \(\,\) (9) \(\,\) and \(\,\) (10):

\[\begin{split}\begin{array}{l} \boldsymbol{x}(t)\ \,=\ \,c_1\ \boldsymbol{x}^1(t)\ +\ c_2\ \boldsymbol{x}^2(t)\ +\ c_3\ \boldsymbol{x}^3(t)\,: \\ \\ \begin{cases}\ \ \begin{array}{l} x_1(t)\ =\ c_1\ e^{\,t}\,+\,3\ c_3\ e^{\,-2\,t} \\ x_2(t)\ =\ c_2\ e^{\,t}\,+\,3\ c_3\ e^{\,-2\,t} \\ x_3(t)\ =\ (c_1-2\,c_2)\ e^{\,t}\,-\,4\ c_3\ e^{\,-2\,t} \end{array}\end{cases}\qquad c_1,\,c_2,\,c_3\,\in R\,. \end{array}\end{split}\]
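A numerical cross-check of Example 3 (NumPy assumed): the claimed eigenpairs, the geometric multiplicity of the double eigenvalue \(\lambda_{1,2}=1\), and the linear independence of the three eigenvectors.

```python
import numpy as np

# Numerical cross-check of Example 3 (NumPy assumed): the claimed
# eigenpairs, and the fact that the geometric multiplicity of the double
# eigenvalue 1 equals 2, so three independent eigenvectors exist.
A = np.array([[ -8.0,  18.0,   9.0],
              [ -9.0,  19.0,   9.0],
              [ 12.0, -24.0, -11.0]])
v1 = np.array([1.0, 0.0,  1.0])
v2 = np.array([0.0, 1.0, -2.0])
v3 = np.array([3.0, 3.0, -4.0])
assert np.allclose(A @ v1,  1.0 * v1)       # eigenpair for lambda = 1
assert np.allclose(A @ v2,  1.0 * v2)       # a second one for lambda = 1
assert np.allclose(A @ v3, -2.0 * v3)       # eigenpair for lambda = -2

# rank(A - I) = 1, hence dim ker(A - I) = 3 - 1 = 2: geometric multiplicity 2
assert np.linalg.matrix_rank(A - np.eye(3)) == 1
# v1, v2, v3 are linearly independent, so they form a basis of R^3
assert np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) == 3
```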

Case 4.

For some of the eigenvalues of the matrix \(\,\boldsymbol{A}\ \) the geometric multiplicity is smaller than the algebraic multiplicity.

In this case a basis of the space \(\,R^n\ \) cannot be formed exclusively from the eigenvectors of the matrix \(\,\boldsymbol{A}.\ \) However, one can use vectors of a Jordan basis of this space to form a set of \(\,n\ \) linearly independent real solutions of the system (1). We will show by example, without developing a general theory, that such a construction is possible.

Example 4. \(\,\) We solve a system of linear differential equations

\begin{alignat*}{4} \dot{x}_1 & {\ } = {\ } & 4\ x_1 & {\ } + {\ } & x_2 & {\ } + {\ } & x_3 \\ \dot{x}_2 & {\ } = {\ } & 2\ x_1 & {\ } + {\ } & 4\ x_2 & {\ } + {\ } & x_3 \\ \dot{x}_3 & {\ } = {\ } & & & x_2 & {\ } + {\ } & 4\ x_3 \end{alignat*}

The characteristic equation of the matrix \(\ \ \boldsymbol{A}\ =\ \left[\begin{array}{ccc} 4 & 1 & 1 \\ 2 & 4 & 1 \\ 0 & 1 & 4 \end{array}\right]:\)

\[\begin{split}\left|\begin{array}{ccc} 4-\lambda & 1 & 1 \\ 2 & 4-\lambda & 1 \\ 0 & 1 & 4-\lambda \end{array}\right|\ =\ -\,\lambda^3+12\,\lambda^2-45\,\lambda+54\ =\ -\,(\lambda-3)^2\,(\lambda-6)\ =\ 0\end{split}\]

has a double root \(\,\lambda_{1,2}=3\ \) and a single root \(\,\lambda_3=6\,.\)

The coordinates \(\ \beta_1,\beta_2,\beta_3\ \) of the eigenvectors associated with the eigenvalue \(\,\lambda_{1,2}\ \) may be determined from the equation

\[\begin{split}\left[\begin{array}{ccc} 1 & 1 & 1 \\ 2 & 1 & 1 \\ 0 & 1 & 1 \end{array}\right]\ \left[\begin{array}{c} \beta_1 \\ \beta_2 \\ \beta_3 \end{array}\right]\ =\ \left[\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\right]\,, \quad\text{so}\quad \begin{cases}\begin{array}{r} \beta_1+\beta_2+\beta_3=0 \\ 2\,\beta_1+\beta_2+\beta_3=0 \\ \beta_2+\beta_3=0 \end{array}\end{cases}:\quad \begin{cases}\begin{array}{l} \beta_1=0 \\ \beta_3=-\beta_2 \end{array}\end{cases}\end{split}\]

The solution is given by \(\ \ \beta_1=0\,,\ \ \beta_2=\beta\,,\ \ \beta_3=-\beta\,,\ \ \beta\in R\,,\ \) so the eigenvectors

(11)\[\begin{split}\boldsymbol{v}_1\ =\ \beta\ \left[\begin{array}{r} 0 \\ 1 \\ -1 \end{array}\right]\,,\quad \beta\in R\!\smallsetminus\!\{0\}\end{split}\]

comprise (together with the zero vector) a 1-dimensional subspace: the geometric multiplicity of the eigenvalue \(\,\lambda_{1,2}\ \) is equal to 1. Hence we obtain a solution of the system of linear differential equations:

(12)\[\begin{split}\boldsymbol{x}^1(t)\ \,=\ \, e^{\,3\,t}\ \left[\begin{array}{r} 0 \\ 1 \\ -1 \end{array}\right]\,.\end{split}\]

The second solution associated with the eigenvalue \(\,\lambda_{1,2}\ \) may be obtained from the construction of a Jordan basis \(\,\mathcal{B}_{1,2}=(\boldsymbol{w}_1,\boldsymbol{w}_2)\,.\ \) The vectors \(\,\boldsymbol{w}_1,\boldsymbol{w}_2\in R^3\!\smallsetminus\!\{\boldsymbol{0}\}\ \) are defined by the conditions

(13)\[\begin{split}\begin{cases}\ \begin{array}{l} (\boldsymbol{A}-\lambda_{1,2}\ \boldsymbol{I}_3)\ \boldsymbol{w}_1\ =\ \boldsymbol{0} \\ (\boldsymbol{A}-\lambda_{1,2}\ \boldsymbol{I}_3)\ \boldsymbol{w}_2\ =\ \boldsymbol{w}_1 \end{array}\end{cases} \quad\text{so}\qquad\ \begin{cases}\ \begin{array}{l} \boldsymbol{A}\,\boldsymbol{w}_1\ =\ \lambda_{1,2}\ \boldsymbol{w}_1 \\ \boldsymbol{A}\,\boldsymbol{w}_2\ =\ \boldsymbol{w}_1+\lambda_{1,2}\ \boldsymbol{w}_2 \end{array}\end{cases}\end{split}\]

We will show that \(\ \,\boldsymbol{w}_1\,\) and \(\boldsymbol{w}_2\ \,\) are linearly independent. Indeed, let

\[\alpha_1\ \boldsymbol{w}_1\ +\ \alpha_2\ \boldsymbol{w}_2\ \,=\ \,\boldsymbol{0}\,,\qquad \alpha_1,\,\alpha_2\in R\,.\]

Multiply this equality on both sides from the left by the matrix \(\,\boldsymbol{A}-\lambda_{1,2}\ \boldsymbol{I}_3\,.\ \) The conditions (13) imply

\begin{eqnarray*} \alpha_1\ (\boldsymbol{A}-\lambda_{1,2}\ \boldsymbol{I}_3)\ \boldsymbol{w}_1\ +\ \alpha_2\ (\boldsymbol{A}-\lambda_{1,2}\ \boldsymbol{I}_3)\ \boldsymbol{w}_2 & = & \boldsymbol{0} \\ \text{so}\quad\alpha_2\ \boldsymbol{w}_1 & = & \boldsymbol{0}\,,\quad \text{and thus}\quad\alpha_2=0\,, \\ \text{but then}\quad\alpha_1\ \boldsymbol{w}_1 & = & \boldsymbol{0}\,, \quad\text{so}\quad\alpha_1=0\,. \end{eqnarray*}

We check now that the function

(14)\[\boldsymbol{x}^2(t)\ \,=\ \, \exp{(\lambda_{1,2}\;t)}\,\cdot\,(t\,\boldsymbol{w}_1\,+\,\boldsymbol{w}_2)\]

is a solution to the considered system of differential equations. Indeed, by the equations (13) we have

\begin{eqnarray*} \boldsymbol{\dot{x}}^2(t) & = & \lambda_{1,2}\ \exp{(\lambda_{1,2}\;t)}\,\cdot\,(t\,\boldsymbol{w}_1\,+\,\boldsymbol{w}_2)\ +\ \exp{(\lambda_{1,2}\;t)}\,\cdot\,\boldsymbol{w}_1\ = \\ & = & \exp{(\lambda_{1,2}\;t)}\,\cdot\, \left[\ \,t\,\cdot\,\lambda_{1,2}\;\boldsymbol{w}_1\,+\, (\boldsymbol{w}_1+\lambda_{1,2}\,\boldsymbol{w}_2)\ \right]\ = \\ & = & \exp{(\lambda_{1,2}\;t)}\,\cdot\, (\ t\,\cdot\,\boldsymbol{A}\,\boldsymbol{w}_1\,+\,\boldsymbol{A}\,\boldsymbol{w}_2\ )\ = \\ & = & \boldsymbol{A}\ \,[\ \,\exp{(\lambda_{1,2}\;t)}\,\cdot\, (t\,\boldsymbol{w}_1\,+\,\boldsymbol{w}_2)\ ]\ = \\ & = & \boldsymbol{A}\ \boldsymbol{x}^2(t)\,. \end{eqnarray*}

We now determine the vectors \(\,\boldsymbol{w}_1\ \ \text{and}\ \ \boldsymbol{w}_2\,.\ \) Since \(\,\boldsymbol{w}_1\ \) is an eigenvector of the matrix \(\,\boldsymbol{A}\ \) associated with the eigenvalue \(\,\lambda_{1,2}\,,\ \) we may assume \(\ \,\boldsymbol{w}_1=\boldsymbol{v}_1\,.\ \) Taking \(\ \beta=1\ \) in the equation (11), we obtain:

\[\begin{split}\boldsymbol{w}_1\ =\ \left[\begin{array}{r} 0 \\ 1 \\ -1 \end{array}\right]\,.\end{split}\]

The vector \(\ \,\boldsymbol{w}_2=[\,\gamma_i\,]_3\ \,\) may be calculated from the equation: \(\ \ (\boldsymbol{A}-\lambda_{1,2}\,\boldsymbol{I}_3)\,\boldsymbol{w}_2=\boldsymbol{w}_1\,,\ \ \) that is

\[\begin{split}\left[\begin{array}{ccc} 1 & 1 & 1 \\ 2 & 1 & 1 \\ 0 & 1 & 1 \end{array}\right]\ \left[\begin{array}{c} \gamma_1 \\ \gamma_2 \\ \gamma_3 \end{array}\right]\ =\ \left[\begin{array}{r} 0 \\ 1 \\ -1 \end{array}\right]\,, \quad\text{and so}\quad \begin{cases}\begin{array}{r} \gamma_1+\gamma_2+\gamma_3\,=\,0 \\ 2\,\gamma_1+\gamma_2+\gamma_3\,=\,1 \\ \gamma_2+\gamma_3\,=\,-1 \end{array}\end{cases}\end{split}\]

The solution is: \(\ \ \gamma_1=1,\ \ \gamma_2=\gamma,\ \ \gamma_3=-1-\gamma,\quad\gamma\in R.\ \,\) For \(\ \gamma=0\ \) we obtain

\[\begin{split}\boldsymbol{w}_2\ =\ \left[\begin{array}{r} 1 \\ 0 \\ -1 \end{array}\right]\,.\end{split}\]

The solution (14) of the system of differential equations now takes the explicit form:

(15)\[\begin{split}\boldsymbol{x}^2(t)\ \,=\ \, e^{\,3\,t}\ \left[\begin{array}{c} 1 \\ t \\ -1-t \end{array}\right]\,.\end{split}\]

In this way we have two linearly independent solutions, \(\ \boldsymbol{x}^1(t)\ \) and \(\ \boldsymbol{x}^2(t)\,,\ \) associated with the eigenvalue \(\ \lambda_{1,2}=3\ \) of the matrix \(\,\boldsymbol{A}\,.\)

It remains to determine the solution associated with the simple eigenvalue \(\ \lambda_3=6.\ \) The associated eigenvectors \(\,\boldsymbol{v}_3=[\,\beta_i\,]_3\ \) are computed from the equation

\[\begin{split}\left[\begin{array}{rrr} -2 & 1 & 1 \\ 2 & -2 & 1 \\ 0 & 1 & -2 \end{array}\right]\ \left[\begin{array}{c} \beta_1 \\ \beta_2 \\ \beta_3 \end{array}\right]\ =\ \left[\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\right]\,, \quad\text{so}\quad \begin{cases}\ \begin{array}{r} -\,2\,\beta_1\,+\,\beta_2\,+\,\beta_3\,=\,0 \\ 2\,\beta_1\,-\,2\,\beta_2\,+\,\beta_3\,=\,0 \\ \beta_2\,-\,2\,\beta_3\,=\,0 \end{array}\end{cases}.\end{split}\]

Hence: \(\quad\beta_1=3\,\beta\,,\ \ \beta_2=4\,\beta\,,\ \ \beta_3=2\,\beta\,,\ \ \beta\in R\,,\quad\) and thus \(\quad\boldsymbol{v}_3\ =\ \beta\ \left[\begin{array}{c} 3 \\ 4 \\ 2 \end{array}\right]\,, \ \ \beta\in R\!\smallsetminus\!\{0\}\,,\)

and the solution of the system of differential equations for this eigenvalue is given by

(16)\[\begin{split}\boldsymbol{x}^3(t)\ \,=\ \, e^{\,6\,t}\ \left[\begin{array}{r} 3 \\ 4 \\ 2 \end{array}\right]\,.\end{split}\]

The vector \(\,\boldsymbol{v}_3\ \) (e.g. for \(\,\beta=1\)) may be taken as the third vector \(\,\boldsymbol{w}_3\ \) of the Jordan basis of \(\,R^3\ \) corresponding to the matrix \(\,\boldsymbol{A}:\)

\[\begin{split}\mathcal{B}\ =\ (\boldsymbol{w}_1,\boldsymbol{w}_2,\boldsymbol{w}_3)\ \ =\ \ \left(\ \ \left[\begin{array}{r} 0 \\ 1 \\ -1 \end{array}\right]\,,\ \left[\begin{array}{r} 1 \\ 0 \\ -1 \end{array}\right]\,,\ \left[\begin{array}{r} 3 \\ 4 \\ 2 \end{array}\right] \ \ \right)\,.\end{split}\]

The general solution of the system of differential equations is an arbitrary linear combination of the particular solutions \(\,\) (12), \(\,\) (15) \(\,\) and \(\,\) (16) :

\[\begin{split}\begin{array}{c} \boldsymbol{x}(t)\ \,=\ \,c_1\ \boldsymbol{x}^1(t)\ +\ c_2\ \boldsymbol{x}^2(t)\ +\ c_3\ \boldsymbol{x}^3(t) : \\ \\ \left[\begin{array}{c} x_1(t) \\ x_2(t) \\ x_3(t) \end{array}\right]\ =\ e^{\,3\,t}\ \left[\begin{array}{c} c_2 \\ c_1\,+\,c_2\,t \\ -\,c_1\,-\,c_2\,(1+t) \end{array}\right]\ +\ c_3\ e^{\,6\,t}\ \left[\begin{array}{c} 3 \\ 4 \\ 2 \end{array}\right] \\ \\ \qquad\ \ \begin{cases}\ \ \begin{array}{l} x_1(t)\ \,=\ \,c_2\ e^{\,3\,t}\ +\ 3\,c_3\ e^{\,6\,t} \\ x_2(t)\ \,=\ \,(c_1+c_2\;t)\ e^{\,3\,t}\ +\ 4\,c_3\ e^{\,6\,t} \\ x_3(t)\ \,=\ \,-\ [\,c_1+c_2\,(1+t)\,]\ e^{\,3\,t}\ +\ 2\,c_3\ \ e^{\,6\,t} \end{array}\end{cases} c_1,\,c_2,\,c_3\in R\,. \end{array}\end{split}\]
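A final numerical cross-check of Example 4 (NumPy assumed): the Jordan-chain conditions (13), the eigenpair for \(\lambda_3=6\), and the solution \(\boldsymbol{x}^2(t)=e^{\,3\,t}\,(t\,\boldsymbol{w}_1+\boldsymbol{w}_2)\) built from the chain.

```python
import numpy as np

# Numerical cross-check of Example 4 (NumPy assumed): the Jordan-chain
# conditions (13) for w1, w2, the eigenpair (6, w3), and the solution
# x^2(t) = e^{3t}(t w1 + w2) built from the chain.
A = np.array([[4.0, 1.0, 1.0],
              [2.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
w1 = np.array([0.0, 1.0, -1.0])
w2 = np.array([1.0, 0.0, -1.0])
w3 = np.array([3.0, 4.0,  2.0])

N = A - 3.0*np.eye(3)                        # A - lambda_{1,2} I_3
assert np.allclose(N @ w1, np.zeros(3))      # A w1 = 3 w1
assert np.allclose(N @ w2, w1)               # chain condition (A - 3I) w2 = w1
assert np.allclose(A @ w3, 6.0 * w3)         # A w3 = 6 w3

def x2(t):
    return np.exp(3*t) * (t*w1 + w2)         # solution (14) / (15)
def x2dot(t):
    return 3*x2(t) + np.exp(3*t) * w1        # product rule
for t in (0.0, 0.5, 1.2):
    assert np.allclose(x2dot(t), A @ x2(t))  # x^2 solves the system
```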