Fundamental Concepts in Linear Algebra

Linear Combination of Vectors

Suppose that a vector \(\,x\in V(K)\,\) can be expressed as

\[x\,=\,\alpha_1\,x_1\,+\,\alpha_2\,x_2\,+\,\ldots\,+\,\alpha_m\,x_m\,,\]

where \(\ \ \alpha_1,\,\alpha_2,\,\ldots,\,\alpha_m\in K\,,\ \ x_1,\,x_2,\,\ldots,\,x_m\in V\,.\ \)

Then the vector \(\,x\,\) is said to be a linear combination of vectors \(\ x_1,\,x_2,\,\ldots,\,x_m\ \) \(\\\) with coefficients \(\ \alpha_1,\,\alpha_2,\,\ldots,\,\alpha_m\,.\)

When all coefficients vanish, the combination is called trivial. \(\\\) A trivial linear combination of any vectors equals the zero vector:

(1)\[\alpha_1=\alpha_2=\ldots=\alpha_r=0\qquad\Rightarrow\qquad \alpha_1\,x_1\,+\,\alpha_2\,x_2\,+\,\ldots\,+\,\alpha_r\,x_r\ =\ \theta\,.\]

Let \(\ X\ \) be \(\,\) a set \(\,\) (possibly infinite) \(\,\) of vectors from \(\,V(K)\,.\ \) \(\\\) A span of the set \(\ X\ \) is defined as the set of all finite linear combinations of elements in \(\,X:\)

\[L(X)\ \ :\,=\ \ \left\{\ \ \ \sum_{k=1}^n\ \alpha_k\,x_k\,:\ \ n\in N,\ \ \alpha_k\in K,\ \ x_k\in X,\ \ k=1,2,\ldots,n\ \right\}\,.\]

For a finite set \(\ X\ =\ \{x_1,x_2,\ldots,x_m\}\ \) we get simply:

\[L(X)\,\equiv\,L(x_1,x_2,\ldots,x_m)\ =\ \left\{\,\alpha_1\,x_1 +\,\alpha_2\,x_2 +\,\ldots\,+\,\alpha_m\,x_m:\ \ \alpha_1,\alpha_2,\ldots,\alpha_m\in K\,\right\}\,.\]

It’s easy to check that \(\,L(X)\,\) is a subspace: \(\,L(X) < V\,.\ \) One then says that

  • the set \(\,X\,\) spans (generates) the subspace \(\ L(X)\,;\)

  • the subspace \(\,L(X)\,\) is spanned by the set \(\,X\ \) (or by the vectors in \(\,X\)) ;

  • \(\,X\,\) is a spanning set of the subspace \(\,L(X)\,.\)

If \(\ L(X) = V\,,\ \) the set \(\,X\,\) is a spanning set of the space \(\,V\ \) (generates the space \(\,V\)).
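
For column vectors with numerical entries, membership in a span can also be tested computationally. The sketch below is only an illustration and assumes numpy; the helper name in_span and the floating-point tolerance are our own choices, not part of the text. A vector \(\,x\,\) belongs to \(\,L(x_1,\ldots,x_m)\,\) exactly when the least-squares problem for the matrix with columns \(\,x_1,\ldots,x_m\,\) reproduces \(\,x\,\) with zero residual.

    import numpy as np

    def in_span(x, vectors, tol=1e-10):
        # Illustrative numerical test: does x belong to L(vectors) in R^n?
        A = np.column_stack(vectors)                     # columns are the spanning vectors
        coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)   # best coefficients alpha_k
        return bool(np.linalg.norm(A @ coeffs - x) < tol), coeffs

    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 0.0])

    print(in_span(2*v1 + 3*v2, [v1, v2]))                   # (True, array([2., 3.]))
    print(in_span(np.array([0.0, 0.0, 1.0]), [v1, v2])[0])  # False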

Example.

Suppose that \(\ \vec{v}_1,\,\vec{v}_2,\,\vec{v}_3\ \) are three non-coplanar (not lying in the same plane) \(\\\) geometric vectors fixed at a given point \(\,O.\ \,\) Then

  1. \(L(\vec{v}_1)\,=\, \left\{\ \alpha\,\vec{v}_1 :\ \alpha\in R\ \right\}\ \) is the set of vectors lying on the straight line \(\\\) with the direction vector \(\,\vec{v}_1\ \) and passing through the point \(\,O\,;\)

  2. \(L(\vec{v}_1,\vec{v}_2)\,=\, \left\{\ \alpha_1\,\vec{v}_1 + \alpha_2\,\vec{v}_2 :\ \alpha_1,\alpha_2\in R\ \right\}\ \) is a set of vectors \(\\\) in the plane passing through the point \(\,O\ \) and determined by the vectors \(\,\vec{v}_1,\,\vec{v}_2\,;\)

  3. \(L(\vec{v}_1,\vec{v}_2,\vec{v}_3)\,=\, \left\{\ \alpha_1\,\vec{v}_1+\alpha_2\,\vec{v}_2+\alpha_3\,\vec{v}_3 :\ \alpha_1,\alpha_2,\alpha_3\in R\ \right\}\ \) is the whole space \(\\\) of vectors fixed at the point \(\,O.\)

These subspaces are related by the inclusions

\[L(\vec{v}_1)\,<\, L(\vec{v}_1,\vec{v}_2)\,<\, L(\vec{v}_1,\vec{v}_2,\vec{v}_3)\,.\]

Linear Dependence and Independence

We shall consider sets of vectors \(\,x_1,x_2,\ldots,x_r\ \) from a vector space \(\,V\,\) over a field \(\,K.\)

By definition, \(\,\) a set \(\,\{x_1,x_2,\ldots,x_r\}\ \) is linearly dependent \(\,\) (or vectors \(\ x_1,x_2,\ldots,x_r\ \) are linearly dependent) \(\,\) if, and only if, \(\,\) there exists a non-trivial linear combination of these vectors equal to the zero vector \(\,\theta\).

A set \(\ \{x_1,x_2,\ldots,x_r\}\ \) is \(\,\) linearly independent \(\,\) (or vectors \(\,x_1,x_2,\ldots,x_r\ \) are linearly independent) \(\,\) if it is not linearly dependent, \(\,\) that is when every non-trivial linear combination of these vectors is different from the zero vector \(\,\theta\).

Therefore, a set \(\,\{x_1,x_2,\ldots,x_r\}\,\) is linearly dependent if, and only if, there exists a set \(\,\{\alpha_1,\alpha_2,\ldots,\alpha_r\}\,\) of scalars, \(\,\) not all zero, \(\,\) such that

(2)\[\alpha_1\,x_1\,+\,\alpha_2\,x_2\,+\,\ldots\,+\,\alpha_r\,x_r\ =\ \theta\,.\]

On the other hand, a set is linearly independent when the trivial linear combination of its vectors is the only one that equals the zero vector:

(3)\[\alpha_1\,x_1\,+\,\alpha_2\,x_2\,+\,\ldots\,+\,\alpha_r\,x_r\ =\ \theta \qquad\Rightarrow\qquad \alpha_1=\alpha_2=\ldots=\alpha_r=0\,.\]

It’s worth noting that the condition (3) is the converse of (1).

More generally, a countably infinite set \(\,S\,\) of vectors is linearly dependent when it contains a finite linearly dependent subset; such a set \(\,S\,\) is linearly independent when each of its finite subsets is linearly independent.
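
For vectors given as numerical columns, conditions (2) and (3) can be checked mechanically: \(\,r\,\) column vectors are linearly independent exactly when the matrix built from them has rank \(\,r.\ \) The following sketch assumes numpy and works only up to floating-point tolerance; the function name is ours.

    import numpy as np

    def is_linearly_independent(vectors):
        # r column vectors are l.idp. iff the matrix with these columns has rank r
        A = np.column_stack(vectors)
        return bool(np.linalg.matrix_rank(A) == len(vectors))

    u = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 1.0, 0.0])

    print(is_linearly_independent([u, v]))          # True
    print(is_linearly_independent([u, v, u + v]))   # False: u + v - (u + v) = theta is non-trivial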

Theorem 1. \(\,\) Vectors \(\ x_1,x_2,\ldots,x_r\,\ \ \) (\(r \geq 2\)) \(\ \) are linearly dependent if, and only if, \(\\\) at least one of them can be represented as a linear combination of the remaining ones [2]:

(4)\[x_i\ =\ \beta_1\,x_1\,+\,\ldots\,+\,\beta_{i-1}\,x_{i-1}\,+\, \beta_{i+1}\,x_{i+1}\,+\,\ldots\,+\,\beta_r\,x_r\,, \quad\exists\ i\in\{1,2,\ldots,r\}.\]

Note. \(\ \) ‘at least one’ does not mean ‘each’.

Proof.

\(\Rightarrow\,:\ \) We assume that the vectors \(\ x_1,x_2,\ldots,x_r\ \) are linearly dependent. Thus, let

(5)\[\alpha_1\,x_1\,+\,\alpha_2\,x_2\,+\,\ldots\,+\,\alpha_r\,x_r\ =\ \theta\,,\]

where not all of the coefficients \(\,\alpha_1,\alpha_2,\ldots,\alpha_r\,\) are zero, and let \(\ i\in\{\,1,2,\ldots,r\,\}\ \) be an index for which \(\ \alpha_i\neq 0.\ \)

The Eq. (5) can be rewritten as

(6)\[\alpha_i\,x_i\ =\ -\,\alpha_1\,x_1\,-\,\ldots\,-\,\alpha_{i-1}\,x_{i-1}\,-\, \alpha_{i+1}\,x_{i+1}\,-\,\ldots\,-\,\alpha_r\,x_r\,.\]

The condition \(\,\alpha_i\neq 0\,\) implies that there exists in \(\,K\,\) an element \(\,\alpha_i^{-1}\,\) such that \(\,\alpha_i\cdot\,\alpha_i^{-1}=1.\ \) Multiplying both sides of \(\,\) Eq. (6) \(\,\) by \(\ \,\alpha_i^{-1}\ \,\) and using the notation \(\ \beta_j\,=\,-\,\alpha_i^{-1}\,\alpha_j\,\) \(\\\) for \(\ j\,=\,1,\ldots,i-1,\ i+1,\ldots,r\,,\ \) we arrive at Eq. (4).

\(\Leftarrow\,:\ \) Now we assume that the condition (4) is true. Moving the term \(\,x_i\,\) to the right-hand side and taking into account the relation \(\ \,-\,x_i\,=\,(-1)\cdot x_i\,,\ \) we get

\[\beta_1\,x_1\,+\,\ldots\,+\,\beta_{i-1}\,x_{i-1}\,+\,(-1)\,x_i\,+\, \beta_{i+1}\,x_{i+1}\,+\,\ldots\,+\,\beta_r\,x_r\ = \theta\,.\]

Since \(\,-1\neq 0\,,\ \) the linear combination on the left-hand side above is non-trivial, hence vectors \(\ x_1,x_2,\ldots,x_r\ \) are linearly dependent. \(\quad\bullet\)

Corollary. \(\,\) Vectors \(\ \,x_1,x_2,\ldots,x_r\ \,\) are linearly independent if, and only if, \(\\\) none of them can be written as a linear combination of the remaining ones.

It’s easy to justify the following useful statements \(\\\) (l.dp. = linearly dependent, \(\,\) l.idp. = linearly independent).

  • A set \(\,\{x\}\,\) of a single vector \(\,x\ \) is \(\,\) l.dp. \(\,\) if, and only if, \(\ x = \theta\,.\)

  • If a subset of a given set is \(\,\) l.dp., \(\,\) then the whole set is also \(\,\) l.dp.
    Conclusion 1.: \(\,\) Every set containing the zero vector is \(\,\) l.dp.;
    Conclusion 2.: \(\,\) If some two vectors of the set are proportional: \(\ x_j = \alpha\,x_i\,,\ \) then the set is l.dp.
  • Every subset of a \(\,\) l.idp. set \(\,\) is also \(\,\) l.idp.
    Conclusion 3.: \(\,\) A \(\,\) l.idp. set \(\,\) contains neither the zero vector nor proportional vectors.

Example 0. \(\,\) Consider the vector space \(\,C(R)\,\) of complex numbers over the real field \(\,R\,.\)

The two vectors (actually numbers), \(\ 1\ \) and \(\ i\,,\ \) are \(\,\) l.idp., \(\,\) since for arbitrary \(\,\alpha,\beta\in R\,:\)

\[\alpha\cdot 1\,+\,\beta\cdot i\ =\ 0 \qquad\Rightarrow\qquad \alpha = \beta = 0\,.\]

On the other hand, if we interpret \(\ 1\ \) and \(\ i\ \,\) as vectors from the complex space \(\,C(C)\,,\ \) \(\\\) they become \(\,\) l.dp., \(\,\) since a non-trivial combination thereof is equal to the zero vector:

\[1\cdot 1\,+\,i\cdot i\ =\ 0\,.\]
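
The last identity is easy to confirm with ordinary complex arithmetic (a small illustrative check; in Python the imaginary unit is written 1j):

    # Over C the combination 1*1 + i*i is non-trivial yet equals zero,
    # so the set {1, i} is linearly dependent in the space C(C).
    print(1 * 1 + 1j * 1j)   # 0j
    # Over R, by contrast, alpha*1 + beta*i = 0 forces alpha = beta = 0,
    # because alpha and beta are the real and imaginary parts of the sum.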

Example 1. \(\ \) Let \(\quad x\ =\ \left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right]\,,\quad y\ =\ \left[\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right]\,,\quad z\ =\ \left[\begin{array}{c} 2 \\ 2 \\ 2 \end{array}\right]\quad\in\ R^3\,.\)

The set \(\,\{x,y,z\}\,\) is \(\,\) l.dp., \(\,\) because

  • \(\,2\,x\,+\,2\,y\,-\,z\,=\,\theta\quad\) (a non-trivial linear combination of the vectors equals \(\,\theta\));

  • \(\,z\,=\,2\,x\,+\,2\,y\quad\) (one of the vectors can be expressed linearly by the remaining two).

The two conditions above are equivalent, so it suffices to verify either one of them.
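
A quick numerical cross-check of this example (an illustrative sketch assuming numpy; rank and least squares are computed up to floating-point tolerance):

    import numpy as np

    x = np.array([1.0, 0.0, 1.0])
    y = np.array([0.0, 1.0, 0.0])
    z = np.array([2.0, 2.0, 2.0])

    print(np.linalg.matrix_rank(np.column_stack([x, y, z])))   # 2 < 3: the set {x, y, z} is l.dp.

    # Express z through x and y: solve [x y] c = z.
    c, *_ = np.linalg.lstsq(np.column_stack([x, y]), z, rcond=None)
    print(c)                                                   # [2. 2.], i.e. z = 2x + 2y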

Example 2. \(\ \) Let \(\quad x\ =\ \left[\begin{array}{c} 2 \\ 2 \end{array}\right]\,,\quad y\ =\ \left[\begin{array}{c} 1 \\ 0 \end{array}\right]\quad\in\ R^2\,.\)

The set \(\,\{x,y\}\,\) is \(\,\) l.idp. \(\ \) Indeed, let us assume that

\[\begin{split}\alpha\,x\,+\,\beta\,y\,=\,\theta\,,\qquad\text{that is}\qquad \alpha\ \left[\begin{array}{c} 2 \\ 2 \end{array}\right]\ +\ \beta\ \left[\begin{array}{c} 1 \\ 0 \end{array}\right]\ =\ \left[\begin{array}{c} 0 \\ 0 \end{array}\right]\,.\end{split}\]

Performing the operations on the left-hand side, we arrive at the system of equations

\begin{alignat*}{3} \ 2\,\alpha & {\,} + {\,} & \beta & {\;} = {\;} & 0 \\ 2\,\alpha & {\,} {\,} & & {\;} = {\;} & 0 \end{alignat*}

which has only the zero solution: \(\ \alpha = \beta = 0\,.\ \) Thus the vectors \(\ x,y\ \) fulfil the condition

\[\alpha\,x\,+\,\beta\,y\,=\,\theta\qquad\Rightarrow\qquad\alpha = \beta = 0\,,\]

wherefrom, \(\ \) according to (3), \(\ \) they are \(\ \) l.idp.
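
The same conclusion can be checked numerically: the matrix with columns \(\,x,\,y\,\) is square and invertible, so the homogeneous system has only the zero solution (a sketch assuming numpy).

    import numpy as np

    x = np.array([2.0, 2.0])
    y = np.array([1.0, 0.0])
    A = np.column_stack([x, y])               # [[2., 1.], [2., 0.]]

    print(np.linalg.matrix_rank(A))           # 2: the set {x, y} is l.idp.
    print(np.linalg.solve(A, np.zeros(2)))    # the zero vector: only the trivial combination gives theta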

Basis of a Vector Space

A set \(\,B\subset V\,\) is by definition a basis of the vector space \(\,V(K)\,\ \) when every vector \(\,x\in V\,\) can be represented in a unique way as a linear combination of vectors from \(\,B\,:\ \)

\[x\ \,=\ \,\displaystyle\sum_{v\,\in\,B}\,\alpha_v\ v\,, \qquad\text{where}\quad \alpha_v\in K,\ v\in B,\ \ \text{and only finitely many coefficients}\ \,\alpha_v\ \,\text{are non-zero}.\]

The scalar \(\,\alpha_v\,\) is called the coordinate of the vector \(\,x\,\) corresponding to the basis vector \(\,v\in B.\)

Thus every vector in a space \(\,V\,\) is uniquely characterized by the family \(\,\left(\alpha_v\right)_{v\,\in\,B}\,\) of its coordinates. In the present textbook we restrict ourselves to vector spaces with finite bases (finite-dimensional spaces).

To represent vectors by columns of their coordinates and linear transformations by matrices, one has to impose an order on basis vectors and, consequently, on coordinates. This motivates the following modification of the definition of basis.

Suppose a basis \(\,B\,\) is a set of \(\,n\,\) vectors. We define a corresponding \(\,\) ordered basis \(\ \mathcal{B}\ \,\) as a family of vectors in \(\,B,\ \) indexed by the set \(\ \mathrm{n} = \{1,2,\ldots,n\}\ \) of the first \(\,n\,\) natural numbers:

\[\mathcal{B}\ =\ \left(v_i\right)_{i\,\in\,\mathrm{n}}\ =\ \left(\,v_1,\,v_2,\,\ldots,\,v_n\,\right)\,.\]

Accordingly, the coordinates form a family

\[\mathcal{A}\ =\ \left(\alpha_i\right)_{i\,\in\,\mathrm{n}}\ =\ \left(\,\alpha_1,\,\alpha_2,\,\ldots,\,\alpha_n\,\right)\,, \quad\text{where}\quad\alpha_i:\,=\alpha_{v_i}\,,\ \ \forall\ i\in\mathrm{n}\,.\]

In the following we shall distinguish between a basis \(\,B\,=\,\{v_i\}_{i\,\in\,\mathrm{n}}\,=\,\{v_1,v_2,\ldots,v_n\}\,\) (a set) and \(\,\) an ordered basis \(\ \mathcal{B}\,=\,(v_i)_{i\,\in\,\mathrm{n}}\,=\,(v_1,v_2,\ldots,v_n)\ \) (a sequence). In either event, every vector \(\,x\in V\,\) has a unique representation as a linear combination of basis vectors:

(7)\[x\ =\ \alpha_1\,v_1\,+\,\alpha_2\,v_2\,+\,\ldots\,+\,\alpha_n\,v_n\,.\]
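
In a numeric space such as \(\,R^n,\ \) finding the coordinates \(\,\alpha_1,\ldots,\alpha_n\,\) in (7) amounts to solving the linear system whose coefficient matrix has the ordered basis vectors as columns. A hedged sketch (numpy assumed, basis chosen for illustration only):

    import numpy as np

    # An ordered basis of R^3 (the columns of B) and a vector x to expand.
    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 0.0])
    v3 = np.array([0.0, 0.0, 1.0])
    B = np.column_stack([v1, v2, v3])

    x = np.array([3.0, 5.0, 7.0])
    alpha = np.linalg.solve(B, x)   # unique, since the basis vectors are l.idp.
    print(alpha)                    # [3. 5. 4.]
    print(np.allclose(alpha[0]*v1 + alpha[1]*v2 + alpha[2]*v3, x))   # True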

Theorem 2. \(\,\) A set \(\,B\subset V\,\) is a basis of the vector space \(\,V(K)\,\) if, and only if, \(\,B\,\) is a linearly independent spanning set of the space \(\,V.\)

Proof. \(\,\) Let \(\,B = \{v_1,v_2,\ldots,v_n\}\,.\)

\(\Rightarrow\,:\ \) We assume that \(\,B\,\) is a basis of \(\,V.\)

The condition (7) implies that \(\ V \subset L(B)\,.\ \) On the other hand, \(\,\) obviously \(\ L(B) \subset V.\ \) \(\\\) Thus \(\ V = L(B)\,,\ \) that is \(\,B\,\) is a spanning set of the space \(\,V.\)

To demonstrate the linear independence of the set \(\,B\,,\) we notice that the identity

\[0\cdot v_1\,+\,0\cdot v_2\,+\,\ldots\,+\,0\cdot v_n\ =\ \theta\]

can be interpreted as a representation of the zero vector \(\,\theta\,\) in the basis \(\,B\,.\ \) From the uniqueness of this representation, we deduce that the trivial linear combination of vectors from \(\,B\,\) is the only combination that equals the zero vector. This means the linear independence of the set \(\,B.\)

\(\Leftarrow\,:\ \) Now we assume that \(\,B\,\) is a linearly independent set spanning the space \(\,V.\)

Since the set \(\,B\,\) spans the space \(\,V,\ \) every vector \(\,x\in V\,\) has the form (7). \(\\\) It remains to prove that such a representation is unique.

Let us suppose that, on the contrary, a vector \(\,x\ \) can be expressed in two different ways:

\[\begin{split}\begin{array}{l} x\ =\ \alpha_1\,v_1\,+\,\alpha_2\,v_2\,+\,\ldots\,+\,\alpha_n\,v_n\,, \\ x\ =\ \beta_1\,v_1\,+\,\beta_2\,v_2\,+\,\ldots\,+\,\beta_n\,v_n\,, \end{array} \qquad\text{while}\quad\beta_i\neq\alpha_i\,,\ \ \exists\ i\in\mathrm{n}.\end{split}\]

By subtracting the second equation from the first we obtain

\[(\alpha_1-\beta_1)\ v_1\,+\,(\alpha_2-\beta_2)\ v_2\,+\,\ldots\,+\, (\alpha_n-\beta_n)\ v_n\ =\ \theta\,, \quad\alpha_i-\beta_i\neq 0\,,\ \exists\ i\in\mathrm{n}.\]

Thus we conclude that a non-trivial linear combination of vectors \(\,v_1,\,v_2,\,\ldots,\,v_n\,\) equals the zero vector. This is in contradiction with the premise on the linear independence of the set \(\,B.\ \)

The representation (7) is therefore unique and the set \(\,B\,\) is a basis of the space \(\,V.\) \(\quad\bullet\)

The necessary and sufficient condition given in Theorem 2 is often used as the definition of a basis. Below we present yet another approach to the notion of basis, built upon the following definition.

A linearly independent set \(\,M\,\) of vectors is called a maximal linearly independent (m.l.idp.) set when no other linearly independent set contains \(\,M\,\) as a proper subset.

In other words, \(\,\) a linearly independent set \(\,M\subset V\,\) is \(\,\) a \(\,\) m.l.idp. set of vectors in a space \(\,V\ \,\) if, and only if, \(\,\) for every vector \(\ x\in V\ \) not belonging to \(\,M\,\) the set \(\ \,M'=\,\{x\}\cup M\ \,\) is linearly dependent.

Theorem 3. \(\,\) A set \(\,B\subset V\,\) is a basis of the vector space \(\,V(K)\,\) \(\,\) if, and only if, \(\ B\ \) is a maximal linearly independent set.

Proof. \(\,\) Let \(\,B = \{v_1,v_2,\ldots,v_n\}\,.\)

\(\Rightarrow\,:\ \) We assume that \(\,B\,\) is a basis of \(\,V.\)

Then \(\,B\,\) is a \(\,\) l.idp. set, \(\,\) and for every vector \(\,x\in V\,\) there holds the expansion (7). \(\\\) This means that for every vector \(\,x\in V\,\) the set \(\,\{x,\,v_1,v_2,\ldots,v_n\}\,\) is \(\,\) l.dp. \(\\\) Therefore the set \(\,B = \{v_1,v_2,\ldots,v_n\}\,\) is \(\,\) a \(\,\) m.l.idp. set.

\(\Leftarrow\,:\ \) Now we assume that \(\,B\ \) is \(\,\) a \(\,\) m.l.idp. set of vectors in \(\,V.\)

Then for every vector \(\,x\in V\,\) the set \(\,\{x,\,v_1,v_2,\ldots,v_n\}\ \) is \(\,\) l.dp.:

(8)\[\alpha_0\ x\,+\, \alpha_1\,v_1\,+\,\alpha_2\,v_2\,+\,\ldots\,+\,\alpha_n\,v_n\,=\, \theta\,,\]

where the linear combination on the left-hand side is non-trivial, \(\\\) that is not all coefficients \(\,\alpha_0,\,\alpha_1,\,\ldots,\,\alpha_n\,\) are zeroes.

One may ask whether \(\,\alpha_0\,\) can vanish. If that were the case, we would obtain the equation

\[\alpha_1\,v_1\,+\,\alpha_2\,v_2\,+\,\ldots\,+\,\alpha_n\,v_n\,=\,\theta\,,\]

where not all \(\,\alpha_1,\,\ldots,\,\alpha_n\,\) are zeroes. This would mean that the vectors \(\,v_1,v_2,\ldots,v_n\,\) are linearly dependent, in contradiction with the assumption that \(\,B\ \) is linearly independent. \(\\\) Thus \(\ \alpha_0\neq 0\ \,\) and we may rewrite Eq. (8) as

\[x\ =\ \beta_1\,v_1\,+\,\beta_2\,v_2\,+\,\ldots\,+\,\beta_n\,v_n\,,\]

where \(\ \,\beta_i\,=\,-\,\alpha_0^{-1}\,\alpha_i\ \,\) for \(\ i\,=\,1,\ldots,n\,.\ \) The above condition being fulfilled for every \(\,x\in V,\ \) we infer that \(\,B\ \) is \(\,\) a \(\,\) l.idp. spanning set of the space \(\,V,\ \) hence is a basis of \(\,V.\ \) \(\ \ \bullet\)
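
The proof of Theorem 3 suggests a simple computational procedure for numeric column vectors: scan a list of vectors and keep each one that is not a linear combination of those already kept; what remains is a maximal linearly independent subset, hence a basis of the span of the list. The helper below is only a sketch (its name, the use of numpy and the rank test are our assumptions).

    import numpy as np

    def maximal_independent_subset(vectors):
        # Keep a vector iff it raises the rank, i.e. iff it is not
        # a linear combination of the vectors kept so far.
        kept = []
        for v in vectors:
            if np.linalg.matrix_rank(np.column_stack(kept + [v])) == len(kept) + 1:
                kept.append(v)
        return kept

    vs = [np.array([1.0, 0.0, 1.0]),
          np.array([2.0, 0.0, 2.0]),    # proportional to the first vector: dropped
          np.array([0.0, 1.0, 0.0]),
          np.array([1.0, 1.0, 1.0])]    # sum of the two vectors kept so far: dropped
    print(len(maximal_independent_subset(vs)))   # 2: a basis of L(vs) has two elements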

Dimension of a Vector Space

The concept of the dimension of a vector space is based on

Theorem 4. \(\ \) All bases of a given vector space have equal cardinality. In particular, if a vector space \(\,V(K)\,\) has an \(\,n\)-element basis \(\,B,\ \) then every basis of \(\,V\,\) has the same number \(\,n\,\) of elements (a proof for the case of a finite basis is given in the Appendix).

Therefore the following definition makes sense.

If a vector space \(\,V\,\) has a finite basis \(\,B,\ \) then the number of elements of \(\,B\ \) \(\\\) is called the dimension of the space \(\,V\,\) and is denoted by \(\,\text{dim}\,V.\)

Vector spaces having finite bases are called finite-dimensional; if \(\,\text{dim}\,V = n\,,\) then \(\,V\,\) is an \(\,n\)-dimensional vector space. Additionally, we assume by convention that the dimension of the zero space (consisting of the zero vector only) equals zero: \(\ \text{dim}\,\{\theta\} = 0\,.\)

In Appendix A4 we prove the following useful

Theorem 5. \(\ \) In an \(\,n\)-dimensional vector space:

  1. every set consisting of more than \(\,n\,\) vectors is linearly dependent (a numerical illustration follows the list);

  2. every linearly independent set of \(\,n\,\) vectors is a basis.
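
Item 1 is easy to observe numerically (an illustrative check assuming numpy): any four vectors of the three-dimensional space \(\,R^3\,\) form a matrix of rank at most 3, so they are necessarily linearly dependent.

    import numpy as np

    rng = np.random.default_rng(0)
    vectors = rng.standard_normal((3, 4))     # four random column vectors in R^3
    print(np.linalg.matrix_rank(vectors))     # at most 3 < 4, so the four vectors are l.dp.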

Examples.

  • In the real space \(\,R(R),\ \) as well as in the complex space \(\,C(C),\) every one-element set containing a non-zero number, \(\,\) for example \(B = \{1\},\ \) is a basis. Thus \(\ \text{dim}\,R(R) = \text{dim}\,C(C) = \,1\,.\ \) On the other hand, \(\,\) in the space \(\,C(R)\,\) of complex numbers over the real field, a natural basis is the set \(\ B = \{1,\,i\},\ \) hence \(\ \text{dim}\,C(R) = 2\,.\)

  • In the space \(\,V\,\) of geometric (fixed or free) vectors, any set of three non-coplanar vectors is a basis. The most convenient choice is a triple of mutually perpendicular unit vectors \(\ B = \{\,\vec{e}_1,\,\vec{e}_2,\,\vec{e}_3\}.\ \,\) So our ‘physical’ space is three-dimensional also in the algebraic sense: \(\ \text{dim}\,V = 3\,.\)

  • In the vector space \(\,K^n\,\) composed of \(\,n\)-element column vectors with entries from the field \(\,K,\ \) the most handy is \(\,\) the \(\,\) canonical basis \(\ E\,=\,\{\,e_1,\,e_2,\,\ldots,\,e_n\}\,,\ \) where

    \[\begin{split}e_1\ =\ \left[\begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right]\,, \quad e_2\ =\ \left[\begin{array}{c} 0 \\ 1 \\ \vdots \\ 0 \end{array}\right]\,, \quad\ldots,\quad e_n\ =\ \left[\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \end{array}\right]\,.\end{split}\]

    Consequently, \(\ \ \text{dim}\,K^n = n\,,\ \ \forall\ n\in N.\)

  • In the subspace \(\ \ W_p\ =\ \left\{\ \,\left[\begin{array}{c} x_1 \\ \vdots \\ x_p \\ 0 \\ \vdots \\ 0 \end{array}\right]\ :\quad x_i\in K\,,\ \ i = 1,2,\ldots,p\;\right\}\ \ <\ \ K^n\,,\ \)

    where \(\ 1 \leq p < n\,,\ \) a basis may be e.g. \(\,E_p = \{\,e_1,\,e_2,\,\ldots,\,e_p\},\ \,\) hence \(\ \text{dim}\,W_p = p.\)

[2] \(\ \) The notation \(\ \exists\ i\in I\ \) means "there exists \(\ i\in I\)".

[3] \(\ \) A family \(\,(x_i)_{i\in I}\,\) of elements \(\,x_i\in X\,,\ \) indexed by \(\,I\,,\ \) is a map \(\,I\rightarrow X\,,\ \) with emphasis on the collection of its values in \(\,X\,\) rather than on the map itself.