Section 5.3 Linearly Independent Sets of Vectors
Definition 5.3.1
A set of vectors \(\left\{ \vec{v}_1, \vec{v}_2, \ldots,
\vec{v}_n \right\}\) is called linearly independent if the only solution to the homogeneous equation
\begin{equation}
c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_n \vec{v}_n = \vec{0}
\label{eqn-linear-independence}\tag{5.3.1}
\end{equation}
is the trivial solution, \(c_1 = c_2 = \cdots = c_n =
0\text{.}\) Otherwise the set is linearly dependent.
Example 5.3.3 A Set of Linearly Dependent Vectors Has at Least One Redundant Vector
Consider the vectors,
\begin{equation*}
\vec{v}_1 = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \quad
\vec{v}_2 = \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}, \quad
\vec{v}_3 = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix}
\end{equation*}
Notice that
\begin{equation}
2\vec{v}_1 - \vec{v}_2 + \vec{v}_3 = \vec{0},
\label{eqn-linearly-dependent-linear-combination}\tag{5.3.2}
\end{equation}
thus these vectors are linearly dependent. Since none of the coefficients are zero, we can solve equation (5.3.2) for \(\vec{v}_1\text{,}\) \(\vec{v}_2\text{,}\) or \(\vec{v}_3\text{.}\) For example, solving for \(\vec{v}_3\) yields \(\vec{v}_3 =
-2\vec{v}_1 + \vec{v}_2\text{.}\) Or equivalently,
\begin{equation*}
\begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} =
-2\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} +
\begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}
\end{equation*}
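Relation (5.3.2) is also easy to verify by machine. Here is a minimal sketch, assuming the SymPy library is available (the tooling choice is ours, not part of the text):

```python
from sympy import Matrix

# The three vectors from Example 5.3.3, written as column vectors.
v1 = Matrix([1, -1, 0])
v2 = Matrix([2, 0, 1])
v3 = Matrix([0, 2, 1])

# Verify the dependence relation 2*v1 - v2 + v3 = 0.
print(2*v1 - v2 + v3)  # Matrix([[0], [0], [0]])
```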
So how does one go about determining whether a set is linearly independent or linearly dependent? The answer is rather simple: we use the fact that a linear combination equation can be transformed into an equivalent matrix equation. Once the equation is expressed as a matrix equation, we apply the one algorithm that we employ to answer all questions in Linear Algebra, namely Gaussian elimination or, as needed, Gauss-Jordan elimination.
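In code, this recipe amounts to stacking the vectors as the columns of a matrix and row reducing: the set is linearly independent exactly when every column ends up with a pivot. A sketch, again assuming SymPy (the helper name is ours, introduced for illustration):

```python
from sympy import Matrix

def is_linearly_independent(vectors):
    # Stack the vectors as the columns of one matrix.  The columns are
    # linearly independent exactly when every column of the row-reduced
    # matrix contains a pivot, i.e. when the rank equals the number of
    # vectors.
    A = Matrix.hstack(*vectors)
    return A.rank() == len(vectors)

# The set from Example 5.3.3 is linearly dependent:
vs = [Matrix([1, -1, 0]), Matrix([2, 0, 1]), Matrix([0, 2, 1])]
print(is_linearly_independent(vs))  # False
```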
Example 5.3.4
Determine whether the following set of vectors is linearly independent or linearly dependent. If the set is linearly dependent, express the first vector as a linear combination of the other vectors.
\begin{equation*}
\left\{ \,
\begin{bmatrix}
1 \\ 4 \\ 7
\end{bmatrix},
\begin{bmatrix}
2 \\ 5 \\ 8
\end{bmatrix},
\begin{bmatrix}
3 \\ 6 \\ 9
\end{bmatrix} \,
\right\}
\end{equation*}
Solution. We must determine whether the only solution to,
\begin{equation*}
c_1 \begin{bmatrix}
1 \\ 4 \\ 7
\end{bmatrix} +
c_2 \begin{bmatrix}
2 \\ 5 \\ 8
\end{bmatrix} +
c_3 \begin{bmatrix}
3 \\ 6 \\ 9
\end{bmatrix} =
\begin{bmatrix}
0 \\ 0 \\ 0
\end{bmatrix}
\end{equation*}
is the trivial solution, i.e., \(c_1 = c_2 = c_3 = 0\text{.}\)
This question is equivalent to asking whether the augmented matrix,
\begin{equation*}
\left[
\begin{array}{rrr|r}
1 \amp 2 \amp 3 \amp 0 \\
4 \amp 5 \amp 6 \amp 0 \\
7 \amp 8 \amp 9 \amp 0
\end{array}
\right]
\end{equation*}
has only the trivial solution \(c_1 = c_2 = c_3 = 0\text{.}\) Now we could perform Gaussian elimination on this augmented matrix, but we know from Theorem <<thm-omnibus>> that the trivial solution will be the unique solution exactly when the determinant of the coefficient matrix is nonzero. Computing determinants of small matrices is quick and easy, so checking the determinant first may save us some work.
\begin{equation*}
\begin{vmatrix}
1 \amp 2 \amp 3 \\
4 \amp 5 \amp 6 \\
7 \amp 8 \amp 9
\end{vmatrix} =
1\cdot \begin{vmatrix} 5 \amp 6 \\ 8 \amp 9 \end{vmatrix} -
2\cdot \begin{vmatrix} 4 \amp 6 \\ 7 \amp 9 \end{vmatrix} +
3\cdot \begin{vmatrix} 4 \amp 5 \\ 7 \amp 8 \end{vmatrix} =
-3+12-9 = 0
\end{equation*}
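The cofactor expansion can be double-checked in one line; a sketch, again assuming SymPy:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

# A zero determinant means the homogeneous system has nontrivial solutions.
print(A.det())  # 0
```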
The vanishing determinant tells us that the linear system has infinitely many solutions, so to determine the solution set we must transform the matrix into reduced row-echelon form (RREF) and then parametrize the solution set. The reduced row-echelon form is:
\begin{equation*}
\left[
\begin{array}{rrr|r}
1 \amp 0 \amp -1 \amp 0 \\
0 \amp 1 \amp 2 \amp 0 \\
0 \amp 0 \amp 0 \amp 0
\end{array}
\right]
\end{equation*}
This matrix has one column that does not contain a leading coefficient, namely the \(c_3\) column, so we set \(c_3
= t\text{,}\) yielding the following parametrization of the solution set.
\begin{equation*}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} =
\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}t
\end{equation*}
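Both the reduced row-echelon form and this parametrization can be recovered computationally; a sketch assuming SymPy:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

# rref() returns the reduced row-echelon form and the pivot columns.
R, pivots = A.rref()
print(R)       # Matrix([[1, 0, -1], [0, 1, 2], [0, 0, 0]])
print(pivots)  # (0, 1)

# nullspace() returns the direction vector of the solution set.
print(A.nullspace())  # [Matrix([[1], [-2], [1]])]
```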
Letting \(t=1\) gives us the following nontrivial solution:
\begin{equation*}
\begin{bmatrix}
1 \\ 4 \\ 7
\end{bmatrix}
-2 \begin{bmatrix}
2 \\ 5 \\ 8
\end{bmatrix} +
\begin{bmatrix}
3 \\ 6 \\ 9
\end{bmatrix} =
\begin{bmatrix}
0 \\ 0 \\ 0
\end{bmatrix}
\end{equation*}
Finally, one way we can express the first column vector as a linear combination of the other two vectors is as follows:
\begin{equation*}
\begin{bmatrix}
1 \\ 4 \\ 7
\end{bmatrix} =
2 \begin{bmatrix}
2 \\ 5 \\ 8
\end{bmatrix} -
\begin{bmatrix}
3 \\ 6 \\ 9
\end{bmatrix}
\end{equation*}
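That final relation is also quick to confirm by machine (SymPy assumed):

```python
from sympy import Matrix

v1 = Matrix([1, 4, 7])
v2 = Matrix([2, 5, 8])
v3 = Matrix([3, 6, 9])

# v1 should equal 2*v2 - v3.
print(v1 == 2*v2 - v3)  # True
```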
Exercises
1
Use the definition to show that the set of vectors, \(\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix},
\begin{bmatrix} 1 \\ -1 \end{bmatrix} \right\}\) is linearly independent.
2
Use the definition to show that the set of vectors, \(\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix},
\begin{bmatrix} -1 \\ 1 \end{bmatrix},
\begin{bmatrix} 1 \\ 0 \end{bmatrix}
\right\}\) is linearly dependent. Write the third vector as a linear combination of the first two.
Hint. Write a linear combination of the three column vectors scaled by \(c_1\text{,}\) \(c_2\text{,}\) and \(c_3\text{,}\) and set it equal to the zero vector.
3
Is it possible to write the vector \(\vec{w} =
(2,-6,3)\) as a linear combination of the vectors \(\vec{v}_1 = (1,-2,-1)\) and \(\vec{v}_2 = (3,-5,4)\text{?}\)
Hint. Solve the augmented matrix problem that corresponds to the linear combination equation,
\begin{equation*}
c_1 \vec{v}_1 + c_2 \vec{v}_2 = \vec{w}.
\end{equation*}
4
Sometimes you can determine whether a set of vectors is linearly dependent by inspection; that is, you can find a linear combination of the first two vectors which yields the third just by inspecting the numbers in each vector, without solving a system of equations. Find a linear combination of the first two vectors which yields the third.
\begin{equation*}
\vec{v}_1 = (1, 0, 3, 0),
\vec{v}_2 = (0, -1, 1, 1),
\vec{v}_3 = (2, 1, 5, -1)
\end{equation*}
Hint. The location of the zeros in each vector is very useful.
5
Determine whether the following set of vectors from \(\R^4\) is linearly independent or linearly dependent.
\begin{equation*}
\left\{ \:
\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \:
\begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix}, \:
\begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \:
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \:
\right\}
\end{equation*}
6
Find a nontrivial parametric solution to:
\begin{equation*}
c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} +
c_2 \begin{bmatrix} 0 \\ 2 \\ 0 \end{bmatrix} +
c_3 \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} +
c_4 \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\end{equation*}
7
Find a nontrivial parametric solution to:
\begin{equation*}
c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} +
c_2 \begin{bmatrix} 0 \\ 2 \\ 0 \\ 2 \end{bmatrix} +
c_3 \begin{bmatrix} 0 \\ 2 \\ 1 \\ 3 \end{bmatrix} +
c_4 \begin{bmatrix} 1 \\ 2 \\ 4 \\ 6 \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
\end{equation*}
8
Prove the following claim: if a finite set of vectors \(S\) contains the zero vector, then the set is linearly dependent.
9
The set \(\left\{ \vec{v}_1, \vec{v}_2 \right\}\) is known to be linearly independent. Use the definition of linear independence to show that the set \(\left\{ \vec{u}_1,\: \vec{u}_2 \right\}\text{,}\) where
\begin{align*}
\vec{u}_1 \amp = \vec{v}_1 + \vec{v}_2\\
\vec{u}_2 \amp = \vec{v}_1 - \vec{v}_2
\end{align*}
is also linearly independent.
Hint. You want to show that the only solution to:
\begin{equation*}
c_1 \vec{u}_1 + c_2 \vec{u}_2 = \vec{0},
\end{equation*}
is \(c_1 = c_2 = 0\text{,}\) but you can rewrite the above as:
\begin{equation*}
c_1 (\vec{v}_1 + \vec{v}_2) + c_2 (\vec{v}_1 - \vec{v}_2)
= \vec{0}.
\end{equation*}
Use the distributive property to rearrange the last equation.