

### Section 5-2 : Review : Matrices & Vectors

This section is intended to be a catch all for many of the basic concepts that are used occasionally in working with systems of differential equations. There will not be a lot of details in this section, nor will we be working large numbers of examples. Also, in many cases we will not be looking at the general case since we won’t need the general cases in our differential equations work.

Let’s start with some of the basic notation for matrices. An \(n \times m\) (this is often called the **size** or **dimension** of the matrix) matrix is a matrix with \(n\) rows and \(m\) columns and the entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column is denoted by \(a_{ij}\). A shorthand method of writing a general \(n \times m\) matrix is the following.

\[A = {\left( {\begin{array}{*{20}{c}}{{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1m}}}\\{{a_{21}}}&{{a_{22}}}& \cdots &{{a_{2m}}}\\ \vdots & \vdots & \ddots & \vdots \\{{a_{n1}}}&{{a_{n2}}}& \cdots &{{a_{nm}}}\end{array}} \right)_{n \times m}}\]

The size or dimension of a matrix is subscripted as shown if required. If it’s not required or clear from the problem the subscripted size is often dropped from the matrix.

#### Special Matrices

There are a few “special” matrices out there that we may use on occasion. The first special matrix is the **square matrix**. A square matrix is any matrix whose size (or dimension) is \(n \times n\). In other words, it has the same number of rows as columns. In a square matrix the diagonal that starts in the upper left and ends in the lower right is often called the **main diagonal**.

The next two special matrices that we want to look at are the zero matrix and the identity matrix. The **zero matrix**, denoted \(0_{n \times m}\), is a matrix all of whose entries are zeroes. The **identity matrix** is a square \(n \times n\) matrix, denoted \(I_{n}\), whose main diagonal entries are all 1’s and all the other entries are zero. Here are the general zero and identity matrices.

\[{0_{n \times m}} = \left( {\begin{array}{*{20}{c}}0& \cdots &0\\ \vdots & \ddots & \vdots \\0& \cdots &0\end{array}} \right)\hspace{0.25in}{I_n} = \left( {\begin{array}{*{20}{c}}1&0& \cdots &0\\0&1& \cdots &0\\ \vdots & \vdots & \ddots & \vdots \\0&0& \cdots &1\end{array}} \right)\]

In matrix arithmetic these two matrices will act in matrix work like zero and one act in the real number system.
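This analogy with zero and one is easy to check numerically. Here is a minimal sketch, assuming NumPy is available (the matrix \(A\) here is made up purely for illustration):

```python
import numpy as np

A = np.array([[3.0, -2.0],
              [-9.0, 1.0]])  # an arbitrary 2x2 matrix for illustration

Z = np.zeros((2, 2))  # the zero matrix 0_{2x2}
I = np.eye(2)         # the identity matrix I_2

# the zero matrix acts like 0 under addition,
# the identity matrix acts like 1 under multiplication
add_check = A + Z   # equals A
mul_check = I @ A   # equals A
```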

The last two special matrices that we’ll look at here are the **column matrix** and the **row matrix**. These are matrices that consist of a single column or a single row. In general, they are,

\[x = {\left( {\begin{array}{*{20}{c}}{{x_1}}\\{{x_2}}\\ \vdots \\{{x_n}}\end{array}} \right)_{n \times 1}}\hspace{0.25in}y = {\left( {\begin{array}{*{20}{c}}{{y_1}}&{{y_2}}& \cdots &{{y_m}}\end{array}} \right)_{1 \times m}}\]

We will often refer to these as **vectors**.

#### Arithmetic

We next need to take a look at arithmetic involving matrices. We’ll start with **addition** and **subtraction** of two matrices. So, suppose that we have two \(n \times m\) matrices, \(A\) and \(B\). The sum (or difference) of these two matrices is then,

\[A \pm B = {\left( {{a_{ij}}} \right)_{n \times m}} \pm {\left( {{b_{ij}}} \right)_{n \times m}} = {\left( {{a_{ij}} \pm {b_{ij}}} \right)_{n \times m}}\]

The sum or difference of two matrices of the same size is a new matrix of identical size whose entries are the sum or difference of the corresponding entries from the original two matrices. Note that we can’t add or subtract matrices with different sizes.

Next, let’s look at **scalar multiplication**. In scalar multiplication we are going to multiply a matrix \(A\) by a constant (sometimes called a scalar) \(\alpha \). In this case we get a new matrix whose entries have all been multiplied by the constant, \(\alpha \).

\[\alpha A = {\left( {\alpha \,{a_{ij}}} \right)_{n \times m}}\]

Given the two matrices \(A = \left( {\begin{array}{*{20}{r}}3&{ - 2}\\{ - 9}&1\end{array}} \right)\) and \(B = \left( {\begin{array}{*{20}{r}}{ - 4}&1\\0&{ - 5}\end{array}} \right)\) compute \(A-5B\).

There isn’t much to do here other than the work.

\[\begin{align*}A - 5B & = \left( {\begin{array}{*{20}{r}}3&{ - 2}\\{ - 9}&1\end{array}} \right) - 5\left( {\begin{array}{*{20}{r}}{ - 4}&1\\0&{ - 5}\end{array}} \right)\\ & = \left( {\begin{array}{*{20}{r}}3&{ - 2}\\{ - 9}&1\end{array}} \right) - \left( {\begin{array}{*{20}{r}}{ - 20}&5\\0&{ - 25}\end{array}} \right)\\ & = \left( {\begin{array}{*{20}{r}}{23}&{ - 7}\\{ - 9}&{26}\end{array}} \right)\end{align*}\]We first multiplied all the entries of \(B\) by 5 then subtracted corresponding entries to get the entries in the new matrix.
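This computation is easy to double-check numerically. A quick sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[3, -2],
              [-9, 1]])
B = np.array([[-4, 1],
              [0, -5]])

# scalar multiplication of B by 5, then entrywise subtraction
result = A - 5 * B
```

`result` matches the matrix computed by hand above.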

The final matrix operation that we’ll take a look at is **matrix multiplication**. Here we will start with two matrices, \(A_{n \times p}\) and \(B_{p \times m}\). Note that \(A\) must have the same number of columns as \(B\) has rows. If this isn’t true, then we can’t perform the multiplication. If it is true, then we can perform the following multiplication.

\[{A_{n \times p}}{B_{p \times m}} = {C_{n \times m}}\]

The new matrix will have size \(n \times m\) and the entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column, \(c_{ij}\), is found by multiplying row \(i\) of matrix \(A\) by column \(j\) of matrix \(B\). That is,

\[{c_{ij}} = {a_{i1}}{b_{1j}} + {a_{i2}}{b_{2j}} + \cdots + {a_{ip}}{b_{pj}} = \sum\limits_{k = 1}^p {{a_{ik}}{b_{kj}}}\]

This doesn’t always make sense in words so let’s look at an example.

Given a \(2 \times 3\) matrix \(A\) and a \(3 \times 4\) matrix \(B\), compute \(AB\).

The new matrix will have size \(2 \times 4\). The entry in row 1 and column 1 of the new matrix will be found by multiplying row 1 of \(A\) by column 1 of \(B\). This means that we multiply corresponding entries from the row of \(A\) and the column of \(B\) and then add the results up. Here are a couple of the entries computed all the way out.

\[\begin{align*}{c_{11}} & = \left( 2 \right)\left( 1 \right) + \left( { - 1} \right)\left( { - 4} \right) + \left( 0 \right)\left( 0 \right) = 6\\ {c_{13}} & = \left( 2 \right)\left( { - 1} \right) + \left( { - 1} \right)\left( 1 \right) + \left( 0 \right)\left( 0 \right) = - 3\\ {c_{24}} & = \left( { - 3} \right)\left( 2 \right) + \left( 6 \right)\left( 0 \right) + \left( 1 \right)\left( { - 2} \right) = - 8\end{align*}\]Here’s the complete solution.

\[C = \left( {\begin{array}{*{20}{r}}6&{ - 3}&{ - 3}&4\\{ - 27}&{21}&9&{ - 8}\end{array}} \right)\]In this last example notice that we could not have done the product \(BA\) since the number of columns of \(B\) does not match the number of rows of \(A\). It is important to note that just because we can compute \(AB\) doesn’t mean that we can compute \(BA\). Likewise, even if we can compute both \(AB\) and \(BA\) they may or may not be the same matrix.
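The row-times-column rule is just the triple loop \(c_{ij} = \sum_k a_{ik}b_{kj}\). Here is a minimal sketch in plain Python; the matrices below are made up for illustration and are not the ones from the example above:

```python
def mat_mul(A, B):
    """Multiply an n x p matrix A by a p x m matrix B (given as lists of rows)."""
    n, p, m = len(A), len(B), len(B[0])
    assert all(len(row) == p for row in A), "inner dimensions must match"
    C = [[0] * m for _ in range(n)]
    for i in range(n):            # row i of A
        for j in range(m):        # column j of B
            for k in range(p):    # c_ij = sum over k of a_ik * b_kj
                C[i][j] += A[i][k] * B[k][j]
    return C

# hypothetical 2x3 and 3x2 matrices, chosen for illustration
A = [[1, 2, 0],
     [0, 1, -1]]
B = [[3, 1],
     [2, 0],
     [1, 4]]
C = mat_mul(A, B)  # a 2x2 result
```

Note that `mat_mul(B, A)` would also be defined here (a \(3 \times 3\) result), but it would not equal `mat_mul(A, B)`, matching the remark above.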

#### Determinant

The next topic that we need to take a look at is the **determinant** of a matrix. The determinant is actually a function that takes a square matrix and converts it into a number. The actual formula for the function is somewhat complex and definitely beyond the scope of this review.

The main method for computing determinants of any square matrix is called the method of cofactors. Since we are going to be dealing almost exclusively with \(2 \times 2\) matrices and the occasional \(3 \times 3\) matrix we won’t go into the method here. We can give simple formulas for each of these cases. The standard notation for the determinant of the matrix \(A\) is.

\[\det \left( A \right) = \left| A \right|\]Here are the formulas for the determinant of \(2 \times 2\) and \(3 \times 3\) matrices.

\[\left| {\begin{array}{*{20}{r}}a&c\\b&d\end{array}} \right| = ad - cb\] \[\left| {\begin{array}{*{20}{r}}{{a_{11}}}&{{a_{12}}}&{{a_{13}}}\\{{a_{21}}}&{{a_{22}}}&{{a_{23}}}\\{{a_{31}}}&{{a_{32}}}&{{a_{33}}}\end{array}} \right| = {a_{11}}\left| {\begin{array}{*{20}{r}}{{a_{22}}}&{{a_{23}}}\\{{a_{32}}}&{{a_{33}}}\end{array}} \right| - {a_{12}}\left| {\begin{array}{*{20}{r}}{{a_{21}}}&{{a_{23}}}\\{{a_{31}}}&{{a_{33}}}\end{array}} \right| + {a_{13}}\left| {\begin{array}{*{20}{r}}{{a_{21}}}&{{a_{22}}}\\{{a_{31}}}&{{a_{32}}}\end{array}} \right|\]For the \(2 \times 2\) there isn’t much to do other than to plug it into the formula.

\[\det \left( A \right) = \left| {\begin{array}{*{20}{r}}{ - 9}&{ - 18}\\2&4\end{array}} \right| = \left( { - 9} \right)\left( 4 \right) - \left( { - 18} \right)\left( 2 \right) = 0\]For the \(3 \times 3\) we could plug it into the formula, however unlike the \(2 \times 2\) case this is not an easy formula to remember. There is a quicker way to get the same result. First write down the matrix and tack a copy of the first two columns onto the end as follows.

\[\det \left( B \right) = \left| {\begin{array}{*{20}{r}}2&3&1\\{ - 1}&{ - 6}&7\\4&5&{ - 1}\end{array}} \right|\,\,\,\,\begin{array}{*{20}{r}}2&3\\{ - 1}&{ - 6}\\4&5\end{array}\]Now, notice that there are three diagonals that run from left to right and three diagonals that run from right to left. What we do is multiply the entries along each diagonal and then, if the diagonal runs from left to right, we add the product, while if the diagonal runs from right to left, we subtract the product.

Here is the work for this matrix.

\[\begin{align*}\det \left( B \right) & = \left| {\begin{array}{*{20}{r}}2&3&1\\{ - 1}&{ - 6}&7\\4&5&{ - 1}\end{array}} \right|\,\,\,\,\begin{array}{*{20}{r}}2&3\\{ - 1}&{ - 6}\\4&5\end{array}\\ & = \left( 2 \right)\left( { - 6} \right)\left( { - 1} \right) + \left( 3 \right)\left( 7 \right)\left( 4 \right) + \left( 1 \right)\left( { - 1} \right)\left( 5 \right) - \\ & \hspace{0.25in}\hspace{0.25in}\hspace{0.25in}\left( 3 \right)\left( { - 1} \right)\left( { - 1} \right) - \left( 2 \right)\left( 7 \right)\left( 5 \right) - \left( 1 \right)\left( { - 6} \right)\left( 4 \right)\\ & = 42\end{align*}\]You can either use the formula or the short cut to get the determinant of a \(3 \times 3\).

If the determinant of a matrix is zero we call that matrix **singular** and if the determinant of a matrix isn’t zero we call the matrix **nonsingular**. The \(2 \times 2\) matrix in the above example was singular while the \(3 \times 3\) matrix is nonsingular.
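Both of the determinants above are easy to verify numerically. A quick sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[-9, -18],
              [2, 4]])
B = np.array([[2, 3, 1],
              [-1, -6, 7],
              [4, 5, -1]])

det_A = np.linalg.det(A)  # 0, so A is singular
det_B = np.linalg.det(B)  # 42, so B is nonsingular
```

(NumPy computes determinants by LU factorization, so expect tiny floating-point error rather than exact integers.)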

#### Matrix Inverse

Next, we need to take a look at the **inverse** of a matrix. Given a square matrix, \(A\), of size \(n \times n\), if we can find another matrix of the same size, \(B\), such that,

\[AB = BA = {I_n}\]

then we call \(B\) the **inverse** of \(A\) and denote it by \(B=A^{-1}\).

Computing the inverse of a matrix, \(A\), is fairly simple. First, we form a new matrix,

\[\left( {A\,\,\,{I_n}} \right)\]and then use the row operations from the previous section and try to convert this matrix into the form,

\[\left( {{I_n}\,\,\,B} \right)\]If we can then \(B\) is the inverse of \(A\). If we can’t then there is no inverse of the matrix \(A\).

Let’s find the inverse of the matrix \(A = \left( {\begin{array}{*{20}{r}}2&1&1\\{ - 5}&{ - 3}&0\\1&1&{ - 1}\end{array}} \right)\). We first form the new matrix by tacking the \(3 \times 3\) identity matrix onto this matrix. This is

\[\left( {\begin{array}{*{20}{r}}2&1&1\\{ - 5}&{ - 3}&0\\1&1&{ - 1}\end{array}\quad \begin{array}{*{20}{r}}1&0&0\\0&1&0\\0&0&1\end{array}} \right)\]We will now use row operations to try and convert the first three columns to the \(3 \times 3\) identity. In other words, we want a 1 on the diagonal that starts at the upper left corner and zeroes in all the other entries in the first three columns.

If you think about it, this process is very similar to the process we used in the last section to solve systems, it just goes a little farther. Here is the work for this problem.

\[\left( {\begin{array}{*{20}{r}}2&1&1\\{ - 5}&{ - 3}&0\\1&1&{ - 1}\end{array}\quad \begin{array}{*{20}{r}}1&0&0\\0&1&0\\0&0&1\end{array}} \right)\begin{array}{*{20}{c}}{{R_1} \leftrightarrow {R_3}}\\ \Rightarrow \end{array}\left( {\begin{array}{*{20}{r}}1&1&{ - 1}\\{ - 5}&{ - 3}&0\\2&1&1\end{array}\quad \begin{array}{*{20}{r}}0&0&1\\0&1&0\\1&0&0\end{array}} \right)\begin{array}{*{20}{c}}{{R_2} + 5{R_1}}\\{{R_3} - 2{R_1}}\\ \Rightarrow \end{array}\] \[\left( {\begin{array}{*{20}{r}}1&1&{ - 1}\\0&2&{ - 5}\\0&{ - 1}&3\end{array}\quad \begin{array}{*{20}{r}}0&0&1\\0&1&5\\1&0&{ - 2}\end{array}} \right)\begin{array}{*{20}{c}}{\frac{1}{2}{R_2}}\\ \Rightarrow \end{array}\left( {\begin{array}{*{20}{r}}1&1&{ - 1}\\0&1&{\frac{{ - 5}}{2}}\\0&{ - 1}&3\end{array}\quad \begin{array}{*{20}{r}}0&0&1\\0&{\frac{1}{2}}&{\frac{5}{2}}\\1&0&{ - 2}\end{array}} \right)\begin{array}{*{20}{c}}{{R_3} + {R_2}}\\ \Rightarrow \end{array}\] \[\left( {\begin{array}{*{20}{r}}1&1&{ - 1}\\0&1&{\frac{{ - 5}}{2}}\\0&0&{\frac{1}{2}}\end{array}\quad \begin{array}{*{20}{r}}0&0&1\\0&{\frac{1}{2}}&{\frac{5}{2}}\\1&{\frac{1}{2}}&{\frac{1}{2}}\end{array}} \right)\begin{array}{*{20}{c}}{2{R_3}}\\ \Rightarrow \end{array}\left( {\begin{array}{*{20}{r}}1&1&{ - 1}\\0&1&{\frac{{ - 5}}{2}}\\0&0&1\end{array}\quad \begin{array}{*{20}{r}}0&0&1\\0&{\frac{1}{2}}&{\frac{5}{2}}\\2&1&1\end{array}} \right)\begin{array}{*{20}{c}}{{R_2} + \frac{5}{2}{R_3}}\\{{R_1} + {R_3}}\\ \Rightarrow \end{array}\] \[\left( {\begin{array}{*{20}{r}}1&1&0\\0&1&0\\0&0&1\end{array}\quad \begin{array}{*{20}{r}}2&1&2\\5&3&5\\2&1&1\end{array}} \right)\begin{array}{*{20}{c}}{{R_1} - {R_2}}\\ \Rightarrow \end{array}\left( {\begin{array}{*{20}{r}}1&0&0\\0&1&0\\0&0&1\end{array}\quad \begin{array}{*{20}{r}}{ - 3}&{ - 2}&{ - 3}\\5&3&5\\2&1&1\end{array}} \right)\]So, we were able to convert the first three columns into the \(3 \times 3\) identity matrix therefore the inverse exists and it is,

\[{A^{ - 1}} = \left( {\begin{array}{*{20}{r}}{ - 3}&{ - 2}&{ - 3}\\5&3&5\\2&1&1\end{array}} \right)\]So, there was an example in which the inverse did exist. Let’s take a look at an example in which the inverse doesn’t exist.
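The inverse found by row reduction can be checked against the definition \(AA^{-1} = A^{-1}A = I_3\). A quick sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[2, 1, 1],
              [-5, -3, 0],
              [1, 1, -1]])

A_inv = np.linalg.inv(A)

# by definition of the inverse, both products should give the 3x3 identity
check1 = A @ A_inv
check2 = A_inv @ A
```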

This time let’s try to find the inverse of \(B = \left( {\begin{array}{*{20}{r}}1&{ - 3}\\{ - 2}&6\end{array}} \right)\). In this case we will tack on the \(2 \times 2\) identity to get the new matrix and then try to convert the first two columns to the \(2 \times 2\) identity matrix.

\[\left( {\begin{array}{*{20}{r}}1&{ - 3}&1&0\\{ - 2}&6&0&1\end{array}} \right)\,\,\,\begin{array}{*{20}{c}}{2{R_1} + {R_2}}\\ \Rightarrow \end{array}\,\,\left( {\begin{array}{*{20}{r}}1&{ - 3}&1&0\\0&0&2&1\end{array}} \right)\,\,\]And we don’t need to go any farther. In order for the \(2 \times 2\) identity to be in the first two columns we must have a 1 in the second entry of the second column and a 0 in the second entry of the first column. However, there is no way to get a 1 in the second entry of the second column that will keep a 0 in the second entry in the first column. Therefore, we can’t get the \(2 \times 2\) identity in the first two columns and hence the inverse of \(B\) doesn’t exist.
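Numerically the same conclusion shows up as an error: NumPy refuses to invert a singular matrix. A sketch, assuming NumPy is available:

```python
import numpy as np

B = np.array([[1, -3],
              [-2, 6]])  # det(B) = (1)(6) - (-3)(-2) = 0, so B is singular

try:
    np.linalg.inv(B)
    invertible = True
except np.linalg.LinAlgError:  # raised for singular matrices
    invertible = False
```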

We will leave off this discussion of inverses with the following fact.

#### Fact

Given a square matrix \(A\).

- If \(A\) is nonsingular then \(A^{-1}\) will exist.
- If \(A\) is singular then \(A^{-1}\) will NOT exist.

I’ll leave it to you to verify this fact for the previous two examples.

#### Systems of Equations Revisited

We need to do a quick revisit of systems of equations. Let’s start with a general system of equations.

\[\begin{equation}\begin{aligned}{a_{11}}{x_1} + {a_{12}}{x_2} + \cdots + {a_{1n}}{x_n} & = {b_1}\\ {a_{21}}{x_1} + {a_{22}}{x_2} + \cdots + {a_{2n}}{x_n} & = {b_2}\\ \vdots \hspace{0.8in} & \\ {a_{n1}}{x_1} + {a_{n2}}{x_2} + \cdots + {a_{nn}}{x_n} & = {b_n}\end{aligned}\label{eq:eq1}\end{equation}\]Now, convert each side into a vector to get,

\[\left( {\begin{array}{*{20}{r}}{{a_{11}}{x_1} + {a_{12}}{x_2} + \cdots + {a_{1n}}{x_n}}\\{{a_{21}}{x_1} + {a_{22}}{x_2} + \cdots + {a_{2n}}{x_n}}\\ \vdots \\{{a_{n1}}{x_1} + {a_{n2}}{x_2} + \cdots + {a_{nn}}{x_n}}\end{array}} \right) = \left( {\begin{array}{*{20}{r}}{{b_1}}\\{{b_2}}\\ \vdots \\{{b_n}}\end{array}} \right)\]The left side of this equation can be thought of as a matrix multiplication.

\[\left( {\begin{array}{*{20}{r}}{{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1n}}}\\{{a_{21}}}&{{a_{22}}}& \cdots &{{a_{2n}}}\\ \vdots & \vdots & \ddots & \vdots \\{{a_{n1}}}&{{a_{n2}}}& \cdots &{{a_{nn}}}\end{array}} \right)\left( {\begin{array}{*{20}{r}}{{x_1}}\\{{x_2}}\\ \vdots \\{{x_n}}\end{array}} \right) = \left( {\begin{array}{*{20}{r}}{{b_1}}\\{{b_2}}\\ \vdots \\{{b_n}}\end{array}} \right)\]Simplifying up the notation a little gives,

\[\begin{equation}A\vec x = \vec b \label{eq:eq2}\end{equation}\]where, \(\vec x\) is a vector whose components are the unknowns in the original system of equations. We call \(\eqref{eq:eq2}\) the matrix form of the system of equations \(\eqref{eq:eq1}\) and solving \(\eqref{eq:eq2}\) is equivalent to solving \(\eqref{eq:eq1}\). The solving process is identical. The augmented matrix for \(\eqref{eq:eq2}\) is

\[\left( {A\,\,\,\vec b} \right)\]Once we have the augmented matrix we proceed as we did with a system that hasn’t been written in matrix form.
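Numerically, solving the matrix form \(A\vec x = \vec b\) is a one-liner. A sketch assuming NumPy is available; the coefficient matrix is reused from the inverse example earlier, while the right-hand side \(\vec b\) is made up for illustration:

```python
import numpy as np

A = np.array([[2, 1, 1],
              [-5, -3, 0],
              [1, 1, -1]])
b = np.array([1, 2, 3])  # an arbitrary right-hand side for illustration

x = np.linalg.solve(A, b)  # solves A x = b directly

# since A is nonsingular we could also (less efficiently) use x = A^{-1} b
x_via_inverse = np.linalg.inv(A) @ b
```

`np.linalg.solve` factors \(A\) rather than forming \(A^{-1}\), which is both faster and numerically safer than multiplying by the inverse.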

We also have the following fact about solutions to \(\eqref{eq:eq2}\).

#### Fact

Given the system of equations \(\eqref{eq:eq2}\) we have one of the following three possibilities for solutions.

- There will be no solutions.
- There will be exactly one solution.
- There will be infinitely many solutions.

In fact, we can go a little farther now. Since we are assuming that we’ve got the same number of equations as unknowns the matrix \(A\) in \(\eqref{eq:eq2}\) is a square matrix and so we can compute its determinant. This gives the following fact.

#### Fact

Given the system of equations in \(\eqref{eq:eq2}\) we have the following.

- If \(A\) is nonsingular then there will be exactly one solution to the system.
- If \(A\) is singular then there will either be no solution or infinitely many solutions to the system.

The matrix form of a homogeneous system is

\[\begin{equation}A\vec x = \vec 0 \label{eq:eq3}\end{equation}\]where \(\vec 0\) is the vector of all zeroes. In the homogeneous system we are guaranteed to have a solution, \(\vec x = \vec 0\). The fact above for homogeneous systems is then,

#### Fact

Given the homogeneous system \(\eqref{eq:eq3}\) we have the following.

- If \(A\) is nonsingular then the only solution will be \(\vec x = \vec 0\).
- If \(A\) is singular then there will be infinitely many nonzero solutions to the system.

#### Linear Independence/Linear Dependence

This is not the first time that we’ve seen this topic. We also saw linear independence and linear dependence back when we were looking at second order differential equations. In that section we were dealing with functions, but the concept is essentially the same here. If we start with \(n\) vectors,

\[{\vec x_1},\,\,{\vec x_2},\,\, \ldots ,\,\,{\vec x_n}\]If we can find constants, \(c_{1}\), \(c_{2}\), …, \(c_{n}\), not all of which are zero, such that

\[\begin{equation}{c_1}{\vec x_1} + {c_2}{\vec x_2} + \, \ldots + {c_n}{\vec x_n} = \vec 0 \label{eq:eq4}\end{equation}\]then we call the vectors linearly dependent. If the only constants that work in \(\eqref{eq:eq4}\) are \(c_{1}=0\), \(c_{2}=0\), …, \(c_{n}=0\) then we call the vectors linearly independent.

If we further make the assumption that each of the \(n\) vectors has \(n\) components, *i.e.* each vector is a column vector with \(n\) entries, we can get a very simple test for linear independence and linear dependence. Note that this does not have to be the case, but in all of our work we will be working with \(n\) vectors each of which has \(n\) components.

#### Fact

Given the \(n\) vectors each with \(n\) components,

\[{\vec x_1},\,\,{\vec x_2},\,\, \ldots ,\,\,{\vec x_n}\]form the matrix,

\[X = \left( {\begin{array}{*{20}{r}}{{{\vec x}_1}}&{{{\vec x}_2}}& \cdots &{{{\vec x}_n}}\end{array}} \right)\]So, the matrix \(X\) is a matrix whose \(i^{\text{th}}\) column is the \(i^{\text{th}}\) vector, \({\vec x_i}\). Then,

- If \(X\) is nonsingular (*i.e.* \(\det(X)\) is not zero) then the \(n\) vectors are linearly independent, and
- if \(X\) is singular (*i.e.* \(\det(X)=0\)) then the \(n\) vectors are linearly dependent and the constants that make \(\eqref{eq:eq4}\) true can be found by solving the system \[X\,\vec c = \vec 0\] where \(\vec c\) is a vector containing the constants in \(\eqref{eq:eq4}\).

For example, suppose we want to determine whether the vectors \({\vec x_1} = \left( {\begin{array}{*{20}{r}}1\\{ - 3}\\5\end{array}} \right)\), \({\vec x_2} = \left( {\begin{array}{*{20}{r}}{ - 2}\\1\\4\end{array}} \right)\), and \({\vec x_3} = \left( {\begin{array}{*{20}{r}}6\\{ - 2}\\1\end{array}} \right)\) are linearly independent or linearly dependent. The first thing to do is to form \(X\) and compute its determinant.

\[X = \left( {\begin{array}{*{20}{r}}1&{ - 2}&6\\{ - 3}&1&{ - 2}\\5&4&1\end{array}} \right)\quad \quad \Rightarrow \hspace{0.25in}\hspace{0.25in}\det \left( X \right) = - 79\]This matrix is nonsingular and so the vectors are linearly independent.
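The determinant test is also easy to run numerically. A quick sketch, assuming NumPy is available:

```python
import numpy as np

# the columns of X are the three vectors being tested
X = np.array([[1, -2, 6],
              [-3, 1, -2],
              [5, 4, 1]])

det_X = np.linalg.det(X)  # nonzero, so the vectors are linearly independent
```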

As with the last example, first form \(X\) (whose columns are the given vectors) and compute its determinant.

\[X = \left( {\begin{array}{*{20}{r}}1&{ - 4}&2\\{ - 1}&1&{ - 1}\\3&{ - 6}&4\end{array}} \right)\quad \quad \hspace{0.25in} \Rightarrow \hspace{0.25in}\hspace{0.25in}\det \left( X \right) = 0\]So, these vectors are linearly dependent. We now need to find the relationship between the vectors. This means that we need to find constants that will make \(\eqref{eq:eq4}\) true.

So, we need to solve the system

\[X\,\vec c = \vec 0\]Here is the augmented matrix and the solution work for this system.

\[\left( {\begin{array}{*{20}{r}}1&{ - 4}&2\\{ - 1}&1&{ - 1}\\3&{ - 6}&4\end{array}\quad \begin{array}{*{20}{r}}0\\0\\0\end{array}} \right)\begin{array}{*{20}{c}}{{R_2} + {R_1}}\\{{R_3} - 3{R_1}}\\ \Rightarrow \end{array}\left( {\begin{array}{*{20}{r}}1&{ - 4}&2\\0&{ - 3}&1\\0&6&{ - 2}\end{array}\quad \begin{array}{*{20}{r}}0\\0\\0\end{array}} \right)\begin{array}{*{20}{c}}{{R_3} + 2{R_2}}\\ \Rightarrow \end{array}\left( {\begin{array}{*{20}{r}}1&{ - 4}&2\\0&{ - 3}&1\\0&0&0\end{array}\quad \begin{array}{*{20}{r}}0\\0\\0\end{array}} \right)\begin{array}{*{20}{c}}{ - \frac{1}{3}{R_2}}\\ \Rightarrow \end{array}\] \[\left( {\begin{array}{*{20}{r}}1&{ - 4}&2\\0&1&{ - \frac{1}{3}}\\0&0&0\end{array}\quad \begin{array}{*{20}{r}}0\\0\\0\end{array}} \right)\begin{array}{*{20}{c}}{{R_1} + 4{R_2}}\\ \Rightarrow \end{array}\left( {\begin{array}{*{20}{r}}1&0&{\frac{2}{3}}\\0&1&{ - \frac{1}{3}}\\0&0&0\end{array}\quad \begin{array}{*{20}{r}}0\\0\\0\end{array}} \right)\quad \Rightarrow \quad \begin{array}{*{20}{r}}{{c_1} + \frac{2}{3}{c_3} = 0}\\{{c_2} - \frac{1}{3}{c_3} = 0}\\{0 = 0}\end{array}\quad \Rightarrow \quad \begin{array}{*{20}{l}}{{c_1} = - \frac{2}{3}{c_3}}\\{{c_2} = \frac{1}{3}{c_3}}\\{}\end{array}\]Now, we would like actual values for the constants so, if we use \({c_3} = 3\) we get the solution \({c_1} = - 2\), \({c_2} = 1\), and \({c_3} = 3\). The relationship is then,

\[ - 2{\vec x^{(1)}} + {\vec x^{(2)}} + 3{\vec x^{(3)}} = \left( {\begin{array}{*{20}{r}}0\\0\\0\end{array}} \right)\]

#### Calculus with Matrices

There really isn’t a whole lot to this other than to just make sure that we can deal with calculus with matrices.

First, to this point we’ve only looked at matrices with numbers as entries, but the entries in a matrix can be functions as well. So, we can look at matrices in the following form,

\[A\left( t \right) = \left( {\begin{array}{*{20}{r}}{{a_{11}}\left( t \right)}&{{a_{12}}\left( t \right)}& \cdots &{{a_{1n}}\left( t \right)}\\{{a_{21}}\left( t \right)}&{{a_{22}}\left( t \right)}& \cdots &{{a_{2n}}\left( t \right)}\\ \vdots & \vdots &{}& \vdots \\{{a_{m1}}\left( t \right)}&{{a_{m2}}\left( t \right)}& \cdots &{{a_{mn}}\left( t \right)}\end{array}} \right)\]Now we can talk about differentiating and integrating a matrix of this form. To differentiate or integrate a matrix of this form all we do is differentiate or integrate the individual entries.

\[A'\left( t \right) = \left( {\begin{array}{*{20}{r}}{{{a'}_{11}}\left( t \right)}&{{{a'}_{12}}\left( t \right)}& \cdots &{{{a'}_{1n}}\left( t \right)}\\{{{a'}_{21}}\left( t \right)}&{{{a'}_{22}}\left( t \right)}& \cdots &{{{a'}_{2n}}\left( t \right)}\\ \vdots & \vdots &{}& \vdots \\{{{a'}_{m1}}\left( t \right)}&{{{a'}_{m2}}\left( t \right)}& \cdots &{{{a'}_{mn}}\left( t \right)}\end{array}} \right)\] \[\int{{A\left( t \right)\,dt}} = \left( {\begin{array}{*{20}{r}}{\int{{{a_{11}}\left( t \right)\,dt}}}&{\int{{{a_{12}}\left( t \right)\,dt}}}& \cdots &{\int{{{a_{1n}}\left( t \right)\,dt}}}\\{\int{{{a_{21}}\left( t \right)\,dt}}}&{\int{{{a_{22}}\left( t \right)\,dt}}}& \cdots &{\int{{{a_{2n}}\left( t \right)\,dt}}}\\ \vdots & \vdots &{}& \vdots \\{\int{{{a_{m1}}\left( t \right)\,dt}}}&{\int{{{a_{m2}}\left( t \right)\,dt}}}& \cdots &{\int{{{a_{mn}}\left( t \right)\,dt}}}\end{array}} \right)\]So, when we run across this kind of thing don’t get excited about it. Just differentiate or integrate as we normally would.
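Entrywise differentiation and integration can be sketched symbolically, assuming SymPy is available (the entries of \(A(t)\) below are made up for illustration):

```python
import sympy as sp

t = sp.symbols('t')

# a 2x2 matrix whose entries are functions of t, chosen for illustration
A = sp.Matrix([[sp.sin(t), t**2],
               [sp.exp(t), 1]])

A_prime = A.diff(t)          # differentiate each entry
A_integral = A.integrate(t)  # integrate each entry (no constants of integration)
```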

In this section we saw a very condensed set of topics from linear algebra. When we get back to differential equations many of these topics will show up occasionally and you will at least need to know what the words mean.

However, the main topic from linear algebra that you must know if you are going to be able to solve systems of differential equations is the topic of the next section.