Spring 2026: Math 291 Homework

Any page and section numbers in the assignments below refer to Hefferon's text.

Tuesday, January 20

1. Verify properties 1-8 from today's lecture, for \(A = \begin{pmatrix} 1 & -9\\2 & 6\end{pmatrix}\), \(B = \begin{pmatrix} 1 & 4\\0 & -9\end{pmatrix}\), \(C = \begin{pmatrix}0 & -4\\9 & 2\end{pmatrix}\), \(\lambda = 7, \lambda_1 = -6, \lambda_2 = 4\).

Solution. This is straightforward.
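Not part of the assignment, but a few of the properties can be sanity-checked with NumPy (the exact content of properties 1-8 is from lecture, so this only samples the usual ones: commutativity and associativity of addition, and scalar distributivity):

```python
import numpy as np

# Matrices and scalars from the problem statement.
A = np.array([[1, -9], [2, 6]])
B = np.array([[1, 4], [0, -9]])
C = np.array([[0, -4], [9, 2]])
lam, lam1, lam2 = 7, -6, 4

comm = np.array_equal(A + B, B + A)                        # A+B = B+A
assoc = np.array_equal((A + B) + C, A + (B + C))           # (A+B)+C = A+(B+C)
dist1 = np.array_equal(lam * (A + B), lam * A + lam * B)   # lam(A+B) = lam A + lam B
dist2 = np.array_equal((lam1 + lam2) * A, lam1 * A + lam2 * A)
```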

2. Give a proof of the cancellation property using entries in the matrices, rather than the proof given in class.

Solution. Write \(A = \begin{pmatrix} a & b\\c & d\end{pmatrix}\), \(B = \begin{pmatrix} e & f\\g & h\end{pmatrix}\), \(C = \begin{pmatrix} r & s\\t & u\end{pmatrix}\). Then,

\[\begin{pmatrix} a+e & b+f\\c+g & d+h\end{pmatrix} = A+B = A+C = \begin{pmatrix} a+r & b+s\\c+t & d+u\end{pmatrix}.\]

Thus, \(a+e = a+r\), \(b+f = b+s\), \(c+g = c+t\), \(d+h = d+u\). Since we have cancellation for real numbers, \(e = r\), \(f = s\), \(g = t\), \(h = u\), so \(B = C\).

Thursday, January 22

1. For the matrices \(A, B, C\) in the previous assignment, verify:

  1. (i) \(A(B+C) = AB+AC\).
  2. (ii) \(A(BC) = (AB)C\).

Solution. We just check (ii). \(AB = \begin{pmatrix} 1 & 85\\2 & -46\end{pmatrix}\), so \((AB)C = \begin{pmatrix} 765 & 166\\-414 & -100\end{pmatrix}\). On the other hand we have \(BC = \begin{pmatrix} 36 & 4\\-81 & -18\end{pmatrix}\), so \(A(BC) = \begin{pmatrix} 765 & 166\\-414 & -100\end{pmatrix}\).
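The same products are easy to check by machine; a minimal NumPy sketch covering both (i) and (ii):

```python
import numpy as np

# A, B, C from the previous assignment.
A = np.array([[1, -9], [2, 6]])
B = np.array([[1, 4], [0, -9]])
C = np.array([[0, -4], [9, 2]])

AB = A @ B   # [[1, 85], [2, -46]], as in the solution
ok_dist = np.array_equal(A @ (B + C), A @ B + A @ C)    # (i)
ok_assoc = np.array_equal(A @ (B @ C), (A @ B) @ C)     # (ii)
```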

2. For the matrix \(A = \begin{pmatrix} 3 & 1\\5 & 2\end{pmatrix}\) first verify that \(A^{-1} = \begin{pmatrix} 2 & -1\\-5 & 3\end{pmatrix}\), and then use \(A^{-1}\) to solve the system of equations

\[\begin{align*} 3x+y &= 7\\ 5x+2y &= -3. \end{align*}\]

Solution. It's easy to check that \(AA^{-1} = I_2 = A^{-1}A\). To solve the system, start with the matrix equation \(A\begin{pmatrix} x\\y\end{pmatrix} = \begin{pmatrix} 7\\-3\end{pmatrix}\); multiplying both sides on the left by \(A^{-1}\) gives

\[\begin{pmatrix} x\\y\end{pmatrix} = \begin{pmatrix} 2 & -1\\-5 & 3\end{pmatrix} \cdot \begin{pmatrix} 7\\-3\end{pmatrix} = \begin{pmatrix} 17\\-44\end{pmatrix},\]

so \(x = 17, y = -44\).
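The inverse-based solution can be reproduced numerically:

```python
import numpy as np

# Verify the claimed inverse, then solve the system with it.
A = np.array([[3, 1], [5, 2]])
A_inv = np.array([[2, -1], [-5, 3]])
b = np.array([7, -3])

x = A_inv @ b   # (17, -44)
```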

3. Use mathematical induction to prove the following statements:

  1. (i) \(1^2+2^2+3^2+\cdots + n^2 = \frac{n(n+1)(2n+1)}{6}\), for all \(n\geq 1\).
  2. (ii) \(9^n-1\) is divisible by 8, for all \(n\geq 1\).

Solution. (i) For the base case, \(1 = \frac{1(1+1)(2\cdot 1+1)}{6}\), as required. Now, assume the formula holds for \(n-1\) and use this to prove the case \(n\):

\[\begin{align*} 1^2+2^2+3^2+\cdots + (n-1)^2 &= \frac{(n-1)(n)(2(n-1)+1)}{6}\\ 1^2+2^2+3^2+\cdots + (n-1)^2 &= \frac{(n-1)(n)(2n-1)}{6}, \ \text{adding}\ n^2\ \text{to both sides, we get}\\ 1^2+2^2+3^2+\cdots + (n-1)^2 +n^2 &= \frac{(n-1)(n)(2n-1)}{6} +n^2\\ 1^2+2^2+3^2+\cdots + (n-1)^2 +n^2 &= \frac{2n^3+3n^2+n}{6}\\ 1^2+2^2+3^2+\cdots + n^2 &= \frac{n(n+1)(2n+1)}{6} \end{align*}\]

For (ii), it turns out induction is not needed. If one uses the identity \(x^n -1= (x-1)(x^{n-1}+x^{n-2}+\cdots + x+1)\), substituting \(x = 9\) shows \(8\) divides \(9^n-1\).
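Both statements are also easy to spot-check for small \(n\) in plain Python (a check, of course, not a proof):

```python
# (i): sum of the first n squares equals n(n+1)(2n+1)/6.
def sum_sq(n):
    return sum(k * k for k in range(1, n + 1))

ok_i = all(6 * sum_sq(n) == n * (n + 1) * (2 * n + 1) for n in range(1, 50))

# (ii): 9^n - 1 is divisible by 8.
ok_ii = all((9 ** n - 1) % 8 == 0 for n in range(1, 50))
```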

Tuesday, January 27

1. For \(A = \begin{pmatrix} 1 & -9\\2 & 6\end{pmatrix}\), \(B = \begin{pmatrix} 1 & 4\\0 & -9\end{pmatrix}\) and \(\lambda = 7\), verify properties 1-5 of the determinant given in today's lecture.

Solution. These are straightforward calculations.

2. For the \(2\times 2\) matrix \(A\), we verified in class that if \(\textrm{det}(A)\not = 0\), then \(A^{-1}\) exists. Prove that if \(A^{-1}\) exists, then \(\det A \not = 0\). Thus, we have the following:

Theorem. A \(2\times 2\) matrix \(A\) is invertible if and only if \(\det A\not = 0\).

We'll see later in the semester that this holds for any \(n\times n\) matrix.

Solution. We have \(A^{-1}A = I_2\), so \(\det(A)\cdot \det(A^{-1}) = \det(A^{-1}A) = \det I_2 = 1\), so \(\det(A) \not = 0\).

3. Here are three systems of linear equations. Identify which one has a unique solution, infinitely many solutions and no solutions.

System A
\[\begin{align*} 2x + 3y &= 7 \\ 6x + 9y &= 31 \end{align*}\]
System B
\[\begin{align*} 2x + 3y &= -1 \\ 6x + 2y &= 4 \end{align*}\]
System C
\[\begin{align*} 2x + 3y &= 7 \\ 6x + 9y &= 21 \end{align*}\]

Solution. System A has no solution, since the system corresponds to distinct parallel lines; System B has the unique solution \(x = 1, y =-1\); and System C has infinitely many solutions, since each equation describes the same line.

Thursday, January 29

1. For the three systems of equations given in the previous assignment, use augmented matrices and Gaussian elimination to find the solution set of each system.

Solution. For A, we have \(\left[\begin{array}{cc|c} 2 & 3 & 7\\6 & 9 & 31\end{array}\right] \xrightarrow{-3\cdot R_1+R_2} \left[\begin{array}{cc|c} 2 & 3 & 7\\0 & 0 & 10\end{array}\right]\), so the system has no solution. For B, we have

\[\left[\begin{array}{cc|c}2 & 3 & -1\\6 & 2 & 4\end{array}\right] \xrightarrow{\frac{1}{2}\cdot R_1} \left[\begin{array}{cc|c}1 & \frac{3}{2} & -\frac{1}{2}\\6 & 2 & 4\end{array}\right] \xrightarrow{-6\cdot R_1+R_2} \left[\begin{array}{cc|c}1 & \frac{3}{2} & -\frac{1}{2}\\0 & -7 & 7\end{array}\right]\]
\[\xrightarrow{-\frac{1}{7}\cdot R_2} \left[\begin{array}{cc|c}1 & \frac{3}{2} & -\frac{1}{2}\\0 & 1 & -1\end{array}\right] \xrightarrow{-\frac{3}{2}\cdot R_2+R_1} \left[\begin{array}{cc|c}1 & 0 & 1\\0 & 1 & -1\end{array}\right]\]

Therefore, \(x = 1\) and \(y = -1\). For C, \(\left[\begin{array}{cc|c} 2 & 3 & 7\\6 & 9 & 21\end{array}\right]\xrightarrow[\frac{1}{2}\cdot R_1]{-3\cdot R_1+R_2} \left[\begin{array}{cc|c}1 & \frac{3}{2} & \frac{7}{2}\\0 & 0 & 0\end{array}\right]\). Solution set: \(\{(\frac{7}{2}-\frac{3}{2}t, t)\ |\ t\in \mathbb{R}\}\).
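A quick machine check of all three conclusions (assuming NumPy; the rank comparison for A is just the usual consistency test):

```python
import numpy as np

# System A: coefficient rank < augmented rank means no solution.
A_aug = np.array([[2.0, 3.0, 7.0], [6.0, 9.0, 31.0]])
inconsistent_A = (np.linalg.matrix_rank(A_aug[:, :2])
                  < np.linalg.matrix_rank(A_aug))

# System B: unique solution.
xB = np.linalg.solve(np.array([[2.0, 3.0], [6.0, 2.0]]),
                     np.array([-1.0, 4.0]))

# System C: any point of the parametric family satisfies both equations.
t = 5.0
xC, yC = 7 / 2 - 3 / 2 * t, t
on_C = np.isclose(2 * xC + 3 * yC, 7) and np.isclose(6 * xC + 9 * yC, 21)
```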

2. Something new: Find the solution set to the system of equations below using Gaussian elimination, following as closely as you can the algorithm given in class. Hint: You'll have to introduce a parameter to describe the solution set.

\[\begin{align*} 2x+4y+6z &= 12\\ x+y+z &= 8. \end{align*}\]

Solution. Converting to an augmented matrix, we have

\[\left[\begin{array}{ccc|c}2 & 4 & 6 & 12\\1 & 1 & 1 & 8\end{array}\right]\xrightarrow{R_1\leftrightarrow R_2} \left[\begin{array}{ccc|c}1 & 1 & 1 & 8\\2 & 4 & 6 & 12\end{array}\right] \xrightarrow{-2\cdot R_1+R_2}\left[\begin{array}{ccc|c} 1 & 1 & 1 & 8\\0 & 2 & 4 & -4\end{array}\right]\]
\[\xrightarrow{\frac{1}{2}\cdot R_2}\left[\begin{array}{ccc|c}1 & 1 & 1 & 8\\0 & 1 & 2 & -2\end{array}\right]\xrightarrow{-1\cdot R_2+R_1}\left[\begin{array}{ccc|c}1 & 0 & -1 & 10\\0 & 1 & 2 & -2\end{array}\right].\]

Thus, the solution set is \(\{(10+t, -2-2t, t)\ |\ t\in \mathbb{R}\}\).
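To double-check the parametric solution, substitute it back into both equations for several values of \(t\):

```python
import numpy as np

# Plug (10 + t, -2 - 2t, t) into 2x+4y+6z = 12 and x+y+z = 8.
ok = True
for t in np.linspace(-3, 3, 7):
    x, y, z = 10 + t, -2 - 2 * t, t
    ok = ok and np.isclose(2*x + 4*y + 6*z, 12) and np.isclose(x + y + z, 8)
```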

3. Suppose that the ordered pair \((s,t)\) is a solution to the system

\[\begin{align*} ax+by &= u\\ cx+dy &= v. \end{align*}\]

Verify that \((s,t)\) is a solution to each of the systems below. Assume that \(\lambda \in \mathbb{R}\), and that \(\lambda\) is non-zero for System C.

System A
\[\begin{align*} cx + dy &= v \\ ax + by &= u \end{align*}\]
System B
\[\begin{align*} ax + by &= u \\ (c+\lambda a)x + (d+\lambda b)y &=v+\lambda u \end{align*}\]
System C
\[\begin{align*} ax + by &= u \\ \lambda cx + \lambda dy &= \lambda v \end{align*}\]

Now assume that \((s,t)\) is a solution to A, B, or C, and show that \((s,t)\) is a solution to the original system of equations. You must consider all three cases. These calculations show that solutions to systems of equations are invariant under elementary row operations.

Solution. We'll do the case of B. The other cases are similar, and easier. Suppose \((s,t)\) is a solution to the original system of equations, so that \(as+bt = u\) and \(cs+dt = v\). Clearly the first equation in B holds. For the second equation in B, we have

\[(c+\lambda a)s+(d+\lambda b)t = (cs+dt) + \lambda (as+bt) = v+\lambda u,\]

using the two original equations. Now suppose \((s,t)\) is a solution to the system B. The first equation in B is the first equation in the original system, so \(as+bt = u\). For the second equation in the original system, we use the second equation in B together with \(as+bt = u\):

\[v+\lambda u = (c+\lambda a)s + (d+\lambda b)t = (cs+dt)+\lambda (as+bt) = (cs+dt)+ \lambda u,\]

so \(cs+dt = v\), which is what we want. Note that this direction does not require \(\lambda \neq 0\).

Tuesday, February 3

Use Gaussian Elimination to solve problems 2.18 (a)-(f) in Hefferon.

Solution. (a) \(\left\{\begin{pmatrix} 6-2t\\t\end{pmatrix}\ \middle|\ t\in \mathbb{R}\right\}\). (b) \(\left\{\begin{pmatrix} 0\\1\end{pmatrix}\right\}\). (c) \(\left\{\begin{pmatrix} 4-t\\-1+t\\t\end{pmatrix}\ \middle|\ t\in \mathbb{R}\right\}\). (d) \(\left\{\begin{pmatrix} 1\\1\\1\end{pmatrix}\right\}\).

(e) \(\left\{\begin{pmatrix} \frac{5}{3}-\frac{1}{3}t_1-\frac{2}{3}t_2\\\frac{2}{3} +\frac{2}{3}t_1+\frac{1}{3}t_2\\t_1\\t_2\end{pmatrix}\ \middle|\ t_1, t_2\in \mathbb{R}\right\}\). (f) No solution.

Thursday, February 5

1. For the matrix \(A = \begin{pmatrix} 2 & 1 & 0\\0 & 4 & 0\\1 & 2 & -1\end{pmatrix}\), use Gaussian elimination to find \(A^{-1}\). Then check that your answer is correct.

Solution. \(\begin{bmatrix}2 & 1 & 0 & | & 1 & 0 & 0\\0 & 4 & 0 & | & 0 & 1 & 0\\1 & 2 & -1 & | & 0 & 0 & 1\end{bmatrix} \overset{R_1\leftrightarrow R_3}{\longrightarrow} \begin{bmatrix}1 & 2 & -1 & | & 0 & 0 & 1\\0 & 4 & 0 & | & 0 & 1 & 0\\2 & 1 & 0 & | & 1 & 0 & 0\end{bmatrix}\)

\(\xrightarrow[\frac{1}{4}\cdot R_2]{-2\cdot R_1+R_3}\begin{bmatrix}1 & 2 & -1 & | & 0 & 0 & 1\\0 & 1 & 0 & | & 0 & \frac{1}{4} & 0\\0 & -3 & 2 & | & 1 & 0 & -2\end{bmatrix}\) \(\xrightarrow[-2\cdot R_2+R_1]{3\cdot R_2+R_3}\begin{bmatrix}1 & 0 & -1 & | & 0 & -\frac{1}{2} & 1\\0 & 1 & 0 & | & 0 & \frac{1}{4} & 0\\0 & 0 & 2 & | & 1 & \frac{3}{4} & -2\end{bmatrix}\)

\(\xrightarrow{\frac{1}{2}\cdot R_3} \begin{bmatrix}1 & 0 & -1 & | & 0 & -\frac{1}{2} & 1\\0 & 1 & 0 & | & 0 & \frac{1}{4} & 0\\0 & 0 & 1 & | & \frac{1}{2} & \frac{3}{8} & -1\end{bmatrix} \xrightarrow{R_3+R_1}\begin{bmatrix}1 & 0 & 0 & | & \frac{1}{2} & -\frac{1}{8} & 0\\0 & 1 & 0 & | & 0 & \frac{1}{4} & 0\\0 & 0 & 1 & | & \frac{1}{2} & \frac{3}{8} & -1\end{bmatrix}\)

Thus, \(A^{-1} = \begin{pmatrix}\frac{1}{2} & -\frac{1}{8} & 0\\0 & \frac{1}{4} & 0\\\frac{1}{2} & \frac{3}{8} & -1\end{pmatrix}\).
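A numerical check of the inverse (not required, but a good habit):

```python
import numpy as np

# The matrix from the problem and the inverse found by elimination.
A = np.array([[2.0, 1.0, 0.0], [0.0, 4.0, 0.0], [1.0, 2.0, -1.0]])
A_inv = np.array([[1/2, -1/8, 0.0], [0.0, 1/4, 0.0], [1/2, 3/8, -1.0]])

left_ok = np.allclose(A_inv @ A, np.eye(3))
right_ok = np.allclose(A @ A_inv, np.eye(3))
```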

2. For the matrix \(A = \begin{pmatrix}2 & 3 & 6\\4 & 8 & 14\end{pmatrix}\)

  1. (i) Use elementary row operations to put \(A\) into RREF.
  2. (ii) Convert the elementary row operations you used in (i) to \(2\times 2\) elementary matrices, and then multiply \(A\) successively on the left by the elementary matrices to get the same RREF.
  3. (iii) Now multiply the elementary matrices from (ii) to get a \(2\times 2\) matrix \(B\). Check that \(BA\) gives the same RREF. Be careful: The order in which you multiply the elementary matrices matters.

Solution. (i) \(\begin{pmatrix} 2 & 3 & 6\\4 & 8 & 14\end{pmatrix}\xrightarrow{\frac{1}{2}\cdot R_1}\begin{pmatrix} 1 & \frac{3}{2} & 3\\4 & 8 & 14\end{pmatrix}\xrightarrow{-4\cdot R_1+R_2}\begin{pmatrix} 1 & \frac{3}{2} & 3\\0 & 2 & 2\end{pmatrix}\xrightarrow{\frac{1}{2}\cdot R_2}\begin{pmatrix} 1 & \frac{3}{2} & 3\\0 & 1 & 1\end{pmatrix}\) \(\xrightarrow{-\frac{3}{2}\cdot R_2+R_1}\begin{pmatrix} 1 & 0 & \frac{3}{2}\\0 & 1 & 1\end{pmatrix}\).

(ii) \(\begin{pmatrix} 1 & -\frac{3}{2}\\0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0\\0 & \frac{1}{2}\end{pmatrix}\begin{pmatrix} 1 & 0\\-4 & 1\end{pmatrix}\begin{pmatrix} \frac{1}{2} & 0\\0 & 1\end{pmatrix}\begin{pmatrix} 2 & 3 & 6\\4 & 8 & 14\end{pmatrix} = \begin{pmatrix} 1 & 0 & \frac{3}{2}\\0 & 1 & 1\end{pmatrix}\).

(iii) \(B = \begin{pmatrix} 1 & -\frac{3}{2}\\0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0\\0 & \frac{1}{2}\end{pmatrix}\begin{pmatrix} 1 & 0\\-4 & 1\end{pmatrix}\begin{pmatrix} \frac{1}{2} & 0\\0 & 1\end{pmatrix} = \begin{pmatrix} 2 & -\frac{3}{4}\\-1 & \frac{1}{2}\end{pmatrix}\) and

\[BA = \begin{pmatrix} 2 & -\frac{3}{4}\\-1 & \frac{1}{2}\end{pmatrix} \begin{pmatrix}2 & 3 & 6\\4 & 8 & 14\end{pmatrix} = \begin{pmatrix} 1 & 0 & \frac{3}{2}\\0 & 1 & 1\end{pmatrix}.\]
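A NumPy sketch of parts (ii) and (iii), multiplying the four elementary matrices with the rightmost factor applied first:

```python
import numpy as np

E1 = np.array([[1/2, 0.0], [0.0, 1.0]])    # (1/2)R1
E2 = np.array([[1.0, 0.0], [-4.0, 1.0]])   # -4R1 + R2
E3 = np.array([[1.0, 0.0], [0.0, 1/2]])    # (1/2)R2
E4 = np.array([[1.0, -3/2], [0.0, 1.0]])   # -(3/2)R2 + R1

B = E4 @ E3 @ E2 @ E1                       # order matters!
A = np.array([[2.0, 3.0, 6.0], [4.0, 8.0, 14.0]])
rref = B @ A
```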

3. A square matrix is said to be diagonal if its only non-zero entries lie on the main diagonal of the matrix. Prove that the diagonal matrix \(A = \begin{pmatrix} a & 0\\0 & b\end{pmatrix}\) has an inverse if and only if both \(a, b\) are non-zero. In this case, find \(A^{-1}\).

Solution. Suppose both \(a,b \not = 0\). Then \(\begin{pmatrix} \frac{1}{a} & 0\\0 & \frac{1}{b}\end{pmatrix} \begin{pmatrix} a & 0\\0 & b\end{pmatrix} = \begin{pmatrix} 1 & 0\\0 & 1\end{pmatrix} = \begin{pmatrix} a & 0\\0 & b\end{pmatrix} \begin{pmatrix} \frac{1}{a} & 0\\0 & \frac{1}{b}\end{pmatrix}\), so \(A\) is invertible with \(A^{-1} = \begin{pmatrix} \frac{1}{a} & 0\\0 & \frac{1}{b}\end{pmatrix}\). Conversely, since \(A\) has an inverse if and only if \(\det A \not = 0\), if \(A\) has an inverse, then \(ab \not = 0\), so both \(a\) and \(b\) are non-zero.

Bonus Problem 2. Let \(A\) be a \(2\times 2\) matrix. Prove that \(A\) is invertible if there exists a \(2\times 2\) matrix \(H\) such that \(HA = I_2\) or there exists a \(2\times 2\) matrix \(L\) such that \(AL = I_2\). Be sure to verify both scenarios. This problem shows that just one of the conditions in the definition of invertibility is required for a \(2\times 2\) matrix to be invertible. Due Tuesday, February 10. (5 points.)

Solution. If \(HA = I_2\), then \(1 = \det I_2 = \det (HA) = (\det H)\cdot (\det A)\), so \(\det A\not = 0\). Thus, writing \(A = \begin{pmatrix} a & b\\c & d\end{pmatrix}\), we may form \(\begin{pmatrix} \frac{d}{\rho} & -\frac{b}{\rho}\\-\frac{c}{\rho} & \frac{a}{\rho}\end{pmatrix}\), with \(\rho = \det A\), which we know to be \(A^{-1}\). The case \(AL = I_2\) is similar.

Bonus Problem 3. Elementary matrices are defined for larger square matrices by applying elementary row operations to an identity matrix. Convert the elementary row operations you used in Problem 1 above to write the inverse you found as a product of elementary matrices. Be sure to check your answer. Again, be careful with the order in which you take the product of elementary matrices. Due February 10. (5 points)

Solution. We have

\[\scriptsize\begin{pmatrix} 1 & 0 & 1\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 0\\0 & 0 & \frac{1}{2}\end{pmatrix} \begin{pmatrix} 1 & -2 & 0\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\\0 & 3 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\0 & \frac{1}{4} & 0\\0 & 0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 0\\-2 & 0 & 1\end{pmatrix} \begin{pmatrix} 0 & 0 & 1\\0 & 1 & 0\\1 & 0 & 0\end{pmatrix} = A^{-1}.\]

You should check the details!
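One way to check the details is by machine. The sketch below rebuilds the seven elementary matrices directly from the row operations used in Problem 1 (R1↔R3, then -2R1+R3, (1/4)R2, 3R2+R3, -2R2+R1, (1/2)R3, R3+R1), with the leftmost factor corresponding to the last operation:

```python
import numpy as np

E7 = np.array([[1.0, 0, 1], [0, 1, 0], [0, 0, 1]])     # R3 + R1
E6 = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 0.5]])   # (1/2)R3
E5 = np.array([[1.0, -2, 0], [0, 1, 0], [0, 0, 1]])    # -2R2 + R1
E4 = np.array([[1.0, 0, 0], [0, 1, 0], [0, 3, 1]])     # 3R2 + R3
E3 = np.array([[1.0, 0, 0], [0, 0.25, 0], [0, 0, 1]])  # (1/4)R2
E2 = np.array([[1.0, 0, 0], [0, 1, 0], [-2, 0, 1]])    # -2R1 + R3
E1 = np.array([[0.0, 0, 1], [0, 1, 0], [1, 0, 0]])     # R1 <-> R3

prod = E7 @ E6 @ E5 @ E4 @ E3 @ E2 @ E1
A = np.array([[2.0, 1, 0], [0, 4, 0], [1, 2, -1]])
```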

Tuesday, February 10

1. Show that the vectors \(v_1 = (2,3)\) and \(v_2 = (3,2)\) are linearly independent and then write \(w = (13, 12)\) as a linear combination of \(v_1\) and \(v_2\).

Solution. Since \(\textrm{det}\begin{pmatrix} 2 & 3\\3 & 2 \end{pmatrix}= -5 \neq 0\), \(v_1, v_2\) are linearly independent. To write \(w\) as a linear combination of \(v_1, v_2\) we must solve the system obtained from the vector equation \((13,12) = x(2,3)+y(3,2)\), i.e.,

\[\begin{align*} 2x+3y &= 13\\ 3x+2y &= 12. \end{align*}\]

Starting with the augmented matrix \(\begin{bmatrix}2 & 3 & | & 13\\3 & 2 & | & 12\end{bmatrix}\), Gaussian elimination yields \(x = 2, y = 3\), i.e., \(w = 2v_1+3v_2\).
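The coefficients can also be found numerically, since they solve the system whose coefficient matrix has \(v_1, v_2\) as columns:

```python
import numpy as np

v1, v2, w = np.array([2.0, 3.0]), np.array([3.0, 2.0]), np.array([13.0, 12.0])

M = np.column_stack([v1, v2])    # det(M) = -5, so v1, v2 are independent
coeffs = np.linalg.solve(M, w)   # (2, 3), i.e. w = 2*v1 + 3*v2
```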

2. Show that the vectors \(v_1 = (1,1), v_2 = (2,1), v_3 = (6,4)\) are not linearly independent by finding real numbers \(\alpha, \beta, \gamma \in \mathbb{R}\), not all zero, such that \(\alpha v_1+\beta v_2+\gamma v_3 = \vec{0}\).

Solution. One seeks a non-trivial solution to the vector equation \(x(1,1)+y(2,1)+z(6,4) = \vec{0}\), i.e., a non-zero solution to the system of equations

\[\begin{align*} x+2y+6z &= 0\\ x+y+4z &= 0. \end{align*}\]

This can be done using Gaussian elimination, but a close inspection shows that \(2v_1+2v_2+(-1)v_3 = \vec{0}\). There are, in fact, infinitely many solutions to the system of equations above, all multiples of the vector \((2,2,-1)\).

3. Consider the line through the origin in \(\mathbb{R}^2\) given by \(6x-7y = 0\). Suppose the vectors \(v_1 = (u, v)\) and \(v_2 = (r,s)\) lie on the line. Show that: (i) The vector \(v_1+v_2\) lies on the line and (ii) The vector \(\lambda v_1\) lies on the line, for all \(\lambda \in \mathbb{R}\). Thus the set of vectors in \(\mathbb{R}^2\) lying on this line is closed under addition and scalar multiplication.

Solution. We have \(v_1+v_2 = (u+r, v+s)\); substituting gives \(6(u+r)-7(v+s) = (6u-7v)+(6r-7s) = 0 + 0 = 0\), so \(v_1+v_2\) lies on the line. Similarly, \(\lambda v_1 = (\lambda u, \lambda v)\), and substituting gives \(6(\lambda u) - 7(\lambda v) = \lambda (6u-7v) = \lambda\cdot 0 = 0\), so \(\lambda v_1\) lies on the line.

Thursday, February 12

A subset \(W\subseteq \mathbb{R}^2\) is a subspace of \(\mathbb{R}^2\) if it is closed under vector addition and scalar multiplication.

1. Verify that the line \(2x+3y = 0\) is a subspace of \(\mathbb{R}^2\), but the line \(2x+3y = 1\) is not a subspace of \(\mathbb{R}^2\).

Solution. To see that the line \(2x+3y = 0\) is a subspace of \(\mathbb{R}^2\), one proceeds exactly as in Problem 3 from the previous assignment: assume the vectors \(v_1 = (u, v)\) and \(v_2 = (r,s)\) lie on the line, then show that (i) the vector \(v_1+v_2\) lies on the line and (ii) the vector \(\lambda v_1\) lies on the line, for all \(\lambda \in \mathbb{R}\). The proof is almost exactly the same; only the coefficients in the equation of the line are different.

To see that the line \(2x+3y = 1\) is not a subspace, note that the vector \((1,-1)\) is on the line, but its multiple \(2(1,-1) = (2,-2)\) is not on the line.

2. Show directly from the definition of subspace that any subspace of \(\mathbb{R}^2\) must contain \((0,0)\).

Solution. Take any \(v\) in the subspace \(W\) (a subspace is understood to be non-empty). By closure under scalar multiplication, \((-1)\cdot v \in W\), and then by closure under addition, \(v+ (-1)\cdot v = v+(-v) = \vec{0} \in W\).

3. Define the function \(T:\mathbb{R}^2\to \mathbb{R}^2\) by the equation \(T(x,y) = (-2x+y, x+4y)\). Thus for example, if \(v = (3,2)\), then \(T(v) = T(3,2) = (-2\cdot 3+2, 3+4\cdot 2) = (-4,11)\). Suppose \(v = (a,b)\) and \(w = (c,d)\). Show that:

  1. (i) \(T(v+w) = T(v)+T(w)\)
  2. (ii) \(T(\lambda v) = \lambda T(v)\), for \(\lambda \in \mathbb{R}\).

A function with properties (i) and (ii) is called a linear transformation.

Solution. For (i),

\[\begin{align*} T(v+w) &= T(a+c, b+d)\\ &= (-2(a+c)+(b+d), a+c+4(b+d))\\ &= (-2a-2c+b+d, a+c+4b+4d)\\ &= (-2a+b, a+4b)+(-2c+d,c+4d)\\ &= T(v)+T(w). \end{align*}\]

And for (ii),

\[T(\lambda v) = T(\lambda a, \lambda b) = (-2(\lambda a)+(\lambda b), \lambda a+ 4(\lambda b)) = \lambda (-2a+b, a+4b) = \lambda T(a,b).\]
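Linearity can also be spot-checked numerically at sample points (the samples below are half-integers, so the floating-point arithmetic is exact):

```python
# The transformation from the problem: T(x, y) = (-2x + y, x + 4y).
def T(x, y):
    return (-2*x + y, x + 4*y)

samples = [(3.0, 2.0, -1.0, 4.0, 2.5), (0.5, -7.0, 2.0, 1.0, -3.0)]
ok = True
for a, b, c, d, lam in samples:
    Tv, Tw = T(a, b), T(c, d)
    ok = ok and T(a + c, b + d) == (Tv[0] + Tw[0], Tv[1] + Tw[1])  # (i)
    ok = ok and T(lam*a, lam*b) == (lam*Tv[0], lam*Tv[1])          # (ii)
```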

Bonus Problem 4. First verify that \(\{\vec{0}\}\) and \(\mathbb{R}^2\) are subspaces of \(\mathbb{R}^2\) and then prove that lines through the origin are the only other subspaces of \(\mathbb{R}^2\). In other words, if \(W\) is a subset of \(\mathbb{R}^2\) and \(W\) is a subspace, then \(W\) is \(\{\vec{0}\}, \mathbb{R}^2\) or a line through the origin. Due Tuesday, February 17. (5 points)

Solution. It is easy to check that \(\{\vec{0}\}\) and \(\mathbb{R}^2\) are subspaces of \(\mathbb{R}^2\). Suppose \(W\) is a non-zero subspace, and let \(w = (a,b)\) be a non-zero vector in \(W\). Then \(w\) lies on the line \(L: bx-ay = 0\). We want to show that \(W\) is the line \(L\), assuming \(W\) is not \(\mathbb{R}^2\). Note that \((u,v)\) lies on \(L\) if and only if \((u,v)\) is a multiple of \(w\). To see this, on the one hand, if \((u,v) = tw\), then \((u,v) = (ta,tb)\), which satisfies the equation \(bx-ay = 0\). On the other hand, suppose \((u,v)\) lies on the line \(L\), so \(bu-av = 0\). If \(a \neq 0\), then \(v = \frac{b}{a} u\), so \((u, v) = (u, \frac{b}{a}u) = \frac{u}{a}(a,b)\), showing \((u,v)\) is a multiple of \(w\); the argument is similar if \(b\neq 0\). Now, suppose there is a vector \(h\) in \(W\) not on the line \(L\). Then \(w, h\) are linearly independent vectors, and therefore \(\langle w, h\rangle = \mathbb{R}^2\). But \(\mathbb{R}^2 = \langle w,h\rangle \subseteq W\) shows that \(W = \mathbb{R}^2\), contrary to our assumption on \(W\). Thus, \(W = L\), as required.

Tuesday, February 17

1. For the linear transformation \(T\begin{pmatrix} x\\y\end{pmatrix} = \begin{pmatrix} 2x-3y\\-x+y\end{pmatrix}\), and bases for \(\mathbb{R}^2\) \(E := \{e_1, e_2\}\), \(B =\{ w_1, w_2\}\), with \(w_1 = \begin{pmatrix} 1\\1\end{pmatrix}, w_2 = \begin{pmatrix} 1\\2\end{pmatrix}\), calculate \([T]_E^E, [T]_B^E, [T]_E^B\) and \([T]_B^B\).

Solution. Each matrix is obtained by solving various systems of equations. We first calculate the values of \(T\) on the given basis elements: \(T(e_1) = (2, -1)\), \(T(e_2) = (-3,1)\), \(T(w_1) = (-1,0)\), \(T(w_2) = (-4,1)\).

We can read off \([T]_E^E = \begin{pmatrix} 2 & -3\\-1 & 1\end{pmatrix}\), since any vector \((a,b) = ae_1+be_2\). Similarly, we can write down the matrix \([T]_B^E = \begin{pmatrix} -1 & -4\\0 & 1\end{pmatrix}\), since the values of \(T(w_1), T(w_2)\) are easily expressed in terms of \(e_1, e_2\).

For \([T]_E^B\), we have to express \(T(e_1) = (2,-1)\) and \(T(e_2) = (-3,1)\) as a linear combination of \(w_1, w_2\). In other words, we must solve the vector equations \((2,-1) = x(1,1)+y(1,2)\) and \((-3,1) = x(1,1)+y(1,2)\). These equations give rise to two systems of equations:

System A: \(x + y = 2,\ x + 2y = -1\)      System B: \(x + y = -3,\ x + 2y = 1\)

The solutions to the systems are \(x = 5, y = -3\) and \(x = -7, y = 4\). It follows that \([T]_E^B = \begin{pmatrix} 5 & -7\\-3 & 4\end{pmatrix}\).

Similarly, to calculate \([T]_B^B\), we must express \(T(w_1), T(w_2)\) as linear combinations of \(w_1, w_2\). In other words, we must solve the vector equations \((-1,0) = x(1,1)+y(1,2)\) and \((-4,1) = x(1,1)+y(1,2)\). Converting these to systems of equations and solving gives \(x = -2, y = 1\) for the first vector equation and \(x = -9, y = 5\) for the second. Therefore we have \([T]_B^B = \begin{pmatrix} -2 & -9\\1 & 5\end{pmatrix}\).
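All of these coordinate computations amount to solving linear systems whose coefficient matrix has \(w_1, w_2\) as columns, so both matrices can be computed at once; a NumPy sketch for \([T]_E^B\) and \([T]_B^B\):

```python
import numpy as np

W = np.array([[1.0, 1.0], [1.0, 2.0]])    # columns are w1, w2
T = np.array([[2.0, -3.0], [-1.0, 1.0]])  # [T]_E^E, columns T(e1), T(e2)

# Solving W x = T(e_i) for both i at once gives the columns of [T]_E^B;
# replacing T(e_i) by T(w_i) (the columns of T @ W) gives [T]_B^B.
T_E_B = np.linalg.solve(W, T)
T_B_B = np.linalg.solve(W, T @ W)
```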

2. Let \(v_1, v_2\in \mathbb{R}^2\) be linearly independent. Thus, by a theorem from class, any vector \(w\in \mathbb{R}^2\) can be written as a linear combination of \(v_1, v_2\), i.e., \(w = av_1+bv_2\), for \(a,b\in \mathbb{R}\). Prove that the linear combination is unique, i.e., if \(w = cv_1+dv_2\), with \(c,d\in \mathbb{R}\) then \(a = c\) and \(b = d\). Note: This follows formally from our fundamental properties and the definition of linear independence, without having to assign coordinates to the vectors involved.

Solution. Suppose \(av_1+bv_2 = cv_1+dv_2\). Then \((a-c)v_1 +(b-d)v_2 = 0\). Since \(v_1, v_2\) are linearly independent, \(a-c = 0\) and \(b-d = 0\), i.e., \(a = c\) and \(b = d\), as required.

Thursday, February 19

1. Let \(T(x,y) = (2x-3y, -x+y)\), \(S(x,y) = (-y,x)\), \(\beta = \{(1,1), (1, 2)\}\), \(\gamma = \{(-1,1), (2,1)\}\). Verify the very important formula from today's lecture: \([ST]_E^{\gamma} = [S]_{\beta}^{\gamma}\cdot [T]_E^{\beta}\). You can use some of the calculations you have done in the previous homework.

Solution. We have the values of \(T(e_1), T(e_2)\) from the previous homework set. Now \(S(w_1) = S(1,1) = (-1,1)\) and \(S(w_2) = S(1,2) = (-2,1)\), and \(ST(e_1) = S(2,-1) = (1,2)\) and \(ST(e_2) = S(-3,1) = (-1,-3)\).

The technique for calculating the indicated matrices consists in solving various systems of equations as in the previous homework set. Upon doing so, we obtain: \[[T]_E^\beta = \begin{pmatrix} 5 & -7\\-3 & 4\end{pmatrix}, \quad [S]_\beta^\gamma = \begin{pmatrix} 1 & \frac{4}{3}\\0 & -\frac{1}{3}\end{pmatrix}, \quad [ST]_E^\gamma = \begin{pmatrix} 1 & -\frac{5}{3}\\1 & -\frac{4}{3}\end{pmatrix}.\] And we also have \[\begin{pmatrix} 1 & -\frac{5}{3}\\1 & -\frac{4}{3}\end{pmatrix} = \begin{pmatrix} 1 & \frac{4}{3}\\0 & -\frac{1}{3}\end{pmatrix}\cdot \begin{pmatrix} 5 & -7\\-3 & 4\end{pmatrix},\] as required.
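A numerical check of the very important formula (the helper `coords` is ad hoc, not notation from class; it returns the coordinate vectors of the columns of its second argument in the basis given by the columns of its first):

```python
import numpy as np

def coords(M, vs):
    return np.linalg.solve(M, vs)

beta = np.array([[1.0, 1.0], [1.0, 2.0]])    # columns (1,1), (1,2)
gamma = np.array([[-1.0, 2.0], [1.0, 1.0]])  # columns (-1,1), (2,1)
T = np.array([[2.0, -3.0], [-1.0, 1.0]])     # [T]_E^E
S = np.array([[0.0, -1.0], [1.0, 0.0]])      # [S]_E^E for S(x,y) = (-y,x)

T_E_beta = coords(beta, T)
S_beta_gamma = coords(gamma, S @ beta)
ST_E_gamma = coords(gamma, S @ T)

formula_holds = np.allclose(ST_E_gamma, S_beta_gamma @ T_E_beta)
```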

2. Using the notation from problem 1, verify the change of basis formula \([S]_{\beta}^{\beta} = [I_2]_{\gamma}^{\beta}\cdot [S]_{\gamma}^{\gamma}\cdot [I_2]_{\beta}^{\gamma}\).

Solution. Calculating as before yields: \([S]_{\beta}^{\beta} = \begin{pmatrix} -3 & -5\\2 & 3\end{pmatrix}\), \([S]_\gamma^\gamma = \begin{pmatrix} -\frac{1}{3} & \frac{5}{3}\\-\frac{2}{3} & \frac{1}{3}\end{pmatrix}\), \([I_2]_\beta^\gamma = \begin{pmatrix} \frac{1}{3} & 1\\\frac{2}{3} & 1\end{pmatrix}\) and \([I_2]_\gamma^\beta = \begin{pmatrix} -3 & 3\\2 & -1\end{pmatrix}\), and one easily checks that \([S]_{\beta}^{\beta} = [I_2]_{\gamma}^{\beta}\cdot [S]_{\gamma}^{\gamma}\cdot [I_2]_{\beta}^{\gamma}\).

Bonus Problem 5. Use the very important formula to prove that matrix multiplication of \(2\times 2\) matrices is associative. Hint: First show that if \(A\) is a \(2\times 2\) matrix, then there exists \(T: \mathbb{R}^2\to \mathbb{R}^2\) such that \([T]^E_E = A\). Due Tuesday, February 24. (5 points)

Solution. Suppose \(A = \begin{pmatrix} a & c\\b & d\end{pmatrix}\). Define \(T(e_1) = (a,b)\) and \(T(e_2) = (c,d)\). Then \([T]_E^E = A\).

Now, let \(A, B, C\) be \(2\times 2\) matrices with entries in \(\mathbb{R}\) and \(T, S, U\) linear transformations from \(\mathbb{R}^2\) to \(\mathbb{R}^2\) such that \([T]_E^E = A\), \([S]_E^E = B\), \([U]_E^E = C\). Then by the very important formula and the fact that \(T(SU) = (TS)U\), we have \[\begin{aligned} A(BC) &= [T]_E^E\cdot ([S]_E^E[U]_E^E) = [T]_E^E\cdot [SU]_E^E = [T(SU)]_E^E \\ &= [(TS)U]_E^E = [TS]_E^E\cdot [U]_E^E = ([T]_E^E\cdot [S]_E^E)\cdot [U]_E^E = (AB)C. \end{aligned}\]

Tuesday, March 3

1. Show that the following matrices are diagonalizable by first finding their eigenvectors and eigenvalues: \(A = \begin{pmatrix} 1 & 4\\2 & 3\end{pmatrix}\) and \(B = \begin{pmatrix} 7 & 2\\-4 & 1\end{pmatrix}\).

Solution. We have \(p_A(x) = \det \begin{pmatrix} -x+1 & 4\\2 & -x+3 \end{pmatrix} = (x-1)(x-3)-8 = x^2-4x-5 = (x-5)(x+1)\), so 5, \(-1\) are the eigenvalues.

For 5: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} -4 & 4\\2 & -2\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & -1\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_1 = \begin{pmatrix} 1\\1\end{pmatrix}\).

For \(-1\): We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 2 & 4\\2 & 4\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & 2\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_2 = \begin{pmatrix} 2\\-1\end{pmatrix}\).

We take \(P = \begin{pmatrix} 1 & 2\\1 & -1\end{pmatrix}\), which gives \(P^{-1} = \begin{pmatrix} \frac{1}{3} & \frac{2}{3}\\\frac{1}{3} & -\frac{1}{3}\end{pmatrix}\), so that \(P^{-1}AP = \begin{pmatrix} 5 & 0\\0 & -1\end{pmatrix}\).

For the matrix \(B\), we have \(p_B(x) = \det \begin{pmatrix} -x+7 & 2\\-4 & -x+1\end{pmatrix} = (x-1)(x-7)+8 = x^2-8x+15 = (x-3)(x-5)\), so the eigenvalues are 3, 5.

For 5: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 2 & 2\\-4 & -4\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & 1\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_1 = \begin{pmatrix} 1\\-1\end{pmatrix}\).

For 3: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 4 & 2\\-4 & -2\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 2 & 1\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_2 = \begin{pmatrix} 1\\-2\end{pmatrix}\).

We take \(P = \begin{pmatrix} 1 & 1\\-1 & -2\end{pmatrix}\), which gives \(P^{-1} = \begin{pmatrix} 2 & 1\\-1 & -1\end{pmatrix}\), so that \(P^{-1}BP = \begin{pmatrix} 5 & 0\\0 & 3\end{pmatrix}\).
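Both diagonalizations can be verified numerically:

```python
import numpy as np

# First matrix: eigenvalues 5, -1, with eigenvector columns in PA.
A = np.array([[1.0, 4.0], [2.0, 3.0]])
PA = np.array([[1.0, 2.0], [1.0, -1.0]])
DA = np.linalg.inv(PA) @ A @ PA

# Second matrix: eigenvalues 5, 3, with eigenvector columns in PB.
B = np.array([[7.0, 2.0], [-4.0, 1.0]])
PB = np.array([[1.0, 1.0], [-1.0, -2.0]])
DB = np.linalg.inv(PB) @ B @ PB
```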

2. A key step in the diagonalizability of the matrix \(A\) is that there should be a basis for \(\mathbb{R}^2\) consisting of eigenvectors of \(A\). For the matrix \(A = \begin{pmatrix}1 & 2\\0 & 1\end{pmatrix}\), find the eigenvectors and eigenvalues and show that there is no basis for \(\mathbb{R}^2\) consisting of eigenvectors of \(A\).

Solution. We have \(p_A(x) = \det \begin{pmatrix} -x+1 & 2\\0 & -x+1\end{pmatrix} = (-x+1)^2\), so that 1 is a repeated root of \(p_A(x)\), and is the only eigenvalue. To find eigenvectors associated to 1, we solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 0 & 2\\0 & 0\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 0 & 1\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_1 = \begin{pmatrix} 1\\0\end{pmatrix}\). Thus, the matrix \(A\) does not have a second eigenvector linearly independent from \(v_1\).

Bonus Problem 6. A \(2\times 2\) matrix \(A\) is a scalar matrix if \(A = \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} = \lambda\cdot I_2\), for some \(\lambda \in \mathbb{R}\). Show that:

  1. (i) If \(A\in \mathrm{M}_2(\mathbb{R})\) is a scalar matrix then \(AB = BA\), for all \(B\in \mathrm{M}_2(\mathbb{R})\).
  2. (ii) If \(A \in \mathrm{M}_2(\mathbb{R})\) is diagonalizable and \(P^{-1}AP\) is a scalar matrix, then \(A\) was already a scalar matrix.
This bonus problem is due Tuesday March 10 and is worth 5 points.

Solution. For (i), suppose \(B = \begin{pmatrix} a & b\\c & d\end{pmatrix}\). Then \(\begin{pmatrix} a & b\\c & d\end{pmatrix} \cdot \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} = \begin{pmatrix} \lambda a & \lambda b\\\lambda c & \lambda d\end{pmatrix} = \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} \cdot \begin{pmatrix} a & b\\c & d\end{pmatrix}\).

For (ii), suppose \(P^{-1}AP = \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix}\). Then, using (i) and multiplying on the left by \(P\) and on the right by \(P^{-1}\) we have

\[A = P(P^{-1}AP)P^{-1} = P\begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} P^{-1} = \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} PP^{-1} = \begin{pmatrix} \lambda & 0\\0 & \lambda \end{pmatrix}.\]

Thursday, March 5

1. For the matrix \(A = \begin{pmatrix} 1 & 0 & 0\\0 & 0 & 9\\0 & 1 & 0\end{pmatrix}\), find the eigenvalues of \(A\), the corresponding eigenvectors, and a diagonalizing matrix \(P\). Be sure to check that \(P^{-1}AP\) is a diagonal matrix. Note the process here is the same as for \(2\times 2\) matrices. First find the roots of the characteristic polynomial \(p_A(x) = \det (A-xI_3)\) and then find the corresponding eigenvectors as before; namely if \(\alpha\) is an eigenvalue, solve the homogeneous system of equations whose coefficient matrix is \(A-\alpha I_3\).

Solution. We have

\[\begin{aligned} p_A(x) &= \det \begin{pmatrix} -x+1 & 0 & 0\\0 & -x & 9\\0 & 1 & -x\end{pmatrix}\\ &= (-x+1)\cdot \det \begin{pmatrix} -x & 9\\1 & -x\end{pmatrix}\\ &= (-x+1)(x^2-9) = (-x+1)(x-3)(x+3), \end{aligned}\]

so the eigenvalues of \(A\) are \(1, 3, -3\).

For 1: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 0 & 0 & 0\\0 & -1 & 9\\0 & 1 & -1\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 0 & 1 & 0\\0 & 0 & 1\\0 & 0 & 0\end{pmatrix}\), so the solution set has one parameter and consists of all multiples of the eigenvector \(v_1 = \begin{pmatrix} 1\\0\\0\end{pmatrix}\).

For 3: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} -2 & 0 & 0\\0 & -3 & 9\\0 & 1 & -3\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & 0 & 0\\0 & 1 & -3\\0 & 0 & 0\end{pmatrix}\), so the solution set has one parameter and consists of all multiples of the eigenvector \(v_2 = \begin{pmatrix} 0\\3\\1\end{pmatrix}\).

For \(-3\): We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 4 & 0 & 0\\0 & 3 & 9\\0 & 1 & 3\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 3\\0 & 0 & 0\end{pmatrix}\), so the solution set has one parameter and consists of all multiples of the eigenvector \(v_3 = \begin{pmatrix} 0\\3\\-1\end{pmatrix}\).

We take \(P = \begin{pmatrix} 1 & 0 & 0\\0 & 3 & 3\\0 & 1 & -1\end{pmatrix}\). Using Gaussian elimination, we find that \(P^{-1} = \begin{pmatrix} 1 & 0 & 0\\0 & \frac{1}{6} & \frac{1}{2}\\0 & \frac{1}{6} & -\frac{1}{2}\end{pmatrix}\), so that \(P^{-1}AP = \begin{pmatrix} 1 & 0 & 0\\0 & 3 & 0\\0 & 0 & -3\end{pmatrix}\).
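Again, a quick numerical verification of \(P^{-1}AP\):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 9.0], [0.0, 1.0, 0.0]])
P = np.array([[1.0, 0.0, 0.0], [0.0, 3.0, 3.0], [0.0, 1.0, -1.0]])

D = np.linalg.inv(P) @ A @ P   # should be diag(1, 3, -3)
```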

2. Suppose \(A, B, P\in \textrm{M}_2(\mathbb{R})\) satisfy \(B = P^{-1}AP\). Show that \(p_B(x) = p_A(x)\), i.e., \(A\) and \(B\) have the same characteristic polynomial. Hint: \(x I_2 = xP^{-1}P\).

Solution. We have

\[\begin{aligned} p_B(x) &= \det (P^{-1}AP-xI_2)\\ &= \det(P^{-1}AP - xP^{-1}P)\\ &= \det\{P^{-1}(A-xI_2)P\}\\ &= \det P^{-1}\cdot \det (A-xI_2)\cdot \det P\\ &= \det (A-xI_2)\\ &= p_A(x). \end{aligned}\]
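For a concrete (non-proof) check, one can compare characteristic polynomial coefficients for a sample \(A\) and invertible \(P\); `np.poly` of a square matrix returns the coefficients of its characteristic polynomial:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
P = np.array([[1.0, 2.0], [0.0, 1.0]])   # any invertible P works

B = np.linalg.inv(P) @ A @ P
same_char_poly = np.allclose(np.poly(A), np.poly(B))
```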

Tuesday, March 10

1. Let \(A = \begin{pmatrix} 2 & 1\\1 & 2\end{pmatrix}\). Verify that \(v_1 = \begin{pmatrix} 1\\1\end{pmatrix}\) and \(v_2 = \begin{pmatrix} 1\\-1\end{pmatrix}\) are eigenvectors of \(A\) with eigenvalues 3 and 1 respectively.
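The eigenvector claims in problem 1 can be spot-checked by machine (the written verification is still expected):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])

ok1 = np.allclose(A @ v1, 3 * v1)   # eigenvalue 3
ok2 = np.allclose(A @ v2, 1 * v2)   # eigenvalue 1
```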

2. In preparation for Thursday's lecture, verify that \(\begin{pmatrix} x_1(t)\\x_2(t)\end{pmatrix} = c_1e^{3t}\begin{pmatrix} 1\\1\end{pmatrix} + c_2e^{t} \begin{pmatrix} 1\\-1\end{pmatrix}\), equivalently, \(x_1(t) = c_1e^{3t}+c_2e^t\) and \(x_2(t) = c_1e^{3t}-c_2e^t\) is a solution to the system of differential equations,

\[\begin{align*} x_1'(t) &= 2x_1(t)+x_2(t)\\ x_2'(t) &= x_1(t)+2x_2(t). \end{align*}\]

3. Given the solutions to the system of differential equations in the previous problem, solve the initial condition \(\begin{pmatrix} x_1(0)\\x_2(0)\end{pmatrix} = \begin{pmatrix} 3\\-4\end{pmatrix}\).

Thursday, March 12

1. For the system of first order linear differential equations

\[\begin{align*} x_1'(t) &= 5x_1(t)-3x_2(t)\\ x_2'(t) &= -6x_1(t)+2x_2(t) \end{align*}\]

first find the eigenvalues and corresponding eigenvectors for the coefficient matrix \(A = \begin{pmatrix} 5 & -3\\-6 & 2\end{pmatrix}\), then, for the given system, follow step-by-step the derivation of the solution to the system given in class. After writing the general solution, write the solution to the system with initial conditions \(x_1(0) = 2, x_2(0) = \sqrt{5}\).

2. Use the fact that for any matrix \(A\), \((P^{-1}AP)^n = P^{-1}A^nP\) to find \(A^{99}\), for the matrix \(A\) in the problem above. You should use exponents in your answer. Hint: Use the fact that \(A\) is diagonalizable.

Bonus Problem 7. Let \(D = \begin{pmatrix} \alpha & 0\\0 & \beta\end{pmatrix}\). Use the formula \(e^{x} = 1+x+\frac{1}{2!}x^2+\frac{1}{3!}x^3 + \cdots\) to find an expression for \(e^{D}\). Then use the ideas in problem 2 above to find \(e^A\), for \(A = \begin{pmatrix} 5 & -3\\-6 & 2\end{pmatrix}\). This bonus problem is due Tuesday, March 24 and is worth 5 points.