Math 291 Spring 2026: Exam 2

Instructions: Be sure to put your name on each page of your exam. When in doubt, it is always better to provide more detail rather than less. No calculators or other computational aids are allowed.

True-False and Short Answer. Each question is worth 6 points.

1.

True or False: Suppose \(\lambda\) is an eigenvalue for the \(2\times 2\) matrix \(A\). Then the set of eigenvectors associated to \(\lambda\) form a subspace of \(\mathbb{R}^2\).

Solution. False. Eigenvectors are by definition non-zero, and any subspace must contain the zero vector, so the set of eigenvectors associated to \(\lambda\) is not a subspace. (Adjoining the zero vector yields the eigenspace of \(\lambda\), which is a subspace.)
2.

True or False: Every \(2\times 2\) symmetric matrix over \(\mathbb{C}\) is diagonalizable.

Solution. The correct answer is false, but a statement made in class may have been misleading, so no one lost any points on this problem. Symmetry means \(A = A^t\), and over \(\mathbb{C}\) such matrices need not be diagonalizable (in contrast with the real case). For example,

\[A = \begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix}.\]

This satisfies \(A^t = A\). Its characteristic polynomial is \(p_A(x) = (1-x)(-1-x) - i^2 = -(1-x^2) + 1 = x^2\). So \(\lambda = 0\) is the only eigenvalue (with multiplicity 2). The eigenspace is \(\ker(A) = \ker\begin{pmatrix} 1 & i\\i & -1\end{pmatrix}\). Row-reducing gives \(\begin{pmatrix} 1 & i\\0 & 0\end{pmatrix}\), so the eigenspace is one-dimensional (spanned by \(\begin{pmatrix} i\\-1\end{pmatrix}\)) so \(A\) is not diagonalizable.

Note: However, over \(\mathbb{C}\), if \(A\) has two distinct eigenvalues, then \(A\) is diagonalizable.
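As a quick sanity check (not part of the exam solution), the counterexample above can be verified in SymPy:

```python
from sympy import I, Matrix

# The symmetric-but-not-diagonalizable complex matrix from the solution.
A = Matrix([[1, I], [I, -1]])

assert A.T == A                   # A is symmetric (plain transpose, no conjugation)
assert A.eigenvals() == {0: 2}    # lambda = 0 with algebraic multiplicity 2
assert len(A.nullspace()) == 1    # the eigenspace of 0 is one-dimensional
assert not A.is_diagonalizable()  # hence A is not diagonalizable over C
```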

3.

True or False: Suppose \(A = \begin{pmatrix} 2 & 3\\0 & 4\end{pmatrix}\). Then \(e^A = \begin{pmatrix} e^2 & e^3\\1 & e^4\end{pmatrix}\).

Solution. False. The matrix exponential \(e^A = \sum_{k=0}^{\infty} A^k/k!\) is not computed by exponentiating each entry. For instance, since \(A\) is upper triangular, every power \(A^k\) is upper triangular, so the \((2,1)\) entry of \(e^A\) must be \(0\), not \(1\). (Diagonalizing \(A\) gives \(e^A = \begin{pmatrix} e^2 & \frac{3}{2}(e^4 - e^2)\\ 0 & e^4\end{pmatrix}\).)
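A sketch of how one could check this with SymPy; the exam did not require computing \(e^A\), and the diagonal/off-diagonal entries asserted below come from diagonalizing \(A\):

```python
from sympy import Matrix, Rational, exp, simplify

A = Matrix([[2, 3], [0, 4]])
E = A.exp()  # the true matrix exponential, not an entrywise exponential

# e^A is upper triangular because A is, so the claimed (2,1) entry of 1 is wrong.
assert E[1, 0] == 0
assert simplify(E[0, 0] - exp(2)) == 0
assert simplify(E[1, 1] - exp(4)) == 0
# The (1,2) entry is (3/2)(e^4 - e^2), not e^3.
assert simplify(E[0, 1] - Rational(3, 2) * (exp(4) - exp(2))) == 0
```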
4.

Consider the coupled system of linear differential equations given by \(X'(t) = A\cdot X(t)\), with initial conditions \(X(0) = \begin{pmatrix} 5\\5\end{pmatrix}\), where \(A\) is a \(2\times 2\) real matrix and \(X(t) = \begin{pmatrix} x_1(t)\\x_2(t)\end{pmatrix}\). Suppose \(A\) has eigenvalues \(-2, 7\) with eigenvectors \(v_1 = \begin{pmatrix} 2\\3\end{pmatrix}\), \(v_2 = \begin{pmatrix} 3\\2\end{pmatrix}\), respectively. Write out the solution to the given system of differential equations. You do not have to derive the solution.

Solution. The general solution to \(X'(t) = AX(t)\) is

\[X(t) = c_1 e^{-2t}\begin{pmatrix} 2\\3\end{pmatrix} + c_2 e^{7t}\begin{pmatrix} 3\\2\end{pmatrix}.\]

Applying the initial condition \(X(0) = \begin{pmatrix} 5\\5\end{pmatrix}\):

\[c_1\begin{pmatrix} 2\\3\end{pmatrix} + c_2\begin{pmatrix} 3\\2\end{pmatrix} = \begin{pmatrix} 5\\5\end{pmatrix}.\]

This gives the system \(2c_1 + 3c_2 = 5\) and \(3c_1 + 2c_2 = 5\). Subtracting the first equation from the second: \(c_1 - c_2 = 0\), so \(c_1 = c_2\). Substituting back: \(5c_1 = 5\), giving \(c_1 = c_2 = 1\). Therefore the solution is

\[X(t) = e^{-2t}\begin{pmatrix} 2\\3\end{pmatrix} + e^{7t}\begin{pmatrix} 3\\2\end{pmatrix},\]

i.e., \(x_1(t) = 2e^{-2t} + 3e^{7t}\) and \(x_2(t) = 3e^{-2t} + 2e^{7t}\).
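The claimed solution can be checked symbolically; here \(A\) is reconstructed from its eigendata as \(PDP^{-1}\) (a sanity check, not part of the required answer):

```python
from sympy import Matrix, exp, simplify, symbols

t = symbols('t')
P = Matrix([[2, 3], [3, 2]])   # eigenvectors v1, v2 as columns
D = Matrix([[-2, 0], [0, 7]])  # corresponding eigenvalues
A = P * D * P.inv()            # A is determined by its eigenvalues and eigenvectors

# The solution derived above.
X = Matrix([2*exp(-2*t) + 3*exp(7*t),
            3*exp(-2*t) + 2*exp(7*t)])

assert simplify(X.diff(t) - A * X) == Matrix([0, 0])  # satisfies X' = AX
assert X.subs(t, 0) == Matrix([5, 5])                 # satisfies X(0) = (5, 5)
```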

5.

State the Spectral Theorem for \(2\times 2\) real matrices. Be sure to define all of the terms used in the statement of the theorem.

Solution. We first define the relevant terms.

  (i) A matrix \(A \in M_2(\mathbb{R})\) is symmetric if \(A^t = A\).
  (ii) A matrix \(Q \in M_2(\mathbb{R})\) is orthogonal if the columns of \(Q\) form an orthonormal basis for \(\mathbb{R}^2\).
  (iii) \(A\) is orthogonally diagonalizable if there exists an orthogonal matrix \(Q\) and a diagonal matrix \(D\) such that \(Q^{-1}AQ = D\).

Spectral Theorem (for \(2\times 2\) real matrices). A real \(2\times 2\) symmetric matrix is orthogonally diagonalizable.

6.

Suppose \(P^{-1}AP = \begin{pmatrix} a & 0\\0 & b\end{pmatrix}\). Write a formula for \(A^{1729}\).

Solution. Since \(P^{-1}AP = D\) where \(D = \begin{pmatrix} a&0\\0&b\end{pmatrix}\), we have \(A = PDP^{-1}\). Then

\[A^{1729} = (PDP^{-1})^{1729} = PD^{1729}P^{-1}.\]

Since \(D\) is diagonal, \(D^{1729} = \begin{pmatrix} a^{1729} & 0\\0 & b^{1729}\end{pmatrix}\). Therefore

\[A^{1729} = P\begin{pmatrix} a^{1729} & 0\\0 & b^{1729}\end{pmatrix} P^{-1}.\]
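To see the formula in action, one can check \(A^n = PD^nP^{-1}\) in SymPy, using a hypothetical invertible \(P\) and a small exponent in place of 1729:

```python
from sympy import Matrix, symbols

a, b = symbols('a b')
P = Matrix([[1, 1], [0, 1]])  # any invertible P works; this one is just an example
D = Matrix([[a, 0], [0, b]])
A = P * D * P.inv()

n = 5  # small stand-in for 1729, to keep the check fast
Dn = Matrix([[a**n, 0], [0, b**n]])  # a diagonal matrix is powered entrywise
assert (A**n).expand() == (P * Dn * P.inv()).expand()
```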

Regarding the number 1729: Upon visiting his colleague Ramanujan in the hospital, the English mathematician G.H. Hardy remarked that the number of the taxi cab he arrived in, 1729, was a rather dull number. "No," Ramanujan replied, "it is a very interesting number; it is the smallest number expressible as the sum of two positive cubes in two different ways." If you are not familiar with the story of the Indian mathematician Ramanujan, then you might enjoy either seeing the movie, or reading the book, titled The Man Who Knew Infinity.

Calculation Problems. Each question is worth 20 points.

1.

Consider the matrix \(A = \begin{pmatrix} 2 & i\\i & 1\end{pmatrix}\). Find the eigenvalues of \(A\) and find a matrix over \(\mathbb{C}\) diagonalizing \(A\) and verify the diagonalization. Then show that \(A\) is not orthogonally diagonalizable as a complex matrix.

Solution. My apologies: the exam question asked for a matrix over \(\mathbb{R}\) diagonalizing \(A\), which was a typo; the diagonalization below is over \(\mathbb{C}\).

Step 1: We compute the characteristic polynomial:

\[\begin{aligned} p_A(x) &= \det(A - xI) = \det\begin{pmatrix} 2-x & i\\i & 1-x\end{pmatrix} \\ &= (2-x)(1-x) - i^2 = (2-x)(1-x)+1 = x^2 - 3x + 3. \end{aligned}\]

By the quadratic formula:

\[x = \frac{3 \pm \sqrt{9 - 12}}{2} = \frac{3 \pm \sqrt{-3}}{2} = \frac{3 \pm i\sqrt{3}}{2}.\]

So the eigenvalues are \(\lambda_1 = \dfrac{3 + i\sqrt{3}}{2}\) and \(\lambda_2 = \dfrac{3 - i\sqrt{3}}{2}\).

Step 2: We find eigenvectors for each eigenvalue.

Eigenvector for \(\lambda_1 = \frac{3}{2}+\frac{i\sqrt{3}}{2}\): We consider \(A-\lambda_1 I = \begin{pmatrix} 2-\lambda_1 & i\\i & 1-\lambda_1\end{pmatrix}\). This matrix has rank one, so to solve \((A - \lambda_1 I)\begin{pmatrix} x\\y\end{pmatrix} = 0\), we may use the second row: \(ix + (1-\lambda_1)y = 0\). Set \(y = 1\):

\[x = \frac{\lambda_1-1}{i} = (\lambda_1 - 1)\cdot(-i) = \left(\frac{1}{2}+\frac{i\sqrt{3}}{2}\right)\cdot(-i) = \frac{\sqrt{3}}{2} - \frac{i}{2},\]

using \(\frac{1}{i} = -i\).

So \(v_1 = \begin{pmatrix} \frac{\sqrt{3}}{2} - \frac{i}{2}\\1\end{pmatrix}\).

Eigenvector for \(\lambda_2 = \frac{3}{2}-\frac{i\sqrt{3}}{2}\): Solve \((A - \lambda_2 I)\begin{pmatrix} x\\y\end{pmatrix} = 0\). Using the first row: \(\left(\frac{1}{2}+\frac{i\sqrt{3}}{2}\right)x + i\,y = 0\). Set \(x = 1\):

\[y = -\frac{\frac{1}{2}+\frac{i\sqrt{3}}{2}}{i} = -\left(\frac{1}{2}+\frac{i\sqrt{3}}{2}\right)(-i) = -\frac{\sqrt{3}}{2} + \frac{i}{2}.\]

So \(v_2 = \begin{pmatrix} 1\\-\frac{\sqrt{3}}{2}+\frac{i}{2}\end{pmatrix}\).

The diagonalizing matrix is \(P = \begin{pmatrix} \frac{\sqrt{3}}{2}-\frac{i}{2} & 1\\ 1 & -\frac{\sqrt{3}}{2}+\frac{i}{2}\end{pmatrix}\).

What I had in mind for the verification was checking that \(AP = PD\), for \(D = \begin{pmatrix} \dfrac{3 + i\sqrt{3}}{2} & 0\\[6pt]0 & \dfrac{3 - i\sqrt{3}}{2}\end{pmatrix}\), so that one does not have to compute \(P^{-1}\). Even this calculation, while straightforward, is more involved than I expected, so no points were deducted for omitting or miscalculating the verification. See the Appendix for the details.

Step 3: To see that \(A\) is not orthogonally diagonalizable over \(\mathbb{C}\), we check that the eigenvectors are not orthogonal with respect to the complex inner product:

\[v_1\cdot v_2 = \left(\frac{\sqrt{3}-i}{2}\right)\cdot \overline{1} + 1\cdot\overline{\left(\frac{-\sqrt{3}+i}{2}\right)} = \left(\frac{\sqrt{3}-i}{2}\right) + \left(\frac{-\sqrt{3}-i}{2}\right) = -i\not= 0,\]

so \(v_1, v_2\) are not orthogonal. Since the eigenvalues are distinct, each eigenspace is one-dimensional, so any eigenvectors for \(\lambda_1, \lambda_2\) are non-zero scalar multiples of \(v_1, v_2\); scaling by non-zero scalars cannot make a non-orthogonal pair orthogonal, so \(A\) admits no orthogonal basis of eigenvectors and hence is not orthogonally diagonalizable.
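Each claim in this problem can be double-checked with SymPy (a sanity check, not part of the exam):

```python
from sympy import I, Matrix, Rational, simplify, sqrt, symbols

x = symbols('x')
A = Matrix([[2, I], [I, 1]])
assert A.charpoly(x).as_expr() == x**2 - 3*x + 3  # characteristic polynomial

lam1 = Rational(3, 2) + sqrt(3)*I/2
lam2 = Rational(3, 2) - sqrt(3)*I/2
v1 = Matrix([sqrt(3)/2 - I/2, 1])
v2 = Matrix([1, -sqrt(3)/2 + I/2])
P = Matrix.hstack(v1, v2)
D = Matrix([[lam1, 0], [0, lam2]])

assert simplify(A*P - P*D) == Matrix.zeros(2, 2)   # verifies AP = PD
assert simplify((v1.T * v2.conjugate())[0]) == -I  # <v1, v2> = -i, not 0
```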

2.

Given the matrix \(B = \begin{pmatrix} 0 & -1\\1 & 2\end{pmatrix}\), find the Jordan canonical form for \(B\) and the corresponding change of basis matrix \(P\). Then use this to derive a formula for \(B^n\).

Solution. We first compute the characteristic polynomial:

\[p_B(x) = \det(B - xI_2) = \det\begin{pmatrix} -x & -1\\1 & 2-x\end{pmatrix} = -x(2-x)+1 = x^2-2x+1 = (x-1)^2.\]

So \(\lambda = 1\) is the only eigenvalue, with multiplicity 2. We next find the eigenspace of 1:

\[B - I_2 = \begin{pmatrix} -1 & -1\\1 & 1\end{pmatrix}.\]

Row reducing we get \(\begin{pmatrix} 1 & 1\\0 & 0\end{pmatrix}\), so the eigenspace is \(\left\{t\cdot \begin{pmatrix} -1\\1\end{pmatrix}\ \middle|\ t\in \mathbb{R}\right\}\), which is one-dimensional, so \(B\) is not diagonalizable since we do not have two linearly independent eigenvectors. The Jordan canonical form is:

\[J = \begin{pmatrix} 1 & 1\\0 & 1\end{pmatrix}.\]

For the change of basis matrix, we have \(P = [v_1\ v_2]\) where \(v_2\) is not an eigenvector and \(v_1 = (B-I_2)v_2\). We can take \(v_2 = \begin{pmatrix} 1\\0\end{pmatrix}\). Then \(v_1 = (B-I_2)v_2 = \begin{pmatrix} -1 & -1\\1 & 1\end{pmatrix}\begin{pmatrix} 1\\0\end{pmatrix} = \begin{pmatrix} -1\\1\end{pmatrix}\). Therefore \(P = \begin{pmatrix} -1 & 1\\1 & 0\end{pmatrix}\). We have \(\det(P) = 0 - 1 = -1\), so \(P^{-1} = \begin{pmatrix} 0 & 1\\1 & 1\end{pmatrix}\) and \(P^{-1}BP = J\).

To find \(B^n\), since \(B = PJP^{-1}\), we have \(B^n = PJ^nP^{-1}\). Now \(J^2 = \begin{pmatrix} 1 & 2\\0 & 1\end{pmatrix}\), \(J^3 = \begin{pmatrix} 1 & 3\\0 & 1\end{pmatrix}\), \(J^4 = \begin{pmatrix} 1 & 4\\0 & 1\end{pmatrix}\), and continuing we see \(J^n = \begin{pmatrix} 1 & n\\0 & 1\end{pmatrix}\), which can be proven using mathematical induction, though this was not required.

Therefore:

\[B^n = PJ^nP^{-1} = \begin{pmatrix} -1 & 1\\1 & 0\end{pmatrix}\begin{pmatrix} 1 & n\\0 & 1\end{pmatrix}\begin{pmatrix} 0 & 1\\1 & 1\end{pmatrix}.\]

Computing \(PJ^n\) first:

\[\begin{pmatrix} -1 & 1\\1 & 0\end{pmatrix}\begin{pmatrix} 1 & n\\0 & 1\end{pmatrix} = \begin{pmatrix} -1 & -n+1\\1 & n\end{pmatrix},\]

and then multiplying by \(P^{-1}\):

\[B^n = \begin{pmatrix} -1 & -n+1\\1 & n\end{pmatrix}\begin{pmatrix} 0 & 1\\1 & 1\end{pmatrix} = \begin{pmatrix} 1-n & -n\\n & n+1\end{pmatrix}.\]
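A quick SymPy check of both the Jordan decomposition and the closed form for \(B^n\) (not required on the exam):

```python
from sympy import Matrix

B = Matrix([[0, -1], [1, 2]])
P = Matrix([[-1, 1], [1, 0]])
J = Matrix([[1, 1], [0, 1]])

assert P.inv() * B * P == J  # the Jordan form via this change of basis

# The closed form B^n = [[1-n, -n], [n, n+1]], checked for several n.
for n in range(8):
    assert B**n == Matrix([[1 - n, -n], [n, n + 1]])
```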

Proof Problem. 24 points.


State and prove the first part of the Diagonalizability Theorem for \(2\times 2\) matrices, i.e., the sufficient condition for diagonalizability. You must justify all steps in the proof.

Solution. Let \(A \in M_2(\mathbb{R})\). Suppose \(v_1, v_2 \in \mathbb{R}^2\) are linearly independent eigenvectors of \(A\) with corresponding eigenvalues \(\lambda_1, \lambda_2\) (not necessarily distinct), so that \(Av_1 = \lambda_1 v_1\) and \(Av_2 = \lambda_2 v_2\). Then \(A\) is diagonalizable. In particular, if \(P = [v_1\ v_2]\) is the matrix with columns \(v_1\) and \(v_2\), then

\[P^{-1}AP = \begin{pmatrix} \lambda_1 & 0\\0 & \lambda_2\end{pmatrix}.\]

Proof. On the one hand, we have \(AP = A[v_1\ v_2] = [Av_1\ Av_2] = [\lambda_1 v_1\ \lambda_2 v_2]\). On the other hand,

\[P \begin{pmatrix} \lambda_1 & 0\\0 & \lambda_2\end{pmatrix} = [v_1\ v_2] \begin{pmatrix} \lambda_1 & 0\\0 & \lambda_2\end{pmatrix} = [\lambda_1 v_1+0 v_2\ \ 0 v_1 + \lambda_2 v_2] = [\lambda_1 v_1\ \ \lambda_2 v_2],\]

showing that \(AP = P\begin{pmatrix} \lambda_1 & 0\\0 & \lambda_2\end{pmatrix}\). Since \(v_1, v_2\) are linearly independent, they form a basis for \(\mathbb{R}^2\), and therefore \(P\) is invertible. Thus, \(P^{-1}AP = \begin{pmatrix} \lambda_1 & 0\\0 & \lambda_2\end{pmatrix}\), as required. \(\square\)
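As an illustration of the theorem (with a hypothetical matrix, not one from the exam):

```python
from sympy import Matrix

A = Matrix([[4, 1], [2, 3]])  # eigenvalues 5 and 2
v1 = Matrix([1, 1])
v2 = Matrix([1, -2])
assert A*v1 == 5*v1 and A*v2 == 2*v2  # two linearly independent eigenvectors

P = Matrix.hstack(v1, v2)
assert P.inv() * A * P == Matrix([[5, 0], [0, 2]])  # P^{-1}AP is diagonal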

Bonus Problem. 10 points.


Prove the converse of the real spectral theorem for \(2\times 2\) matrices, namely, if \(A\) is orthogonally diagonalizable, then \(A\) is symmetric.

Solution. Suppose \(Q\) is an orthogonal matrix such that \(Q^{-1}AQ = D\), where \(D\) is a diagonal matrix. Let \(C_1, C_2\) denote the columns of \(Q\), so \(C_1, C_2\) have length one and are orthogonal. Therefore, \(C_1\cdot C_1 = 1 = C_2\cdot C_2\) and \(C_1\cdot C_2 = 0 = C_2\cdot C_1\). We also have \(Q^tQ = \begin{pmatrix} C_1\cdot C_1 & C_1\cdot C_2\\C_2\cdot C_1 & C_2\cdot C_2\end{pmatrix} = I_2\), showing that \(Q^{-1} = Q^t\). Thus,

\[A^t = (QDQ^{-1})^t = (QDQ^t)^t = Q^{tt}D^tQ^t = QDQ^t = QDQ^{-1} = A,\]

since \(D^t = D\). Therefore \(A = A^t\), showing that \(A\) is symmetric. \(\square\)
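A symbolic spot-check of the argument, using a generic rotation for \(Q\) (the orthogonal \(2\times 2\) matrices of determinant 1; reflections work the same way):

```python
from sympy import Matrix, cos, sin, simplify, symbols, zeros

theta, a, b = symbols('theta a b', real=True)

Q = Matrix([[cos(theta), -sin(theta)], [sin(theta), cos(theta)]])
D = Matrix([[a, 0], [0, b]])

assert simplify(Q.T * Q) == Matrix.eye(2)  # Q is orthogonal: Q^t Q = I
A = Q * D * Q.T                            # an orthogonally diagonalized A
assert simplify(A.T - A) == zeros(2, 2)    # and indeed A is symmetric
```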

Appendix

Verification that \(AP = PD\) for Calculation Problem 1

We have

\[A = \begin{pmatrix} 2 & i \\ i & 1 \end{pmatrix}, \quad P = \begin{pmatrix} \dfrac{\sqrt{3}}{2}-\dfrac{i}{2} & 1 \\[8pt] 1 & -\dfrac{\sqrt{3}}{2}+\dfrac{i}{2} \end{pmatrix}, \quad D = \begin{pmatrix} \dfrac{3+i\sqrt{3}}{2} & 0 \\[8pt] 0 & \dfrac{3-i\sqrt{3}}{2} \end{pmatrix}.\]

Step 1: Compute \(AP\).

\[AP = \begin{pmatrix} 2 & i \\ i & 1 \end{pmatrix} \begin{pmatrix} \dfrac{\sqrt{3}}{2}-\dfrac{i}{2} & 1 \\[8pt] 1 & -\dfrac{\sqrt{3}}{2}+\dfrac{i}{2} \end{pmatrix}.\]

Entry \((AP)_{11}\):

\[2\!\left(\tfrac{\sqrt{3}}{2}-\tfrac{i}{2}\right) + i\cdot 1 = \sqrt{3} - i + i = \sqrt{3}.\]

Entry \((AP)_{12}\):

\[2\cdot 1 + i\!\left(-\tfrac{\sqrt{3}}{2}+\tfrac{i}{2}\right) = 2 - \tfrac{i\sqrt{3}}{2} + \tfrac{i^2}{2} = 2 - \tfrac{1}{2} - \tfrac{i\sqrt{3}}{2} = \frac{3 - i\sqrt{3}}{2}.\]

Entry \((AP)_{21}\):

\[i\!\left(\tfrac{\sqrt{3}}{2}-\tfrac{i}{2}\right) + 1\cdot 1 = \tfrac{i\sqrt{3}}{2} - \tfrac{i^2}{2} + 1 = \tfrac{i\sqrt{3}}{2} + \tfrac{1}{2} + 1 = \frac{3 + i\sqrt{3}}{2}.\]

Entry \((AP)_{22}\):

\[i\cdot 1 + 1\cdot\!\left(-\tfrac{\sqrt{3}}{2}+\tfrac{i}{2}\right) = i - \tfrac{\sqrt{3}}{2} + \tfrac{i}{2} = -\frac{\sqrt{3}}{2} + \frac{3i}{2}.\]

Therefore:

\[AP = \begin{pmatrix} \sqrt{3} & \dfrac{3-i\sqrt{3}}{2} \\[10pt] \dfrac{3+i\sqrt{3}}{2} & -\dfrac{\sqrt{3}}{2}+\dfrac{3i}{2} \end{pmatrix}.\]

Step 2: Compute \(PD\).

\[PD = \begin{pmatrix} \dfrac{\sqrt{3}}{2}-\dfrac{i}{2} & 1 \\[8pt] 1 & -\dfrac{\sqrt{3}}{2}+\dfrac{i}{2} \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} = \begin{pmatrix} \lambda_1\!\left(\dfrac{\sqrt{3}}{2}-\dfrac{i}{2}\right) & \lambda_2 \\[10pt] \lambda_1 & \lambda_2\!\left(-\dfrac{\sqrt{3}}{2}+\dfrac{i}{2}\right) \end{pmatrix}.\]

Since \(\lambda_1 = \dfrac{3+i\sqrt{3}}{2}\) and \(\lambda_2 = \dfrac{3-i\sqrt{3}}{2}\):

Entry \((PD)_{11}\):

\[\frac{3+i\sqrt{3}}{2}\cdot\frac{\sqrt{3}-i}{2} = \frac{(3+i\sqrt{3})(\sqrt{3}-i)}{4}.\]

Expanding the numerator: \((3+i\sqrt{3})(\sqrt{3}-i) = 3\sqrt{3} - 3i + 3i - i^2\sqrt{3} = 3\sqrt{3} + \sqrt{3} = 4\sqrt{3}\). Hence \((PD)_{11} = \sqrt{3}\).

Entry \((PD)_{12}\): \((PD)_{12} = \lambda_2 = \dfrac{3-i\sqrt{3}}{2}\).

Entry \((PD)_{21}\): \((PD)_{21} = \lambda_1 = \dfrac{3+i\sqrt{3}}{2}\).

Entry \((PD)_{22}\):

\[\frac{3-i\sqrt{3}}{2}\cdot\frac{-\sqrt{3}+i}{2} = \frac{(3-i\sqrt{3})(-\sqrt{3}+i)}{4}.\]

Expanding the numerator: \((3-i\sqrt{3})(-\sqrt{3}+i) = -3\sqrt{3} + 3i + 3i - i^2\sqrt{3} = -3\sqrt{3} + 6i + \sqrt{3} = -2\sqrt{3} + 6i\). Hence \((PD)_{22} = -\dfrac{\sqrt{3}}{2}+\dfrac{3i}{2}\).

Therefore:

\[PD = \begin{pmatrix} \sqrt{3} & \dfrac{3-i\sqrt{3}}{2} \\[10pt] \dfrac{3+i\sqrt{3}}{2} & -\dfrac{\sqrt{3}}{2}+\dfrac{3i}{2} \end{pmatrix},\]

showing \(AP = PD\). \(\square\)