Talk:2021 AIME I Problems/Problem 12


Alternate Solution

Without loss of generality, have each frog jump one space to its right after its usual jump. Each frog then moves by $0$ or $2$ vertices per minute, so by parity, half of the vertices are always empty. We can therefore model the process on a hexagon instead. Let the frogs start at vertices $A_{0}$, $A_{2}$, and $A_{4}$; each minute, every frog independently either stays on its lilypad or jumps right by one spot, each with probability $\frac{1}{2}$. Observe that configurations are still equivalent to their reflections: reflecting the hexagon turns a move in which some set of frogs jumps into the equally likely move in which exactly the complementary set of frogs jumps.

Up to rotation and reflection, there are only four states on this hexagon. The first state, $x$, is where the frogs form an equilateral triangle. The second state, $y$, is where exactly two frogs are adjacent. The third state, $z$, is where all three frogs are adjacent. The fourth state, $H$, is the halting state, where two frogs have landed on the same lilypad. We can then consider how these states change.

$x$ moves to $x$ with $\frac{1}{4}$ probability, and to $y$ with $\frac{3}{4}$ probability.

$y$ moves to $y$ with $\frac{1}{2}$ probability, to $x$ and $z$ with $\frac{1}{8}$ probability each, and to $H$ with $\frac{1}{4}$ probability.

$z$ moves to $z$ and $y$ with $\frac{1}{4}$ probability each, and to $H$ with $\frac{1}{2}$ probability.

$H$ always moves to itself.
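These transition probabilities can be verified by brute force: from a representative of each state, enumerate all $2^{3}=8$ equally likely jump patterns. Here is a minimal Python sketch (the state labels and representative positions are just our own encoding of the hexagon; this is a sanity check, not part of the solution proper):

```python
from itertools import product
from collections import Counter

def classify(positions):
    """Return the state label for three frog positions on the hexagon."""
    s = sorted(p % 6 for p in positions)
    gaps = sorted([s[1] - s[0], s[2] - s[1], s[0] + 6 - s[2]])
    if 0 in gaps:
        return 'H'  # two frogs landed on the same pad: halt
    if gaps == [2, 2, 2]:
        return 'x'  # equilateral triangle
    if gaps == [1, 2, 3]:
        return 'y'  # exactly one adjacent pair
    return 'z'      # gaps [1, 1, 4]: three frogs in a row

# one representative configuration per non-halting state
representatives = {'x': (0, 2, 4), 'y': (0, 1, 3), 'z': (0, 1, 2)}
for state, pos in representatives.items():
    tally = Counter(
        classify(tuple(p + j for p, j in zip(pos, jumps)))
        for jumps in product((0, 1), repeat=3)  # each frog stays or hops right
    )
    print(state, '->', {k: str(v) + '/8' for k, v in sorted(tally.items())})
```

Running it prints, for example, `y -> {'H': '2/8', 'x': '1/8', 'y': '4/8', 'z': '1/8'}`, matching the probabilities above.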

Now, let $x$, $y$, $z$, and $H$ denote the probabilities of being in each state, as functions of time. We can collect them into a state vector $\overrightarrow{S}=\begin{bmatrix}x\\y\\z\\H\end{bmatrix}$. Then $\overrightarrow{S}\left(t+1\right)=A\overrightarrow{S}\left(t\right)$ for some matrix $A$, which can be read off from our previous examination of how the states move:

$A=\begin{bmatrix}\frac{1}{4} & \frac{1}{8} & 0 & 0\\\frac{3}{4} & \frac{1}{2} & \frac{1}{4} & 0\\0 & \frac{1}{8} & \frac{1}{4} & 0\\0 & \frac{1}{4} & \frac{1}{2} & 1\end{bmatrix}$

From the problem, we have $\overrightarrow{S}\left(0\right)=\begin{bmatrix}1\\0\\0\\0\end{bmatrix}$, since the frogs start in an equilateral triangle, and by induction, $\overrightarrow{S}\left(t\right)=A^{t}\overrightarrow{S}\left(0\right)$. Since $H\left(t\right)-H\left(t-1\right)$ is the probability that the process halts at exactly time $t$, the expected stopping time is $\sum_{t=1}^{\infty}t\left(H\left(t\right)-H\left(t-1\right)\right)$. To find $H\left(t\right)$, we need to raise $A$ to high powers easily, and $PDP^{-1}$ factorization is the tool for the job. We need the eigenvalues and eigenvectors of $A$. By definition, if $\overrightarrow{v}$ is a nonzero vector and an eigenvector of $A$, then there exists an eigenvalue $\lambda$ with $A\overrightarrow{v}=\lambda\overrightarrow{v}$. To rewrite the right-hand side as a matrix acting on $\overrightarrow{v}$, we insert a copy of the identity matrix:

$A\overrightarrow{v}=\left(\lambda I\right)\overrightarrow{v}$

Then we want the values of $\lambda$ for which $A-\lambda I$ has a nontrivial null space, which happens exactly when $\det\left(A-\lambda I\right)=0$. We know:

$A-\lambda I=\begin{bmatrix}\frac{1}{4}-\lambda & \frac{1}{8} & 0 & 0\\\frac{3}{4} & \frac{1}{2}-\lambda & \frac{1}{4} & 0\\0 & \frac{1}{8} & \frac{1}{4}-\lambda & 0\\0 & \frac{1}{4} & \frac{1}{2} & 1-\lambda\end{bmatrix}$

We need its determinant. One obvious way to proceed would be a cofactor expansion down the last column. However, it is easier to expand down the first column: one of the two resulting $3\times3$ minors is lower triangular, so its determinant is just the product of its diagonal entries, and the other minor has two zeroes in its last column, so it costs only one more $2\times2$ determinant. The remaining two entries in the first column are $0$, so their cofactors can be skipped entirely.

$\det\begin{bmatrix}\frac{1}{4}-\lambda & \frac{1}{8} & 0 & 0\\\frac{3}{4} & \frac{1}{2}-\lambda & \frac{1}{4} & 0\\0 & \frac{1}{8} & \frac{1}{4}-\lambda & 0\\0 & \frac{1}{4} & \frac{1}{2} & 1-\lambda\end{bmatrix}=\left(\frac{1}{4}-\lambda\right)\det\begin{bmatrix}\frac{1}{2}-\lambda & \frac{1}{4} & 0\\\frac{1}{8} & \frac{1}{4}-\lambda & 0\\\frac{1}{4} & \frac{1}{2} & 1-\lambda\end{bmatrix}-\frac{3}{4}\det\begin{bmatrix}\frac{1}{8} & 0 & 0\\\frac{1}{8} & \frac{1}{4}-\lambda & 0\\\frac{1}{4} & \frac{1}{2} & 1-\lambda\end{bmatrix}$

Let's do the easy part first:

$\det\begin{bmatrix}\frac{1}{8} & 0 & 0\\\frac{1}{8} & \frac{1}{4}-\lambda & 0\\\frac{1}{4} & \frac{1}{2} & 1-\lambda\end{bmatrix}=\frac{1}{8}\left(\frac{1}{4}-\lambda\right)\left(1-\lambda\right)$

For the other minor, we do cofactor expansion again, this time down the last column, so that we only have to deal with one $2\times2$ determinant.

$\det\begin{bmatrix}\frac{1}{2}-\lambda & \frac{1}{4} & 0\\\frac{1}{8} & \frac{1}{4}-\lambda & 0\\\frac{1}{4} & \frac{1}{2} & 1-\lambda\end{bmatrix}=\left(1-\lambda\right)\det\begin{bmatrix}\frac{1}{2}-\lambda & \frac{1}{4}\\\frac{1}{8} & \frac{1}{4}-\lambda\end{bmatrix}$

$=\left(1-\lambda\right)\left(\left(\frac{1}{2}-\lambda\right)\left(\frac{1}{4}-\lambda\right)-\frac{1}{8}\cdot\frac{1}{4}\right)$

$=\left(1-\lambda\right)\left(\lambda^{2}-\frac{3}{4}\lambda+\frac{3}{32}\right)$

Substituting both pieces back in, we get:

$\det\begin{bmatrix}\frac{1}{4}-\lambda & \frac{1}{8} & 0 & 0\\\frac{3}{4} & \frac{1}{2}-\lambda & \frac{1}{4} & 0\\0 & \frac{1}{8} & \frac{1}{4}-\lambda & 0\\0 & \frac{1}{4} & \frac{1}{2} & 1-\lambda\end{bmatrix}=\left(\frac{1}{4}-\lambda\right)\left(1-\lambda\right)\left(\lambda^{2}-\frac{3}{4}\lambda+\frac{3}{32}\right)-\frac{3}{4}\cdot\frac{1}{8}\left(\frac{1}{4}-\lambda\right)\left(1-\lambda\right)$

Factoring out $\left(\frac{1}{4}-\lambda\right)\left(1-\lambda\right)$ and simplifying, we get:

$\det\begin{bmatrix}\frac{1}{4}-\lambda & \frac{1}{8} & 0 & 0\\\frac{3}{4} & \frac{1}{2}-\lambda & \frac{1}{4} & 0\\0 & \frac{1}{8} & \frac{1}{4}-\lambda & 0\\0 & \frac{1}{4} & \frac{1}{2} & 1-\lambda\end{bmatrix}=\lambda\left(\frac{1}{4}-\lambda\right)\left(1-\lambda\right)\left(\lambda-\frac{3}{4}\right)$
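As a quick numerical sanity check (a numpy sketch, not part of the solution), the roots of this polynomial are indeed the eigenvalues of $A$:

```python
import numpy as np

A = np.array([[1/4, 1/8, 0,   0],
              [3/4, 1/2, 1/4, 0],
              [0,   1/8, 1/4, 0],
              [0,   1/4, 1/2, 1]])
print(np.sort(np.linalg.eigvals(A).real))  # approximately [0, 0.25, 0.75, 1]
```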

We have our eigenvalues; they are $0$, $\frac{1}{4}$, $\frac{3}{4}$, and $1$. Now, let's find an eigenvector for each eigenvalue. Remember the equation relating eigenvectors and eigenvalues, and that eigenvectors must not be zero:

$A\overrightarrow{v}=\lambda\overrightarrow{v}$

This time, we rearrange slightly differently.

$\left(A-\lambda I\right)\overrightarrow{v}=\overrightarrow{0}$

We then need to find a nonzero vector $\overrightarrow{v}$ for each value of $\lambda$; these vectors will be our eigenvectors. Any standard method, such as row reduction, works here. Up to a constant factor, the eigenvectors are:

$\lambda=0\to\overrightarrow{v}=\begin{bmatrix}1\\-2\\1\\0\end{bmatrix}$

$\lambda=\frac{1}{4}\to\overrightarrow{v}=\begin{bmatrix}1\\0\\-3\\2\end{bmatrix}$

$\lambda=\frac{3}{4}\to\overrightarrow{v}=\begin{bmatrix}1\\4\\1\\-6\end{bmatrix}$

$\lambda=1\to\overrightarrow{v}=\begin{bmatrix}0\\0\\0\\1\end{bmatrix}$

We can now construct $P$ and $D$. $D$ is the diagonal matrix whose entries are the eigenvalues, and the $n$th column of $P$ contains an eigenvector for the eigenvalue in the $n$th diagonal entry of $D$. Here are $P$ and $D$:

$P=\begin{bmatrix}1 & 1 & 1 & 0\\-2 & 0 & 4 & 0\\1 & -3 & 1 & 0\\0 & 2 & -6 & 1\end{bmatrix}$

$D=\begin{bmatrix}0 & 0 & 0 & 0\\0 & \frac{1}{4} & 0 & 0\\0 & 0 &\frac{3}{4} & 0\\0 & 0 & 0 & 1\end{bmatrix}$

Inverting with any method gives $P^{-1}=\begin{bmatrix}\frac{1}{2} & -\frac{1}{6} & \frac{1}{6} & 0\\\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}$.
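Both the inverse and the whole factorization can be double-checked numerically. A short numpy sketch (again just a sanity check, using the matrices found above):

```python
import numpy as np

A = np.array([[1/4, 1/8, 0,   0],
              [3/4, 1/2, 1/4, 0],
              [0,   1/8, 1/4, 0],
              [0,   1/4, 1/2, 1]])
P = np.array([[ 1,  1,  1, 0],
              [-2,  0,  4, 0],
              [ 1, -3,  1, 0],
              [ 0,  2, -6, 1]], dtype=float)
D = np.diag([0, 1/4, 3/4, 1])

P_inv = np.linalg.inv(P)              # should match the matrix given above
print(np.allclose(P @ D @ P_inv, A))  # True: A = P D P^{-1}
```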

Therefore, we have:

$A=\begin{bmatrix}1 & 1 & 1 & 0\\-2 & 0 & 4 & 0\\1 & -3 & 1 & 0\\0 & 2 & -6 & 1\end{bmatrix}\begin{bmatrix}0 & 0 & 0 & 0\\0 & \frac{1}{4} & 0 & 0\\0 & 0 &\frac{3}{4} & 0\\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\frac{1}{2} & -\frac{1}{6} & \frac{1}{6} & 0\\\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}$

This factorization makes finding $\overrightarrow{S}\left(t\right)$ really convenient, as now we can raise $A$ to high powers with the $PDP^{-1}$ decomposition:

$A^{t}=\begin{bmatrix}1 & 1 & 1 & 0\\-2 & 0 & 4 & 0\\1 & -3 & 1 & 0\\0 & 2 & -6 & 1\end{bmatrix}\begin{bmatrix}0 & 0 & 0 & 0\\0 & \frac{1}{4} & 0 & 0\\0 & 0 &\frac{3}{4} & 0\\0 & 0 & 0 & 1\end{bmatrix}^{t}\begin{bmatrix}\frac{1}{2} & -\frac{1}{6} & \frac{1}{6} & 0\\\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}=\begin{bmatrix}1 & 1 & 1 & 0\\-2 & 0 & 4 & 0\\1 & -3 & 1 & 0\\0 & 2 & -6 & 1\end{bmatrix}\begin{bmatrix}0^{t} & 0 & 0 & 0\\0 & \left(\frac{1}{4}\right)^{t} & 0 & 0\\0 & 0 &\left(\frac{3}{4}\right)^{t} & 0\\0 & 0 & 0 & 1^{t}\end{bmatrix}\begin{bmatrix}\frac{1}{2} & -\frac{1}{6} & \frac{1}{6} & 0\\\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}$

This relies on two facts: first, $PP^{-1}=P^{-1}P=I$, so in the product $\left(PDP^{-1}\right)^{t}$ every inner $P^{-1}P$ pair cancels; and second, raising a diagonal matrix to a power can be done by raising its diagonal entries to that power.

Note that in our case, we can simplify further. First of all, $1^{t}=1$:

$A^{t}=\begin{bmatrix}1 & 1 & 1 & 0\\-2 & 0 & 4 & 0\\1 & -3 & 1 & 0\\0 & 2 & -6 & 1\end{bmatrix}\begin{bmatrix}0^{t} & 0 & 0 & 0\\0 & \left(\frac{1}{4}\right)^{t} & 0 & 0\\0 & 0 &\left(\frac{3}{4}\right)^{t} & 0\\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\frac{1}{2} & -\frac{1}{6} & \frac{1}{6} & 0\\\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}$

Earlier, I said that the expected stopping time is $\sum_{t=1}^{\infty}t\left(H\left(t\right)-H\left(t-1\right)\right)$. This is true, but note that $H\left(0\right)=0$ and $H\left(1\right)=0$ (state $x$ cannot reach $H$ in a single step), so the $t=1$ term vanishes. As a result, we can instead evaluate $\sum_{t=2}^{\infty}t\left(H\left(t\right)-H\left(t-1\right)\right)$. This means we only ever pass positive values of $t$ into $H$, which permits another simplification: $0^{t}=0$ when $t>0$.

$A^{t}=\begin{bmatrix}1 & 1 & 1 & 0\\-2 & 0 & 4 & 0\\1 & -3 & 1 & 0\\0 & 2 & -6 & 1\end{bmatrix}\begin{bmatrix}0 & 0 & 0 & 0\\0 & \left(\frac{1}{4}\right)^{t} & 0 & 0\\0 & 0 &\left(\frac{3}{4}\right)^{t} & 0\\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\frac{1}{2} & -\frac{1}{6} & \frac{1}{6} & 0\\\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}$

The zero in the first entry of $D$ is a very powerful simplification. Multiplying a vector by a diagonal matrix simply multiplies each element of the vector by the corresponding diagonal entry, so the output of $D$ always has a zero as its first entry. This means that when we then multiply by $P$, the first column of $P$ is completely irrelevant: every entry in that column gets multiplied by $0$. Likewise, we can ignore the first row of $P^{-1}$, as its products land in the first entry of the intermediate vector, which $D$ then zeroes out.

$A^{t}=\begin{bmatrix}0 & 1 & 1 & 0\\0 & 0 & 4 & 0\\0 & -3 & 1 & 0\\0 & 2 & -6 & 1\end{bmatrix}\begin{bmatrix}0 & 0 & 0 & 0\\0 & \left(\frac{1}{4}\right)^{t} & 0 & 0\\0 & 0 &\left(\frac{3}{4}\right)^{t} & 0\\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}0 & 0 & 0 & 0\\\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}$

Finally, let's drop the irrelevant rows and columns entirely and work with smaller matrices (this is valid precisely because the dropped row and column only ever interact with the zeroed-out entry):

$A^{t}=\begin{bmatrix}1 & 1 & 0\\0 & 4 & 0\\-3 & 1 & 0\\2 & -6 & 1\end{bmatrix}\begin{bmatrix}\left(\frac{1}{4}\right)^{t} & 0 & 0\\0 &\left(\frac{3}{4}\right)^{t} & 0\\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}$

Since we know that $\overrightarrow{S}\left(0\right)=\begin{bmatrix}1\\0\\0\\0\end{bmatrix}$, we know that when $t>0$, $\overrightarrow{S}\left(t\right)=\begin{bmatrix}1 & 1 & 0\\0 & 4 & 0\\-3 & 1 & 0\\2 & -6 & 1\end{bmatrix}\begin{bmatrix}\left(\frac{1}{4}\right)^{t} & 0 & 0\\0 &\left(\frac{3}{4}\right)^{t} & 0\\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\frac{1}{4} & 0 & -\frac{1}{4} & 0\\\frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0\\1 & 1 & 1 & 1\end{bmatrix}\begin{bmatrix}1\\0\\0\\0\end{bmatrix}$. We carry out the multiplication:

$\overrightarrow{S}\left(t\right)=\begin{bmatrix}1 & 1 & 0\\0 & 4 & 0\\-3 & 1 & 0\\2 & -6 & 1\end{bmatrix}\begin{bmatrix}\left(\frac{1}{4}\right)^{t} & 0 & 0\\0 &\left(\frac{3}{4}\right)^{t} & 0\\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\frac{1}{4}\\\frac{1}{4}\\1\end{bmatrix}$

$\overrightarrow{S}\left(t\right)=\begin{bmatrix}1 & 1 & 0\\0 & 4 & 0\\-3 & 1 & 0\\2 & -6 & 1\end{bmatrix}\begin{bmatrix}\frac{1}{4}\cdot\left(\frac{1}{4}\right)^{t}\\\frac{1}{4}\cdot\left(\frac{3}{4}\right)^{t}\\1\end{bmatrix}$

We only care about $H$:

$H\left(t\right)=\frac{1}{2}\cdot\left(\frac{1}{4}\right)^{t}-\frac{3}{2}\cdot\left(\frac{3}{4}\right)^{t}+1$
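This closed form can be checked against brute-force powers of $A$. A short numpy sketch (a sanity check only, reading $H$ off as the fourth entry of $\overrightarrow{S}\left(t\right)$):

```python
import numpy as np

A = np.array([[1/4, 1/8, 0,   0],
              [3/4, 1/2, 1/4, 0],
              [0,   1/8, 1/4, 0],
              [0,   1/4, 1/2, 1]])
S0 = np.array([1.0, 0.0, 0.0, 0.0])

def H_closed(t):
    """Closed form for H(t), valid for t > 0."""
    return 0.5 * 0.25**t - 1.5 * 0.75**t + 1

for t in range(1, 8):
    H_brute = (np.linalg.matrix_power(A, t) @ S0)[3]  # H is the 4th entry
    assert abs(H_brute - H_closed(t)) < 1e-12
print("closed form matches brute-force powers of A")
```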

Now, substitute into the infinite series; we will use the geometric series formula $\sum_{t=1}^{\infty}r^{t}=\frac{r}{1-r}$ at the end.

Define $h\left(t\right)=H\left(t\right)-1=\frac{1}{2}\cdot\left(\frac{1}{4}\right)^{t}-\frac{3}{2}\cdot\left(\frac{3}{4}\right)^{t}$. Observe that $\sum_{t=2}^{\infty}t\left(H\left(t\right)-H\left(t-1\right)\right)=\sum_{t=2}^{\infty}t\left(h\left(t\right)-h\left(t-1\right)\right)$, since the constant cancels in each difference. Using $h$ instead of $H$ lets us rearrange terms freely: $h\left(t\right)$ decays geometrically, so the positive pieces and the negative pieces of the rearranged series each converge absolutely on their own.

$\sum_{t=2}^{\infty}t\left(h\left(t\right)-h\left(t-1\right)\right)=2h\left(2\right)-2h\left(1\right)+3h\left(3\right)-3h\left(2\right)+4h\left(4\right)-4h\left(3\right)+...$

$=-2h\left(1\right)-h\left(2\right)-h\left(3\right)-h\left(4\right)-...$

Note that $h\left(1\right)=\frac{1}{8}-\frac{9}{8}=-1$. We plug in the definition of $h$ and split $-2h\left(1\right)$ into two copies of $-h\left(1\right)$: one copy joins the sum $-\sum_{t=1}^{\infty}h\left(t\right)$, and the other contributes $-h\left(1\right)=1$ on its own:

$\sum_{t=2}^{\infty}t\left(h\left(t\right)-h\left(t-1\right)\right)=1+\frac{3}{2}\sum_{t=1}^{\infty}\left(\frac{3}{4}\right)^{t}-\frac{1}{2}\sum_{t=1}^{\infty}\left(\frac{1}{4}\right)^{t}$

$=1+\frac{3}{2}\cdot\frac{3}{4-3}-\frac{1}{2}\cdot\frac{1}{4-1}$

$=\frac{16}{3}$

The expected number of minutes is $\frac{16}{3}$, so $m+n=16+3$ and the answer is $\boxed{019}$.
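As a final numerical check (again just a sketch, reusing the closed form for $H$ derived above), truncating the series reproduces $\frac{16}{3}$:

```python
def H(t):
    """Closed form for H(t) derived above, valid for t > 0."""
    return 0.5 * 0.25**t - 1.5 * 0.75**t + 1

# partial sums of sum_{t>=2} t*(H(t) - H(t-1)) converge quickly
E = sum(t * (H(t) - H(t - 1)) for t in range(2, 200))
print(E, 16 / 3)  # both approximately 5.3333...
```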