Talk:2021 AIME I Problems/Problem 12
Alternate Solution
Without loss of generality, have each frog take one extra step to the right after its usual jump; shifting every frog by the same amount each minute does not affect whether two frogs meet. Each frog now moves $0$ or $2$ vertices to the right each minute, so by parity half of the vertices of the dodecagon are never visited, and we can instead consider the frogs on the hexagon formed by the remaining vertices. Let them start at every other vertex of this hexagon, say vertices $2$, $4$, and $6$. Each minute, each frog either stays at the same lilypad or jumps right by one spot, each with probability $\frac{1}{2}$. Observe that configurations are still equivalent to their reflections: if some set of frogs jumps in one configuration, then having exactly the complementary set of frogs jump in the reflected configuration produces (up to rotation) the reflection of the first result, and both moves are equally likely.
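A quick way to sanity-check this reduction (not part of the argument): simulate the original dodecagon process and the reduced stay-or-step hexagon process and compare the average stopping times. The sketch below assumes Python; the function names are mine.

```python
import random

def sim_dodecagon(trials=200_000):
    """Original process: frogs at A_4, A_8, A_12; each jumps to an adjacent vertex."""
    total = 0
    for _ in range(trials):
        frogs, t = [4, 8, 0], 0          # A_12 written as 0 (mod 12)
        while len(set(frogs)) == 3:      # run until two frogs share a vertex
            frogs = [(f + random.choice((-1, 1))) % 12 for f in frogs]
            t += 1
        total += t
    return total / trials

def sim_hexagon(trials=200_000):
    """Reduced process: frogs at every other vertex of a hexagon; each stays or steps right."""
    total = 0
    for _ in range(trials):
        frogs, t = [2, 4, 0], 0
        while len(set(frogs)) == 3:
            frogs = [(f + random.choice((0, 1))) % 6 for f in frogs]
            t += 1
        total += t
    return total / trials

print(sim_dodecagon(), sim_hexagon())    # both estimates land near 5.33
```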
There are only four states on this hexagon. The first state, $A$, is where the frogs are arranged in an equilateral triangle. The second state, $B$, is where exactly two frogs are adjacent. The third state, $C$, is where all three frogs sit on consecutive vertices. And the fourth state, $D$, is the halting state, where two frogs have landed on the same vertex. We can then consider how these states change.
$A$ moves to $A$ with probability $\frac{1}{4}$, and to $B$ with probability $\frac{3}{4}$.
$B$ moves to $B$ with probability $\frac{1}{2}$, to $A$ and $C$ with probability $\frac{1}{8}$ each, and to $D$ with probability $\frac{1}{4}$.
$C$ moves to $B$ and $C$ with probability $\frac{1}{4}$ each, and to $D$ with probability $\frac{1}{2}$.
$D$ always moves to itself.
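These probabilities can be double-checked by brute force: take a representative arrangement for each state, enumerate all $2^3 = 8$ equally likely stay/step combinations, and classify where each lands. A sketch assuming Python; the helpers `classify` and `transitions` are my own names.

```python
from itertools import product
from collections import Counter

def classify(frogs):
    """Label an arrangement of three frogs on the hexagon by its state."""
    if len(set(frogs)) < 3:
        return 'D'                                   # two frogs collided: halting state
    a, b, c = sorted(p % 6 for p in frogs)
    gaps = tuple(sorted((b - a, c - b, a + 6 - c)))  # cyclic gaps between the frogs
    return {(2, 2, 2): 'A', (1, 2, 3): 'B', (1, 1, 4): 'C'}[gaps]

def transitions(frogs):
    """Count where each of the 8 equally likely stay/step-right choices leads."""
    counts = Counter()
    for moves in product((0, 1), repeat=3):
        counts[classify([(f + m) % 6 for f, m in zip(frogs, moves)])] += 1
    return dict(sorted(counts.items()))

print('from A:', transitions([0, 2, 4]))   # {'A': 2, 'B': 6}            -> 1/4, 3/4
print('from B:', transitions([0, 1, 3]))   # {'A': 1, 'B': 4, 'C': 1, 'D': 2}
print('from C:', transitions([0, 1, 2]))   # {'B': 2, 'C': 2, 'D': 4}
```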
Now, let $a(t)$, $b(t)$, $c(t)$, and $d(t)$ be the probabilities of being in states $A$, $B$, $C$, and $D$ after $t$ minutes. We can collect them into a state vector $\vec{v}(t) = \begin{pmatrix} a(t) & b(t) & c(t) & d(t) \end{pmatrix}^T$. Observe that we can then write $\vec{v}(t+1) = M\vec{v}(t)$ for some matrix $M$. This matrix can be found from our previous examination of how the states move:
\[M = \begin{pmatrix} \frac{1}{4} & \frac{1}{8} & 0 & 0 \\ \frac{3}{4} & \frac{1}{2} & \frac{1}{4} & 0 \\ 0 & \frac{1}{8} & \frac{1}{4} & 0 \\ 0 & \frac{1}{4} & \frac{1}{2} & 1 \end{pmatrix}.\]
From the problem, we have $\vec{v}(0) = \begin{pmatrix} 1 & 0 & 0 & 0 \end{pmatrix}^T$, and by induction, we have $\vec{v}(t) = M^t\vec{v}(0)$. The expected stopping time is $\sum_{t=0}^{\infty} t\left(d(t) - d(t-1)\right)$, since $d(t) - d(t-1)$ is the probability that the frogs stop at exactly time $t$ (take $d(-1) = 0$). To find $d(t)$, we need to be able to easily raise $M$ to high powers. The $P\Lambda P^{-1}$ factorization (diagonalization) is the tool we need for this job. We need the eigenvalues and eigenvectors of $M$. By definition, if $\vec{x}$ is nonzero and $\vec{x}$ is an eigenvector of $M$, then there exists an eigenvalue $\lambda$ where $M\vec{x} = \lambda\vec{x}$. To subtract the scalar $\lambda$ from the matrix $M$, we insert a copy of the identity matrix:
\[M\vec{x} - \lambda I\vec{x} = \left(M - \lambda I\right)\vec{x} = \vec{0}.\]
Then we want values of $\lambda$ where $M - \lambda I$ has a null space with nonzero dimension. This happens exactly when $\det\left(M - \lambda I\right) = 0$. We know:
\[M - \lambda I = \begin{pmatrix} \frac{1}{4} - \lambda & \frac{1}{8} & 0 & 0 \\ \frac{3}{4} & \frac{1}{2} - \lambda & \frac{1}{4} & 0 \\ 0 & \frac{1}{8} & \frac{1}{4} - \lambda & 0 \\ 0 & \frac{1}{4} & \frac{1}{2} & 1 - \lambda \end{pmatrix}.\]
We need the determinant of that. One obvious way to proceed would be to do a cofactor expansion across the last column. However, it's easier to expand across the first column, because one cofactor is lower triangular, meaning that we can take the determinant of that cofactor by multiplying its diagonal entries, and the other cofactor has two zeroes in one of its columns, meaning we only have to evaluate one more determinant. Two of the entries in the first column are still 0, so we can skip evaluating the determinants of those two cofactors.
Let's do the easy part first: the lower triangular cofactor coming from the $\frac{3}{4}$ entry, whose determinant is just the product of its diagonal entries:
\[\det\begin{pmatrix} \frac{1}{8} & 0 & 0 \\ \frac{1}{8} & \frac{1}{4} - \lambda & 0 \\ \frac{1}{4} & \frac{1}{2} & 1 - \lambda \end{pmatrix} = \frac{1}{8}\left(\frac{1}{4} - \lambda\right)(1 - \lambda).\]
For the other cofactor, we simply do cofactor expansion again, this time on the last column, meaning that we only have to deal with one $2\times 2$ cofactor:
\[\det\begin{pmatrix} \frac{1}{2} - \lambda & \frac{1}{4} & 0 \\ \frac{1}{8} & \frac{1}{4} - \lambda & 0 \\ \frac{1}{4} & \frac{1}{2} & 1 - \lambda \end{pmatrix} = (1 - \lambda)\left[\left(\frac{1}{2} - \lambda\right)\left(\frac{1}{4} - \lambda\right) - \frac{1}{32}\right].\]
Substituting it all back in (remembering the alternating signs of the expansion along the first column), this is what we get:
\[\det\left(M - \lambda I\right) = \left(\frac{1}{4} - \lambda\right)(1 - \lambda)\left[\left(\frac{1}{2} - \lambda\right)\left(\frac{1}{4} - \lambda\right) - \frac{1}{32}\right] - \frac{3}{4}\cdot\frac{1}{8}\left(\frac{1}{4} - \lambda\right)(1 - \lambda).\]
After a bit of distribution and cancellation, we get:
\[\det\left(M - \lambda I\right) = \left(\frac{1}{4} - \lambda\right)(1 - \lambda)\left[\lambda^2 - \frac{3}{4}\lambda\right] = \lambda\left(\lambda - \frac{3}{4}\right)\left(\frac{1}{4} - \lambda\right)(1 - \lambda).\]
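As an independent check of this factorization (my own sketch, not part of the solution; NumPy assumed), the numerically computed eigenvalues of $M$ match its roots:

```python
import numpy as np

# Transition matrix from earlier, in state order (A, B, C, D).
M = np.array([
    [1/4, 1/8, 0,   0],
    [3/4, 1/2, 1/4, 0],
    [0,   1/8, 1/4, 0],
    [0,   1/4, 1/2, 1],
])

print(np.sort(np.linalg.eigvals(M)))   # approximately [0, 0.25, 0.75, 1]
```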
Setting this to zero, we have our eigenvalues; they are $0$, $\frac{1}{4}$, $\frac{3}{4}$, and $1$. Now, let's find an eigenvector for each eigenvalue. Remember the equation relating eigenvectors and eigenvalues, and that eigenvectors must not be zero:
\[M\vec{x} = \lambda\vec{x}, \qquad \vec{x} \neq \vec{0}.\]
This time, we rearrange slightly differently, treating $\lambda$ as a known constant and moving everything to one side:
\[\left(M - \lambda I\right)\vec{x} = \vec{0}.\]
We then need to find nonzero vectors $\vec{x}$ for each value of $\lambda$; these vectors will be our eigenvectors. You can use any method that you would normally use (row reduction works fine); it should still work. In this case, I got these eigenvectors, up to a constant factor:
\[\lambda = 0:\ \begin{pmatrix} 1 \\ -2 \\ 1 \\ 0 \end{pmatrix}, \qquad \lambda = \tfrac{1}{4}:\ \begin{pmatrix} 1 \\ 0 \\ -3 \\ 2 \end{pmatrix}, \qquad \lambda = \tfrac{3}{4}:\ \begin{pmatrix} 1 \\ 4 \\ 1 \\ -6 \end{pmatrix}, \qquad \lambda = 1:\ \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.\]
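Each of these can be verified directly against the definition $M\vec{x} = \lambda\vec{x}$; a small sketch (NumPy assumed, using the eigenvectors as scaled above):

```python
import numpy as np

M = np.array([
    [1/4, 1/8, 0,   0],
    [3/4, 1/2, 1/4, 0],
    [0,   1/8, 1/4, 0],
    [0,   1/4, 1/2, 1],
])

eigenpairs = [
    (0,   [1, -2,  1,  0]),
    (1/4, [1,  0, -3,  2]),
    (3/4, [1,  4,  1, -6]),
    (1,   [0,  0,  0,  1]),
]

for lam, x in eigenpairs:
    x = np.array(x, dtype=float)
    print(lam, np.allclose(M @ x, lam * x))   # True for all four pairs
```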
We can construct $\Lambda$ and $P$. $\Lambda$ will be a diagonal matrix whose entries are the eigenvalues. $P$ is a matrix where the $n$th column contains an eigenvector for the eigenvalue in the $n$th column of $\Lambda$. Here are $\Lambda$ and $P$:
\[\Lambda = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \frac{1}{4} & 0 & 0 \\ 0 & 0 & \frac{3}{4} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad P = \begin{pmatrix} 1 & 1 & 1 & 0 \\ -2 & 0 & 4 & 0 \\ 1 & -3 & 1 & 0 \\ 0 & 2 & -6 & 1 \end{pmatrix}.\]
Inverting $P$ with any method gives $P^{-1}$:
\[P^{-1} = \begin{pmatrix} \frac{1}{2} & -\frac{1}{6} & \frac{1}{6} & 0 \\ \frac{1}{4} & 0 & -\frac{1}{4} & 0 \\ \frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix}.\]
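A quick numerical confirmation (again a sketch, NumPy assumed) that the inverse above is correct and that these matrices really do diagonalize $M$:

```python
import numpy as np

M = np.array([
    [1/4, 1/8, 0,   0],
    [3/4, 1/2, 1/4, 0],
    [0,   1/8, 1/4, 0],
    [0,   1/4, 1/2, 1],
])
P = np.array([[1, 1, 1, 0], [-2, 0, 4, 0], [1, -3, 1, 0], [0, 2, -6, 1]], dtype=float)
L = np.diag([0, 1/4, 3/4, 1])

print(np.allclose(np.linalg.inv(P), [[1/2, -1/6,  1/6,  0],
                                     [1/4,  0,   -1/4,  0],
                                     [1/4,  1/6,  1/12, 0],
                                     [1,    1,    1,    1]]))   # True
print(np.allclose(P @ L @ np.linalg.inv(P), M))                 # True: M = P Lambda P^{-1}
```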
Therefore, we have:
\[M = P\Lambda P^{-1}.\]
This makes finding $d(t)$ really convenient, as now we can raise $M$ to high powers with the decomposition:
\[M^t = P\Lambda^t P^{-1}.\]
This relies on two facts. One, that $P^{-1}P = I$, so in the product $\left(P\Lambda P^{-1}\right)\left(P\Lambda P^{-1}\right)\cdots\left(P\Lambda P^{-1}\right)$ every inner $P^{-1}P$ pair cancels, leaving $P\Lambda^t P^{-1}$; and two, that raising a diagonal matrix to a power can be done by raising its diagonal entries to said power.
Note that in our case, we can simplify. First of all, $1^t = 1$, so the last diagonal entry of $\Lambda^t$ never changes.
Earlier, I said that the expected stopping time is $\sum_{t=0}^{\infty} t\left(d(t) - d(t-1)\right)$. This is true, but note that the $t = 0$ term carries a factor of $t = 0$, so that the first term is zero. As a result, we can instead evaluate $\sum_{t=1}^{\infty} t\left(d(t) - d(t-1)\right)$. This lets us only pass positive values of $t$ into $\Lambda^t$, which lets us make another simplification: $0^t = 0$ when $t > 0$.
This is a very powerful weapon on a diagonal matrix. Multiplying a vector by a diagonal matrix simply multiplies each element in the vector by the corresponding entry of the diagonal matrix. Since $\Lambda^t$ (for $t > 0$) has a zero as its first entry, its output vector will always have a zero as its first entry. This means that when multiplying by $P$, the first column is completely irrelevant, as every multiplication with an entry in the first column is a multiplication by $0$. Likewise, we can ignore the first row of $P^{-1}$, as every multiplication with an entry in the first row is added to the first element of the output, which will then get zeroed out by $\Lambda^t$.
Finally, let's just completely drop the irrelevant rows and columns, and create smaller matrices (this actually works):
\[P' = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 4 & 0 \\ -3 & 1 & 0 \\ 2 & -6 & 1 \end{pmatrix}, \qquad \left(\Lambda'\right)^t = \begin{pmatrix} \left(\frac{1}{4}\right)^t & 0 & 0 \\ 0 & \left(\frac{3}{4}\right)^t & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \left(P^{-1}\right)' = \begin{pmatrix} \frac{1}{4} & 0 & -\frac{1}{4} & 0 \\ \frac{1}{4} & \frac{1}{6} & \frac{1}{12} & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix}.\]
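The "this actually works" claim can be checked numerically: for every $t > 0$, the product of the three reduced matrices equals the full $M^t$, because the only term dropped is the one multiplied by $0^t$. A sketch (NumPy assumed; the names follow the primed notation above):

```python
import numpy as np

M = np.array([
    [1/4, 1/8, 0,   0],
    [3/4, 1/2, 1/4, 0],
    [0,   1/8, 1/4, 0],
    [0,   1/4, 1/2, 1],
])
P_red    = np.array([[1, 1, 0], [0, 4, 0], [-3, 1, 0], [2, -6, 1]], dtype=float)
Pinv_red = np.array([[1/4, 0, -1/4, 0], [1/4, 1/6, 1/12, 0], [1, 1, 1, 1]])

for t in range(1, 6):
    L_red = np.diag([(1/4)**t, (3/4)**t, 1.0])
    print(t, np.allclose(P_red @ L_red @ Pinv_red, np.linalg.matrix_power(M, t)))  # True
```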
Since we know that $\vec{v}(t) = M^t\vec{v}(0)$, we know that when $t > 0$, $\vec{v}(t) = P'\left(\Lambda'\right)^t\left(P^{-1}\right)'\vec{v}(0)$. We carry out the multiplication:
\[\vec{v}(t) = P'\left(\Lambda'\right)^t\begin{pmatrix} \frac{1}{4} \\ \frac{1}{4} \\ 1 \end{pmatrix} = P'\begin{pmatrix} \frac{1}{4}\left(\frac{1}{4}\right)^t \\ \frac{1}{4}\left(\frac{3}{4}\right)^t \\ 1 \end{pmatrix} = \begin{pmatrix} \frac{1}{4}\left(\frac{1}{4}\right)^t + \frac{1}{4}\left(\frac{3}{4}\right)^t \\ \left(\frac{3}{4}\right)^t \\ -\frac{3}{4}\left(\frac{1}{4}\right)^t + \frac{1}{4}\left(\frac{3}{4}\right)^t \\ 1 + \frac{1}{2}\left(\frac{1}{4}\right)^t - \frac{3}{2}\left(\frac{3}{4}\right)^t \end{pmatrix}.\]
We only care about $d(t)$:
\[d(t) = 1 + \frac{1}{2}\left(\frac{1}{4}\right)^t - \frac{3}{2}\left(\frac{3}{4}\right)^t \qquad (t > 0).\]
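Comparing this closed form against direct matrix powers for small $t$ (another sketch, NumPy assumed) confirms the algebra:

```python
import numpy as np

M = np.array([
    [1/4, 1/8, 0,   0],
    [3/4, 1/2, 1/4, 0],
    [0,   1/8, 1/4, 0],
    [0,   1/4, 1/2, 1],
])
v0 = np.array([1.0, 0.0, 0.0, 0.0])

for t in range(1, 9):
    exact  = (np.linalg.matrix_power(M, t) @ v0)[3]           # d(t) from the chain itself
    closed = 1 + (1/2) * (1/4)**t - (3/2) * (3/4)**t          # the formula above
    print(t, np.isclose(exact, closed))                       # True for every t
```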
Now, substitute into the infinite series. (Note that the formula also gives $d(0) = 1 + \frac{1}{2} - \frac{3}{2} = 0$, which matches the actual starting value, so we may use it in every term.)
\[\sum_{t=1}^{\infty} t\left(d(t) - d(t-1)\right) = \sum_{t=1}^{\infty} t\left(\frac{1}{2}\left[\left(\frac{1}{4}\right)^t - \left(\frac{1}{4}\right)^{t-1}\right] - \frac{3}{2}\left[\left(\frac{3}{4}\right)^t - \left(\frac{3}{4}\right)^{t-1}\right]\right).\]
Define $S(x) = \sum_{t=1}^{\infty} t\left(x^t - x^{t-1}\right)$. Observe that the series above equals $\frac{1}{2}S\!\left(\frac{1}{4}\right) - \frac{3}{2}S\!\left(\frac{3}{4}\right)$; working with $S$ on each ratio separately instead of the combined series allows us to commute stuff, since for $0 < x < 1$ the positive pieces and negative pieces both converge absolutely on their own.
Note that $\sum_{t=1}^{\infty} t x^{t-1} = \frac{1}{(1-x)^2}$ for $|x| < 1$. We plug in the definition for $S$, and "split" off the two copies of this sum:
\[S(x) = \sum_{t=1}^{\infty} t x^t - \sum_{t=1}^{\infty} t x^{t-1} = x\cdot\frac{1}{(1-x)^2} - \frac{1}{(1-x)^2} = \frac{x - 1}{(1-x)^2} = -\frac{1}{1-x}.\]
Therefore, the expected stopping time is
\[\frac{1}{2}S\!\left(\frac{1}{4}\right) - \frac{3}{2}S\!\left(\frac{3}{4}\right) = \frac{1}{2}\left(-\frac{4}{3}\right) - \frac{3}{2}\left(-4\right) = -\frac{2}{3} + 6 = \frac{16}{3},\]
and the answer is $16 + 3 = \boxed{019}$.
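As a final numerical cross-check (my own sketch; plain Python), the truncated series and the closed form for $S$ give the same value, $\frac{16}{3} \approx 5.333$:

```python
def d(t):
    # Closed form for d(t) derived above (also valid at t = 0).
    return 1 + 0.5 * 0.25**t - 1.5 * 0.75**t

series = sum(t * (d(t) - d(t - 1)) for t in range(1, 500))   # truncated expected stopping time

S = lambda x: -1 / (1 - x)                                    # closed form S(x) = -1/(1-x)
closed = 0.5 * S(0.25) - 1.5 * S(0.75)

print(series, closed)   # both are 5.333... = 16/3, so m + n = 16 + 3 = 19
```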