2021 AIME I Problems/Problem 12
Problem
Let $A_1A_2A_3\ldots A_{12}$ be a dodecagon ($12$-gon). Three frogs initially sit at $A_4,A_8,$ and $A_{12}$. At the end of each minute, simultaneously, each of the three frogs jumps to one of the two vertices adjacent to its current position, chosen randomly and independently with both choices being equally likely. All three frogs stop jumping as soon as two frogs arrive at the same vertex at the same time. The expected number of minutes until the frogs stop jumping is $\frac{m}{n}$, where $m$ and $n$ are relatively prime positive integers. Find $m+n$.
Solution
Define the distance between two frogs as the number of sides between them that do not contain the third frog.
Let $E(a,b,c)$ denote the expected number of minutes until the frogs stop jumping, where the distances between the frogs are $a,b,$ and $c$ (in either clockwise or counterclockwise order). Without loss of generality, assume that $a\leq b\leq c.$
We wish to find $E(4,4,4).$ Note that:
- At any moment before the frogs stop jumping, the only possibilities for $(a,b,c)$ are $(4,4,4),(2,4,6),$ and $(2,2,8).$
- $E(a,b,c)$ does not depend on the actual positions of the frogs, but only on the distances between them.
- At the end of each minute, each frog has $2$ possible jumps. So, there are $2^3=8$ equally likely outcomes in total.
We have the following system of equations:
\begin{align*}
E(4,4,4)&=1+\frac{2}{8}E(4,4,4)+\frac{6}{8}E(2,4,6), \\
E(2,4,6)&=1+\frac{4}{8}E(2,4,6)+\frac{1}{8}E(4,4,4)+\frac{1}{8}E(2,2,8), \\
E(2,2,8)&=1+\frac{2}{8}E(2,4,6)+\frac{2}{8}E(2,2,8).
\end{align*}
Rearranging and simplifying each equation, we get
\begin{align*}
E(4,4,4)&=\frac{4}{3}+E(2,4,6), &(1) \\
E(2,4,6)&=2+\frac{1}{4}E(4,4,4)+\frac{1}{4}E(2,2,8), &(2) \\
E(2,2,8)&=\frac{4}{3}+\frac{1}{3}E(2,4,6). &(3)
\end{align*}
Substituting $(1)$ and $(3)$ into $(2),$ we obtain
\[E(2,4,6)=2+\frac{1}{4}\left(\frac{4}{3}+E(2,4,6)\right)+\frac{1}{4}\left(\frac{4}{3}+\frac{1}{3}E(2,4,6)\right),\]
from which $E(2,4,6)=4.$ Substituting this into $(1)$ gives $E(4,4,4)=\frac{16}{3}.$
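As a quick numerical cross-check (not part of the original solution), the rearranged system can be solved directly with numpy; the variable names `E444`, `E246`, `E228` are mine:

```python
import numpy as np

# Coefficient form of the three expectation equations above:
#   (3/4)E(4,4,4) - (3/4)E(2,4,6)                  = 1
#  -(1/8)E(4,4,4) + (1/2)E(2,4,6) - (1/8)E(2,2,8)  = 1
#                 - (1/4)E(2,4,6) + (3/4)E(2,2,8)  = 1
A = np.array([[ 3/4, -3/4,  0   ],
              [-1/8,  1/2, -1/8 ],
              [ 0,   -1/4,  3/4 ]])
b = np.ones(3)
E444, E246, E228 = np.linalg.solve(A, b)
print(E444)  # 5.333... = 16/3
```

This confirms $E(2,4,6)=4$, $E(2,2,8)=\frac83$, and $E(4,4,4)=\frac{16}{3}$.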
Therefore, the answer is $m+n=16+3=\boxed{019}.$
~Ross Gao (Fundamental Logic)
~MRENTHUSIASM (Reconstruction)
Alternate solution
Without loss of generality, let's have each frog jump one space to its right after its usual jump. Then observe that half of the vertices will always be blank for parity reasons, so we can instead consider the frogs on a hexagon. Let them start at vertices $2$, $4$, and $6$. Each minute, each frog either stays at the same lilypad or jumps right by one spot, with probability $\frac12$ each. Observe that configurations are still equivalent to their reflections: if some frogs jump in one reflection, we have all frogs jump in the other reflection except for the ones which jumped in the original reflection.
There are only four states on this hexagon. The first state, $S_1$, is where the frogs are arranged in an equilateral triangle. The second state, $S_2$, is where exactly two frogs are adjacent. The third state, $S_3$, is where all three frogs are adjacent. And the fourth state, $S_4$, is the halting state. We can then consider how these states change.
$S_1$ moves to $S_1$ with probability $\frac14$, and to $S_2$ with probability $\frac34$.
$S_2$ moves to $S_2$ with probability $\frac12$, to $S_1$ and $S_3$ with probability $\frac18$ each, and to $S_4$ with probability $\frac14$.
$S_3$ moves to $S_2$ and $S_3$ with probability $\frac14$ each, and to $S_4$ with probability $\frac12$.
$S_4$ always moves to itself.
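These transition laws can be verified by brute force (my own check, not part of the solution): enumerate all $2^3=8$ stay/jump combinations from a representative configuration of each state on the hexagon. The helper names below are hypothetical.

```python
from itertools import product

def gaps(frogs):
    """Sorted cyclic gaps between three distinct frogs on a hexagon."""
    p = sorted(f % 6 for f in frogs)
    return tuple(sorted((p[1] - p[0], p[2] - p[1], p[0] + 6 - p[2])))

def state(frogs):
    """Classify a configuration; S4 means two frogs met (halt)."""
    if len(set(f % 6 for f in frogs)) < 3:
        return 'S4'
    return {(2, 2, 2): 'S1', (1, 2, 3): 'S2', (1, 1, 4): 'S3'}[gaps(frogs)]

def transition_counts(frogs):
    """Count next states over all 8 equally likely stay(0)/jump(+1) triples."""
    counts = {}
    for moves in product((0, 1), repeat=3):
        nxt = state(tuple(f + m for f, m in zip(frogs, moves)))
        counts[nxt] = counts.get(nxt, 0) + 1
    return counts

print(sorted(transition_counts((0, 2, 4)).items()))  # S1: [('S1', 2), ('S2', 6)]
print(sorted(transition_counts((0, 1, 3)).items()))  # S2
print(sorted(transition_counts((0, 1, 2)).items()))  # S3
```

The counts out of $8$ match the probabilities listed above.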
Now, let $P_1$, $P_2$, $P_3$, and $P_4$ be the probabilities of being in states $S_1$ through $S_4$, as functions of time. We can create a state vector $\vec{v}(t)=\begin{pmatrix}P_1(t)\\P_2(t)\\P_3(t)\\P_4(t)\end{pmatrix}$. Observe that we can then write $\vec{v}(t+1)=M\vec{v}(t)$ for some matrix $M$. This matrix can be found from our previous examination of how the states move:
\[M=\begin{pmatrix}\frac14&\frac18&0&0\\\frac34&\frac12&\frac14&0\\0&\frac18&\frac14&0\\0&\frac14&\frac12&1\end{pmatrix}\]
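As a sanity check (mine, not in the original), $M$ can be written out numerically; every column should sum to $1$, and one minute from $S_1$ should give exactly the first column:

```python
import numpy as np

# Column j of M holds the outgoing probabilities of state S_{j+1}.
M = np.array([[1/4, 1/8, 0,   0],
              [3/4, 1/2, 1/4, 0],
              [0,   1/8, 1/4, 0],
              [0,   1/4, 1/2, 1]])

v0 = np.array([1.0, 0, 0, 0])    # start in S1 (equilateral triangle)
print(M.sum(axis=0))             # every column sums to 1
print(M @ v0)                    # one minute in: [0.25, 0.75, 0, 0]
```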
From the problem, we have $\vec{v}(0)=\begin{pmatrix}1\\0\\0\\0\end{pmatrix}$, and by induction, we have $\vec{v}(t)=M^t\vec{v}(0)$. The expected stopping time is $E=\sum_{t=0}^{\infty}t\left(P_4(t)-P_4(t-1)\right)$, since $P_4(t)-P_4(t-1)$ is the probability that the frogs stop at minute $t$. To find $E$, we need to be able to easily raise $M$ to high powers. $PDP^{-1}$ factorization is the tool we need for this job. We need the eigenvalues and eigenvectors of $M$. By definition, if $\vec{v}$ is nonzero and $\vec{v}$ is an eigenvector of $M$, then there exists an eigenvalue $\lambda$ where $M\vec{v}=\lambda\vec{v}$. To subtract matrices from matrices, we insert a copy of the identity matrix:
\[M\vec{v}=\lambda I\vec{v}\implies(M-\lambda I)\vec{v}=\vec{0}\]
Then we want values of $\lambda$ where $M-\lambda I$ has a null space with nonzero dimension. This happens when $\det(M-\lambda I)=0$. We know:
\[M-\lambda I=\begin{pmatrix}\frac14-\lambda&\frac18&0&0\\\frac34&\frac12-\lambda&\frac14&0\\0&\frac18&\frac14-\lambda&0\\0&\frac14&\frac12&1-\lambda\end{pmatrix}\]
We need the determinant of that. One obvious way to proceed would be to do a cofactor expansion across the last column. However, it's easier to expand across the first column, because one cofactor is lower triangular, meaning that we can take the determinant of that cofactor by multiplying its diagonal entries, and the other cofactor has two zeroes in one of its columns, meaning we only have to evaluate one more determinant. Two of the entries in the first column are still 0, so we can skip evaluating the determinants of those two cofactors.
Let's do the easy part first, the lower-triangular cofactor:
\[\det\begin{pmatrix}\frac18&0&0\\\frac18&\frac14-\lambda&0\\\frac14&\frac12&1-\lambda\end{pmatrix}=\frac18\left(\frac14-\lambda\right)(1-\lambda)\]
For the other cofactor, we simply do cofactor expansion again, this time on the last column, meaning that we only have to deal with one $2\times2$ cofactor:
\[\det\begin{pmatrix}\frac12-\lambda&\frac14&0\\\frac18&\frac14-\lambda&0\\\frac14&\frac12&1-\lambda\end{pmatrix}=(1-\lambda)\det\begin{pmatrix}\frac12-\lambda&\frac14\\\frac18&\frac14-\lambda\end{pmatrix}=(1-\lambda)\left(\left(\frac12-\lambda\right)\left(\frac14-\lambda\right)-\frac{1}{32}\right)\]
Substituting it all back in, this is what we get:
\[\det(M-\lambda I)=\left(\frac14-\lambda\right)(1-\lambda)\left(\left(\frac12-\lambda\right)\left(\frac14-\lambda\right)-\frac{1}{32}\right)-\frac34\cdot\frac18\left(\frac14-\lambda\right)(1-\lambda)\]
After a bit of distribution and cancellation, we get:
\[\det(M-\lambda I)=\left(\frac14-\lambda\right)(1-\lambda)\left(\lambda^2-\frac34\lambda\right)=\lambda\left(\lambda-\frac34\right)\left(\frac14-\lambda\right)(1-\lambda)\]
We have our eigenvalues; they are $0$, $\frac14$, $\frac34$, and $1$. Now, let's find an eigenvector for each eigenvalue. Remember the equation for eigenvectors and eigenvalues, and that they must not be zero:
\[M\vec{v}=\lambda\vec{v},\qquad\vec{v}\neq\vec{0}\]
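A numerical cross-check of these roots (my own, not in the original solution):

```python
import numpy as np

M = np.array([[1/4, 1/8, 0,   0],
              [3/4, 1/2, 1/4, 0],
              [0,   1/8, 1/4, 0],
              [0,   1/4, 1/2, 1]])
eigs = np.sort(np.linalg.eigvals(M).real)
print(eigs)  # approximately [0, 0.25, 0.75, 1]
```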
This time, we rearrange slightly differently:
\[M\vec{v}-\lambda I\vec{v}=(M-\lambda I)\vec{v}=\vec{0}\]
We then need to find nonzero vectors $\vec{v}$ in the null space of $M-\lambda I$ for each value of $\lambda$; these vectors will be our eigenvectors. You can use any method that you would normally use; it should still work. In this case, I got these eigenvectors, up to a constant factor (subscripts denote the corresponding eigenvalue):
\[\vec{v}_{0}=\begin{pmatrix}1\\-2\\1\\0\end{pmatrix},\qquad\vec{v}_{1/4}=\begin{pmatrix}1\\0\\-3\\2\end{pmatrix},\qquad\vec{v}_{3/4}=\begin{pmatrix}1\\4\\1\\-6\end{pmatrix},\qquad\vec{v}_{1}=\begin{pmatrix}0\\0\\0\\1\end{pmatrix}\]
We can construct $P$ and $D$. $D$ will be a diagonal matrix whose entries are the eigenvalues. $P$ is a matrix where the $n$th column contains an eigenvector for the eigenvalue in the $n$th column of $D$. Here are $P$ and $D$:
\[P=\begin{pmatrix}1&1&1&0\\-2&0&4&0\\1&-3&1&0\\0&2&-6&1\end{pmatrix},\qquad D=\begin{pmatrix}0&0&0&0\\0&\frac14&0&0\\0&0&\frac34&0\\0&0&0&1\end{pmatrix}\]
Inverting $P$ with any method gives
\[P^{-1}=\begin{pmatrix}\frac12&-\frac16&\frac16&0\\\frac14&0&-\frac14&0\\\frac14&\frac16&\frac1{12}&0\\1&1&1&1\end{pmatrix}\]
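The whole decomposition can be verified numerically (again a check of mine, not part of the solution):

```python
import numpy as np

M = np.array([[1/4, 1/8, 0,   0],
              [3/4, 1/2, 1/4, 0],
              [0,   1/8, 1/4, 0],
              [0,   1/4, 1/2, 1]])
P = np.array([[ 1,  1,  1, 0],
              [-2,  0,  4, 0],
              [ 1, -3,  1, 0],
              [ 0,  2, -6, 1]], dtype=float)
D = np.diag([0, 1/4, 3/4, 1])
Pinv = np.array([[1/2, -1/6, 1/6,  0],
                 [1/4,  0,  -1/4,  0],
                 [1/4,  1/6, 1/12, 0],
                 [1,    1,   1,    1]])
print(np.allclose(P @ Pinv, np.eye(4)))  # True
print(np.allclose(P @ D @ Pinv, M))      # True
```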
Therefore, we have:
\[M=PDP^{-1}\]
This makes finding $M^t$ really convenient, as now we can raise $M$ to high powers with the decomposition:
\[M^t=\left(PDP^{-1}\right)^t=PD^tP^{-1}\]
This relies on two facts. One, that $P^{-1}P=I$, so the inner factors telescope away; and two, that raising a diagonal matrix to a power can be done by raising its entries to said power.
Note that in our case, we can simplify. First of all, since $\vec{v}(0)=\begin{pmatrix}1\\0\\0\\0\end{pmatrix}$, the product $P^{-1}\vec{v}(0)$ is simply the first column of $P^{-1}$.
Earlier, I said that the expected stopping time is $E=\sum_{t=0}^{\infty}t\left(P_4(t)-P_4(t-1)\right)$. This is true, but note that the $t=0$ summand carries a factor of $t=0$, so that the first term is zero. As a result, we can instead evaluate $E=\sum_{t=1}^{\infty}t\left(P_4(t)-P_4(t-1)\right)$. This lets us only pass positive values of $t$ into $M^t$, which lets us make another simplification: $0^t=0$ when $t\geq1$.
This is a very powerful weapon on a diagonal matrix. Multiplying vectors by diagonal matrices simply multiplies each element in the vector by the corresponding entry in the diagonal matrix. Since $D^t$ has a zero as its first entry (for $t\geq1$), its output vector will always have a zero as its first entry. This means that when multiplying by $P$, the first column is completely irrelevant, as every multiplication with an entry in the first column is a multiplication by $0$. Likewise, we can ignore the first row in $P^{-1}$, as every multiplication with an entry in the first row is added to the first element of the output, which will then get zeroed out by $D^t$.
Finally, let's just completely drop the irrelevant rows and columns, and create smaller matrices (this actually works):
\[\vec{v}(t)=\begin{pmatrix}1&1&0\\0&4&0\\-3&1&0\\2&-6&1\end{pmatrix}\begin{pmatrix}\left(\frac14\right)^t&0&0\\0&\left(\frac34\right)^t&0\\0&0&1\end{pmatrix}\begin{pmatrix}\frac14&0&-\frac14&0\\\frac14&\frac16&\frac1{12}&0\\1&1&1&1\end{pmatrix}\vec{v}(0)\qquad(t\geq1)\]
Since we know that $\vec{v}(0)=\begin{pmatrix}1\\0\\0\\0\end{pmatrix}$, we know that when $t\geq1$, multiplying by $\vec{v}(0)$ simply extracts the first column of the trimmed $P^{-1}$. We carry out the multiplication:
\[\vec{v}(t)=\begin{pmatrix}1&1&0\\0&4&0\\-3&1&0\\2&-6&1\end{pmatrix}\begin{pmatrix}\frac14\left(\frac14\right)^t\\\frac14\left(\frac34\right)^t\\1\end{pmatrix}=\begin{pmatrix}\left(\frac14\right)^{t+1}+\frac14\left(\frac34\right)^t\\\left(\frac34\right)^t\\-3\left(\frac14\right)^{t+1}+\frac14\left(\frac34\right)^t\\1+\frac12\left(\frac14\right)^t-\frac32\left(\frac34\right)^t\end{pmatrix}\]
We only care about $P_4(t)$:
\[P_4(t)=1+\frac12\left(\frac14\right)^t-\frac32\left(\frac34\right)^t\qquad(t\geq1)\]
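This closed form can be checked against direct matrix powers (my own check, not part of the solution):

```python
import numpy as np

M = np.array([[1/4, 1/8, 0,   0],
              [3/4, 1/2, 1/4, 0],
              [0,   1/8, 1/4, 0],
              [0,   1/4, 1/2, 1]])
v0 = np.array([1.0, 0, 0, 0])

def P4_closed(t):
    # P4(t) = 1 + (1/2)(1/4)^t - (3/2)(3/4)^t, valid for t >= 1
    return 1 + 0.5 * 0.25**t - 1.5 * 0.75**t

for t in range(1, 10):
    direct = (np.linalg.matrix_power(M, t) @ v0)[3]
    assert abs(direct - P4_closed(t)) < 1e-12
print("closed form matches matrix powers")
```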
Now, substitute into the infinite series:
\[E=\sum_{t=1}^{\infty}t\left(P_4(t)-P_4(t-1)\right)\]
Define $f(t)=P_4(t)-1=\frac12\left(\frac14\right)^t-\frac32\left(\frac34\right)^t$. Observe that $\sum_{t=1}^{\infty}t\left(f(t)-f(t-1)\right)=\sum_{t=1}^{\infty}t\left(P_4(t)-P_4(t-1)\right)$, except using $f$ instead of $P_4$ allows us to commute stuff, since the positive pieces and negative pieces both converge absolutely on their own.
Note that $E=\sum_{t=1}^{\infty}t\left(f(t)-f(t-1)\right)$. We plug in the definition for $f$, and "split" the two copies of $f$:
\begin{align*}
E&=\sum_{t=1}^{\infty}tf(t)-\sum_{t=1}^{\infty}tf(t-1)\\
&=\sum_{t=1}^{\infty}tf(t)-\sum_{t=0}^{\infty}(t+1)f(t)\\
&=-f(0)-\sum_{t=1}^{\infty}f(t)\\
&=-\sum_{t=0}^{\infty}f(t)\\
&=\sum_{t=0}^{\infty}\left(\frac32\left(\frac34\right)^t-\frac12\left(\frac14\right)^t\right)\\
&=\frac32\cdot\frac{1}{1-\frac34}-\frac12\cdot\frac{1}{1-\frac14}\\
&=6-\frac23=\frac{16}{3}
\end{align*}
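The value of the final geometric series can be confirmed numerically (again my check, not the author's):

```python
# Partial sums of E = sum_{t>=0} (3/2)(3/4)^t - (1/2)(1/4)^t converge to 16/3;
# by t = 200 the geometric tail is far below floating-point precision.
E = sum(1.5 * 0.75**t - 0.5 * 0.25**t for t in range(200))
print(E)  # 5.333... = 16/3
```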
Therefore, the expected number of minutes is $\frac{16}{3}$, and the answer is $16+3=\boxed{019}$.
See Also
2021 AIME I (Problems • Answer Key • Resources)
Preceded by Problem 11
Followed by Problem 13
1 • 2 • 3 • 4 • 5 • 6 • 7 • 8 • 9 • 10 • 11 • 12 • 13 • 14 • 15
All AIME Problems and Solutions
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.