China Post 3: The Euclidean algorithm

by t0rajir0u, Jul 28, 2008, 1:45 PM

Part of this post is more "textbook" than the flavor of my latest entries, especially the part dealing with continued fractions. This is largely because certain facts in the basic theory are not, I think, as well-known as they deserve to be. Part of this post also rambles a lot; I think this is the longest one to date.

The basic idea of the extended Euclidean algorithm on integers is extraordinarily simple, and yet it is one of the fundamental properties of the integers that allow for unique prime factorization; its existence in an arbitrary ring of integers is still the easiest way to prove unique prime factorization there. As we will see, it has even more interesting consequences (which we will illustrate by reformulating its statement five times!), but first we shall work through a large example. (In the rest of this post the Euclidean algorithm always refers to the relatively prime case, since the case that $ a, b$ are not relatively prime is in theory handled by dividing by $ (a, b)$. In practice, the Euclidean algorithm itself is used to compute $ (a, b)$, but we ignore this issue.)

Suppose we wanted to compute solutions to $ 15284x - 4889y = 1$. The Euclidean algorithm dictates that we repeatedly divide, obtain a remainder, and divide by that remainder, giving

$ 15284 = 3 \cdot 4889 + 617$
$ 4889 = 7 \cdot 617 + 570$
$ 617 = 1 \cdot 570 + 47$
$ 570 = 12 \cdot 47 + 6$
$ 47 = 7 \cdot 6 + 5$
$ 6 = 1 \cdot 5 + 1$
$ 5 = 5 \cdot 1 + 0$.

The next step of the extended Euclidean algorithm is to substitute expressions for each of our remainders as follows:

$ \begin{eqnarray*} 1 &=& 6 - 1 \cdot 5 \\ &=& 6 - 1 \cdot (47 - 7 \cdot 6) \\ &=& 8 \cdot 6 - 1 \cdot 47 \\ &=& 8 \cdot (570 - 12 \cdot 47) - 1 \cdot 47 \\ &=& 8 \cdot 570 - 97 \cdot 47 \\ &=& 8 \cdot 570 - 97 \cdot (617 - 1 \cdot 570) \\ &=& 105 \cdot 570 - 97 \cdot 617 \\ &=& 105 \cdot (4889 - 7 \cdot 617) - 97 \cdot 617 \\ &=& 105 \cdot 4889 - 832 \cdot 617 \\ &=& 105 \cdot 4889 - 832 (15284 - 3 \cdot 4889) \\ &=& 2601 \cdot 4889 - 832 \cdot 15284 \\ \end{eqnarray*}$.
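
To make the back-substitution mechanical, here is a minimal Python sketch of the extended Euclidean algorithm (the name `extended_gcd` and the return convention $(g, x, y)$ with $ax + by = g$ are just illustrative choices); running it on our example reproduces the coefficients found above.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    # a = q*b + r, so gcd(a, b) = gcd(b, r)
    q, r = divmod(a, b)
    g, x, y = extended_gcd(b, r)
    # g = b*x + r*y = b*x + (a - q*b)*y = a*y + b*(x - q*y)
    return g, y, x - q * y

g, x, y = extended_gcd(15284, 4889)
print(g, x, y)                        # 1 -832 2601
assert 15284 * x + 4889 * y == g == 1
```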

So why does this algorithm really work? One way of motivating the Euclidean algorithm is the question of finding multiplicative inverses modulo some $ m$. The question of finding the multiplicative inverse of $ a \bmod m$ is exactly the question of solving the linear Diophantine equation

$ ax = 1 + my \Leftrightarrow$
$ ax - my = 1$.

For example, the above calculation demonstrates that the multiplicative inverse of $ 4889 \bmod 15284$ is $ 2601$. We interpret the equation as the question of finding a multiplicative inverse by taking it $ \bmod m$. But why restrict ourselves? As a concrete example, to find the inverse of $ 4889 \bmod 15284$ we solved

$ 15284x - 4889y = 1$.

We recover the inverse by taking $ \bmod 15284$. But to solve this equation suppose we take $ \bmod 4889$ instead. Then

$ 617x \equiv 1 \bmod 4889$

and we are finding the inverse of $ 617 \bmod 4889$ instead. Note that this is exactly the first step of the first half of the Euclidean algorithm! Thus one motivation of the Euclidean algorithm is that we reduce a modular problem $ (a, b)$ (of finding the inverse of $ a \bmod b$) to the modular problem $ (b', a)$ (of finding the inverse of $ b' \equiv b \bmod a$), which is "smaller," and once we have reduced enough we backtrack to get a solution to the original problem (which is the content of the second half of the algorithm).
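
Here is how that reduction might look as a recursive Python sketch for computing modular inverses: the inverse of $ a \bmod m$ is obtained from the inverse of $ m \bmod a$ taken $ \bmod a$, which is exactly the "smaller" problem described above (the helper name `modinv` and the bookkeeping are illustrative, not canonical).

```python
def modinv(a, m):
    """Inverse of a modulo m, assuming 0 < a < m and gcd(a, m) = 1."""
    if a == 1:
        return 1
    # The smaller problem: invert m mod a, working modulo a.  Choosing t with
    # m*t = -1 (mod a) makes 1 + m*t divisible by a, and then a*x = 1 + m*t,
    # i.e. x is the inverse of a mod m.
    t = (-modinv(m % a, a)) % a
    return (1 + m * t) // a

print(modinv(4889, 15284))   # 2601, matching the computation above
assert (4889 * modinv(4889, 15284)) % 15284 == 1
```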

Generalizations of the Euclidean algorithm lead to the notion of a Euclidean domain, but for now let's dispense with the abstract stuff and ask ourselves, say, an algorithmic question: can we bound the number of steps required to execute the Euclidean algorithm on $ (a, b)$ as a function of $ a$ and $ b$? In fact, we can, and very easily so! A straightforward bound is implied by

Lamé's Theorem: The pair $ (F_n, F_{n+1})$ is the "smallest" pair of numbers requiring $ n$ steps for the Euclidean algorithm to execute. (We have added an extra step to the Euclidean algorithm here for reasons to be explained later.)

And you thought I couldn't sneak the Fibonacci numbers into this one!


We defer the proof of the above result because we wish to place it in the proper context (although it is somewhat intuitive). To do so it is necessary first to streamline the Euclidean algorithm. The fundamental step where we replace $ (a, b)$ with $ (b', a)$ can be broken down into the two steps

$ (a, b) \mapsto (a, b - a), a < b$

or

$ (a, b) \mapsto (b, a), a > b$.

In running the algorithm we merely always keep the larger of the two numbers on the right until we get the ordered pair $ (0, 1)$ (equivalently, we could stop at $ (1, 0)$ or even at $ (1, p)$). The advantage of working with these steps is that they can be easily represented by the matrices

$ \mathbf{A} = \left[ \begin{array}{cc} 1 & -1 \\ 0 & 1 \end{array} \right]$
$ \mathbf{J} = \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right]$

(where we have identified the ordered pair $ (a, b)$ with the column matrix $ \left[ \begin{array}{c} b \\ a \end{array} \right]$ for later convenience.) The Euclidean algorithm can then be formulated as follows.

Theorem (the Euclidean algorithm): Given two relatively prime positive integers $ a, b$ with $ a < b$, there exists a unique finite sequence of natural numbers $ q_0, q_1, ..., q_n$ such that

$ \mathbf{J} \mathbf{A}^{q_n} ... \mathbf{J} \mathbf{A}^{q_0} \left[ \begin{array}{c} b \\ a \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$.

$ \{ q_i \}$ is precisely the sequence of quotients in the presentation of the Euclidean algorithm above. ("Unique" has to be understood in the proper sense because we can end the Euclidean algorithm in a few slightly different ways; we could choose to continue so that the last quotient is always $ 1$, for example, but this is not really necessary.)
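
As a sanity check on this formulation, the following Python sketch builds the product $ \mathbf{J} \mathbf{A}^{q_n} ... \mathbf{J} \mathbf{A}^{q_0}$ for our running example and applies it to the column $ \left[ \begin{array}{c} b \\ a \end{array} \right]$; the helper functions are ad hoc $ 2 \times 2$ matrix arithmetic, nothing more.

```python
def mat_mul(M, N):
    (a, b), (c, d) = M
    (e, f), (g, h) = N
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def mat_vec(M, v):
    (a, b), (c, d) = M
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

A = ((1, -1), (0, 1))
J = ((0, 1), (1, 0))
I = ((1, 0), (0, 1))

def quotients(a, b):
    """Quotient sequence q_0, ..., q_n of the Euclidean algorithm on (a, b), a < b."""
    qs = []
    while a > 0:
        q, r = divmod(b, a)
        qs.append(q)
        a, b = r, a
    return qs

a, b = 4889, 15284
qs = quotients(a, b)                  # [3, 7, 1, 12, 7, 1, 5]
M = I
for q in qs:                          # build J A^{q_n} ... J A^{q_0}
    step = J
    for _ in range(q):
        step = mat_mul(step, A)       # step = J A^q
    M = mat_mul(step, M)
print(mat_vec(M, (b, a)))             # (1, 0), i.e. the column [1; 0]
```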

It is then natural to ask what the nature of the matrix $ \mathbf{J} \mathbf{A}^{q_n} ... \mathbf{J} \mathbf{A}^{q_0}$ is. The answer is also a direct consequence of the Euclidean algorithm. In particular, the following theorem occurs (with different notation) in the problem sets of 18.701 at MIT:

Theorem (the Euclidean algorithm, take 2): The special linear group $ SL_2(\mathbb{Z})$ of integer matrices with determinant $ 1$ is generated by $ \mathbf{A}$ and $ \mathbf{JAJ} = \mathbf{A}^T$. (Note that $ \mathbf{J}$ is its own inverse and transpose and that it has determinant $ -1$.)

Proof. The additional step necessary to execute the Euclidean algorithm on negative integers is $ (a, b) \mapsto (a, b + a)$ when $ a, b$ have opposite signs, which is just the matrix $ \mathbf{A}^{-1}$. $ \mathbf{JAJ}$ is the step $ (a, b) \mapsto (a - b, b)$, and it (and its inverse) can be taken as a replacement for the switching step (that is, depending on whichever of $ a, b$ is greater we subtract the smaller from the greater at each step). In other words, we now have four cases and four possible steps:

- $ (a, b) \mapsto (a, b-a), a, b \ge 0, a < b \text{ or } a, b \le 0, a > b (\mathbf{A})$
- $ (a, b) \mapsto (a-b, b), a, b \ge 0, a > b \text{ or } a, b \le 0, a < b (\mathbf{JAJ})$
- $ (a, b) \mapsto (a, b+a), a < 0, b \ge 0 (\mathbf{A}^{-1})$
- $ (a, b) \mapsto (a+b, b), a \ge 0, b < 0 (\mathbf{JA}^{-1} \mathbf{J})$

Now, given $ \mathbf{M} = \left[ \begin{array}{cc} b & x \\ a & y \end{array} \right] \in SL_2(\mathbb{Z})$ (hence with $ by - ax = 1$), repeatedly apply the four transformations above, as in the first theorem, to

$ \mathbf{M} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} b \\ a \end{array} \right]$.

We get an expression of the form

$ (\text{Some As and Js}) \mathbf{M} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$.

(Note that even without an explicit switching step $ (a, b) \mapsto (b, a)$ it is still possible to ensure that we end at $ (0, 1)$ by running the algorithm until $ (1, 1)$ is reached.) $ (\text{Some As and Js}) \mathbf{M}$ is therefore a matrix with left column $ \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$. Because it still has determinant $ 1$ (since $ \mathbf{A}, \mathbf{JAJ}$, and their inverses all have determinant $ 1$), its right column must be of the form $ \left[ \begin{array}{c} x' \\ 1 \end{array} \right]$ for some integer $ x'$. But this matrix is just $ \mathbf{A}^{-x'}$. Inverting gives the desired result.

Note that this is a slightly more general statement than the ordinary Euclidean algorithm because we have made no assumptions about the signs of the integers involved. The members of $ SL_2(\mathbb{Z})$ generated by non-negative powers of $ \mathbf{A}$ and $ \mathbf{JAJ}$ (where we take no inverses) form a semigroup, and these are the matrices that appear in our first formulation of the Euclidean algorithm. It is convenient at this point to adopt the notation $ \mathbf{F}_q = \mathbf{A}^{-q} \mathbf{J} = \left[ \begin{array}{cc} q & 1 \\ 1 & 0 \end{array} \right]$, in which case we can reword Take 1 into the following

Theorem (the Euclidean algorithm, take 3): Given relatively prime positive integers $ (a, b), a < b$, there exists a unique finite sequence of natural numbers $ q_0, q_1, ... q_n$ such that

$ \left[ \begin{array}{c} b \\ a \end{array} \right] = \mathbf{F}_{q_0} \mathbf{F}_{q_1} ... \mathbf{F}_{q_n} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$.

Consequently, $ SL_2(\mathbb{Z})$ is generated by $ \mathbf{JF}_1$ and $ \mathbf{F}_1 \mathbf{J}$.

The use of the letter $ \mathbf{F}$ was intentional - $ \mathbf{F}_1$ is precisely the Fibonacci matrix! This observation leads to a quick and intuitive

Proof of Lamé's Theorem. In counting the number of steps in the Euclidean algorithm, we should be specific: we are counting the number of steps of the form $ (a, b) \mapsto (b', a)$ where $ b'$ is the reduction of $ b \bmod a$. This is precisely the number of quotients $ q_i$ that occur in the process of running the Euclidean algorithm. We also need to be specific about what we mean by "size." Given two relatively prime integers $ (a, b)$, simply define the size of the ordered pair to be $ \text{max}(a, b)$.

It is now straightforward that $ \mathbf{A}^{-1}$ strictly increases the size of a pair of relatively prime positive integers while $ \mathbf{J}$ leaves it unchanged, and therefore that the smallest pair of relatively prime positive integers requiring $ n$ steps in the Euclidean algorithm is

$ \left( \mathbf{F}_1 \right)^n \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} F_{n+1} \\ F_n \end{array} \right]$

exactly as desired. Beautiful!
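
For the skeptical, a brute-force Python sketch: for each step count, search for the coprime pair with the smallest larger entry requiring that many division steps. The pairs that come out are consecutive Fibonacci numbers; the exact index offset depends on how one counts the final step, as noted above.

```python
import math

def euclid_steps(a, b):
    """Count the division steps (a, b) -> (b mod a, a) until a reaches 0."""
    steps = 0
    while a > 0:
        a, b = b % a, a
        steps += 1
    return steps

# For each step count, record the first (i.e. smallest-b) coprime pair a < b seen.
smallest = {}
for b in range(2, 300):
    for a in range(1, b):
        if math.gcd(a, b) == 1:
            n = euclid_steps(a, b)
            if n not in smallest:
                smallest[n] = (a, b)

for n in sorted(smallest):
    print(n, smallest[n])
# 1 (1, 2)
# 2 (2, 3)
# 3 (3, 5)
# 4 (5, 8)
# 5 (8, 13) ... consecutive Fibonacci numbers, as Lamé's theorem predicts
```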


There is a fascinating sense in which the Euclidean algorithm generalizes to irrational numbers. Suppose we interpreted the intermediate steps $ (s, r), s < r$ in the Euclidean algorithm as points (particularly, lattice points) in the plane. When $ r, s$ are relatively prime (which is always the case here), we lose no information in identifying a point with its slope $ \frac{r}{s}$. A linear transformation $ (s, r) \mapsto (cr + ds, ar + bs)$ (of the type we studied above) then becomes a fractional linear transformation (or Möbius transformation) $ \frac{r}{s} \mapsto \frac{ a \frac{r}{s} + b}{c \frac{r}{s} + d}$ of the corresponding slope. A succinct description of the Euclidean algorithm is now possible as follows: one "step" in the Euclidean algorithm is the transformation

$ \frac{r}{s} \mapsto \frac{1}{\frac{r}{s} - \left\lfloor \frac{r}{s} \right\rfloor}$

(where $ q = \left\lfloor \frac{r}{s} \right\rfloor$ is clearly the quotient in the division). The above discussion demonstrates that the set of fractional linear transformations satisfying $ ad - bc = 1$ behaves under composition essentially like matrix multiplication. The group we get here is called the modular group $ \Gamma$ and the above discussion shows that it is isomorphic to $ PSL_2(\mathbb{Z})$, the projective special linear group with integer coefficients (essentially $ SL_2(\mathbb{Z})$ once we identify every transformation with its negative). The modular group is a rather fascinating group that shows up unexpectedly all over the place (this being one of those places!).
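
The claim that these transformations compose like matrices is easy to check numerically; here is a tiny Python sketch with two arbitrarily chosen determinant-$ 1$ integer matrices, using exact arithmetic via `fractions.Fraction` to avoid rounding issues.

```python
from fractions import Fraction

def mobius(M, x):
    """Apply the fractional linear transformation of M = ((a, b), (c, d)) to x."""
    (a, b), (c, d) = M
    return (a * x + b) / (c * x + d)

def mat_mul(M, N):
    (a, b), (c, d) = M
    (e, f), (g, h) = N
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

M = ((2, 1), (1, 1))     # det = 1
N = ((3, 2), (1, 1))     # det = 1
x = Fraction(5, 7)

# Composing the maps is the same as multiplying the matrices.
assert mobius(M, mobius(N, x)) == mobius(mat_mul(M, N), x)
```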

The transformation above is associated with the matrix $ \mathbf{F}_q^{-1} = \mathbf{J} \mathbf{A}^q$ exactly as expected (since this is how we implemented the Euclidean algorithm). Here $ \mathbf{A}^q$ corresponds to the subtraction step and $ \mathbf{J}$ corresponds to the map $ x \mapsto \frac{1}{x}$. The fractional linear transformation $ f_q(x) = q + \frac{1}{x} = \frac{qx + 1}{x}$ (which has matrix $ \mathbf{F}_q$, again as expected) is the inverse map of the above; applying the Euclidean algorithm on $ \frac{r}{s}$ and then inverting each step gives

$ \frac{r}{s} = f_{q_0}(f_{q_1}(...f_{q_n}(\infty)))$

(where $ f_q(\infty) = q$), which we can write quite beautifully as

$ \frac{r}{s} = q_0 + \frac{1}{q_1 + \frac{1}{... + \frac{1}{q_n}}}$.

This is known as the (simple) continued fraction of $ \frac{r}{s}$ and is usually abbreviated $ [q_0 ; q_1, q_2, ... q_n]$. For example, the continued fraction of $ \frac{15284}{4889}$ is $ [3; 7, 1, 12, 7, 1, 5]$ (precisely the sequence of quotients in the Euclidean algorithm). In other words, we have the following

Theorem (the Euclidean algorithm, take 4): Every rational number has a unique terminating continued fraction expansion.

(Again, unique has to be understood in the proper sense because $ [q_0 ; q_1, q_2, ... q_n] = [q_0 ; q_1, q_2, ... q_n - 1, 1]$. The convention we are using, therefore, is that $ q_n > 1$.) Continued fraction representation is really exactly Take 3 of the theorems above once we identify $ (s, r)$ with $ \frac{r}{s}$. Because of the way we defined fractional linear transformations, the point $ (0 : 1)$ at which we chose to end the algorithm is now essentially the point at infinity and corresponds to our use of $ \infty$ above. The definition of the convergents of $ \frac{r}{s}$ is now perhaps more natural. They are given by the slopes of the points corresponding to the vectors

$ \mathbf{p}_k = \left[ \begin{array}{c} r_k \\ s_k \end{array} \right] = \mathbf{F}_{q_0} \mathbf{F}_{q_1} ... \mathbf{F}_{q_k} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right], k \le n$.

Since the computation of successive convergents is just "inserted" multiplication by a matrix, it is also not surprising that the calculation of convergents can be done by recurrences known as the fundamental recurrence formulas (which have a very obvious interpretation in terms of matrix multiplication). Simply note that

$ \mathbf{F}_{q_{k+1}} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} q_{k+1} \\ 1 \end{array} \right], \quad \mathbf{F}_{q_{k+1}} \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \implies$
$ \left[ \begin{array}{cc} r_{k+1} & r_k \\ s_{k+1} & s_k \end{array} \right] = \mathbf{F}_{q_0}... \mathbf{F}_{q_k} \mathbf{F}_{q_{k+1}}$.

Denoting the LHS by $ \mathbf{P}_{k+1}$, it is now immediate that

$ \mathbf{P}_{k+1} = \mathbf{P}_k \mathbf{F}_{q_{k+1}}$

and that $ \text{det}(\mathbf{P}_{k+1}) = (-1)^k$; in other words, we have the identities

$ r_{k+1} = r_k q_{k+1} + r_{k-1}$
$ s_{k+1} = s_k q_{k+1} + s_{k-1}$
$ r_{k+1} s_k - r_k s_{k+1} = (-1)^k$.

In particular, because the last convergent of a rational number is itself, the third identity tells us that the second-to-last convergent is the solution to $ ax - by = 1$ that we wanted in the first place! Thus the second-to-last convergent of $ \frac{15284}{4889}$ is $ \frac{2601}{832}$. These seemingly simple identities have other deep implications. Note, for example, their use in my solution to IMO 2008 #3. But first: now that we have recast the Euclidean algorithm in terms of the map $ x \mapsto \frac{1}{x - \lfloor x \rfloor}$, we no longer have to restrict ourselves to rational $ x$. The simple continued fraction of an irrational number $ x$ can be defined as the infinite result of applying the Euclidean algorithm. Perhaps the most beautiful example of such a fraction is

$ \phi = 1 + \frac{1}{1 + \frac{1}{1 + \frac{1}{1 + ...}}}$

which satisfies $ q_0 = q_1 = ... = 1$ (and therefore $ \mathbf{F}_{q_k} = \mathbf{F}_1 = \mathbf{F}$.) The convergents to $ \phi$ are therefore precisely the ratios of consecutive Fibonacci numbers $ \frac{F_{n+1}}{F_n}$. (This is obvious from the matrix computation and from the recursions given above.) This provides an immediate proof of the identity $ F_{k+1} F_{k-1} - F_k^2 = (-1)^k$ (which, as you'll recall, we proved by matrix methods alone before) and also an even more natural interpretation of Distinguishing Result 1: the number of steps required to run the Euclidean algorithm on a rational number $ x$ is the number of terms in its continued fraction representation, so to minimize the "size" (here a quantity known as height) of the corresponding fraction we want to make each term as small as possible, that is, $ 1$.
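
Here is a short Python sketch tying these threads together: it extracts the quotient sequence of $ \frac{15284}{4889}$, computes its convergents via the fundamental recurrences (recovering $ \frac{2601}{832}$ as the second-to-last convergent), and, fed the all-ones quotient sequence, produces ratios of consecutive Fibonacci numbers. The function names are ad hoc.

```python
from fractions import Fraction

def cf_quotients(r, s):
    """Quotients [q0; q1, ..., qn] of the simple continued fraction of r/s."""
    qs = []
    while s:
        q, rem = divmod(r, s)
        qs.append(q)
        r, s = s, rem
    return qs

def convergents(qs):
    """Convergents r_k/s_k from r_{k+1} = q_{k+1} r_k + r_{k-1} and likewise for s."""
    r_prev, s_prev = 1, 0          # the "empty" convergent, i.e. the point at infinity
    r_cur, s_cur = qs[0], 1
    out = [Fraction(r_cur, s_cur)]
    for q in qs[1:]:
        r_cur, r_prev = q * r_cur + r_prev, r_cur
        s_cur, s_prev = q * s_cur + s_prev, s_cur
        out.append(Fraction(r_cur, s_cur))
    return out

qs = cf_quotients(15284, 4889)
print(qs)                          # [3, 7, 1, 12, 7, 1, 5]
print(convergents(qs)[-2])         # 2601/832, the second-to-last convergent
print(*convergents([1] * 10))      # 1 2 3/2 5/3 8/5 ... ratios of consecutive Fibonacci numbers
```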

You may take some pleasure from the fact that after all of the effort that has been expended on this blog finding ways to express the Fibonacci numbers in terms of $ \phi$ we now have an extraordinary framework in which we found a way to express $ \phi$ in terms of the Fibonacci numbers.


The identities given above can be interpreted in terms of what is sometimes called the geometry of numbers. If a real number $ p$ has convergents $ p_k = \frac{r_k}{s_k}$, the observation that $ r_{k+1} s_k - r_k s_{k+1} = (-1)^k$ is really the observation that

$ \frac{r_{k+1}}{s_{k+1}} - \frac{r_k}{s_k} = \frac{(-1)^k}{s_k s_{k+1}}$,

which tells us that $ p_k$ and $ p_{k+1}$ are quite close together, and moreover that the sequence $ \{ p_k \}$ is alternately increasing and decreasing. Since the absolute value of the first differences is strictly decreasing it then follows that $ p_k$ is alternately above and below $ p$ (which is also straightforward by observing what happens when you delete and add terms "deep down" in a simple continued fraction). We interpret this as follows:

Draw the line $ y = px$. The lattice points $ (s_k, r_k)$ lie very close to this line and alternate between its two sides. Moreover, by Pick's theorem we conclude that the triangle $ (0, 0), (s_k, r_k), (s_{k+1}, r_{k+1})$ contains no lattice points other than its vertices. Any lattice point within the interior of the above triangle would correspond to a point whose slope is closer to the value of $ p$ than $ p_k$, but we now know that this does not occur. We are therefore led to the conclusion that

The convergents to a real number determine its best rational approximations.

As Practice Problem 2 shows, this has important implications for the solution of certain Diophantine equations. The observation that $ \frac{355}{113}$ is an extremely good approximation to $ \pi$ can therefore be explained by the fact that it is a convergent, and moreover, it turns out that the next quotient in the continued fraction is quite large (which means $ s_{k+1} >> s_k$). We can even estimate the error in the approximation:

$ |p_k - p| < \frac{1}{s_k s_{k+1}} < \frac{1}{s_k^2}$,

so in other words, the error is roughly the reciprocal of the square of the denominator, which is quite good, and if $ s_{k+1}$ is significantly larger than $ s_k$ it is a lot better. How much better this estimate can get in general is discussed in Practice Problem 3. In any case, if I have not convinced you of anything else you should note the power of Pick's Theorem, which is so mighty it is one of the lemmas used in a proof of quadratic reciprocity (but that's another story).
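
As a numerical illustration (a sketch, not a rigorous computation): below, a $ 20$-digit rational approximation to $ \pi$ stands in for $ \pi$ itself, which is harmless for the first several quotients; the convergent $ \frac{355}{113}$ appears, and each error sits just below the bound $ \frac{1}{s_k s_{k+1}}$.

```python
from fractions import Fraction

def cf_convergents(x, n):
    """First n quotients and convergents of the simple continued fraction of x > 0."""
    qs, conv = [], []
    r_prev, r_cur = 0, 1           # r_{-2}, r_{-1} in the fundamental recurrences
    s_prev, s_cur = 1, 0           # s_{-2}, s_{-1}
    for _ in range(n):
        q = int(x)                 # floor, since x stays positive
        qs.append(q)
        r_cur, r_prev = q * r_cur + r_prev, r_cur
        s_cur, s_prev = q * s_cur + s_prev, s_cur
        conv.append(Fraction(r_cur, s_cur))
        x = 1 / (x - q)            # the Gauss-map step x -> 1/(x - floor(x))
    return qs, conv

# a 20-digit rational stand-in for pi; its first several quotients agree with pi's
pi_approx = Fraction(314159265358979323846, 10**20)
qs, conv = cf_convergents(pi_approx, 6)
print(qs)                          # [3, 7, 15, 1, 292, 1]
for p, p_next in zip(conv, conv[1:]):
    error = abs(p - pi_approx)
    bound = Fraction(1, p.denominator * p_next.denominator)
    print(p, float(error), float(bound))   # each error is just under 1/(s_k s_{k+1})
# 355/113 shows up as the fourth convergent, with error about 2.7e-7
```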


Note that the action of the elements of the modular group we described above takes each convergent to a better convergent which has "almost" the same slope - in other words, as lattice points the convergents (and therefore $ p$) are "almost" eigenvectors. The actual eigenvectors of $ \mathbf{F}_q$ correspond to the slopes $ \frac{q \pm \sqrt{q^2 + 4}}{2}$, a clear generalization of the situation with the Fibonacci sequence, where $ p = \phi$ is the exact eigenvector. Thus the action of repeated applications of $ \mathbf{F}_1$ can be seen to be an expression of the principle of the dominant eigenvalue. Generally, the action of $ \mathbf{F}_{q_k}$ for each of the quotients in a continued fraction expansion can be thought to "average" out in some sense to $ p$. (It is perhaps less surprising, now, if I tell you that the quotients in the continued fraction of almost any real have the same geometric mean.)

This notion generalizes slightly as follows: if we take $ q_0 = q_1 = ... = x$ for a fixed constant $ x$ and truncate after $ n$ quotients, we obtain the sequence of rational functions

$ \phi_n(x) = x + \frac{1}{x + \frac{1}{x + ...}} = \frac{F_{n+1}(x)}{F_n(x)}$

where $ F_n(x) = \frac{1}{\sqrt{x^2 + 4}} \left( \left( \frac{x + \sqrt{x^2 + 4}}{2} \right)^n - \left( \frac{x - \sqrt{x^2 + 4}}{2} \right)^n \right)$ is the Fibonacci polynomial of order $ n$. This is a straightforward generalization of Binet's formula; the recurrence that describes these polynomials is just $ F_0(x) = 0, F_1(x) = 1, F_n(x) = x F_{n-1}(x) + F_{n-2}(x)$, with $ F_n(1) = F_n$.
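
A quick numerical sanity check of the recurrence against the Binet-like formula (function names are illustrative; floating point is fine here since we only compare approximately):

```python
import math

def fib_poly(n, x):
    """F_n(x) from the recurrence F_0 = 0, F_1 = 1, F_n(x) = x F_{n-1}(x) + F_{n-2}(x)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, x * b + a
    return a

def fib_poly_binet(n, x):
    """The Binet-like closed form, with roots (x +- sqrt(x^2 + 4)) / 2."""
    d = math.sqrt(x * x + 4)
    alpha, beta = (x + d) / 2, (x - d) / 2
    return (alpha ** n - beta ** n) / d

fibs = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
for n in range(1, 10):
    assert fib_poly(n, 1) == fibs[n]                              # F_n(1) = F_n
    assert abs(fib_poly(n, 3) - fib_poly_binet(n, 3)) < 1e-9      # closed form agrees

# phi_n(x) = F_{n+1}(x) / F_n(x) approaches the dominant root (x + sqrt(x^2 + 4)) / 2
x = 3.0
print(fib_poly(20, x) / fib_poly(19, x), (x + math.sqrt(x * x + 4)) / 2)
```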


Excursion: It turns out that $ F_{n+1}(x) = \sum_{j=0}^{ \lfloor \frac{n}{2} \rfloor} {n - j \choose j} x^{n-2j}$ (by induction, for example), which gives in particular $ F_{n+1} = \sum_{j=0}^{\lfloor \frac{n}{2} \rfloor} {n-j \choose j}$, a rather beautiful identity begging for a combinatorial interpretation. Recalling that $ F_{n+1}$ counts the number of paths of length $ n$ that can be taken by steps of length $ 1$ or $ 2$, this is reasonably straightforward: a path of length $ n$ with $ j$ steps of length $ 2$ has $ n - j$ total steps, and given a set of $ n - j$ steps we pick $ j$ of them to "lengthen."

Now the general Fibonacci polynomial (for a positive integer $ x$) can be understood as counting the number of ways in which a $ 1 \times n$ rectangle can be tiled by $ 1 \times 1$ tiles of $ x$ different colors and $ 1 \times 2$ tiles of one color (a situation where a recurrence can be written whose characteristic equation is precisely $ t^2 = xt + 1$), and the corresponding combinatorial proof is along exactly the same lines as above. This general situation can also be interpreted in terms of powers of the adjacency matrix of a graph on two vertices, in which vertex $ A$ carries $ x$ loops and is joined to vertex $ B$ by an edge. We can also consider the situation where the $ 1 \times 2$ tiles are of $ y$ different colors (the general second-order recurrence), but this recurrence is not as easy to translate back into the language of simple continued fractions. (The appropriate generalization is the subject of Practice Problem 1.)
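
The tiling interpretation can be spot-checked by brute force; the following Python sketch literally lists the tilings (squares carrying a color label, dominoes uncolored) and compares the count against $ F_{n+1}(x)$ computed from the recurrence.

```python
def all_tilings(n, x):
    """List the tilings of a 1 x n strip: 1x1 tiles in x colors, 1x2 tiles in one color."""
    if n == 0:
        return [[]]
    out = []
    for color in range(x):                               # a colored 1x1 tile on the left
        out += [[("square", color)] + t for t in all_tilings(n - 1, x)]
    if n >= 2:                                           # a 1x2 tile on the left
        out += [[("domino",)] + t for t in all_tilings(n - 2, x)]
    return out

def fib_poly(n, x):
    """F_n(x) from the recurrence F_0 = 0, F_1 = 1, F_n = x*F_{n-1} + F_{n-2}."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, x * b + a
    return a

for n in range(7):
    for x in range(1, 4):
        assert len(all_tilings(n, x)) == fib_poly(n + 1, x)
print("tilings of a 1 x n strip are counted by F_{n+1}(x)")
```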


Back on topic, we would like to ask whether $ \phi_n(x)$ is a "generalization" of the convergents to $ \phi$. In other words,

Does there exist a function $ \phi(x)$ such that $ \lim_{n \to \infty} \phi_n(x) = \phi(x)$ in an appropriate sense?

We might expect this function to be $ \frac{x + \sqrt{x^2 + 4}}{2}$. Since the two roots above are the roots of the polynomial $ t^2 - xt - 1$, the negative root has small absolute value (for real $ x$) and so has negligible contribution to asymptotic behavior. Indeed, within this context the above calculation can be considered (at least for integer $ x$) to be the process of simultaneously computing the continued fraction of a large class of rationals where, at each step, we remark that it is always the case that $ \frac{-x + \sqrt{x^2 + 4}}{2} < 1$. We might now ask whether this notion generalizes; in other words,

Does there exist a notion of continued fraction expansion for some large class of functions?

Rational functions have an obvious choice of terminating simple continued fraction given by the corresponding Euclidean algorithm on polynomials (although it is no longer true that substituting integers will get us a valid simple continued fraction of a rational number; some quotients may be negative). This is an interesting perspective: the Euclidean algorithm then allows one to decompose rational functions uniquely into the composition of fractional linear transformations of the form $ x \mapsto q + \frac{1}{x} = \frac{qx + 1}{x}$ where the quotients $ q = Q(t)$ are polynomials. If the notion of "functional" continued fraction exists for polynomial quotients, rational functions will be its convergents. Part of the question we're asking, therefore, is the question of what class of functions can be appropriately approximated by sequences of rational functions.
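
Here is a rough Python sketch of that polynomial Euclidean algorithm, with polynomials represented as coefficient lists over the rationals (the representation and helper names are arbitrary choices); note how, as promised, the quotients need not have integer coefficients.

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (polynomials are lists, lowest degree first)."""
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def poly_divmod(f, g):
    """Quotient and remainder of f by g over the rationals."""
    f, g = [Fraction(c) for c in f], [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(trim(r)) >= len(g):
        r = trim(r)
        d = len(r) - len(g)
        c = r[-1] / g[-1]
        q[d] += c
        for i, gc in enumerate(g):
            r[i + d] -= c * gc
    return trim(q), trim(r)

def poly_cf(f, g):
    """Polynomial quotients of the continued fraction of f/g (Euclidean algorithm)."""
    quotients = []
    while trim(g):
        q, r = poly_divmod(f, g)
        quotients.append(q)
        f, g = g, r
    return quotients

# Example: (x^3 + 2x + 1) / (x^2 + 1); coefficients are listed from the constant term up.
f = [1, 2, 0, 1]
g = [1, 0, 1]
print([[str(c) for c in q] for q in poly_cf(f, g)])   # the quotients x, x - 1, and (x + 1)/2
```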

This should remind you of the notion of a Taylor series, which essentially asks the question of what class of functions can be appropriately approximated by sequences of polynomials. Indeed, it is possible to ask the same kind of convergence questions for infinite continued fractions as it is for power series, although these questions are much harder to answer. (This appears to be a current area of active research.) A partial answer is provided by Euler's continued fraction formula, a rather beautiful identity which translates an infinite series into an infinite continued fraction and therefore allows a partial translation in the other direction, and it is here that I am, once again, out of details (although my hunch that hypergeometric functions are part of the answer appears to be correct.)


In closing, one application of the modular group is its use in verifying identities involving a fascinating set of functions called modular forms, which are functions that satisfy a "modular symmetry"; in other words, the modular group acts on them in a very specific way (as described in the article). This condition can be broken down into two conditions by looking at two generators of the modular group (which, we must stress, are not unique), and indeed this is the way a function is usually proven modular. A major application of this idea is to prove that two functions are identical by proving that they are both modular with the same value of $ k$ (called the weight of the function). Without getting into all that, ideas from the application of the modular group to complex analysis provide a proof of the Euclidean algorithm which is at once familiar and novel, and relies on the fact that the modular group has a natural group action on the upper half plane $ \mathbb{H}$ (and in fact forms a subgroup of $ PSL_2(\mathbb{R})$, the full automorphism group of $ \mathbb{H}$).

First, we will need a few definitions. We say that $ z \sim w$ if there exists an element $ \gamma$ of the modular group such that $ \gamma(z) = w$. (Verify that this is an equivalence relation.) Let $ E_z$ denote the equivalence class of all such $ w$ for a particular $ z \in \mathbb{H}$. Let $ \mathcal{F} = \{ \tau \in \mathbb{H} : |\tau| \ge 1, |\Re{\tau}| \le \tfrac{1}{2} \}$ denote the fundamental domain in the upper half-plane.

Theorem (the Euclidean algorithm, take 5): For a given $ z$, there exists either exactly one element of $ E_z$ in the interior of $ \mathcal{F}$ or two elements of $ E_z$ on its boundary. Consequently, the modular group is generated by $ z \mapsto - \frac{1}{z}$ and $ z \mapsto z + 1$.

The proof is left as an exercise. (Note that $ z \mapsto z + 1$ is a translation and $ z \mapsto - \frac{1}{z}$ is a reflection through the $ y$-axis followed by an inversion about the circle immediately below $ \mathcal{F}$.) Suffice it to say that the sequence of steps that translates a complex number in $ \mathbb{H}$ into $ \mathcal{F}$ is the complex-numerical analogue of the Euclidean algorithm and implies that the values of a modular function are uniquely determined by its values in $ \mathcal{F}$. Therefore, if two modular functions of the same weight agree on $ \mathcal{F}$ then they are identical. This seemingly straightforward principle leads to a number of highly nontrivial results, but this is also another story.
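
For concreteness, here is a small Python sketch of that "complex-numerical Euclidean algorithm": it pushes a point of $ \mathbb{H}$ into $ \mathcal{F}$ using only $ z \mapsto z + 1$ and $ z \mapsto - \frac{1}{z}$ (floating-point arithmetic, with an iteration cap as a safety net).

```python
def reduce_to_fundamental_domain(z, max_iter=1000):
    """Move z in the upper half-plane into F = {|z| >= 1, |Re z| <= 1/2}."""
    assert z.imag > 0
    for _ in range(max_iter):
        # translate by an integer so that the real part lies in [-1/2, 1/2]
        z = complex(z.real - round(z.real), z.imag)
        if abs(z) >= 1:
            return z
        # the inversion step; when |z| < 1 this strictly increases the imaginary part
        z = -1 / z
    return z    # should not be reached for generic z

w = reduce_to_fundamental_domain(complex(3.7, 0.01))
print(w, abs(w) >= 1, abs(w.real) <= 0.5)
```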


Practice Problem 0: Prove that $ F_{n+1}(x) = \sum_{j=0}^{\lfloor \frac{n}{2} \rfloor} {n - j \choose j} x^{n-2j}$ by expanding the Binet-like formula.

Practice Problem 1: Prove that the simple continued fraction of a real number is (eventually) periodic if and only if the number is an irrational root of a quadratic polynomial with integer coefficients.

Practice Problem 2: Prove that if $ (x, y)$ is a positive integer solution to the Pell equation $ x^2 - dy^2 = 1$, then $ \frac{x}{y}$ is a convergent of $ \sqrt{d}$.

Practice Problem 3: What is the best constant $ k$ such that $ \left| \frac{r}{s} - x \right| < \frac{1}{ks^2}$ for infinitely many convergents $ \frac{r}{s}$ of $ x$, where $ x$ is an arbitrary irrational? (Hint: Look at $ \phi$ again. This result is closely related, in spirit, to Lamé's Theorem.)

Practice Problem 4: Prove Take 5.

2 Comments

You sort of lost me after Lamé's theorem, but your explanation of the extended Euclidean algorithm was quite insightful. :)

by Anonymous, Jul 28, 2008, 7:06 PM

Here is another good practice problem to put on.

Mr. Fat and Ms. Taf are playing a game. Mr. Fat has a set of $ n$ positive integers $ a_1, \ldots a_n$. Ms. Taf only knows $ n$ and the game is for her to figure out those numbers. She is allowed to give a red card and blue card to Mr. Fat, each with an integer ($ R$ on the red card and $ B$ on the blue card). Mr. Fat then performs the following process:
- Starts with $ i = 1$.
- Calculates $ B^\prime = a_i B + R$.
- Replaces the number on the red card $ R$ with $ B$.
- Replaces the number on the blue card $ B$ with $ B^\prime$.
- If $ i = n$, stops.
- Increases $ i$ by $ 1$ and does the same thing again.
Once the process stops, Mr. Fat hands the two cards back to Ms. Taf.
For each $ n$, what is the minimum number of pairs of cards Ms. Taf needs to submit to determine the $ n$ positive integers?
(this is USA TST 2008 #8)

by MellowMelon, Jul 29, 2008, 1:28 AM
