# Characteristic polynomial


## Revision as of 16:16, 28 February 2010

The **characteristic polynomial** of a linear operator refers to the polynomial whose roots are the eigenvalues of the operator. It carries much information about the operator.


## Definition

Suppose <math>A</math> is an <math>n \times n</math> matrix (over a field <math>K</math>). Then the characteristic polynomial of <math>A</math> is defined as <math>P_A(t) = \det(tI - A)</math>, which is an <math>n</math>th degree polynomial in <math>t</math>. Here, <math>I</math> refers to the <math>n\times n</math> identity matrix.

Written out, the characteristic polynomial is the determinant

<cmath>P_A(t) = \begin{vmatrix} t - a_{11} & -a_{12} & \cdots & -a_{1n} \\ -a_{21} & t - a_{22} & \cdots & -a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n1} & -a_{n2} & \cdots & t - a_{nn} \end{vmatrix},</cmath>

where <math>a_{ij}</math> denotes the entry in the <math>i</math>th row and <math>j</math>th column of <math>A</math>.

## Properties

An eigenvector <math>\bold{v} \in K^n</math> is a non-zero vector that satisfies the relation <math>A\bold{v} = \lambda\bold{v}</math>, for some scalar <math>\lambda \in K</math>. In other words, applying a linear operator to an eigenvector causes the eigenvector to dilate. The associated number <math>\lambda</math> is called the eigenvalue.

There are at most <math>n</math> distinct eigenvalues, whose values are exactly the roots of the characteristic polynomial of the square matrix. To prove this, we use the fact that the determinant of a matrix is <math>0</math> iff the column vectors of the matrix are linearly dependent. Observe that if <math>\lambda</math> satisfies <math>\det(\lambda I-A) = 0</math>, then the column vectors of the <math>n\times n</math> matrix <math>\lambda I - A</math> are linearly dependent. Indeed, if we define <math>T = \lambda I - A</math> and let <math>\bold{T}^1, \bold{T}^2, \ldots, \bold{T}^n</math> denote the column vectors of <math>T</math>, this is equivalent to saying that there exist scalars <math>v_i \in K</math>, not all zero, such that

<cmath>v_1 \cdot \bold{T}^1 + v_2 \cdot \bold{T}^2 + \cdots + v_n \cdot \bold{T}^n = \bold{O}.</cmath>

Hence, there exists a non-zero vector <math>\bold{v} = (v_1, v_2, \ldots, v_n) \in K^n</math> such that <math>(\lambda I - A) \bold{v} = \bold{O}</math>. Distributing and re-arranging, <math>A\bold{v} = \lambda\bold{v}</math>, as desired. In the other direction, if <math>A \bold{v} = \lambda \bold{v}</math>, then <math>\lambda I \bold{v} - A \bold{v} = \bold{O}</math>. But then, the column vectors of <math>\lambda I - A</math> are linearly dependent, so it follows that <math>\det(\lambda I - A) = 0</math>.

Note that setting <math>t = 0</math> gives <math>P_A(0) = \det(0 \cdot I - A) = \det (-A) = (-1)^n \det (A)</math>. Hence, the characteristic polynomial encodes the determinant of the matrix. Also, the coefficient of the <math>t^{n-1}</math> term of <math>P_A(t)</math> gives the negative of the trace of the matrix (which follows from Vieta's formulas).

By the Hamilton-Cayley Theorem, the characteristic polynomial of a square matrix applied to the square matrix itself is zero; that is, <math>P_A(A) = 0</math>. The minimal polynomial of <math>A</math> thus divides the characteristic polynomial <math>P_A</math>.
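
These properties are easy to confirm numerically. The following Python sketch (not part of the original article; NumPy assumed, and the example matrix is arbitrary) checks that the roots of <math>P_A</math> are the eigenvalues, that <math>P_A(0) = (-1)^n \det(A)</math>, that the <math>t^{n-1}</math> coefficient is minus the trace, and that <math>P_A(A) = 0</math>:

```python
# Numerical sanity check of the properties above on a small example matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # hypothetical 2x2 example
n = A.shape[0]

# np.poly(A) returns the coefficients of det(tI - A), highest degree first.
coeffs = np.poly(A)
roots = np.sort(np.roots(coeffs))        # roots of the characteristic polynomial
eigs = np.sort(np.linalg.eigvals(A))     # eigenvalues of A
assert np.allclose(roots, eigs)

# Constant term is (-1)^n det(A); the t^{n-1} coefficient is -trace(A).
assert np.isclose(coeffs[-1], (-1) ** n * np.linalg.det(A))
assert np.isclose(coeffs[1], -np.trace(A))

# Hamilton-Cayley: P_A(A) = 0, evaluated with matrix powers.
PA = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - i)
         for i, c in enumerate(coeffs))
assert np.allclose(PA, 0)
```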

## Linear recurrences

Let <math>x_1, x_2, \ldots</math> be a sequence of real numbers. Consider a monic homogeneous linear recurrence of the form

<cmath>x_{n} = c_1x_{n-1} + c_2x_{n-2} + \cdots + c_kx_{n-k}, \quad (*)</cmath>

where <math>c_1, c_2, \ldots, c_k</math> are real constants. The characteristic polynomial of this recurrence is defined as the polynomial

<cmath>P(x) = x^k - c_1x^{k-1} - c_2x^{k-2} - \cdots - c_{k-1}x - c_k.</cmath>

For example, let <math>F_n</math> be the <math>n</math>th Fibonacci number defined by <math>F_1 = F_2 = 1</math>, and

<cmath>F_n = F_{n-1} + F_{n-2}.</cmath>

Then, its characteristic polynomial is <math>x^2 - x - 1</math>.

The roots of the polynomial can be used to write a closed form for the recurrence. If the roots of this polynomial are distinct, say <math>r_1, r_2, \ldots, r_k</math>, then there exist constants <math>a_1, a_2, \ldots, a_k</math> such that

<cmath>x_n = a_1 \cdot r_1^n + a_2 \cdot r_2^n + \cdots + a_k \cdot r_k^n.</cmath>

If we evaluate <math>k</math> different values of <math>x_i</math> (typically <math>x_0, x_1, \ldots, x_{k-1}</math>), we can set up a linear system in the <math>a_i</math> that can be solved for each constant. Refer to the introductory problems below to see an example of how to do this. In particular, for the Fibonacci numbers, this yields Binet's formula.
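
The procedure just described can be sketched in Python (a sketch, not from the article; NumPy assumed, and the Fibonacci initial values <math>x_0 = 0</math>, <math>x_1 = 1</math> are chosen for illustration):

```python
# Solve for the constants a_i in x_n = a_1 r_1^n + a_2 r_2^n for the
# Fibonacci recurrence x_n = x_{n-1} + x_{n-2}.
import numpy as np

# Roots of the characteristic polynomial x^2 - x - 1.
r = np.roots([1, -1, -1])

# Linear system a_1 r_1^n + a_2 r_2^n = x_n for n = 0, 1, with x_0 = 0, x_1 = 1.
V = np.array([[1.0, 1.0],
              [r[0], r[1]]])
a = np.linalg.solve(V, np.array([0.0, 1.0]))

def closed_form(n):
    return a[0] * r[0] ** n + a[1] * r[1] ** n

# Compare against iterating the recurrence directly.
x = [0, 1]
for n in range(2, 20):
    x.append(x[-1] + x[-2])
assert all(np.isclose(closed_form(n), x[n]) for n in range(20))
```

The resulting constants are exactly those of Binet's formula, <math>a_{1,2} = \pm 1/\sqrt{5}</math>.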

If there are roots with multiplicity greater than <math>1</math>, suppose <math>r_1 = r_2 = \cdots = r_{k_1}</math>. Then we would replace the term <math>a_1 \cdot r_1^n + a_2 \cdot r_2^n + \cdots + a_{k_1} \cdot r_{k_1}^n</math> with the expression

<cmath>(a_1 \cdot n^{k_1-1} + a_2 \cdot n^{k_1 - 2} + \cdots + a_{k_1-1} \cdot n + a_{k_1}) \cdot r_1^n.</cmath>

For example, consider the recurrence relation <math>x_n = -2x_{n-1} - x_{n-2} \Longleftrightarrow x_n + 2x_{n-1} + x_{n-2} = 0</math>. Its characteristic polynomial, <math>x^2 + 2x + 1</math>, has a double root at <math>-1</math>. Then, its closed form solution is of the type <math>x_n = (-1)^n(an + b)</math>.
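
A quick sanity check of this double-root example, with arbitrarily chosen initial values (a sketch, not from the article):

```python
# Check x_n = (-1)^n (a*n + b) against x_n = -2 x_{n-1} - x_{n-2}.
x0, x1 = 1, 2        # hypothetical initial values

# n = 0 gives b = x0; n = 1 gives -(a + b) = x1.
b = x0
a = -x1 - b

def closed_form(n):
    return (-1) ** n * (a * n + b)

# Iterate the recurrence and compare term by term (exact integer arithmetic).
seq = [x0, x1]
for n in range(2, 15):
    seq.append(-2 * seq[-1] - seq[-2])

assert all(closed_form(n) == seq[n] for n in range(15))
```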

### Proof 1 (Linear Algebra)

Note: The ideas expressed in this section can be transferred to the next section about differential equations. This requires some knowledge of linear algebra (up to the Spectral Theorem).

Let <math>\bold{y}_{n-1} = \begin{pmatrix} x_{n-1} \\ x_{n-2} \\ \vdots \\ x_{n-k} \end{pmatrix}</math>. Then, we can express our linear recurrence as the matrix

<cmath>A = \begin{pmatrix} c_1 & c_2 & c_3 & \cdots & c_{k-1} & c_{k} \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{pmatrix},</cmath>

so that <math>\bold{y}_n = A\bold{y}_{n-1}</math> (try to verify this). The characteristic polynomial of <math>A</math> is precisely <math>P_A(t) = \det(tI - A) = t^k - c_1t^{k-1} - c_2t^{k-2} - \cdots - c_k</math>, that is, the characteristic polynomial of the recurrence. This is not difficult to show via Laplace's expansion and induction, and is left as an exercise to the reader.

If the roots of <math>P_A</math> are distinct, then there exists a basis (of <math>\mathbb{R}^k</math>) consisting of eigenvectors of <math>A</math> (since eigenvectors of different eigenvalues are linearly independent). That is, applying a change of basis, we can write

<cmath>A = U^{-1} D U</cmath>

for an invertible matrix <math>U</math> and a diagonal matrix <math>D</math> whose diagonal entries are the roots <math>r_1, r_2, \ldots, r_k</math>. Then, <math>A^2 = U^{-1} D U U^{-1} D U = U^{-1} D^2 U</math>, and in general, <math>A^n = U^{-1} D^n U</math>. Thus, <math>\bold{y}_{n} = A^{n}\bold{y}_0 = U^{-1}D^{n}U\bold{y}_0</math>. Here, <math>U^{-1}</math>, <math>U</math>, and <math>\bold{y}_0</math> are fixed (note that to find the values of <math>\bold{y}_0</math>, we may need to trace the recurrence backwards; we only take the <math>0</math>th index for simplicity). It follows that <math>x_{n}</math> is a linear combination of the diagonal entries of <math>D^n</math>, namely <math>r_1^n, r_2^n, \ldots, r_k^n</math>.
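
A small numerical sketch of this argument (not from the article; NumPy assumed, and the recurrence <math>x_n = x_{n-1} + 2x_{n-2}</math> with initial values <math>x_0 = 3</math>, <math>x_1 = 1</math> is hypothetical):

```python
# Diagonalize the companion matrix of x_n = x_{n-1} + 2 x_{n-2} and verify
# that the eigendecomposition computes A^n, which advances the recurrence.
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 0.0]])     # top row holds the recurrence coefficients

# np.linalg.eig returns A = V @ diag(w) @ inv(V), i.e. U = inv(V) in the
# article's notation A = U^{-1} D U.
w, V = np.linalg.eig(A)
D = np.diag(w)
Vinv = np.linalg.inv(V)

n = 12
An = V @ np.linalg.matrix_power(D, n) @ Vinv
assert np.allclose(An, np.linalg.matrix_power(A, n))

# Advancing y_0 = (x_1, x_0) with A^n reproduces the iterated recurrence.
x = [3.0, 1.0]
for _ in range(n):
    x.append(x[-1] + 2 * x[-2])
y0 = np.array([x[1], x[0]])
yn = An @ y0
assert np.isclose(yn[0], x[n + 1])
```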

Suppose now that the roots of <math>P_A</math> are not distinct. Then, we can write the matrix in Jordan normal form. For simplicity, first consider just a single root <math>r</math> repeated multiple times. The corresponding Jordan form of the matrix (that is, a matrix to which it is similar) is given by

<cmath>\begin{pmatrix} r & 1 & 0 & \cdots & 0 \\ 0 & r & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & r \end{pmatrix}.</cmath>

Exponentiating this matrix to the <math>n</math>th power will yield binomial coefficients, which are essentially polynomials in <math>n</math>. After a little bit of work, the result follows.
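
The claim about binomial coefficients can be seen concretely for a <math>2 \times 2</math> Jordan block, whose <math>n</math>th power is <math>\begin{pmatrix} r^n & nr^{n-1} \\ 0 & r^n \end{pmatrix}</math> (a minimal pure-Python check, not from the article; the value of <math>r</math> is arbitrary):

```python
# Powers of a 2x2 Jordan block [[r, 1], [0, r]]: the off-diagonal entry of
# the n-th power is n * r^(n-1) -- a polynomial in n times a power of r,
# illustrating why repeated roots contribute (polynomial in n) * r^n terms.
r = 3

def matmul2(X, Y):
    # 2x2 integer matrix product, kept exact on purpose.
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

J = [[r, 1], [0, r]]
P = [[1, 0], [0, 1]]
for n in range(1, 10):
    P = matmul2(P, J)
    assert P == [[r ** n, n * r ** (n - 1)], [0, r ** n]]
```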

### Proof 2 (Induction)

There are a couple of lower-level ways to prove this. One is by induction, though the proof is not very revealing; we can explicitly check that a sequence <math>\{a_1 \cdot r_1^n + a_2 \cdot r_2^n + \cdots + a_k \cdot r_k^n\}</math>, for real numbers <math>a_1, a_2, \ldots, a_k</math>, satisfies the linear recurrence relation <math>(*)</math>. If the two sequences agree for the first <math>k</math> values, it follows by induction that the two sequences must be exactly the same.

In particular, for <math>k=2</math>, we can check this by using the identity

<cmath>ax^{n+1} + by^{n+1} = (x+y)(ax^n + by^n) - xy(ax^{n-1} + by^{n-1}).</cmath>
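
This identity is easy to spot-check by brute force over arbitrary integer values (a sketch, not from the article):

```python
# Brute-force verification of the k = 2 identity
# a x^{n+1} + b y^{n+1} = (x+y)(a x^n + b y^n) - xy(a x^{n-1} + b y^{n-1})
# over a grid of integer values, using exact arithmetic.
from itertools import product

for a, b, x, y in product([-2, 1, 3], repeat=4):
    for n in range(1, 6):
        lhs = a * x ** (n + 1) + b * y ** (n + 1)
        rhs = (x + y) * (a * x ** n + b * y ** n) - x * y * (a * x ** (n - 1) + b * y ** (n - 1))
        assert lhs == rhs
```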

It is also possible to reduce the recurrence to a telescoping sum. However, the details get messy for larger values of <math>k</math>.

### Proof 3 (Partial fractions)

Another method uses partial fractions and generating functions, though not much knowledge of either is required for this proof. Let <math>G_n = a_1G_{n-1} + a_2G_{n-2} + \cdots + a_kG_{n-k}</math> be a linear recurrence. Consider the generating function

<cmath>G(x) = G_0 + G_1x + G_2x^2 + \cdots.</cmath>

Then, comparing coefficients,

<cmath>a_kG(x) \cdot x^{k-1} + a_{k-1}G(x) \cdot x^{k-2} + \cdots + a_{2}G(x) \cdot x + a_{1}G(x) = \frac{G(x) - R(x)}{x},</cmath>

where <math>R(x)</math> is a remainder polynomial with degree less than <math>k</math>. Re-arranging,

<cmath>G(x) = \frac{R(x)}{1 - a_1x - a_2x^2 - \cdots - a_kx^k}.</cmath>

The polynomial in the denominator is the reverse of the characteristic polynomial of the recurrence. Hence, its roots are <math>1/r_1, 1/r_2, \ldots, 1/r_k</math>, assuming that the roots are distinct. Using partial fraction decomposition (essentially the reverse of the process of adding fractions by combining denominators, except we now pull the denominators apart), we can write

<cmath>G(x) = \frac{c_1}{1 - r_1x} + \frac{c_2}{1-r_2x} + \cdots + \frac{c_k}{1-r_kx}</cmath>

for some constants <math>c_1, \ldots, c_k</math>. Using the geometric series formula, we have <math>\frac{c}{1-y} = c + cy + cy^2 + \cdots</math>. Thus,

<cmath>G(x) = (c_1 + \cdots + c_k) + (c_1r_1 + \cdots + c_kr_k)x + (c_1r_1^2 + \cdots + c_kr_k^2)x^2 + \cdots.</cmath>

Comparing coefficients with our original definition of <math>G(x)</math> gives <math>G_n = c_1r_1^n + c_2r_2^n + \cdots + c_kr_k^n</math>, as desired.
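
The whole argument can be replayed numerically for a <math>k = 2</math> recurrence (a sketch, not from the article; NumPy assumed, and the coefficients and initial terms are hypothetical):

```python
# Expand G(x) = R(x) / (1 - a1 x - a2 x^2) as a power series and check that
# its coefficients match both the recurrence and the closed form c1 r1^n + c2 r2^n.
import numpy as np

a1, a2 = 1, 1          # recurrence G_n = G_{n-1} + G_{n-2}
G = [1, 2]             # hypothetical initial terms G_0, G_1
N = 20
for _ in range(N):
    G.append(a1 * G[-1] + a2 * G[-2])

# Choosing R(x) = G_0 + (G_1 - a1*G_0) x matches the initial terms; the series
# coefficients of R(x)/(1 - a1 x - a2 x^2) then satisfy g_n = a1 g_{n-1} + a2 g_{n-2}.
R = [G[0], G[1] - a1 * G[0]]
g = []
for n in range(N):
    g.append((R[n] if n < len(R) else 0)
             + a1 * (g[n - 1] if n >= 1 else 0)
             + a2 * (g[n - 2] if n >= 2 else 0))
assert g == G[:N]

# Partial fractions: G_n = c1 r1^n + c2 r2^n, where r1, r2 are the roots of the
# characteristic polynomial t^2 - a1 t - a2 (so 1/r1, 1/r2 are roots of the denominator).
r = np.roots([1, -a1, -a2])
V = np.array([[1.0, 1.0],
              [r[0], r[1]]])
c = np.linalg.solve(V, np.array(G[:2], dtype=float))
assert all(np.isclose(c[0] * r[0] ** n + c[1] * r[1] ** n, G[n]) for n in range(N))
```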

The generalization to characteristic polynomials with multiple roots is not difficult from here, and is left to the reader. The partial fraction decomposition will have terms of the form <math>\frac{c_i}{(1-r_ix)^\alpha}</math> for <math>\alpha > 1</math>.

A note about this argument: all of the power series used here are defined formally, and so we do not actually need to worry about whether or not they converge. See these [blog](http://www.artofproblemsolving.com/Forum/weblog_entry.php?t=250866) [posts](http://www.artofproblemsolving.com/Forum/weblog_entry.php?t=215833) for further ideas.

## Differential equations

Given a monic linear homogeneous differential equation of the form

<cmath>D^ky + c_{k-1}D^{k-1}y + \cdots + c_1Dy + c_0y = 0,</cmath>

the characteristic polynomial of the equation is the polynomial

<cmath>P(t) = t^k + c_{k-1}t^{k-1} + \cdots + c_1t + c_0.</cmath>

Here, <math>D = \frac{d}{dx}</math> is short-hand for the differential operator.

If the roots of the polynomial are distinct, say <math>r_1, r_2, \ldots, r_k</math>, then the solutions of this differential equation are precisely the linear combinations <math>y(x) = a_1e^{r_1x} + a_2e^{r_2x} + \cdots + a_ke^{r_kx}</math>. Similarly, if there is a set of roots with multiplicity greater than <math>1</math>, say <math>r_1 = r_2 = \cdots = r_{k_1}</math>, then we would replace the term <math>a_1e^{r_1x} + \cdots + a_{k_1}e^{r_{k_1}x}</math> with the expression <math>(a_1x^{k_1-1} + a_2x^{k_1-2} + \cdots + a_{k_1-1}x + a_{k_1})e^{r_1x}</math>.

In general, given a linear differential equation of the form <math>Ly = f</math>, where <math>L</math> is a linear differential operator, the set of solutions is given by the sum of any solution to the homogeneous equation <math>Ly = 0</math> and a specific solution to <math>Ly = f</math>.

### Proof

We can apply an induction technique similar to the section on linear recurrences above.

From linear algebra, we can use the following theorem.

## Problems

### Introductory

- Prove Binet's formula. Find a similar closed form equation for the Lucas sequence, defined by the starting terms <math>L_0 = 2, L_1 = 1</math>, and <math>L_n = L_{n-1} + L_{n-2}</math>.
- Let <math>\{x_n\}</math> denote the sequence defined by the recursion <math>x_0 = 3, x_1 = 1</math>, and <math>x_n = 2x_{n-1} + 3x_{n-2}</math>. Find a closed form expression for <math>x_n</math>.

### Intermediate

- Let <math>\{X_n\}</math> and <math>\{Y_n\}</math> be sequences defined as follows:

<cmath>X_0 = Y_0 = X_1 = Y_1 = 1, \quad X_{n+1} = X_n + 2X_{n-1}, \quad Y_{n+1} = 3Y_n + 4Y_{n-1}.</cmath>

- Let <math>k</math> be the largest integer that satisfies all of the following conditions:

- (i) <math>|X_i - k| \le 2007</math>, for some positive integer <math>i</math>;
- (ii) <math>|Y_j - k| \le 2007</math>, for some positive integer <math>j</math>;
- (iii) <math>k < 10^{2007}</math>.

- Find the remainder when <math>k</math> is divided by <math>2007</math>. (2007 iTest, #47)

- Let <math>a_{n}</math>, <math>b_{n}</math>, and <math>c_{n}</math> be geometric sequences with different common ratios and let <math>a_{n}+b_{n}+c_{n}=d_{n}</math> for all integers <math>n</math>. If <math>d_{1}=1</math>, <math>d_{2}=2</math>, <math>d_{3}=3</math>, <math>d_{4}=-7</math>, <math>d_{5}=13</math>, and <math>d_{6}=-16</math>, find <math>d_{7}</math>. (Mock AIME 1 2006-2007, Problem 13)

- Show that <math>a^n + b^n + c^n</math> can be written as a linear combination of the elementary symmetric polynomials <math>abc</math>, <math>ab+bc+ca</math>, and <math>a+b+c</math>. In general, prove Newton's sums.

### Olympiad

- Let <math>r</math> be a real number, and let <math>x_n</math> be a sequence such that <math>x_0 = 0, x_1 = 1</math>, and <math>x_{n+2} = rx_{n+1} - x_n</math> for <math>n \ge 0</math>. For which values of <math>r</math> does <math>x_1 + x_3 + \cdots + x_{2m-1} = x_m^2</math> hold for all positive integers <math>m</math>? (WOOT)
- Let <math>a(x,y)</math> be the polynomial <math>x^2y + xy^2</math>, and <math>b(x,y)</math> the polynomial <math>x^2 + xy + y^2</math>. Prove that we can find a polynomial <math>p_n(a, b)</math> which is identically equal to <math>(x + y)^n + (-1)^n (x^n + y^n)</math>. For example, <math>p_4(a, b) = 2b^2</math>. (1976 Putnam, Problem A2)

## Hints/Solutions

### Introductory

- A proof of Binet's formula may be found at that link. The characteristic polynomial of the Lucas sequence is exactly the same; the only things that change are the constants, which are determined by the different starting terms.
- We will work out this problem in full detail. The recurrence relation is <math>x_n - 2x_{n-1} - 3x_{n-2} = 0</math>, and its characteristic polynomial is given by <math>x^2-2x-3</math>. The roots of this polynomial are <math>-1</math> and <math>3</math>. Thus, a closed form solution is given by <math>x_n = a \cdot (-1)^n + b \cdot 3^n</math>. For <math>n = 0</math>, we get <math>x_0 = a + b = 3</math>, and for <math>n = 1</math>, we get <math>x_1 = -a + 3b = 1</math>. Solving gives <math>a = 2, b = 1</math>. Thus, our answer is <math>x_n = 2 \cdot (-1)^n + 3^n</math>.
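
This worked solution is easy to verify by iterating the recurrence directly (an illustrative sketch, not part of the original solution):

```python
# Check the closed form x_n = 2*(-1)^n + 3^n against the recurrence
# x_n = 2 x_{n-1} + 3 x_{n-2} with x_0 = 3, x_1 = 1, in exact arithmetic.
seq = [3, 1]
for n in range(2, 15):
    seq.append(2 * seq[-1] + 3 * seq[-2])

assert all(seq[n] == 2 * (-1) ** n + 3 ** n for n in range(15))
```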

### Intermediate

- Answer (2007 iTest 47): The answer is <math>k = 7468</math> (before taking <math>\bmod\ 2007</math>).
- Hint (Newton's sums): Let us work backwards. Suppose <math>a,b,c</math> are the roots of the characteristic polynomial of a linear recurrence. Then apply Vieta's formulas.

### Olympiad

- Hint (WOOT): substitute the closed form solution. It is true for all <math>r</math>.
- Hint (1976 Putnam A2): do the even and odd cases separately.