Taylor polynomial

The degree-$n$ Taylor polynomial of a function $f(x)$ about $x = a$ is the unique polynomial of degree $n$ whose value and first $n$ derivatives match the value and first $n$ derivatives of $f(x)$ at $x = a$.

The formula for a degree-$n$ Taylor polynomial of $f(x)$ about $x = a$ is \[\sum_{k=0}^{n} \frac{f^{(k)}(a)(x-a)^k}{k!} = f(a) + f'(a)(x - a) + \frac{f''(a)(x-a)^2}{2} + \dots + \frac{f^{(n)}(a)(x-a)^n}{n!}.\] In the formula above, $f^{(k)}$ denotes the order-$k$ derivative of $f$.
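As an illustrative sketch (not part of the original article), the following Python snippet evaluates this sum directly from a list of derivatives of $f$ at $a$; the helper name `taylor_poly`, the degree-5 sine example, and the evaluation point $0.5$ are assumptions chosen for demonstration.

```python
import math

def taylor_poly(derivs_at_a, a, x):
    """Evaluate sum_k f^(k)(a) * (x - a)^k / k!, given the list
    [f(a), f'(a), f''(a), ...] of derivatives of f at a."""
    return sum(d * (x - a)**k / math.factorial(k)
               for k, d in enumerate(derivs_at_a))

# Degree-5 Taylor polynomial of sin(x) about a = 0: the derivatives of sin
# cycle through sin, cos, -sin, -cos, so at 0 they are 0, 1, 0, -1, 0, 1.
print(taylor_poly([0, 1, 0, -1, 0, 1], 0, 0.5))  # ~0.47942708
print(math.sin(0.5))                              # ~0.47942554
```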

Taylor polynomials are often used to approximate non-polynomial functions that cannot be calculated exactly, such as trigonometric functions, exponential functions, and logarithms.

Derivation of the formula

We want the Taylor polynomial to have $k$-th derivative $f^{(k)}(a)$ at $x = a$ for each $k$ from $0$ to $n$. The Power Rule for derivatives gives that the derivative of $(x-a)^j$ is $j(x-a)^{j-1}$ for all positive integers $j$, and $0$ for $j = 0$ (because when $j = 0$ the function is the constant $1$). Here the Chain Rule is used implicitly, together with the fact that $x - a$ has derivative $1$ for all $x$.

For $m < k$, the degree-$m$ term in $x - a$ has $k$th derivative $0$, because after $m$ differentiations the term becomes a constant, and at least one further differentiation (since $k > m$) eliminates it.

For $m > k$, the degree-$m$ term in $x - a$ has $k$th derivative $0$ at $x = a$, because the $k$ differentiations leave a term with a positive power of $(x - a)$, which is zero at $x = a$.

The degree-$k$ term undergoes $k$ differentiations, leaving a constant term and accumulating all of the factors $j$ for $k \geq j \geq 1$. As such, its $k$th derivative is $k!$ times its original coefficient for all $x$, so the coefficient of $(x-a)^k$ should be defined as $\frac{f^{(k)}(a)}{k!}$.
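As a quick sanity check of this derivation (an illustration, not from the original article, and assuming the SymPy library), the sketch below builds a Taylor polynomial symbolically from the formula and confirms that its first $n$ derivatives at $a$ match those of $f$:

```python
import sympy as sp

x, a = sp.symbols('x a')
f = sp.exp(x)   # any smooth function would do here
n = 4

# Degree-n Taylor polynomial of f about x = a, built from the formula above
P = sum(sp.diff(f, x, k).subs(x, a) * (x - a)**k / sp.factorial(k)
        for k in range(n + 1))

# Check that the k-th derivative of P at x = a equals f^(k)(a) for k <= n
for k in range(n + 1):
    match = sp.simplify(sp.diff(P, x, k).subs(x, a)
                        - sp.diff(f, x, k).subs(x, a)) == 0
    print(k, match)   # prints True for each k
```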

Special cases

Maclaurin polynomial

A Maclaurin polynomial is a Taylor polynomial with $a = 0$. Setting $a = 0$ simplifies the appearance of the polynomial somewhat, since every instance of $(x-a)$ in the formula is replaced with $x$.

For some functions, like $e^x$ and $\sin x$, Maclaurin polynomials are generally effective across the domain (although using a different $a$-value might allow greater accuracy for the same choice of degree). However, for functions like $\ln x$, Maclaurin polynomials cannot be defined because the function and its derivatives are undefined at $x = 0$. For other functions, Maclaurin polynomials can be defined, but do not in general approximate the function well (see Taylor series), so a value of $a$ closer to the $x$-value of the desired approximation must be chosen.
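For a concrete illustration (a sketch added here, not part of the original article; the helper name `maclaurin_exp` is an assumption), the degree-6 Maclaurin polynomial of $e^x$ is easy to write down because every derivative of $e^x$ equals $1$ at $0$, and its accuracy visibly degrades as $x$ moves away from $0$:

```python
import math

# Degree-6 Maclaurin polynomial of e^x: every derivative of e^x at 0 is 1,
# so the polynomial is sum_{k=0}^{6} x^k / k!.
def maclaurin_exp(x, n=6):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

print(maclaurin_exp(1.0), math.exp(1.0))  # ~2.71806 vs ~2.71828
print(maclaurin_exp(3.0), math.exp(3.0))  # ~19.41 vs ~20.09 (less accurate)
```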

Tangent-line approximation

A tangent-line approximation is a first-degree Taylor polynomial, given by $f(a) + f'(a)(x - a)$. The name "tangent-line approximation" comes from the fact that the graph is a line tangent to the graph of $f(x)$ at $x = a$. Tangent-line approximations are used in Euler's method and Newton's method.
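As a small worked example (added for illustration; the choice of $f(x) = \sqrt{x}$ and $a = 4$ is an assumption), the tangent-line approximation estimates $\sqrt{4.1}$ as follows:

```python
import math

# Tangent-line approximation f(a) + f'(a)(x - a) with f(x) = sqrt(x), a = 4,
# where f'(x) = 1 / (2 sqrt(x)).
a, x = 4.0, 4.1
approx = math.sqrt(a) + (x - a) / (2 * math.sqrt(a))
print(approx)          # 2.025
print(math.sqrt(x))    # ~2.02485
```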

Error bound

Letting $P_n$ be the degree-$n$ Taylor polynomial of $f$ about $a$, the Lagrange Error Bound states that \[\lvert f(x) - P_n(x) \rvert \leq \left\lvert \frac{(x-a)^{n+1}M}{(n+1)!} \right\rvert\] provided that $f^{(n+1)}$ is defined and has absolute value at most $M$ on the entire interval between $a$ and $x$ (that is, on $(a,x)$ if $x > a$ or on $(x,a)$ if $x < a$).

Because it bounds the absolute error, the Lagrange Error Bound confines the true value of $f(x)$ both above and below: $P_n(x) - E \leq f(x) \leq P_n(x) + E$, where $E$ denotes the right-hand side above.
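To illustrate (a sketch added here, not from the original article; the helper name `sin_error_bound` is an assumption), every derivative of $\sin$ has absolute value at most $1$, so $M = 1$ works on any interval; the actual error of the degree-5 Maclaurin polynomial at $x = 0.5$ indeed falls within the bound:

```python
import math

def sin_error_bound(x, n):
    """Lagrange error bound for the degree-n Maclaurin polynomial of sin,
    using M = 1 since every derivative of sin is bounded by 1."""
    return abs(x)**(n + 1) / math.factorial(n + 1)

x, n = 0.5, 5
poly = x - x**3 / 6 + x**5 / 120    # degree-5 Maclaurin polynomial of sin
print(abs(math.sin(x) - poly))      # actual error, ~1.5e-6
print(sin_error_bound(x, n))        # bound, ~2.2e-5
```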

Taylor series

The Taylor series of an infinitely differentiable function $f(x)$ about $x = a$ is the infinite series \[\sum_{k=0}^{\infty} \frac{f^{(k)}(a)(x-a)^k}{k!} = f(a) + f'(a)(x - a) + \frac{f''(a)(x-a)^2}{2} + \dots.\] The partial sums of the Taylor series are the Taylor polynomials of $f(x)$ about $x = a$ of each degree.

The Maclaurin series is the Taylor series chosen with $a = 0$. The partial sums of the Maclaurin series are the Maclaurin polynomials of $f(x)$ of each degree.
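For reference (an illustration assuming the SymPy library, not part of the original article), a computer algebra system can produce these partial sums directly:

```python
import sympy as sp

x = sp.symbols('x')

# Degree-5 Maclaurin polynomial of e^x (Taylor series about 0, truncated)
print(sp.series(sp.exp(x), x, 0, 6).removeO())
# prints 1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120 (term order may vary)

# Degree-5 Taylor polynomial of ln(x) about x = 1, in powers of (x - 1)
print(sp.series(sp.log(x), x, 1, 6).removeO())
```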

Convergence

If the limit of the Lagrange error bound, given above, is $0$ as $n$ goes to infinity, then the Taylor series must converge to the function. Because of the rapidly growing factorial term in the denominator of the error bound, Taylor series often converge for all $x$. For example, all Taylor series for $\sin(x)$, $\cos(x)$, and $e^x$ converge to their respective functions for all $x$.
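For instance (a numerical illustration added here, not from the original article), for $\sin(x)$ the bound with $M = 1$ at the fairly distant point $x = 10$ is $\frac{10^{n+1}}{(n+1)!}$, which the factorial eventually drives to $0$:

```python
import math

# Lagrange bound for sin at x = 10 with M = 1: 10^(n+1) / (n+1)!
for n in (5, 10, 20, 40):
    print(n, 10**(n + 1) / math.factorial(n + 1))
# The bound grows at first, but the factorial wins and the bound tends to 0.
```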

However, if the derivatives of $f$ grow quickly enough as their order increases, then the Taylor series may converge only for $x$ sufficiently close to $a$, or may even diverge for all $x \neq a$.

Consider the Taylor series of $\ln(x)$ about $x = 1$. By $\ln'(x) = \frac{1}{x}$ and the power rule for derivatives, \[\ln^{(n)}(x) = (-1)^{n-1}(n-1)!x^{-n}\] for $n > 0$. Since negative-power functions are strictly decreasing and positive for positive arguments, the maximum absolute value of $t^{-(n+1)}$ on the interval $[1,x]$ is $1$, so the maximum absolute value of $\ln^{(n+1)}(t)$ on $[1,x]$ is $n!$. Therefore, for $x \geq 1$, the Lagrange error bound evaluates to \[\left| \frac{n!(x-1)^{n+1}}{(n+1)!}\right| = \frac{(x-1)^{n+1}}{n+1}.\] If $x \leq 2$, then the limit of the above expression as $n$ goes to infinity is $0$, so the Taylor series must in fact converge to $\ln(x)$; but for $x > 2$, the Taylor series is not guaranteed to converge to $\ln(x)$. In fact, the Taylor series for $\ln(x)$ about $x = 1$ is \[\sum_{n=1}^{\infty} \frac{(-1)^{n-1}(x-1)^n}{n}\] which, by the Ratio Test, does not converge at all for $x > 2$.
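This behavior is easy to observe numerically (a sketch added for illustration; the helper name `log_taylor_partial` is an assumption):

```python
import math

def log_taylor_partial(x, n):
    """Partial sum, through degree n, of the Taylor series of ln(x) about 1."""
    return sum((-1)**(k - 1) * (x - 1)**k / k for k in range(1, n + 1))

print(math.log(1.5))   # ~0.4055
for n in (5, 10, 20, 40):
    print(n, log_taylor_partial(1.5, n), log_taylor_partial(3.0, n))
# At x = 1.5 the partial sums settle near ln(1.5); at x = 3 the terms
# 2^k / k grow without bound, so the partial sums diverge.
```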

It is also possible that a Taylor series converges but not to the value of the function it is meant to approximate. For a well-known example, all of the derivatives of the function \[f(x) = \left\{ \begin{array}{lr}  e^{-1/x^2}  &  x \neq 0 \\ 0 & x = 0  \end{array} \right.\] are zero at $x = 0$, so the Maclaurin series of $f(x)$ converges to $0$ for all $x$. However, $f(x) > 0$ if $x \neq 0$, so the Maclaurin series converges to a different value than $f(x)$ for all $x \neq 0$.
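For a numerical glimpse (an illustration added here, not part of the original article), $f$ is strictly positive away from $0$ yet decays so fast near $0$ that every Maclaurin coefficient vanishes:

```python
import math

def f(x):
    return math.exp(-1 / x**2) if x != 0 else 0.0

# The Maclaurin series of f is identically 0, yet f itself is positive for
# every nonzero x; it simply decays extraordinarily fast as x approaches 0.
for x in (1.0, 0.5, 0.2, 0.1):
    print(x, f(x))   # ~0.37, ~0.018, ~1.4e-11, ~3.7e-44
```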