The <b>Euler-Mascheroni constant</b> <math>\gamma</math> is defined by <cmath>\gamma = \lim_{n \rightarrow \infty} \left( \left( \sum_{k=1}^n \frac{1}{k} \right) - \ln(n) \right).</cmath> Its value is approximately <cmath>\gamma = 0.5772 \dots .</cmath>
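The limit converges quite slowly, but it is easy to watch numerically. The following is a minimal sketch in Python (the cutoff values of <math>n</math> are chosen arbitrarily):

```python
import math

def gamma_approx(n):
    """Partial sum H_n minus ln(n); tends to the Euler-Mascheroni constant."""
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return harmonic - math.log(n)

# The error of this approximation is roughly 1/(2n), so convergence is slow.
for n in (10, 1000, 100000):
    print(n, gamma_approx(n))
```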
 
 
Whether <math>\gamma</math> is rational or irrational and (if irrational) algebraic or transcendental is an open question.
 
 
 
==Proof of existence==
 
 
 
===Alternate formulation of the limit===
 
The tangent-line approximation (first-degree Taylor polynomial) of <math>\ln(k + 1)</math> about <math>x = k</math> is <cmath>\ln(k + 1) = \ln(k) + ((k + 1) - k)\ln'(k) + E_k</cmath> for some error term <math>E_k</math>. Using <math>\ln'(x) = \frac{1}{x}</math> and simplifying, <cmath>\ln(k + 1) = \ln(k) + \frac{1}{k} + E_k.</cmath> Recursively applying the tangent-line formula for <math>k = n - 1</math> down to <math>k = 1</math>,
 
 
 
<cmath>\begin{align*} \ln(n) &= \ln(n-1) + \frac{1}{n-1} + E_{n-1} \\ &= \left( \ln (n-2) + \frac{1}{n-2} + E_{n-2} \right) + \frac{1}{n - 1} + E_{n-1} \\ &= \dots \\ &= \ln(1) + \left( \sum_{k=1}^{n-1} \frac{1}{k} \right) + \left( \sum_{k=1}^{n-1} E_k \right) . \end{align*}</cmath>
 
 
 
Because <math>\ln(1) = 0</math>, we may rearrange to <cmath>\left( \sum_{k=1}^{n-1} \frac{1}{k} \right) - \ln(n) = -\sum_{k=1}^{n-1} E_k.</cmath> Adding <math>\frac{1}{n}</math> to both sides yields <cmath>\left( \sum_{k=1}^{n} \frac{1}{k} \right) - \ln(n) = \left( -\sum_{k=1}^{n-1} E_k \right) + \frac{1}{n}.</cmath> Taking the limit as <math>n</math> goes to infinity of both sides,
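This rearranged identity is exact for every finite <math>n</math>, not just in the limit, which can be confirmed numerically. Below is a quick check in Python (with an arbitrary test value of <math>n</math>, and <math>E_k</math> computed directly from its definition as <math>\ln(k+1) - \ln(k) - \frac{1}{k}</math>):

```python
import math

n = 50  # arbitrary test value
# E_k = ln(k+1) - ln(k) - 1/k for k = 1, ..., n-1
E = [math.log(k + 1) - math.log(k) - 1.0 / k for k in range(1, n)]
lhs = sum(1.0 / k for k in range(1, n + 1)) - math.log(n)  # H_n - ln(n)
rhs = -sum(E) + 1.0 / n
assert abs(lhs - rhs) < 1e-12  # equal up to floating-point error
```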
 
 
 
<cmath>\begin{align*} \lim_{n \rightarrow \infty} \left( \left( \sum_{k=1}^n \frac{1}{k} \right) - \ln(n) \right) &= \lim_{n \rightarrow \infty} \left( \left( -\sum_{k=1}^{n-1} E_k \right) + \frac{1}{n} \right) \\ &= - \lim_{n \rightarrow \infty} \left( \sum_{k=1}^{n-1} E_k \right) + \lim_{n \rightarrow \infty} \frac{1}{n} \\ &= - \lim_{n \rightarrow \infty} \left( \sum_{k=1}^{n-1} E_k \right) \\ &= - \sum_{k=1}^{\infty} E_k. \end{align*}</cmath>
 
 
 
Thus, <math>\gamma = - \sum_{k=1}^{\infty} E_k</math>.
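Truncating this series gives increasingly good approximations of <math>\gamma</math>. A numerical sketch in Python (truncation points chosen arbitrarily):

```python
import math

def neg_error_sum(N):
    """-(E_1 + ... + E_N), where E_k = ln(k+1) - ln(k) - 1/k."""
    return -sum(math.log(k + 1) - math.log(k) - 1.0 / k
                for k in range(1, N + 1))

# The partial sums approach gamma = 0.5772... from below.
for N in (100, 10000, 1000000):
    print(N, neg_error_sum(N))
```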
 
 
 
===Convergence of the sum of error terms===
 
We have <math>\ln''(x) = \left(\frac{1}{x} \right)' = -\frac{1}{x^2}</math>. For <math>k \geq 1</math>, the maximum absolute value of <math>-\frac{1}{x^2}</math> for <math>x \in [k, k+1]</math> is <math>\frac{1}{k^2}</math>. Therefore, by the Lagrange Error Bound, <cmath>|E_k| \leq \left| \frac{1^2 \left( \frac{1}{k^2} \right) }{2!} \right| = \frac{1}{2k^2}.</cmath>
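This bound can be spot-checked numerically (a sanity check for small <math>k</math>, not a proof):

```python
import math

def lagrange_bound_holds(k):
    """Check |E_k| <= 1/(2k^2) for E_k = ln(k+1) - ln(k) - 1/k."""
    E_k = math.log(k + 1) - math.log(k) - 1.0 / k
    return abs(E_k) <= 1.0 / (2 * k * k)

# Verify the bound for the first thousand values of k.
print(all(lagrange_bound_holds(k) for k in range(1, 1001)))  # True
```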
 
 
 
The series <math>\sum_{k=1}^{\infty} \frac{1}{k^2}</math> famously converges to <math>\frac{\pi^2}{6}</math> (the Basel problem), so <math>\sum_{k=1}^{\infty} \frac{1}{2k^2}</math> converges to <math>\frac{\pi^2}{12}</math> and <math>\sum_{k=1}^{\infty} -\frac{1}{2k^2}</math> converges to <math>-\frac{\pi^2}{12}</math>.
 
 
 
Because <math>|E_k| \leq \frac{1}{2k^2}</math> for all <math>k</math>, the Comparison Test gives that <math>\sum_{k=1}^{\infty} E_k</math> converges absolutely, and its value must lie in <math>\left[-\frac{\pi^2}{12}, \frac{\pi^2}{12} \right]</math>.
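Numerically, the bounding series <math>\sum \frac{1}{2k^2}</math> approaches <math>\frac{\pi^2}{12} \approx 0.8225</math>, and the actual sum of error terms (which equals <math>-\gamma \approx -0.5772</math>) lies comfortably inside the interval. A sketch in Python with an arbitrary truncation point:

```python
import math

N = 100000  # arbitrary truncation point
bound = sum(1.0 / (2 * k * k) for k in range(1, N + 1))  # approaches pi^2/12
E_sum = sum(math.log(k + 1) - math.log(k) - 1.0 / k for k in range(1, N + 1))

print(bound, math.pi**2 / 12)                          # partial sum vs. limit
print(-math.pi**2 / 12 <= E_sum <= math.pi**2 / 12)    # True
```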
 
 
 
Hence, <math>\gamma = - \sum_{k=1}^{\infty} E_k</math> is a well-defined constant.
 

Latest revision as of 12:18, 9 March 2022