A Brief Explanation of e
by djmathman, Sep 20, 2014, 1:08 AM
This was originally going to be a response to this thread, but it got locked before I had the chance to post it. Here it is for your enjoyment.
(Warning: this post contains a good deal of calculus, some of which I researched in order to make the argument more rigorous.)
The thing is that $e$ is much more important than it seems - but the reasons behind it are generally really, really deep.

Rigorously, $e$ is defined as the number $\lim_{n\to\infty}\left(1+\frac1n\right)^n$. By this definition, yes, it is a "placeholder", just like how $\pi$ is a "placeholder" constant that equals the ratio of the circumference of any circle to its diameter. It just so happens that this mysterious number $e$ shows up way more often than its somewhat innocent definition suggests it should.
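To see the limit definition in action, here's a quick numerical sketch (the helper name `e_approx` is just for illustration) that evaluates $(1+1/n)^n$ for increasingly large $n$ and compares against the math module's constant:

```python
# Numerically approach e = lim_{n -> inf} (1 + 1/n)^n.
# The convergence is slow (error shrinks roughly like e/(2n)),
# but the trend toward 2.71828... is unmistakable.
import math

def e_approx(n):
    """Evaluate the compound-interest expression (1 + 1/n)^n."""
    return (1 + 1/n) ** n

for n in [1, 10, 100, 10_000, 1_000_000]:
    print(n, e_approx(n))

print("math.e =", math.e)
```

Note that `e_approx(1)` is exactly $2$, and by $n = 10^6$ the value agrees with $e$ to about five decimal places.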
First, some reasoning on where this definition might come from. In calculus, there's a rule for evaluating integrals that works like the reverse of the power rule for derivatives. It states that
\[\int x^n\,dx=\dfrac{x^{n+1}}{n+1}+C\]
for an arbitrary constant $C$. The problem with this rule is that it doesn't work for $n=-1$ (because then we would divide by $0$, which is bad). A natural question to ask is then: what is $\int\frac{dx}x$?
A first interesting property to note is that if $L(x)$ denotes this integral with bounds $1$ and $x$, that is, $L(x)=\int_1^x\frac{dt}t$, then $L(ab)=L(a)+L(b)$. To prove this, substitute $x=at$ in the second integral below:
\[\int_{1}^{ab}\dfrac1x\,dx=\int_1^a\dfrac1x\,dx+\int_a^{ab}\dfrac1x\,dx=\int_1^a\dfrac1x\,dx+\int_1^b\dfrac1{at}\,d(at)=\int_1^a\dfrac1x\,dx+\int_1^b\dfrac1t\,dt.\]
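We can sanity-check this additive property numerically (a sketch in Python; the helper name `L` mirrors the notation above) by approximating the integral with a simple trapezoid rule:

```python
# Verify numerically that L(x) = integral from 1 to x of dt/t
# behaves like a logarithm: L(a*b) = L(a) + L(b).
def L(x, steps=100_000):
    """Trapezoid-rule approximation of the integral of 1/t from 1 to x."""
    h = (x - 1) / steps
    total = 0.5 * (1/1 + 1/x)          # endpoint terms, each weighted 1/2
    for k in range(1, steps):
        total += 1 / (1 + k * h)       # interior sample points
    return total * h

a, b = 2.0, 3.0
print(L(a * b))        # approximately ln 6
print(L(a) + L(b))     # should agree to many decimal places
```

With $10^5$ steps the two quantities agree to far better than $10^{-6}$, exactly as the proof predicts.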
This strongly suggests that $L$ is some sort of logarithm function. But to what base? To find out, let $a$ be this unknown base. Then by the definition of the derivative,
\[\dfrac d{dx}[\log_ax]=\lim_{h\to0}\dfrac{\log_a(x+h)-\log_ax}h=\lim_{h\to0}\log_a\left(1+\dfrac hx\right)^{1/h}.\]
It turns out that in order for this to equal $\frac1x$ as desired, we must have $a=e$. (To see why, substitute $u=h/x$: the limit becomes $\frac1x\log_a\lim_{u\to0}(1+u)^{1/u}=\frac1x\log_ae$, and this equals $\frac1x$ exactly when $a=e$.) To summarize, the integral of $\frac1x$ is just $\ln x$ - and already we have an interesting encounter with this mysterious number $e$!
Okay, now to the meat of the whole discussion: Taylor series. When your calculator is asked to compute, say, $\sin x$ or $e^x$ for some given $x$, how does it go about doing it? I mean, most trigonometric, exponential, and logarithmic values are transcendental, meaning that they aren't the solution of any polynomial equation with rational coefficients. The answer is simple: we can approximate these values by polynomials, called Taylor series. In a general sense, for any (sufficiently nice) function $f$, its Taylor series centered at the origin (i.e. at $x=0$) is given by
\[f(x)=\sum_{n=0}^\infty f^{(n)}(0)\dfrac{x^n}{n!},\]
where $f^{(n)}$ is the $n$th derivative of the function. (The idea here is that this polynomial of infinite degree has its first derivative, second derivative, third derivative, etc. all equal to the first, second, third, etc. derivatives of the original function $f$.) Calculators will only compute finitely many terms of this sum, but that's enough to get a pretty darn good approximation.
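Here's what that truncation looks like in practice (a sketch in Python; the function name `exp_taylor` is just for illustration): summing the first several terms of the series for $e^x$ already matches the library value to many decimal places.

```python
# Approximate e^x with a truncated Taylor series, roughly the way the
# post describes a calculator working: sum x^n / n! for n = 0 .. terms-1.
import math

def exp_taylor(x, terms=15):
    """Partial Taylor sum of e^x centered at 0."""
    total, term = 0.0, 1.0             # term holds x^n / n!
    for n in range(terms):
        total += term
        term *= x / (n + 1)            # advance x^n/n! -> x^(n+1)/(n+1)!
    return total

print(exp_taylor(1.0))   # approximately e
print(math.exp(1.0))
```

With 15 terms the truncation error at $x=1$ is on the order of $1/15!$, i.e. far below floating-point visibility.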
The hidden beauty of these Taylor series is that they can be used to prove Euler's formula. Any complex number with magnitude $1$ can be written in the form $\cos\theta+i\sin\theta$, where $\theta$ is the angle between the complex number and the real axis. The claim is that this expression equals $e^{i\theta}$. To prove this, one can use the formula shown above to generate the Taylor series
\[\sin x=x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\cdots,\qquad \cos x=1-\dfrac{x^2}{2!}+\dfrac{x^4}{4!}-\cdots,\qquad e^x=1+x+\dfrac{x^2}{2!}+\dfrac{x^3}{3!}+\cdots.\]
(The last one is interesting, as it implies that $d/dx[e^x]=e^x$!) Now we can say that
\[e^{i\theta}=1+i\theta+\dfrac{(i\theta)^2}{2!}+\dfrac{(i\theta)^3}{3!}+\cdots=\left(1-\dfrac{\theta^2}{2!}+\dfrac{\theta^4}{4!}-\cdots\right)+i\left(\theta-\dfrac{\theta^3}{3!}+\dfrac{\theta^5}{5!}-\cdots\right)=\cos\theta+i\sin\theta.\]
This explains tons and tons of properties of complex numbers. For example, the fact that the argument of a product of two complex numbers equals the sum of the individual arguments follows almost trivially from this new definition, as $e^{i\alpha}\cdot e^{i\beta}=e^{i(\alpha+\beta)}$. Also, watch what happens when we plug in $\theta=\pi$:
\[e^{i\pi}=\cos\pi+i\sin\pi=-1\implies \boxed{e^{i\pi}+1=0}.\]
Boom! The famous Euler's identity is not only beautiful, but it is also the fusion of complex numbers and calculus, two fields of mathematics that at first seem relatively distant from each other.
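Python's standard library can check both Euler's formula and the identity directly, since `cmath.exp` accepts complex arguments:

```python
# Euler's formula in action: exp(i*theta) should equal
# cos(theta) + i*sin(theta), and theta = pi gives e^{i*pi} = -1.
import cmath
import math

theta = 0.7  # any angle works here
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(lhs, rhs)  # the two agree up to floating-point rounding

# The famous special case:
print(cmath.exp(1j * math.pi) + 1)  # essentially 0
```

The tiny leftover imaginary part in the last line is just floating-point fuzz from rounding $\pi$.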
In conclusion, yes, it's true that in many cases $e$ is seen as just a "placeholder", but upon closer inspection there's a lot more going on underneath the surface. In fact, that can be said for any two branches of math, really.
This post has been edited 3 times. Last edited by djmathman, Mar 5, 2015, 3:44 AM
Reason: align tags