10 "Real" Applications of Complex Numbers (Part 2/2)
by greenturtle3141, Aug 23, 2024, 3:39 AM
(Part 1: Here)
7. Evaluation of integrals
The topic of using contour integration to evaluate certain integrals has been beaten to death, this time by me. For a full discussion please see https://artofproblemsolving.com/community/c2532359h3075016.
I'll add that contour integration isn't the only way to use complex numbers to evaluate integrals. If there is a way to introduce complex numbers which can simplify matters, there's a good chance that this can be made to work. Let us, for example, evaluate the integral
$$\int_0^x \frac{1}{1+t^2}\,dt, \qquad x > 0.$$
The answer, of course, is $\arctan(x)$. Let us try to derive this using complex numbers.
As suggested in my previous writings concerning complex numbers, there are some dangerous pitfalls that we must beware of. It is easy to fall for such traps and end up with nonsense, but fortunately I am here to light the way. We begin by factoring over the complex numbers to rewrite the integral as
$$\int_0^x \frac{1}{1+t^2}\,dt = \int_0^x \frac{1}{2i}\left(\frac{1}{t-i} - \frac{1}{t+i}\right)dt.$$
This is perfectly legal. Next, it is in fact true that this integral is simply equal to
$$\frac{1}{2i}\left[\left(\log(x-i) - \log(-i)\right) - \left(\log(x+i) - \log(i)\right)\right].$$
But you will fall into a pit unless you know precisely what $\log$ means for complex numbers. Here, it refers to the principal logarithm (there is some reasoning that should be applied to justify that this works; namely one ought to avoid the perilous negative real line). A bit more precisely, we are choosing a particular branch of the complex logarithm that contains all the complex numbers we're working with. Working with the principal log (characterized by $\log(re^{i\theta}) = \log r + i\theta$ for $r > 0$ and $\theta \in (-\pi, \pi]$) and drawing some pictures, we discover the following:
$$\log(x-i) = \tfrac{1}{2}\log(1+x^2) - i\arctan(1/x), \qquad \log(-i) = -\tfrac{i\pi}{2},$$
$$\log(x+i) = \tfrac{1}{2}\log(1+x^2) + i\arctan(1/x), \qquad \log(i) = \tfrac{i\pi}{2}.$$
Hence our integral is
$$\int_0^x \frac{1}{1+t^2}\,dt = \frac{1}{2i}\left[-2\arctan(1/x)i + \pi i\right] = \boxed{\frac{\pi}{2} - \arctan(1/x)}.$$
This may look different from the claimed value of $\arctan(x)$, but fortunately these two expressions are the same (why?).
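As a quick sanity check, Python's `cmath.log` implements precisely this principal logarithm, so the boxed expression can be compared against $\arctan(x)$ numerically. This is just a minimal sketch, not part of the argument, and the helper name below is a throwaway.

```python
import cmath, math

def integral_via_logs(x):
    # (1/2i) * [(log(x-i) - log(-i)) - (log(x+i) - log(i))], principal branch
    return ((cmath.log(x - 1j) - cmath.log(-1j))
            - (cmath.log(x + 1j) - cmath.log(1j))) / 2j

for x in (0.5, 1.0, 2.0, 10.0):
    val = integral_via_logs(x)
    print(x, val.real, math.atan(x))   # the two columns agree; the imaginary part is ~0
```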
Exercise: Evaluate the integral
using complex numbers (instead of the usual integration by parts).
8. Evaluation of infinite sums
In a previous exercise I asked you to evaluate
using complex numbers. This is one instance of complex numbers being the silver bullet for evaluating an infinite sum, but there are other (less elementary) applications.
Fourier series
The Fourier series of a periodic function is, morally speaking, its Fourier transform under the Fourier transform of "type"
. This seemingly "abstract nonsense" view of Fourier series is actually quite helpful for making many of the properties shared by both Fourier series and the "classic" Fourier transform more intuitive. A more extended discussion of Fourier series and/or the Fourier transform will be reserved for a separate post, but I felt motivated to provide a few insights here.
Taking the Fourier series of certain simple functions and plugging some values into them can often lead to interesting identities. Unfortunately the converse is, at least for me, far more difficult; it is a tall task to look at an infinite sum and proceed to conjure up a proof via Fourier series.
Let us, for example, take the function $f(x) = x^2$ over $[-1,1]$. Morally speaking we view this as a 2-periodic function on $\mathbb{R}$ by extending $f$ periodically. To obtain the Fourier series of $f$ with high probability of success, and without the help of the extreme "abstract" view, we can follow these steps.
- Determine what frequencies we need to use. We need to use all frequencies $e^{i\xi x}$ that repeat every $2$, i.e. are invariant under a phase shift of $2$. It's not hard to reason out that the frequencies we should use are $e^{i\pi n x}$ for every integer $n$.
- Scale the frequencies so that they have unit $L^2$ norm. That is, we want to define $e_n(x) = c\,e^{i\pi n x}$, where $c$ is chosen so that $\|e_n\|_{L^2([-1,1])} = 1$. Seeing that $\int_{-1}^1 |e^{i\pi n x}|^2\,dx = 2$, it's not hard to find that $c = \frac{1}{\sqrt{2}}$. We need to do this so that the frequencies $e_n$ form an orthonormal basis for $L^2([-1,1])$.
- Now the $n$th Fourier coefficient of $f$ is simply the $e_n$-component of $f$ under the orthonormal basis $\{e_n\}_{n \in \mathbb{Z}}$, which is given by the inner product
$$\langle f, e_n\rangle = \frac{1}{\sqrt{2}}\int_{-1}^1 x^2 e^{-i\pi n x}\,dx = \frac{2\sqrt{2}\,(-1)^n}{\pi^2 n^2}.$$
Oh but you'll notice that this actually doesn't work for $n = 0$, so we need to compute separately that $\langle f, e_0\rangle = \frac{1}{\sqrt{2}}\int_{-1}^1 x^2\,dx = \frac{\sqrt{2}}{3}$.
Voila! You have successfully found the Fourier series
$$x^2 = \frac{1}{3} + \sum_{n \neq 0} \frac{2(-1)^n}{\pi^2 n^2}\,e^{i\pi n x}, \qquad x \in [-1,1],$$
without memorization. (I could combine the positive and negative parts to write everything in terms of $\cos(\pi n x)$, but I don't want to for reasons that may become apparent. Also I think it's more instructive like this.)
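If you want to double-check those coefficients without redoing the integration by parts, here is a short numerical sketch, assuming (as reconstructed above) $f(x) = x^2$ on $[-1,1]$ and the basis $e_n(x) = \frac{1}{\sqrt 2}e^{i\pi n x}$; the helper below is just for illustration.

```python
import numpy as np

# f(x) = x^2 on [-1, 1], orthonormal basis e_n(x) = e^{i pi n x} / sqrt(2)
N = 200_000
dx = 2.0 / N
x = -1.0 + dx * np.arange(N)          # Riemann grid on [-1, 1)
f = x**2

def coeff(n):
    # numerical <f, e_n> = (1/sqrt(2)) * int_{-1}^{1} x^2 e^{-i pi n x} dx
    return np.sum(f * np.exp(-1j * np.pi * n * x)) * dx / np.sqrt(2)

print("n = 0:", coeff(0).real, "closed form:", np.sqrt(2) / 3)
for n in (1, 2, 3, 10):
    print(f"n = {n}:", coeff(n).real, "closed form:", 2 * np.sqrt(2) * (-1)**n / (np.pi * n)**2)
```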
Note that writing down
$$f(x) = \sum_{n \in \mathbb{Z}} \langle f, e_n\rangle\, e_n(x)$$
willy-nilly is actually a bit reckless. It is not always going to be true that both sides of this equation are equal for every $x$. To avoid pitfalls, it should first be shown that $f$ is continuous (which is obvious here) and that $\sum_{n \in \mathbb{Z}} |\langle f, e_n\rangle| < \infty$ (which is not too hard). The motivation and rationale for these conditions will be deferred for another post.
Once you've shown that, it's safe to plug numbers into the Fourier series and see what you get. Let's try $x = 1$ for fun. Noting that $e^{i\pi n} = (-1)^n$ for all $n \in \mathbb{Z}$, we get
$$1 = \frac{1}{3} + \sum_{n \neq 0} \frac{2(-1)^n(-1)^n}{\pi^2 n^2} = \frac{1}{3} + \sum_{n \neq 0} \frac{2}{\pi^2 n^2},$$
which rearranges to $\sum_{n \neq 0} \frac{1}{n^2} = \frac{\pi^2}{3}$. Wow! We have recovered the Basel sum,
$$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$$
(and if you're not sure how this immediately follows, you should figure it out!).
But wait, there's more to glean from this Fourier series! By Parseval's identity, or the fact that the Fourier transform is an isometry, we know that
$$\sum_{n \in \mathbb{Z}} |\langle f, e_n\rangle|^2 = \|f\|_{L^2([-1,1])}^2 = \int_{-1}^1 x^4\,dx = \frac{2}{5}.$$
This gives
$$\frac{2}{9} + \sum_{n \neq 0} \frac{8}{\pi^4 n^4} = \frac{2}{5},$$
which rearranges to $\sum_{n \neq 0} \frac{1}{n^4} = \frac{\pi^4}{45}$. This recovers the formula
$$\sum_{n=1}^\infty \frac{1}{n^4} = \frac{\pi^4}{90}.$$
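Here is a short numerical sketch of that Parseval computation, with the same assumed setup as above and the sums truncated at a large cutoff.

```python
import numpy as np

# Parseval for f(x) = x^2 on [-1, 1]: sum_n |<f, e_n>|^2 should equal
# ||f||_{L^2}^2 = int_{-1}^{1} x^4 dx = 2/5.  (Sums truncated at 200000 terms.)
lhs = (np.sqrt(2) / 3)**2 + sum(2 * (2 * np.sqrt(2) / (np.pi * n)**2)**2
                                for n in range(1, 200_000))
print(lhs, "vs", 2 / 5)

# ...and the rearranged consequence, sum 1/n^4 = pi^4 / 90:
print(sum(1.0 / n**4 for n in range(1, 200_000)), "vs", np.pi**4 / 90)
```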
Exercise: Let
be a real constant. Derive the identity
by abusing the power of complex numbers.
Hint 1
Find the Fourier series of
over some centered interval, such as
. Don't be afraid to "adjust"
later on if the resulting identities you obtain are not quite what you want. The computation is pretty clean; if you notice the right things, no part of this should be tedious.
Hint 2
You can compute the Fourier coefficients with very little pain by first using symmetry to rid yourself of the pesky absolute value (and, collaterally, eliminate the imaginary part of the frequency). Then, to tame what remains, use a trick/methodology mentioned earlier in this post. (Is there a nice way to find
?)
Hint 3
Once you have reduced the problem to tackling a sum, it is useful to let
be the sum of the even-indexed terms and let
be the sum of the odd-indexed terms. Then your goal is to compute
based on a system of equations involving
and
.
Hint 4
If you have only one equation, use a clever trick to immediately procure another. This trick is suggested by the form of the final answer. Once you have found a second equation, some slick algebra can give you
without actually solving for either
or
.
Poisson Summation
Poisson summation is a tool that can be used to tackle sums of the form
$$\sum_{n \in \mathbb{Z}} f(n),$$
where $f$ is a continuous function that "decreases rapidly" as $|x| \to \infty$. For such functions, Poisson summation states that you can replace $f$ with its Fourier transform without changing the value of the sum. That is,
$$\sum_{n \in \mathbb{Z}} f(n) = \sum_{n \in \mathbb{Z}} \hat{f}(n),$$
where here $\hat{f}(\xi) = \int_{\mathbb{R}} f(x)e^{-2\pi i x\xi}\,dx$.
It is normal to feel that this identity is absurd. Two of the proofs I know of include using Fourier series and using contour integration of a cleverly-chosen function.
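Absurd or not, it is easy to test numerically. The sketch below checks the identity for the Gaussian $f(x) = e^{-\pi t x^2}$, whose Fourier transform under the convention above is $t^{-1/2}e^{-\pi \xi^2/t}$; the helper name and the choice of test function are mine, just for illustration.

```python
import math

# Poisson summation for f(x) = exp(-pi*t*x^2): with the convention above,
# fhat(xi) = t^{-1/2} * exp(-pi*xi^2/t), so both sums below should agree.
def both_sides(t, cutoff=50):
    lhs = sum(math.exp(-math.pi * t * n * n) for n in range(-cutoff, cutoff + 1))
    rhs = sum(math.exp(-math.pi * n * n / t) / math.sqrt(t) for n in range(-cutoff, cutoff + 1))
    return lhs, rhs

for t in (0.5, 1.0, 2.0, 3.7):
    print(t, both_sides(t))
```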
Poisson summation happens to be an essential tool in analytic number theory, but I am not an expert in this field so I cannot say much on that matter. Amazingly, it is also used crucially for proving optimal results in the study of sphere packings (https://en.wikipedia.org/wiki/Sphere_packing), though it should be noted that such applications use the multi-dimensional form
$$\sum_{n \in \mathbb{Z}^d} f(n) = \sum_{n \in \mathbb{Z}^d} \hat{f}(n),$$
or further generalizations.
Exercise: Use Poisson summation to solve the previous exercise. It is helpful to first prove the identity
for
, by using contour integration. Or you can just assume it's true if you aren't good at that yet.
9. The Central Limit Theorem
The Central Limit Theorem (CLT) states that, with enough data, sums of independent samples from (essentially) any distribution look normal (Gaussian), i.e. look like bell curves. More formally: If $X_1, X_2, X_3, \ldots$ is a sequence of independent and identically distributed random variables with mean $\mu$ and variance $\sigma^2$, then
$$\frac{X_1 + X_2 + \ldots + X_n - n\mu}{\sigma\sqrt{n}}$$
converges in distribution to a $N(0,1)$ random variable. This is a scary fact. But what's even more frightening is that this result can be proven using complex numbers. In fact, it is a very pretty, brief and elegant proof.
The proof uses a probabilistic version of the Fourier transform, called the characteristic function: If $X$ is a random variable, we can define its "Fourier transform" $\phi_X$ via
$$\phi_X(t) = \mathbb{E}\left[e^{itX}\right].$$
The distribution of a random variable is captured fully by its characteristic function (this requires proof), and moreover the characteristic function of a sum of independent random variables turns into a product! This makes the characteristic function a powerful tool for tackling the CLT, which we will now (roughly) prove.
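Before the proof, the product property is easy to poke at numerically. Here is a Monte Carlo sketch with two arbitrarily chosen independent distributions (my choice for illustration; the helper below is a throwaway).

```python
import numpy as np

rng = np.random.default_rng(0)
M = 2_000_000
X = rng.exponential(1.0, M)        # two independent samples from
Y = rng.uniform(-1.0, 1.0, M)      # arbitrarily chosen distributions

def char(samples, t):
    # Monte Carlo estimate of phi(t) = E[exp(i t * sample)]
    return np.mean(np.exp(1j * t * samples))

for t in (0.5, 1.0, 2.0):
    # phi_{X+Y}(t) vs phi_X(t) * phi_Y(t): equal up to Monte Carlo error
    print(t, np.round(char(X + Y, t), 3), np.round(char(X, t) * char(Y, t), 3))
```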
Assume for simplicity that $\mu = 0$ and $\sigma = 1$. (In fact, this can be assumed WLOG.) Now if $Y_n = \frac{X_1 + \ldots + X_n}{\sqrt{n}}$, then
$$\phi_{Y_n}(t) = \mathbb{E}\left[e^{it(X_1+\ldots+X_n)/\sqrt{n}}\right] = \mathbb{E}\left[e^{itX_1/\sqrt{n}}e^{itX_2/\sqrt{n}}\ldots e^{itX_n/\sqrt{n}}\right].$$
Since the variables are independent, this splits into a product of expectations.
$$\phi_{Y_n}(t) = \mathbb{E}\left[e^{itX_1/\sqrt{n}}\right]\mathbb{E}\left[e^{itX_2/\sqrt{n}}\right]\ldots \mathbb{E}\left[e^{itX_n/\sqrt{n}}\right]$$
Since the variables have the same distribution, all these expectations are the same.
$$\phi_{Y_n}(t) = \mathbb{E}\left[e^{itX_1/\sqrt{n}}\right]^n$$
Now we rewrite $e^{itX_1/\sqrt{n}}$ by Taylor expansion.
$$e^{itX_1/\sqrt{n}} = 1 + \frac{itX_1}{\sqrt{n}} - \frac{t^2X_1^2}{2n} + R_n$$
The most difficult part of this proof is dealing with the remainder term, and it requires some technical bounds. I will omit these parts of the proof. But intuitively you should know that the remainder term here is on the order of $n^{-3/2}$, in some sense.
Taking the expectation of each side (and using $\mathbb{E}[X_1] = 0$, $\mathbb{E}[X_1^2] = 1$) now gives
$$\mathbb{E}\left[e^{itX_1/\sqrt{n}}\right] = 1 - \frac{t^2}{2n} + \mathbb{E}[R_n],$$
and now raising to the $n$th power gives
$$\phi_{Y_n}(t) = \left(1 - \frac{t^2}{2n} + \mathbb{E}[R_n]\right)^n.$$
Now, since the remainder term is on the order of $n^{-3/2}$, we can consider it asymptotically insignificant compared to the leading term $\frac{t^2}{2n}$. Thus, by handwavingly ignoring the remainder, we find that
$$\lim_{n \to \infty} \phi_{Y_n}(t) = \lim_{n \to \infty}\left(1 - \frac{t^2}{2n}\right)^n = e^{-t^2/2}.$$
(Rest assured that taming the remainder to make this argument rigorous isn't too bad to do.) The above limit basically means that $Y_n$ converges in distribution to a random variable whose characteristic function is $e^{-t^2/2}$. It turns out that $e^{-t^2/2}$ is exactly the characteristic function of a standard Gaussian! That's the proof. (...modulo proving all those properties of characteristic functions that we relied on.)
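To see the limit in action without any measure theory, take $X_i = \pm 1$ with equal probability (so $\mu = 0$, $\sigma = 1$). Then $\phi_{X_1}(t) = \cos(t)$, and the formula from the proof can be compared with $e^{-t^2/2}$ directly; this is just an illustrative sketch.

```python
import math

# X_i = +/-1 with equal probability has mean 0, variance 1, and phi_{X_1}(t) = cos(t),
# so phi_{Y_n}(t) = cos(t / sqrt(n))^n exactly.  This should approach exp(-t^2 / 2).
for t in (0.5, 1.0, 2.0):
    target = math.exp(-t**2 / 2)
    for n in (10, 100, 10_000):
        print(f"t = {t}, n = {n}: {math.cos(t / math.sqrt(n))**n:.6f}  vs  {target:.6f}")
```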
Exercise: Use complex numbers to show that if
and
are independent random variables such that
and
have the same distribution, then
(almost surely).
10. Fractional derivatives and Sobolev embeddings
First, do note that there are many notions of fractional derivatives, and they are not at all equivalent. It's also not that clear how they are useful. However, it sounds cool.
Complex numbers give one method of defining a notion of fractional differentiation. Let $f: \mathbb{R} \to \mathbb{R}$ be a nice function, and write $\hat{f}(\xi) = \int_{\mathbb{R}} f(x)e^{-ix\xi}\,dx$ for its Fourier transform. Then it is well-known that the Fourier transform converts derivatives of $f$ into multiplication by a term:
$$\widehat{f'}(\xi) = i\xi\,\hat{f}(\xi).$$
Of course, this works inductively for any number of derivatives.
In particular, we get the identity
$$f^{(n)}(x) = \mathcal{F}^{-1}\left[(i\xi)^n\,\hat{f}(\xi)\right](x),$$
which gives a rather awkward method of taking $n$ derivatives of $f$.
But now this begs the question: What if in the above identity we allow $n$ to be a non-integer? Then this would give a way to define a notion of taking $s$ derivatives, where $s$ is any real number:
$$D^s f := \mathcal{F}^{-1}\left[(i\xi)^s\,\hat{f}(\xi)\right].$$
This is clearly consistent with "classical" differentiation by definition, so this definition is reasonable provided that $f$ is nice. It should be noted that we should be a wee bit careful with what "$(i\xi)^s$" means when $s$ is not an integer, but I don't feel like discussing this.
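For intuition, here is the periodic cousin of this definition as a computational sketch: on a periodic grid the FFT diagonalizes differentiation, so multiplying the $k$th mode by $(ik)^\alpha$ (principal branch) is a perfectly computable "fractional derivative". The conventions here are those of the discrete transform, not the Fourier transform on $\mathbb{R}$, and the helper is just for illustration.

```python
import numpy as np

# Periodic "fractional derivative": multiply the k-th Fourier mode by (i k)^alpha,
# using the principal branch of the power.
N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])   # integer wavenumbers ..., -1, 0, 1, ...

def frac_deriv(f, alpha):
    mult = np.zeros(N, dtype=complex)
    mult[k != 0] = (1j * k[k != 0])**alpha     # alpha > 0: the k = 0 mode is killed
    return np.fft.ifft(mult * np.fft.fft(f)).real

# Half-derivative of sin(x) should be sin(x + pi/4):
print(np.max(np.abs(frac_deriv(np.sin(x), 0.5) - np.sin(x + np.pi / 4))))
# Any positive-order derivative of a constant vanishes (compare with the computation below):
print(np.max(np.abs(frac_deriv(np.ones(N), 0.7))))
```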
I'm going to use this definition to compute the fractional derivatives of the insultingly simple function $f(x) = 1$. Before you complain, I have very good reason for keeping it simple --- if we try to use the above definition, we immediately run into a problem! Namely, this choice of $f$ does not have a Fourier transform because it does not have sufficient decay at $\pm\infty$.
While this is certainly a major issue, it can be sidestepped with the knowledge that such functions do have Fourier transforms, in the sense of distributions. As I've previously suggested, this is quite advanced, so it is not easy for me to explain precisely how this works at a low level. Rigorously, we can find that
$$\hat{1} = 2\pi\,\delta_0,$$
where $\delta_0$ is the Dirac delta at $0$. Now $(i\xi)^\alpha\,\hat{1}(\xi) = 2\pi\,(i\xi)^\alpha\,\delta_0$ is the distribution acting as follows:
$$\varphi \mapsto 2\pi\,(i\xi)^\alpha\varphi(\xi)\Big|_{\xi=0}.$$
Aha! By definition of $\delta_0$, this expression is just $2\pi\,(i\cdot 0)^\alpha\,\varphi(0)$. If $\alpha = 0$, this is just $2\pi\varphi(0)$, the action of $\hat{1}$ itself. If $\alpha > 0$, this is $0$ (because $0^\alpha = 0$), which is equal to the action of the zero distribution. Thus for $\alpha > 0$,
$$D^\alpha 1 = 0.$$
The next simplest function we can try differentiating is $f(x) = x$. The first immediate issue is, of course, that $x$ can't be Fourier'd in the usual sense, so we must again interpret $\hat{x}$ distributionally. It's not trivial, but you can show that
$$\hat{x} = 2\pi i\,\delta_0',$$
where $\delta_0'$ is the distributional derivative of $\delta_0$. (One way to demonstrate this is to work backwards by proving that
I'm not sure how I'd figure it out more directly.) Now $(i\xi)^\alpha\,\hat{x}(\xi) = 2\pi i\,(i\xi)^\alpha\,\delta_0'$ is the distribution acting via
$$\varphi \mapsto -2\pi i\,\frac{d}{d\xi}\Big[(i\xi)^\alpha\varphi(\xi)\Big]\Big|_{\xi=0} = -2\pi i\left[i\alpha\,(i\xi)^{\alpha-1}\varphi(\xi) + (i\xi)^\alpha\varphi'(\xi)\right]_{\xi=0}.$$
We conclude that
Let's check that this is consistent for integer values of $\alpha$. When $\alpha = 0$ (using the convention that $0^0 = 1$), this is just $x$. When $\alpha = 1$, this is $1$. When $\alpha = 2$, this is $0$. Hence we did not mess up. We can also write this derivative piece-wise in $\alpha$ as

This procedure can be used to take the (Fourier-wise) fractional derivative of various expressions, but as you can probably tell, computing an exact expression for such derivatives is rather cumbersome in practice. What's far more useful is the consideration of spaces of functions for which you can take a fractional derivative, which can pop up in the study of PDE, interpolation theory, and other deep regions of analysis. Of particular interest is the Hilbert space of functions $f \in L^2(\mathbb{R})$ that satisfy
$$\int_{\mathbb{R}} |\xi|^{2s}\,|\hat{f}(\xi)|^2\,d\xi < \infty$$
(i.e. the $s$th derivative $D^s f$ is still in $L^2$), which happens to be equal to the fractional Sobolev space $H^s(\mathbb{R})$, where $H^s(\mathbb{R})$ is defined as the space of functions $f \in L^2(\mathbb{R})$ for which
$$\|f\|_{H^s}^2 := \int_{\mathbb{R}} (1+|\xi|^2)^s\,|\hat{f}(\xi)|^2\,d\xi < \infty.$$
For $s$ a positive integer, this in turn is just the Sobolev space of functions which admit $s$ weak derivatives in $L^2$, so these abstractions are certainly grounded in reality (provided you consider weak derivatives to be at all grounded in reality...).
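For a concrete instance of that last claim, here is a numerical sketch, assuming the convention $\hat{f}(\xi) = \int f(x)e^{-ix\xi}\,dx$ used above (so Plancherel carries a factor of $2\pi$), checking that for $s = 1$ the Fourier-side quantity matches the classical $\|f\|_{L^2}^2 + \|f'\|_{L^2}^2$; the Gaussian test function is my choice because its transform is known exactly.

```python
import numpy as np

# With fhat(xi) = int f(x) e^{-i x xi} dx, Plancherel reads int |fhat|^2 dxi = 2 pi int |f|^2 dx,
# so for s = 1 the quantity int (1 + xi^2) |fhat|^2 dxi should equal 2 pi (||f||_2^2 + ||f'||_2^2).
# Test function: f(x) = exp(-x^2 / 2), whose transform is sqrt(2 pi) exp(-xi^2 / 2).
grid = np.linspace(-15, 15, 60_001)
h = grid[1] - grid[0]
f = np.exp(-grid**2 / 2)
fprime = -grid * f
fhat = np.sqrt(2 * np.pi) * np.exp(-grid**2 / 2)

fourier_side = np.sum((1 + grid**2) * np.abs(fhat)**2) * h
classical_side = 2 * np.pi * (np.sum(f**2) + np.sum(fprime**2)) * h
print(fourier_side, classical_side)   # both are approximately 3 * pi^(3/2)
```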
In fact, $\left(\|f\|_{L^2(\mathbb{R})}^2 + \|D^s f\|_{L^2(\mathbb{R})}^2\right)^{1/2}$ gives an equivalent norm for the topology on $H^s(\mathbb{R})$
, which allows some very neat proofs in the realm of Sobolev spaces. A particularly easy result to get is Morrey's Embedding, which states that if $s > \frac{1}{2}$ then $H^s(\mathbb{R}) \subseteq L^\infty(\mathbb{R})$. Equivalently, for any Schwartz function $f$ we have the bound
$$\|f\|_{L^\infty(\mathbb{R})} \lesssim_s \|f\|_{H^s(\mathbb{R})}.$$
To show this, first fix $x \in \mathbb{R}$ and use Fourier inversion to write
$$f(x) = \frac{1}{2\pi}\int_{\mathbb{R}} \hat{f}(\xi)\,e^{ix\xi}\,d\xi.$$
Since we wish to use the funny Fourier norm on $f$, we now multiply and divide by the appropriate expression and apply Cauchy-Schwarz.
$$|f(x)| \le \frac{1}{2\pi}\int_{\mathbb{R}} (1+|\xi|^2)^{s/2}\,|\hat{f}(\xi)|\cdot(1+|\xi|^2)^{-s/2}\,d\xi$$
$$\le \frac{1}{2\pi}\left(\int_{\mathbb{R}} (1+|\xi|^2)^{s}\,|\hat{f}(\xi)|^2\,d\xi\right)^{1/2}\left(\int_{\mathbb{R}} (1+|\xi|^2)^{-s}\,d\xi\right)^{1/2} = \frac{\|f\|_{H^s}}{2\pi}\left(\int_{\mathbb{R}} (1+|\xi|^2)^{-s}\,d\xi\right)^{1/2}$$
It remains to prove that $\int_{\mathbb{R}} (1+|\xi|^2)^{-s}\,d\xi < \infty$. Since integrals over $\mathbb{R}$ are kinda awful, it makes sense to split it as a sum of an integral over $\{|\xi| \le R\}$ and an integral over $\{|\xi| > R\}$, with $R$ to be chosen later. Near $0$ we use the estimate
$$(1+|\xi|^2)^{-s} \le 1, \qquad \text{so that} \qquad \int_{|\xi| \le R} (1+|\xi|^2)^{-s}\,d\xi \le 2R.$$
Far from $0$, we use the stupid estimate $(1+|\xi|^2)^{-s} \le |\xi|^{-2s}$. Putting this together,
$$\int_{\mathbb{R}} (1+|\xi|^2)^{-s}\,d\xi \le 2R + 2\int_R^\infty \xi^{-2s}\,d\xi = 2R + \frac{2R^{1-2s}}{2s-1},$$
where the last integral is finite precisely because $s > \frac{1}{2}$.
Now we choose $R$, and it makes sense to take $R = 1$. This gives the final upper bound (up to a constant) of $\|f\|_{H^s}$, completing the proof.
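Here is a numerical sketch of the inequality we just proved, for $s = 1$ and the Gaussian $f(x) = e^{-x^2/2}$ (whose transform, with the convention above, is $\sqrt{2\pi}\,e^{-\xi^2/2}$); the test function and cutoffs are my choices for illustration.

```python
import numpy as np

# The proof gives, for s > 1/2:
#   ||f||_inf <= (1 / (2 pi)) * (int (1+xi^2)^{-s} dxi)^{1/2} * ||f||_{H^s}.
# Check for s = 1 and f(x) = exp(-x^2/2), so fhat(xi) = sqrt(2 pi) exp(-xi^2/2) and ||f||_inf = 1.
s = 1.0
xi = np.linspace(-200, 200, 400_001)
dxi = xi[1] - xi[0]
fhat = np.sqrt(2 * np.pi) * np.exp(-xi**2 / 2)

constant = np.sqrt(np.sum((1 + xi**2)**(-s)) * dxi) / (2 * np.pi)
hs_norm = np.sqrt(np.sum((1 + xi**2)**s * np.abs(fhat)**2) * dxi)
print("sup norm:", 1.0, "   right-hand side:", constant * hs_norm)
```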
Actually to really show the desired embedding we should control the
norm. I will leave this to you.
Exercise: Use complex numbers to prove that any
, where
, is actually bounded with upper bound
(up to a constant depending only on
and
).
I must stress that the above statement does not involve complex numbers!
One last (tough) exercise for you.
Exercise: Use complex numbers to prove that for any
, where
, we have the inequality
(Hint: You will need to use Young's Convolution Inequality. One form of this inequality is as follows: For $f \in L^p(\mathbb{R})$ and $g \in L^q(\mathbb{R})$ with $\frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r}$, where $1 \le p, q, r \le \infty$, we have
$$\|f * g\|_{L^r(\mathbb{R})} \le \|f\|_{L^p(\mathbb{R})}\,\|g\|_{L^q(\mathbb{R})}.$$
)
Additional Hint
What I have presented here is only those applications that I am most familiar with. There are far more magical uses for complex numbers in "real" settings, many of which are likely unknown to me. What applications do you know of? Feel free to share.