10 "Real" Applications of Complex Numbers (Part 2/2)

by greenturtle3141, Aug 23, 2024, 3:39 AM

(Part 1: Here)

7. Evaluation of integrals

The topic of using contour integration to evaluate certain integrals has been beaten to death, this time by me. For a full discussion please see https://artofproblemsolving.com/community/c2532359h3075016.

I'll add that contour integration isn't the only way for complex numbers to evaluate integrals. If there is a way to introduce complex numbers which can simplify matters, there's a good chance that this can be made to work. Let us, for example, evaluate the integral
$$\int_0^x \frac{1}{1+t^2}\,dt.$$The answer, of course, is $\arctan(x)$. Let us try to derive this using complex numbers.

As suggested in my previous writings concerning complex numbers, there are some dangerous pitfalls that we must beware of. It is easy to fall for such traps and end up with nonsense, but fortunately I am here to light the way. We begin by factoring over the complex numbers to rewrite the integral as
$$\frac{1}{2i}\int_0^x \frac{1}{t-i} - \frac{1}{t+i}\,dt.$$This is perfectly legal. Next, it is in fact true that this integral is simply equal to
$$ = \frac{1}{2i}\left[\left(\log(x-i) - \log(-i)\right) - \left(\log(x+i) - \log(i)\right)\right].$$But you will fall into a pit unless you know precisely what $\log(\cdot)$ means for complex numbers. Here, it refers to the principal logarithm (there is some reasoning that should be applied to justify that this works; namely one ought to avoid the perilous negative real line). A bit more precisely, we are choosing a particular branch of the complex logarithm that contains all the complex numbers we're working with. Working with the principal log (characterized by $\log(re^{i\theta}) = \log r + \theta i$ for $-\pi < \theta < \pi$) and drawing some pictures, we discover the following:
$$\log(x+i) = \log(\sqrt{x^2+1}) + \arctan(1/x)i$$$$\log(x-i) = \log(\sqrt{x^2+1}) - \arctan(1/x)i$$$$\log i = \pi i/2$$$$\log(-i) = -\pi i/2$$(These hold for $x > 0$; the case $x < 0$ is similar.) Hence our integral is
$$\int_0^x \frac{1}{1+t^2}\,dt = \frac{1}{2i}\left[-2\arctan(1/x)i + \pi i\right] = \boxed{\frac{\pi}{2} - \arctan(1/x)}.$$This may look different from the claimed integral of $\arctan x$, but fortunately these two expressions are the same (why?).
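As a sanity check (not part of the derivation), note that Python's `cmath.log` computes exactly the principal logarithm used above, so the boxed formula can be tested numerically. The helper `arctan_via_logs` below is my own:

```python
import cmath
import math

def arctan_via_logs(x):
    """Evaluate (1/2i)[(log(x-i) - log(-i)) - (log(x+i) - log(i))]
    using the principal branch, exactly as in the derivation above."""
    v = (cmath.log(x - 1j) - cmath.log(-1j)) - (cmath.log(x + 1j) - cmath.log(1j))
    return v / 2j

x = 2.0
val = arctan_via_logs(x)
# for x > 0 this should equal pi/2 - arctan(1/x) = arctan(x)
assert abs(val.real - math.atan(x)) < 1e-12
assert abs(val.imag) < 1e-12
```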

Exercise: Evaluate the integral
$$\int_0^\pi e^x\cos x\,dx$$using complex numbers (instead of the usual integration by parts).

8. Evaluation of infinite sums

In a previous exercise I asked you to evaluate $\sum_{n=1}^\infty \frac{\cos n}{2^n}$ using complex numbers. This is one instance of complex numbers being the silver bullet for evaluating an infinite sum, but there are other (less elementary) applications.

Fourier series

The Fourier series of a periodic function is, morally speaking, its Fourier transform, where the transform is taken with "type" $\mathbb{T} \to \mathbb{Z}$. This seemingly "abstract nonsense" view of Fourier series is actually quite helpful for making many of the properties shared by both Fourier series and the "classic" Fourier transform more intuitive. A more extended discussion of Fourier series and/or the Fourier transform will be reserved for a separate post, but I felt motivated to provide a few insights here.

Taking the Fourier series of certain simple functions and plugging some values into them can often lead to interesting identities. Unfortunately the converse is, at least for me, far more difficult; it is a tall task to look at an infinite sum and proceed to conjure up a proof via Fourier series.

Let us, for example, take the function $f(x) = |x|$ over $(-1,1)$. Morally speaking we view this as a 2-periodic function on $\mathbb{R}$ by extending $f$ periodically. To obtain the Fourier series of $f$ with high probability of success, and without the help of the extreme "abstract" view, we can follow these steps.
  1. Determine what frequencies we need to use. We need to use all frequencies $x \mapsto e^{-i\xi x}$ that repeat every $2$, i.e. are invariant under shifting $x$ by $2$. Invariance forces $e^{-2i\xi} = 1$, i.e. $\xi = n\pi$, so the frequencies we should use are $x \mapsto e^{-n\pi i x}$ for every integer $n$.
  2. Scale the frequencies so that they have unit $L^2$ norm. That is, we want to define
    $$e_n(x) := c \cdot e^{-n\pi ix}$$where $c > 0$ is chosen so that $\int_{-1}^1 |e_n(x)|^2\,dx = 1$. Seeing that $|e^{-n\pi ix}| = 1$, it's not hard to find that $c = 1/\sqrt{2}$. We need to do this so that the frequencies $\{e_n\}_{n \in \mathbb{Z}}$ form an orthonormal basis for $L^2(-1,1)$.
  3. Now the $n$th Fourier coefficient of $f$ is simply the $e_n$-component of $f$ under the orthonormal basis $\{e_n\}_{n \in \mathbb{Z}}$, which is given by the inner product
    $$\hat{f}(n) = \langle f, e_n\rangle = \int_{-1}^1 |x| \cdot \frac{1}{\sqrt{2}}e^{-n\pi i x}\,dx$$$$= \sqrt{2}\int_0^1 x\cos(n\pi x)\,dx = -\sqrt{2}\int_0^1 \frac{\sin(n\pi x)}{n \pi}\,dx = \frac{\sqrt{2}(\cos(n\pi) - 1)}{n^2\pi^2} = \begin{cases}0, & \text{$n$ even} \\ \frac{-2\sqrt{2}}{n^2\pi^2}, & \text{$n$ odd}\end{cases}.$$(Strictly speaking the inner product conjugates $e_n$, but since $f$ is real and even this changes nothing.) Oh but you'll notice that this actually doesn't work for $n=0$, so we need to compute separately that $\hat{f}(0) = \langle f, e_0\rangle = \int_{-1}^1 |x| \cdot\frac{1}{\sqrt{2}}\,dx = \frac{1}{\sqrt{2}}$.

Voila! You have successfully found the Fourier series
$$|x| = f(x) = \sum_{n=-\infty}^\infty \hat{f}(n)e_n(x) = \sum_{\text{$n < 0$, odd}} \frac{-2\sqrt{2}}{n^2\pi^2}e_n(x) + \frac{1}{\sqrt{2}}e_0 + \sum_{\text{$n > 0$, odd}}\frac{-2\sqrt{2}}{n^2\pi^2}e_n(x) \qquad (*)$$without memorization. (I could combine the positive and negative parts to write everything in terms of $\cos \pi nx$, but I don't want to for reasons that may become apparent. Also I think it's more instructive like this.)
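If you don't trust the coefficient computation, here is a quick numerical check (my own sketch, using a midpoint rule for the integral defining $\hat{f}(n)$):

```python
import cmath
import math

def fhat(n, N=20000):
    # midpoint rule for the integral over [-1, 1] of |x| * (1/sqrt(2)) * e^{-n*pi*i*x}
    h = 2.0 / N
    total = 0j
    for k in range(N):
        x = -1.0 + (k + 0.5) * h
        total += abs(x) / math.sqrt(2) * cmath.exp(-1j * n * math.pi * x) * h
    return total

assert abs(fhat(0) - 1 / math.sqrt(2)) < 1e-6                      # n = 0 case
assert abs(fhat(2)) < 1e-6                                          # even n vanish
assert abs(fhat(3) - (-2 * math.sqrt(2) / (9 * math.pi**2))) < 1e-6  # odd n
```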

Note that writing down $(*)$ willy-nilly is actually a bit reckless. It is not always going to be true that both sides of $(*)$ are equal for every $x$. To avoid pitfalls, it should first be shown that $f$ is continuous (which is obvious here) and that $\sum_{n \in \mathbb{Z}} |\hat{f}(n)| < \infty$ (which is not too hard). The motivation and rationale for these conditions will be deferred for another post.

Once you've shown that, it's safe to plug in numbers into $(*)$ and see what you get. Let's try $x = 0$ for fun. Noting that $e_n(0) = 1/\sqrt{2}$ for all $n$, we get
$$0 = \frac12 + 2\sum_{\text{$n \in \mathbb{N}$ odd}} \frac{-2}{n^2\pi^2},$$which rearranges to $\sum_{n \in \mathbb{N}\text{ odd}} \frac1{n^2} = \frac{\pi^2}{8}$. Wow! We have recovered the Basel sum, $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$ (and if you're not sure how this immediately follows, you should figure it out!).
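A two-line numerical confirmation (the tail of the odd partial sum beyond $N$ is about $1/(2N)$, which sets the tolerance):

```python
import math

# partial sum over odd n of 1/n^2
odd2 = sum(1 / n**2 for n in range(1, 200001, 2))
assert abs(odd2 - math.pi**2 / 8) < 1e-5

# splitting zeta(2) into odd and even n: zeta(2) = pi^2/8 + zeta(2)/4
zeta2 = (4 / 3) * odd2
assert abs(zeta2 - math.pi**2 / 6) < 1e-5
```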

But wait, there's more to glean from $(*)$! By Parseval's identity, or the fact that the Fourier transform is an isometry, we know that
$$\int_{-1}^1 |f(x)|^2\,dx = \sum_{n \in \mathbb{Z}} |\hat{f}(n)|^2.$$This gives
$$\frac{2}{3} = \frac12 + 2\sum_{\text{$n \in \mathbb{N}$ odd}} \frac{8}{n^4\pi^4},$$which rearranges to $\sum_{n \in \mathbb{N}\text{ odd}} \frac{1}{n^4} = \frac{\pi^4}{96}$. This recovers the formula
$$\zeta(4) = \sum_{n=1}^\infty \frac{1}{n^4}  =\frac{\pi^4}{90}.$$
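The fourth-power sums can be checked the same way (partial sums converge much faster here, so a smaller cutoff suffices):

```python
import math

# partial sum over odd n of 1/n^4
odd4 = sum(1 / n**4 for n in range(1, 10001, 2))
assert abs(odd4 - math.pi**4 / 96) < 1e-9

# zeta(4) = (odd part) + (even part) = pi^4/96 + zeta(4)/16
zeta4 = (16 / 15) * odd4
assert abs(zeta4 - math.pi**4 / 90) < 1e-9
```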
Exercise: Let $a \neq 0$ be a real constant. Derive the identity
$$\sum_{n=-\infty}^\infty \frac{1}{a^2+n^2} = \frac{\pi\coth(\pi a)}{a}$$by abusing the power of complex numbers.

Hint 1
Hint 2
Hint 3
Hint 4

Poisson Summation

Poisson summation is a tool that can be used to tackle sums of the form
$$\sum_{n=-\infty}^\infty f(n),$$where $f:\mathbb{R} \to \mathbb{R}$ is a continuous function that "decreases rapidly" as $x \to \pm \infty$. For such functions, Poisson summation states that you can replace $f$ with its Fourier transform without changing the value of the sum. That is,
$$\sum_{n=-\infty}^\infty f(n) = \sum_{n=-\infty}^\infty \hat{f}(n).$$It is normal to feel that this identity is absurd. Two proofs that I know of use Fourier series and contour integration of a cleverly-chosen function, respectively.
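To see the identity in action, recall that the Gaussian $f(x) = e^{-\pi t x^2}$ has Fourier transform $\hat{f}(\xi) = t^{-1/2}e^{-\pi \xi^2/t}$ (under the convention $\hat{f}(\xi) = \int f(x)e^{-2\pi i x\xi}\,dx$). Here is a numerical check of Poisson summation for this $f$ (equivalently, the functional equation of the Jacobi theta function):

```python
import math

def theta(t, N=50):
    # sum of exp(-pi * t * n^2) over n = -N..N; terms beyond |n| = N are negligible
    return sum(math.exp(-math.pi * t * n * n) for n in range(-N, N + 1))

t = 0.5
lhs = theta(t)                      # sum of f(n) with f(x) = exp(-pi t x^2)
rhs = theta(1 / t) / math.sqrt(t)   # sum of fhat(n)
assert abs(lhs - rhs) < 1e-12
```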

Poisson summation happens to be an essential tool in analytic number theory, but I am not an expert in this field so I cannot say much on that matter. Amazingly, it is also used crucially for proving optimal results in the study of sphere packings (https://en.wikipedia.org/wiki/Sphere_packing), though it should be noted that such applications use the multi-dimensional form
$$\sum_{n \in \mathbb{Z}^d} f(n) = \sum_{n \in \mathbb{Z}^d} \hat{f}(n)$$or further generalizations.

Exercise: Use Poisson summation to solve the previous exercise. It is helpful to first prove the identity
$$\int_{-\infty}^\infty \frac{\cos x}{x^2+a^2}\,dx = \frac{e^{-a}\pi}{a}$$for $a > 0$, by using contour integration. Or you can just assume it's true if you aren't good at that yet.

9. The Central Limit Theorem

The Central Limit Theorem (CLT) states that, with enough data, sums of independent samples from any distribution (with finite variance) are approximately normal (Gaussian), i.e. look like bell curves. More formally: If $X_1, X_2, \ldots$ is a sequence of independent and identically distributed random variables with mean $\mu$ and variance $\sigma^2$, then
$$\frac{X_1+\ldots+X_n - n\mu}{\sigma\sqrt{n}}$$converges in distribution to a $N(0,1)$ random variable. This is a scary fact. But what's even more frightening is that this result can be proven using complex numbers. In fact, it is a very pretty, brief and elegant proof.

The proof uses a probabilistic version of the Fourier transform, called the characteristic function: If $X$ is a random variable, we can define its "Fourier transform" $\phi_X:\mathbb{R} \to \mathbb{C}$ via
$$\phi_X(t) := \mathbb{E} e^{itX}.$$The distribution of a random variable is captured fully by its characteristic function (this requires proof), and moreover the characteristic function of a sum of independent random variables turns into a product! This makes the characteristic function a powerful tool for tackling the CLT, which we will now provide (roughly) the proof of.
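The product property is easy to see concretely for discrete random variables. Below, $X$ and $Y$ are independent fair $\pm 1$ coin flips, so $X+Y$ takes values $\{-2,0,2\}$ with probabilities $\{1/4,1/2,1/4\}$ (a toy check; the helper `char_fn` is my own):

```python
import cmath

def char_fn(dist, t):
    # E[exp(itX)] for a discrete distribution given as {value: probability}
    return sum(p * cmath.exp(1j * t * x) for x, p in dist.items())

X = {-1: 0.5, 1: 0.5}               # fair coin flip with values +/-1
XY = {-2: 0.25, 0: 0.5, 2: 0.25}    # law of X + Y for independent copies

t = 0.7
# characteristic function of the sum = product of characteristic functions
assert abs(char_fn(XY, t) - char_fn(X, t) ** 2) < 1e-12
```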

Assume for simplicity that $\mu = 0$ and $\sigma^2 = 1$. (In fact, this can be assumed WLOG.) Now if $Y_n := \frac{X_1+\ldots+X_n}{\sqrt{n}}$, then
$$\phi_{Y_n}(t) = \mathbb{E}\left[e^{it(X_1+\ldots+X_n)/\sqrt{n}}\right] = \mathbb{E}\left[e^{itX_1/\sqrt{n}}e^{itX_2/\sqrt{n}}\ldots e^{itX_n/\sqrt{n}}\right].$$Since the variables are independent, this splits into a product of expectations.
$$ = \mathbb{E} e^{itX_1/\sqrt{n}}\mathbb{E} e^{itX_2/\sqrt{n}}\ldots \mathbb{E} e^{itX_n/\sqrt{n}}$$Since the variables have the same distribution, all these expectations are the same.
$$ = \left(\mathbb{E} e^{itX_1/\sqrt{n}}\right)^n$$Now we rewrite $e^{itX_1/\sqrt{n}}$ by Taylor expansion.
$$e^{itX_1/\sqrt{n}} = 1 + \frac{iX_1}{\sqrt{n}}t - \frac{t^2}{2n}X_1^2 + \text{Remainder}$$The most difficult part of this proof is dealing with the remainder term, and it requires some technical bounds. I will omit these parts of the proof. But intuitively you should know that the remainder term here is on the order of $t^3/n^{3/2}$, in some sense.

Taking the expectation of each side (and using $\mathbb{E}X_1 = 0$, $\mathbb{E} X_1^2 = 1$) now gives
$$\mathbb{E}e^{itX_1/\sqrt{n}} = 1 - \frac{t^2}{2n} + \text{Remainder},$$and now raising to the $n$th power gives
$$\phi_{Y_n}(t) = \left(\mathbb{E} e^{itX_1/\sqrt{n}}\right)^n = \left(1 - \frac{t^2}{2n} + \text{Remainder}\right)^n.$$Now, since the remainder term is on the order of like $1/n^{3/2}$, we can consider it asymptotically insignificant compared to the leading term $t^2/(2n)$. Thus, by handwavingly ignoring the remainder, we find that
$$\lim_{n \to \infty} \phi_{Y_n}(t) = \lim_{n \to \infty} \left(1 - \frac{t^2}{2n}\right)^n = e^{-t^2/2}.$$(Rest assured that taming the remainder to make this argument rigorous isn't too bad to do.) The above limit basically means that $Y_n$ converges in distribution to a random variable whose characteristic function is $e^{-t^2/2}$. It turns out that $e^{-t^2/2}$ is exactly the characteristic function of a standard Gaussian! That's the proof. (...modulo proving all those properties of characteristic functions that we relied on.)
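You can watch $\phi_{Y_n}(t) = \left(\phi_{X_1}(t/\sqrt{n})\right)^n$ converge to $e^{-t^2/2}$ numerically. In this sketch $X_1$ is uniform on $[-\sqrt{3},\sqrt{3}]$ (so mean $0$ and variance $1$), whose characteristic function is $\sin(\sqrt{3}t)/(\sqrt{3}t)$:

```python
import math

def phi_uniform(t):
    # characteristic function of Uniform(-sqrt(3), sqrt(3)): mean 0, variance 1
    a = math.sqrt(3) * t
    return math.sin(a) / a if a != 0 else 1.0

n, t = 10_000, 1.0
phi_Yn = phi_uniform(t / math.sqrt(n)) ** n
# the error is of order n * (t/sqrt(n))^4 = t^4 / n, hence the loose tolerance
assert abs(phi_Yn - math.exp(-t * t / 2)) < 1e-3
```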

Exercise: Use complex numbers to show that if $X$ and $Y$ are independent random variables such that $X$ and $X+Y$ have the same distribution, then $Y = 0$ (almost surely).

10. Fractional derivatives and Sobolev embeddings

First, do note that there are many notions of fractional derivatives, and they are not at all equivalent. It's also not that clear how they are useful. However, it sounds cool.

Complex numbers give one method of defining a notion of fractional differentiation. Let $f(x)$ be a nice function. Then it is well-known that the Fourier transform converts derivatives of $f$ into multiplication by a term:
$$\mathcal{F}(f')(\xi) = -2\pi i \xi \hat{f}(\xi)$$Of course, this works inductively for any number of derivatives.
$$\mathcal{F}(f^{(n)})(\xi) = (-2\pi i \xi)^n\hat{f}(\xi)$$In particular, we get the identity
$$f^{(n)}(x) = \mathcal{F}^{-1}\left((-2\pi i \xi)^n\hat{f}(\xi)\right),$$which gives a rather awkward method of taking $n$ derivatives of $f$.

But now this raises the question: What if in the above identity we allow $n$ to be a non-integer? Then this would give a way to define a notion of taking $s$ derivatives, where $s$ is any real number.
$$f^{(s)}(x) := \mathcal{F}^{-1}\left((-2\pi i \xi)^s\hat{f}(\xi)\right)$$This is clearly consistent with "classical" differentiation by definition, so this definition is reasonable provided that $f$ is nice. It should be noted that we should be a wee bit careful with what "$i^s$" means when $s$ is not an integer, but I don't feel like discussing this.
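For periodic functions the same multiplier idea can be implemented with the FFT. The sketch below is my own (note that I use the multiplier $(2\pi i k)^s$; sign conventions for the transform vary). It computes the half-derivative of $\sin(2\pi x)$, which should come out to $(2\pi)^{1/2}\sin(2\pi x + \pi/4)$, since $D^s$ multiplies each frequency's amplitude by $(2\pi|k|)^s$ and shifts its phase by $s\pi/2$:

```python
import numpy as np

N = 256
x = np.arange(N) / N
f = np.sin(2 * np.pi * x)

k = np.fft.fftfreq(N, d=1 / N)   # integer frequencies 0, 1, ..., -2, -1
s = 0.5
mult = (2j * np.pi * k) ** s     # principal branch of the complex power
mult[0] = 0.0                    # the constant mode is killed (0^s = 0 for s > 0)
Dsf = np.fft.ifft(np.fft.fft(f) * mult).real

expected = (2 * np.pi) ** s * np.sin(2 * np.pi * x + s * np.pi / 2)
assert np.max(np.abs(Dsf - expected)) < 1e-8
```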

I'm going to use this definition to compute the fractional derivatives of the insultingly simple function $f(x) = 1$. Before you complain, I have a very good reason for keeping it simple --- if we try to use the above definition, we immediately run into a problem! Namely, this choice of $f$ does not have a Fourier transform because it does not have sufficient decay at $\pm \infty$.

While this is certainly a major issue, it can be sidestepped with the knowledge that such functions do have Fourier transforms, in the sense of distributions. As I've previously suggested, this is quite advanced, so it is not easy for me to explain precisely how this works at a low level. Rigorously, we can find that $\hat{f} = \delta_0$ where $\delta_0$ is the Dirac delta at $0$. Now $\mathcal{F}^{-1}((-2\pi i\xi)^s\delta_0)$ is the distribution acting as follows:
$$\langle \mathcal{F}^{-1}((-2\pi i\xi)^s\delta_0),\phi\rangle = \langle(-2\pi i\xi)^s\delta_0, \hat{\phi}\rangle = \langle \delta_0,(-2\pi i\xi)^s\hat{\phi}\rangle$$Aha! By definition of $\delta_0$, this expression is just $(-2\pi i \cdot 0)^s\hat{\phi}(0)$. If $s > 0$, this is just $0 = \langle 0, \phi\rangle$. If $s = 0$, this is $\hat{\phi}(0)$ (because $0^0 = 1$), which is equal to $\int_{\mathbb{R}} \phi = \langle 1,\phi \rangle$. Thus for $s \geq 0$,
$$f^{(s)}(x) = \begin{cases}1, & s = 0 \\ 0, & s > 0\end{cases}.$$
The next simplest function we can try differentiating is $f(x) = x$. The first immediate issue is, of course, that $x$ can't be Fourier'd in the usual sense, so we must again interpret $x$ distributionally. It's not trivial, but you can show that $\hat{x} = \frac{1}{-2\pi i}\delta'_0$ where $\delta_0'$ is the distributional derivative of $\delta_0$. (One way to demonstrate this is to work backwards by proving that $$\langle \mathcal{F}^{-1}(\delta_0'),\phi\rangle = \langle \delta_0', \hat{\phi}\rangle = \langle \delta_0, -(\hat{\phi})'\rangle = \left.\frac{d}{d\xi}\right|_{\xi=0} -\hat{\phi}(\xi) = \langle x,-2\pi i \phi\rangle.$$I'm not sure how I'd figure it out more directly.) Now
$$\langle x^{(s)}, \phi\rangle = \langle \mathcal{F}^{-1}((-2\pi i)^{s-1}\xi^s\delta_0'),\phi\rangle = \langle (-2\pi i)^{s-1}\xi^s\delta_0',\hat{\phi}\rangle = \langle \delta_0', (-2\pi i)^{s-1}\xi^s\hat{\phi}\rangle$$$$ = \left.\frac{d}{d\xi}\right|_{\xi = 0} (-2\pi i)^{s-1}\xi^s \hat{\phi}(\xi) = s(-2\pi i)^{s-1}0^{s-1}\hat{\phi}(0) + (-2\pi i)^{s-1}0^s\hat{\phi}'(0)$$$$ = s(-2\pi i)^{s-1}0^{s-1}\langle 1, \phi\rangle + (-2\pi i)^{s-1}0^s\langle -2\pi i x,\phi\rangle = \langle s(-2\pi i)^{s-1}0^{s-1} + x(-2\pi i)^{s}0^s, \phi\rangle.$$We conclude that
$$x^{(s)} = s \cdot 0^{s-1} + x \cdot 0^s.$$Let's check that this is consistent for integer values of $s$. When $s = 0$ (using the convention that $0 \cdot \infty = 0$), this is just $x$. When $s = 1$, this is $1$. When $s \geq 2$, this is $0$. Hence we did not mess up. We can also write this derivative piece-wise in $s$ as
$$x^{(s)} = \begin{cases}x, & s = 0 \\ \text{DNE}, & 0 < s < 1 \\ 1, & s = 1 \\ 0, & s > 1\end{cases}.$$
This procedure can be used to take the (Fourier-wise) fractional derivative of various expressions, but as you can probably tell, computing an exact expression for such derivatives is rather cumbersome in practice. What's far more useful is the consideration of spaces of functions for which you can take a fractional derivative, which can pop up in the study of PDE, interpolation theory, and other deep regions of analysis. Of particular interest is the Hilbert space $H^s(\mathbb{R}^d)$ of functions $f \in L^2(\mathbb{R}^d)$ that satisfy
$$\int_{\mathbb{R}^d} (1+|\xi|^2)^{s}|\hat{f}(\xi)|^2\,d\xi < \infty,$$which, for $0 < s < 1$, happens to be equal to the fractional Sobolev space $W^{s,2}(\mathbb{R}^d)$, where $W^{s,p}(\mathbb{R}^d)$ is defined as the space of functions $f \in L^p(\mathbb{R}^d)$ for which
$$\int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{|f(x)-f(y)|^p}{|x-y|^{d+sp}}\,d(x,y) < \infty.$$For $s$ a positive integer, $H^s(\mathbb{R}^d)$ instead coincides with the classical Sobolev space of functions admitting $s$ weak derivatives in $L^2$, so these abstractions are certainly grounded in reality (provided you consider weak derivatives to be at all grounded in reality...).

In fact, $f \mapsto \left(\int_{\mathbb{R}^d} (1+|\xi|^2)^{s}|\hat{f}(\xi)|^2\,d\xi\right)^{1/2}$ gives an equivalent norm for the topology on $H^s(\mathbb{R}^d)$, which allows some very neat proofs in the realm of Sobolev spaces. A particularly easy result to get is Morrey's Embedding, which states that if $\alpha = s - d/2 \in (0,1)$ then $H^s(\mathbb{R}^d) \hookrightarrow C^{0,\alpha}(\mathbb{R}^d)$. Equivalently, for any Schwartz function $f \in \mathcal{S}(\mathbb{R}^d)$ we have the bound
$$|f|_{C^{0,\alpha}} \preceq \|f\|_{H^s}.$$To show this, first fix $x,y \in \mathbb{R}^d$ and use Fourier inversion to write
$$|f(x)-f(y)| = \left|\int_{\mathbb{R}^d} \hat{f}(\xi)\left(e^{2\pi i x\xi} - e^{2\pi i y\xi}\right)\,d\xi\right|$$$$ \leq \int_{\mathbb{R}^d} |\hat{f}(\xi)|\left|e^{2\pi i x\xi} - e^{2\pi i y\xi}\right|\,d\xi.$$Since we wish to use the funny Fourier norm on $H^s$, we now multiply and divide by the appropriate expression and apply Cauchy-Schwarz.
$$= \int_{\mathbb{R}^d} (1+|\xi|^2)^{s/2}|\hat{f}(\xi)| \cdot \frac{\left|e^{2\pi i x\xi} - e^{2\pi i y\xi}\right|}{(1+|\xi|^2)^{s/2}}\,d\xi$$$$\leq \left(\int_{\mathbb{R}^d} (1+|\xi|^2)^{s}|\hat{f}(\xi)|^2\,d\xi\right)^{1/2}\left(\int_{\mathbb{R}^d}\frac{\left|e^{2\pi i x\xi} - e^{2\pi i y\xi}\right|^2}{(1+|\xi|^2)^{s}}\,d\xi\right)^{1/2}$$$$\preceq \|f\|_{H^s}\left(\int_{\mathbb{R}^d}\frac{\left|e^{2\pi i x\xi} - e^{2\pi i y\xi}\right|^2}{(1+|\xi|^2)^{s}}\,d\xi\right)^{1/2}$$It remains to prove that $\int_{\mathbb{R}^d}\frac{\left|e^{2\pi i x\xi} - e^{2\pi i y\xi}\right|^2}{(1+|\xi|^2)^{s}}\,d\xi \preceq |x-y|^{2\alpha}$. Since integrals over $\mathbb{R}^d$ are kinda awful, it makes sense to split it as a sum of an integral over $|\xi| < R$ and an integral over $|\xi| > R$, with $R$ to be chosen later. Near $0$ we use the estimate
$$\left|e^{2\pi i x\xi} - e^{2\pi i y\xi}\right|^2 = \left|e^{2\pi i \xi \cdot (x-y)} - 1\right|^2 = (\cos(2\pi\xi \cdot (x-y))-1)^2 + \sin^2(2\pi\xi \cdot (x-y))$$$$ \preceq 1-\cos(2\pi\xi \cdot (x-y)) \leq 4\pi^2|\xi \cdot (x-y)|^2 \preceq |\xi|^2|x-y|^2.$$Far from $0$, we use the stupid estimate $\left|e^{2\pi i x\xi} - e^{2\pi i y\xi}\right|^2 \leq 4 \preceq 1$. Putting this together,
$$\int_{\mathbb{R}^d}\frac{\left|e^{2\pi i x\xi} - e^{2\pi i y\xi}\right|^2}{(1+|\xi|^2)^{s}}\,d\xi \preceq \int_{\mathbb{R}^d} \frac{1}{(1+|\xi|^2)^s} \cdot \left(1_{\{|\xi| < R\}}|x-y|^2|\xi|^2 + 1_{\{|\xi| > R\}}\right)\,d\xi$$$$\preceq \int_{\mathbb{R}^d} \frac{1}{|\xi|^{2s}} \cdot \left(1_{\{|\xi| < R\}}|x-y|^2|\xi|^2 + 1_{\{|\xi| > R\}}\right)\,d\xi$$$$\preceq \int_0^\infty \frac{r^{d-1}}{r^{2s}} \cdot \left(1_{(0,R)}r^2|x-y|^2 + 1_{(R,\infty)}\right)\,dr$$$$= |x-y|^2\int_0^R r^{d+1-2s}\,dr + \int_R^\infty r^{d-1-2s}\,dr \preceq |x-y|^2R^{d+2-2s} + R^{d-2s}.$$Now we choose $R$, and it makes sense to take $R = |x-y|^{-1}$. This gives the final upper bound (up to a constant) of $|x-y|^{2s-d} = |x-y|^{2\alpha}$, completing the proof.

Actually to really show the desired embedding we should control the $\|\cdot\|_\infty$ norm. I will leave this to you.
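The key estimate above is easy to sanity-check in $d = 1$ with $s = 1$ (so $\alpha = 1/2$), where the integral has the closed form $2\pi\left(1 - e^{-2\pi|x-y|}\right)$ thanks to the classical integral $\int_{\mathbb{R}} \frac{\cos(b\xi)}{1+\xi^2}\,d\xi = \pi e^{-|b|}$; in particular it is $\preceq |x-y| = |x-y|^{2\alpha}$, as the proof requires. A crude numerical check (midpoint rule, my own code; the tolerance accounts for truncating at $|\xi| = 500$):

```python
import math

def osc_integral(h, X=500.0, N=400000):
    # integral over [-X, X] of |e^{2 pi i xi h} - 1|^2 / (1 + xi^2)
    #   = (2 - 2cos(2 pi xi h)) / (1 + xi^2), via the midpoint rule
    dx = 2 * X / N
    total = 0.0
    for j in range(N):
        xi = -X + (j + 0.5) * dx
        total += (2 - 2 * math.cos(2 * math.pi * xi * h)) / (1 + xi * xi) * dx
    return total

h = 0.1
exact = 2 * math.pi * (1 - math.exp(-2 * math.pi * h))
assert abs(osc_integral(h) - exact) < 0.02
```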

Exercise: Use complex numbers to prove that any $f \in H^s(\mathbb{R}^d)$, where $s > d/2$, is actually bounded with upper bound
$$\|f\|_\infty \preceq \|f\|_{H^s}$$(up to a constant depending only on $s$ and $d$).

I must stress that the above statement does not involve complex numbers!

One last (tough) exercise for you.

Exercise: Use complex numbers to prove that for any $f,g \in H^s(\mathbb{R}^d)$, where $s > d/2$, we have the inequality
$$\|fg\|_{H^s} \preceq \|f\|_{H^s}\|g\|_{H^s}.$$(Hint: You will need to use Young's Convolution Inequality. One form of this inequality is as follows: For $f,g,h:\mathbb{R}^d \to \mathbb{R}$ and $p,q,r \geq 1$ with $\frac1p + \frac1q + \frac1r = 2$, we have
$$\left|\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(x)g(x-y)h(y)\,dx\,dy\right| \leq \|f\|_{L^p}\|g\|_{L^q}\|h\|_{L^r}.$$)

Additional Hint


What I have presented here is only those applications that I am most familiar with. There are far more magical uses for complex numbers in "real" settings, many of which are likely unknown to me. What applications do you know of? Feel free to share.
This post has been edited 5 times. Last edited by greenturtle3141, Aug 23, 2024, 5:09 PM

1 Comment

This is really eye-opening and useful! Pls continue posting stuff like these!

by The_Eureka, Sep 7, 2024, 1:57 AM
