Many Proofs of the AM-GM Inequality

by greenturtle3141, Feb 26, 2022, 5:15 AM

Reading Difficulty: 2-5

Proof by Backward Induction

Step 1: "Base Cases"

Clearly $a_1 \geq a_1$ so $n=1$ is done.

We will need $n=2$ too. This follows from $(\sqrt{a_1}-\sqrt{a_2})^2 \geq 0$, which rearranges to $a_1+a_2 \geq 2\sqrt{a_1a_2}$, as needed.

Step 2: "Forward Induction Step"

Assume that we have the AM-GM for $n$ variables. We show the AM-GM for $2n$ variables:
\begin{align*}
\frac{a_1+\cdots+a_{2n}}{2n} &= \frac{\frac{a_1+\cdots+a_n}{n} + \frac{a_{n+1}+\cdots+a_{2n}}{n} }{2} \\
&\geq \frac{\sqrt[n]{a_1\ldots a_n}+\sqrt[n]{a_{n+1}\ldots a_{2n}}}{2} \tag{Assumption} \\
&\geq \sqrt[2n]{a_1\ldots a_{2n}} \tag{2-var AM-GM} \\
\end{align*}
Step 3: Backward Induction

Assume that we have the AM-GM for $n > 1$ variables. We show the AM-GM for $n-1$ variables.

Key Idea: "Waste Information" by using the average as one of the variables.

Let $\mu = \frac{a_1+\cdots+a_{n-1}}{n-1}$ be the arithmetic mean of the variables we have. By AM-GM for $n$ variables:
$$\frac{a_1+\cdots+a_{n-1}+\mu}{n} \geq \sqrt[n]{a_1\ldots a_{n-1}\mu}$$But in fact, the left side is equal to $\mu$, since $a_1+\cdots+a_{n-1} = (n-1)\mu$! So this is:
$$\mu \geq \sqrt[n]{a_1\ldots a_{n-1}\mu}$$$$\mu^n \geq a_1\ldots a_{n-1}\mu$$$$\mu^{n-1} \geq a_1\ldots a_{n-1}$$$$\mu \geq \sqrt[n-1]{a_1\ldots a_{n-1}}$$(Cancelling the $\mu$ is fine: if $\mu = 0$ then every $a_i$ is $0$ and there is nothing to prove.) This is precisely the AM-GM for $n-1$ variables. $\square$
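To see how the steps combine, here is a quick sketch of the path to a case not directly reached by doubling, say $n = 3$:
$$n = 2 \;\xrightarrow{\text{Step 2}}\; n = 4 \;\xrightarrow{\text{Step 3}}\; n = 3$$In general, Step 2 bootstraps the base cases up to every power of $2$, and repeated use of Step 3 walks back down from that power of $2$ to every $n$ below it.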


Proof by Algorithm (My own, but probably done before)

Key Idea: The AM-GM is a statement about how products tend to be maximized when the numbers being multiplied are close together.

Step 1: Sliding Lemma

Suppose $0 < x < y$. We claim that by moving $x$ and $y$ closer together, their product will increase. Formally, for any $0 \leq z \leq \frac{y-x}{2}$, we have that $(y-z)(x+z) \geq xy$.

To show this, note that $(y-z)(x+z) - xy = z(y-x-z)$, and both factors are nonnegative because $0 \leq z \leq \frac{y-x}{2} \leq y-x$. This proves the claim.

Step 2: We can "slide all variables to the equality case"

Suppose $0 \leq a_1 \leq \cdots \leq a_n$. A "slide" is given by the following operation: Pick $a_i,a_j$ with $a_i \leq a_j$, then replace them with $a_i+z$ and $a_j-z$ where $z \in [0,\frac{a_j-a_i}{2}]$.

Claim: We can perform slides such that all of the $n$ variables become equal.

Proof: Let $\mu$ be the arithmetic mean of $a_1,\cdots,a_n$. We write the following algorithm:
  1. While there still exists a variable not equal to $\mu$:
  2. Let $a_i$ be a variable strictly less than $\mu$, and let $a_j$ be a variable strictly greater than $\mu$.
  3. If $\mu-a_i > a_j - \mu$, then slide $a_i,a_j$ towards each other by $a_j-\mu$. That is, replace $(a_i,a_j)$ with $(a_i+a_j-\mu,\mu)$.
  4. If otherwise $\mu-a_i \leq a_j-\mu$, slide $a_i,a_j$ towards each other by $\mu-a_i$. That is, replace $(a_i,a_j)$ with $(\mu,a_j-(\mu-a_i))$.
  5. Loop back to 1.

CLAIM: All steps are well-defined/valid.

REASON: The main point is: is Step 2 always possible? The answer is yes. Slides preserve the sum, so $\mu$ remains the mean throughout the algorithm. If not every variable equals $\mu$, then some variable is strictly less than $\mu$ or strictly greater than $\mu$; and since $\mu$ is the average, the existence of either one forces the existence of the other.

CLAIM: This algorithm terminates.

REASON: On every pass through the loop, either Step 3 or Step 4 turns a new variable into $\mu$, and a variable already equal to $\mu$ is never selected again in Step 2. So the number of variables equal to $\mu$ is a strictly increasing monovariant that cannot exceed $n$, and hence the algorithm must terminate.

CLAIM: When the algorithm terminates, all variables are equal, as needed.

REASON: The algorithm terminates only when the condition of Step 1 is not met. This happens exactly when every variable is equal to $\mu$.

This proves the correctness of the algorithm, and thus proves the claim. $\square$
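Here is a minimal Python sketch of the algorithm above (the function name is illustrative; exact rational arithmetic via `fractions.Fraction` is used so the equality test in Step 1 is reliable):

```python
from fractions import Fraction

def slide_to_mean(nums):
    """Sketch of the sliding algorithm: repeatedly pick one variable below
    the mean and one above, and slide them toward each other until one of
    them lands exactly on the mean."""
    a = [Fraction(x) for x in nums]
    mu = sum(a) / len(a)                                # the mean; preserved by every slide
    while any(x != mu for x in a):                      # Step 1
        i = next(k for k, x in enumerate(a) if x < mu)  # Step 2: a variable below mu
        j = next(k for k, x in enumerate(a) if x > mu)  #         a variable above mu
        z = min(mu - a[i], a[j] - mu)                   # Steps 3/4: the slide distance
        a[i], a[j] = a[i] + z, a[j] - z                 # one of the two becomes mu
    return a

# Example: slide_to_mean([1, 2, 9]) returns [4, 4, 4];
# the product rises 18 -> 48 -> 64 along the way, as the Sliding Lemma predicts.
```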

Step 3: We're Done!

Consider the quantity $\sqrt[n]{a_1\ldots a_n}$. Now run the algorithm. Each slide can only increase the product, so in particular every slide will only make the geometric mean of the variables increase. When the algorithm terminates, all variables are equal to $\mu$, and the geometric mean could only have gone up, resulting in this inequality:
$$\sqrt[n]{a_1\ldots a_n} \leq \sqrt[n]{\mu\ldots \mu} = \mu$$...but this is the AM-GM inequality. $\square$


Proof by Convexity

Since $\log$ is concave, we may apply Jensen's Inequality (here we may assume each $a_i > 0$; if some $a_i = 0$ then the right side of AM-GM is $0$ and the inequality is immediate):
$$\log\left(\frac{a_1+\ldots+a_n}{n}\right) \geq \frac{\log(a_1)+\ldots+\log(a_n)}{n}$$Manipulating this:
$$\log\left(\frac{a_1+\ldots+a_n}{n}\right) \geq \frac{\log(a_1\ldots a_n)}{n}$$$$\log\left(\frac{a_1+\ldots+a_n}{n}\right) \geq \log(\sqrt[n]{a_1\ldots a_n})$$$$\frac{a_1+\ldots+a_n}{n} \geq \sqrt[n]{a_1\ldots a_n}$$Tada. $\square$
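For reference, the form of Jensen's Inequality invoked above, for a concave function $\varphi$ and equal weights $\frac{1}{n}$ (stated here for convenience, without proof):
$$\varphi\!\left(\frac{x_1+\ldots+x_n}{n}\right) \geq \frac{\varphi(x_1)+\ldots+\varphi(x_n)}{n}$$Taking $\varphi = \log$ and $x_i = a_i$ gives the first display.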


Proof by Lagrange Multipliers

For this one, let $k > 0$ be a constant (the case $k = 0$ forces every $x_i$ to be $0$ and is trivial). We need to show that the maximum value of $x_1\ldots x_n$, under the constraint $x_1+\cdots+ x_n = k$, is given by $(k/n)^n$.

That is, we aim to maximize the function $f(x_1,\cdots,x_n) := x_1\ldots x_n$ over $[0,\infty)^n$, under the constraint $g(x_1,\cdots,x_n) := x_1+\cdots+x_n-k = 0$.

Step 1: A maximum exists

Consider the set $E = \{(x_1,\cdots,x_n) \in [0,\infty)^n : x_1+\cdots+ x_n = k\}$.

CLAIM: $E$ is bounded.

PROOF: Clearly, in $E$, we have $0 \leq x_i \leq k$ for all $i$. $\square$

CLAIM: $E$ is closed.

PROOF: If you want, you can manually show that the complement is open. Alternatively, $E$ is the graph of the continuous function $h(x_1,\cdots,x_{n-1}) := k-x_1-\ldots-x_{n-1}$ over the closed domain $\{(x_1,\cdots,x_{n-1}) \in [0,k]^{n-1} : x_1+\cdots+x_{n-1} \leq k\}$, and the graph of a continuous function over a closed set is closed. $\square$

Since $E$ is closed and bounded, it is compact by Heine-Borel. Since $f$ is continuous, it follows that a maximum exists in $E$. Thus there must exist a maximum with respect to the constraint.

Step 2: Computing the Maximum

Suppose the maximum occurs at some $(x_1,\cdots,x_n)$. Note first that every $x_i$ must be strictly positive: the maximum value is at least $f(k/n,\cdots,k/n) = (k/n)^n > 0$, while any point with some $x_i = 0$ has $f = 0$. So the maximizer lies in the open set $(0,\infty)^n$. Since $f$ and $g$ are of class $C^1$, and $\nabla g$ never vanishes (so that $\{\nabla g\}$ is linearly independent), we have by the Theorem of Lagrange Multipliers:
$$\nabla f(x_1,\cdots,x_n) = \lambda \nabla g(x_1,\cdots,x_n)$$So:
$$\begin{bmatrix}x_2\ldots x_n \\ x_1x_3\ldots x_n \\ \vdots \\ x_1\ldots x_{n-1}\end{bmatrix} = \lambda \begin{bmatrix}1 \\ 1 \\ \vdots \\ 1\end{bmatrix}$$This is a system of $n$ equations. Multiply all of them! This gets us:
$$(x_1\ldots x_n)^{n-1} = \lambda^n$$Since every $x_i > 0$, the first equation of the system gives $\lambda = x_2\ldots x_n > 0$, so both sides are positive and it is safe to raise each side to the power $\frac{1}{n-1}$:
$$x_1\ldots x_n = \lambda^{\frac{n}{n-1}}$$Dividing this by each of the $n$ (positive) equations in the system from before, we solve for each variable in terms of $\lambda$:
$$x_i = \lambda^{\frac{1}{n-1}} \qquad \forall 1 \leq i \leq n$$We now solve for $\lambda$. Using the constraint $x_1+\cdots+x_n = k$, we see that:
$$n\lambda^{\frac{1}{n-1}} = k$$$$\lambda = (k/n)^{n-1}$$Thus we have solved for each variable independently of $\lambda$:
$$x_i = ((k/n)^{n-1})^{\frac{1}{n-1}} = k/n\qquad \forall 1 \leq i \leq n$$Finally, we see that the extremal value found by this method is $f(x_1,\cdots,x_n) = (k/n)^n$, which is the claimed maximum value.

We are done because (1) we showed that a maximum exists, and (2) if there is a maximum, then its value must be $(k/n)^n$, because the Lagrange condition produced only one candidate point. Thus the maximum that we knew existed and the candidate we found must be one and the same: whenever $x_1+\cdots+x_n = k$, we have $x_1\ldots x_n \leq (k/n)^n$, which is exactly AM-GM. $\square$
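As a quick numerical sanity check (not part of the proof), one can sample random points on the constraint set and confirm that no product exceeds $(k/n)^n$. A sketch in Python with NumPy, with $n$, $k$, and the sample count chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3.0
x = rng.random((100_000, n))               # random nonnegative points
x *= k / x.sum(axis=1, keepdims=True)      # rescale each row so it sums to k
products = x.prod(axis=1)
print(products.max(), (k / n) ** n)        # the sampled max stays below (k/n)^n
```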


Proof using Probability

Define $f(p) := \left(\frac{a_1^p+\ldots+a_n^p}{n}\right)^{1/p}$.

STEP 1: $f(p)$ is increasing.

Let $X \sim \text{Unif}\{a_1,\cdots,a_n\}$. Then $(\mathbb{E}X^p)^{1/p} = \left(\frac{a_1^p+\ldots+a_n^p}{n}\right)^{1/p} = f(p)$.

Let $0 < p < q$. By Holder's Inequality (with conjugate exponents $\frac{q}{q-p}$ and $\frac{q}{p}$):
$$\mathbb{E}X^p = \mathbb{E}[1 \cdot X^p] \leq \mathbb{E}\left[1^{\frac{q}{q-p}}\right]^{\frac{q-p}{q}}\mathbb{E}[(X^p)^{q/p}]^{p/q} = (\mathbb{E}X^q)^{p/q}$$Raising both sides to the power $\frac{1}{p}$, this rearranges to $f(p) \leq f(q)$, as desired.

STEP 2: The AM-GM Inequality Holds

We have that $f(p) \leq f(1) = \frac{a_1+\ldots+a_n}{n}$ for all $0 < p < 1$. Now send $p \to 0^+$. It follows by monotonicity that:
$$\frac{a_1+\ldots+a_n}{n} \geq \lim_{p \to 0^+} f(p)$$We now compute the limit (spoiler alert: It's the geometric mean).

$$\lim_{x \to 0^+} \left(\frac{a_1^x+\ldots+a_n^x}{n}\right)^{1/x} = \exp\left(\lim_{x \to 0^+} \frac{\log(a_1^x+\ldots+a_n^x) - \log n}{x} \right)$$Apply L'Hopital:
$$= \exp\left(\lim_{x \to 0^+} \frac{a_1^x\log(a_1) + \ldots + a_n^x\log(a_n)}{a_1^x+\ldots+a_n^x}\right)$$This limit clearly exists, so the application of L'Hopital was justified.
$$= \exp\left(\frac{\log(a_1) + \ldots + \log(a_n)}{n}\right)$$$$= \exp\left(\log(\sqrt[n]{a_1\ldots a_n})\right) = \sqrt[n]{a_1\ldots a_n}$$It happens to be that the limit is the geometric mean, and this proves AM-GM. $\square$
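To illustrate Step 1 and the limit numerically (again, not part of the proof), here is a short Python/NumPy sketch with an arbitrary choice of the $a_i$: the power mean $f(p)$ increases with $p$ and approaches the geometric mean as $p \to 0^+$.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])

def f(p):
    return np.mean(a ** p) ** (1 / p)   # the power mean f(p) from Step 1

print([round(f(p), 4) for p in (1e-6, 0.25, 0.5, 1.0)])   # increasing in p
print(round(np.prod(a) ** (1 / len(a)), 4))               # geometric mean: the p -> 0+ limit
```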


Share more proofs if you know any :)
This post has been edited 3 times. Last edited by greenturtle3141, Feb 26, 2022, 6:23 AM


4 Comments

There's a really cool proof using Thermodynamics,
http://www.mit.edu/~ehjin/files/AMGM.pdf

by AOPqghj, Feb 27, 2022, 6:44 PM

Fascinating! Thanks for sharing.

by greenturtle3141, Feb 27, 2022, 7:23 PM

imo, the idea of moving the variables toward the mean is the most intuitive

that's the weirdest inductive proof i've seen lol :laugh:

by depsilon0, Mar 8, 2022, 5:49 PM

I like the sliding proof! Imo it's the most novel, and I've overused the two variable sliding theorem - I've even thought about how AM-GM sort of makes intuitive sense because the equality case sorta maximizes the product, but it was interesting to see the idea formalized into a legitimate proof.

by peace09, Apr 6, 2022, 11:53 PM
