Many Proofs of the AM-GM Inequality
by greenturtle3141, Feb 26, 2022, 5:15 AM
Reading Difficulty: 2-5
Proof by Backward Induction
Step 1: "Base Cases"
Clearly $\frac{a_1}{1} \geq \sqrt[1]{a_1}$, so $n = 1$ is done.
We will need $n = 2$ too. This follows from $(\sqrt{a_1} - \sqrt{a_2})^2 \geq 0$, which rearranges to $\frac{a_1+a_2}{2} \geq \sqrt{a_1a_2}$, as needed.
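As a quick numeric check (example mine, not from the post): with $a_1 = 1$ and $a_2 = 9$,
$$\frac{1+9}{2} = 5 \geq 3 = \sqrt{1 \cdot 9}.$$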
Step 2: "Forward Induction Step"
Assume that we have the AM-GM for $n$ variables. We show the AM-GM for $2n$ variables:
\begin{align*}
\frac{a_1+\cdots+a_{2n}}{2n} &= \frac{\frac{a_1+\cdots+a_n}{n} + \frac{a_{n+1}+\cdots+a_{2n}}{n}}{2} \\
&\geq \frac{\sqrt[n]{a_1\ldots a_n}+\sqrt[n]{a_{n+1}\ldots a_{2n}}}{2} \tag{Assumption} \\
&\geq \sqrt[2n]{a_1\ldots a_{2n}} \tag{2-var AM-GM}
\end{align*}
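For instance (instantiation mine), the case $n = 2$ hands us the four-variable inequality directly from the two-variable one:
$$\frac{a_1+a_2+a_3+a_4}{4} \geq \frac{\sqrt{a_1a_2}+\sqrt{a_3a_4}}{2} \geq \sqrt[4]{a_1a_2a_3a_4}.$$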
Step 3: Backward Induction
Assume that we have the AM-GM for $n$ variables. We show the AM-GM for $n-1$ variables.
Key Idea: "Waste Information" by using the average as one of the variables.
Let $\mu = \frac{a_1+\cdots+a_{n-1}}{n-1}$ be the arithmetic mean of the variables we have. By AM-GM for $n$ variables:
$$\frac{a_1+\cdots+a_{n-1}+\mu}{n} \geq \sqrt[n]{a_1\ldots a_{n-1}\mu}$$
But in fact, the left side is equal to $\mu$ (since $a_1+\cdots+a_{n-1} = (n-1)\mu$)! So this is:
$$\mu \geq \sqrt[n]{a_1\ldots a_{n-1}\mu}$$
Raising both sides to the $n$th power and dividing by $\mu$ gives $\mu^{n-1} \geq a_1\ldots a_{n-1}$, that is:
$$\mu \geq \sqrt[n-1]{a_1\ldots a_{n-1}}$$
Which is precisely the AM-GM for $n-1$ variables. Together, Steps 2 and 3 cover every $n$: forward induction reaches all powers of $2$, and backward induction walks down from there.
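As a concrete instance (mine): taking $n = 4$ with variables $a_1, a_2, a_3$ and $\mu = \frac{a_1+a_2+a_3}{3}$, the chain reads
$$\mu = \frac{a_1+a_2+a_3+\mu}{4} \geq \sqrt[4]{a_1a_2a_3\mu} \implies \mu^4 \geq a_1a_2a_3\mu \implies \mu \geq \sqrt[3]{a_1a_2a_3}.$$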
Proof by Algorithm (My own, but probably done before)
Key Idea: The AM-GM is a statement about how products tend to be maximized when the numbers being multiplied are close together.
Step 1: Sliding Lemma
Suppose $a < b$. We claim that by moving $a$ and $b$ closer together, their product can only increase. Formally, for any $z \in \left[0, \frac{b-a}{2}\right]$, we have that $(a+z)(b-z) \geq ab$.
To show this, note that $(a+z)(b-z) = ab + z(b-a) - z^2$, so it suffices to prove that $z(b-a) - z^2 \geq 0$, or $z \leq b-a$. But this is true by definition of $z$. This proves the claim.
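For example (numbers mine): with $a = 2$, $b = 8$, and $z = 3$,
$$(2+3)(8-3) = 25 \geq 16 = 2 \cdot 8.$$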
Step 2: We can "slide all variables to the equality case"
Suppose $a_1, \ldots, a_n > 0$. A "slide" is given by the following operation: Pick $i \neq j$ with $a_i < a_j$, then replace them with $a_i + z$ and $a_j - z$, where $z \in \left[0, \frac{a_j-a_i}{2}\right]$.
Claim: We can perform slides such that all of the $n$ variables become equal.
Proof: Let $\mu$ be the arithmetic mean of $a_1, \ldots, a_n$. (Note that slides preserve the sum, so $\mu$ never changes.) We write the following algorithm:
1. While there still exists a variable not equal to $\mu$:
2. Let $a_i$ be a variable less than $\mu$, and let $a_j$ be a variable greater than $\mu$.
3. If $\mu - a_i \leq a_j - \mu$, then slide $a_i$ and $a_j$ towards each other by $\mu - a_i$. That is, replace $(a_i, a_j)$ with $(\mu,\ a_j - (\mu - a_i))$.
4. If otherwise $\mu - a_i > a_j - \mu$, slide $a_i$ and $a_j$ towards each other by $a_j - \mu$. That is, replace $(a_i, a_j)$ with $(a_i + (a_j - \mu),\ \mu)$.
5. Loop back to 1.
CLAIM: All steps are well-defined/valid.
REASON: Main point is: is Step 2 always possible? The answer is yes: If not all variables are equal to $\mu$, then surely WLOG some variable is strictly less than $\mu$. But $\mu$ is the average, so there must also be a variable strictly greater than $\mu$. (Steps 3 and 4 are legal slides because their conditions say exactly that the slide amount is at most $\frac{a_j - a_i}{2}$.)
CLAIM: This algorithm terminates.
REASON: Every loop, either Step 3 or Step 4 turns a new variable into $\mu$. So the number of variables equal to $\mu$ is a strictly increasing monovariant that cannot exceed $n$, so the algorithm must terminate.
CLAIM: When the algorithm terminates, all variables are equal, as needed.
REASON: The algorithm terminates only when the condition of Step 1 is not met. This happens exactly when every variable is equal to $\mu$.
This proves the correctness of the algorithm, and thus proves the claim.
Step 3: We're Done!
Consider the quantity $\sqrt[n]{a_1\ldots a_n}$. Now run the algorithm. Each slide can only increase the product, so in particular every slide will only make the geometric mean of the variables increase. When the algorithm terminates, all variables are equal to $\mu$, and the geometric mean could only have gone up, resulting in this inequality:
$$\sqrt[n]{a_1\ldots a_n} \leq \sqrt[n]{\mu\ldots\mu} = \mu$$
...but this is the AM-GM inequality. 
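Here's a minimal Python sketch of the algorithm (illustration mine; the post has no code). It pins one variable to $\mu$ per loop, exactly as in Steps 3 and 4, and records the geometric mean after each slide so you can watch it climb:

```python
# Sketch of the sliding algorithm (illustration mine, not from the post).

def geometric_mean(a):
    """Geometric mean of a list of positive numbers."""
    prod = 1.0
    for x in a:
        prod *= x
    return prod ** (1.0 / len(a))

def slide_to_mean(a, tol=1e-12):
    """Run the proof's algorithm; return the history of geometric means.
    Each slide pins at least one variable exactly to the mean mu, so the
    loop runs at most n - 1 times."""
    a = list(a)
    mu = sum(a) / len(a)  # slides preserve the sum, so mu never changes
    history = [geometric_mean(a)]
    while any(abs(x - mu) > tol for x in a):
        i = min(range(len(a)), key=lambda t: a[t])  # a variable below mu
        j = max(range(len(a)), key=lambda t: a[t])  # a variable above mu
        if mu - a[i] <= a[j] - mu:            # Step 3: pin a_i to mu
            a[i], a[j] = mu, a[j] - (mu - a[i])
        else:                                 # Step 4: pin a_j to mu
            a[i], a[j] = a[i] + (a[j] - mu), mu
        history.append(geometric_mean(a))
    return history

print(slide_to_mean([1.0, 2.0, 9.0]))
# ~[2.6207, 3.6342, 4.0] -- nondecreasing, ending at the arithmetic mean.
```

On $[1, 2, 9]$ (mean $4$), two slides suffice: $[1,2,9] \to [4,2,6] \to [4,4,4]$.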
Proof by Convexity
Since $\log$ is concave, we may apply Jensen's Inequality:
$$\log\left(\frac{a_1+\ldots+a_n}{n}\right) \geq \frac{\log(a_1)+\ldots+\log(a_n)}{n}$$
Manipulating this:
$$\log\left(\frac{a_1+\ldots+a_n}{n}\right) \geq \log(\sqrt[n]{a_1\ldots a_n})$$
Exponentiating both sides (valid since $\exp$ is increasing):
$$\frac{a_1+\ldots+a_n}{n} \geq \sqrt[n]{a_1\ldots a_n}$$
Tada. 
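A quick numeric check of this chain (mine):

```python
import math

a = [1.0, 2.0, 9.0]
am = sum(a) / len(a)                                  # arithmetic mean
gm = math.exp(sum(math.log(x) for x in a) / len(a))   # exp of the mean log
print(am, gm, am >= gm)  # 4.0 2.6207... True
```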
Proof by Lagrange Multipliers
For this one, let $k > 0$ be a constant. We need to show that the maximum value of $a_1a_2\cdots a_n$, under the constraint $a_1+\cdots+a_n = k$, is given by $(k/n)^n$. (AM-GM then follows by taking $n$th roots: $\sqrt[n]{a_1\ldots a_n} \leq k/n$, the arithmetic mean.)
That is, we aim to maximize the function $f(x_1,\ldots,x_n) = x_1x_2\cdots x_n$ over $[0,\infty)^n$, under the constraint $g(x_1,\ldots,x_n) = x_1+\cdots+x_n = k$.
Step 1: A maximum exists
Consider the set $S = \{(x_1,\ldots,x_n) \in [0,\infty)^n : x_1+\cdots+x_n = k\}$.
CLAIM: $S$ is bounded.
PROOF: Clearly, in $S$, we have $0 \leq x_i \leq k$ for all $i$. 
CLAIM: $S$ is closed.
PROOF: If you want, you can manually show that the complement is open. Alternatively, it is the graph of a continuous function $x_n = k - (x_1+\cdots+x_{n-1})$ over a closed domain $\{(x_1,\ldots,x_{n-1}) \in [0,k]^{n-1} : x_1+\cdots+x_{n-1} \leq k\}$, and such graphs are closed. 
Since $S$ is closed and bounded, it is compact by Heine-Borel. Since $f$ is continuous, it follows that a maximum exists in $S$. Thus there must exist a maximum with respect to the constraint.
Step 2: Computing the Maximum
Suppose the maximum occurs at some $(a_1,\ldots,a_n) \in S$. The maximum cannot occur where some $a_i = 0$, since there the product is $0$, which is beaten by $f(k/n,\ldots,k/n) = (k/n)^n > 0$; hence every $a_i > 0$. Since $f$ and $g$ are of class $C^1$, and $\nabla g = (1,\ldots,1)$ never vanishes (so that $\{\nabla g\}$ is linearly independent), we have by the Theorem of Lagrange Multipliers:
$$\nabla f = \lambda \nabla g$$
So:
$$\prod_{j \neq i} a_j = \lambda \qquad \text{for each } i = 1,\ldots,n$$
This is a system of $n$ equations. Multiply all of them! This gets us:
$$(a_1a_2\cdots a_n)^{n-1} = \lambda^n$$
It is necessarily safe to raise each side to the power $\frac{1}{n-1}$ because both sides are positive ($\lambda = \prod_{j \neq i} a_j > 0$), so:
$$a_1a_2\cdots a_n = \lambda^{\frac{n}{n-1}}$$
Dividing this by each of the $n$ equations in the system from before, we solve each variable in terms of $\lambda$:
$$a_i = \frac{a_1a_2\cdots a_n}{\prod_{j \neq i} a_j} = \frac{\lambda^{\frac{n}{n-1}}}{\lambda} = \lambda^{\frac{1}{n-1}}$$
We now solve for $\lambda$. Using the constraint $a_1+\cdots+a_n = k$, we see that:
$$n\lambda^{\frac{1}{n-1}} = k \implies \lambda^{\frac{1}{n-1}} = \frac{k}{n}$$
Thus we have solved for each variable independently of $\lambda$:
$$a_i = \frac{k}{n} \quad \text{for all } i$$
Finally, we see that the extremal value found by this method is $f(k/n,\ldots,k/n) = (k/n)^n$, which is the claimed maximum value.
We are done because (1) we showed that a maximum exists, and (2) if there is a maximum, then it must be $(k/n)^n$, because we only found one candidate from the Lagrange Multipliers. Thus the maximum that we knew existed and the proposed maximum found must be one and the same. 
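A quick numerical sanity check of this conclusion (my own sketch, assuming `numpy` and `scipy` are available; neither appears in the post):

```python
import numpy as np
from scipy.optimize import minimize

n, k = 4, 8.0
rng = np.random.default_rng(0)

# Minimize the negative product subject to the sum constraint (SLSQP).
res = minimize(
    lambda x: -np.prod(x),
    x0=np.full(n, k / n) + rng.normal(0, 0.1, n),  # perturbed start
    bounds=[(0.0, k)] * n,
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - k}],
)
print(res.x)     # ~[2, 2, 2, 2], i.e. k/n in every coordinate
print(-res.fun)  # ~16.0 = (k/n) ** n
```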
Proof using Probability
Define $X$ to be a random variable taking each of the values $a_1, \ldots, a_n$ with probability $\frac{1}{n}$, and define $f(p) = (\mathbb{E}X^p)^{1/p}$ for $p > 0$.
STEP 1: $f$ is increasing.
Let $0 < p < q$. Then we want to show that $(\mathbb{E}X^p)^{1/p} \leq (\mathbb{E}X^q)^{1/q}$.
Let $r = \frac{q}{p} > 1$, with conjugate exponent $\frac{q}{q-p}$. By Holder's Inequality:
$$\mathbb{E}X^p = \mathbb{E}[1 \cdot X^p] \leq \mathbb{E}\left[1^{\frac{q}{q-p}}\right]^{\frac{q-p}{q}}\mathbb{E}[(X^p)^{q/p}]^{p/q} = (\mathbb{E}X^q)^{p/q}$$
This rearranges to $(\mathbb{E}X^p)^{1/p} \leq (\mathbb{E}X^q)^{1/q}$, as desired.
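Numerically (check mine), $f(p) = (\mathbb{E}X^p)^{1/p}$ is indeed increasing in $p$; with $X$ uniform on $\{1, 2, 9\}$:

```python
def power_mean(a, p):
    """(E X^p)^(1/p) for X uniform on the list a."""
    return (sum(x ** p for x in a) / len(a)) ** (1.0 / p)

a = [1.0, 2.0, 9.0]
print([round(power_mean(a, p), 4) for p in (0.1, 0.5, 1.0, 2.0)])
# [2.7347, 3.2571, 4.0, 5.3541] -- increasing in p
```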
STEP 2: The AM-GM Inequality Holds
We have that $f(p) \leq f(1) = \mathbb{E}X = \frac{a_1+\cdots+a_n}{n}$ for all $0 < p \leq 1$. Now send $p \to 0^+$. It follows by monotonicity that:
$$\lim_{p \to 0^+}(\mathbb{E}X^p)^{1/p} \leq \frac{a_1+\cdots+a_n}{n}$$
We now compute the limit (spoiler alert: It's the geometric mean).
$$\lim_{p \to 0^+}(\mathbb{E}X^p)^{1/p} = \lim_{p \to 0^+}\exp\left(\frac{\log \mathbb{E}X^p}{p}\right)$$
Apply L'Hopital:
$$= \exp\left(\lim_{p \to 0^+} \frac{\mathbb{E}[X^p \log X]}{\mathbb{E}X^p}\right) = \exp\left(\mathbb{E}\log X\right)$$
This limit clearly exists, so the application of L'Hopital was justified.
$$= \exp\left(\frac{\log a_1 + \cdots + \log a_n}{n}\right) = \exp\left(\log(\sqrt[n]{a_1\ldots a_n})\right) = \sqrt[n]{a_1\ldots a_n}$$
It happens to be that the limit is the geometric mean, and this proves AM-GM. 
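And the limit itself can be eyeballed numerically (check mine): the power mean sinks toward $\sqrt[3]{18} \approx 2.6207$ as $p \to 0^+$:

```python
a = [1.0, 2.0, 9.0]
for p in (1.0, 0.1, 0.01, 0.001):
    pm = (sum(x ** p for x in a) / len(a)) ** (1.0 / p)
    print(p, round(pm, 4))
print("GM:", round((1.0 * 2.0 * 9.0) ** (1.0 / 3.0), 4))  # 2.6207
```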
Share more proofs if you know any