'''Muirhead's Inequality''' states that if a sequence <math>p=(p_1,p_2,\ldots,p_n)</math> [[Majorization|majorizes]] a sequence <math>q=(q_1,q_2,\ldots,q_n)</math>, then for any positive reals <math>x_1,x_2,\ldots,x_n</math>:
<cmath>\sum_{\text{sym}} {x_1}^{p_1}{x_2}^{p_2}\cdots {x_n}^{p_n}\geq \sum_{\text{sym}} {x_1}^{q_1}{x_2}^{q_2}\cdots {x_n}^{q_n}</cmath>
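Here <math>\sum_{\text{sym}}</math> denotes the symmetric sum, taken over all <math>n!</math> permutations of the variables. For example, for <math>n=2</math> and exponents <math>(p_1,p_2)</math>,
<cmath>\sum_{\text{sym}} {x_1}^{p_1}{x_2}^{p_2} = {x_1}^{p_1}{x_2}^{p_2}+{x_2}^{p_1}{x_1}^{p_2},</cmath>
and for <math>n=3</math> the symmetric sum has <math>3!=6</math> terms.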
== Example ==
The inequality is easier to understand given an example. Since the sequence <math>(5,1)</math> majorizes <math>(4,2)</math> (as <math>5>4, 5+1=4+2</math>), Muirhead's inequality states that for any positive <math>x,y</math>,
<cmath>x^5y^1+y^5x^1\geq x^4y^2+y^4x^2</cmath>
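As a quick numerical check, taking <math>x=2</math> and <math>y=1</math> gives
<cmath>2^5\cdot 1+1^5\cdot 2=34\geq 20=2^4\cdot 1^2+1^4\cdot 2^2,</cmath>
consistent with the inequality.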
== Usage on Olympiad Problems ==
A common [[Brute forcing|bruteforce]] technique with inequalities is to clear denominators, multiply everything out, and apply Muirhead's or [[Schur's Inequality|Schur's]]. However, it is worth noting that any inequality that can be proved directly with Muirhead can also be proved using the [[Arithmetic mean-geometric mean | Arithmetic Mean-Geometric Mean]] inequality. In fact, [[International Mathematics Olympiad | IMO]] gold medalist [[Thomas Mildorf]] says it is unwise to use Muirhead in an Olympiad solution; one should use an application of AM-GM instead. Thus, it is suggested that Muirhead be used only to verify that an inequality ''can'' be proved with AM-GM before demonstrating the full AM-GM proof.
==Example revisited==
To understand the proof further below, let's also prove the inequality <math>x^5y^1+y^5x^1\geq x^4y^2+y^4x^2</math> from the [[AM-GM Inequality#Weighted AM-GM Inequality|(weighted) AM-GM inequality]].
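
For reference, the two-variable weighted AM-GM inequality used below states that for positive reals <math>a,b</math> and weights <math>w_1,w_2>0</math> with <math>w_1+w_2=1</math>,
<cmath>w_1a+w_2b\geq a^{w_1}b^{w_2}.</cmath>
Here it will be applied with weights <math>w_1=3/4</math> and <math>w_2=1/4</math>.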
<cmath>
\begin{align*}
x^5y^1+y^5x^1&=\frac{3}{4}\left(x^5y^1+y^5x^1\right)+\frac{1}{4}\left(x^5y^1+y^5x^1\right)\\
&=\left(\frac{3}{4}x^5y^1+\frac{1}{4}x^1y^5\right)+\left(\frac{3}{4}y^5x^1+\frac{1}{4}y^1x^5\right)\\
&\geq x^{\frac{3}{4}\cdot 5+\frac{1}{4}\cdot 1}y^{\frac{3}{4}\cdot 1+\frac{1}{4}\cdot 5} + y^{\frac{3}{4}\cdot 5+\frac{1}{4}\cdot 1}x^{\frac{3}{4}\cdot 1+\frac{1}{4}\cdot 5}\\
&=x^4y^2+y^4x^2
\end{align*}
</cmath>
where the step with the inequality consists of applying the weighted AM-GM inequality to each expression in parentheses.

The coefficients <math>3/4</math> and <math>1/4</math> and the grouping of terms according to the permutations <math>(5,1)</math> and <math>(1,5)</math> of the tuple of exponents <math>(5,1)</math> come from the following arithmetic facts.

Note that <math>\begin{bmatrix}4\\2\end{bmatrix}=T\begin{bmatrix}5\\1\end{bmatrix}</math>, where the matrix <math>T=\begin{bmatrix}3/4&1/4\\1/4&3/4\end{bmatrix}</math>. This matrix has non-negative entries and all its rows and columns add up to <math>1</math>. Such matrices are called ''doubly stochastic matrices''. We can write <math>T=\frac{3}{4}\begin{bmatrix}1&0\\0&1\end{bmatrix}+\frac{1}{4}\begin{bmatrix}0&1\\1&0\end{bmatrix}</math>. Matrices like <math>P_1=\begin{bmatrix}1&0\\0&1\end{bmatrix}</math> and <math>P_2=\begin{bmatrix}0&1\\1&0\end{bmatrix}</math>, with exactly one <math>1</math> in each row and each column and <math>0</math> everywhere else, are called ''permutation matrices''. Observe how <math>P_1\begin{bmatrix}5\\1\end{bmatrix}=\begin{bmatrix}5\\1\end{bmatrix}</math> and <math>P_2\begin{bmatrix}5\\1\end{bmatrix}=\begin{bmatrix}1\\5\end{bmatrix}</math> give us all permutations of <math>\begin{bmatrix}5\\1\end{bmatrix}</math>.

So, we can write
<cmath>
\begin{align*}
\begin{bmatrix}4\\2\end{bmatrix}&=T\begin{bmatrix}5\\1\end{bmatrix}\\
&=\left(\frac{3}{4}\begin{bmatrix}1&0\\0&1\end{bmatrix}+\frac{1}{4}\begin{bmatrix}0&1\\1&0\end{bmatrix}\right)\begin{bmatrix}5\\1\end{bmatrix}\\
&=\frac{3}{4}\begin{bmatrix}5\\1\end{bmatrix}+\frac{1}{4}\begin{bmatrix}1\\5\end{bmatrix}
\end{align*}
</cmath>
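Evaluating the last expression confirms the claim:
<cmath>\frac{3}{4}\begin{bmatrix}5\\1\end{bmatrix}+\frac{1}{4}\begin{bmatrix}1\\5\end{bmatrix}=\begin{bmatrix}\frac{15}{4}+\frac{1}{4}\\[2pt] \frac{3}{4}+\frac{5}{4}\end{bmatrix}=\begin{bmatrix}4\\2\end{bmatrix}.</cmath>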
+ | |||
+ | A single inequality that follows from Muirhead's inequality could be proven from the AM-GM inequality in multiple ways. | ||
+ | Another way is | ||
+ | |||
<cmath>
\begin{align*}
\frac{x^4+x^4+x^4+y^4}{4}&\geq\sqrt[4]{x^{12}y^4}=x^3y\\
\frac{x^4+y^4+y^4+y^4}{4}&\geq\sqrt[4]{x^4y^{12}}=xy^3
\end{align*}
</cmath>
Adding these,
<cmath>x^4+y^4\geq x^3y+xy^3</cmath>
Multiplying both sides by <math>xy</math> (as both <math>x</math> and <math>y</math> are positive),
<cmath>x^5y+xy^5\geq x^4y^2+x^2y^4</cmath>
as desired.

==Proof==
Given <math>p=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n</math>, let's write
<cmath>[p]=\frac{1}{n!}\sum_{\text{sym}}x_1^{p_1}x_2^{p_2}\dotsm x_n^{p_n}</cmath>
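For example, with <math>n=2</math> and <math>p=(5,1)</math>,
<cmath>[(5,1)]=\frac{1}{2!}\left(x_1^5x_2^1+x_2^5x_1^1\right),</cmath>
which, up to the factor <math>\frac{1}{2}</math>, is the left-hand side of the inequality in the example above (with <math>x_1=x</math> and <math>x_2=y</math>).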
Let's write <math>p\succ q</math> to denote that <math>p</math> majorizes <math>q=(q_1,q_2,\ldots,q_n)\in\mathbb{R}^n</math>. This notation is useful in the context of Muirhead's inequality; in particular, the theorem can be stated as: <math>p\succ q</math> implies <math>[p]\geq [q]</math>.

Even though <math>p</math> and <math>q</math> are written above as <math>n</math>-tuples, it will be convenient to think of them as column vectors (written vertically).

The first goal of the proof is to show that there is a ''doubly stochastic matrix'' <math>D</math> such that <math>q=Dp</math>. Then the [https://en.wikipedia.org/wiki/Doubly_stochastic_matrix#Birkhoff%E2%80%93von_Neumann_theorem Birkhoff-von Neumann theorem] tells us that there are <math>c_1,c_2,\ldots,c_{n!}\geq0</math> such that <math>\sum_{i=1}^{n!}c_i=1</math> and <math>D=\sum_{i=1}^{n!}c_iP_i</math>, where the <math>P_i</math> are ''permutation matrices''.
Note that once we have such an expression for <math>D</math>, then
<cmath>
\begin{align*}
[p]&=1\cdot [p]\\
&=\left(\sum_{i=1}^{n!}c_i\right)[p]\\
&=\sum_{i=1}^{n!}c_i[p]\\
&=\sum_{i=1}^{n!}c_i[P_ip]\\
&\geq\left[\sum_{i=1}^{n!}c_iP_ip\right]\\
&=[Dp]\\
&=[q]
\end{align*}
</cmath>
where the equality <math>[p]=[P_ip]</math> holds because permuting the exponents does not change the symmetric sum, and the step with the inequality (as in the example above) consists of applying the weighted AM-GM inequality, grouping the terms from each element of the sum <math>\sum_{i=1}^{n!}</math> corresponding to each permutation of the variables <math>x_1,x_2,\ldots,x_n</math>. Remember that the brackets <math>[\cdot]</math> also represent a summation <math>\frac{1}{n!}\sum_{\text{sym}}</math>.

Before showing how to produce the matrix <math>D</math>, note that the computational side of the Birkhoff-von Neumann theorem is also useful, since it gives us an algorithm to compute <math>c_1,c_2,\ldots,c_{n!}</math>.

To produce the matrix <math>D</math>, we construct a sequence <math>p=r_0, r_1, r_2, \ldots, r_k=q</math> of <math>n</math>-tuples (or rather column vectors) such that <math>p\succ r_1\succ r_2\succ\cdots\succ r_k=q</math>, each <math>r_i</math> has at least one more component equal to the corresponding component of <math>q</math> than its predecessor <math>r_{i-1}</math>, and <math>r_{i}=T_ir_{i-1}</math> for some ''doubly stochastic matrix'' <math>T_i</math>. Then we can take <math>D=T_k\dotsm T_2T_1</math> to get <math>q=Dp</math>, and since a product of doubly stochastic matrices is again doubly stochastic, we get what we wanted.

In the case <math>p=q</math> there is nothing to prove, as Muirhead's inequality is then an equality. So, assume that <math>p\succ q</math> and <math>p\neq q</math>.

Define

<cmath>
\begin{align*}
j&=\min\{i:\ p_i>q_i\}\\
k&=\min\{i:\ p_i<q_i\}\\
b&=\frac{p_j+p_k}{2}\\
d&=\frac{p_j-p_k}{2}\\
c&=\max\{|q_j-b|, |q_k-b|\}
\end{align*}
</cmath>
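For instance, for <math>p=(5,1)</math> and <math>q=(4,2)</math> from the example above, this gives <math>j=1</math>, <math>k=2</math>, <math>b=3</math>, <math>d=2</math>, and <math>c=\max\{|4-3|,|2-3|\}=1</math>.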
+ | |||
+ | Define the matrix <math>T=T_1</math> with entries | ||
+ | |||
+ | <cmath> | ||
+ | \begin{align*} | ||
+ | T_{j,j}&=\frac{d+c}{2d}\\ | ||
+ | T_{k,k}&=\frac{d+c}{2d}\\ | ||
+ | T_{j,k}&=\frac{d-c}{2d}\\ | ||
+ | T_{k,j}&=\frac{d-c}{2d} | ||
+ | \end{align*} | ||
+ | </cmath> | ||
+ | all other diagonal entries <math>T_{i,i}</math>, for <math>i\notin\{j,k\}</math> are equal to <math>1</math> and the remaining entries equal to <math>0</math>. | ||
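
Continuing the example with <math>p=(5,1)</math> and <math>q=(4,2)</math>: since <math>d=2</math> and <math>c=1</math>, we get <math>T_{1,1}=T_{2,2}=\frac{2+1}{4}=\frac{3}{4}</math> and <math>T_{1,2}=T_{2,1}=\frac{2-1}{4}=\frac{1}{4}</math>, which is exactly the doubly stochastic matrix <math>T</math> from the example above; indeed <math>r_1=Tp=\begin{bmatrix}4\\2\end{bmatrix}=q</math>, so here the single matrix <math>D=T_1</math> already works.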
It is straightforward to verify from the definition that this matrix is doubly stochastic, that <math>r_1=Tp</math>, that all components of <math>r_1</math> other than those at positions <math>j</math> and <math>k</math> are equal to the corresponding components of <math>p</math>, and that at least one of the components at position <math>j</math> or <math>k</math> becomes equal to the corresponding component of <math>q</math>. By repeating this construction with <math>r_1,r_2,\ldots</math>, we get the matrices <math>T_2,T_3,\ldots,T_k</math>.
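
This construction is effectively an algorithm. Below is a minimal sketch of it in Python, assuming <math>p</math> and <math>q</math> are given as non-increasing sequences of rationals with equal sums and <math>p\succ q</math>; the function name <code>t_transform_chain</code> is just an illustrative label, not standard terminology.

<pre>
from fractions import Fraction

def t_transform_chain(p, q):
    # Assumes p and q are non-increasing, have equal sums, and p majorizes q.
    r = [Fraction(x) for x in p]
    q = [Fraction(x) for x in q]
    n = len(r)
    matrices = []
    while r != q:
        j = min(i for i in range(n) if r[i] > q[i])  # first index where r exceeds q
        k = min(i for i in range(n) if r[i] < q[i])  # first index where r falls short
        b = (r[j] + r[k]) / 2
        d = (r[j] - r[k]) / 2
        c = max(abs(q[j] - b), abs(q[k] - b))
        # T is the identity except in rows/columns j and k
        T = [[Fraction(int(u == v)) for v in range(n)] for u in range(n)]
        T[j][j] = T[k][k] = (d + c) / (2 * d)
        T[j][k] = T[k][j] = (d - c) / (2 * d)
        matrices.append(T)
        r[j], r[k] = b + c, b - c  # the effect of applying T to r
    return matrices

# Example from the article: (5, 1) majorizes (4, 2).
# Prints the single matrix with 3/4 on the diagonal and 1/4 off it.
for T in t_transform_chain((5, 1), (4, 2)):
    print([[str(entry) for entry in row] for row in T])
</pre>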
==See also==
*[[PaperMath’s sum]]

[[Category:Algebra]]
[[Category:Inequalities]]