Read, Dead, Redemption

by Ankoganit, Jul 30, 2021, 8:22 AM

Once a blog has been inactive for three years, it's legally presumed dead. Clocking in at a disgraceful three years and three months and change, this revival could then easily count as necromancy. I would make excuses and try to explain this long hiatus, but seeing as this is my own blog and I am free to do as I please, that would be an exercise in futility. First amendment, kiddo: if freedom of speech is protected, so is freedom of lack of speech.

Anyway, this revival was certainly not for naught. Recently I taught a class at MOOC (Mathematical Olympiad Orientation Camp) on an application of Cauchy's functional equation to a certain class of tiling problems, and then subsequently turned that into a blog post for the Monsoon Math Camp blog. If any part of the last two sentences piques your interest, go check it out! If not, that's also perfectly understandable; just don't come whining "revive" next time this goes dormant.

A small note: there seem to be some formatting troubles in the blog post linked above. I suspect they'll get fixed soon, but till then try not to get distracted by dysfunctional paragraph tags. Also, the penultimate paragraph is somehow missing: but that's not all bad, it turns a small part of the proof into an exercise left to the reader. Of course, these will probably get corrected at some future point in time, making this paragraph void and pointless. But hey, that's life.
This post has been edited 2 times. Last edited by Ankoganit, Jan 19, 2024, 10:41 PM

The secret liFE of polygons

by Ankoganit, Apr 23, 2018, 3:16 AM

So apparently I promised in my last post to write some stuff about my KöMaL problem. While I'm usually not one to keep promises (just look at the second promise in that post), it'd be a shame if I failed at all of them. So here I am, typing up stuff just to live up to the expectations of my own past self. Screw you, past me.

Anyway, here's the problem my past self wished me to discuss:
For which pairs $(m,n)$ does there exist an injective function $f:\mathbb R^2\to\mathbb R^2$ under which the image of every regular $m-$gon is a regular $n-$gon? (Note that $m,n\ge 3$, and that by a regular $N-$gon we mean the union of the boundary segments, not the closed polygonal region.)

Rather than simply writing out the solution, I'll indulge in a brief walk-through demonstrating my thought-process while thinking this up. To me, this was a memorable journey (as are all good problems I happen to solve); to anyone else, it may be a rather boring read. If you're just looking for the final write-up and don't care about my semi-coherent ramblings, get out of my blog (I mean, scroll to the bottom).

People smarter than me will tell you that most olympiad problems are created via one of two processes: working backwards from the key idea of the solution, and working forwards from the statement of the problem (or some variation of it). This particular problem was constructed in the second manner (I suspect most combinatorics problems are? I don't know really, though).

So there I was, on a fateful evening, high on my spirits (cough cough), wondering "dude, y'know, like, what if we could, like, map the squares to circles? Like, whoa!" After all, that's how all great ideas are born. I never even had the slightest idea that little shimmer of thought was going to change my life forever. (It turns out I was right, though. Oh well.)

After I recovered from my momentary delirium that followed my visionary epiphany, I started to think this through. No, really, what if we had a function on the Euclidean plane that turned squares into circles? I'm not exactly sure why, but I was thinking "circle" as the "boundary of the circle" in spite of the other countless options like an open disk, a closed disk and what not. I guessed such a function shouldn't exist, because it somehow sounds a lot like squaring the circle (actually it's the other way round but shh).

But wait. What if we mapped all points to a single point? Then the image of any square would be a circle with zero radius! A rather silly example, no doubt, but nothing in our current rule-set forbids that. Darn these trivial cases, ruining everything since 300 BC.

To bypass this edge case, let's add a little condition: our function can't map two different points to the same one; in other words, it has to be injective. Now we're talking; it doesn't look like there will be another counter-example, trivial or not. Let's try to prove there isn't one.

Unfortunately, this is just too easy; the injectivity condition easily implies that the intersection of the images of two sets has the same cardinality as the intersection of those sets. Now we can consider two different squares that share a side; then their image circles must have infinitely many common points, which means they are actually the same circle, a contradiction to injectivity (do you see why?).

OK, let's try flipping the problem around; what about this version instead:
Quote:
Does there exist an injective function $f:\mathbb R^2\to\mathbb R^2$ under which the image of any circle (that is, the set of points on its boundary) is a square (that is, the set of points on the boundary of a square)?

This is slightly more non-trivial, but not too much. The earlier proof used a special configuration, namely one involving two squares glued together. The proof of this one involves a different but fairly natural configuration: lots of circles touching each other at a single point. The idea here is that tangency is an intrinsic property of circles, but squares don't handle it too well, so no function should be able to preserve the "structure" that tangency involves, especially if there are lots of tangent circles.

[asy]size(6cm);
for(int i=1;i<6;++i){
draw(circle((i,0),i),blue);}
dot((11,0)^^(12,0)^^(13,0),blue);
[/asy]
Try drawing a set of squares every two of which have exactly one point in common; you'll quickly see you can't get very many (the details are kinda boring; I think you can draw at most three such squares, unless I'm misremembering). This easily proves what we wanted.

So this problem didn't quite cut it either. At this point, you have two options (insert red-pill-blue-pill reference here): trashcan the idea and ragequit, or invent the most outrageously general formulation and see how deep the rabbit-hole goes. No prize for guessing which one I chose.

The problem we have at hand right now is:
Quote:
For which pairs $(m,n)$ does there exist an injective function $f:\mathbb R^2\to\mathbb R^2$ under which the image of every regular $m-$gon is a regular $n-$gon? (Note that $m,n\ge 3$, and that by a regular $N-$gon we mean the union of the boundary segments, not the closed polygonal region.)
(Yeah, exactly the statement I wrote at the top. Just felt like copy-pasting it once more.)

This was when I started investigating (technical term for messing around with) the general version with $m-$gons and $n-$gons; making conjectures (technical term for wild guesses), trying different approaches (technical term for throwing random things at the problem), looking at the problem from different perspectives (technical term for taking a nap and forgetting all about the progress made so far), trying to algebraize the situation (technical term for suicide) -- you know, the usual techniques.

The key idea is to find some aspect in which the pre-image and the image are fundamentally different. In the first example, it was the fact that squares can be 'glued' together without being the same. In the second, we exploited the ability of circles to touch each other without the picture looking too awkward. But here we're in a pickle: there's no intrinsic difference between two regular polygons with different numbers of sides. Well, except for the number of sides, I guess. We want the equation involving two numbers ($m=n$) to drop out of some statement involving shapes, so a natural choice is to create a configuration with an intrinsic numerical property. One natural idea that works is this: keep putting $m-$gons around a central point until the cycle closes on itself, obtaining a floral pattern. Here's a neat illustration for $m=5$:
[asy]size(4.5cm);
guide P=polygon(5);
for(int i=0; i<10;++i){draw(rotate(i*108,dir(90))*P);}
draw(rotate(0*108,dir(90))*P,blue);draw(rotate(1*108,dir(90))*P,blue);
label("A flower of 5-gons",1.3S);
[/asy]
Why is this interesting? Apart from the fact that it looks pretty (and is fun to draw in asymptote), the number of polygons in a flower depends on the number of sides of the polygon and nothing else, which is neat because we've found a way to encode the number of sides in a diagram. It's not too hard to calculate the number of polygons for general $m$ (it's some pesky piecewise function; I'll omit the details here), and even better, this count is injective as a function of the number of sides. Now if we can show that the image of an $m-$gonal flower is an $n-$gonal flower, we'll be done immediately.
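
If you don't feel like chasing that piecewise function by hand, here's a quick Python sketch (mine, not part of the solution) that computes the flower size, under the convention that the flower is generated by repeatedly rotating the polygon about a shared vertex through its interior angle, exactly as the Asymptote code above does:

from fractions import Fraction

def flower_size(m):
    # smallest k such that k rotations by the interior angle
    # 180(m-2)/m degrees add up to a whole number of full turns,
    # i.e. the smallest k making k*(m-2)/(2m) an integer
    return Fraction(m - 2, 2 * m).denominator

sizes = {m: flower_size(m) for m in range(3, 30)}
print(sizes[5])                                  # 10, matching the picture
assert len(set(sizes.values())) == len(sizes)    # injective on this range

(If you trust the snippet, the pesky function comes out to $2m$ for odd $m$, $m$ when $4\mid m$, and $m/2$ when $m\equiv 2\pmod 4$; these land in disjoint residue classes mod $4$, hence the injectivity.)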

So we now need to show that when two polygons are 'glued' together, so are their images. One thing that'd help is proving that a vertex of a polygon maps to a vertex of its image. To prove this, we need to "plug in" a special configuration; further, we somehow need to exploit the fact that a vertex is fundamentally different from a point in the interior of a side. It's not too hard to hit upon the configuration that works (except for some edge cases like $n=3$, which need to be treated separately). Using these ideas and some other minor adjustments, one comes up with the following proof:

Solution

Turns out, the whole business with flowers and stuff was actually unnecessary; you can directly prove edges map to edges after claim 3 with a little effort. Oh well, at least the flowers look pretty.

The post you should bookmark

by Ankoganit, Jan 29, 2018, 3:36 PM

...Another semi-long hiatus. Sigh. I have no better excuse than to say I forgot this blog existed (believe me). The shoutbox tells me my sincere well-wishers have tried to remind me, but sadly forgetting the blog means forgetting the shoutbox too, and I don't get notifications for shouts. So here we are.

Also, I have another confession to make; this post is going to be about books, and I actually googled 'good book puns' before making up a title. Guess I couldn't come up with anything novel, hehe. Feel free to groan.

What follows was originally meant to be my 1000th post on AoPS, but it didn't happen for various reasons (don't ask why, I have done enough of explanations and confessions already, jeez).

So I like books, and recently I got to read a bunch of fun books that happen to be connected to math somehow. Specifically, they aren't math books; you probably won't learn any interesting math from them (with a few exceptions), but they are fun to read, so I guess it won't hurt to give them a try.

Trolling Euclid, Thomas Wright

There's a quote often (mis?)attributed to Einstein: "Everything should be made as simple as possible, but not simpler." The author of Trolling Euclid, Thomas Wright, takes a bunch of 'hot' topics in number theory, makes them way simpler than they are, and the result is pure gold. What do you have to say about that, Mr. Einstein (or whoever said that)?

The topics range from the Riemann Hypothesis to the Collatz Conjecture, the humor level ranges from downright hilarious to...well, downright hilarious. This is written to be accessible to an average person with an average attitude towards math, but having some basic knowledge of math makes it all the more interesting. And since you're somehow here, I guess you do know some math. Unless you're a spambot. In which case, go away. (Note that I didn't say "and never come back". I can't miss the chance of throwing you into an infinite loop.)

If you want a sampler, give this article by the same author a go.

Here are some notable quotes from the book. Even though I put them in hide tags, they aren't really spoilers (it's not a murder mystery duh) but they are more enjoyable in-context, so you may skip these and jump straight to the book.

Quotes

Mathematics Made Difficult, Carl E Linderholm

The polar opposite of the above book, this one takes apparently simple things and makes them esoteric and arcane. Once again, the result is pure gold. There are four chapters, dealing with arithmetic, factors and fractions, algebra and topology-and-geometry. Each topic is dissected meticulously and discussed with precision; too much precision, in fact. It's probably a good idea to read some abstract algebra and category theory to completely understand all that's in the book, but it's perfectly enjoyable without any of that; in fact, ignorance about category theory makes the satirically rigorous presentation more humorous.

Once again, I'll include some excerpts:

Quotes

A budget of trisections, Underwood Dudley

As you probably know, trisecting a general angle with ruler and compass only is impossible. What is more impossible, though, is convincing someone of this who happens to believe otherwise. Don't believe me? Well, neither did I, until A budget of trisections opened my eyes to the endless insanity that's all around us.

This book is about cranks. Cranks of all flavours: engineer cranks, grumpy-old-man cranks, even PhD cranks. Cranks who believe they have reached the end of a quest that has bothered all humanity from time immemorial by "solving" the "problem" of trisection. You'll be amazed at how numerous they are, and just how annoyingly obstinate they can be. There might be one in your town, maybe in your school, maybe one hiding under your bed. Thankfully, the author gives some valuable advice on what to do when a triceratops trisector attacks, which may save your life someday.

The mood ranges from hilarious to tragic, or even scary in parts; to top things off, there's even a list of actual trisections involving tools such as a tomahawk or an analog clock. On the whole, it's an enjoyable read.

Things to Make and Do in the Fourth Dimension, Matt Parker

Matt Parker of Numberphile fame hardly needs an introduction. In contrast with the above works, this one focuses primarily on bits of actual mathematics (mostly of the recreational type), with a good deal of humor thrown in occasionally. If you are somewhat familiar with recreational mathematics, you may have to read through quite some pages to find something new; however, the pleasant writing style of Parker means that it's never actually boring or dull.

Honorable Mentions

Here are some more things that didn't make it to the above list, not because they are less enjoyable, but because they are of different types from those above in some way.
  • You Failed Your Math Test, Comrade Einstein: A heartbreaking account of how mathematics was once used as a tool for racism in academia, as absurd and impossible as it sounds.
  • tastymath75025 told me about The Pleasure of Finding Things Out and Surely You're Joking, Mr. Feynman! by, as if you couldn't guess, Feynman. The second one is an old favorite of mine; the first one is something I recently started and it looks fun so far.
  • Mark Levi's The Mathematical Mechanic can be interesting to people who find physics fun; the "proofs" are actually pretty nice. However, the author tends to downplay mathematical proofs using somewhat annoying selection bias and strawman arguments. (For example, a proof of the good ol' Fermat's point thingy is done using physics, and contrasted with a hideous-looking solution by vectors and derivatives. Wow, it's like synthetic geometry isn't even a thing.) If you can ignore that issue, this is a nice book overall.

Anyway, finishing through all of these should take you a while. So I guess that's pretty much it for today.


Before I forget, I need to put in two more things in here, mostly as reminders for my future self rather than as things of any actual importance.

First, a shameless plug: I got a problem into KöMaL! I will probably write some things about it in my next blog post (assuming there is one). Note that this also coincidentally means that I may not; really, it depends on a whole lot of random variables. So don't hold your breath; I won't be responsible for your asphyxia.

Secondly, you have probably noticed that mock contests and programs are currently off-topic for C&P and belong to a secret forum hidden somewhere in the cold depths of oblivion, where it's impossible to find unless you know what you're looking for. I've never been too much involved with mocks to have a strong opinion, but frankly I don't see the rationale behind this major change. I plan to write a rant on this in Site Support some time. Again, the disclaimer in the first point applies.
This post has been edited 2 times. Last edited by Ankoganit, Jan 29, 2018, 3:42 PM

Cover Story: Squares

by Ankoganit, Nov 28, 2017, 6:44 AM

After more than 5 months of heart-rending, trauma-inducing, earth-shattering, baldness-curing wait, I'm finally here to revive my blog from the cursed depths of oblivion. This is a shame; what's more shameful is that I don't really have anything super-interesting or super-original in my head right now, except for the unrestrainable urge to bring this blog back to life. So I'll do what I can do; don't judge.

Let's start with a very classical problem we were asked in this year's pre-departure camp, because old is gold, even if told hundredfold.

Undercover Square: May The 4's Be With You
Quote:
Given a finite collection of squares with total area at least $4$, prove that you can cover a unit square completely with these squares (with overlapping allowed, of course).

Some quick search-function-fu tells me this is from a Russian contest in 1979, but was asked again in Brazilian MO 2002. And for good reason. The solution is surprisingly clever (paraphrased mostly from Adithya Prakash's solution in the camp and rem's post here):

If there's a square with side $\ge 1$, there's nothing to solve, so let's say all squares have sides less than $1$. The trick is to introduce powers of $2$ (motivated by the number $4$? Maybe.) and divide the squares into classes: class 1 consisting of squares with side length in $[\tfrac12,1)$, class 2 with squares having side length in $[\tfrac14,\tfrac12)$ and so on; you get the pattern. Now shrink all squares in class $k$ (with side lengths in $\left[\tfrac{1}{2^k},\tfrac{1}{2^{k-1}}\right)$) and make all the side lengths $\tfrac{1}{2^k}$. The key observation is that this operation at most halves the side-length and hence shrinks the area to at worst $\tfrac14$ of its original value, so the total area is still at least $1$. Now divide the unit square into $\tfrac12\times \tfrac12$ squares and use up all $\tfrac12\times \tfrac12$ pieces; when you run out of those, further subdivide the remaining squares into $\tfrac14\times \tfrac14$ and use all the pieces of that size, and so on. Since the total area of all the pieces is at least $1$, and no overlapping/wasting/screwing-up is done so far, the whole unit square will be filled eventually. Now it just remains to blow up the shrunken squares to their former sizes. $\blacksquare$
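
Incidentally, this argument is concrete enough to simulate. Here's a short Python sketch (my own, with made-up test data, assuming a finite nonempty list of side lengths) that shrinks each square to its power-of-$\tfrac12$ class and then fills dyadic cells level by level, exactly as above:

import math
from collections import Counter

def covers_unit_square(sides):
    # a square with side >= 1 finishes the job single-handedly
    if any(s >= 1 for s in sides):
        return True
    # class k holds squares with side in [2^-k, 2^-(k-1)); shrink them to 2^-k
    pieces = Counter(math.ceil(-math.log2(s)) for s in sides)
    cells = 4                          # uncovered 1/2 x 1/2 cells
    for k in range(1, max(pieces) + 1):
        used = min(cells, pieces[k])   # one class-k square per cell of side 2^-k
        cells = (cells - used) * 4     # subdivide whatever is still uncovered
        if cells == 0:
            return True
    return False

print(covers_unit_square([0.26] * 60))   # total area 4.056 >= 4, so True

The proof above promises this prints True for any finite collection of total area at least $4$.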

Undercover Square: The Best Cover

The cool thing about the proof above is that we never had to rotate any square. If we had to, it'd be surely painful; it'd just increase the number of things we have to keep track of. For the time being let's forget we can rotate squares; whenever we think of covering/packing something with squares, we'll never rotate them, unless otherwise stated.

Of course, nothing in the previous proof suggests that $4$ is the best bound possible; indeed it isn't. Let's try to figure out what it is.

If we do manage to prove $k$ is the best possible bound, then we should also be able to show there is a collection of squares with total area $k-\epsilon$ that cannot cover a unit square. Actually, it might be a good idea to first find such a collection of area $k-\epsilon$ that just barely fails to cover; that'd give us a decent lower bound.

$k=1$ is an obvious choice here; too obvious, in fact. Can we do better? To prove a bunch of things can't cover the unit square, one route that seems promising is to consider the points that are the furthest away from each other and show that our set of pieces can't touch everything. One such set of far-from-each-other points is the four corners. Since squares of area $4$ did manage to cover the whole thing with room to spare, $3$ may be the next best choice to consider. And indeed it is!

Consider $3$ squares of area $1-\epsilon$ each. Since each square covers at most one corner of the unit square (remember rotating is not allowed), these $3$ squares can't even reach all four corners, let alone cover the whole thing.

But is $3$ in fact the magic number we've been looking for? Fortunately, it just so happens that the following statement is true:
Quote:
Given a finite collection of squares with total area at least $3$, prove that you can cover a unit square completely with these squares (with overlapping allowed).

But how do we go about proving that? Of course the power of $2$ sorcery we did earlier leads to a dead end here. Oops, that's a bummer. That proof was pretty darned nifty; how on earth are we going to one-up that?

If we can't do something cleverer, let's try something dumber. What do you do when you get a bunch of square pieces and a unit square to fill up? You can always throw random squares at random places and pray to the probabilistic gods, but that's probably a bit too dumb. Let's line these squares up neatly, from largest to smallest, and put them inside the square row by row, going to the next row when one row is packed up. I mean, that's the most natural thing to try; much more natural than inventing powers of two out of nowhere, at least.

Does this work? To see if it does, we'll do what every math student should do when faced with a problem: name all the variables. (Usually because it's easier than actually solving the problem, but it has other uses too.) Let's say we are putting the squares, largest to smallest, in rows (left to right), and starting a new row whenever the row length exceeds $1$. Then each row has a maximum height (the height of the leftmost square) and a minimum height (the height of the rightmost square). Let's call the minimum heights of the rows $x_1,x_2,\cdots ,x_n$ respectively. Here $x_n$ is the minimum height of the last row, which is, in all likelihood, not even fully filled because we ran out of squares midway. It's probably a good idea to assume $x_n=0$ anyway, because that is the minimum height of an unfilled row, in some sense. Also, the maximum height of every row, starting from the second, is at most the minimum height of the previous one. We need to prove that the sum of these minimum heights (except maybe the last one) is at least $1$; then we'd have filled up the entire thing. So, um, we need to bound stuff. Do we have a bound already? Yes, it's the area, which is at least $3$! So let's try to estimate the areas of the rows somehow, so that we can compare the total area to the sum of the heights; that should be a good idea.

Now consider the first row, with minimum height $x_1$, and maximum height at most $1$ (why?). Now the length of the row is at most $1+x_1$, so its area is at most $1\cdot (1+x_1)$. Similarly the second row has area at most $x_1(1+x_2)$ and so on. Thus $$\text{Total Area}\le 1\cdot (1+x_1)+x_1(1+x_2)+\cdots \le 1+x_1+(x_1+x_2)+\cdots =1+2x_1+2x_2+\cdots$$Therefore $$1+2(x_1+x_2+\cdots )\ge 3\implies x_1+x_2+\cdots \ge 1.$$This is what we wanted to prove, so we are done. $\blacksquare$
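
This one simulates nicely too; a minimal sketch (mine), assuming all sides are below $1$, since a square of side $\ge 1$ covers everything by itself:

import random

def covered_height(sides):
    # lay the squares in rows, largest to smallest; a row is complete
    # once its length exceeds 1, and then it covers a horizontal strip
    # whose height is its smallest (i.e. last) square
    height = row_len = 0.0
    for s in sorted(sides, reverse=True):
        row_len += s
        if row_len > 1:
            height += s
            row_len = 0.0
    return height

random.seed(0)
squares, area = [], 0.0
while area < 3:                       # collect squares of total area >= 3
    s = random.uniform(0.05, 0.95)
    squares.append(s)
    area += s * s
print(covered_height(squares) >= 1)   # the bound we just proved: True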

A concept that's almost dual to covering is packing. Packing is when we don't care if the unit square (or any other shape) is fully filled or not; we just need to fit in all the pieces without overlapping. In a similar way to the above proof, we can prove the following:
Quote:
Prove that a finite collection of squares of total area $\frac{1}{2} $ can be placed inside a unit square without overlap.

Again $\tfrac12$ is the best bound here.

[The above two problems occur in Newman's A Problem Seminar, which contains many, many other really interesting problems of different flavours. ]

A clever interplay of the ideas of covering and packing can be seen in Elegant, not Elephant : Part II Problem 9. I'm tempted to think the same idea can be modified to solve the first covering problem in this post (especially because of that factor of $4$ present), but the details seem hard. What do you think?

Covering the Plane: Think Big

I want to finish this post with something that'd leave a pleasant aftertaste, and what could be more apt than a ToT problem?
Quote:
We attempt to cover the plane with an infinite sequence of rectangles, overlapping allowed.
  1. Is the task always possible if the area of the $n-$th rectangle is $n^2$ for each $n$?
  2. Is the task always possible if each rectangle is a square, and for any number $N$, there exist squares with total area greater than $N$?
ToT Senior-A, Spring 2012
Remember that the no-rotation rule does not apply here! Or at least, the ToT graders won't be impressed if we try to enforce that out of nowhere.

Part (a) involves rectangles, and rectangles don't always make great tiles unless you get them in a nice shape. Only the areas of the rectangular tiles are specified, but that leaves the possibility that the rectangles are really thin and linear-looking; not the ideal set of tiles to work with. My proof below is based on that idea:

The answer is no. Suppose the $n-$th tile has dimensions $Mn^4\times \tfrac{1}{Mn^2}$ for some $M\gg 9000$. We claim that these can't even cover the unit square. Clearly, if the length of the part of the $n-$th tile that lies inside the unit square is $x$, then the area of the unit square it covers is at most $x\times \tfrac{1}{Mn^2}$, and $x$ itself is at most $\sqrt{2}$. So the total area that all these rectangles can ever hope to cover is at most $\sqrt{2}\left(\textstyle\sum_{n=1}^\infty \tfrac{1}{Mn^2}\right)= \sqrt{2}\left(\tfrac{1}{M}\cdot\tfrac{\pi^2}{6}\right)$, which falls short for large enough $M$.
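
A two-line sanity check of that last bound, in case you don't trust mental arithmetic:

from math import pi, sqrt
M = 9001                             # anything comfortably over 9000 works
print(sqrt(2) * pi ** 2 / (6 * M))   # about 0.00026, nowhere near area 1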

For part (b), 'yes' should be a reasonable guess, because squares are pretty good at covering as we have seen before. So we give an algorithm that eventually manages to cover the whole plane.

Divide the whole plane into unit squares, and get some ordering on them. This is possible because the number of unit squares is countable. We'll fill these one by one by the following trick:

When filling the $m-$th square, suppose the total area of all the squares used so far is $M$, and pick a finite collection of squares with total area at least $M+4$ (this is possible because of the given condition; as to why we can assume the collection is finite, try using standard limit arguments). Now if we take away the squares which are already used, that should leave us with a collection of squares, unused so far, with total area at least $4$. By the first problem we solved in this post (which you hopefully remember), we can use these to cover the current unit square. Now proceed onto the next square, and repeat.

This clearly works, so we are done for today. $\blacksquare$
This post has been edited 2 times. Last edited by Ankoganit, Jan 2, 2018, 5:32 AM

Some properties of ferrous nitride

by Ankoganit, Jun 26, 2017, 3:03 PM

Ferrous nitride is a crystalline, metallic solid, usually found in powdered form, which is insoluble in water. It has the chemical formula $\text{Fe}_3\text{N}_2$.

Heh, just kidding; I won't tax you with this stuff. Chemistry was never really my forte anyway. What I am going to do, though, is dwell on that neat formula $\text{Fe}_3\text{N}_2$. Doesn't it look cute? (Hint: it doesn't.) In honour of that elegant formula that allegedly holds the key to understanding the mysteries of our universe $^{\color{blue}[\textit{\textsf{citation needed}}]}$, I'll write the solutions (plus some motivation) of $3$ FEs and $2$ number theory problems. Yeah, that's totally how I interpret chemical formulas. Guess who botched the chemistry exam last year?

Chemical Analysis #1: Surjection curing dejection
Quote:
(Iran TST 2011 P5) Find all surjective functions $f: \mathbb R \to \mathbb R$ such that for every $x,y\in \mathbb R,$ we have
\[f(x+f(x)+2f(y))=f(2x)+f(2y).\]

Darn, this is bad. Everything is inside those $f$'s; I doubt we can pull anything out of them. And there's also that surjectivity thing; what can it do, anyway? Well, surjectivity allows us to write $x=f(y)$ for any $x$ we wish...hang on, did we just pull $x$ out of an $f$? Excellent! I bet this trick is going to be useful.

OK, let's start with plugging random values, 'cause that's pretty much all we can do at this stage. To make life easier, let $P(x,y)$ mean $P$lug in $x$ and $y$ into the given equation. So, we can plug in $0$ anywhere; we can even cook up a mystery constant $c$ so that $f(c)=0$ via surjectivity, that'd let us cancel a bunch of stuff. Elementary combinatorics (yay!) says there are basically $3\times 3$ plug-ins that might be useful, but we're not gonna try all of them, are we? Let's try spamming everything with $c$'s, that'd get us a whole lot of zeros:$$P(c,c)\implies f(c)=2f(2c)\implies f(2c)=0.$$Cool, so even $2c$ leads to $0$. Can we use that fact anywhere? Of course, we can spam one of the variable slots with $c$, like so $$P(x,c)\implies f(f(x)+x)=f(2x)\quad (\star); \;\;P(c,x)\implies f(c+2f(x))=f(2x)\quad  (\spadesuit).$$Now we're talking. If we can somehow manage to get injectivity, we can argue $f(f(x)+x)=f(c+2f(x))\implies f(x)=x-c$, and we'll be left with checking the linear functions. So we need to get hold of injectivity.

Well, let's begin just as all canonical injectivity proofs begin. Suppose $f(a)=f(b)$; we hope to prove that $a=b$ somehow. However we try to use the given equations, we'd run into things like $f(2a)$; so let's settle that. From $(\spadesuit)$, we have $f(2a)=f(c+2f(a))=f(c+2f(b))=f(2b)$. That's neat.

Now that we've got more letters to shuffle around, let's do that. $P(a,x)$ and $P(b,x)$ give $$f(2f(x)+a+f(a))=f(2x)+f(2a)=f(2x)+f(2b)=f(2f(x)+b+f(b))=f(2f(x)+b+f(a)).$$Now because of the surjectivity of $f(x)$, the expression $2f(x)+f(a)$ can take any real value, so $f(x+a)=f(x+b)\implies f(x+a-b)=f(x)$. Calling $k=a-b$, we have that $k$ is a period of $f$, and we wish to show $k=0$. But that'd require pulling $k$ out of the $f$'s somehow...of course, that's what the trick in the first paragraph was for! Let $\kappa$ be such that $f(\kappa)=k$, then we can use the periodicity of $f$, with $(\star)$ and $(\spadesuit)$ like so: $$k=f(\kappa)=f(\kappa+k)=f(\kappa+f(\kappa))=f(2\kappa)=f(c+2f(\kappa))=f(c+2k)=f(c)=0.$$As we've seen before, this leads us to $f(x)=x-c$, and now it's trivial to check that only $c=0$ works. And that seals the deal. $\blacksquare$

Chemical Analysis #2: Cubes versus newbs
Quote:
(ELMO 2017 P6) Find all functions $f:\mathbb{R}\to \mathbb{R}$ such that for all real numbers $a,b,$ and $c$:
  1. If $a+b+c\ge 0$ then $f(a^3)+f(b^3)+f(c^3)\ge 3f(abc).$
  2. If $a+b+c\le 0$ then $f(a^3)+f(b^3)+f(c^3)\le 3f(abc).$

Proposed by Ashwin Sah

So I finally managed to nail a P6. GG.

The only things that look somewhat scary are the fact that there are inequalities, not equalities, and that things are raised to third powers. We can squeeze an equality out of the inequalities, but the cubes are going to be painful.

The first few steps are fairly routine: plug in zeros, $-x$, and all that stuff. Here's how it goes:
If $f$ is a function satisfying the conditions, then so is $f+k$ for any constant $k$, so we can assume WLOG $f(0)=0$. Note that the given statements imply:

$``$For reals $a,b,c$ with $a+b+c=0$, we have $f(a^3)+f(b^3)+f(c^3)=3f(abc)."$


Call this statement $P(a,b,c)$. Now $P(a^{\frac13},-a^{\frac13},0)$ gives $f(a)+f(-a)=0$, so $f$ is odd. Now for $a>0$, we have $a^{\frac13}+0+0\ge 0$, so condition $(i)$ gives $f(a)\ge 0$. Because of oddness, we have $f(a)\le 0$ for $a<0$. So $f$ takes nonnegative values at non-negative inputs and non-positive values for non-positive inputs.

Now we run into an impasse. First of all we need equalities, not inequalities; so the quoted statement would probably be the only useful thing from this point. Also, that $a+b+c=0$ is an annoying constraint; let's drop $c$ altogether:
Consider that statement $P(a,b,-(a+b))$. Because $f$ is odd, this translates into$$f(a^3)+f(b^3)+3f(ab(a+b))=f((a+b)^3).$$
Good work, but not great. There's nothing outside $f$'s, we lost a variable, and we're nowhere near completion. We need one more variable, at least. Where do we get that from? We introduce a new one, of course! This brings us to the next part of the solution:
Using the above equation multiple times, we obtain
\begin{align*} f((a+b+c)^3) &=f((a+b)^3)+f(c^3)+3f((a+b)c(a+b+c))\\
&= f(a^3)+f(b^3)+3f(ab(a+b))+f(c^3)\\
&+3f((a+b)c(a+b+c))\end{align*}Similarly, switching the roles of $b$ and $c$, we get $$f((a+b+c)^3)=f(a^3)+f(b^3)+f(c^3)+3f(ac(a+c))+3f(b(a+c)(a+b+c)).$$Comparing these two expressions for $f((a+b+c)^3)$, we obtain $$f(c(a+b)(a+b+c))+f(ab(a+b))=f(b(a+c)(a+b+c))+f(ac(a+c)).\qquad (\star)$$
Ugh, that looks horrible. But that does hint at something; $f$ mostly respects addition, because if we ignored the $f$'s and added the terms in the above equation, we'd get an identity. The respect is shown in a rather ugly way, though. We need to ensure that those weird expressions can be replaced by generic $x,y$. So we need to find $a,b,c$ so that $ab(a+b)=x$, $ac(a+c)=y$ and $c(a+b)(a+b+c)=z$. Oops, that's hard, mostly because it's a system of $3$ cubic equations. Let's loosen up the conditions a bit; we don't need $z$, just gotta make sure that $b(a+c)(a+b+c)=ac(a+c)=y$, so that the right-hand side of $(\star)$ collapses to $2f(y)$. Playing around with the equations a bit, noticing the homogeneity and some luck gets us to the following chain of reasoning:
Now choose arbitrary positive reals $x\le y$ and consider the following system of equations
\begin{align}
a^2c-abc&=a^2b+ab^2\\
a^2b+ab^2&=x\\
a^2c+ac^2&=y
\end{align}We claim that this has a solution in reals $a,b,c$. Indeed, set $b=qa,c=ra$, and $(1)$ becomes $$r-qr=q+q^2\implies r=\frac{q^2+q}{1-q}.$$And $(2)\div (3)$ becomes $$\frac{x}{y}=\frac{q(q+1)}{r(r+1)}=\frac{q(q+1)}{\frac{q(q+1)}{1-q}\cdot\frac{q^2+1}{1-q}}=\frac{(q-1)^2}{1+q^2}.$$The function $h(q)=\tfrac{(q-1)^2}{1+q^2}$ is continuous on $[0,1]$ and $h(0)=1,h(1)=0$. Since $\tfrac xy\in(0,1]$, there is a $q\in[0,1)$ so that $h(q)=\tfrac xy$. Choose this $q$, and the corresponding $r=\tfrac{q(q+1)}{1-q}$. Then scale both $q$ and $r$ by $a$ to get $b,c$ so that $(2)$ holds; then $(3)$ and $(1)$ would hold automatically. Thus $a,b,c$ exist; clearly $a\ne 0$.

Now that we've chosen suitable $a,b,c$, note that $$a^2c-abc=a^2b+ab^2\implies b(a+b+c)=ac\implies b(a+c)(a+b+c)=ac(a+c)=y,$$and $$ab(a+b)=x.$$So $$c(a+b)(a+b+c)=(b(a+c)(a+b+c)+ac(a+c))-ab(a+b)=2y-x.$$Using these in $(\star)$ gives $$f(2y-x)+f(x)=2f(y)\; \forall 0<x\le y.$$Setting $z=2y-x\iff y=\frac{x+z}{2}$, this becomes $$f(x)+f(z)=2f\left(\frac{x+z}{2}\right)\;\forall 0<x\le z.$$So $f$ satisfies Jensen's functional equation over the positives, and it's bounded below on $[0,\infty)$ (we showed $f\ge 0$ there), so $f(x)=cx+d$ for some $c\ge 0$, which works.
Whoopee. $\blacksquare$
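
By the way, since the existence of $q$ came from a continuity argument, it's reassuring to watch the numbers actually work out. Here's a small numerical check (my own sketch, taking $x<y$ strictly; the $x=y$ case of the conclusion is trivial anyway) that solves $h(q)=\tfrac xy$ by bisection and tests equations $(1)$, $(2)$, $(3)$:

from math import isclose

def find_abc(x, y):
    # h(q) = (1-q)^2/(1+q^2) decreases from 1 to 0 on [0,1],
    # so bisection finds q with h(q) = x/y
    lo, hi = 0.0, 1.0
    for _ in range(100):
        q = (lo + hi) / 2
        lo, hi = (q, hi) if (1 - q) ** 2 / (1 + q ** 2) > x / y else (lo, q)
    r = q * (q + 1) / (1 - q)
    a = (x / (q + q * q)) ** (1 / 3)   # scale so that a^2*b + a*b^2 = x
    return a, q * a, r * a

a, b, c = find_abc(1.0, 2.0)
print(isclose(a * b * (a + b), 1.0))                      # equation (2): = x
print(isclose(a * c * (a + c), 2.0))                      # equation (3): = y
print(isclose(a * a * c - a * b * c, a * b * (a + b)))    # equation (1)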

Chemical Analysis #3: Continuous f(rustration)
Quote:
(Tuymaada 2003, P4) Find all continuous functions $f(x)$ defined for all $x>0$ such that for every $x$, $y > 0$
\[ f\left(x+{1\over x}\right)+f\left(y+{1\over y}\right)= f\left(x+{1\over y}\right)+f\left(y+{1\over x}\right) . \]

Proposed by F. Petrov

I tried this months ago, without much success. I tried it again this morning, and having solved it, it doesn't even seem that hard. Possibly, I've actually messed up and managed to overlook some glaring flaw; so read at your own discretion, and let me know if it's glitched.

There's a lot of reciprocal business going on, so let's start with something like that. Replace $x$ by $\frac 1x$ in the given equation, and we have$$f\left(x+\frac1x\right)+f\left(y+\frac1y\right)=f\left(x+y\right)+f\left(\frac1x+\frac1y\right).$$
Notice something: $f$ respects addition in the same way as in the previous problem; ignore the $f$'s and you end up with an identity. Now the key piece of intuition is: the pair $(x+y,\tfrac1x+\tfrac 1y)$ has the same sum as the pair $(x+\tfrac1x,y+\tfrac1y)$, but the numbers in the latter pair are closer together (check it!). This suggests an approach like this: take $a,b$, and express them as $x+y$ and $\tfrac 1x+\tfrac1y$, use the above equation to find a new pair of numbers $x+\tfrac1x$ and $y+\tfrac1y$ which become our new $a,b$, and which are closer together. If we repeat this ad infinitum, the numbers in the pair will become equal in the limit (here's where we use continuity), and we can infer something nice. This motivates the following:

Take arbitrary positive reals $a,b$ so that $ab\ge 4$, and define the sequences $\left\{a_i\right\}_{i=0}^\infty,\left\{b_i\right\}_{i=0}^\infty$ as follows:
$$a_0=a,b_0=b,$$and for $n\ge 0$, take positive reals $x,y$ so that $x+y=a_n,\tfrac1x+\tfrac1y=b_n$ (these exist because $a_nb_n\ge 4$: this holds for $n=0$ by assumption, and it persists because the sum $a_n+b_n$ stays constant while the gap $|a_n-b_n|$ never grows, as we're about to check), and define $$a_{n+1}=x+\frac1x,b_{n+1}=y+\frac1y.$$
Because of the equation above, $$f(a_0)+f(b_0)=f(a_1)+f(b_1)=\cdots=f(a_n)+f(b_n)=\cdots \quad (\star).$$Also, $a_n+b_n$ is a constant throughout. Now take any $n$, and consider the numbers $a_n,b_n,a_{n+1},b_{n+1}$. Let $x,y$ be the corresponding values that satisfy $x+y=a_n,\tfrac1x+\tfrac1y=b_n.$ This implies $xy=\tfrac{a_n}{b_n}.$ Then we have $$\frac{|a_{n+1}-b_{n+1}|}{|a_n-b_n|}=\left|\frac{x+\frac1x-y-\frac1y}{x+y-\frac1x-\frac1y}\right|=\left|\frac{x-y}{x+y}\right|=\sqrt{1-\frac{4xy}{(x+y)^2}}=\sqrt{1-\frac4{a_nb_n}}.$$But we have $(a_n+b_n)^2\ge 4a_nb_n\implies 1-\frac{4}{a_nb_n}\le 1-\frac{16}{(a_n+b_n)^2},$ so$$\frac{|a_{n+1}-b_{n+1}|}{|a_n-b_n|}\le \sqrt{1-\frac{16}{(a_n+b_n)^2}}=\sqrt{1-\frac{16}{(a_0+b_0)^2}}.$$Now $\textstyle\sqrt{1-\frac{16}{(a_0+b_0)^2}}$ is a constant less than $1$, so $|a_n-b_n|$ converges to zero. It's now easy to see that $a_n,b_n$ are convergent as well, in fact they both converge to $\tfrac{a+b}{2}$; so invoking $(\star )$ and continuity, $$f(a)+f(b)=2f\left(\frac{a+b}{2}\right)\;\forall a,b>0 \text{ so that }ab\ge 4.\quad (\spadesuit)$$Okay, that's almost Jensen but with a constraint. How do we turn that into omnipotent Jensen? By introducing new variables, of course!
Now take arbitrary $a,b>0$. Choose sufficiently large $c,d$ so that:
  • $cd\ge 4$,
  • $bc\ge 4$,
  • $ad\ge 4$,
  • $\frac{a+d}{2}\cdot\frac{b+c}{2}\ge 4$, and
  • $\frac{a+b}{2}\cdot \frac{c+d}{2}\ge 4.$
These conditions are easy to satisfy by taking huge $c,d$; in fact, many of these are redundant. So by applying $(\spadesuit )$ multiple times, we have
\begin{align*}f(a)+f(b)+2f\left(\frac{c+d}{2}\right)&=f(a)+f(b)+f(c)+f(d)\\
&=2f\left(\frac{a+d}{2}\right)+2f\left(\frac{b+c}{2}\right)\\
&=4f\left(\frac{a+b+c+d}{4}\right)\\
&=2f\left(\frac{a+b}{2}\right)+2f\left(\frac{c+d}{2}\right).\end{align*}Therefore $f(a)+f(b)=2f\left(\tfrac{a+b}{2}\right)$ for arbitrary $a,b>0,$ so Jensen says $f(x)=cx+d$ for $c\ge 0$ (or just $c\in\mathbb R$ if the codomain is supposed to be $\mathbb R$, thanks @anantmudgal09 for pointing this out), which works nicely. $\blacksquare$
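
By the way, the contraction in the sequence construction is fun to watch numerically; a minimal sketch (mine):

from math import sqrt

def step(a, b):
    # find x, y > 0 with x + y = a and 1/x + 1/y = b (so xy = a/b;
    # the discriminant is nonnegative precisely because ab >= 4),
    # then return the new pair (x + 1/x, y + 1/y)
    d = sqrt(a * a - 4 * a / b)
    x, y = (a + d) / 2, (a - d) / 2
    return x + 1 / x, y + 1 / y

a, b = 7.0, 2.0                  # ab = 14 >= 4
for n in range(6):
    print(n, round(a, 6), round(b, 6), a + b)   # the sum never changes
    a, b = step(a, b)
# both entries converge to (7 + 2)/2 = 4.5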

Chemical Analysis #4: Size matters
Quote:
(IMO 2011 P5) Let $f$ be a function from the set of integers to the set of positive integers. Suppose that, for any two integers $m$ and $n$, the difference $f(m) - f(n)$ is divisible by $f(m- n)$. Prove that, for all integers $m$ and $n$ with $f(m) \leq f(n)$, the number $f(n)$ is divisible by $f(m)$.

Proposed by Mahyar Sefidgaran, Iran

This may be IMO Problem 5 or a shortlist N5 or whatever, but I find this to be a massive troll. Not quite fitting of an Iranian problem. Tsk tsk.

Hmm, so looks like the solutions are $f(x)=x$...hey, wait, the range is the positive integers?! Where do negative numbers map to? Darn, $f$ is most likely some stupid function like a constant or some ugly piecewise beast.

So let $P(m,n)$ denote the statement $f(m-n)|f(m)-f(n)$. As always, we start with plugging in nice things like $0,-m$ and stuff. So this gives: \begin{align*}P(m,0)&\implies f(m)|f(m)-f(0)\implies f(m)|f(0)\\P(0,m)&\implies f(-m)|f(0)-f(m)\implies f(-m)|f(m) \text{ since }f(-m)|f(0)\\
P(0,-m)&\implies f(m)|f(-m)\\
\implies & f(m)=f(-m)\end{align*}Wow, $f$ is even. Also, every $f(m)$ is a divisor of $f(0)$, so $f$ can take at most finitely many values. Can it get any sillier?

We want to finish off this thing as quickly as possible, so let's get to two variables. Choose $m,n$ and let's do whatever we can with them.
\begin{align*}P(m+n,m)&\implies f(n)|f(m+n)-f(m)\\ P(m+n,n)&\implies f(m)|f(m+n)-f(n)\\
P(m,-n)&\implies f(m+n)|f(m)-f(n)\end{align*}
Now we need a quick bookkeeping-ish job based on size-reasons. Suppose WLOG $f(m)> f(n)$, then $f(m+n)\le f(m)-f(n)<f(m)$, so $f(m)>|f(n)-f(m+n)|$, but we have $f(m)|f(n)-f(m+n)$, so $f(n)=f(m+n)$. Then $f(n)|f(m+n)|f(m)-f(n)\implies f(n)|f(m)$ follows immediately, proving the claim. $\blacksquare$

I hate it when the only non-trivial idea in a number theory problem is that divisibility implies size constraints. Fortunately, there's a better NT problem coming up to ease off that pang...

Chemical Analysis #5: Fair and square
Quote:
(Tournament of Towns 2012) Let $n$ be a positive integer. Prove that there exist integers $a_1, a_2, \cdots  , a_n$ such that for any integer $x$, the number $(\cdots (((x^2+a_1)^2+a_2)^2+\cdots)^2+a_{n-1})^2+a_n$ is divisible by $2n-1$.

Lemme tell you a little secret: every ToT problem is secretly a combinatorics problem. Trust me. Things may look disguised as something else, they might contain bits of non-trivial occurrences of divisibility, triangles, polynomials and what not, but deep down these are really attempts at masquerading. It's combinatorics all the way down.

In this problem, too, the key is combinatorial thinking. $x$ can take all $2n-1$ possible residues; we somehow need to collapse all of that to $0$ by squaring and adding stuff. Squaring $x$ once makes the number of residues drop to $n$ already, since $i^2\equiv (2n-1-i)^2\pmod{2n-1}$, so we're in good shape. Now we can add $n$ $a_i$'s and square lots of times in the meantime. It'd be really nice if adding $a_i$ and squaring reduced the number of possible residues every time by at least $1$. In that case, we could decrease the possibilities by $1$ each time until adding $a_{n-1}$ and squaring leaves us with only one possibility, and then we could add a suitable $a_n$ to the whole thing to make that last possibility zero.

Okay, let's see what we can do. Well, the only time the number of possible residues changes is when we square. But that collapses $i$ and $2n-1-i$ together. So maybe we can shift any two possible residues by $a_i$ at any given instant so that they become of the form $i,2n-1-i$, and then square? Well, let's say $a,b$ are two possible residues at some point. We want to find $a_i$ so that $(a+a_i)+(b+a_i)\equiv 2n-1\iff a_i\equiv\tfrac{2n-1-a-b}{2}\pmod{2n-1}$, and of course, this works, because $(2,2n-1)=1$, so division by $2$ makes sense! So we can add this $a_i$ and then square, so that the number of possible residues decreases by at least $1$, and proceed as described before. Mission accomplished! $\blacksquare$
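
Even better, the argument is completely constructive, so we can let a computer carry it out. A quick sketch (mine; `pow(2, -1, M)` for the inverse of $2$ needs Python 3.8+):

def build_as(n):
    M = 2 * n - 1
    inv2 = pow(2, -1, M)                  # 2 is invertible since M is odd
    residues = {x * x % M for x in range(M)}   # at most n of these
    a = []
    for _ in range(n - 1):                # the "add a_i, then square" steps
        if len(residues) >= 2:
            r1, r2 = sorted(residues)[:2]
            ai = (-(r1 + r2) * inv2) % M  # makes r1+ai and r2+ai collapse
        else:
            ai = 0                        # already down to a single residue
        a.append(ai)
        residues = {(r + ai) ** 2 % M for r in residues}
    a.append(-min(residues) % M)          # one residue left; a_n kills it
    return a

n = 5
M, a = 2 * n - 1, build_as(n)
for x in range(M):                        # check every residue x
    v = x * x % M
    for ai in a[:-1]:
        v = (v + ai) ** 2 % M
    assert (v + a[-1]) % M == 0
print(a)                                  # [4, 2, 0, 0, 0] for M = 9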

Also, a fun fact: $\text{Fe}_3\text{N}_2$ is grey in colour, so you may slyly call it 'grey matter' and feel smug for the rest of the day.
This post has been edited 6 times. Last edited by Ankoganit, Jun 30, 2017, 3:59 AM

Inequality sans Quality

by Ankoganit, Jun 6, 2017, 7:16 AM

There are many, many people on our AoPS who are fond of solving and creating inequalities. They find inequalities very beautiful; and so do I, to some extent. This post is not supposed to be a rant on what is beautiful and what is not (that'd lead us into the dark and perilous woods of vaguely expressed philosophy); the point I'm trying to make is, the beauty of an inequality lies not in a string of symbols, but in the underlying idea, or the ingenuity behind its proof.

To drive this home, I'll just briefly mention two examples of nice inequality problems; but this will be done very hastily, since the point of this post is actually examples of the opposite kind (you'll see how they turn out to be interesting in a very different sense).

Ex. 1: This is an example from a recent blog post by Aiscrim (go check it out!).
Quote:
Let $a,b,c>1$. Prove that $$\dfrac{1}{a^3-1}+\dfrac{1}{b^3-1}+\dfrac{1}{c^3-1}\ge \dfrac{3}{abc-1}.$$
Bad proof:
Substituting $a:=x+1,b:=y+1,c:=z+1$ and expanding gives the following equivalent inequality: hidden for ugliness, which is true by Schurheading (well, hopefully, I didn't check it).
Good proof (from Aiscrim's blog):
Take $a=\dfrac{1}{x},b=\dfrac{1}{y},c=\dfrac{1}{z}$ with $0<x,y,z<1$. We have that
$$\dfrac{x^3}{1-x^3}+\dfrac{y^3}{1-y^3}+\dfrac{z^3}{1-z^3}\ge \dfrac{3xyz}{1-xyz}$$$$\Leftrightarrow \  \displaystyle\sum\limits_{k=1}^\infty x^{3k}+\displaystyle\sum\limits_{k=1}^\infty y^{3k}+\displaystyle\sum\limits_{k=1}^\infty z^{3k}\ge 3\displaystyle\sum\limits_{k=1}^\infty (xyz)^k$$but $m^3+n^3+p^3\ge 3mnp$ and the conclusion follows. $\blacksquare$

Ex. 2: This one's from ARMO 2015, which was also used in India Practice Test, 2016.
Quote:
Let $a,b,c,d$ be real numbers satisfying $|a|,|b|,|c|,|d|>1$ and $abc+abd+acd+bcd+a+b+c+d=0$. Prove that $\frac {1} {a-1}+\frac {1} {b-1}+ \frac {1} {c-1}+ \frac {1} {d-1} >0$.
Bad proof: Lagrange Multipliers! (I was told it's actually feasible.)
Good proof: Multiplying by $2$ and adding $1$ to each fraction, the inequality can be rewritten as $$\frac {a+1} {a-1}+\frac {b+1} {b-1}+ \frac {c+1} {c-1}+ \frac {d+1} {d-1} >4.$$Now since $|a|>1$, we have that $\tfrac{a+1}{a-1}=\tfrac{(a+1)^2}{a^2-1}$ is actually positive, and the given condition implies $(a+1)(b+1)(c+1)(d+1)=(a-1)(b-1)(c-1)(d-1)$, so AM-GM seals the deal. $\blacksquare$

The moral of the story is, the test of a pudding is in its proof. Or something like that. Basically, the same inequality can be perceived as magnificent or murderous, depending on the approach. (I know a few people who tried LM'ing Ex. 2. during the test; they didn't look very elated about the problem.) That's why if some person comes up with seemingly beautiful and difficult inequalities but refuses to reveal their proofs, you have every reason to doubt them.

"But, wait", you say, "how would they come up with those inequalities if they didn't actually have a proof?" Unfortunately, the attitude of "They must be true because, if they were not true, no one would have the imagination to invent them" may be useful in other cases, but in case of inequalities, it's no longer trustable.

The Demonstration

Seasoned AoPSers may find the following inequality familiar:
Quote:
Suppose positive reals $a,b,c,d$ satisfy $a+b+c+d=4$. Prove that $$ a^{ab}+b^{bc}+c^{cd}+d^{da} \ge \pi.$$

This baffled many users for a long time, and no proof has been posted to date. We'll see how we can concoct the same inequality, without having the slightest idea of its proof.

You may be aware that many inequality-solvers use computer programs to analyze and prove inequalities. These programs usually take years of hard work and superior expertise to build, and they are indeed very powerful. However, to analyze this particular inequality, we need only a short python program that a novice can easily put together in half an hour.

Here's the code I'm talking about.

What does this thing do? Well, it's a function that takes an algebraic expression in $a,b,c,d$ and a positive real $S$ as input, and tries to approximate the minimum of that expression under the condition $a+b+c+d=S$ and $a,b,c,d\in\mathbb R^+$. But how does it work? That's ridiculously simple. It just checks all the values of $a,b,c,d$ that are multiples of $\tfrac{S}{10}$ and sees where the expression is minimized. This yields an approximation that's somewhat good up to $1$ decimal place. Suppose the minimum is attained at $(a,b,c,d)=(0.5,0.2,0.3,0.4)$ (totally made-up numbers). Then it tries to ascertain the second digit by checking values of $a$ among $\{0.41,0.42,0.43,\cdots, 0.58,0.59\}$, and similarly for the other variables. Each iteration gets it closer and closer to the actual minimum, and it stops when a sufficient degree of accuracy is achieved.

Of course, this kind of dumbassed approach has its drawbacks: for a very weirdly behaved function, it can easily fail. But for functions that are 'smooth' enough, it's usually accurate enough.

So we can use this program to find the minima of $a^{ab}+b^{bc}+c^{cd}+d^{da}$. But how'd we cook up such an expression in the first place? Well, that's easy. If you put a lot of variable stuff in the exponents and make the inequality cyclic but not symmetric, chances are you'd end up with a rather unassailable inequality with ungodly equality cases. This works better for four variables, so the above is one of the many, many expressions you can conjure up.

Now let's use the program to find our desired minimum. The following code is just the one I showed above, except that I added the line
getmin('a**(a*b)+b**(b*c)+c**(c*d)+d**(d*a)',4,10)
The $4$ in the above code stands for the sum $a+b+c+d$, and the $10$ signifies the desired level of accuracy. Of course, "$\texttt{**}$" is pythonic for exponentiation (and not some censored curse word), in case you aren't a Parselmouth.

def f(x,n):
    # digits to try at refinement level n: integers within 10 of the
    # previous estimate x (rescaled by 10^n), clamped to [0, 10^n)
    return range(max(0,int(x*10**n-10)),min(int(x*10**n+11),10**n))
def getmin(ex,S,n):
    # start from the corner (S,0,0,0) as the initial "best so far"
    a,b,c,d=S,0,0,0
    x=eval(ex)
    current=[a,b,c,d,x]
    # first pass: all (a,b,c,d) that are multiples of S/10 and sum to S
    for i in range(0,11):
        for j in range(0,11):
            for k in range(0,11):
                if i+j+k<=10:
                    a,b,c,d=S*i/10,S*j/10,S*k/10,S*(1-(i/10+j/10+k/10))
                    x=eval(ex)
                    if x<current[-1]:
                        current=[a,b,c,d,x]
    # refinement passes: zoom in around the current best point,
    # one decimal digit at a time, up to n digits
    for m in range(2,n+1):
        for i in f(current[0]/S,m):
            for j in f(current[1]/S,m):
                for k in f(current[2]/S,m):
                    if i+j+k<=10**m:
                        a,b,c,d=S*i/10**m,S*j/10**m,S*k/10**m,S*(1-(i+j+k)/10**m)
                        x=eval(ex)
                        if x<current[-1]:
                            current=[a,b,c,d,x]
    return current
getmin('a**(a*b)+b**(b*c)+c**(c*d)+d**(d*a)',4,10)

Unfortunately, AoPS python doesn't support $\texttt{eval}$ yet; so here's a link where you can actually try running it. If all goes well, it should faithfully output $$\texttt{[1.748e-07, 0.2750877336, 0.4454582272, 3.2794538643999998, 3.1605859174508946]}$$or something similar. The last element of this list is the minimum value, and the other four are the corresponding values of $a,b,c,d$. We see that it's slightly bigger than $\pi$; so we come up with the arcane-looking inequality stated before.

Okay, so in this example, we simply recognized that the minimum is roughly $\pi$; what if we didn't? Well, internet to the rescue again. Enter RIES, the fantastic online tool by Robert Munafo, which tries to approximate given numbers by interesting-looking expressions. Input $3.16058$ into it, and it informs you that $\pi$, $3^{\frac{\pi}{3}}, \tfrac{3}{\sqrt{\cos\frac{\pi}{7}}}$ are all pretty good approximations; we could've used any of these in the above inequality.

In a similar fashion, you can "create" loads of novel and interesting looking inequalities, which are $99.99\% $ likely to be true, but you don't need to have any idea about their proof. Here are some amusing examples:
  • Suppose $a,b,c,d$ are positive reals satisfying $a+b+c+d=4$. Prove that $$a^{a^2b}+b^{b^2c}+c^{c^2d}+d^{d^2a}\ge (\tfrac{\pi}{5}+1)(\pi-1).$$Note
  • Suppose $a,b,c,d$ are positive reals satisfying $a+b+c+d=4$. Prove that $$\frac{a^b}{b+1}+\frac{b^c}{c+1}+\frac{c^d}{d+1}+\frac{d^a}{a+1}\ge \sqrt{3}-\ln 2.$$
    Note
  • Suppose $a,b,c,d$ are positive reals satisfying $a+b+c+d=4$. Prove that $$a^{\sin^2 ab}+b^{\sin^2 bc}+c^{\sin^2 cd}+d^{\sin^2 da}\ge 3+\frac1{\sqrt{7}}.$$Note
  • Suppose $a,b,c,d$ are positive reals satisfying $a+b+c+d=4$. Prove that $$\sqrt[ab+1]a+\sqrt[bc+1]b+\sqrt[cd+1]c+\sqrt[da+1]d\ge \frac6{\pi}+\frac16.$$Note
  • Suppose $a,b,c,d$ are positive reals satisfying $a+b+c+d=4$. Prove that $$(a+b^2)^{ab^2}+(b+c^2)^{bc^2}+(c+d^2)^{cd^2}+(d+a^2)^{da^2}\ge e+\sqrt{2-\frac1e}.$$Note
So the lesson is clear: don't judge an inequality by its cover; its true beauty and elegance lie in the proof. And if the poser refuses to give proofs, the reason may be more surreptitious than you think.
This post has been edited 1 time. Last edited by Ankoganit, Jun 12, 2019, 5:10 AM

Intriguing Integrals

by Ankoganit, May 24, 2017, 3:22 AM

For outsiders, mathematics is mostly the 'art' of manipulating random strings of weird symbols, and two symbols that seem to summarize the quintessential esoterica of mathematics are $\textstyle\sum$ and $\textstyle\int$. When you get into actually doing some real math, you discover that the aura of mysticism surrounding $\textstyle\sum$ mostly stems from the fact that mathematicians are horrible at making intuitive symbols. For example, the sigma could've easily been replaced by something like $$\mathop{\stackrel{\infty}{\Huge\text{+}}}_{i=1}\frac{1}{i^2}=\frac{\pi^2}{6}.$$But the $\textstyle\int$ symbol, often dubbed the 'long-S', actually has some meaning of its own. It doesn't just rephrase some well known idea in a strange way; it indicates a very powerful and novel idea that had never been encountered before. In this post, we're going to see some proofs where the idea of integrals makes a stellar appearance.


Exhibit #1: Silence Your Silly Friend
Have you ever had a friend (or worse, a teacher) claiming that $\pi$ equals $\frac{22}{7}$? Quite an irrational statement, isn't it? I know how desperate and annoying it is to try to convince them of what's really going on. So here's a novel way to silence your friend once and for all.

Consider the definite integral $\int_0^1 \frac{x^4(1-x)^4}{1+x^2}\, \text{d}x$. It can be evaluated as follows (mostly copied from Wikipedia):

\begin{align*}
 \int_0^1\frac{x^4(1-x)^4}{1+x^2}\,\text{d}x & = \int_0^1\frac{x^4-4x^5+6x^6-4x^7+x^8}{1+x^2}\,\text{d}x\\
& = \int_0^1 \left(x^6-4x^5+5x^4-4x^2+4-\frac{4}{1+x^2}\right) \,\text{d}x \\
& = \left.\left(\frac{x^7}{7}-\frac{2x^6}{3}+ x^5- \frac{4x^3}{3}+4x-4\arctan{x}\right)\,\right|_0^1 \\
& = \frac{1}{7}-\frac{2}{3}+1-\frac{4}{3}+4-\pi \\
& = \frac{22}{7}-\pi. 
\end{align*}And here's the surprise. Since the integrand is positive on $(0,1)$, the integral must be positive, implying $\frac{22}{7}>\pi$. $\blacksquare$

The question of evaluating the integral was asked on Putnam 1968. It is surprisingly simple, no doubt, but the inference drawn is simply surprising.
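And if you have sympy lying around (assuming it's installed), it will happily double-check the computation:

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x**4 * (1 - x)**4 / (1 + x**2), (x, 0, 1)))   # 22/7 - pi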


Exhibit #2: Bound for LCM

Let's say we wish to obtain a good lower bound on the expression $\text{lcm}(1,2,\cdots ,n)$. Why we need that is irrelevant, but it can be of use in, say, some analytic number theory proof. Specifically, we wish to prove that $$\text{lcm}(1,2,\cdots , 2n+1)\ge 4^n.$$Note how it would give us a bound of roughly $2^n$ for the original expression.

But how do we prove this? There are many ways, but the following proof is particularly nifty:

Call the expression $\text{lcm}(1,2,\cdots , 2n+1)$ simply $X$, and consider the integral $\int_0^1 x^n(1-x)^n\,\text{d} x.$ When we expand out the expression $x^n(1-x)^n$, the largest degree of $x$ would be $2n$, so the integrals of the terms would be of the form $\left.\frac{c_ix^{i+1}}{i+1}\right|_{0}^1=\frac{c_i}{i+1}$ for integers $c_i$ and $i\in \{0,1,\cdots ,2n\}$. That means $X\int_0^1 x^n(1-x)^n\,\text{d} x$ would be an integer; but it's clearly positive, so $X\int_0^1 x^n(1-x)^n\,\text{d} x\ge 1$. Now note that AM-GM gives $x(1-x)\le \frac14$, so $$\int_0^1 x^n(1-x)^n\,\text{d} x\le \int_0^1 \frac{1}{4^n}\,\text{d} x=\frac{1}{4^n}\implies X\ge 4^n.$$This proves our claim. $\blacksquare$
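
The bound is also easy to spot-check in a few lines of Python, if you're so inclined (a quick sketch of mine):

from math import gcd
from functools import reduce

def lcm_upto(k):
    # lcm(1, 2, ..., k) via pairwise lcm
    return reduce(lambda u, v: u * v // gcd(u, v), range(1, k + 1))

for n in range(1, 20):
    assert lcm_upto(2 * n + 1) >= 4 ** n
print("lcm(1, ..., 2n+1) >= 4^n checks out for n = 1, ..., 19")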

I got to know of this proof at the IMOTC this year; don't you think it's too clever not to share?


Exhibit #3: Measuring in Two Ways

The calculation of the antiderivative of $\ln x$ is one of the first exercises you'd encounter after studying integration by parts in a calculus class. It's easy to work out, no doubt; but what follows is another simple proof which I find particularly elegant.

But first, I must mention something called Fubini's theorem in analysis (if you already know what it is, feel free to skip this paragraph). In normal calculus done on the 2-dimensional $xy$-plane, integrals are used to compute areas under curves. When you move on to 3 dimensions, you might be curious about the volume under the graph of some function $f(x,y)$. There are two, morally dual, ways of computing this: you might divvy up the volume into thin sheets parallel to the $xz$-plane, compute each of them by a normal 2-d integral, and then sum over them along the $y$-direction. Or you can do this the other way round: first dividing parallel to the $yz$-plane and then summing along the $x$-direction. It's intuitively obvious that both processes should yield the same answer; and that's what Fubini's theorem says. Informally speaking, this allows us to 'switch' integrals as we often do with sums (under certain conditions, but we won't go into the details). It's the continuous version of double-counting, if you will.

Now we move to the proof. Note that $\ln x=\int _1^x \frac1u\,\text{d}u$, so we have $$\int _1^t \ln x\,\text{d}x=\int_1^t \int_1^x \frac{1}u \,\text{d}u\,\text{d}x.$$Now what does the last integral mean? It's the volume under the surface $f(x,u)=\frac1u$, but computed with rather weird limits. For each $x$, the inside integral goes from $1$ to $x$, while $x$ itself moves from $1$ to $t$. For an intermediate value of $x$, say $t_1$, $u$ goes only up to $t_1$. It's not hard to graph the whole thing and see that this means $(x,u)$ travels inside the triangle with vertices at $(1,1),(t,1),(t,t)$. So when we switch the order of integration, the limits come out a bit peculiar:$$\int_1^t \int_1^x \frac{1}u \,\text{d}u\,\text{d}x=\int_1^t\int_u^t \frac1u\,\text{d}x\,\text{d}u=\int_1^t\left( \frac tu-1\right)\,\text{d}u=\left. t\ln u-u\right|_{u=1}^{u=t}=t\ln t-t+1.$$That's the same thing we'd get from integration by parts. $\blacksquare$

This beautiful proof is stolen from Math.SE.
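(If you'd rather check than believe, here's a quick numeric test of mine, with $t=3$ picked arbitrarily:)

real t=3;
int N=100000;
real s=0;
for(int i=0; i<N; ++i)
 s += log(1+(t-1)*(i+0.5)/N)*(t-1)/N;   // midpoint sum for the integral of ln x over [1,t]
write(s);                // ~1.29584
write(t*log(t)-t+1);     // t ln t - t + 1: the same number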

Asymptotic Growth of Fun

by Ankoganit, Apr 29, 2017, 4:01 PM

Asymptote! Even the name sounds super cool! But, what is it?

People familiar with AoPS are probably familiar with all those cute Asymptote-generated diagrams as well. In a nutshell, Asymptote is a powerful programming language (so the diagrams are accurate) for producing vector graphics (so you can zoom in as hard as you want without hitting pixels) that also happens to be $\text{\LaTeX}$-compatible. And, the key reason we're discussing it in the first place: it's supported by AoPS bbcode!

We're all familiar with accurate and elegant geometry diagrams generated by Asymptote. In this post, I intend to showcase some better examples of what Asymptote is really made for. You should definitely read this post, because:
  • If you're interested in programming, this would be somewhat helpful.
  • If you like to look at pretty mathematical pictures, this post will have plenty of them.
  • If you loathe both programming and pretty pictures, you should probably rethink the purpose of your life.

Exhibit #1: Fractals

Fractals are beautiful pieces of intricate mathematical graphics that can be described in surprisingly simple terms. (Here when I say "fractals", I mean the popular notion of self-similar objects, and not the mathematically accurate definition involving fractional dimensions and stuff.) Let's begin with our favorite Sierpinski triangle:

[asy]size(15cm);
// a triangle is just its three vertices
struct triangle{pair A;pair B;pair C;}
// apply an affine transform to each vertex
triangle applyt(triangle T,transform t){triangle ans;ans.A=t*(T.A);ans.B=t*(T.B);ans.C=t*(T.C);return ans;}
void drawt(triangle T){fill(T.A--T.B--T.C--cycle);}
void drawtarray(triangle[] T){ for(triangle t:T){drawt(t);} }
// sierpinski(1) is a plain equilateral triangle; sierpinski(n) is three
// shifted copies of sierpinski(n-1), all scaled down by a factor of 2
triangle[] sierpinski(int n){
 triangle X; X.A=(0,0); X.B=(1,0); X.C=(1/2,sqrt(3)/2);
 triangle[] ans;
 if(n==1){ ans.push(X); }
 else{
  triangle[] preans; triangle[] precopy=sierpinski(n-1);
  preans.append(precopy);                                                // bottom-left copy
  for(triangle t:precopy){preans.push(applyt(t,shift(1,0)));}            // bottom-right copy
  for(triangle t:precopy){preans.push(applyt(t,shift(1/2,sqrt(3)/2)));}  // top copy
  for(triangle t:preans){ans.push(applyt(t,scale(1/2)));}                // shrink to fit
 }
 return ans;}
triangle[] X=sierpinski(10);
drawtarray(X);
[/asy]

The interesting thing to note is the line in the code that says:
triangle[] X=sierpinski(10);

It basically tells Asymptote to spit out the $10$th iteration of the drawing. So we could very easily change the number to something else. I don't recommend increasing the number too far, though; since the number of triangles grows exponentially ($3^{n-1}$ of them at the $n$th step), setting the number too high is essentially telling Asymptote to commit suicide. But we can try smaller values of $n$ to see the steps of iteration (this time in colour!):

[asy]size(15cm);struct triangle{pair A;pair B;pair C;}
triangle applyt(triangle T,transform t){triangle ans;ans.A=t*(T.A);ans.B=t*(T.B);ans.C=t*(T.C);return ans;}
void drawt(triangle T, pair P){fill((T.A+P)--(T.B+P)--(T.C+P)--cycle,blue);}
void drawtarray(triangle[] T, pair P){ for(triangle t:T){drawt(t,P); }} 
triangle X;X.A=(0,0);X.B=(1,0);X.C=(1/2,sqrt(3)/2);
triangle[] sierpinski(int n){ triangle X; X.A=(0,0); X.B=(1,0); X.C=(1/2,sqrt(3)/2); triangle[] ans; if(n==1){ ans.push(X);}else{ triangle[] preans;triangle[] precopy=sierpinski(n-1);preans.append(precopy);
  for(triangle t:precopy)
  {preans.push(applyt(t,shift(1,0)));}
  for(triangle t:precopy)
  {preans.push(applyt(t,shift(1/2,sqrt(3)/2)));}
  for(triangle t:preans)
  {ans.push(applyt(t,scale(1/2)));}}
 return ans;} 
triangle[] X1=sierpinski(1);triangle[] X2=sierpinski(2);triangle[] X3=sierpinski(3);triangle[] X4=sierpinski(4);triangle[] X5=sierpinski(5);triangle[] X6=sierpinski(6);triangle[] X7=sierpinski(7);triangle[] X8=sierpinski(8);
drawtarray(X1,(0,0));drawtarray(X2,(1.2,0));drawtarray(X3,(2.4,0));drawtarray(X4,(3.6,0));drawtarray(X5,(0,-1));drawtarray(X6,(1.2,-1));drawtarray(X7,(2.4,-1));drawtarray(X8,(3.6,-1));
[/asy]

How the code works:
First a disclaimer: the above is a very inefficient way of doing this. Basically I was playing around with the idea of defining new data structures, and decided to use it here; it's neither necessary nor the smartest way. But anyway, it is what it is; and I can't be bothered to improve it as long as it works.

So yeah, the code begins by defining a new data structure called $\texttt{triangle}$ that holds the coordinates of three vertices. Then I tell Asymptote what it means to "draw a triangle"; that's why I define something called $\texttt{drawt}$. The function $\texttt{drawtarray}$ tells it how to draw a whole bunch of triangles. And then begins the real fun.

Of course, I've used recursion to get a Sierpinski triangle. First, $\texttt{sierpinski(1)}$ is defined as a plain equilateral triangle. For $n>1$, I tell the computer to do this: (1) get $\texttt{sierpinski(n-1)}$, (2) take three copies of it and place them correctly, (3) scale the whole thing down by half. And boom! You have $\texttt{sierpinski(n)}$! Simple enough, right?


Let's now try something else: what about a Hilbert curve? It's a fractal, a space-filling curve, and, most importantly, it's super-easy to code!

[asy]size(10cm);
// hilbert(1) is this little upside-down U on the unit square
pair[] X={(0,0),(0,1),(1,1),(1,0)};
void drawarray(picture p=currentpicture, pair[] A){ for(int i=0;i<(A.length-1);++i){draw(p,A[i]--A[i+1]);}}
// reflect a polyline across the diagonal through (0,0) and (1,1)...
pair[] flipfront(pair[] A){ pair[] answer;for(pair a: A){ answer.push(2*foot(a,(0,0),(1,1))-a);} return answer;}
// ...and across the anti-diagonal through (1,0) and (0,1)
pair[] flipback(pair[] A){ pair[] answer;for(pair a: A){ answer.push(2*foot(a,(1,0),(0,1))-a);}return answer;}
// four copies of hilbert(i-1): the bottom two flipped, then everything rescaled
pair[] hilbert(int i){pair[] final;if(i==1){ final.append(X); } else {
  pair[] precopy=hilbert(i-1);
  pair[] bottomleft=flipfront(precopy);
  pair[] bottomrightone=flipback(precopy);
  pair[] topleft;
  pair[] topright;
  pair[] bottomright;
  real unitt=(abs(precopy[1]-precopy[0]));
  for(pair k:precopy){
   topleft.push(k+(0,1+unitt));
   topright.push(k+(1+unitt,1+unitt));
  }
  for(pair k:bottomrightone){
   bottomright.push(k+(1+unitt,0));
  } 
  bottomleft.append(topleft);
  bottomleft.append(topright);
  bottomleft.append(bottomright);
  for(pair x:bottomleft){
   final.push(x/abs(bottomleft[bottomleft.length-1]-(0,0)));
  }
 }
 return final;  
}
pair[] X=hilbert(6);
drawarray(X);[/asy]
The above maze-y thing is the $6$th iteration of the algorithm, as the penultimate line in the code would tell you. As before, it's natural to want to see how the iterations evolve:
[asy]size(15cm);pair[] X={(0,0),(0,1),(1,1),(1,0)};
void drawarray(picture p=currentpicture, pair[] A,pair P){ for(int i=0;i<(A.length-1);++i){draw(p,(A[i]+P)--(A[i+1]+P));}}
pair[] flipfront(pair[] A){ pair[] answer;for(pair a: A){ answer.push(2*foot(a,(0,0),(1,1))-a);} return answer;}
pair[] flipback(pair[] A){ pair[] answer;for(pair a: A){ answer.push(2*foot(a,(1,0),(0,1))-a);}return answer;}
pair[] hilbert(int i){pair[] final;if(i==1){ final.append(X); } else {
  pair[] precopy=hilbert(i-1);
  pair[] bottomleft=flipfront(precopy);
  pair[] bottomrightone=flipback(precopy);
  pair[] topleft;
  pair[] topright;
  pair[] bottomright;
  real unitt=(abs(precopy[1]-precopy[0]));
  for(pair k:precopy){
   topleft.push(k+(0,1+unitt));
   topright.push(k+(1+unitt,1+unitt));
  }
  for(pair k:bottomrightone){
   bottomright.push(k+(1+unitt,0));
  } 
  bottomleft.append(topleft);
  bottomleft.append(topright);
  bottomleft.append(bottomright);
  for(pair x:bottomleft){
   final.push(x/abs(bottomleft[bottomleft.length-1]-(0,0)));
  }
 }
 return final;  
}
pair[] X1=hilbert(1);pair[] X2=hilbert(2);pair[] X3=hilbert(3);pair[] X4=hilbert(4);pair[] X5=hilbert(5);pair[] X6=hilbert(6);
drawarray(X1,(0,0));drawarray(X2,(1.2,0));drawarray(X3,(2.4,0));drawarray(X4,(0,-1.2));drawarray(X5,(1.2,-1.2));drawarray(X6,(2.4,-1.2));[/asy]
Notice how quickly the pattern gets more and more awful awesome.

But it's more fun to see the whole thing in action, so here's a link to a turtle program for that.

How it works: The algorithm is as simple as it can be: start with the simplest thing, $\texttt{hilbert(1)}$, and for each next iteration: scale it down by a factor of $\tfrac12$, create four copies, flip the lower two parts, and join the lines. Rinse and repeat.


Fractals are cool and all, but when you cram too many of them into a single post, it starts getting boring. So I'll end this section with a Koch snowflake.

First, let's have a look at the 6th iteration:

[asy]
size(10cm);
// join the points into a closed path and fill it
void drawarray(pair[] A){
path X=A[0];for(int i=1;i<(A.length);++i){ X=X--A[i];}X=X--cycle;filldraw(X,cyan);}
// replace the segment AB by the four points that begin a Koch 'bump'
pair[] Kochify(pair A, pair B){pair[] ans;ans.push(A);ans.push(A+(1/3)*(B-A));ans.push(rotate(-60,A+(1/3)*(B-A))*(A+(2/3)*(B-A)));ans.push(A+(2/3)*(B-A));return ans;
}
pair[] Koch(int n){ pair[] X;pair[] base={(0,0),(1,0),(1/2,sqrt(3)/2)};if(n==1){X.append(base);} else{pair[] precopy=Koch(n-1);precopy.cyclic=true;
 for(int i=0;i<(precopy.length);++i){ X.append(Kochify(precopy[i],precopy[i+1]));}}
 return X;}
pair[] X=Koch(6);
drawarray(X);[/asy]
And a gallery of its evolution:
[asy]
size(15cm);void drawarray(pair[] A,pair P){
path X=A[0]+P;for(int i=1;i<(A.length);++i){ X=X--(A[i]+P);}X=X--cycle;filldraw(X,cyan);}
pair[] Kochify(pair A, pair B){pair[] ans;ans.push(A);ans.push(A+(1/3)*(B-A));ans.push(rotate(-60,A+(1/3)*(B-A))*(A+(2/3)*(B-A)));ans.push(A+(2/3)*(B-A));return ans;
}
pair[] Koch(int n){ pair[] X;pair[] base={(0,0),(1,0),(1/2,sqrt(3)/2)};if(n==1){X.append(base);} else{pair[] precopy=Koch(n-1);precopy.cyclic=true;
 for(int i=0;i<(precopy.length);++i){ X.append(Kochify(precopy[i],precopy[i+1]));}}
 return X;}
pair[] X1=Koch(1);pair[] X2=Koch(2);pair[] X3=Koch(3);pair[] X4=Koch(4);pair[] X5=Koch(5);pair[] X6=Koch(6);
drawarray(X1,(0,0));drawarray(X2,(1.2,0));drawarray(X3,(2.4,0));drawarray(X4,(0,-1.2));drawarray(X5,(1.2,-1.2));drawarray(X6,(2.4,-1.2));[/asy]
Cute, right?



Exhibit #2: Tunnel Vision
The following image is nothing but a bunch of interlaced squares, but the precision offered by Asymptote makes it far better than any manually made graphic could ever hope to be.
[asy]real r=0.95;                  // each new vertex sits 95% of the way along a side
pair[] A={(0,0),(1,0),(1,1),(0,1)};
A.cyclic=true;
for(int i=1;i<100;++i){
real x=1-1/(1.05^i);               // brightness creeps up to 1 exponentially
fill(A[0]--A[1]--A[2]--A[3]--cycle,rgb(x,x,x^(1/3)));    // x^(1/3) gives the bluish tint
pair B1=A[0]*(1-r)+A[1]*(r);pair B2=A[1]*(1-r)+A[2]*(r); // the next, slightly rotated square
pair B3=A[2]*(1-r)+A[3]*(r);pair B4=A[3]*(1-r)+A[0]*(r);
A[0]=B1;A[1]=B2;A[2]=B3;A[3]=B4;}
[/asy]

By the way, did you know that we can generate animations with Asymptote? Unfortunately the AoPS renderer doesn't support the $\texttt{animation}$ package, so you have to run the code on your local machine. Here's the output GIF, hosted on Giphy (because AoPS won't be happy with anything bigger than $512$ kB; lemme know if the link breaks):
https://media.giphy.com/media/3og0IUQkmmt5MHIbrW/giphy.gif

In case you're curious, here's the full code; but you'll need Asymptote and ImageMagick to compile it. Note how remarkably short the code is, considering the complexity of the output.

Can you imagine how this might look for other regular polygons? How about non-regular ones? What if we change the color? What if the brightness increases linearly instead of exponentially? The possibilities are simply limitless -- heading towards infinity. Just as an asymptote does.

Laser Shootouts and Some Math

by Ankoganit, Apr 12, 2017, 10:13 AM

It's been almost two months since I last posted an entry. Oops.

So here are some random problems I've been trying, plus some random other stuff I've been thinking about. Basically, this is going to be the post that best lives up to the blog description.

Random Thing #1: Lasers

I think we're all familiar with those problems about lasers (or sometimes, unrealistically perfect billiard balls) that bounce off the walls of some rectangular/triangular/some-other-angular board. The standard trick is to reflect the board across its walls so that the laser's path becomes a nice, manageable straight line. But recently, lasers have been making appearances in the combinatorial arena in the most unexpected of situations, mostly thanks to our resident expert on combinatorics, MellowMelon.
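To make the trick concrete, here's a little sketch of my own (not from any particular problem): a laser fired from a corner of the unit square with slope $\tfrac23$. The blue path is the actual bouncing trajectory; the red line is the same journey unfolded into a straight segment through a grid of reflected copies of the square.

[asy]size(9cm);
// fold a coordinate back into [0,1] by repeated reflection
real fold(real x){ real r=x-2*floor(x/2); return min(r,2-r); }
pair fold(pair z){ return (fold(z.x),fold(z.y)); }
// the grid of reflected copies, with the original square in solid lines
for(int i=0;i<=3;++i) draw((i,0)--(i,2),dotted);
for(int j=0;j<=2;++j) draw((0,j)--(3,j),dotted);
draw((0,0)--(1,0)--(1,1)--(0,1)--cycle);
// unfolded: just the straight segment from (0,0) to (3,2)
draw((0,0)--(3,2),red);
// folded: reflect every sample point back into the unit square
guide g;
for(int i=0;i<=300;++i) g=g--fold((3,2)*(i/300));
draw(g,blue+1);
[/asy]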

Here goes an example:
Quote:
(MEMO 2015) Let $N$ be a positive integer. In each of the $N^2$ unit squares of an $N\times N$ board, one of the two diagonals is drawn. The drawn diagonals divide the $N\times N$ board into $K$ regions. For each $N$, determine the smallest possible values of $K$.

[asy]
draw((0,0)--(3,0)--(3,3)--(0,3)--cycle);
draw((1,0)--(1,3), dotted);
draw((2,0)--(2,3), dotted);
draw((0,1)--(3,1), dotted);
draw((0,2)--(3,2), dotted);
draw((1,0)--(0,1)--(2,3)--(3,2)--(2,1)--(0,3));
draw((1,1)--(2,0)--(3,1));
label("$1$",(0.35,2));
label("$2$",(1,2.65));
label("$3$",(2,2));
label("$4$",(2.65,2.65));
label("$5$",(0.35,0.35));
label("$6$",(1.3,1.3));
label("$7$",(2.65,0.35));
label("Example with $N=3$, $K=7$",(0,-0.3)--(3,-0.3),S);
[/asy]

Solution

Random Thing #2: Completeness
Real numbers have a number of properties that ordinary rationals don't. For example, every nonempty lower-bounded set of reals has a real infimum; or, most sequences that should morally converge in $\mathbb R$ do converge in $\mathbb R$.

Let me make the last point a bit more explicit. Convergence is a familiar topic to calculus students; put in hand-wavy language, it means that a sequence gets closer and closer to some specific point. Real numbers make it always possible to pinpoint that "some" point; a quality that mere rationals lack. To see how, suppose reals haven't been invented yet. We're stuck with boring fractions, as, say, the Archimedeans were. Everything goes well till, one day, someone invents a curious sequence:$$1,1+\frac12,1+\frac{1}{2+\frac12},1+\frac{1}{2+\frac{1}{2+\frac{1}{2}}},1+\frac{1}{2+\frac{1}{2+\frac{1}{2+\frac12}}},\cdots $$You, being the enlightened reader, perhaps realize that this is essentially approximating $\sqrt{2}$ with continued fractions. But imagine the state of those "rationalists"! They must've been quite creeped out; every term is a perfectly ordinary rational number, the terms get closer and closer together, so it's morally correct for them to converge somewhere, but they're not approaching any fraction anyone knows of!
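You can even watch the drama unfold numerically. Each term is $1+\frac{1}{1+x}$ applied to the previous term $x$, so with $x=p/q$ the next term is $\frac{p+2q}{p+q}$; here's a throwaway snippet of mine that churns out the convergents:

int p=1, q=1;                      // the first term, 1/1
for(int i=0; i<10; ++i){
 write(string(p)+"/"+string(q)+" = "+string(p/q));
 int pnew=p+2*q, qnew=p+q;         // 1 + 1/(1 + p/q) = (p+2q)/(p+q)
 p=pnew; q=qnew;
}

Perfectly honest fractions all the way down, piling up on $1.41421\ldots$ without ever landing on a fraction.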

But real numbers are special. Any morally convergent sequence actually has a real limit. Now let's rigorize the 'morally convergent' idea:

Cauchy Sequence (Morally Convergent Sequence): A sequence $\langle x_i\rangle _{i=0}^\infty $ of reals/rationals/bunnies/whatever is called Cauchy, if for every positive real $\epsilon$, there is some integer $N$ so that $d(x_m,x_n)<\epsilon$ for every $m,n>N$. Here $d(x,y)$ is the distance function; for reals/rationals, it usually means simply $|x-y|$ (although there are other, weirder notions of distance). I don't know about bunnies, you should ask someone else.

So real numbers have this interesting property that every Cauchy sequence converges; we say that real numbers are complete. This isn't entirely obvious (even if it's intuitively clear); it needs to be proven rigorously. Depending on your definition of reals, it can either follow from the definition immediately (if you define $\mathbb R$ to be the completion of $\mathbb Q$; why that's a valid definition is another, more complicated question); or it can take a fair amount of busywork (if, say, you define reals via Dedekind cuts). Basically, it's complicated, so I won't go into it here. But if you take this completeness on faith, a lot of other useful properties follow.
  • Property 1: Infimum exists (aka greatest lower bound property): Infimum is basically forcing a bounded set to have some sort of ad-hoc minimum, even if it's reluctant to take on one. Consider the interval $(1,2)$. It has literally forsaken its minimum $1$ to attain some sort of saintly asceticism; but stubborn mathematicians have still found a way to say $1$ is its sharpest lower bound; its greatest lower bound, to be exact. To prove such a greatest lower bound (infimum) exists, one of the many ways is to take any lower bound and walk upward till you hit the greatest one. You may never actually hit it, but completeness guarantees that your walk will converge somewhere. Then you can prove quite effortlessly that that limit is the desired infimum. Of course, by a similar logic, one could show the existence of the supremum for any real set bounded from above.
  • Property 2: Monotone convergence theorem: This theorem states that any monotonically increasing sequence that's bounded above must converge (to its supremum); in other words, sky sup is the limit. It's not hard to prove this from Property 1. Note that this is often cited as the Bolzano-Weierstrass theorem, and to make matters worse, there's a different (but related) theorem by that name. Basically, mathematicians are bad at naming things and using them consistently.
Now let's use these weapons.
Quote:
Let $\langle x_i\rangle_{i=0}^\infty$ be a sequence of positive real numbers such that $x_0=1$ and $x_i\ge x_{i+1}$ for all nonnegative integers $i$.
  1. Prove that for every such sequence, there is an integer $n$ such that $$\frac{x_0^2}{x_1}+\frac{x_1^2}{x_2}+\cdots +\frac{x_{n-1}^2}{x_n}\ge 3.999.$$
  2. Find a sequence in which $$\frac{x_0^2}{x_1}+\frac{x_1^2}{x_2}+\cdots +\frac{x_{n-1}^2}{x_n}<4$$for every $n$.

To begin with, the $3.999$ bound looks contrived enough; replace it with $4-\epsilon$. Now, let's see; the only bounds we have on the $x_i$'s are that they are non-increasing, and that they are positive. The positivity is a nice thing because it means we don't have to worry about the complications that negative numbers bring to inequalities. How do we use the non-increasing-ness? Note that $x_{i}\ge x_{i+1}$ means that $\frac{x_{i}^2}{x_{i+1}}\ge x_i$, so the sum on the LHS admits a simple estimate: $$\frac{x_0^2}{x_1}+\frac{x_1^2}{x_2}+\cdots +\frac{x_{n-1}^2}{x_n}\ge x_0+x_1+x_2+\cdots+x_n.$$Now one thing is for sure: if we can manage a positive lower bound on the $x_i$, say $c$, then the sum on the right side above exceeds $nc$, which blows up for large enough $n$; $4-\epsilon$ is not a problem in this case. So this case is cleared. But what if this is not the case? What if there's no such $c$? What if the $x_i$'s get arbitrarily small, wiggling arbitrarily close to zero? (Not that arbitrarily, though.)

This seems like the perfect time to try part 2 and get a feel for the sequence in the edge cases. Let's see, we need to find an example. A sequence with a positive lower bound won't work, as we saw above; so let's make the $x_i$ approach $0$. An infinite positive sequence tending to zero that has a nice closed form...must be a geometric sequence, right? Trying $x_i=r^i$ quickly yields $r=\frac12$, and indeed, the sequence $x_i=\frac1{2^i}$ works. Part 2 solved!
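(As a throwaway check, here are the partial sums for $x_i=\frac1{2^i}$ creeping up to, but never touching, $4$:)

real s=0, xprev=1;        // x_0 = 1
for(int i=1; i<=25; ++i){
 real x=xprev/2;          // x_i = 1/2^i
 s += xprev^2/x;
 xprev=x;
 write(s);                // 2, 3, 3.5, 3.75, ... -> 4
}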

Returning to part 1, we realize that we need a bound on the terms that attains equality at $x_i=2x_{i+1}$; why not try the most obvious option, $(x_i-2x_{i+1})^2\ge 0$? Expanding the terms and fiddling around a bit, we see that this obvious inequality is equivalent to $\frac{x_i^2}{x_{i+1}}\ge 4x_i-4x_{i+1}$. And guess what, that telescopes! Using this new estimate in the left-hand sum yields:$$\frac{x_0^2}{x_1}+\frac{x_1^2}{x_2}+\cdots +\frac{x_{n-1}^2}{x_n}\ge 4x_0-4x_1+4x_1-\cdots-4x_n=4-4x_n.$$Remember the left-out case from the first paragraph: if the $x_n$ indeed do approach $0$, then $4-4x_n$ gets as close to $4$ as we wish, so it crosses $4-\epsilon$ at some point. Huzzah! $\blacksquare$

Strictly speaking, none of that completeness stuff was actually needed; but the notion of infimum gives a nice way to rigorize the case-splitting: the two cases are simply when $\inf x_n$ is $0$ and when it's not. I know, I know, there are simpler ways of seeing things, but hey, the phrase 'completeness of the reals' is way too cool not to use.

Next is an example where I actually use the above tools:
Quote:
(Iran TST 2012) The function $f:\mathbb R^{\ge 0} \longrightarrow \mathbb R^{\ge 0}$ satisfies the following properties for all $a,b\in \mathbb R^{\ge 0}$:

a) $f(a)=0 \Leftrightarrow a=0$

b) $f(ab)=f(a)f(b)$

c) $f(a+b)\le 2 \max \{f(a),f(b)\}$.

Prove that for all $a,b\in \mathbb R^{\ge 0}$ we have $f(a+b)\le f(a)+f(b)$.

Proposed by Masoud Shafaei

Let's begin with some preliminary observations. First off, setting $a=b=1$ in (b) gives $f(1)=f(1)^2$, and (a) rules out $f(1)=0$, so $f(1)=1$; then setting $a=b=1$ in (c) gives $f(2)\le 2f(1)=2$. Obviously, multiplicativity nicely generalizes to any number of variables; for example, $f(abc)=f(ab)f(c)=f(a)f(b)f(c)$. Also, setting $a=b=x$ in (b) gives $f(x^2)=f(x)^2$. This is going to be useful.

Now define a sequence $\langle c_i\rangle _{i=0}^{\infty}$ as follows: $$c_i=\begin{cases} 2 & \text{ if }i=0,\\ \displaystyle\sqrt{\frac{2c_{i-1}^2+c_{i-1}}{3}} & \text{ if }i\ge 1.\end{cases}$$We claim that $f(x+y)\le c_i \left(f(x)+f(y)\right)$ for all $i\in \mathbb N_0$ and all $x,y\in\mathbb R^{\ge 0}$. Let's do induction on $i$.

For $i=0$ we need to prove $f(x+y)\le 2\left(f(x)+f(y)\right)$; but c'mon, that's obvious: $f(x+y)\le 2\max\{f(x),f(y)\}\le 2\left(f(x)+f(y)\right).$ Now time to make the jump from $i$ to $i+1$.

We have \begin{align*} f(x+y)^2 &= f((x+y)^2)\\ &=f(x^2+2xy+y^2)\\
&\le c_if(x^2)+c_if(2xy+y^2)\\
& \le c_if(x^2)+c_i^2f(2xy)+c_i^2f(y^2) && (1)\end{align*}In a similar fashion, we would get
\begin{align*} f(x+y)^2 & \le c_i^2f(x^2)+c_if(2xy)+c_i^2f(y^2) && (2)\\
f(x+y)^2 &\le c_i^2f(x^2)+c_i^2f(2xy)+c_if(y^2) && (3)\end{align*}Adding these up,
\begin{align*} 3f(x+y)^2& \le (2c_i^2+c_i)\left(f(x^2)+f(2xy)+f(y^2)\right)\\
&= (2c_i^2+c_i)\left(f(x)^2+f(2)f(x)f(y)+f(y)^2\right)\\
&\le (2c_i^2+c_i)\left(f(x)^2+2f(x)f(y)+f(y)^2\right)\\
&\le (2c_i^2+c_i)(f(x)+f(y))^2\\
\implies f(x+y)& \le \sqrt{\frac{2c_i^2+c_i}{3}}\left(f(x)+f(y)\right)\\
&\le c_{i+1}(f(x)+f(y)).\end{align*}This completes the induction.

Now focus on the sequence $\langle c_i\rangle _{i=0}^\infty$. It is rather easy to establish by induction that $c_i\ge 1$ for all $i$ and that it's a monotone decreasing sequence. So by the monotone convergence theorem (see? see its sheer power?) this sequence converges; let's say the limit is $c$. Then taking $i\to \infty$ in the relation $c_{i+1}=\sqrt{\frac{2c_i^2+c_i}{3}}$ gives $c=\sqrt{\frac{2c^2+c}{3}}\implies c\in\{0,1\}$. The case $c=0$ can be eliminated because $c_i\ge 1$, as we saw before; so $\lim_{i\to\infty} c_i=1$. Now letting $i\to\infty$ in $f(x+y)\le c_i \left(f(x)+f(y)\right)$ immediately gives $f(a+b)\le f(a)+f(b)$, as desired. $\blacksquare$
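(For the skeptics, here's the sequence sliding down to $1$; just a quick run of mine, not part of the proof:)

real c=2;                 // c_0 = 2
for(int i=0; i<25; ++i){
 write(c);                // 2, 1.8257..., 1.6825..., crawling towards 1
 c=sqrt((2*c^2+c)/3);
}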

Random Thing #3: A Combinatorial Conundrum

Some time ago, I was trying out some past Taiwanese problems when I stumbled upon this cute problem. Incidentally, EGMO 2017 P5 happened to feature a similar idea.
Quote:
(Taiwan TST 2014) Positive integers $x_1, x_2, \dots, x_n$ ($n \ge 4$) are arranged in a circle such that each $x_i$ divides the sum of the neighbors; that is \[ \frac{x_{i-1}+x_{i+1}}{x_i} = k_i \]is an integer for each $i$, where $x_0 = x_n$, $x_{n+1} = x_1$. Prove that \[ 2n \le k_1 + k_2 + \dots + k_n < 3n. \]

The lower bound is pretty trivial; it's just AM-GM. Let's focus on the upper bound. After trying out some small cases, one becomes pretty convinced that $3n-1$ is the largest achievable value, attained when the numbers are simply $1,2,3,\cdots ,n$ in order. Given the equality case, induction seems like a natural choice. The base cases are easy enough (in fact you can totally ignore $n\ge 4$, extrapolate the problem to smaller values of $n$, and prove $n=2$ instead). Let's focus on the induction step.

We want to somehow reduce the $n$ case to the $n-1$ case; in the equality case, that amounts to removing the point labelled $n$, so it seems worthwhile to try and pinpoint that "pivot element" $n$. Let's look at the maximal $x_i$. Secretly, we know that the maximal element is $n$, or at least behaves like $n$, but we still need to show that. Suppose WLOG $x_1$ is the maximal element. Note that we need $x_1\mid x_{n}+x_2$, i.e. $\frac{x_{n}+x_2}{x_1}\in \mathbb N$. If at least one of $x_{n}$ and $x_2$ is strictly less than $x_1$, then $\frac{x_{n}+x_2}{x_1}<2\implies k_1=1\implies x_1=x_{n}+x_2$. Otherwise, if both of them are equal to $x_1$, then either all the elements are equal to $x_1$ (in which case $\sum k_i=2n$, and we've got nothing to prove), or there's a run of copies of the biggest element bordered by some smaller number. If that border looks like $x_i,x_{i+1},x_{i+2}$ with $x_i=x_{i+1}>x_{i+2}$, then $x_{i+1}<x_{i}+x_{i+2}<2x_{i+1}$, contradicting $x_{i+1}\mid x_i+x_{i+2}$. So the only option is $k_1=1$.

Now it's easy to check that removing $x_1$ yields another setup with the same property, and that $\sum k_i$ goes down by exactly $3$. This, together with the induction hypothesis, seals the deal. $\blacksquare$
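(And a quick throwaway check of mine that the arrangement $1,2,\ldots,n$ really does achieve $3n-1$:)

for(int n=4; n<=12; ++n){
 int s=0;
 for(int i=1; i<=n; ++i){
  int left = (i==1 ? n : i-1);       // neighbours on the circle 1,2,...,n
  int right = (i==n ? 1 : i+1);
  s += quotient(left+right, i);      // k_i; each division here is exact
 }
 write(string(n)+": "+string(s));    // always prints 3n-1
}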
This post has been edited 1 time. Last edited by Ankoganit, Apr 12, 2017, 10:15 AM

For the lack of a clever title...

by Ankoganit, Feb 14, 2017, 4:09 AM

After quite a long stretch of math-related posts, this post is mostly going to be a collection of some of my most recent puzzles. Without further ado, here we go.

You're not feeling well. You need something.

You're not feeling good. Something is horribly wrong with you. You're trying hard to feign happiness, but...

...deep inside, you feel dead.
...deep inside, you are hurt.
...deep inside, you know you are hated.
...deep inside, you are all alone.
Forward, and backward. The paths meet.


You need something. What's it? You should let the stars spell it out.
https://i.stack.imgur.com/fBkgE.png


Solved jointly by Wu330 and Rubio.

Solution

I never knew my words could speak!

You know, I've kept some words. Words that work for me. No no, they aren't really my slaves; more like, my pets. They never said they don't like my job or anything; besides, they can't actually do much except what they're told to. They do all sorts of chores for me: like, say, recording a dispute in the past, or cleaning my room. They can even lie to my amazing old relative (he's is one who won't believe), and scold him.

So today morning, as usual, I went to their cage...er, room to take them out and sort out their duties. What I saw there totally blew my mind. I never knew that my pet words could speak! For themselves!

Here's what they seemed to be saying:
  • Did G contain H? L
  • C's got hold of T, and happens to be E. R
  • D equals at least one of C and itself. C
  • Regarding N: does G contain H? L
  • M stays indoors because of itself. C
  • G, P, R -- they all happen to be NT. C
  • Inside G, L is the same as T. L
  • When placed near H, E becomes T. L
  • H equals at least one of T and Y. R

Now, of course, there are people who "talk" to their pets; frankly speaking, I'm not among them. I didn't even know that my pet words could speak! Seriously, I'm very confused right now.

Just what do these words want to say?


Solved by MOehm.

Solution

Be careful with your words!

You use words all the time, don't you? Better be careful; words are a lot more powerful than you might think. Used mindlessly, they might dishearten someone. They might make you sound off-center.

$$\boxed{\begin{array}{c}\text{Decapitate a ut of a necklace}\\
\text{Gained conption fast }\\
\text{The ultimate, last anwer: don't listen much}\\
\text{Most fain ge gone by}\\
\text{A cold desse; a versifier}\end{array}}$$
This should be a lesson: what not to be when using words.


Solved by MOehm, again.

Solution



Links:
http://puzzling.stackexchange.com/questions/46628/youre-not-feeling-well-you-need-something
http://puzzling.stackexchange.com/questions/48397/i-never-knew-my-words-could-speak
http://puzzling.stackexchange.com/questions/48569/be-careful-with-your-words
This post has been edited 2 times. Last edited by Ankoganit, Feb 14, 2017, 4:10 AM
