May Highlights and 2025 AoPS Online Class Information
jlacosta   0
May 1, 2025
May is an exciting month! National MATHCOUNTS is the second week of May in Washington, D.C., and our founder, Richard Rusczyk, will be presenting a seminar, Preparing Strong Math Students for College and Careers, on May 11th.

Are you interested in working towards MATHCOUNTS but don't know where to start? We have you covered! If you have taken Prealgebra, then you are ready for MATHCOUNTS/AMC 8 Basics. Already aiming for State or National MATHCOUNTS and harder AMC 8 problems? Then our MATHCOUNTS/AMC 8 Advanced course is for you.

Summer camps in math and language arts are starting next month at the Virtual Campus, running 2 to 4 weeks each. Spaces are still available, so don't miss your chance to have an enriching summer experience. There are middle school and high school competition math camps, as well as Math Beasts camps that review key topics coupled with fun explorations covering areas such as graph theory (Math Beasts Camp 6), cryptography (Math Beasts Camp 7-8), and topology (Math Beasts Camp 8-9)!

Be sure to mark your calendars for the following upcoming events:
[list][*]May 9th, 4:30pm PT/7:30pm ET, Casework 2: Overwhelming Evidence — A Text Adventure, a game where participants will work together to navigate the map, solve puzzles, and win! All are welcome.
[*]May 19th, 4:30pm PT/7:30pm ET, What's Next After Beast Academy?, designed for students finishing Beast Academy and ready for Prealgebra 1.
[*]May 20th, 4:00pm PT/7:00pm ET, Mathcamp 2025 Qualifying Quiz Part 1 Math Jam, Problems 1 to 4, join the Canada/USA Mathcamp staff for this exciting Math Jam, where they discuss solutions to Problems 1 to 4 of the 2025 Mathcamp Qualifying Quiz!
[*]May 21st, 4:00pm PT/7:00pm ET, Mathcamp 2025 Qualifying Quiz Part 2 Math Jam, Problems 5 and 6, Canada/USA Mathcamp staff will discuss solutions to Problems 5 and 6 of the 2025 Mathcamp Qualifying Quiz![/list]
Our full course list for upcoming classes is below:
All classes run 7:30pm - 8:45pm ET / 4:30pm - 5:45pm PT unless otherwise noted.

Introductory: Grades 5-10

Prealgebra 1 Self-Paced

Prealgebra 1
Tuesday, May 13 - Aug 26
Thursday, May 29 - Sep 11
Sunday, Jun 15 - Oct 12
Monday, Jun 30 - Oct 20
Wednesday, Jul 16 - Oct 29

Prealgebra 2 Self-Paced

Prealgebra 2
Wednesday, May 7 - Aug 20
Monday, Jun 2 - Sep 22
Sunday, Jun 29 - Oct 26
Friday, Jul 25 - Nov 21

Introduction to Algebra A Self-Paced

Introduction to Algebra A
Sunday, May 11 - Sep 14 (1:00 - 2:30 pm ET/10:00 - 11:30 am PT)
Wednesday, May 14 - Aug 27
Friday, May 30 - Sep 26
Monday, Jun 2 - Sep 22
Sunday, Jun 15 - Oct 12
Thursday, Jun 26 - Oct 9
Tuesday, Jul 15 - Oct 28

Introduction to Counting & Probability Self-Paced

Introduction to Counting & Probability
Thursday, May 15 - Jul 31
Sunday, Jun 1 - Aug 24
Thursday, Jun 12 - Aug 28
Wednesday, Jul 9 - Sep 24
Sunday, Jul 27 - Oct 19

Introduction to Number Theory
Friday, May 9 - Aug 1
Wednesday, May 21 - Aug 6
Monday, Jun 9 - Aug 25
Sunday, Jun 15 - Sep 14
Tuesday, Jul 15 - Sep 30

Introduction to Algebra B Self-Paced

Introduction to Algebra B
Tuesday, May 6 - Aug 19
Wednesday, Jun 4 - Sep 17
Sunday, Jun 22 - Oct 19
Friday, Jul 18 - Nov 14

Introduction to Geometry
Sunday, May 11 - Nov 9
Tuesday, May 20 - Oct 28
Monday, Jun 16 - Dec 8
Friday, Jun 20 - Jan 9
Sunday, Jun 29 - Jan 11
Monday, Jul 14 - Jan 19

Paradoxes and Infinity
Mon, Tue, Wed, & Thurs, Jul 14 - Jul 17 (meets every day of the week!)

Intermediate: Grades 8-12

Intermediate Algebra
Sunday, Jun 1 - Nov 23
Tuesday, Jun 10 - Nov 18
Wednesday, Jun 25 - Dec 10
Sunday, Jul 13 - Jan 18
Thursday, Jul 24 - Jan 22

Intermediate Counting & Probability
Wednesday, May 21 - Sep 17
Sunday, Jun 22 - Nov 2

Intermediate Number Theory
Sunday, Jun 1 - Aug 24
Wednesday, Jun 18 - Sep 3

Precalculus
Friday, May 16 - Oct 24
Sunday, Jun 1 - Nov 9
Monday, Jun 30 - Dec 8

Advanced: Grades 9-12

Olympiad Geometry
Tuesday, Jun 10 - Aug 26

Calculus
Tuesday, May 27 - Nov 11
Wednesday, Jun 25 - Dec 17

Group Theory
Thursday, Jun 12 - Sep 11

Contest Preparation: Grades 6-12

MATHCOUNTS/AMC 8 Basics
Friday, May 23 - Aug 15
Monday, Jun 2 - Aug 18
Thursday, Jun 12 - Aug 28
Sunday, Jun 22 - Sep 21
Tues & Thurs, Jul 8 - Aug 14 (meets twice a week!)

MATHCOUNTS/AMC 8 Advanced
Sunday, May 11 - Aug 10
Tuesday, May 27 - Aug 12
Wednesday, Jun 11 - Aug 27
Sunday, Jun 22 - Sep 21
Tues & Thurs, Jul 8 - Aug 14 (meets twice a week!)

AMC 10 Problem Series
Friday, May 9 - Aug 1
Sunday, Jun 1 - Aug 24
Thursday, Jun 12 - Aug 28
Tuesday, Jun 17 - Sep 2
Sunday, Jun 22 - Sep 21 (1:00 - 2:30 pm ET/10:00 - 11:30 am PT)
Monday, Jun 23 - Sep 15
Tues & Thurs, Jul 8 - Aug 14 (meets twice a week!)

AMC 10 Final Fives
Sunday, May 11 - Jun 8
Tuesday, May 27 - Jun 17
Monday, Jun 30 - Jul 21

AMC 12 Problem Series
Tuesday, May 27 - Aug 12
Thursday, Jun 12 - Aug 28
Sunday, Jun 22 - Sep 21
Wednesday, Aug 6 - Oct 22

AMC 12 Final Fives
Sunday, May 18 - Jun 15

AIME Problem Series A
Thursday, May 22 - Jul 31

AIME Problem Series B
Sunday, Jun 22 - Sep 21

F=ma Problem Series
Wednesday, Jun 11 - Aug 27

WOOT Programs
Visit the pages linked for full schedule details for each of these programs!


MathWOOT Level 1
MathWOOT Level 2
ChemWOOT
CodeWOOT
PhysicsWOOT

Programming

Introduction to Programming with Python
Thursday, May 22 - Aug 7
Sunday, Jun 15 - Sep 14 (1:00 - 2:30 pm ET/10:00 - 11:30 am PT)
Tuesday, Jun 17 - Sep 2
Monday, Jun 30 - Sep 22

Intermediate Programming with Python
Sunday, Jun 1 - Aug 24
Monday, Jun 30 - Sep 22

USACO Bronze Problem Series
Tuesday, May 13 - Jul 29
Sunday, Jun 22 - Sep 1

Physics

Introduction to Physics
Wednesday, May 21 - Aug 6
Sunday, Jun 15 - Sep 14
Monday, Jun 23 - Sep 15

Physics 1: Mechanics
Thursday, May 22 - Oct 30
Monday, Jun 23 - Dec 15

Relativity
Mon, Tue, Wed & Thurs, Jun 23 - Jun 26 (meets every day of the week!)
0 replies
jlacosta
May 1, 2025
Serbian selection contest for the IMO 2025 - P6
OgnjenTesic   12
an hour ago by atdaotlohbh
Source: Serbian selection contest for the IMO 2025
For an $n \times n$ table filled with natural numbers, we say it is a divisor table if:
- the numbers in the $i$-th row are exactly all the divisors of some natural number $r_i$,
- the numbers in the $j$-th column are exactly all the divisors of some natural number $c_j$,
- $r_i \ne r_j$ for every $i \ne j$.

A prime number $p$ is given. Determine the smallest natural number $n$, divisible by $p$, such that there exists an $n \times n$ divisor table, or prove that such $n$ does not exist.

Proposed by Pavle Martinović
12 replies
OgnjenTesic
May 22, 2025
atdaotlohbh
an hour ago
Geometry with fix circle
falantrng   33
an hour ago by zuat.e
Source: RMM 2018 Problem 6
Fix a circle $\Gamma$, a line $\ell$ tangent to $\Gamma$, and another circle $\Omega$ disjoint from $\ell$ such that $\Gamma$ and $\Omega$ lie on opposite sides of $\ell$. The tangents to $\Gamma$ from a variable point $X$ on $\Omega$ meet $\ell$ at $Y$ and $Z$. Prove that, as $X$ varies over $\Omega$, the circumcircle of $XYZ$ is tangent to two fixed circles.
33 replies
falantrng
Feb 25, 2018
zuat.e
an hour ago
Brilliant Problem
M11100111001Y1R   2
an hour ago by Davdav1232
Source: Iran TST 2025 Test 3 Problem 3
Find all sequences \( (a_n) \) of natural numbers such that for every pair of natural numbers \( r \) and \( s \), the following inequality holds:
\[
\frac{1}{2} < \frac{\gcd(a_r, a_s)}{\gcd(r, s)} < 2
\]
2 replies
M11100111001Y1R
Yesterday at 7:28 AM
Davdav1232
an hour ago
USAMO 2001 Problem 2
MithsApprentice   54
an hour ago by lpieleanu
Let $ABC$ be a triangle and let $\omega$ be its incircle. Denote by $D_1$ and $E_1$ the points where $\omega$ is tangent to sides $BC$ and $AC$, respectively. Denote by $D_2$ and $E_2$ the points on sides $BC$ and $AC$, respectively, such that $CD_2=BD_1$ and $CE_2=AE_1$, and denote by $P$ the point of intersection of segments $AD_2$ and $BE_2$. Circle $\omega$ intersects segment $AD_2$ at two points, the closer of which to the vertex $A$ is denoted by $Q$. Prove that $AQ=D_2P$.
54 replies
MithsApprentice
Sep 30, 2005
lpieleanu
an hour ago
Irreducible polynomials in extensions of the wrong degree
math_explorer   0
Apr 22, 2016
Theorem. Let $\mathbb{F}$ be a field, let $P(x)$ be a degree-$p$ irreducible polynomial in $\mathbb{F}[x]$, and let $\mathbb{K}$ be a degree-$q$ extension of $\mathbb{F}$. If $p$ and $q$ are coprime, then $P(x)$ is still irreducible over $\mathbb{K}[x]$.

(The condition is tight: if $\gcd(p, q) = g > 1$, take $\mathbb{F} = \mathbb{Q}$, $P(x) = x^p - 2$, and $\mathbb{K} = \mathbb{Q}(\sqrt[q]{2})$; then $\sqrt[g]{2} = (\sqrt[q]{2})^{q/g}$ lies in $\mathbb{K}$, so $P(x)$ picks up the proper factor $x^{p/g} - \sqrt[g]{2}$ over $\mathbb{K}$.)

Proof. Suppose not, and let $D(x)$ be a (proper, nontrivial) irreducible polynomial factor of $P(x)$ over $\mathbb{K}[x]$. Let $d$ be the degree of $D(x)$. Adjoin a root $\alpha$ of $D(x)$ to $\mathbb{K}$, producing the field $\mathbb{K}(\alpha)$, which has degree $d$ over $\mathbb{K}$ and thus degree $dq$ over $\mathbb{F}$.

Then note that, since $D(x)$ is a factor of $P(x)$, we have that $\alpha$ is also a root of $P(x)$, which means $\mathbb{F}(\alpha)$ has degree $p$ over $\mathbb{F}$. Furthermore, $\mathbb{F}(\alpha) \subseteq \mathbb{K}(\alpha)$. But since the former has degree $p$ and the latter has degree $dq$ over $\mathbb{F}$, \[ p \mid dq \Longrightarrow p \mid d \Longrightarrow d = p \text{ (since } 1 \leq d \leq p\text{)}, \]contradicting the claim that $D(x)$ is a proper nontrivial factor.
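For a concrete instance of the theorem: $x^3 - 2$ is irreducible over $\mathbb{Q}$, and $\mathbb{Q}(\sqrt{2})$ is a degree-$2$ extension; since $\gcd(3, 2) = 1$, the polynomial $x^3 - 2$ stays irreducible over $\mathbb{Q}(\sqrt{2})$. (Sanity check: a nontrivial factorization of a cubic would force a root into $\mathbb{Q}(\sqrt{2})$, but a root of $x^3 - 2$ generates a degree-$3$ extension of $\mathbb{Q}$, which can't sit inside a degree-$2$ one.)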
0 replies
math_explorer
Apr 22, 2016
Axioms of the wedge product and characteristics of fields
math_explorer   3
Apr 21, 2016 by briantix
Let $V$ be a vector space over some field $\mathbb{K}$. Then we can define a new vector space $\Lambda(V)$ whose elements look like $v \wedge w$ for $v, w \in V$. The thing $v \wedge w$ is called the wedge product or exterior product of $v$ and $w$. The vector space $\Lambda(V)$ is called the exterior algebra or Grassmannian algebra, but that's not important. Wikipedia doesn't call it the wedge algebra. Sad.

In fishy terms, $\wedge$ is a product — in more formal terms, it's something you get by quotienting out stuff from a tensor algebra — so you have these properties for all $v, w \in V$:
\begin{align*}
v \wedge w + v \wedge w' &= v \wedge (w + w') \\
v \wedge w + v' \wedge w &= (v + v') \wedge w \\
c(v \wedge w) &= (cv) \wedge w = v \wedge (cw)
\end{align*}where in the last equation, $c$ is an element of the field $\mathbb{K}$. These relations are something you need to travel two Wikipedia articles away from "wedge product" to figure out, but they're not that hard; they're the properties you expect multiplication to have, even if it's hard to visualize what you really get when multiplying two vectors so freely. If you replace $\wedge$ with $\otimes$, these properties completely define tensor products; but for the wedge product, we have these two properties in addition:

[list=1]
[*]Alternatingness(sp?): $v \wedge v = 0$ for all $v \in V$
[*]Anticommutativity: $v \wedge w = -(w \wedge v)$ for all $v, w \in V$
[/list]

Alternatingness implies anticommutativity: you can get this by expanding $0 = (v + w) \wedge (v + w)$. Anticommutativity almost implies alternatingness: by setting $v = w$, you get \[ v \wedge v = -(v \wedge v) \Longrightarrow 2(v \wedge v) = 0, \]which implies $v \wedge v = 0$... unless $\mathbb{K}$ has characteristic 2.
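Spelling out that first expansion: \[ 0 = (v + w) \wedge (v + w) = v \wedge v + v \wedge w + w \wedge v + w \wedge w = v \wedge w + w \wedge v. \] And for a concrete feel, a standard example: in $V = \mathbb{K}^2$ with basis $e_1, e_2$, \[ (ae_1 + be_2) \wedge (ce_1 + de_2) = (ad - bc)\, e_1 \wedge e_2, \] so wedging two vectors in the plane computes a determinant.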

Today's lessons:
[list=1]
[*]Alternatingness is a better axiom, and
[*]Fields of characteristic 2 are weird.
[/list]
(But $\mathbb{F}_2$ is an amazing field, so it's worth it. Maybe that's material for a future post.)
3 replies
math_explorer
Apr 20, 2016
briantix
Apr 21, 2016
Monoid/group algebras and representations: a fruitless intro
math_explorer   0
Apr 18, 2016
Several revelatory experiences later, I am considering making one short and probably silly math post here every day. I don't know if this is eventually too much commitment, but since today is a holiday, it's a great time to start and do whatever. I just won't be as strict about it as I was when I tried this on the Other Blog.

This post does not do anything interesting except give me an excuse to start a blog post title with the word "monoid". Monoids are underrated.

Definition. Let $\mathbb{K}$ be a field. Then a $\mathbb{K}$-algebra $A$ is a possibly noncommutative (!) ring with a copy of $\mathbb{K}$ inside it (more formally, a homomorphism from $\mathbb{K}$ into $A$, which has to be injective because fields are nice, except maybe for silly algebras where $0 = 1$).

Examples of $\mathbb{K}$-algebras:
[list]
[*]$\mathbb{K}$ itself
[*]a polynomial ring $\mathbb{K}[x_1, \ldots, x_n]$
[*]the set $\operatorname{End}(V)$ of linear operators $V \to V$ ("endomorphisms of $V$") for some $\mathbb{K}$-vector space $V$ (where the copy of $\mathbb{K}$ is the scalings of the identity)
[*]these things below:
[/list]

Definition. Let $G$ be a monoid (a set with an associative binary operation and identity). Then the monoid algebra $\mathbb{K}[G]$ is the $\mathbb{K}$-algebra obtained by considering finite formal sums of the form \[ \sum_{m\in G} k_mm, \]where $k_m \in \mathbb{K}$, the different $m$ are considered linearly independent (over $\mathbb{K}$) elements of $G$, the product of $m_1$ and $m_2$ is the monoid operation on them, and multiplying two of these sums in general is expanding everything and moving scalar coefficients out. (The copy of $\mathbb{K}$ is given by formally multiplying by the identity of the monoid: $k \mapsto k1_G$.)

In practice we will choose $G$ to be a group (a monoid with inverses), which is why I grudgingly accepted the letter $G$ from Wikipedia, but I wanted to define this for monoids because it lets you see how other $\mathbb{K}$-algebras also kinda fit in the same framework. In particular, the monoid algebra $\mathbb{K}[\mathbb{N}_0]$ is the polynomial ring $\mathbb{K}[x]$ (element $n \in \mathbb{N}_0$ corresponds to $x^n$), and in general, the monoid algebra $\mathbb{K}[\mathbb{N}_0^n]$ is the polynomial ring $\mathbb{K}[x_1, \ldots, x_n]$.
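A tiny worked multiplication, for concreteness: in $\mathbb{K}[\mathbb{Z}/2]$, write $\mathbb{Z}/2 = \{1, g\}$ with $g^2 = 1$; a general element is $a + bg$ with $a, b \in \mathbb{K}$, and \[ (a + bg)(c + dg) = (ac + bd) + (ad + bc)g. \]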

We're defining group algebras at all because v_Enhance says this is the right way to do representation theory, so let's go there.

Definition. A representation of a $\mathbb{K}$-algebra $A$ is a way for $A$ to act on some $\mathbb{K}$-vector space $V$ "nicely": $A$-on-$V$ action distributes both ways, $A$-on-$A$ multiplication and $A$-on-$V$ action associate, and copy-of-$\mathbb{K}$-on-$V$ action is scalar multiplication in $V$.

Equivalently, and more simply but perhaps also more abstractly, a representation is a homomorphism from $A$ to $\operatorname{End}(V)$.

The driving motivation behind all this is that we want to take abstract groups, which we don't know very much about in general, and turn them into linear transformations, which we do know quite a bit about and have lots of tools to deal with: dimension, trace, and so on.

As traditionally defined, note that a representation of a group (or heck, even a monoid) $G$ is still also a homomorphism into $\operatorname{End}(V)$, but this time only considering the multiplicative (monoid) structure of $\operatorname{End}(V)$, with multiplication being composition, while ignoring the additive and scalar-multiplicative structure. We can naturally and uniquely extend any such representation to a representation of $\mathbb{K}[G]$, just by multiplying and summing.
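Explicitly, writing out the "multiplying and summing": if $\rho : G \to \operatorname{End}(V)$ is the original representation, the extension sends \[ \sum_{m \in G} k_m m \mapsto \sum_{m \in G} k_m \rho(m). \]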

We haven't actually done anything interesting yet in this post, but I'm tired and don't want to set the bar too high for further posts, so let's stop.
0 replies
math_explorer
Apr 18, 2016
how (actually) to prove Hilbert's Nullstellensatz
math_explorer   0
May 19, 2015
I procrastinated posting this :( It's probably because the ending is so trippy.

First, what is Hilbert's Nullstellensatz?

Let $\mathbb{K}$ be an algebraically closed field (a thing where you can add, subtract, multiply, divide by everything except zero, and find roots of all nonconstant polynomials), let $\mathbb{K}[X_1, \ldots, X_n]$ be the polynomial ring you get from adding $n$ indeterminates to $\mathbb{K}$, and let $J$ be an ideal of this polynomial ring. We dumped some of the terminology in the last post already so I won't repeat that.

Then $I(V(J)) = \sqrt{J}$, where:
[list=1]
[*]$V(J)$ means the set of zeroes of $J$, i.e. the points of $\mathbb{K}^n$ that give 0 when you plug them into any element of $J$
[*]$I(\cdots)$ means the ideal of polynomials that are zero on all the points of the $\cdots$
[*]$\sqrt{J}$ is the radical of $J$, which means all elements $P$ such that $P^n \in J$ for some positive integer $n$.
[/list]
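A quick example of all three gadgets, in $\mathbb{K}[x]$: take $J = (x^2)$. Then $V(J) = \{0\}$, $I(V(J)) = (x)$, and indeed $\sqrt{J} = (x)$, since $x^2 \in J$ but $x \notin J$.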

It is "easy" to see that $\sqrt{J} \subseteq I(V(J))$: If $P \in \sqrt{J}$, we want to prove $P$ is zero wherever all of $J$ is zero. But some $P^n$ is in $J$, and wherever $P^n$ is zero, $P$ is also zero, since fields are nice enough that you can't multiply nonzero things to get zero.

The other direction is more annoying. But having proven Zariski's lemma, we're already more than half done. Our last intermediate step is Weak Nullstellensatz: If $\mathbb{K}$ is an algebraically closed field and $I$ is a proper ideal in the polynomial ring $\mathbb{K}[x_1, \ldots, x_n]$, then $V(I)$, the set of points at which all elements of $I$ are zero, is not empty.

Proof: Since $\mathbb{K}$ is a field, it's Noetherian, so $\mathbb{K}[x_1, \ldots, x_n]$ is Noetherian, so we can pick a maximal ideal that includes $I$. Any common zero of that bigger ideal is also a common zero of $I$, so we may replace $I$ by it and assume $I$ is maximal.

Then (via ideal preliminary) $\mathbb{K}[x_1, \ldots, x_n]/I$ is a field. Furthermore we can view $\mathbb{K}$ as a subfield of this. Now we apply Zariski's lemma to see that $\mathbb{K}[x_1, \ldots, x_n]/I$ is module-finite over $\mathbb{K}$.

In fact, we claim that this means $\mathbb{K}[x_1, \ldots, x_n]/I$ is isomorphic to $\mathbb{K}$. This part relies on the algebraic closedness of $\mathbb{K}$.

Suppose that we have some $e \in \mathbb{K}[x_1, \ldots, x_n]/I$ (treated as a field extension of $\mathbb{K}$); we want to prove it is actually in $\mathbb{K}$. Then the module-finiteness also implies $e$ is integral over $\mathbb{K}$ (by Lemma 2 of the Zariski post; darn, I'm forgetful, although in my defense, that lemma was optional for my cobbled-together proof). But if $e$ is the root of a polynomial $P$ with coefficients in $\mathbb{K}$, then we can factor $P$ into linear factors over $\mathbb{K}$, because $\mathbb{K}$ is algebraically closed. So we have that $P(e) = (e-k_1)\cdots(e-k_d)$ is zero in a field, which means one of those linear factors $(e - k_i)$ is zero and $e = k_i \in \mathbb{K}$, as desired.

Okay. So $\mathbb{K}[x_1, \ldots, x_n]/I \cong \mathbb{K}$, so in the field $\mathbb{K}[x_1, \ldots, x_n]/I$, each $x_i$ is in the same class as some residue $k_i \in \mathbb{K}$. So $(x_i - k_i) \in I$.

But the ideal $I' := (x_1 - k_1, \ldots, x_n - k_n)$ is maximal, because by "plugging in" $x_i = k_i$ in any element of $\mathbb{K}[x_1, \ldots, x_n]$ you can reduce it to something in $\mathbb{K}$, which differs from it by an element of $I'$. Therefore, adding any element outside $I'$ would blow $I'$ up to the entirety of $\mathbb{K}[x_1, \ldots, x_n]$. So $I'$ is maximal, yet $I' \subseteq I$ and $I$ is proper, so $I = I'$.

This means that $(k_1, \ldots, k_n) \in V(I)$ and we are done.

— end proof —

Now we can tackle the hard direction of Hilbert's Nullstellensatz. Quick recap: it states that if $\mathbb{K}$ is an algebraically closed field and $J$ is an ideal of the polynomial ring $\mathbb{K}[X_1, \ldots, X_n]$, then $I(V(J)) = \sqrt{J}$. We now want to prove that $I(V(J)) \subseteq \sqrt{J}$, i.e. that if $G \in I(V(J))$ then $G \in \sqrt{J}$.

To do this, we add another indeterminate to the polynomial ring to get $\mathbb{K}[X_1, \ldots, X_n, Y]$. Write $J = (F_1, \ldots, F_r)$ (finitely many generators suffice by the Hilbert basis theorem), and consider the ideal $J^*$ generated by $F_1, \ldots, F_r, YG - 1$. Yes, this is trippy. The star doesn't mean anything; I just haven't found an excuse to use a star superscript in a while.

But this is useful because at any point where $F_1, \ldots, F_r$ are all zero, $G$ is also zero, which means $YG - 1$ is $-1$, so $J^*$ never completely evaluates to zero at any point. Therefore $V(J^*)$ is empty, and by the contrapositive of the weak Nullstellensatz, $J^* = \mathbb{K}[X_1, \ldots, X_n, Y]$. Therefore $J^*$ contains 1.

In other words we have an equation \[ A_1F_1 + A_2F_2 + \cdots + A_rF_r + B(YG - 1) = 1, \] where each $A_i$ is an element of $\mathbb{K}[X_1, \ldots, X_n, Y]$.

Divide everything by a high power of $Y$ until there's no more of it in any numerator (note $(YG - 1)/Y = G - 1/Y$), so you get something like:
\[ A'_1F_1 + A'_2F_2 + \cdots + A'_rF_r + B'(G - 1/Y) = 1/Y^\eta, \]
where each $A'_i$ is an element of $\mathbb{K}[X_1, \ldots, X_n, 1/Y]$, where $1/Y$ is a magical indeterminate.

Then set $1/Y = G$ to get the result:
\[ A''_1F_1 + A''_2F_2 + \cdots + A''_rF_r = G^\eta. \]
And magically, we're done!

Okay, I don't know if this plugging in of variables feels as fishy to you as it does to me, but we can do it unfancily like this:
[list]
[*]treat $\mathbb{K}[X_1, \ldots, X_n, Y]$ as a subring of $\mathbb{K}(X_1, \ldots, X_n, Y)$
[*]take the homomorphism of that to $\mathbb{K}(X_1, \ldots, X_n)$ where $Y = 1/G$ or $YG = 1$, and supposing $A_i \mapsto A'_i$
[*]clearing denominators in the equation to make it involve $A''_i \in \mathbb{K}[X_1, \ldots, X_n]$ and feature $G^\eta$ on the right side
[*]saying "And magically, we're done!"
[/list]

Really.
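If you want to see the trick run on a toy case: take $J = (x^2)$ in $\mathbb{K}[x]$ and $G = x \in I(V(J))$. Then $J^* = (x^2, Yx - 1)$, and \[ Y^2 \cdot x^2 - (Yx + 1)(Yx - 1) = 1. \] Dividing by $Y^2$ gives $x^2 - (x + 1/Y)(x - 1/Y) = 1/Y^2$, and setting $1/Y = x$ turns this into $1 \cdot x^2 = x^2$, exhibiting $G^2 \in J$ and hence $G \in \sqrt{J}$.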

I should be several chapters ahead of where I am right now.
0 replies
math_explorer
May 19, 2015
Zariski's lemma (can't think of a witty title ):
math_explorer   1
Apr 25, 2015 by briantix
Okay, so as dedicated readers (which I still suspect don't really exist) may recall, I was working on the proof of Hilbert's Nullstellensatz. I had refactored out the part where I forgot how to prove Hilbert's basis theorem and walked through it on my own, because the whole post was too long.

Well, I did that and worked on the shortened proof and patched holes in the draft I forgot about and wrote a bunch of short posts about tangents I ran into and, in spite of all of that, it turns out it was still too long. What a world.

I can't figure out any neat pieces to move out any more, so let's go through Wikipedia's steps, just in the reverse order (which is actually the logical order of deduction, so I don't know what's with Wikipedia). We start with Zariski's lemma, which states that if you have two fields $K \subseteq L$ and $L$ is ring-finite over $K$, then $L$ is module-finite over $K$. Hopefully you remember the definitions from the last post and their differences, and that it is easy to prove that module-finiteness is stronger than ring-finiteness.

But let's explore the similarities and differences with some examples. You can take the big field $L = \mathbb{Q}[\omega]$ where $\omega$ is a 2015th root of unity, and the small field $K = \mathbb{Q}$. The big field is ring-generated* from the small ring plus the single element $\omega$. It is module-generated* from the small ring plus the elements $1, \omega, \omega^2, \ldots, \omega^{2014}$.

* this is kind-of-not-really a made up term

Another example, consider $\mathbb{R}(x)$, which is the big field of rational functions of $x$, over the small field of reals, $\mathbb{R}$.

It's certainly not module-finite, since you'd at least need the infinitely many elements $x, x^2, x^3, x^4, \ldots$, which are linearly independent — you can't pick some, slap on nonzero coefficients from $\mathbb{R}$, and get the zero polynomial. It's not ring-finite either, however, which is trickier to show. The most obvious "bottleneck elements" that can't be ring-finitely-generated are \[\frac{1}{x}, \frac{1}{x+1}, \frac{1}{x+2}, \cdots\] but to prove that these elements can't be ring-finitely generated isn't so obvious, and this exact idea doesn't generalize to finite fields, since you'd only have finitely many elements of the form $\frac{1}{x+k}$. Fortunately, we'll need the proof of this for general fields, so we can detour to prove it without feeling like we wasted any effort.

Lemma 1. Let $F$ be a field. Then the field of rational functions in one free variable, $F(x)$, is not ring-finite over $F$.

Proof. What we have to prove is that for any finite list of elements $v_1, \ldots, v_n \in F(x)$, the generated ring $F[v_1, \ldots, v_n]$ can't be the entirety of $F(x)$.

To do so, write each $v_i$ as a fraction of polynomials, take all the denominators, multiply them together, and multiply the result by an extra term of $x^{1337}$ just to be safe to get a big polynomial $P$. We claim:

[list=1]
[*]For each element $w \in F[v_1, \ldots, v_n]$, the product $wP^m$ is a polynomial (i.e. element of $F[x]$) for sufficiently large $m$.
[*]There exists an element $z \in F(x)$ such that $zP^m$ is not a polynomial (i.e. element of $F[x]$) for any nonnegative integer $m$.
[/list]

1 is true because for any $w$, you expand it into a polynomial of $v_1, \ldots, v_n$ and pick $m$ to be the largest degree of the terms.

2 is true by taking $z = 1/(P-1)$. For any positive integer $m$, we recognize from elementary factoring tricks that \[ \frac{P^m}{P-1} = P^{m-1} + \cdots + 1 + \frac{1}{P-1}, \] which is not a polynomial because it is equal to a polynomial plus the obviously not-a-polynomial $\frac{1}{P-1}$. (This is where the extra $x^{1337}$ helps — it guarantees that the denominator is not just a constant.)

Since both are true, $z$ is in $F(x)$ but not in $F[v_1, \ldots, v_n]$. And we're done.

— end proof —

Note: Now, using this and the weird results we've built up over the last few posts, we can already prove Zariski's lemma straightforwardly. But since I already wrote all of this down, I'm going to include the requisites for the original proof in Fulton too, and then give both proofs.

Although this seems the natural start and endpoint for writing the lemma, it turns out what we eventually want to use is actually Point 2 above:

Corollary 1. Let $F$ be a field and let $P$ be a nonzero polynomial in $F[x]$. Then there exists an element $z \in F(x)$ such that $zP^m$ is not a polynomial (i.e. element of $F[x]$) for any nonnegative integer $m$. (For nonzero constant $P$ this is even easier than Point 2: take $z = 1/x$.)

As it turns out we also need to understand some miscellaneous facts about integral elements over rings.

Integral elements are abstractified algebraic integers, as seen in 2013 Winter OMO #48. Also, TIL "omo" is a substring of "automorphism". They are defined thus:

Let $R$ be a subring of a commutative ring $S$. Then $v \in S$ is integral over $R$ if it's the root of a monic polynomial with coefficients in $R$.

We assume $S$ is nice below — an integral domain, so we can divide by everything nonzero. Wow, "integral" is really a way overused word. Anyway, we have an interesting alternate definition of integral elements that will turn out useful later.

Lemma 2. Let $v \in S$. These three conditions are equivalent:

[list=1]
[*]$v$ is integral over $R$.
[*]$R[v]$ is module-finite over $R$.
[*]There exists a subring of $S$ containing $R[v]$ that is module-finite over $R$.
[/list]

(It looks silly that we need both conditions 2 and 3 in this lemma, but remember, if $R[v]$ is replaced with a general ring between $R$ and $S$, 3 does not imply 2 unless $R$ is Noetherian.)

Proof. $(1 \Longrightarrow 2)$: Easy. If the monic polynomial that $v$ is a root of has degree $d$, it means that we can express $v^d$ as a linear combination of $1, v, \ldots, v^{d-1}$, so it provides an inductive way to express every $v^e$ for $e \geq d$ as a linear combination of $1, v, \ldots, v^{d-1}$ by factoring out $v^d$, plugging in, and inducting. Therefore $R[v]$ is generated as an $R$-module by $1, v, \ldots, v^{d-1}$.
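(For instance, $v = \sqrt{2}$ is integral over $\mathbb{Z}$ via the monic polynomial $x^2 - 2$, and indeed $\mathbb{Z}[\sqrt{2}] = \mathbb{Z} \cdot 1 + \mathbb{Z} \cdot \sqrt{2}$: every power $v^e$ with $e \geq 2$ collapses onto $1$ and $v$ using $v^2 = 2$.)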

$(2 \Longrightarrow 3)$: Left as an exercise for the reader. Really, it's obvious enough that you should have figured it out in the time it took you to read this unnecessarily long sentence. :)

$(3 \Longrightarrow 1)$: This part of the proof is very weird. The difficulty here is that module-finiteness makes it easy to make up a nontrivial linear combination of $1, v, v^2, \ldots$ (a.k.a. a polynomial in $v$) that sums to zero, but you desperately need the polynomial to be monic.

So, how? Take a finite bunch of things $w_1, \ldots, w_n$ that module-generate the ring $R'$ that contains $R[v]$.

Then, since it's a ring containing $v$, if you multiply each $w_i$ by $v$, the result stays in $R'$, and thus $vw_i$ can be expressed as another linear combination with coefficients-in-$R$ of all the $w_j$. So you can write $n$ equations about the $w_i$, each of which has coefficients-in-$R$ for all but one $w_i$; that $w_i$ has, as a coefficient, $v$ plus something from $R$.

The fact that these equations have the nontrivial solution of the $w_i$ themselves means the matrix of the coefficients of these equations has zero determinant. (It's OK to detour to the field of fractions of $R'$ to prove this clearly.) But if you expand the determinant, you see it's a (degree-$n$) monic polynomial of $v$, since $v$ appears only on the main diagonal and nowhere else, so this monic polynomial is what we want.
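In matrix form, those last two paragraphs say: with $A = (a_{ij})$ defined by $vw_i = \sum_j a_{ij}w_j$, we have \[ (vI - A)\begin{pmatrix} w_1 \\ \vdots \\ w_n \end{pmatrix} = 0, \] and since the $w_i$ are not all zero (they generate a ring containing $1$), $\det(vI - A) = 0$; expanded out, $\det(vI - A)$ is exactly the monic degree-$n$ polynomial in $v$ that we wanted.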

— end proof —

A wild determinant appearing looks very odd indeed, but apparently the proof sketched on Wikipedia also involves determinants and is even weirder, so I guess I'll have to be satisfied with it. Onward, then:

Corollary 2. (still assuming that $R$ is a subring of a commutative ring $S$) The set of elements of $S$ integral over $R$ is a subring of $S$ containing $R$.

Proof. Clearly it contains $R$ because for every $r \in R$, it's a root of the silly polynomial $x - r$. So we just need to prove that it's closed under addition and multiplication. But this follows quickly from lemma 2: if $x, y \in S$ are integral over $R$ then even $R[x][y]$ is module-finite over $R$, because module-finiteness is transitive, as we proved last post. And it's easy to see that each of $R[x+y], R[x-y], R[xy]$ is contained in $R[x][y]$. So we activate Lemma 2 using condition 3 to see that $x + y, x - y, xy$ are all integral over $R$. Thus the set of elements of $S$ integral over $R$ is closed under ring operations, and since it inherits those operations from $S$, it's a ring.

(For the nice special case of $R = \mathbb{Z}$, you can prove the closedness under ring-operations straightforwardly without Lemma 2; if you have monic polynomials $P(x), Q(y)$ with degrees $n, m$, you just keep using them to express powers of $x+y$ or $x-y$ or $xy$ as linear combinations of $x^iy^j, 0 \leq i < n, 0 \leq j < m$, and then look at the ideals of the coefficients of those linear combinations. But you need the no-infinite-chain-of-ascending-ideals property — Noetherian-ness, as you probably recall from the last post — of $\mathbb{Z}$, so unfortunately this line of argument doesn't generalize to arbitrary rings, or at least, I don't know how to generalize it. Of course, there are a lot of things I don't know here.)
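(Concretely, for $R = \mathbb{Z}$ and $S = \mathbb{R}$: Corollary 2 promises that $\sqrt{2} + \sqrt{3}$ is integral over $\mathbb{Z}$, and indeed it is a root of the monic polynomial $x^4 - 10x^2 + 1$.)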

Corollary 2.2. Let $F$ be a field. The set of elements of $F(x)$ integral over $F[x]$ is just $F[x]$.

Proof. Suppose otherwise, that there's a rational function $P/Q$ in $F(x)$ integral over $F[x]$ and $P$ is not a multiple of $Q$.

Perform the polynomial division $P/Q$ to get $P = KQ + R$, where $K$ is a polynomial and the remainder $R$ satisfies $\deg R < \deg Q$ but is not zero.

Then $P/Q = K + R/Q$ and $K$ is trivially integral over $F[x]$, so $R/Q$ is also integral over $F[x]$ since integral elements over $F[x]$ are a subring.

Then by Lemma 2, $F[x][R/Q]$ is module-finite over $F[x]$; suppose it's module-generated by the elements $w_1, \ldots, w_n$.

But consider the "rate-of-growth degree" of rational functions $\deg V/W := \deg V - \deg W$ (which is apparently not the usual degree of rational functions), which satisfies $\deg AB = \deg A + \deg B$.

We have $\deg R/Q < 0$, so $F[x][R/Q]$ contains the elements $(R/Q)^n$ with arbitrarily low rate-of-growth degree. But the module generators $w_1, \ldots, w_n$, multiplied by polynomials (which have nonnegative rate-of-growth degree) and added together, can never produce an element whose denominator has larger degree than the degree of the product of the denominators of all the $w_i$. So we have a contradiction, as desired.

— end proof —

There. At long last, we return to Zariski, which states, recall, that if you have two fields $L$ and $K$, where $L$ is bigger and contains $K$, and if $L$ is ring-finite over $K$ (that is, $L = K[v_1, \ldots, v_n]$ for some finite bunch of $\{v_i\} \subseteq L$), then $L$ is module-finite over $K$.

Proof. Suppose $L = K[v_1, \ldots, v_n]$. We induct on $n$.

The intuitive way to induct is to use the induction hypothesis on $K[v_1, \ldots, v_{n-1}]$ over $K$. There's a problem, though, because it's not obvious that $K[v_1, \ldots, v_{n-1}]$ will be a field. This can be proved, but only by invoking a lot of prerequisites (which I spent the last few posts randomly proving), which is why I suppose Fulton's book doesn't do it that way. But it turns out that, even using all the stuff I proved, it's easier to induct on the pair of fields $K[v_1, \ldots, v_n]$ over $K[v_n]$. Of course, to do that we still have to prove that $K[v_n]$ is a field. The result is that the two proofs are really similar:

First, we use the induction hypothesis on $K' = K(v_n)$, the field generated by $K$ and $v_n$ (a field by construction), and $L' = K(v_n)[v_1, \ldots, v_{n-1}]$, which is just another way to write the field $L$, since it does no harm to let $v_n$ generate field-style inside something that's already a field.

So, the induction hypothesis gives us a bunch of $w_1, \ldots, w_m$ that module-generate $L$ from $K(v_n)$.

Now, the goal is just to prove that $K[v_n]$ is a field, so that $K[v_n] = K(v_n)$. To prove this, we just need to prove that $v_n$ is algebraic over $K$ and then cite the field preliminary corollary.

The alternative is that $v_n$ is transcendental over $K$, so we just need to prove that this leads to a contradiction:

< begin proof by contradiction >

If $v_n$ is transcendental over $K$, then $K(v_n)$ would be isomorphic to the field of rational functions $K(x)$. Here the proofs diverge.

My proof using the claim from the last post:

By Lemma 1, $K(v_n)$ is not ring-finite over $K$. But we apply the claim:

[quote]Claim. If we have three rings $A \subsetneq B \subsetneq C$, $A$ is Noetherian, $C$ is module-finite over $B$, and $C$ is ring-finite over $A$, then $B$ is ring-finite over $A$.[/quote]

We take
\begin{align*}
A &= K \\ B &= K(v_n) \\ C &= L = K[v_1, \ldots, v_n]
\end{align*}
$A$ is clearly Noetherian because it's a field. $C$ is module-finite over $B$ by the induction hypothesis as we applied it above. $C$ is ring-finite over $A$; that was a given condition for the theorem we're proving. So the claim holds, and $B = K(v_n)$ is ring-finite over $A$, a contradiction! (That was fast.)

The proof using the results above in this post:

Since $L$ is module-finite over $K(v_n)$, the conditions of Lemma 2 hold for each $v_i$, so they're integral over $K(v_n)$, i.e. for each $v_i$ you can write a monic polynomial $P_i$ with coefficients in $K(v_n)$ such that $v_i$ is a root.

Let $c$ be the product of ALL the denominators of coefficients (considered as rational functions = elements of $K(v_n)$). Then you can turn each polynomial $P_i$ into a monic polynomial in $cv_i$ with coefficients in $K[v_n]$, the ring of polynomials of $v_n$. Basically you multiply the polynomial by a huge power of $c$ and absorb some of it into the variable and the rest of it into the coefficient to get a new polynomial. Write it out and see.
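Writing it out, since the post leaves it as an exercise: if $P_i(t) = t^d + a_{d-1}t^{d-1} + \cdots + a_0$ with each $a_j \in K(v_n)$, and $c$ clears every denominator (so that $ca_j \in K[v_n]$ for all $j$), then \[ c^dP_i(t) = (ct)^d + (ca_{d-1})(ct)^{d-1} + (c^2a_{d-2})(ct)^{d-2} + \cdots + c^da_0, \] a monic polynomial in $ct$ whose coefficients $c^ja_{d-j}$ all lie in $K[v_n]$.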

So each $cv_i$ is integral over $K[v_n]$. Because the set of integral elements forms a subring, integralness is closed under ring-generation, so every element of $K[cv_1, \ldots, cv_n]$ is integral over $K[v_n]$, which means that for each element $x$ of $K[v_1, \ldots, v_n]$, sufficiently large $\ell$ means $c^\ell x$ is integral over $K[v_n]$.

But $K[v_1, \ldots, v_n]$ is given to be a field and contains $K(v_n)$. This means that, for each element $x$ of $K(v_n)$, $c^\ell x$ is integral over $K[v_n]$ for sufficiently large $\ell$, which, by Corollary 2.2, is the same as saying that $c^\ell x$ is simply in $K[v_n]$. But since $v_n$ is transcendental over $K$, this contradicts Corollary 1!

< end proof by contradiction >

Either way, we've proven that $v_n$ is in fact algebraic over $K$. This is the same thing as being integral over $K$, since $K$ is a field and you can make any polynomial monic by multiplying by a suitable constant.

To reiterate how the proof ends: invoking Lemma 2 again, we have that $K[v_n]$ is module-finite over $K$. And we can invoke the field preliminary corollary to see that $K[v_n]$ is a field, so $K(v_n) = K[v_n]$. So $K[v_1, \ldots, v_n]$ is module-finite over $K(v_n) = K[v_n]$, which is module-finite over $K$, and by the transitivity of module-finiteness from the last post, $K[v_1, \ldots, v_n]$ is module-finite over $K$. As desired. Yay.

So finally we can announce:

— Zariski done! —
1 reply
math_explorer
Apr 24, 2015
briantix
Apr 25, 2015
Towers of commutative rings and finiteness criteria (edited)
math_explorer   0
Apr 20, 2015
It's about time I introduced the two finiteness criteria I was going on about, which feature prominently in Zariski's lemma, and investigated them a bit. I already sort-of defined them in the last post, but I should do it formally.

If we have commutative rings $R$ and $S$ where $S \subseteq R$, then:
[list]
[*]$R$ is ring-finite over $S$ if you can get $R$ from $S$ by adding finitely many elements and letting them generate elements until you get a ring. Notationally, you have to be able to find elements $v_1, \ldots, v_n$ so that $R = S[v_1, \ldots, v_n]$.

Even more explicitly, this means that for every element $r \in R$, you can write it as a sum of elements of the form $sv_1^{\alpha_1}v_2^{\alpha_2}\cdots v_n^{\alpha_n}$ where $s \in S$ and $\alpha_i$ are nonnegative integers.
[*]$R$ is module-finite over $S$ if you can get $R$ from $S$ by adding finitely many elements and letting them generate elements as an $S$-module, which is essentially a vector space over $S$. Notationally, you have to be able to find elements $w_1, \ldots, w_m$ so that $R = Sw_1 + Sw_2 + \cdots + Sw_m$.

Even more explicitly, this means that for every element $r \in R$, you can write it in the form $s_1w_1 + \cdots + s_mw_m$ where $s_i \in S$.
[/list]

The difference is that in the former case you can multiply the finitely many elements by themselves and by each other, whereas in the latter case you can only form linear combinations of the finitely many elements you choose, where "linear combinations" means using coefficients in $S$. So it is clear that module-finiteness implies ring-finiteness, since you can pick the same finite set of elements.
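(To see that the gap is real: $K[x]$ over a field $K$ is ring-finite, generated by the single element $x$, but not module-finite, since $1, x, x^2, \ldots$ are linearly independent over $K$.)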

One of the intuitive things about these properties is that they are transitive. One of these statements is easier to prove than the other.

Let $A \subseteq B \subseteq C$ be commutative rings.
[list]
[*]If $B$ is ring-finite over $A$ and $C$ is ring-finite over $B$, then $C$ is ring-finite over $A$.

To prove this, just take the union of the two finite sets of ring-generators. If $B = A[u_1, \ldots, u_n]$ and $C = B[v_1, \ldots, v_N]$ then $C = A[u_1, \ldots, u_n, v_1, \ldots, v_N]$. No big deal.
[*]If $B$ is module-finite over $A$ and $C$ is module-finite over $B$, then $C$ is module-finite over $A$.

To prove this, take the Cartesian product of the module bases and multiply each pair together.

Explicitly: suppose $B$ is module-generated over $A$ by $w_1, \ldots, w_m$ and $C$ is module-generated over $B$ by $\psi_1, \ldots, \psi_\mu$; then for every $c \in C$ we can write it as $c = b_1\psi_1 + \cdots + b_\mu\psi_\mu$ for $b_i \in B$. We can write each $b_i$ as $b_i = a_{i1}w_1 + \cdots + a_{im}w_m$.

Then \begin{align*} c &= (a_{11}w_1 + \cdots + a_{1m}w_m)\psi_1 + (a_{21}w_1 + \cdots)\psi_2 \\ &\qquad + \cdots + (\cdots + a_{\mu m}w_m)\psi_\mu \\ &= \sum_{i=1}^\mu \sum_{j=1}^m a_{ij}w_j\psi_i \end{align*} so $\{w_j\psi_i \mid j = 1, \ldots, m;\; i = 1, \ldots, \mu\}$ is a finite set that $C$ is generated by as an $A$-module.
[/list]

Anyway, one of the questions I got sidetracked into while trying to motivate each step in Hilbert's Nullstellensatz, which I thought would be similarly intuitive, was this:

[quote]Suppose we have three (let's say commutative) rings $A \subsetneq B \subsetneq C$.
If $C$ is ring-finite over $A$, is $B$ ring-finite over $A$?
If $C$ is module-finite over $A$, is $B$ module-finite over $A$?[/quote]

Note that in both cases it is easy to deduce the related statement that $C$ is (ring|module)-finite over $B$, by using the exact same set of generators.

Surprisingly and unintuitively to me, it turns out both statements are false. The first statement was easier for me to disprove: For any ring $A$ we can take $B = A[x, xy, xy^2, xy^3, \ldots]$ and $C = A[x, y]$. (Why $B$ isn't ring-finite over $A$: any finite set of candidate generators involves only $xy^k$ for $k$ up to some $N$, and no product of those monomials has enough $x$'s to equal $xy^{N+1}$.)

v_Enhance offered the startling counterexample to both claims at the same time:
\begin{align*}
A &= \mathbb{Z}[x_1, x_2, x_3, \ldots] \\
B &= \mathbb{Z}[x_1, x_2, x_3, \ldots, \varepsilon x_1, \varepsilon x_2, \varepsilon x_3, \ldots] \\
C &= \mathbb{Z}[\varepsilon, x_1, x_2, x_3, \ldots]
\end{align*}
where $\varepsilon$ is a nonzero element such that $\varepsilon^2 = 0$.

After searching up some random websites I did figure out finally that the statement with module-finiteness is true if we assume that $A$ is Noetherian. That was essentially Corollaries 2 and 3 of the last post. (So the motivation for constructing the above example is clearer, because $A$ has to be not Noetherian.) The claim follows this way:

Claim. If we have three rings $A \subsetneq B \subsetneq C$, $A$ is Noetherian, and $C$ is module-finite over $A$, then $B$ is module-finite over $A$.

Proof. By Corollary 3 of the last post, $C$ is Noetherian as an $A$-module, so any submodule of it is finitely generated as an $A$-module. $B$ is one such, which is just another way to say that it's module-finite over $A$. — end proof —

I had no idea how to go about the statement with ring-finiteness, though. At this point I asked my professor about something similar, and I got this theorem:

Claim. If we have three rings $A \subsetneq B \subsetneq C$, $A$ is Noetherian, $C$ is module-finite over $B$, and $C$ is ring-finite over $A$, then $B$ is ring-finite over $A$.

Proof: Let the ring-generators of $C$ over $A$ be $v_1, \ldots, v_n$, and let the module-generators of $C$ over $B$ be $w_1, \ldots, w_m$.

The module-finiteness means we can write each $v_i=\sum_{s=1}^m p_{is} w_s$ and $w_iw_j = \sum_{t=1}^m q_{ijt} w_t$, where $p_{is}$, $q_{ijt}$ are in $B$.

Now, set $B'=A[p_{is}, q_{ijt}]$. This is a Noetherian ring by the "general" Hilbert basis theorem of the last post.

Subclaim: By construction of $B'$, \[C=B'w_1+ \ldots +B'w_m.\qquad(*)\]
Why is this true? Every element of $C$ is in $A[v_1, \ldots, v_n]$, so we can write it as the sum of stuff like $a_{\mathrm{junk}}v_1^{e_1}\cdots v_n^{e_n}$ for $a_{\mathrm{junk}} \in A$. We can plug in $v_i = \sum_{s=1}^m p_{is} w_s$ to reduce this into a sum of stuff like $a_{\mathrm{junk}}w_1^{\eta_1}\cdots w_m^{\eta_m}$, then as long as the sum of the $\eta_i$ is more than 1, repeatedly plug in $w_iw_j = \sum_{t=1}^m q_{ijt} w_t$ to reduce each such term into even more terms with smaller total degree, until we finally end with a sum of an insane number of terms of the form $a_{\mathrm{too much junk}}w_i$. (I just have to have excuses to use rare Greek letters.)

So from equation $(*)$ and Corollary 3, $C$ turns out to be a Noetherian $B'$-module. Then $B$ is a $B'$-submodule of $C$ and thus $B$ is a finitely generated $B'$-module. Therefore $B$ is ring-finite over $A$: its module generators over $B'$, together with the ring generators $p_{is}, q_{ijt}$ of $B'$ over $A$, ring-generate $B$.

— end proof —

As you can see, the conditions are pretty strong. Now, the claim is still true without the condition that $C$ be module-finite over $B$; however, it appears the shortest path to proving that is citing Zariski's lemma. I'll get to that in the next post, but since my original goal with proving this claim was to use it in an intuitive proof of Zariski's lemma, I can only use the above claim right now.

I asked my professor and he even told me: [quote]I think a direct proof without going through the Zariski Lemma might be hard and even not easy to be found.[/quote]

"Hard and even not easy" probably sums it up.
0 replies
math_explorer
Apr 20, 2015
Noetherian stuff
math_explorer   0
Apr 17, 2015
Grrr. At some point I'm going to have transcribed all the content of my abstract algebra textbooks and then some. I've got a little confession: I don't know what I'm doing, but if you want you can poke holes in these proofs so I can get closer to understanding. What?

I proved Hilbert's basis theorem a few posts ago while sidestepping the alternate definition that "every ideal contained in [a Noetherian ring] is finitely generated as a module". Then I forgot all about that characterization and was unable to understand when it popped up in discussions of finiteness criteria. If you don't know what finiteness criteria I'm referring to, I just mean when rings are generated as modules or rings by adding finitely many elements over other rings, but it doesn't matter.

It turns out furthermore that stuff we prove is more useful if we generalize the idea of Noetherian-ness to modules.

Let $R$ be a commutative ring (I am not brave enough to handle noncommutative ones) and let $M$ be an $R$-module (which means we have a function $(\cdot) : R \times M \to M$ satisfying various clear multiplication axioms that I will irresponsibly not list here.)

[list=1]
[*]$M$ is Noetherian_1 if any infinite chain of submodules \[ M_1 \subseteq M_2 \subseteq M_3 \subseteq \cdots \] is eventually constant.
[*]$M$ is Noetherian_2 if every submodule of $M$ is finitely generated as a module.
[/list]

When we speak of Noetherian rings, that means they're Noetherian when we treat them as modules over themselves. Another note to self: in a ring $R$, an ideal and an $R$-submodule of $R$ are the same zarking thing, and it means the same thing for them to be finitely generated, which is the thing the Noetherian-ness criterion is all about. I am unbelievably obtuse sometimes. So you can just read the proofs below while mentally replacing "module" or "submodule" by "ideal".

First, let's prove the equivalence.

(standard, ~ DF 12.1 Theorem 1) An $R$-module $M$ is Noetherian_1 iff it is Noetherian_2.

Proof. ($\Longrightarrow$) Let $M'$ be a submodule of $M$. Starting from the zero module, attempt to pick a sequence of elements $a_1, a_2, \ldots$ such that $a_i$ is in $M'$ but not in the module generated by $a_1, \ldots, a_{i-1}$. This cannot go on forever since the modules $M_i$ generated by $a_1, \ldots, a_i$ are an infinite chain of ascending modules; the only way it stops is if at some point $M_i = M'$, at which point we have proven that $a_1, \ldots, a_i$ generate $M'$.

($\Longleftarrow$) Let $M_1 \subseteq M_2 \subseteq M_3 \subseteq \cdots$ be an infinite chain of submodules of $M$. Then the union of all of them is a module (verify the axioms), which is finitely generated. Each of the generators appears in one of the modules $M_i$; if $m$ is the maximum of those $i$, then the chain is constant after $M_m$.

— end proof —

Okay, weirder stuff:

Let $M$ be an $R$-module, and let $N$ be a submodule. Then $M$ is Noetherian iff $N$ and $M/N$ are both Noetherian.

Proof. ($\Longrightarrow$) If $M$ is Noetherian, then any submodule of $N$ is a submodule of $M$ and is thus finitely generated, so $N$ is Noetherian. As for $M/N$, any submodule of it corresponds to a submodule of $M$ containing $N$. That submodule is also finitely generated, and the image of those generators in $M/N$ finitely generate it, so $M/N$ is Noetherian.

Now assume $N$ and $M/N$ are both Noetherian; we want to prove $M$ is Noetherian. Let $M'$ be a submodule of $M$; we want to prove it's finitely generated. Consider $M'' = M' \cap N$. Then $M''$ is a submodule of $N$ and is finitely generated, say by $\{a_i\}$. Also, observe that the obvious mapping from $M'/M''$ into $M/N$ is well-defined, a homomorphism, and injective, so $M'/M''$ is isomorphic to a submodule of $M/N$ and is also finitely generated. Take preimages of those generators in $M'$ and call them $\{b_j\}$; those elements and the generators of $M''$ together generate $M'$ because you can pick elements from $\{b_j\}$ to reach the equivalence class in $M'/M''$ of any element and then reach the actual element in that class using $\{a_i\}$.

— end proof —

Corollary 1. If $M$ is a Noetherian $R$-module, then $M^n$ (the obvious $R$-module of $n$-tuples of $M$) is Noetherian.

Proof. Induct on $n$. It's trivial when $n = 1$. If you let $M_1$ be the submodule of tuples of $M^n$ with only the first element nonzero, then $M_1$ is isomorphic to $M$ and is a Noetherian $R$-module, and $M^n/M_1$ is isomorphic to $M^{n-1}$ and is a Noetherian $R$-module by the induction hypothesis, so $M^n$ is Noetherian.

Corollary 2. If $M$ is a Noetherian $R$-module, then any module $S$ of the form $Ma_1 + Ma_2 + \cdots + Ma_n$ for $a_i \in S$ is also a Noetherian $R$-module. (The condition is just what we mean when we say $S$ is module-finite over $M$.)

Proof. There's a surjective homomorphism $\phi$ from $M^n$ to $S$ sending the tuple with $i$th element 1 and all others 0 to the element $a_i$. So $S$ is isomorphic to $M^n/\ker \phi$ where $\ker \phi$ is the kernel of $\phi$, and $M^n$ is Noetherian by Corollary 1, so $S$ is Noetherian by the theorem.

Corollary 3. If $R$ is a Noetherian ring, then an $R$-module $M$ is Noetherian iff it is finitely generated.

Proof. If $M$ is Noetherian then any submodule of $M$, including itself, is finitely generated. On the other hand, if it's finitely generated by $\{a_i\}$, that means $M = Ra_1 + \cdots + Ra_n$, so it's Noetherian by Corollary 2.
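(A quick illustration with $R = \mathbb{Z}$: the $\mathbb{Z}$-module $\mathbb{Q}$ is not finitely generated, and sure enough it is not Noetherian either; $\mathbb{Z} \subsetneq \frac{1}{2}\mathbb{Z} \subsetneq \frac{1}{4}\mathbb{Z} \subsetneq \cdots$ is a strictly ascending chain of submodules.)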

— end corollaries —

Separately:

If $R$ is a Noetherian ring and $I$ is an ideal, then $R/I$ is a Noetherian ring.

Proof. Any ideal in $R/I$ is $J/I$ for some ideal $J$ of $R$. Then $J$ is finitely generated and the image of the generators in $R/I$ finitely generate $J/I$. Alternatively, any chain of ascending ideals in $R/I$ likewise corresponds to a chain of ascending ideals in $R$, which is eventually constant. — end proof —

(Note that this does not instantly follow from the above theorem, which proves that $R/I$ is Noetherian if you treat it as an $R$-module. It appears you can deduce the consequence you want from this, but it's so easy by itself anyway that I don't think it's worth it.)

And, finally: If $R$ is a Noetherian ring and $S$ is ring-finite over $R$, then $S$ is Noetherian.

Proof. Let $S$ be ring-generated over $R$ by elements $v_1, \ldots, v_n$. There is the obvious homomorphism from the polynomial ring in $n$ indeterminates $R[x_1, \ldots, x_n]$ to $R[v_1, \ldots, v_n] = S$; if $I$ is the kernel of this then $S \cong R[x_1, \ldots, x_n]/I$. By the Hilbert basis theorem $R[x_1, \ldots, x_n]$ is Noetherian and by the above theorem $S$ is Noetherian. — end proof —
0 replies
math_explorer
Apr 17, 2015
Ideal preliminaries
math_explorer   0
Apr 12, 2015
Wait, I finally figured out the misunderstanding that confused me for a long time; I had my criteria for ideals and quotient rings mixed up. Darn. Also, it's so nice to be able to assume rings are commutative...

(again this is easy stuff you should know this already, self)

As you recall, an ideal $I$ of a commutative ring $R$ is a subset of $R$ that is closed under addition (with another element of $I$) and under multiplication by any element of $R$. Given an ideal and a ring, you can get a new ring — a quotient ring (also called a factor ring among other names) — by taking the quotient $R/I$, which contains as elements sets of the form $a + I\;(= \{a + i \mid i \in I\})$ for $a \in R$. It is easy/boring to check that ring operations are well-defined, so I won't do that.

$I$ is defined to be prime if $ab \in I$ implies $a \in I$ or $b \in I$. Some authors additionally require that $I$ be proper, i.e. not equal to $R$ itself.

$I$ is prime iff $R/I$ is an integral domain. This is a straightforward translation of the definition. The additional requirement depends on whether you accept the single-element $\{0\}$, in which $0$ serves as both the additive and the multiplicative identity, to be an integral domain.
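(For example, in $\mathbb{Z}$: the ideal $(6)$ is not prime, since $2 \cdot 3 \in (6)$ with neither factor in it, and correspondingly $\mathbb{Z}/(6)$ has zero divisors, $\bar{2} \cdot \bar{3} = \bar{0}$; the ideal $(5)$ is prime and $\mathbb{Z}/(5)$ is an integral domain.)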

$I$ is defined to be maximal if $I$ is proper and no ideal $J$ exists such that \[ I \subsetneq J \subsetneq R. \]
$I$ is maximal iff $R/I$ is a field. This is more or less what we proved in the last post, once we note that any ideal $J$ of $R$ satisfying the above equation gives rise to an ideal $J/I$ in $R/I$ and vice-versa, which is one of those intuitive homomorphism theorems.
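(For example, $(5)$ is maximal in $\mathbb{Z}$ and $\mathbb{Z}/(5)$ is a field; likewise $(x)$ is maximal in $K[x]$ for a field $K$, with $K[x]/(x) \cong K$.)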
0 replies
math_explorer
Apr 12, 2015
Field preliminaries
math_explorer   0
Apr 10, 2015
I'm just spamming blog posts now.

Let $F$ be a commutative ring. $F$ is a field iff it has only two ideals, itself and $0$.

Proof. If $F$ is a field then any nonzero ideal $I$ contains a nonzero element $w$, which generates $1$, which generates the whole field, so $I = F$. So $0$ and $F$ are indeed the only two ideals.

If $F$ is not a field then there's a nonzero noninvertible element $w \in F$, and the ideal generated by $w$ is neither $0$ nor $F$.

— end proof —

(Theorem 2.16 in Jacobson) Let $F$ be a field and let $u$ be algebraic over $F$ with minimal polynomial $P(x)$. Then $F[u]$ is:
[list]
[*]a field if $P$ is irreducible;
[*]not even an integral domain if $P$ is reducible.
[/list]

Proof. If $P$ is irreducible, we claim $F[u]$ has no ideals other than itself and $0$. So suppose we have an ideal $I$ of $F[u]$. Then the preimage of $I$ under the obvious homomorphism from the polynomial ring $F[x]$ to $F[u]$ would be an ideal $I'$ of $F[x]$. Since $F[x]$ is a principal ideal domain, as seen in the last post, $I'$ is generated by an element $Q$. Also $I'$ contains $P$, so $P$ is a multiple of $Q$. Since $P$ is irreducible we must have $Q = 1$ or $Q = P$ (up to a constant factor), which respectively means $I = F[u]$ or $I = 0$, as desired. Thus $F[u]$ is a field by the statement earlier in this post.

If $P$ is reducible as $Q \cdot R$ then $Q(u)$ and $R(u)$ are nonzero in $F[u]$ but $Q(u)R(u)$ is zero, so $F[u]$ is not an integral domain.
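(To see both cases over $F = \mathbb{Q}$: $u = \sqrt{2}$ has irreducible minimal polynomial $x^2 - 2$, and $\mathbb{Q}[\sqrt{2}]$ is a field; on the other hand, the image $u$ of $x$ in $\mathbb{Q}[x]/(x^2 - x)$ has reducible minimal polynomial $x^2 - x$, and $u(u - 1) = 0$ exhibits zero divisors.)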

Corollary. If $F$ is a field and $F[u]$ is embeddable in an integral domain, then $F[u]$ is a field.

Proof. As above, $F[u]$ is either a field or contains zero divisors, and the latter case is impossible if $F[u]$ is embeddable in an integral domain.
0 replies
math_explorer
Apr 10, 2015
Principal ideal domain preliminary
math_explorer   0
Apr 10, 2015
This is easy you should know this already, self.

Let $F$ be a commutative ring. Then $F$ is a field iff the ring of polynomials $F[x]$ is a principal ideal domain.

Proof. $(\Longrightarrow)$ Suppose $F$ is a field. Given an ideal $I$ in $F[x]$, take a polynomial $P$ with minimal degree in $I$. This generates every element of $I$ because you can use the division algorithm to divide any such element by $P$ and any nonzero remainder would be a polynomial of less degree than $P$ in $I$, contradicting our choice of $P$.

$(\Longleftarrow)$ Suppose $F$ is not a field because some nonzero $w \in F$ has no inverse. Then the ideal generated by $x$ and $w$ in $F[x]$ is not principal: a generator $P$ would have to be a constant in order to divide $w$, and a constant that generates $x$ would have to be invertible, making the ideal all of $F[x]$; but the ideal is proper, since every element of it has constant term in $wF \neq F$.
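(The classic instance, with $F = \mathbb{Z}$ and $w = 2$: the ideal $(2, x) \subseteq \mathbb{Z}[x]$, consisting of the polynomials with even constant term, is not principal.)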
0 replies
math_explorer
Apr 10, 2015
4 lines concurrent
Zavyk09   7
May 2, 2025 by bin_sherlo
Source: Homework
Let $ABC$ be a triangle with circumcircle $(O)$ and orthocenter $H$. $BH, CH$ intersect $(O)$ again at $K, L$ respectively. The lines through $H$ parallel to $AB, AC$ intersect $AC, AB$ at $E, F$ respectively. Point $D$ is such that $HKDL$ is a parallelogram. Prove that lines $KE, LF$ and $AD$ are concurrent at a point on $OH$.
7 replies
Zavyk09
Apr 9, 2025
bin_sherlo
May 2, 2025
Zavyk09
#1
Let $ABC$ be a triangle with circumcircle $(O)$ and orthocenter $H$. $BH, CH$ intersect $(O)$ again at $K, L$ respectively. The lines through $H$ parallel to $AB, AC$ intersect $AC, AB$ at $E, F$ respectively. Point $D$ is such that $HKDL$ is a parallelogram. Prove that lines $KE, LF$ and $AD$ are concurrent at a point on $OH$.
aidenkim119
34 posts
#2
............
aidenkim119
34 posts
#3
The first three are trivial by Pascal, but $AD$ looks a bit hard.
ItzsleepyXD
151 posts
#4
Redefine the problem as follows.
Let $ABC$ be a triangle with circumcircle $(O)$ and orthocenter $H$. Let $A'$ be the antipode of $A$ and let $O'$ be the circumcenter of $(BOC)$. Points $E, F$ satisfy $OE \parallel A'F \parallel AB$ and $OF \parallel A'E \parallel AC$; prove that $OH, O'A', BE, CF$ are concurrent.

Let $B', C'$ be the antipodes of $B, C$ respectively.
MMP: fix $(O), B, C$ and move $A$ on $(O)$ with degree $2$.
Note that $A', E, C'$ are collinear and $A', F, B'$ are collinear.
Since $\angle C'EO = \angle BAC = \angle BC'C = \angle C'BO$, the points $C', B, O, E$ are concyclic,
which implies that $E$ has degree $2$; likewise $F$ has degree $2$.
So lines $BE, CF$ have degree $1$.
$H$ is the reflection across $BC$ of the second intersection of line $A\infty_{\perp BC}$ with $(O)$.
Since $H, A'$ have degree $2$, lines $O'A'$ and $OH$ have degree $2$.
We want to prove that $BE, CF, OH$ are concurrent and that $BE, CF, O'A'$ are concurrent;
each concurrency is a condition of degree $1 + 1 + 2 = 4$, so it suffices to verify it at $4 + 1 = 5$ positions of $A$.

Choose $A = B, C, B', C'$ and the midpoint of arc $BC$;
the rest of the problem is easy. $\square$
pingupignu
50 posts
#5
My solution may not be elegant but here's some DDIT spam for you guys to enjoy :blush:.
Let $X = KE \cap LF$. I will show that $X, A, D$ and $X, O, H$ are collinear.

Part 1:
I first claim that $\angle \infty_{CH}XL = \angle \infty_{BH}XK$. This follows from
$$\angle \infty_{CH}XL = \angle FLH = \angle FHL = 90^\circ - \angle BHL = 90^\circ - \angle CHK = \angle EHK = \angle XKH = \angle \infty_{BH}XK.$$
Then, applying DDIT on $X \cup LDKH$, we see that
$$(XK, XL), \quad (XH, XD), \quad (X\infty_{BH}, X\infty_{CH})$$
are reciprocal pairs under some involution on $\mathcal{P}_X$. This involution must be a reflection in the angle bisector of $\angle KXL$. Hence $XH, XD$ are isogonal in $\angle KXL$.

Next, since $AK = AL$ (well-known), $LF = FH = AE$, and $AF = EH = EK$, we get $\triangle LAF \cong \triangle AKE$.
I claim that $X\infty_{AB}, X\infty_{AC}$ are isogonal in $\angle KXL$. This is because
$$\angle \infty_{AB}XF = 180^\circ - \angle AFL = 180^\circ - \angle AEK = \angle AEX = \angle \infty_{AC}XE.$$
Applying DDIT on $X \cup AEHF$ then gives that $XA, XH$ are isogonal in $\angle EXF$. Since $XH, XD$ and $XH, XA$ are both isogonal pairs in $\angle KXL = \angle EXF$, we conclude that $XA \equiv XD$, i.e. $X \in AD$.

Part 2:
I first prove that $X\infty_{CH}, OL, AK$ concur at a point $S$ on $(XLK)$. For this, let $S = X\infty_{CH} \cap OL$; a short angle chase gives
$$\angle XSL = \angle OLH = B - A = (90^\circ - A) - (90^\circ - B) = \angle AKL - \angle LCB = \angle AKL - \angle LAF = \angle AKL - \angle AKE = \angle EKL = \angle XKL.$$
Hence $X, S, K, L$ are concyclic, and from
$$\angle SKX = \angle SLX = \angle FLO = \angle ALO - \angle ALF = A - \angle EAK = A - (90^\circ - C) = A + C - 90^\circ = 90^\circ - B = \angle LAB = \angle AKE = \angle AKX,$$
it follows that $S \in AK$. The claim is proven.

From DDIT on $X \cup AKOL$ we have the reciprocal pairs
$$(XA, XO), \quad (XK, XL), \quad (XS, XT),$$
where $T = AL \cap KO \cap X\infty_{BH}$ (by the same argument as for $S$).
Since $(XS, XT) = (X\infty_{CH}, X\infty_{BH})$, and we have established that $(XA, XH)$ and $(X\infty_{CH}, X\infty_{BH})$ are isogonal pairs in $\angle KXL$, we get $XH \equiv XO \implies X \in OH$. The problem is solved. $\blacksquare$
tomsuhapbia
6 posts
#6
We need two well-known lemmas about isogonal lines:
1. Given $\triangle ABC$ and a point $P$ satisfying $\angle ABP=\angle ACP$, let $Q$ be the reflection of $P$ in the midpoint of $BC$. Then $AP$ and $AQ$ are isogonal wrt $\angle BAC$.
2. Given a trapezoid $ABCD$ $(AB\parallel CD)$ inscribed in $(O)$, let $E$, $F$ be the intersections of $BC$ and $AD$, and of $AC$ and $BD$, respectively, and let $S$ be an arbitrary point on $(O)$. Then $SE$, $SF$ are isogonal wrt $\angle ASB$.

Back to the problem: let $X, Y$ be the intersections of $OL$ and $AK$, and of $AL$ and $OK$, respectively. By symmetry we have $AK = AH = AL$, so $AO$ is the perpendicular bisector of $KL$ and $XKLY$ is a trapezoid. Let $LF$ intersect $KE$ at $Z$.

We have
$$\angle FZK = \angle ZEH - \angle ZFC = 180^\circ - \angle KEH - \angle HFC = 180^\circ - 3\angle BAC$$
and
$$\angle LYK = \angle AOK - \angle OAL = 2(90^\circ - \angle OAL) - \angle OAL = 180^\circ - 3\angle OAL = 180^\circ - 3\angle BAC = \angle FZK \quad (2)$$
since
$$\angle OAL = \dfrac{1}{2}\angle KAL = \dfrac{1}{2}(\angle KAB + \angle BAH + \angle HAC + \angle CAL) = \angle BAC.$$
By $(2)$, $Z$ lies on the circumcircle of $XKLY$. From Lemma 2, we obtain that $ZA, ZO$ are isogonal wrt $\angle LZK$ $(1)$. We also have $\angle ZFE = \angle LFE = \angle KEH = 180^\circ - \angle ZEH$ and $\angle HEZ = \angle HKE = \angle FLH = 180^\circ - \angle HLZ$, so by Lemma 1, $(ZD, ZH)$ and $(ZH, ZA)$ are two isogonal pairs wrt $\angle LZK \equiv \angle FZE$; hence $A, D, Z$ are collinear. Combined with $(1)$, we conclude that $Z, O, H$ are collinear, i.e. $AD, LF, KE, OH$ are concurrent at $Z$.

https://i.postimg.cc/hJQdTNdW/image.png
This post has been edited 2 times. Last edited by tomsuhapbia, May 1, 2025, 5:25 PM
hectorleo123
347 posts
#7
I apologize for the complex bash, but I couldn't find another way.

Let \( B' \) and \( C' \) be the antipodes of \( B \) and \( C \).
Since \( L \) is the reflection of \( H \) over \( AB \), we have \( \angle FLB = \angle FHB = 90^\circ \) (since \( FH \parallel AC \perp BH \)).
Analogously, \( \angle EKC = 90^\circ \).
\(\Rightarrow B', F, L \) are collinear and \( C', E, K \) are collinear.
By Pascal's Theorem on
\[ \binom{B, L, C'}{C, K, B'} \]we get that \( KE, LF \), and \( OH \) are concurrent.
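(Spelling out the Pascal application, in case a reader wants the details: the three intersections of opposite sides of the hexagon are \( BK \cap CL = H \), \( BB' \cap CC' = O \), and \( LB' \cap KC' = LF \cap KE \), so Pascal's Theorem puts the last point on line \( OH \).)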
Now it suffices to prove that \( AD, KE \), and \( LF \) are concurrent.

We use complex numbers, where \( (ABC) \) is the unit circle and \( a = 1 \).
Let
\[
k = -\frac{c}{b}, \quad l = -\frac{b}{c}, \quad c' = -c, \quad b' = -b, \quad h = b + c + 1, \quad o = 0.
\]
Since the diagonals of the parallelogram $HKDL$ bisect each other,
\[
d + b + c + 1 = d + h = l + k = -\frac{b}{c} - \frac{c}{b}
\Rightarrow d = -\frac{b^2 + c^2 + b^2c + bc^2 + bc}{bc}.
\]
Let \( X = KC' \cap LB' \).
We have:
\[
\frac{x + c}{\overline{x + c}} = \frac{c - \frac{c}{b}}{\overline{c - \frac{c}{b}}} = -\frac{c^2}{b}
\Rightarrow \overline{x} = -\frac{xb + bc + c}{c^2}
\]Analogously,
\[
\overline{x} = -\frac{xc + bc + b}{b^2}
\]Equating both expressions:
\[
(b - c)\left(x(b^2 + bc + c^2) + b^2c + bc^2 + bc\right) = 0
\Rightarrow x = -\frac{bc(b + c + 1)}{b^2 + bc + c^2}
\]
Points \( A, D, X \) are collinear if and only if
\[
\frac{d - 1}{x - 1} \in \mathbb{R}
\]Substituting:
\[
\frac{(b^2 + c^2 + b^2c + bc^2 + 2bc)/bc}{(b^2c + bc^2 + 2bc + b^2 + c^2)/(b^2 + bc + c^2)} = \frac{b^2 + bc + c^2}{bc} \in \mathbb{R}. \quad \blacksquare
\]
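(To justify the final step, which the computation leaves implicit: since \( b, c \) lie on the unit circle, \( \overline{b} = 1/b \) and \( \overline{c} = 1/c \), so
\[
\overline{\left(\frac{b^2 + bc + c^2}{bc}\right)} = \frac{\frac{1}{b^2} + \frac{1}{bc} + \frac{1}{c^2}}{\frac{1}{bc}} = \frac{b^2 + bc + c^2}{bc},
\]
and a complex number equal to its own conjugate is real.)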
bin_sherlo
733 posts
#8
Let $B',C'$ be the antipodes of $B,C$ on $(ABC)$. Also let $LC'\cap KB'=W,KC'\cap LB'=P$.
Claim: $B',F,L$ and $C',E,K$ are collinear.
Proof: Pascal at $KB'LCAB$ yields that $\infty_{AC}$, $B'L\cap AB$, and $H$ are collinear; since the line through $H$ parallel to $AC$ meets $AB$ at $F$, this forces $B'L\cap AB=F$. Similarly $C',E,K$ are collinear.
Claim: $P$ lies on $OH$.
Proof: Pascal at $BKC'CLB'$ gives that $H, P, O$ are collinear.
Claim: $A,D,P$ are collinear.
Proof: Notice that $W, L, H, K$ lie on the circle with diameter $WH$ (since $\angle WLH = \angle WKH = 90^\circ$), and since $AH = AK = AL$, $A$ must be the circumcenter of $(KLHW)$. Hence $A$ is the midpoint of the diameter $WH$, so $W, A, H$ are collinear.
DDIT at $DLHK$ implies that $(\overline{AD},\overline{AH}), (\overline{AK},\overline{AL}), (\overline{AC'},\overline{AB'})$ are reciprocal pairs of an involution. DDIT at $B'KC'L$ gives that $(\overline{AP},\overline{AH}), (\overline{AK},\overline{AL}), (\overline{AC'},\overline{AB'})$ are reciprocal pairs of an involution. An involution is determined by two reciprocal pairs, so these involutions coincide, and both pair $\overline{AH}$ with $\overline{AD}$ and with $\overline{AP}$; hence $AD \equiv AP$ and $A, D, P$ are collinear, as desired. $\blacksquare$