how (actually) to prove Hilbert's Nullstellensatz
by math_explorer, May 19, 2015, 2:56 AM
I procrastinated posting this. It's probably because the ending is so trippy.
First, what is Hilbert's Nullstellensatz?
Let $\mathbb{K}$ be an algebraically closed field (a thing where you can add, subtract, multiply, divide by everything except zero, and find roots of all nonconstant polynomials), let $\mathbb{K}[X_1, \ldots, X_n]$ be the polynomial ring you get from adding $n$ indeterminates to $\mathbb{K}$, and let $J$ be an ideal of this polynomial ring. We dumped some of the terminology in the last post already so I won't repeat that.
Then $I(Z(J)) = \sqrt{J}$, where:
- $Z(J)$ means the set of zeroes of $J$, i.e. the points of $\mathbb{K}^n$ that give 0 when you plug them into any element of $J$
- $I(S)$ means the ideal of polynomials that are zero on all the points of the set $S$
- $\sqrt{J}$ is the radical of $J$, which means all elements that are some $m$th root of an element of $J$ (that is, all $F$ with $F^m \in J$ for some positive integer $m$)
It is "easy" to see that $\sqrt{J} \subseteq I(Z(J))$: If $F \in \sqrt{J}$, we want to prove $F$ is zero wherever all of $J$ is zero. But some power $F^m$ is in $J$, and wherever $F^m$ is zero, $F$ is also zero, since fields are nice enough that you can't multiply nonzero things to get zero.
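Here's a tiny single-variable sanity check of the statement in sympy (a toy example of my own, not from any of the earlier posts): take $J = (x^2)$, whose zero set is just the origin, and watch $x$ land in $I(Z(J))$ and in $\sqrt{J}$ while staying outside $J$.

```python
import sympy as sp

x = sp.symbols('x')

# Toy illustration of I(Z(J)) = sqrt(J) in one variable, where every ideal
# is principal: take J = (x^2).  (sympy computes over Q rather than an
# algebraically closed field, but this example's roots are rational anyway.)
f = x**2                      # generator of J

# Z(J): the zero set of x^2 is the single point 0 (the double root collapses).
zeros = sp.solve(f, x)
print("Z(J) =", zeros)        # [0]

# x is NOT in J = (x^2): dividing x by x^2 leaves a nonzero remainder.
q, r = sp.div(x, f, x)
assert r != 0

# But x^2 = (x)^2 is a multiple of the generator, so x is in the radical...
assert sp.div(x**2, f, x)[1] == 0

# ...and, consistently, x vanishes at every point of Z(J), so x is in I(Z(J)).
assert all(x.subs(x, z) == 0 for z in zeros)
```

So here $I(Z(J)) = \sqrt{J} = (x)$, strictly bigger than $J$ itself.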
The other direction is more annoying. But having proven Zariski's lemma, we're already more than half done. Our last intermediate step is the Weak Nullstellensatz: If $\mathbb{K}$ is an algebraically closed field and $I$ is a proper ideal in the polynomial ring $\mathbb{K}[x_1, \ldots, x_n]$, then $Z(I)$, the set of points at which all elements of $I$ are zero, is not empty.
Proof: Since $\mathbb{K}$ is a field, it's Noetherian, so $\mathbb{K}[x_1, \ldots, x_n]$ is Noetherian, so we can pick a maximal ideal that includes $I$. Any zero of that maximal ideal is also a zero of $I$, so we may as well replace $I$ with it and assume $I$ is itself maximal.
Then (via ideal preliminary) $\mathbb{K}[x_1, \ldots, x_n]/I$ is a field. Furthermore we can view $\mathbb{K}$ as a subfield of this. Now we apply Zariski's lemma to see that $\mathbb{K}[x_1, \ldots, x_n]/I$ is module-finite over $\mathbb{K}$.
In fact, we claim that this means $\mathbb{K}[x_1, \ldots, x_n]/I$ is isomorphic to $\mathbb{K}$. This part relies on the algebraic closedness of $\mathbb{K}$.
Suppose that we have some $e \in \mathbb{K}[x_1, \ldots, x_n]/I$ (treated as a field extension of $\mathbb{K}$); we want to prove it is actually in $\mathbb{K}$. Then the module-finiteness also implies $e$ is integral over $\mathbb{K}$ (by Lemma 2 of the Zariski post; darn, I'm forgetful, although in my defense, that lemma was optional for my cobbled-together proof). But if $e$ is the root of a polynomial $P$ with coefficients in $\mathbb{K}$, then we can factor $P$ into linear factors merely in $\mathbb{K}$ because $\mathbb{K}$ is algebraically closed. So we have $P(e)$ is zero and $\mathbb{K}[x_1, \ldots, x_n]/I$ is a field, which means one of those linear factors $e - c$ (with $c \in \mathbb{K}$) is zero and $e = c \in \mathbb{K}$, as desired.
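Spelled out in symbols (the names $c, c_1, \ldots, c_d$ here are my own), the factoring step is: algebraic closedness gives
\[ P(T) = c\,(T - c_1)(T - c_2) \cdots (T - c_d), \qquad c, c_1, \ldots, c_d \in \mathbb{K}, \]
so
\[ 0 = P(e) = c\,(e - c_1)(e - c_2) \cdots (e - c_d), \]
and since a field has no zero divisors, some factor $e - c_i$ must vanish, giving $e = c_i \in \mathbb{K}$.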
Okay. So $\mathbb{K}[x_1, \ldots, x_n]/I \cong \mathbb{K}$, so in the field $\mathbb{K}[x_1, \ldots, x_n]/I$, for each $x_i$ there's a residue $a_i \in \mathbb{K}$ that it's in the same class as. So $x_i - a_i \in I$.
But the ideal $(x_1 - a_1, \ldots, x_n - a_n)$ is maximal, because by "plugging in" $x_i = a_i$ in any element of $\mathbb{K}[x_1, \ldots, x_n]$ you can reduce it to something in $\mathbb{K}$, which differs from it by an element of $(x_1 - a_1, \ldots, x_n - a_n)$. Therefore, adding any element outside $(x_1 - a_1, \ldots, x_n - a_n)$ to it would make it cover the entirety of $\mathbb{K}[x_1, \ldots, x_n]$, since the new element reduces to a nonzero constant, which is a unit. So $(x_1 - a_1, \ldots, x_n - a_n)$ is maximal, yet $(x_1 - a_1, \ldots, x_n - a_n) \subseteq I$ and $I$ is proper, so $I = (x_1 - a_1, \ldots, x_n - a_n)$.
This means that $(a_1, \ldots, a_n) \in Z(I)$ and we are done.
— end proof —
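The "plugging in" step above, that any polynomial differs from its value at $(a_1, \ldots, a_n)$ by an element of $(x_1 - a_1, \ldots, x_n - a_n)$, can be checked mechanically with sympy's multivariate division; the polynomial and the point below are arbitrary choices of mine:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Arbitrary point (a, b) and polynomial P, just for illustration.
a, b = 2, -1
P = x**3*y + 5*x*y**2 - 7

# Divide P by the generators x - a and y - b; the remainder is a constant.
quotients, r = sp.reduced(P, [x - a, y - b], x, y)

# The remainder is exactly P(a, b): "plugging in" reduces P to an element
# of K, and the difference P - P(a, b) lies in the ideal (x - a, y - b).
assert r == P.subs({x: a, y: b})
diff = sp.expand(P - r - sum(q*g for q, g in zip(quotients, [x - a, y - b])))
assert diff == 0
print("P(a, b) =", r)
```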
Now we can tackle the hard direction of Hilbert's Nullstellensatz. Quick recap: it states that if $\mathbb{K}$ is an algebraically closed field and $J$ is an ideal of the polynomial ring $\mathbb{K}[X_1, \ldots, X_n]$, then $I(Z(J)) = \sqrt{J}$. We now want to prove that $I(Z(J)) \subseteq \sqrt{J}$, or that if $G \in I(Z(J))$ then $G \in \sqrt{J}$.
To do this, we add another indeterminate to the polynomial ring to get $\mathbb{K}[X_1, \ldots, X_n, Y]$. And we consider the ideal $J^*$ generated by $J$ and $YG - 1$; since the ring is Noetherian, $J$ has finitely many generators $F_1, \ldots, F_r$, so $J^* = (F_1, \ldots, F_r, YG - 1)$. Yes, this is trippy. The star doesn't mean anything, I just haven't found an excuse to use a star superscript in a while.
But this is useful because at any point where $F_1, \ldots, F_r$ are all zero, that means $G$ is also zero, which means $YG - 1$ is $-1$, so $J^*$ never completely evaluates to zero at any point. Therefore $Z(J^*)$ is empty, and by the contrapositive of the Weak Nullstellensatz, $J^* = \mathbb{K}[X_1, \ldots, X_n, Y]$. Therefore $J^*$ contains 1.
In other words we have an equation
\[ A_1F_1 + A_2F_2 + \cdots + A_rF_r + B(YG - 1) = 1, \]
where each $A_i$ (and $B$) is an element of $\mathbb{K}[X_1, \ldots, X_n, Y]$.
Divide everything by a high power $Y^\eta$ of $Y$ until there's no more of it in any numerator, so you get something like:
\[ A'_1F_1 + A'_2F_2 + \cdots + A'_rF_r + B'\left(G - \frac{1}{Y}\right) = \frac{1}{Y^\eta}, \]
where each $A'_i$ (and $B'$) is an element of $\mathbb{K}[X_1, \ldots, X_n, 1/Y]$, where $1/Y$ is a magical indeterminate.
Then set $1/Y = G$ to get the result:
\[ A''_1F_1 + A''_2F_2 + \cdots + A''_rF_r = G^\eta. \]
And magically, we're done!
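Here's the whole trick run on a tiny instance of my own choosing: $J = (X^2)$ and $G = X$ (which certainly vanishes wherever $X^2$ does), with the certificate of $1 \in J^*$ found by hand.

```python
import sympy as sp

X, Y = sp.symbols('X Y')

# Toy instance of the trick: J = (F1) with F1 = X^2, and G = X,
# which vanishes on Z(J) = {0}.  Certificate coefficients found by hand:
F1, G = X**2, X
A1, B = Y**2, -(Y*X + 1)

# Step 1: J* = (F1, Y*G - 1) contains 1 in K[X, Y].
assert sp.expand(A1*F1 + B*(Y*G - 1)) == 1

# Steps 2-3: dividing by Y^2 (so eta = 2) and setting 1/Y = G amounts to
# substituting Y -> 1/G and multiplying through by G^2.  The B-term dies:
assert sp.simplify((B*(Y*G - 1)).subs(Y, 1/G)) == 0

# What survives is A1''*F1 = G^eta, with A1'' = A1(Y = 1/G)*G^2 = 1:
A1pp = sp.simplify(A1.subs(Y, 1/G) * G**2)
assert A1pp == 1
assert sp.expand(A1pp * F1) == G**2   # G^2 is in J, so G is in sqrt(J)
print("certificate: 1*X^2 = X^2 = G^2")
```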
Okay, I don't know if you feel like this plugging in of variables is fishy the same way I do, but we can do it unfancily like this:
- treat $\mathbb{K}[X_1, \ldots, X_n, 1/Y]$ as a subring of the fraction field $\mathbb{K}(X_1, \ldots, X_n, Y)$
- take the homomorphism of that to $\mathbb{K}(X_1, \ldots, X_n)$ where $1/Y \mapsto G$, or equivalently $Y \mapsto 1/G$, and supposing $G \neq 0$ (if $G = 0$ then $G \in \sqrt{J}$ trivially)
- clearing denominators in the equation to make it involve only elements of $\mathbb{K}[X_1, \ldots, X_n]$ and feature $G^\eta$ on the right side
- saying "And magically, we're done!"
Really.
I should be several chapters ahead of where I am right now.
