# The Theorem of The Existence of Zeroes

It’s time to prove the central result of elementary algebraic geometry. Mostly it’s referred to as Hilbert’s Nullstellensatz. This German term translates roughly to the title of this post. Indeed ‘Null’ means ‘zero’, ‘Stellen’ means ‘places’ and ‘Satz’ means ‘theorem’, so it is literally the theorem about the places where zeroes occur. But referring to it merely as an existence theorem for zeroes is inadequate. Its real power is in setting up a correspondence between algebra and geometry.

Are you sitting comfortably? Grab a glass of water (or wine if you prefer). Settle back and have a peruse of these theorems. This is your first glance into the heart of a magical subject.

(In many texts these theorems are all referred to as the Nullstellensatz. I think this is both pointless and confusing, so have renamed them! If you have any comments or suggestions about these names please let me know).

Theorem 4.1 (Hilbert’s Nullstellensatz) Let $J\subsetneq k[\mathbb{A}^n]$ be a proper ideal of the polynomial ring. Then $V(J)\neq \emptyset$. In other words, for every proper ideal there exists a point which simultaneously zeroes all of its elements.

Theorem 4.2 (Maximal Ideal Theorem) Every maximal ideal $\mathfrak{m}\subset k[\mathbb{A}^n]$ is of the form $(x_1-a_1,\dots,x_n-a_n)$ for some $(a_1,\dots,a_n)\in \mathbb{A}^n$. In other words every maximal ideal is the ideal of some single point in affine space.

Theorem 4.3 (Correspondence Theorem) For every ideal $J\subset k[\mathbb{A}^n]$ we have $I(V(J))=\sqrt{J}$.

We’ll prove all of these shortly. Before that let’s have a look at some particular consequences. First note that 4.1 is manifestly false if $k$ is not algebraically closed. Consider for example $k=\mathbb{R}$ and $n=1$. Then certainly $V(x^2+1)=\emptyset$. Right then. From here on in we really must stick just to algebraically closed fields.
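We can even watch this failure, and its repair, computationally. Here is a quick sketch of the example above (my own illustration, assuming the sympy library):

```python
# Sketch (not from the original post): checking the counterexample with sympy.
from sympy import I, real_roots, solve, symbols

x = symbols("x")

# Over R the polynomial x^2 + 1 has no zeroes, so V(x^2 + 1) is empty...
assert real_roots(x**2 + 1) == []

# ...but over the algebraically closed field C a zero exists, as 4.1 demands.
assert set(solve(x**2 + 1, x)) == {I, -I}
```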

Despite having the famous name, 4.1 is not really immediately useful. In fact we’ll see its main role is as a convenient stopping point in the proof of 4.3 from 4.2. The maximal ideal theorem is much more important. It precisely provides the converse to Theorem 3.10. But it is the correspondence theorem that is of greatest merit. As an immediate corollary of 4.3, 3.8 and 3.10 (recalling that prime and maximal ideals are radical) we have

Corollary 4.4 The maps $V,I$ as defined in 1.2 and 2.4 give rise to the following bijections

$\{\textrm{affine varieties in }\mathbb{A}^n\} \leftrightarrow \{\textrm{radical ideals in } k[\mathbb{A}^n]\}$
$\{\textrm{irreducible varieties in }\mathbb{A}^n\} \leftrightarrow \{\textrm{prime ideals in } k[\mathbb{A}^n]\}$
$\{\textrm{points in }\mathbb{A}^n\} \leftrightarrow \{\textrm{maximal ideals in } k[\mathbb{A}^n]\}$

Proof We’ll prove the first bijection explicitly, for it is so rarely done in the literature. The second and third bijections follow from the argument for the first, together with 3.8 and 3.10. Let $J$ be a radical ideal in $k[\mathbb{A}^n]$. Then $V(J)$ is certainly an affine variety, so $V$ is well defined. Moreover $V$ is injective. For suppose $\exists J'$ radical with $V(J')=V(J)$. Then $I(V(J'))=I(V(J))$, and thus by 4.3 $J = \sqrt{J} = \sqrt{J'} = J'$. It remains to prove that $V$ is surjective. Take $X$ an affine variety. Then $J'=I(X)$ is an ideal with $V(J')=X$ by Lemma 2.5. But $J'$ is not obviously radical. Let $J=\sqrt{J'}$, a radical ideal. Then by 4.3 $I(V(J'))=J$. So $V(J) = V(I(V(J'))) = V(J') = X$ by 2.5. This completes the proof. $\blacksquare$

We’ll see in the next post that we need not restrict our attention to $\mathbb{A}^n$. In fact using the coordinate ring we can gain a similar correspondence for the subvarieties of any given variety. This will lead to an advanced introduction to the language of schemes. With these promising results on the horizon, let’s get down to business. We’ll begin by recalling a definition and a theorem.

Definition 4.5 A finitely generated $k$-algebra is a ring $R$ s.t. $R \cong k[a_1,\dots,a_n]$ for some $a_i \in R$. A finite $k$-algebra is a ring $R$ s.t. $R\cong ka_1 + \dots + ka_n$ for some $a_i \in R$.

Observe how this definition might be confusing when compared to a finitely generated $k$-module. But applying a broader notion of ‘finitely generated’ to both algebras and modules clears up the issue. You can check that the following definition is equivalent to those we’ve seen for algebras and modules. A finitely generated algebra is richer than a finitely generated module because an algebra has an extra operation – multiplication.

Definition 4.6 We say an algebra (module) $A$ is finitely generated if there exists a finite set of generators $F$ s.t. $A$ is the smallest algebra (module) containing $F$. We then say that $A$ is generated by $F$.
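The gap between the two notions is worth a concrete example. The polynomial ring $k[x]$ separates them; the following worked display (my own, not from the text above) spells it out:

```latex
% My worked example: k[x] separates the two notions.
% As a k-algebra, k[x] is finitely generated by the single element x,
% because products of the generator are allowed:
k[x] \;=\; k[a_1] \quad \text{with } a_1 = x.
% But k[x] is not a finite k-algebra: the powers 1, x, x^2, \dots are
% k-linearly independent, so no sum k a_1 + \dots + k a_n exhausts it.
% By contrast, a quotient by a power of x is spanned by finitely many
% elements, hence finite:
k[x]/(x^n) \;=\; k\cdot 1 + k\cdot \bar{x} + \dots + k\cdot \bar{x}^{\,n-1}.
```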

Theorem 4.7 Let $k$ be a general field and $A$ a finitely generated $k$-algebra. If $A$ is a field then $A$ is algebraic over $k$.

Okay I cheated a bit saying ‘recall’ Theorem 4.7. You probably haven’t seen it anywhere before. And you might think that it’s a teensy bit abstract! Nevertheless we shall see that it has immediate practical consequences. If you are itching for a proof, don’t worry. We’ll in fact present two. The first will be due to Zariski, and the second an idea of Noether. But before we come to those we must deduce 4.1 – 4.3 from 4.7.

Proof of 4.2 Let $m \subset k[\mathbb{A}^n]$ be a maximal ideal. Then $F = k[\mathbb{A}^n]/m$ is a field. Define the natural homomorphism $\pi: k[\mathbb{A}^n] \ni x \mapsto x+m \in F$. Note that $F$ is a finitely generated $k$-algebra, generated by the $x_i+m$. Thus by 4.7 $F/k$ is an algebraic extension. But $k$ is algebraically closed, so $k$ is isomorphic to $F$ via $\phi : k \rightarrowtail k[\mathbb{A}^n] \xrightarrow{\pi} F$.

Let $a_i = \phi^{-1}(x_i+m)$. Then $\pi(x_i - a_i) = 0$ so $x_i - a_i \in \textrm{ker}\pi = m$. Hence $(x_1-a_1, \dots, x_n-a_n) \subset m$. But $(x_1-a_1, \dots, x_n-a_n)$ is itself maximal by 3.10. Hence $m = (x_1-a_1, \dots, x_n-a_n)$ as required. $\blacksquare$

That was really quite easy! We just worked through the definitions, making good use of our stipulation that $k$ is algebraically closed. We’ll soon see that all the algebraic content is squeezed into the proof of 4.7.
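In concrete terms, the proof identifies a maximal ideal with the kernel of evaluation at a point. A small sympy sketch (my own toy example; the point $(1,2)$ and the polynomial are arbitrary choices) makes this visible: reducing modulo the generators of $m$ is exactly evaluation.

```python
# Sketch (illustrative, not from the post): modulo the maximal ideal
# m = (x - 1, y - 2), every polynomial reduces to its value at the
# point (1, 2) -- i.e. m is the kernel of evaluation at that point.
from sympy import reduced, symbols

x, y = symbols("x y")
m = [x - 1, y - 2]

f = x**2 * y + 3 * x - 7
quotients, remainder = reduced(f, m, x, y)

# The remainder is the constant f(1, 2) = 2 + 3 - 7 = -2.
assert remainder == f.subs({x: 1, y: 2}) == -2
```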

Proof of 4.1 Let $J$ be a proper ideal in the polynomial ring. Since $k[\mathbb{A}^n]$ is Noetherian, $J\subset m$ for some maximal ideal $m$. From 4.2 we know that $m=I(P)$ for some point $P\in \mathbb{A}^n$. Recall from 2.5 that $V(I(P)) = \{P\}$. Since $J \subset I(P)$ we have $\{P\} = V(I(P)) \subset V(J)$, so $V(J) \neq \emptyset$. $\blacksquare$
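Here’s a quick computational illustration (a toy example of mine, using sympy): a proper ideal with no real zeroes still picks up a common zero over $\mathbb{C}$, exactly as 4.1 promises.

```python
# Sketch (my own toy example): the proper ideal J = (x*y - 1, x + y) has no
# real zeroes, but over the algebraically closed field C a common zero
# exists, just as 4.1 guarantees.
from sympy import solve, symbols

x, y = symbols("x y")
J = [x * y - 1, x + y]

zeroes = solve(J, [x, y])

# Two common zeroes in A^2 over C: (i, -i) and (-i, i).
assert len(zeroes) == 2
assert all(g.subs({x: a, y: b}) == 0 for (a, b) in zeroes for g in J)
```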

The following proof is lengthier but still not difficult. Our argument uses a method known as the Rabinowitsch trick.

Proof of 4.3 Let $J\triangleleft k[\mathbb{A}^n]$ and $f\in I(V(J))$. We want to prove that $\exists N$ s.t. $f^N \in J$. We start by introducing a new variable $t$. Define an ideal $J_f \supset J$ by $J_f = (J, ft - 1) \subset k[x_1,\dots,x_n,t]$. By definition $V(J_f) = \{(P,b) \in \mathbb{A}^{n+1} : P\in V(J), \ f(P)b = 1\}$. But $f \in I(V(J))$, so $f(P) = 0$ for every $P \in V(J)$, whence $f(P)b = 0 \neq 1$. Hence $V(J_f) = \emptyset$.

Now by 4.1 we must have that $J_f$ is improper. In other words $J_f = k[x_1,\dots, x_n, t]$; in particular $1 \in J_f$. Since $k[\mathbb{A}^n]$ is Noetherian we know that $J$ is finitely generated, by some $\{f_1,\dots,f_r\}$ say. Thus we can write $1 = \sum_{i=1}^r g_i f_i + g_0 (ft - 1)$ where $g_i\in k[x_1,\dots , x_n, t]$ (*).

Let $N$ be such that $t^N$ is the highest power of $t$ appearing among the $g_i$ for $0 \leq i \leq r$. Now multiplying (*) above by $f^N$ yields $f^N = \sum_{i=1}^r G_i(x_1,\dots, x_n, ft) f_i + G_0(x_1,\dots,x_n,ft)(ft-1)$, where we define $G_i = f^N g_i$, rewritten as a polynomial in $x_1,\dots,x_n$ and $ft$. This equation is valid in $k[x_1,\dots,x_n, t]$. Consider its reduction in the ring $k[x_1,\dots,x_n,t]/(ft - 1)$. We have the congruence $f^N\equiv \sum_{i=1}^r h_i (x_1,\dots,x_n) f_i \ \textrm{mod}\ (ft-1)$ where $h_i = G_i(x_1,\dots,x_n,1)$.

Now consider the map $\phi:k[x_1,\dots, x_n]\rightarrowtail k[x_1,\dots, x_n,t]\xrightarrow{\pi} k[x_1,\dots, x_n,t]/(ft-1)$. Certainly no nonzero element of the image of the injection can lie in the ideal $(ft - 1)$, having no $t$ dependence. Hence $\phi$ is injective. But then we see that $f^N = \sum_{i=1}^r h_i(x_1,\dots, x_n) f_i$ holds in the ring $k[\mathbb{A}^n]$. Recalling that the $f_i$ generate $J$ gives the result. $\blacksquare$
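The whole argument can be watched in miniature with sympy (my own toy example; Gröbner bases stand in for the Noetherian bookkeeping):

```python
# Sketch of the Rabinowitsch trick on a toy example (mine, not the post's):
# J = (x^2*y, x*y^2) in k[x, y]. V(J) is the union of the two axes, so
# f = x*y lies in I(V(J)); indeed f^2 is in J although f itself is not.
from sympy import groebner, reduced, symbols

x, y, t = symbols("x y t")
J = [x**2 * y, x * y**2]
f = x * y

# {x^2*y, x*y^2} is already a Groebner basis (its S-polynomial is zero), so
# a zero remainder on division decides ideal membership: f^2 is in J, f is not.
assert reduced(f, J, x, y)[1] != 0
assert reduced(f**2, J, x, y)[1] == 0

# The trick: adjoining f*t - 1 makes the ideal improper, so its reduced
# Groebner basis collapses to [1] -- the algebraic shadow of V(J_f) = empty.
G = groebner(J + [f * t - 1], x, y, t, order="lex")
assert G.exprs == [1]
```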

We shall devote the rest of this post to establishing 4.7. To do so we’ll need a number of lemmas. You might be unable to see the wood for the trees! If so, you can safely skim over much of this. The important exception is Noether normalisation, which we’ll come to later. I’ll link the ideas of our lemmas to geometrical concepts at our next meeting.

Definition 4.8 Let $A,B$ be rings with $B \subset A$. Let $a\in A$. We say that $a$ is integral over $B$ if $a$ is the root of some monic polynomial with coefficients in $B$. That is to say $\exists b_i \in B$ s.t. $a^n + b_{n-1}a^{n-1} + \dots + b_0 = 0$. If every $a \in A$ is integral over $B$ we say that $A$ is integral over $B$, or that $A$ is an integral extension of $B$.

Let’s note some obvious facts. Firstly we can immediately talk about $A$ being integral over $B$ when $A,B$ are algebras with $B$ a subalgebra of $A$. Remember an algebra is still a ring! It’s rather pedantic to stress this now, but hopefully it’ll prevent confusion if I mix my terminology later. Secondly observe that when $A$ and $B$ are fields “integral over” means exactly the same as “algebraic over”.

We’ll begin by proving some results that will be of use in both our approaches. We’ll see that there’s a subtle interplay between finite $k$-algebras, integral extensions and fields.

Lemma 4.9 Let $F$ be a field and $R\subset F$ a subring. Suppose $F$ is an integral extension of $R$. Then $R$ is itself a field.

Proof Let $0 \neq r \in R$. Then certainly $r \in F$, so $r^{-1} \in F$ since $F$ is a field. Now $r^{-1}$ is integral over $R$, so satisfies an equation $r^{-n} + b_{n-1} r^{-(n-1)} +\dots + b_0 = 0$ with $b_i \in R$. Multiplying through by $r^{n-1}$ yields $r^{-1} = -(b_{n-1} + b_{n-2}r + \dots + b_0 r^{n-1}) \in R$. Hence every nonzero element of $R$ has an inverse in $R$, so $R$ is a field. $\blacksquare$

Note that this isn’t obvious a priori. The property that an extension is integral contains sufficient information to percolate the property of inverses down to the base ring.
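A cautionary example (my own) of how integrality can fail, consistent with 4.9:

```latex
% The extension \mathbb{Z} \subset \mathbb{Q} is NOT integral -- otherwise
% 4.9 would force \mathbb{Z} to be a field. Concretely, 1/2 is a root of the
% non-monic polynomial 2x - 1, but of no monic integer polynomial: from
\left(\tfrac{1}{2}\right)^n + b_{n-1}\left(\tfrac{1}{2}\right)^{n-1}
  + \dots + b_0 = 0, \qquad b_i \in \mathbb{Z},
% multiplying through by 2^n would give
1 + 2b_{n-1} + 4b_{n-2} + \dots + 2^n b_0 = 0,
% which is impossible modulo 2.
```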

Lemma 4.10 If $A$ is a finite $B$-algebra then $A$ is integral over $B$.

Proof Write $A = Ba_1 + \dots +Ba_n$. Let $x \in A$. We want to prove that $x$ satisfies some equation $x^n + b_{n-1}x^{n-1} + \dots + b_0 = 0$ with $b_i \in B$. We’ll do so by appealing to our knowledge of determinants. For each $a_i$ we may clearly write $xa_i = \sum_{j=1}^{n} b_{ij}a_j$ for some $b_{ij} \in B$.

Writing $\vec{a} = (a_1, \dots, a_n)$ and defining the matrix $(\beta)_{ij} = b_{ij}$, we can express our equations as $\beta \vec{a} = x\vec{a}$. We recognise this as an eigenvalue problem. Multiplying $(xI - \beta)$ by its adjugate shows that $\det(xI - \beta)\, a_j = 0$ for every $j$; since $1$ is a $B$-linear combination of the $a_j$, it follows that $\det(xI - \beta) = 0$. In other words $x$ satisfies the characteristic polynomial of $\beta$, a monic polynomial of degree $n$ with coefficients in $B$. But this is precisely what we wanted to show. $\blacksquare$
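A tiny worked instance (my own) of this determinant trick may help:

```latex
% Take B = \mathbb{Z} and A = \mathbb{Z}[\sqrt{2}]
%   = \mathbb{Z}\cdot 1 + \mathbb{Z}\cdot\sqrt{2},
% with x = 1 + \sqrt{2} and basis a_1 = 1, a_2 = \sqrt{2}. Then
x \cdot 1 = 1 \cdot 1 + 1 \cdot \sqrt{2}, \qquad
x \cdot \sqrt{2} = 2 \cdot 1 + 1 \cdot \sqrt{2},
\quad\text{so}\quad
\beta = \begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix}.
% The characteristic polynomial is
\det(tI - \beta) = (t-1)^2 - 2 = t^2 - 2t - 1,
% and indeed x = 1 + \sqrt{2} satisfies it:
(1+\sqrt{2})^2 = 3 + 2\sqrt{2} = 2(1+\sqrt{2}) + 1.
```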

Corollary 4.11 Let $A$ be a field and $B\subset A$ a subring. If $A$ is a finite $B$-algebra then $B$ is itself a field.

Proof Immediate from 4.9 and 4.10. $\blacksquare$

We now focus our attention on Zariski’s proof of the Nullstellensatz. I take as a source Daniel Grayson’s excellent exposition.

Lemma 4.12 Let $R$ be a ring and $F$ an $R$-algebra generated by a single element $x \in F$. Suppose further that $F$ is a field. Then $\exists s \in R$ s.t. $S = R[s^{-1}]$ is a field. Moreover $x$ is algebraic over $S$.

Proof Note that $R$, as a subring of the field $F$, is an integral domain, so we may let $R'$ be its fraction field. Now recall that $x$ is algebraic over $R'$ iff $R'[x] \supset R'(x)$. Thus $x$ is algebraic over $R'$ iff $R'[x]$ is a field. So certainly our $x$ is algebraic over $R'$, for $R'[x] = F$ (since $R' \subset F$) and we are given that $F$ is a field. Let $x^n + f_{n-1}x^{n-1} + \dots + f_0$ be the minimal polynomial of $x$ over $R'$.

Now define $s\in R$ to be a common denominator of the $f_i$, so that $f_0,\dots, f_{n-1} \in R[s^{-1}] = S$. Now $x$ is integral over $S$, so $F = S[x]$ is a finite $S$-algebra, and hence an integral extension of $S$ by 4.10. But then by 4.9 $S$ is a field, and $x$ is algebraic over it. $\blacksquare$

Observe that this result is extremely close to 4.7. Indeed if we take $R$ to be a field then $S = R$ in 4.12. The lemma then says that $R[x]$ is algebraic as a field extension of $R$. Morally this proof mostly just used definitions. The only nontrivial fact was the relationship between $R'(x)$ and $R'[x]$. Even this is not hard to show rigorously from first principles, and I leave it as an exercise for the reader.

We’ll now attempt to generalise 4.12 to $R[x_1,\dots,x_n]$. The argument is essentially inductive, though quite laborious. 4.7 will be immediate once we have succeeded.

Lemma 4.13 Let $R = F[x]$ be a polynomial ring over a field $F$. Let $u\in R$. Then $R[u^{-1}]$ is not a field.

Proof By Euclid’s classic argument, $R$ has infinitely many primes, so we may choose a prime $p$ not dividing $u$. Suppose $\exists q \in R[u^{-1}]$ s.t. $qp = 1$. Then $q = f(u^{-1})$ where $f$ is a polynomial of degree $n$ with coefficients in $R$. Multiplying through by $u^n$ gives $u^n = u^n f(u^{-1})\, p$ in $R$, for $u^n f(u^{-1}) \in R$. Thus $p \mid u^n$, but $p$ is prime so $p \mid u$. This is a contradiction. $\blacksquare$

Corollary 4.14 Let $K$ be a field, $F\subset K$ a subfield, and $x \in K$. Let $R = F[x]$. Suppose $\exists u\in R$ s.t. $R[u^{-1}] = K$. Then $x$ is algebraic over $F$. Moreover $R = K$.

Proof Suppose $x$ were transcendental over $F$. Then $R=F[x]$ would be a polynomial ring, so by 4.13 $R[u^{-1}]$ couldn’t be a field. Hence $x$ is algebraic over $F$, so $R$ is a field. Hence $R=R[u^{-1}]=K$. $\blacksquare$

The following fairly abstract theorem is the key to unlocking the Nullstellensatz. It’s essentially a slight extension of 4.14, applying 4.12 in the process. I’d recommend skipping the proof on a first reading, focussing instead on how it’s useful for the induction of 4.16.

Theorem 4.15 Take $K$ a field, $F \subset K$ a subring, and $x \in K$. Let $R = F[x]$. Suppose $\exists u\in R$ s.t. $R[u^{-1}] = K$. Then $\exists\, 0\neq s \in F$ s.t. $F[s^{-1}]$ is a field. Moreover $F[s^{-1}][x] = K$ and $x$ is algebraic over $F[s^{-1}]$.

Proof Let $L=\textrm{Frac}(F)$. Now by 4.14 we can immediately say that $L[x]=K$, with $x$ algebraic over $L$. Now we seek our element $s$ with the desired properties. Looking back at 4.12, we might expect it to be useful. But to use 4.12 for our purposes we’ll need to apply it to some $F' = F[t^{-1}]$ with $F'[x] = K$, where $t \in F$.

Suppose we’ve found such a $t$. Then 4.12 gives us $s' \in F'$ s.t. $F'[s'^{-1}]$ is a field with $x$ algebraic over it. But now $s' = qt^{-m}$ for some $q \in F, \ m \in \mathbb{N}$. Now $F'[s'^{-1}]=F[t^{-1}][s'^{-1}]=F[(qt)^{-1}]$, so setting $s=qt$ completes the proof. (You might want to think about that last equality for a second. It’s perhaps not immediately obvious).

So all we need to do is find $t$. We do this using our first observation in the proof. Observe that $u^{-1}\in K=L[x]$, so we can write $u^{-1}=l_0+\dots +l_{n-1}x^{n-1}$ with $l_i \in L$. Now let $t \in F$ be a common denominator for all the $l_i$. Then $u^{-1} \in F'[x]$ where $F'=F[t^{-1}]$, so $F'[x]=K$ as required. $\blacksquare$

Corollary 4.16 Let $k$ be a ring and $A$ a field, finitely generated as a $k$-algebra by $x_1,\dots,x_n$. Then $\exists\, 0\neq s\in k$ s.t. $k[s^{-1}]$ is a field, with $A$ a finite algebraic extension of $k[s^{-1}]$. In particular if $k$ is a field, then $A$ is algebraic over $k$, establishing 4.7.

Proof Apply Theorem 4.15 with $F=k[x_1,\dots,x_{n-1}]$, $x=x_n$, $u=1$ to get $s'\in F$ s.t. $A' = k[x_1,\dots,x_{n-1}][s'^{-1}]$ is a field with $x_n$ algebraic over it. But now apply 4.15 again with $F=k[x_1,\dots,x_{n-2}]$, $x = x_{n-1}$, $u = s'$ to deduce that $A''=k[x_1,\dots, x_{n-2}][s''^{-1}]$ is a field, with $A'$ algebraic over $A''$, for some $s'' \in F$. Applying the theorem a further $(n-2)$ times gives the result. $\blacksquare$

This proof of the Nullstellensatz is pleasingly direct and algebraic. However it has taken us a long way from the geometric content of the subject. Moreover 4.13–4.15 are pretty arcane in the current setting. (I’m not sure whether they become more meaningful with a better knowledge of the subject. Do comment if you happen to know!)

Our second proof sticks closer to the geometric roots. We’ll introduce an important idea called Noether Normalisation along the way. For that you’ll have to come back next time!