# Calabi-Yau Manifolds and Moduli Stabilization

For better or worse, string theory dominates modern research in theoretical physics. Naively, you might expect a theory consisting of tiny strings to be pretty simple. But the subject has grown into a vast and exciting playground for new ideas.

String theory is popular not just because it might unify physics or quantize gravity. In fact, many unexpected offshoots have proved more successful than the original idea! From particle physics to superconductors, string theory is having a surprising indirect impact. It’s certainly useful, even if it doesn’t prove to be the ultimate description of reality.

But what of the original plan – to describe nature using strings? A key sticking point is the existence of extra dimensions: string theory needs 6 of these to work consistently, making 10 dimensions in total. Another problem is supersymmetry. 10-dimensional string theory must have lots of it to work correctly. But 4-dimensional physics has only a little bit of supersymmetry at most!

It turns out that these problems can be solved in one step. By coiling up the extra dimensions into a Calabi-Yau manifold we can make the extra dimensions effectively invisible, while reducing the supersymmetry we end up with in our 4D world. So what is this Calabi-Yau manifold, I hear you ask!

Well, Calabi-Yau is just a technical term for the shape of the compact extra dimensions. Different shapes break different amounts of symmetry, leaving us with different theories. Calabi-Yaus are just symmetrical enough to break the right amount of supersymmetry, giving us a sensible theory in the end!

Technically, Calabi-Yau manifolds must have a metric which is Kähler and Ricci flat. These properties encode the correct information about the shape of the curled-up dimensions. So we must look for 6-real-dimensional manifolds with these properties.

Generically, you don’t have to put a notion of distance on a space. When I go for a walk, I don’t always carry around a yardstick so I can measure how far I’ve gone! You can have a perfectly good manifold without giving it a metric, but you get extra information once you have defined what distance means.

As it happens, finding a metric which is Calabi-Yau is quite difficult. But due to the genius of Shing-Tung Yau, we know that you don’t need to do this! There’s an equivalent definition of a Calabi-Yau manifold which doesn’t depend on metrics at all. All you need to know is the topological information about the manifold – roughly speaking, how “holey” it is.

If you know something about differential geometry, this kind of equivalence might sound familiar. Yau’s theorem relating geometry and topology is like a (much) more complicated version of the classic Gauss-Bonnet theorem!

It’s a darn sight easier to discover Calabi-Yaus when you know it’s only the topological data that matters. At first people thought there might only be a few, but now we know there’s a huge number of potential candidates! The problem then becomes choosing one which produces the physics of our universe.

While people have made progress on this, the going is tough. One reason is that nobody knows the metric on a compact Calabi-Yau. This isn’t so important for string calculations, but it makes a big difference when you need to consider branes. So people have come up with various workarounds, which give promising physical results. One such success story is provided by my colleague Zac Kenton, who recently wrote a paper on brane inflation with his PhD supervisor, Steve Thomas.

There’s one final complication that I should mention. If string theory is to be a fundamental theory, then the Calabi-Yau shape should be dynamic. More specifically it will squeeze and stretch over time, unless there’s some mechanism to keep it stable. From the perspective of the 4 large dimensions, this freedom is seen as free scalar fields. These so-called “moduli” fields are bad, because we don’t observe anything like them in nature!

To solve this problem, we must find a way of constraining the fluctuations of the Calabi-Yau. Put another way we have to stabilize the moduli fields, by giving them potential terms, so that their fluctuations are small and essentially negligible at low energies. Hence this is known as the problem of moduli stabilization.

One popular way to solve the conundrum is to turn on some supergravity fields at high energy. These so-called fluxes generate potential terms for the moduli, solving the stabilization problem. Initially this idea was unpopular because of a famous no-go theorem by Witten. But since the advent of the D-brane revolution, the concept is back in vogue!

So there you have it – a 5 minute snapshot of “real” string theory. Now it’s time to get back to my calculations, where string theory is more the background Muse, and certainly not the main protagonist!

# The Theorem of The Existence of Zeroes

It’s time to prove the central result of elementary algebraic geometry. Mostly it’s referred to as Hilbert’s Nullstellensatz. This German term translates roughly to the title of this post. Indeed ‘Null’ means ‘zero’, ‘Stellen’ means ‘places’ and ‘Satz’ means ‘theorem’ – so it is the theorem about the places where zeroes exist. But referring to it merely as an existence theorem for zeroes is inadequate. Its real power is in setting up a correspondence between algebra and geometry.

Are you sitting comfortably? Grab a glass of water (or wine if you prefer). Settle back and have a peruse of these theorems. This is your first glance into the heart of a magical subject.

(In many texts these theorems are all referred to as the Nullstellensatz. I think this is both pointless and confusing, so have renamed them! If you have any comments or suggestions about these names please let me know).

Theorem 4.1 (Hilbert’s Nullstellensatz) Let $J\subsetneq k[\mathbb{A}^n]$ be a proper ideal of the polynomial ring. Then $V(J)\neq \emptyset$. In other words, for every proper ideal there exists a point which simultaneously zeroes all of its elements.

Theorem 4.2 (Maximal Ideal Theorem) Every maximal ideal $\mathfrak{m}\subset k[\mathbb{A}^n]$ is of the form $(x_1-a_1,\dots,x_n-a_n)$ for some $(a_1,\dots,a_n)\in \mathbb{A}^n$. In other words every maximal ideal is the ideal of some single point in affine space.

Theorem 4.3 (Correspondence Theorem) For every ideal $J\subset k[\mathbb{A}^n]$ we have $I(V(J))=\sqrt{J}$.

We’ll prove all of these shortly. Before that let’s have a look at some particular consequences. First note that 4.1 is manifestly false if $k$ is not algebraically closed. Consider for example $k=\mathbb{R}$ and $n=1$. Then certainly $V(x^2+1)=\emptyset$. Right then. From here on in we really must stick just to algebraically closed fields.
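The $k=\mathbb{R}$ counterexample is easy to watch numerically. Here’s a minimal pure-Python sanity check (the function name `p` is just my illustration, not anything from the theorem):

```python
# Check that V(x^2 + 1) is empty over the reals but not over the
# complex numbers, matching the counterexample in the text.

def p(z):
    """The polynomial x^2 + 1, evaluated at a real or complex number."""
    return z * z + 1

# Over R: x^2 + 1 >= 1 for every real x, so there is no real zero.
real_samples = [v / 100 for v in range(-1000, 1001)]
assert all(p(v) >= 1 for v in real_samples)

# Over C (algebraically closed): the roots i and -i zero the polynomial.
complex_roots = [1j, -1j]
assert all(p(z) == 0 for z in complex_roots)

print("V(x^2 + 1) is empty over R, but equals {i, -i} over C")
```

This is of course exactly the failure that disappears once we insist $k$ is algebraically closed.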

Despite having the famous name, 4.1 is not really immediately useful. In fact we’ll see its main role is as a convenient stopping point in the proof of 4.3 from 4.2. The maximal ideal theorem is much more important: it precisely provides the converse to Theorem 3.10. But it is the correspondence theorem that is of greatest merit. As an immediate corollary of 4.3, 3.8 and 3.10 (recalling that prime and maximal ideals are radical) we have

Corollary 4.4 The maps $V,I$ as defined in 1.2 and 2.4 give rise to the following bijections

$\{\textrm{affine varieties in }\mathbb{A}^n\} \leftrightarrow \{\textrm{radical ideals in } k[\mathbb{A}^n]\}$
$\{\textrm{irreducible varieties in }\mathbb{A}^n\} \leftrightarrow \{\textrm{prime ideals in } k[\mathbb{A}^n]\}$
$\{\textrm{points in }\mathbb{A}^n\} \leftrightarrow \{\textrm{maximal ideals in } k[\mathbb{A}^n]\}$

Proof We’ll prove the first bijection explicitly, for it is so rarely done in the literature. The second and third bijections follow from the argument for the first together with 3.8 and 3.10. Let $J$ be a radical ideal in $k[\mathbb{A}^n]$. Then $V(J)$ is certainly an affine variety, so $V$ is well defined. Moreover $V$ is injective. For suppose $\exists J'$ radical with $V(J')=V(J)$. Then $I(V(J'))=I(V(J))$ and thus by 4.3 $J = J'$. It remains to prove that $V$ is surjective. Take $X$ an affine variety. Then $J'=I(X)$ is an ideal with $V(J')=X$ by Lemma 2.5. But $J'$ is not necessarily radical. Let $J=\sqrt{J'}$, a radical ideal. Then by 4.3 $I(V(J'))=J$. So $V(J) = V(I(V(J'))) = V(J') = X$ by 2.5. This completes the proof. $\blacksquare$

We’ll see in the next post that we need not restrict our attention to $\mathbb{A}^n$. In fact using the coordinate ring we can gain a similar correspondence for the subvarieties of any given variety. This will lead to an advanced introduction to the language of schemes. With these promising results on the horizon, let’s get down to business. We’ll begin by recalling a definition and a theorem.

Definition 4.5 A finitely generated $k$-algebra is a ring $R$ s.t. $R \cong k[a_1,\dots,a_n]$ for some $a_i \in R$. A finite $k$-algebra is a ring $R$ s.t. $R = ka_1 + \dots + ka_n$ for some $a_i \in R$.

Observe how this definition might be confusing when compared to a finitely generated $k$-module. But applying a broader notion of ‘finitely generated’ to both algebras and modules clears up the issue. You can check that the following definition is equivalent to those we’ve seen for algebras and modules. A finitely generated algebra is richer than a finitely generated module because an algebra has an extra operation – multiplication.

Definition 4.6 We say an algebra (module) $A$ is finitely generated if there exists a finite set of generators $F$ s.t. $A$ is the smallest algebra (module) containing $F$. We then say that $A$ is generated by $F$.

Theorem 4.7 Let $k$ be a general field and $A$ a finitely generated $k$-algebra. If $A$ is a field then $A$ is algebraic over $k$.

Okay I cheated a bit saying ‘recall’ Theorem 4.7. You probably haven’t seen it anywhere before. And you might think that it’s a teensy bit abstract! Nevertheless we shall see that it has immediate practical consequences. If you are itching for a proof, don’t worry. We’ll in fact present two. The first will be due to Zariski, and the second an idea of Noether. But before we come to those we must deduce 4.1 – 4.3 from 4.7.

Proof of 4.2 Let $m \subset k[\mathbb{A}^n]$ be a maximal ideal. Then $F = k[\mathbb{A}^n]/m$ is a field. Define the natural homomorphism $\pi: k[\mathbb{A}^n] \ni x \mapsto x+m \in F$. Note $F$ is a finitely generated $k$-algebra, generated by the $x_i+m$ certainly. Thus by 4.7 $F/k$ is an algebraic extension. But $k$ was algebraically closed, so the composite $\phi : k \rightarrowtail k[\mathbb{A}^n] \xrightarrow{\pi} F$ is an isomorphism of $k$ with $F$.

Let $a_i = \phi^{-1}(x_i+m)$. Then $\pi(x_i - a_i) = 0$ so $x_i - a_i \in \textrm{ker}\pi = m$. Hence $(x_1-a_1, \dots, x_n-a_n) \subset m$. But $(x_1-a_1, \dots, x_n-a_n)$ is itself maximal by 3.10. Hence $m = (x_1-a_1, \dots, x_n-a_n)$ as required. $\blacksquare$
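To see 4.2 concretely: reducing a polynomial modulo the ideal $(x-2,y-3)$ is the same as evaluating it at the point $(2,3)$, which is exactly the statement that the quotient map $\pi$ is evaluation at the point. A small sketch, assuming SymPy is available (the point and the polynomial are arbitrary choices of mine):

```python
# Sketch: the maximal ideal m = (x - 2, y - 3) in C[x, y] behaves as
# the ideal of the single point (2, 3): reduction mod m is evaluation.
from sympy import symbols, reduced

x, y = symbols('x y')
m_gens = [x - 2, y - 3]

# {x - 2, y - 3} is already a Groebner basis (coprime leading terms),
# so the remainder on division is a constant congruent to f mod m,
# i.e. the value f(2, 3).
f = x**2 * y + x - 1
_, remainder = reduced(f, m_gens, x, y)

assert remainder == f.subs({x: 2, y: 3})  # both equal 13
print("f mod m =", remainder)
```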

That was really quite easy! We just worked through the definitions, making good use of our stipulation that $k$ is algebraically closed. We’ll soon see that all the algebraic content is squeezed into the proof of 4.7.

Proof of 4.1 Let $J$ be a proper ideal in the polynomial ring. Since $k[\mathbb{A}^n]$ is Noetherian, $J\subset m$ for some maximal ideal $m$. From 4.2 we know that $m=I(P)$ for some point $P\in \mathbb{A}^n$. Recall from 2.5 that $V(I(P)) = \{P\}$, and since $J \subset m$ we have $\{P\} = V(m) \subset V(J)$, so $V(J) \neq \emptyset$. $\blacksquare$

The following proof is lengthier but still not difficult. Our argument uses a method known as the Rabinowitsch trick.

Proof of 4.3 Let $J\triangleleft k[\mathbb{A}^n]$ and $f\in I(V(J))$. We want to prove that $\exists N$ s.t. $f^N \in J$. We start by introducing a new variable $t$. Define an ideal $J_f \supset J$ by $J_f = (J, ft - 1) \subset k[x_1,\dots,x_n,t]$. By definition $V(J_f) = \{(P,b) \in \mathbb{A}^{n+1} : P\in V(J), \ f(P)b = 1\}$. But $f \in I(V(J))$, so $f(P)=0$ for every $P\in V(J)$ and the condition $f(P)b = 1$ can never hold. Hence $V(J_f) = \emptyset$.

Now by 4.1 we must have that $J_f$ is improper. In other words $J_f = k[x_1,\dots, x_n, t]$. In particular $1 \in J_f$. Since $k[x_1,\dots, x_n, t]$ is Noetherian we know that $J$ is finitely generated, by some $\{f_1,\dots,f_r\}$ say. Thus we can write $1 = \sum_{i=1}^r g_i f_i + g_0 (ft - 1)$ where $g_i\in k[x_1,\dots , x_n, t]$ (*).

Let $N$ be such that $t^N$ is the highest power of $t$ appearing among the $g_i$ for $0\leq i \leq r$. Now multiplying (*) above by $f^N$ yields $f^N = \sum_{i=1}^r G_i(x_1,\dots, x_n, ft) f_i + G_0(x_1,\dots,x_n,ft)(ft-1)$ where we define $G_i = f^N g_i$. This equation is valid in $k[x_1,\dots,x_n, t]$. Consider its reduction in the ring $k[x_1,\dots,x_n,t]/(ft - 1)$. We have the congruence $f^N\equiv \sum_{i=1}^r h_i (x_1,\dots,x_n) f_i \ \textrm{mod}\ (ft-1)$ where $h_i = G_i(x_1,\dots,x_n,1)$.

Now consider the map $\phi:k[x_1,\dots, x_n]\rightarrowtail k[x_1,\dots, x_n,t]\xrightarrow{\pi} k[x_1,\dots, x_n,t]/(ft-1)$. Certainly nothing in the image of the injection can possibly be in the ideal $(ft - 1)$, not having any $t$ dependence. Hence $\phi$ must be injective. But then we see that $f^N = \sum_{i=1}^r h_i(x_1,\dots, x_n) f_i$ holds in the ring $k[\mathbb{A}^n]$. Recalling that the $f_i$ generate $J$ gives the result. $\blacksquare$
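In the smallest interesting case the trick is easy to watch in action. The sketch below uses SymPy’s Gröbner basis routines (SymPy is an assumption of mine, not something the proof relies on), taking $J=(x^2)$ and $f=x$:

```python
# Sketch of the Rabinowitsch trick for J = (x^2) in k[x], f = x.
from sympy import symbols, groebner, reduced

x, t = symbols('x t')

# f = x vanishes on V(J) = {0}, though x is not in J itself.
# Adjoin t and form J_f = (J, f*t - 1).
J_f = [x**2, x * t - 1]

# V(J_f) is empty, so by 4.1 the ideal is the whole ring:
# its reduced Groebner basis is just {1}.
G = groebner(J_f, x, t, order='lex')
assert G.exprs == [1]

# And indeed some power of f lies in J: here f^2 = x^2 is in (x^2).
_, r = reduced(x**2, [x**2], x)
assert r == 0
print("Groebner basis of J_f:", G.exprs)
```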

We shall devote the rest of this post to establishing 4.7. To do so we’ll need a number of lemmas. You might be unable to see the wood for the trees! If so, you can safely skim over much of this. The important exception is Noether normalisation, which we’ll come to later. I’ll link the ideas of our lemmas to geometrical concepts at our next meeting.

Definition 4.8 Let $A,B$ be rings with $B \subset A$. Let $a\in A$. We say that $a$ is integral over $B$ if $a$ is the root of some monic polynomial with coefficients in $B$. That is to say $\exists b_i \in B$ s.t. $a^n + b_{n-1}a^{n-1} + \dots + b_0 = 0$. If every $a \in A$ is integral over $B$ we say that $A$ is integral over $B$, or that $A$ is an integral extension of $B$.

Let’s note some obvious facts. Firstly we can immediately talk about $A$ being integral over $B$ when $A,B$ are algebras with $B$ a subalgebra of $A$. Remember an algebra is still a ring! It’s rather pedantic to stress this now, but hopefully it’ll prevent confusion if I mix my terminology later. Secondly observe that when $A$ and $B$ are fields “integral over” means exactly the same as “algebraic over”.

We’ll begin by proving some results that will be of use in both our approaches. We’ll see that there’s a subtle interplay between finite $k$-algebras, integral extensions and fields.

Lemma 4.9 Let $F$ be a field and $R\subset F$ a subring. Suppose $F$ is an integral extension of $R$. Then $R$ is itself a field.

Proof Let $0 \neq r \in R$. Then certainly $r \in F$ so $r^{-1} \in F$ since $F$ is a field. Now $r^{-1}$ is integral over $R$ so satisfies an equation $r^{-n} = b_{n-1} r^{-(n-1)} +\dots + b_0$ with $b_i \in R$. But now multiplying through by $r^{n-1}$ yields $r^{-1} = b_{n-1} + \dots + b_0 r^{n-1} \in R$. $\blacksquare$

Note that this isn’t obvious a priori. The property that an extension is integral contains sufficient information to percolate the property of inverses down to the base ring.

Lemma 4.10 If $A$ is a finite $B$-algebra then $A$ is integral over $B$.

Proof Write $A = Ba_1 + \dots +Ba_n$. Let $x \in A$. We want to prove that $x$ satisfies some equation $x^n + b_{n-1}x^{n-1} + \dots + b_0 = 0$ with $b_i\in B$. We’ll do so by appealing to our knowledge about determinants. For each $a_i$ we may clearly write $xa_i = \sum_{j=1}^{n} b_{ij}a_j$ for some $b_{ij} \in B$.

Writing $\vec{a} = (a_1, \dots, a_n)$ and defining the matrix $(\beta)_{ij} = b_{ij}$ we can express our equations as $\beta \vec{a} = x\vec{a}$. We recognise this as an eigenvalue problem. In particular $x$ satisfies the characteristic polynomial of $\beta$, a monic polynomial of degree $n$ with coefficients in $B$. But this is precisely what we wanted to show. $\blacksquare$
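A concrete instance may help: take $B=\mathbb{Z}$, $A=\mathbb{Z}[\sqrt{2}]$ with basis $a_1=1$, $a_2=\sqrt{2}$, and $x=\sqrt{2}$. The following pure-Python sketch (my own example, not from the proof) builds the matrix $\beta$ and checks that $x$ satisfies its characteristic polynomial:

```python
# The determinant argument of 4.10, made concrete for x = sqrt(2)
# in A = Z[sqrt(2)], a finite Z-algebra with basis (1, sqrt(2)).
import math

# Multiplication by x in the basis (1, sqrt2):
#   x * 1     = 0*1 + 1*sqrt2
#   x * sqrt2 = 2*1 + 0*sqrt2
beta = [[0, 1],
        [2, 0]]

# Characteristic polynomial of a 2x2 matrix: z^2 - tr(beta) z + det(beta).
trace = beta[0][0] + beta[1][1]                          # 0
det = beta[0][0] * beta[1][1] - beta[0][1] * beta[1][0]  # -2

def char_poly(z):
    return z * z - trace * z + det

# x = sqrt(2) satisfies z^2 - 2 = 0, a monic equation with coefficients
# in B = Z, so x is integral over Z.
x = math.sqrt(2)
assert abs(char_poly(x)) < 1e-12
print("char poly at sqrt(2):", char_poly(x))
```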

Corollary 4.11 Let $A$ be a field and $B\subset A$ a subring. If $A$ is a finite $B$-algebra then $B$ is itself a field.

Proof Immediate from 4.9 and 4.10. $\blacksquare$

We now focus our attention on Zariski’s proof of the Nullstellensatz. I take as a source Daniel Grayson’s excellent exposition.

Lemma 4.12 Let $R$ be a ring and $F$ an $R$-algebra generated by $x \in F$. Suppose further that $F$ is a field. Then $\exists s \in R$ s.t. $S = R[s^{-1}]$ is a field. Moreover $x$ is algebraic over $S$.

Proof Let $R'$ be the fraction field of $R$. Now recall that $x$ is algebraic over $R'$ iff $R'[x] = R'(x)$. Thus $x$ is algebraic over $R'$ iff $R'[x]$ is a field. So certainly our $x$ is algebraic over $R'$, for we are given that $F$ is a field. Let $x^n + f_{n-1}x^{n-1} + \dots + f_0$ be the minimal polynomial of $x$.

Now define $s\in R$ to be the common denominator of the $f_i$, so that $f_0,\dots, f_{n-1} \in R[s^{-1}] = S$. Now $x$ is integral over $S$, so $F = S[x]$ is an integral extension of $S$ by 4.10. But then by 4.9 $S$ is a field, and $x$ is algebraic over it. $\blacksquare$

Observe that this result is extremely close to 4.7. Indeed if we take $R$ to be a field we have $S = R$ in 4.12. The lemma then says that $R[x]$ is algebraic as a field extension of $R$. Morally this proof mostly just used definitions. The only nontrivial fact was the relationship between $R'(x)$ and $R'[x]$. Even this is not hard to show rigorously from first principles, and I leave it as an exercise for the reader.

We’ll now attempt to generalise 4.12 to $R[x_1,\dots,x_n]$. The argument is essentially inductive, though quite laborious. 4.7 will be immediate once we have succeeded.

Lemma 4.13 Let $R = F[x]$ be a polynomial ring over a field $F$. Let $u\in R$. Then $R[u^{-1}]$ is not a field.

Proof By Euclid’s argument, $R$ has infinitely many prime elements. Let $p$ be a prime not dividing $u$. Suppose $\exists q \in R[u^{-1}]$ s.t. $qp = 1$. Then $q = f(u^{-1})$ where $f$ is a polynomial of degree $n$ with coefficients in $R$. Hence in particular $u^n = u^n f(u^{-1}) p$ holds in $R$, for $u^n f(u^{-1}) \in R$. Thus $p \mid u^n$, but $p$ is prime so $p \mid u$. This is a contradiction. $\blacksquare$

Corollary 4.14 Let $K$ be a field, $F\subset K$ a subfield, and $x \in K$. Let $R = F[x]$. Suppose $\exists u\in R$ s.t. $R[u^{-1}] = K$. Then $x$ is algebraic over $F$. Moreover $R = K$.

Proof Suppose $x$ were transcendental over $F$. Then $R=F[x]$ would be a polynomial ring, so by 4.13 $R[u^{-1}]$ couldn’t be a field. Hence $x$ is algebraic over $F$, so $R$ is a field. Hence $R=R[u^{-1}]=K$. $\blacksquare$

The following fairly abstract theorem is the key to unlocking the Nullstellensatz. It’s essentially a slight extension of 4.14, applying 4.12 in the process. I’d recommend skipping the proof first time, focussing instead on how it’s useful for the induction of 4.16.

Theorem 4.15 Take $K$ a field, $F \subset K$ a subring, $x \in K$. Let $R = F[x]$. Suppose $\exists u\in R$ s.t. $R[u^{-1}] = K$. Then $\exists 0\neq s \in F$ s.t. $F[s^{-1}]$ is a field. Moreover $F[s^{-1}][x] = K$ and $x$ is algebraic over $F[s^{-1}]$.

Proof Let $L=\textrm{Frac}(F)$. Now by 4.14 we can immediately say that $L[x]=K$, with $x$ algebraic over $L$. Now we seek our element $s$ with the desired properties. Looking back at 4.12, we might expect it to be useful. But to use 4.12 for our purposes we’ll need to apply it to some $F' = F[t^{-1}]$ with $F'[x] = K$, where $t \in F$.

Suppose we’ve found such a $t$. Then 4.12 gives us $s' \in F'$ s.t. $F'[s'^{-1}]$ is a field with $x$ algebraic over it. But now $s' = qt^{-m}$ for some $q \in F, \ m \in \mathbb{N}$. Now $F'[s'^{-1}]=F[t^{-1}][s'^{-1}]=F[(qt)^{-1}]$, so setting $s=qt$ completes the proof. (You might want to think about that last equality for a second. It’s perhaps not immediately obvious).

So all we need to do is find $t$. We do this using our first observation in the proof. Observe that $u^{-1}\in K=L[x]$ so we can write $u^{-1}=l_0+\dots +l_{n-1}x^{n-1}$, $l_i \in L$. Now let $t \in F$ be a common denominator for all the $l_i$. Then $u^{-1} \in F'=F[t^{-1}]$ so $F'[x]=K$ as required. $\blacksquare$

Corollary 4.16 Let $k$ be a ring and $A$ a field, finitely generated as a $k$-algebra by $x_1,\dots,x_n$. Then $\exists 0\neq s\in k$ s.t. $k[s^{-1}]$ is a field, with $A$ a finite algebraic extension of $k[s^{-1}]$. Trivially if $k$ is a field, then $A$ is algebraic over $k$, establishing 4.7.

Proof Apply Theorem 4.15 with $F=k[x_1,\dots,x_{n-1}]$, $x=x_n$, $u=1$ to get $s'\in F$ s.t. $A' = k[x_1,\dots,x_{n-1}][s'^{-1}]$ is a field with $x_n$ algebraic over it. But now apply 4.15 again with $F=k[x_1,\dots,x_{n-2}]$, $x=x_{n-1}$, $u = s'$ to deduce that $A''=k[x_1,\dots, x_{n-2}][s''^{-1}]$ is a field, with $A'$ algebraic over $A''$, for some $s'' \in F$. Applying the theorem a further $(n-2)$ times gives the result. $\blacksquare$

This proof of the Nullstellensatz is pleasingly direct and algebraic. However it has taken us a long way away from the geometric content of the subject. Moreover 4.13-4.15 are pretty arcane in the current setting. (I’m not sure whether they become more meaningful with a better knowledge of the subject. Do comment if you happen to know)!

Our second proof sticks closer to the geometric roots. We’ll introduce an important idea called Noether Normalisation along the way. For that you’ll have to come back next time!

# Algebra, Geometry and Topology: An Excellent Cocktail

Yes and I’ll have another one of those please waiter. One shot Geometry, topped up with Algebra and then a squeeze of Topology. Shaken, not stirred.

Okay, I admit that was both clichéd and contrived. But nonetheless it does accurately sum up the content of this post. We’ll shortly see that studying affine varieties on their own is like having a straight shot of gin – a little unpleasant, somewhat wasteful, and not an experience you’d be keen to repeat.

Part of the problem is the large number of affine varieties out there! We took a look at some last time, but it’s useful to have just a couple more examples. An affine plane curve is the zero set of a single polynomial in $\mathbb{A}^2$. These crop up all the time in maths and there’s a lot of them. Go onto Wolfram Alpha and type plot f(x,y) = 0, replacing the term f(x,y) with any polynomial you wish. Here are a few that look nice

There’s a more general notion than an affine plane curve that works in $\mathbb{A}^n$. We say a hypersurface is the zero set of a single polynomial in $\mathbb{A}^n$. The cone in $\mathbb{R}^3$ that we saw last time is a good example of a hypersurface. Finally we say a hyperplane is the zero set of a single polynomial of degree $1$ in $\mathbb{A}^n$.

Hopefully all that blathering has convinced you that there really are a lot of varieties, and so it looks like it’s going to be hard to say anything general about them. Indeed we could look at each one individually, study it hard and come up with some conclusions. But to do this for every single variety would be madness!

We could also try to group them into different types, then analyse them geometrically. This way is a bit more efficient, and indeed was the method of the Ancients when they learnt about conic sections. But it is predictably difficult to generalise this to higher dimensions. Moreover, most geometrical groupings are just the tip of the iceberg!

What with all this negativity, I imagine that a shot of gin sounds quite appealing now. But bear with me one second, and I’ll turn it into a Long Island Iced Tea! By broadening our horizons a bit with algebraic and topological ideas, we’ll see that all is not lost. In fact there are deep connections that make our (mathematical) life much easier and richer, thank goodness.

First though, I must come good on my promise to tell you about some subsets of $\mathbb{C}^n$ that aren’t algebraic varieties. A simple observation allows us to come up with a huge class of such subsets. Recall that polynomials are continuous functions from $\mathbb{C}^n$ to $\mathbb{C}$, and therefore their zero sets must be closed in the Euclidean topology. Hence in particular, no open ball in $\mathbb{C}^n$ can be thought of as a variety. (If you didn’t understand this, it’s probably time to brush up on your topology).

There are two further ‘obvious’ classes. Firstly graphs of transcendental functions are not algebraic varieties. For example the zero set of the function $f(x,y) = e^{xy}-x^2$ is not an affine variety. Secondly the closed square $\{(x,y)\in \mathbb{C}^2:|x|,|y|\leq 1\}$ is an example of a closed set which is not an affine variety. This is because it clearly contains interior points, while no proper affine variety in $\mathbb{C}^2$ can contain such points. I’m not entirely sure at present why this is, so I’ve asked on math.stackexchange for a clarification!

How does algebra come into the mix then? To see that, we’ll need to recall a definition about a particular type of ring.

Definition 2.1 A Noetherian ring is a ring which satisfies the ascending chain condition on ideals. In other words given any chain $I_1 \subseteq I_2 \subseteq \dots \ \exists n$ s.t. $I_{n+k}=I_n$ for all $k\in\mathbb{N}$.

It’s easy to see that all fields are trivially Noetherian, for the only ideals in $k$ are $0$ and $k$ itself. Moreover we have the following theorem due to Hilbert, which I won’t prove. You can find the (quite nifty) proof here.

Theorem 2.2 (Hilbert Basis) Let $N$ be Noetherian. Then $N[x_1]$ is Noetherian also, and by induction so is $N[x_1,\dots,x_n]$ for any positive integer $n$.

This means that our polynomial rings $k[\mathbb{A}^n]$ will always be Noetherian. In particular, we can write any ideal $I\subset k[\mathbb{A}^n]$ as $I=(f_1, \dots, f_r)$ for some finite $r$, using the ascending chain condition. Why is this useful? For that we’ll need a lemma.
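You can watch this finite generation happen by machine. A small sketch assuming SymPy is available (my illustration, not part of the argument): a redundant generating set for an ideal in $k[x]$ collapses to a single generator under a reduced Gröbner basis computation.

```python
# The ideal (x^2, x^3, x^5) in k[x] is given by a redundant generating
# set; its reduced Groebner basis exposes the single generator x^2.
from sympy import symbols, groebner

x = symbols('x')

G = groebner([x**2, x**3, x**5], x, order='lex')
assert G.exprs == [x**2]
print("reduced generating set:", G.exprs)
```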

Lemma 2.3 Let $Y$ be an affine variety, so $Y=V(T)$ for some $T\subset k[\mathbb{A}^n]$. Let $J=(T)$, the ideal generated by $T$. Then $Y=V(J)$.

Proof By definition $T\subset J$ so $V(J)\subset V(T)$. We now need to show the reverse inclusion. For any $g\in J$ there exist polynomials $t_1,\dots, t_n$ in $T$ and $q_1,\dots,q_n$ in $k[\mathbb{A}^n]$ s.t. $g=\sum q_i t_i$. Hence if $p\in V(T)$ then $t_i(p)=0 \ \forall i$, so $g(p)=\sum q_i(p)t_i(p)=0$ and thus $p\in V(J)$. $\blacksquare$

Let’s put all these ideas together. After a bit of thought, we see that every affine variety $Y$ can be written as the zero set of a finite number of polynomials $t_1, \dots,t_n$. If you don’t get this straight away look back carefully at the theorem and the lemma. Can you see how to marry their conclusions to get this fact?

This is an important and already somewhat surprising result. If you give me any subset of $\mathbb{A}^n$ obtained as the solution set of a (possibly infinite) number of polynomial equations, I can always find a finite number of equations whose solutions give your geometrical shape! (At least in theory I can – doing so in practice is not always easy).

You can already see that a tiny bit of algebra has sweetened the cocktail! We’ve been able to deduce a fact about every affine variety with relative ease. Let’s pursue this link with algebra and see where it takes us.

Definition 2.4 For any subset $X \subset \mathbb{A}^n$ we say the ideal of $X$ is the set $I(X):=\{f \in k[\mathbb{A}^n] : f(x)=0\forall x\in X\}$.

In other words the ideal of $X$ is all the polynomials which vanish on the set $X$. A trivial example is of course $I(\mathbb{A}^n)=(0)$. Try to think of some other obvious examples before we move on.

Let’s recap. We’ve now defined two maps $V: \{\textrm{ideals in }k[\mathbb{A}^n]\}\rightarrow \{\textrm{affine varieties in }\mathbb{A}^n\}$ and $I:\{\textrm{subsets of }\mathbb{A}^n\}\rightarrow \{\textrm{ideals in }k[\mathbb{A}^n]\}$. Intuitively these maps are somehow ‘opposite’ to each other. We’d like to be able to formalise that mathematically. More specifically we want to find certain classes of affine varieties and ideals where $V$ and $I$ are mutually inverse bijections.

Why did I say certain classes? Well, clearly it’s not the case that $V$ and $I$ are bijections on their domains of definition. Indeed $V(x^n)=V(x)$, but $(x)\neq(x^n)$, so $V$ isn’t injective. Furthermore working in $\mathbb{A}^1_{\mathbb{C}}$ we see that $I(\mathbb{Z})=(0)=I(\mathbb{A}^1)$ so $I$ is not injective. Finally for $n\geq 2, \ (x^n)\notin \textrm{Im}(I)$, so $I$ is not surjective.
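These failures are easy to verify by machine. A short sketch, assuming SymPy is available: $(x)$ and $(x^2)$ have the same zero set even though $x\notin(x^2)$.

```python
# V is not injective: (x) and (x^2) are different ideals cutting out
# the same variety {0}.
from sympy import symbols, solveset, reduced

x = symbols('x')

# Same zero set...
assert solveset(x, x) == solveset(x**2, x)

# ...different ideals: x is not a member of (x^2). Since {x^2} is
# already a Groebner basis, a nonzero remainder certifies non-membership.
_, r = reduced(x, [x**2], x)
assert r == x
print("V(x) = V(x^2) =", solveset(x**2, x), "but x mod (x^2) =", r)
```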

It’ll turn out in the next post that a special type of ideal called a radical ideal will play an important role. To help motivate its definition, think of some more examples where $V$ fails to be injective. Can you spot a pattern? We’ll return to this next time.

Now that we’ve got our maps $V$ and $I$ it’s instructive to examine their properties. This will give us a feeling for the basic manipulations of algebraic geometry. No need to read it very thoroughly, just skim it to pick up some of the ideas.

Lemma 2.5 The maps $I$ and $V$ satisfy the following, where $J_i$ ideals and $X_i$ subsets of $\mathbb{A}^n$:
(1) $V(0)=\mathbb{A}^n,\ V(k[\mathbb{A}^n])=\emptyset$
(2) $V(J_1)\cup V(J_2)=V(J_1\cap J_2)$
(3) $\bigcap_{\lambda\in\Lambda}V(J_{\lambda})=V(\sum_{\lambda\in\Lambda}J_{\lambda})$
(4) $J_1\subset J_2 \Rightarrow V(J_2) \subset V(J_1)$
(5) $X_1\subset X_2 \Rightarrow I(X_2)\subset I(X_1)$
(6) $J_1 \subset I(V(J_1))$
(7) $X_1 \subset V(I(X_1))$ with equality iff $X_1$ is an affine variety

Proof We prove each in turn.
(1) Trivial.
(2) We first prove “$\subset$”. Let $q\in V(J_1)\cup V(J_2)$. Wlog assume $q \in V(J_1)$. Then $f(q)=0 \ \forall f \in J_1$. So certainly $f(q)=0 \ \forall f\in J_1\cap J_2$, which is what we needed to prove. Now we show “$\supset$”. Let $q\not\in {V(J_1)\cup V(J_2)}$. Then $q \not\in V(J_1)$ and $q \not\in V(J_2)$. So there exists $f \in J_1, \ g\in J_2$ s.t. $f(q) \neq 0,\ g(q)\neq 0$. Hence $fg(q)\neq 0$. But $fg\in J_1\cap J_2$ so $q \not\in {V(J_1\cap J_2)}$.
(3) “$\subset$” is trivial. For “$\supset$” note that $0 \in J_{\lambda}\ \forall \lambda$, and then it’s trivial.
(4) Trivial.
(5) Trivial.
(6) If $p \in J_1$ then $p(q)=0\ \forall q \in V(J_1)$ by definition, so $p \in I(V(J_1))$.
(7) The relation $X_1 \subset V(I(X_1))$ follows from definitions exactly as (6) did. For the “if” statement, suppose $X_1=V(J_1)$ for some ideal $J_1$. Then by (6) $J_1 \subset I(V(J_1))$ so by (4) $V(I(X_1))=V(I(V(J_1))) \subset V(J_1)=X_1$. Conversely, suppose $V(I(X_1))= X_1$. Then $X_1$ is the zero set of $I(X_1)$, so an affine variety by definition. $\blacksquare$
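Property (2) can be spot-checked numerically. In the pure-Python sketch below (my illustration, not part of the proof) I take $J_1=(x)$ and $J_2=(y)$ in $k[x,y]$, where $J_1\cap J_2=(xy)$, and compare both sides on a grid of integer points:

```python
# Spot check of V(J_1) ∪ V(J_2) = V(J_1 ∩ J_2) for J_1 = (x), J_2 = (y):
# here J_1 ∩ J_2 = (xy), whose zero set should be the union of the axes.

def in_V_x(p):      # p is a point (a, b): does x vanish there?
    return p[0] == 0

def in_V_y(p):      # does y vanish at p?
    return p[1] == 0

def in_V_xy(p):     # does the generator xy of J_1 ∩ J_2 vanish at p?
    return p[0] * p[1] == 0

grid = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]
for p in grid:
    assert (in_V_x(p) or in_V_y(p)) == in_V_xy(p)

print("V(x) ∪ V(y) agrees with V(xy) on all", len(grid), "grid points")
```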

That was rather a lot of tedious set up! If you’re starting to get weary with this formalism, I can’t blame you. You may be losing sight of the purpose of all of this. What are these maps $V$ and $I$ and why do we care how they behave? A fair question indeed.

The answer is simple. Our $V,\ I$ bijections will give us a dictionary between algebra and geometry. With minimal effort we can translate problems into an easier language. In particular, we’ll be allowed to use a generous dose of algebra to sweeten the geometric cocktail! You’ll have to wait until next time to see that in all its glory.

Finally, how does topology fit into all of this? Well, Lemma 2.5 (1)-(3) should give you an inkling. Indeed it instantly shows that the following definition makes sense.

Definition 2.6 We define the Zariski topology on $\mathbb{A}^n$ by taking as closed sets all the affine varieties.

In some sense this is the natural topology on $\mathbb{A}^n$ when we are concerned with solving equations. Letting $k=\mathbb{C}$ we can make some comparisons with the usual Euclidean topology.

First note that since every affine variety is closed in the Euclidean topology, every Zariski closed set is Euclidean closed. However we saw in the last post that not all Euclidean closed sets are affine varieties. In fact there are many more Euclidean closed sets than Zariski ones. We say that the Euclidean topology is finer than the Zariski topology. Indeed the Euclidean topology has open balls of arbitrarily small radius. The general Zariski open set is somehow very large, since it’s the complement of a line or surface in $\mathbb{A}^n$.

Next time we’ll prove that for algebraically closed $k$ every nonempty Zariski open set is dense in the Zariski topology, and hence (if $k =\mathbb{C}$) in the Euclidean topology. In particular, no nonempty Zariski open set is bounded in the Euclidean topology. Hence we immediately see that the intersection of two nonempty Zariski open sets of $\mathbb{A}^n$ is never empty. This important observation tells us that the Zariski topology is not Hausdorff. We really are working with a very strange topological space!

And how is this useful? You know what I am going to say. It gives us yet another perspective on the world of affine varieties! Rather than just viewing them as geometrical objects sitting in abstract $\mathbb{A}^n$, we can treat them as topological spaces in their own right. We’ll now be able to use the tools of topology to help us learn things about geometry. And there’s the slice of lemon to garnish the perfect cocktail.

I leave you with this enlightening question I recently stumbled upon. Both the question and the proposed solutions struck me as extremely elegant.

# A Bit of Variety

Time to introduce some real mathematics! Today we’ll be talking about algebraic varieties. Gosh, that already sounds pretty heavy going. Part of the problem with starting algebraic geometry is that none of the nomenclature makes any intuitive sense. So it’s probably worth going on a bit of a historical digression to find out where this term originated.

Back in the 19th century, a good deal of algebraic geometry was done by French mathematicians. So it’s not surprising that much of the terminology of basic algebraic geometry has been borrowed from French. The word variety is one example. In 19th century French, a variété was an umbrella term for a geometrical object in space. A typical example of a 19th century variété would be a manifold, that is a space that looks locally like $\mathbb{R}^n$ everywhere.

As time passed, the word variété caught on in English, despite the fact it seemed linguistically arcane. Mathematicians rarely worry about such things, it seems. As maths became increasingly formalised and rigorous, new terms like manifold and surface were introduced to describe particular types of varieties. By a combination of French stubbornness and historical accident, the word variety eventually came to refer to an abstract class of geometrical ‘things’.

Hopefully the concept of algebraic presents fewer difficulties. As I mentioned in an earlier post, algebra is essentially the study of solutions to (mostly polynomial) equations. So what’s an algebraic variety? You got it – it’s a geometrical object which can be represented as a solution of (one or many) polynomial equations.

I’d probably better formalise all that as a definition. But first we need to know what kind of space we are working in. In other words, where do we allow our algebraic varieties to exist? The naive answer is in $n$-dimensional Euclidean space. This is indeed a good suggestion, and yields many informative examples, but it sacrifices too much generality. Instead we’ll work in $n$-dimensional affine space, which I’ll define shortly. Keep the idea of $n$-dimensional Euclidean space in mind as an intuition, though!

Definition 1.1 Let $k$ be a field. We say affine space of dimension $n$ over $k$ is the set $\mathbb{A}^n:=k^n=\{(a_1,\dots,a_n):a_i\in k\}$.

You might think this is a bit of an odd notation. After all it takes more time to write $\mathbb{A}^n$ than $k^n$ and they are the same as sets by definition. However there is a subtlety. Mathematicians often think of $k^n$ as being endowed with a natural vector space structure, with an origin and addition operation. Affine space $\mathbb{A}^n$ is to be regarded merely as a geometrical blank canvas, with no associated operations or distinguished points. In fact we’ll see later that the right way to think about $\mathbb{A}^n$ is as a topological space.

Since this post has an historical bent, I’ll digress a little to explain why we use the word affine. The word has its roots in the Latin affinis, meaning ‘related’. Mathematical usage seems to have been introduced by Euler to describe a type of geometry that studies how geometric objects are ‘related’ by slanting and scaling. Absolute notions of length and angle cease to make sense in this setting. Rather, affine geometry is concerned more with the concepts of parallelism and ratios of lengths.

This might all seem a bit abstract, so let me put it another way. Affine geometry is the study of shapes which remain unchanged when they are transformed in such a way as to preserve straight lines. These so called affine transformations crop up all the time – translation, expansion, rotation are all examples we meet in everyday life. Affine geometry tries to make sense of all these in one geometrical space, affine space.

Definition 1.2 Let $T \subset k[x_1,\dots,x_n]$ be a subset of the polynomial ring. We define the zero locus of $T$ to be the set $V(T):=\{P\in \mathbb{A}^n : f(P) = 0 \ \forall f \in T\}$.

Sorry if that definition was a bit out of the blue. You may have to get used to me moving fast as this blog evolves. Remember that the polynomial ring is just the set of all polynomials in the variables $x_1,\dots,x_n$, endowed with the obvious addition and multiplication operations. In plain English this definition is saying, ‘the zero locus of a set of polynomials is the set of all points that make all the polynomials zero’. Sensible, huh?

Let’s do some examples. If we fix $k=\mathbb{R}$ and work in $\mathbb{A}^2$ then $V(x^2+y^2-1)$ is a circle. (Careful: $V(x^2+y^2)$ is just the origin, since over $\mathbb{R}$ a sum of squares vanishes only when every term does). Can you see what $V(y-x^2)$ is? (Hint: the answer is in an earlier post)! Now if we work in $\mathbb{A}^3$ we can get some familiar shapes. $V(x^2+y^2-z^2)$ is a cone. $V(x^2-y, x^3 - z)$ is a curve called the twisted cubic, pictured below.
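As a quick sanity check on these examples, here’s a small Python sketch (the helper `in_zero_locus` is a made-up name, purely for illustration) that tests membership in a zero locus by evaluating each polynomial in the set at a point, exactly as Definition 1.2 prescribes.

```python
def in_zero_locus(polys, point):
    """A point P lies in V(T) exactly when every polynomial in T vanishes at P.

    polys: a list of functions taking a coordinate tuple to a number.
    point: a tuple of coordinates in A^n.
    """
    return all(f(point) == 0 for f in polys)

# The circle V(x^2 + y^2 - 1) in A^2 over the reals
circle = [lambda p: p[0]**2 + p[1]**2 - 1]
print(in_zero_locus(circle, (1, 0)))   # (1, 0) lies on the circle
print(in_zero_locus(circle, (1, 1)))   # (1, 1) does not

# The twisted cubic V(x^2 - y, x^3 - z) in A^3 contains every (t, t^2, t^3)
cubic = [lambda p: p[0]**2 - p[1], lambda p: p[0]**3 - p[2]]
print(all(in_zero_locus(cubic, (t, t**2, t**3)) for t in range(-5, 6)))
```

Notice the twisted cubic check: the points $(t, t^2, t^3)$ parametrise the curve, and each one kills both defining polynomials simultaneously.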

Have some fun trying to think up some more wacky and wonderful shapes that can be represented as the zero locus of a subset of $k[\mathbb{A}^n]$. (Note: we’ll sometimes use the terminology $k[\mathbb{A}^n ]= k[x_1,\dots ,x_n]$ as I have done here). Do leave me a comment if you come up with something fun!

Finally we’re ready to say what an algebraic variety is. Here we go.

Definition 1.3 A subset $Y \subset \mathbb{A}^n$ is called an affine algebraic variety if there exists some subset $T\subset k[x_1,\dots,x_n]$ of the polynomial ring such that $Y = V(T)$.

Read that a couple of times and make sure you understand it. This really is the bedrock on which the subject stands. In plain English this merely says that ‘an affine algebraic variety is a geometrical shape which can be represented as the zero locus of some polynomials’. That’s exactly what we said it should mean above. (Note: I’ll often call affine algebraic varieties just affine varieties for short).

Right, that’s quite enough for one night. Next time I’ll talk about what Hilbert had to say about affine varieties. We’ll also start to see a surprisingly deep connection between algebra and geometry. Oh and if someone reminds me I’ll throw in an amusing video, like this marvellous CassetteBoy offering.

One more thing to do before you go – think about what kinds of shapes aren’t affine varieties.  Answers in the comments please!  Looking for such examples is something mathematicians like to do. It’ll hopefully give you a better understanding of what the concepts really mean! I’ll touch on this properly next time.

Apologies that this post is a little late – I have been struggling with some technical issues! I hope now they are sorted, thanks to the kindness of the IT Department at New College, Oxford.

# A Slice of Algebra

and a nice cup of tea. I always find that helps. Before we get down to business, you might want to put this delightful recording on. It’s always nice to have a bit of background music, and Strauss just seems to fit with Algebra somehow.

A broad definition of Algebra could be the study of equations and their solutions [1]. This is perhaps the type of algebra we’re all familiar with from school. Here’s a typical problem:

Find $x\in \mathbb{R}$ given that $x^2-2x+1=0$

That was easy, of course. Let’s try another one:

Find all $x,y \in \mathbb{R}$ such that $y-x^2 = 0$

Perhaps you had to think for a moment before realising that this just defines a parabola in 2D space, pictured below.

These examples illustrate that the solutions to equations can come in the form of points or curves, and it’s not hard to see that solutions to equations in sufficiently many variables can define objects of any dimension you like. For example the equation $z=0$ defines a plane in 3D space.

So we can easily see that Algebra gives rise to geometrical structures of the type we discussed in the last post. It should now seem natural to study geometrical structures from an algebraic point of view. Voila – we have the motivation for Algebraic Geometry.

There’s nothing to restrict us to studying the solutions (often referred to as zeroes) of a single equation. In fact many interesting and useful geometric constructions arise as the simultaneous zeroes of several equations. Can you see two equations in (some or all of) the variables $x,y,z$ whose simultaneous solutions give rise to the $y$-axis in 3D [2]?

The technical terminology for the collection of simultaneous zeroes of several equations is an algebraic set. This is the fundamental object of study that we’ll be focusing on.
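The simultaneous-zero idea can be sketched in a few lines of Python. This toy check (function and variable names are purely illustrative) verifies the answer given in footnote [2]: the $y$-axis in 3D is exactly the set of points killing both $x=0$ and $z=0$ at once.

```python
def on_y_axis(p):
    """A point (x, y, z) is a simultaneous zero of x and z
    precisely when it lies on the y-axis."""
    x, y, z = p
    return x == 0 and z == 0

# Every point of the form (0, t, 0) satisfies both equations...
points_on_axis = [(0, t, 0) for t in range(-3, 4)]
print(all(on_y_axis(p) for p in points_on_axis))

# ...while a point off the axis fails at least one of them.
print(on_y_axis((1, 2, 0)))   # x = 1, so the first equation fails
```

One equation in 3D generically cuts out a surface; imposing a second equation cuts the dimension down again, leaving a curve. That is the pattern to keep in mind.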

Here we reach a slight technical impasse. For what follows I’ll assume a familiarity with elementary abstract algebra as outlined on the Background page. This may be viewed as a technical toolkit for our forthcoming studies. I’ll also assume some very basic knowledge about Topology, though not much more than can be gleaned by a thorough reading of the Wikipedia page. If you’ve never come across abstract algebra before, now is the time to do some serious thinking! I can’t promise it’ll be easy, and it might take a couple of days to get your head around the concepts, but I promise you it’s worth it. I’ll be happy to answer any questions commented on the Background page, and will flesh out the currently sparse details in the near future.

Good luck!

[1] We only ever consider polynomial equations, which are those of the form $f(x_1,\dots,x_n)=0$ where $f(x_1,\dots,x_n)$ is a finite sum of products of nonnegative integer powers of the variables $x_1,\dots, x_n$. Thus $f(x,y)=x^2+y^2-1=0$, the circle, is admissible for study but $f(x,y)=x^y=0$ is not. It turns out that not much is lost by restricting our study to polynomials only. In some sense any mathematically interesting curve can be approximated arbitrarily closely by the set of solutions to polynomial equations. (This entirely depends on your definition of mathematically interesting though)!

[2] The equations are of course $x=0$ and $z=0$. Geometrically this is true since the $y$-axis is the intersection of the two planes defined by $x=0$ and $z=0$.

# Algebraic Geometry – Sorry, What?

Okay it’s a bit of a mouthful. Let’s break it down a bit. You probably remember geometry from school. Drawing triangles and calculating angles. Maybe even a few circle theorems. Pretty arcane stuff, you probably agree. Turns out that this is just one tiny area of what mathematicians call Geometry.

Roughly speaking, Geometry is the study of any kinds of curves, shapes and surfaces you can imagine. We naturally think of curves as “1-dimensional objects” you can draw on a “2-dimensional” piece of paper. Similarly we think of surfaces as “2-dimensional objects” that exist in “3-dimensional space”. A piece of paper is an example of a surface, as is the surface of a beach ball. We can also think of “3-dimensional objects” like a solid snooker ball. In general, Geometry answers questions about what properties these things have.

Now you may be thinking that this is all a bit pointless. After all we know quite a lot about how a beach ball behaves. There are two caveats however. Firstly, the surface of a beach ball is a very symmetrical object. We want to be able to make conclusions about vastly asymmetrical surfaces, possibly ones that are hard to imagine. These kinds of general observations are useful because then if someone asks us about a specific case (a beach ball with a ring donut stuck onto it, for example) we can tell them its geometrical features with no extra work.

The second pertinent observation is that sometimes we want to know about geometry in more than “3-dimensions”. Hang about, that’s completely pointless, I hear you say. Fair point, but you are forgetting we live in a 4-dimensional universe – 3 space and 1 time dimension. And thanks to Einstein’s General Theory of Relativity we know that gravity bends space. So knowing about how geometry works in 4D is vital for sending men to the moon, or getting accurate GPS signals from satellites.

Now you might be starting to see that Geometry is quite broad, quite useful but also quite hard. After all there doesn’t seem to be much a triangle has in common with the surface of the Earth! Nevertheless we’ll see in the next post that using Algebra we can start to pin down some classes of curves and surfaces that do share some surprisingly strong properties.

As a challenge before the next post, make a Mobius Strip, pictured above, and count how many sides it has. Okay that’s easy, if a little odd if it’s the first time you’ve seen it. You can see that even an easily constructible surface can have some surprises. Try to imagine some other odd surfaces; if you think of anything good then comment it! One famous example is the Klein Bottle. Don’t worry about the technical terminology on the Wikipedia page, just have a look at the pictures. It’s a 2-dimensional surface that can only be “drawn” without crossing itself in 4 dimensions. Looks like we’ll have our work cut out!