# Three Ways with Totally Positive Grassmannians

This week I’m down in Canterbury for a conference focussing on the positive Grassmannian. “What’s that?”, I hear you ask. Roughly speaking, it’s a mysterious geometrical object that seems to crop up all over mathematical physics, from scattering amplitudes to solitons, not to mention quantum groups. More formally we define

$\displaystyle \mathrm{Gr}_{k,n} = \{k\mathrm{-planes}\subset \mathbb{C}^n\}$

We can view this as the space of $k\times n$ matrices modulo a $GL(k)$ action, which has homogeneous “Plücker” coordinates given by the $k \times k$ minors. Of course, these are not coordinates in the true sense, for they are overcomplete: there exist quadratic Plücker relations between the minors. In principle, then, you only need a subset of the Plücker coordinates to describe the whole Grassmannian.

Getting to the positive Grassmannian is easy: you simply enforce that every $k \times k$ minor is positive. Of course, you only need to check this for some subset of the Plücker coordinates, but it’s tricky to determine which ones. In the first talk of the day Lauren Williams showed how you can elegantly extract this information from paths on a graph!
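
For concreteness, here’s a quick numerical sketch (plain Python, with a hand-picked matrix – the choice is mine, purely for illustration) of $\mathrm{Gr}_{2,4}$: the six $2\times 2$ minors satisfy a single quadratic Plücker relation, and for a suitable matrix they are all positive.

```python
from itertools import combinations

def minor(M, cols):
    """The 2x2 minor of a 2x4 matrix M built from the given pair of columns."""
    i, j = cols
    return M[0][i] * M[1][j] - M[0][j] * M[1][i]

# A hand-picked 2x4 matrix representing a point of Gr(2,4)
M = [[1.0, 0.0, -1.0, -2.0],
     [0.0, 1.0,  2.0,  3.0]]

# The six Plücker coordinates p_ij of Gr(2,4)
p = {cols: minor(M, cols) for cols in combinations(range(4), 2)}

# The single quadratic Plücker relation: p12*p34 - p13*p24 + p14*p23 = 0
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
print(rel)                               # 0.0
print(all(v > 0 for v in p.values()))    # True: this point is totally positive
```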

In fact, this graph encodes much more information than that. In particular, it turns out that the positive Grassmannian naturally decomposes into cells (i.e. pieces homeomorphic to open balls). The graph can be used to determine this cell decomposition exactly.

And that’s not all! The same structure crops up in the study of quantum groups. Very loosely, these are algebraic structures that result from introducing non-commutativity in a controlled way. More formally, if you want to quantise a given integrable system, you’ll typically want to promote the coordinate ring of a Poisson-Lie group to a non-commutative algebra. This is exactly the sort of problem that Drinfeld et al. started studying 30 years ago, and the field is very much active today.

The link with the positive Grassmannian comes from defining a quantity called the quantum Grassmannian. The first step is to invoke a quantum plane, that is a $2$-dimensional algebra generated by $a,b$ with the relation that $ab = qba$ for some parameter $q$ different from $1$. The matrices that linearly transform this plane are then constrained in their entries for consistency. There’s a natural way to build these up into higher dimensional quantum matrices. The quantum Grassmannian is constructed exactly as above, but with these new-fangled quantum matrices!
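
To get a feel for the bookkeeping, here’s a toy Python sketch (my own throwaway code, not any standard computer-algebra package) that normal-orders words in the quantum-plane generators, tracking the power of $q$ picked up along the way.

```python
def normal_order(word):
    """Reduce a word in the quantum-plane generators a, b to the form
    q^k * a^m * b^n, using the relation ab = q ba (so ba = q^{-1} ab).
    Returns (k, m, n) where k is the accumulated power of q."""
    letters = list(word)
    k = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(letters) - 1):
            if letters[i] == 'b' and letters[i + 1] == 'a':
                # Each ba -> ab swap costs a factor of q^{-1}
                letters[i], letters[i + 1] = 'a', 'b'
                k -= 1
                swapped = True
    return k, letters.count('a'), letters.count('b')

print(normal_order('ba'))    # (-1, 1, 1): ba = q^{-1} ab
print(normal_order('bbaa'))  # (-4, 2, 2): four swaps needed
```

Every word collapses to a unique $q^k a^m b^n$, which is exactly the sense in which the non-commutativity is “controlled”.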

The theorem goes that the torus action invariant irreducible varieties in the quantum Grassmannian exactly correspond to the cells of the positive Grassmannian. The proof is fairly involved, but the ideas are rather elegant. I think you’ll agree that the final result is mysterious and intriguing!

And we’re not done there. As I’ve mentioned before, positive Grassmannia and their generalizations turn out to compute scattering amplitudes. Alright, at present this only works for planar $\mathcal{N}=4$ super-Yang-Mills. Stop press! Maybe it works for non-planar theories as well. In any case, it’s further evidence that Grassmannia are the future.

From a historical point of view, it’s not surprising that Grassmannia are cropping up right now. In fact, you can chronicle revolutions in theoretical physics according to changes in the variables we use. The calculus revolution of Newton and Leibniz is arguably about understanding properly the limiting behaviour of real numbers. With quantum mechanics came the entry of complex numbers into the game. By the 1970s it had become clear that projectivity was important, and twistor theory was born. And the natural step beyond projective space is the Grassmannian. Viva la revolución!

# Conference Amplitudes 2015 – Don’t Stop Me Now!

All too soon we’ve reached the end of a wonderful conference. Friday morning dawned with a glimpse of perhaps the most impressive calculation of the past twelve months – Higgs production at three loops in QCD. This high precision result is vital for checking our theory against the data mountain produced by the LHC.

It was apt that Professor Falko Dulat’s presentation came at the end of the week. Indeed the astonishing computational achievement he outlined was only possible courtesy of the many mathematical techniques recently developed by the community. Falko illustrated this point rather beautifully with a word cloud.

As amplitudeologists we are blessed with an incredibly broad field. In a matter of minutes conversation can encompass hard experiment and abstract mathematics. The talks this morning were a case in point. Samuel Abreu followed up the QCD computation with research linking abstract algebra, graph theory and physics! More specifically, he introduced a Feynman diagram version of the coproduct structure often employed to describe multiple polylogs.

Dr. HuaXing Zhu got the ball rolling on the final mini-session with a topic close to my heart. As you may know I’m currently interested in soft theorems in gauge theory and gravity. HuaXing and Lance Dixon have made an important contribution in this area by computing the complete $2$-loop leading soft factor in QCD. Maybe unsurprisingly the breakthrough comes off the back of the master integral and differential equation method which has dominated proceedings this week.

Last but by no means least we had an update from the supergravity mafia. In recent years Dr. Tristan Dennen and collaborators have discovered unexpected cancellations in supergravity theories which can’t be explained by symmetry alone. This raises the intriguing question of whether supergravity can play a role in a UV complete quantum theory of gravity.

The methods involved rely heavily on the color-kinematics story. Intriguingly Tristan suggested that the double copy connection between gauge theory and gravity could explain these miraculous results (in which roughly a billion terms combine to give zero)! The renormalizability of Yang-Mills theory could well go some way to taming gravity’s naive high energy tantrums.

There’s still some way to go before bottles of wine change hands. But it was fitting to end proceedings with an incomplete story. For all that we’ve thought hard this week, it is now that the graft really starts. I’m already looking forward to meeting in Stockholm next year. My personal challenge is to ensure that I’m among the speakers!

Particular thanks to all the organisers, and the many masters students, PhDs, postdocs and faculty members at ETH Zurich who made our stay such an enjoyable and productive one!

# Conference Amplitudes 2015 – Air on the Superstring

One of the first pieces of Bach ever recorded was August Wilhelmj’s arrangement of the Orchestral Suite in D major. Today the transcription for violin and piano goes by the moniker Air on the G String. It’s an inspirational and popular work in all its many incarnations, not least this one featuring my favourite cellist Yo-Yo Ma.

This morning we heard the physics version of Bach’s masterpiece. Superstrings are nothing new, of course. But recently they’ve received a reboot courtesy of Dr. David Skinner among others. The ambitwistor string is an infinite tension version which only admits right-moving vibrations! At first the formalism looks a little daunting, until you realise that many calculations follow the well-trodden path of the superstring.

Now superstring amplitudes are quite difficult to compute. So hard, in fact, that Dr. Oliver Schlotterer devoted an entire talk to understanding particular functions that emerge when scattering just $4$ strings at next-to-leading order. Mercifully, the ambitwistor string is far more well-behaved. The resulting amplitudes are rather beautiful and simple. To some extent, you trade off the geometrical aesthetics of the superstring for the algebraic compactness emerging from the ambitwistor approach.

This isn’t the first time that twistors and strings have been combined to produce quantum field theory. The first attempt dates back to 2003 and the work of Edward Witten (of course). Although hugely influential, Witten’s theory was esoteric to say the least! In particular nobody knows how to encode quantum corrections in Witten’s language.

Ambitwistor strings have no such issues! Adding a quantum correction is easy – just put your theory on a donut. But this conceptually simple step threatened a roadblock for the research. Trouble was, nobody actually knew how to evaluate the resulting formulae.

Nobody, that was, until last week! Talented folk at Oxford and Cambridge managed to reduce the donutty problem to the original spherical case. This is an impressive feat – few suspected that quantum corrections could be obtained as easily as a classical computation!

There’s a great deal of hope that this idea can be rigorously extended to higher loops and perhaps even break the deadlock on maximal supergravity calculations at $7$-loop level. The resulting concept of off-shell scattering equations piqued my interest – I’ve set myself a challenge to use them in the next 12 months!

Scattering equations, you say? What are these beasts? For that we need to take a closer look at the form of the ambitwistor string amplitude. It turns out to be a sum over the solutions of the following equations, one for each particle $i$

$\sum_{i\neq j}\frac{s_{ij}}{z_i - z_j}=0$

The $s_{ij}$ are just two particle invariants – encoding things you can measure about the speed and angle of particle scattering. And the $z_i$ are just some bonus variables. You’d never dream of introducing them unless somebody told you to! But yet they’re exactly what’s required for a truly elegant description.
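
As a sanity check, one can solve the simplest case numerically. The following Python sketch (with invariants I’ve made up for illustration) gauge-fixes three of the four punctures, finds the single remaining $z$ by bisection, and confirms that all four equations then vanish simultaneously, courtesy of SL(2) invariance and momentum conservation.

```python
def scattering_eq(i, z, s):
    """Left-hand side of the i-th scattering equation: sum_{j!=i} s_ij/(z_i-z_j)."""
    return sum(s[i][j] / (z[i] - z[j]) for j in range(4) if j != i)

# Massless 4-point Mandelstam invariants, with s + t + u = 0
sv, tv = 2.0, -0.5
uv = -sv - tv
s = [[0, sv, tv, uv],
     [sv, 0, uv, tv],
     [tv, uv, 0, sv],
     [uv, tv, sv, 0]]

# Gauge-fix three punctures (SL(2) invariance) and solve for the last by bisection
z = [0.0, 1.0, None, 3.0]
lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    z[2] = mid
    if scattering_eq(2, z, s) < 0:
        lo = mid
    else:
        hi = mid
z[2] = 0.5 * (lo + hi)

print(z[2])  # 1/3 for this choice of invariants
# All four equations vanish once one does, by SL(2) and momentum conservation
print([scattering_eq(i, z, s) for i in range(4)])
```

For $n$ particles there are $n-3$ independent unknowns and $(n-3)!$ solutions, so at four points the single solution comes out immediately.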

And these scattering equations don’t just crop up in one special theory. Like spies in a Cold War era film, they seem to be everywhere! Dr. Freddy Cachazo alerted us to this surprising fact in a wonderfully engaging talk. We all had a chance to play detective and identify bits of physics from telltale clues! By the end we’d built up an impressive spider’s web of connections, held together by the scattering equations.

Freddy’s talk put me in mind of an interesting leadership concept espoused by the conductor Itay Talgam. Away from his musical responsibilities he’s carved out a niche as a business consultant, teaching politicians, researchers, generals and managers how to elicit maximal productivity and creativity from their colleagues and subordinates. Critical to his philosophy is the concept of keynote listening – sharing ideas in a way that maximises the response of your audience. This elusive quality pervaded Freddy’s presentation.

Following this masterclass was no mean feat, but one amply performed by my colleague Brenda Penante. We were transported to the world of on-shell diagrams – a modern alternative to Feynman’s ubiquitous approach. These diagrams are known to produce the integrand in planar $\mathcal{N}=4$ super-Yang-Mills theory to all orders! What’s more, the answer comes out in an attractive $d \log$ form, ripe for integration to multiple polylogarithms.

Cunningly, I snuck the word planar into the paragraph above. This approximation means that the diagrams can be drawn on a sheet of paper rather than requiring $3$ dimensions. For technical reasons this is equivalent to working in the theory with an infinite number of color charges, not just the usual $3$ we find for the strong force.

Obviously, it would be helpful to move beyond this limit. Brenda explained a decisive step in this direction, providing a mechanism for computing all leading singularities of non-planar amplitudes. By examining specific examples the collaboration uncovered new structure invisible in the planar case.

Technically, they observed that the boundary operation on a reduced graph identified non-trivial singularities which can’t be understood as the vanishing of minors. At present, there’s no proven geometrical picture of these new relations. Amazingly they might emerge from a 1,700-year-old theorem of Pappus!

Bootstraps were back on the agenda to close the session. Dr. Agnese Bissi is a world-expert on conformal field theories. These models have no sense of distance and only know about angles. Not particularly useful, you might think! But they crop up surprisingly often as approximations to realistic physics, both in particle smashing and modelling materials.

Agnese took a refreshingly rigorous approach, walking us through her proof of the reciprocity principle. Until recently this vital tool was little more than an ad hoc assumption, albeit backed up by considerable evidence. Now Agnese has placed it on firmer ground. From here she was able to “soup up” the method. The supercharged variant can compute OPE coefficients as well as dimensions.

Alas, it’s already time for the conference dinner and I haven’t mentioned Dr. Christian Bogner‘s excellent work on the sunrise integral. This charmingly named function is the simplest case where hyperlogarithms are not enough to write down the answer. But don’t just take it from me! You can now hear him deliver his talk by visiting the conference website.

Conversations

I’m very pleased to have chatted with Professor Rutger Boels (on the Lagrangian origin of Yang-Mills soft theorems and concerning the universality of subleading collinear behaviour) and Tim Olson (about determining the relative sign between on-shell diagrams to ensure cancellation of spurious poles).

Note: this post was originally written on Thursday 9th July but remained unpublished. I blame the magnificent food, wine and bonhomie at the conference dinner!

# Conference Amplitudes 2015 – Integrability, Colorful Duality and Hiking

The middle day of a conference. So often this is the graveyard slot – when initial hysteria has waned and the final furlong seems far off. The organisers should take great credit that today was, if anything, the most engaging thus far! Even the weather was well-scheduled, breaking overnight to provide us with more conducive working conditions.

Integrability was our wake-up call this morning. I mentioned this hot topic a while back. Effectively it’s an umbrella term for techniques that give you exact answers. For amplitudes folk, this is the stuff of dreams. Up until recently the best we could achieve was an expansion in small or large parameters!

So what’s new? Dr. Amit Sever brought us up to date on developments at the Perimeter Institute, where the world’s most brilliant minds have found a way to map certain scattering amplitudes in $4$ dimensions onto a $2$ dimensional model which can be exactly solved. More technically, they’ve created a flux tube representation for planar amplitudes in $\mathcal{N}=4$ super-Yang-Mills, which can then be solved using spin chain methods.

The upshot is that they’ve calculated $6$ particle scattering amplitudes at all values of the (’t Hooft) coupling. Their method makes no mention of Feynman diagrams or string theory – the old-fashioned ways of computing this amplitude for weak and strong coupling respectively. Nevertheless the answer matches exactly known results in both of these regimes.

There’s more! By putting their computation under the microscope they’ve unearthed unexpected new physics. Surprisingly the multiparticle poles familiar from perturbative quantum field theory disappear. Doing the full calculation smoothes out divergent behaviour in each perturbative term. This is perhaps rather counterintuitive, given that we usually think of higher-loop amplitudes as progressively less well-behaved. It reminds me somewhat of Regge theory, in which the UV behaviour of a tower of higher spin states is much better than that of each one individually.

The smorgasbord of progress continued in Mattias Wilhelm’s talk. The Humboldt group have a completely orthogonal approach linking integrability to amplitudes. By computing form factors using unitarity, they’ve been able to determine loop-corrections to anomalous dimensions. Sounds technical, I know. But don’t get bogged down! I’ll give you the upshot as a headline – New Link between Methods, Form Factors Say.

Coffee consumed, and it was time to get colorful. You’ll hopefully remember that the quarks holding protons and neutrons together come in three different shades. These aren’t really colors that you can see. But they are internal labels attached to the particles which seem vital for our theory to work!

About 30 years ago, people realised you could split off the color-related information and just deal with the complicated issues of particle momentum. Once you’ve sorted that out, you write down your answer as a sum. Each term involves some color stuff and a momentum piece. Schematically

$\displaystyle \textrm{gluon amplitude}=\sum \textrm{color}\times \textrm{kinematics}$

What they didn’t realise was that you can shuffle momentum dependence between terms to force the kinematic parts to satisfy the same equations as the color parts! This observation, made back in 2010 by Zvi Bern, John Joseph Carrasco and Henrik Johansson has important consequences for gravity in particular.

Why’s that? Well, if you arrange your Yang-Mills kinematics in the form suggested by those gentlemen then you get gravity amplitudes for free. Merely strip off the color bit and replace it by another copy of the kinematics! In my super-vague language above

$\displaystyle \textrm{graviton amplitude}=\sum \textrm{kinematics}\times \textrm{kinematics}$
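
In toy form the bookkeeping looks like this (Python, with entirely invented numbers – real numerators come from actual diagrams, and the signs in the Jacobi identity are convention-dependent): three cubic channels, kinematic numerators obeying the same Jacobi-type identity as the colour factors, and the gravity amplitude obtained by swapping colour for a second copy of kinematics.

```python
# Toy cubic-diagram data for a 4-point amplitude: three channels s, t, u
sv, tv = 2.0, -0.5
uv = -sv - tv                      # massless 4-point: s + t + u = 0
prop = {'s': sv, 't': tv, 'u': uv}

# Invented colour factors satisfying a Jacobi-type identity c_s = c_t + c_u
c = {'s': 1.0, 't': 4.0, 'u': -3.0}

# Invented kinematic numerators forced into BCJ form: n_s = n_t + n_u
n = {'s': 3.0, 't': 1.0, 'u': 2.0}
assert n['s'] == n['t'] + n['u']   # the kinematic Jacobi identity

# Gauge theory: sum of (colour x kinematics) over propagators
gauge = sum(c[ch] * n[ch] / prop[ch] for ch in 'stu')

# Gravity: strip off colour and replace it by a second copy of kinematics
gravity = sum(n[ch] * n[ch] / prop[ch] for ch in 'stu')
```

The point is purely structural: once the numerators are in Jacobi-satisfying form, the gravity answer is a one-line substitution.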

Dr. John Joseph Carrasco himself brought us up to date with a cunning method of determining the relevant kinematic choice at loop level. I can’t help but mention his touching modesty. Even though the whole community refers to the relations by the acronym BCJ, he didn’t do so once!

Before that Dr. Donal O’Connell took us on an intriguing detour of solutions to classical gravity theories with an appropriate dual Yang-Mills theory, obtainable via a BCJ procedure. The idea is beautiful, and seems completely obvious once you’ve been told! Kudos to the authors for thinking of it.

After lunch we enjoyed a well-earned break with a hike up the Uetliberg mountain. I learnt that this large hill is colloquially called Gmuetliberg. Yvonne Geyer helpfully explained that this is a derogatory reference to the tame nature of the climb! Nevertheless the scenery was very pleasant, particularly given that we were mere minutes away from the centre of a European city. What I wouldn’t give for an Uetliberg in London!

Evening brought us to Heidi and Tell, a touristic yet tasty burger joint. Eager to offset some of my voracious calorie consumption I took a turn around the Altstadt. If you’re ever in Zurich it’s well worth a look – very little beats medieval streets, Alpine water and live swing music in the evening light.

Conversations

It was fantastic to meet Professor Lionel Mason and discuss various ideas for extending the ambitwistor string formalism to form factors. I also had great fun chatting to Julio Martinez about linking CHY and BCJ. Finally huge thanks to Dr. Angnis Schmidt-May for patiently explaining the latest research in the field of massive gravity. The story is truly fascinating, and could well be a good candidate for a tractable quantum gravity model!

Erratum: An earlier version of this post mistakenly claimed that Chris White spoke about BCJ for equations of motion. Of course, it was his collaborator Donal O’Connell who delivered the talk. Many thanks to JJ Carrasco for pointing out my error!

# Conference Amplitudes 2015 – Integration Ahoy!

I recall fondly a maths lesson from my teenage years. Dr. Mike Wade – responsible as much as anyone for my scientific passion – was introducing elementary concepts of differentiation and integration. Differentiation is easy, he proclaimed. But integration is a tricky beast.

That prescient warning perhaps foreshadowed my entry into the field of amplitudes. For indeed integration is of fundamental importance in determining the outcome of scattering events. To compute precise “loop corrections” necessarily requires integration. And this is typically a hard task.

Today we were presented with a smorgasbord of integrals. Polylogarithms were the catch of the day. This broad class of functions covers pretty much everything you can get when computing amplitudes (provided your definition is generous)! So what are they? It fell to Dr. Erik Panzer to remind us.

Laymen will remember logarithms from school. These magic quantities turn multiplication into addition, giving rise to the ubiquitous schoolroom slide rules predating electronic calculators. Depending on your memory of math class, logarithms are either curious and fascinating or strange and terrifying! But boring they most certainly aren’t.

One of the most amusing properties of a logarithm comes about from (you guessed it) integration. Integrating $x^{a-1}$ is easy, you might recall. You’ll end up with $x^a/a$ plus some constant. But what happens when $a$ is zero? Then the formula makes no sense, because dividing by zero simply isn’t allowed.

And here’s where the logarithm comes to the rescue. Shift the troublesome pole from $t=0$ to $t=1$ and, as if by witchcraft, it turns out that

$\displaystyle \int_0^x \frac{dt}{1-t} = -\log (1-x)$

This kind of integral crops up when you compute scattering amplitudes. The traditional way to work out an amplitude is to draw Feynman diagrams – effectively pictures representing the answer. Every time you get a loop in the picture, you get an integration. Every time a particle propagates from A to B you get a fraction. Plug through the maths and you sometimes see integrals that give you logarithms!

But logarithms aren’t the end of the story. When you’ve got many loop integrations involved, and perhaps many propagators too, things can get messy. And this is where polylogarithms come in. They’ve got an integral form like logarithms, only instead of one integration there are many!

$\displaystyle \textrm{Li}_{\sigma_1,\dots, \sigma_n}(z) = \int_0^z \frac{dz_1}{z_1- \sigma_1}\int_0^{z_1} \frac{dz_2}{z_2-\sigma_2} \dots \int_0^{z_{n-1}}\frac{dz_n}{z_n-\sigma_n}$

It’s easy to check that our beloved $\log$ function emerges from setting $n=1$: taking $\sigma_1 = 1$ gives back $\log(1-z)$. There’s some interesting sociology underlying polylogs. The polylogs I’ve defined are variously known as hyperlogs, generalized polylogs and Goncharov polylogs depending on who you ask. This confusion stems from the fact that these functions have been studied in several fields besides amplitudes, and predictably nobody can agree on a name! One name that is universally accepted is classical polylogs – the simpler functions $\textrm{Li}_n$ that emerge (up to a sign) when all the $\sigma$s are set to zero except the last, which is set to $1$.
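
As a quick numerical sanity check (a rough Python sketch, not production numerics), one can compare the dilogarithm’s series definition with its iterated-integral form – here with the inner integration already done analytically, leaving $\textrm{Li}_2(x) = -\int_0^x \log(1-t)\,\frac{dt}{t}$.

```python
import math

def li2_series(x, terms=60):
    """Dilogarithm from its Taylor series: sum over k of x^k / k^2."""
    return sum(x**k / k**2 for k in range(1, terms))

def li2_integral(x, n=20000):
    """Li_2(x) = -int_0^x log(1-t)/t dt, evaluated by the midpoint rule."""
    h = x / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += -math.log(1.0 - t) / t
    return total * h

x = 0.5
print(li2_series(x), li2_integral(x))  # both ~ 0.5822405
```

At $x=1/2$ there’s even a closed form, $\pi^2/12 - \log^2 2/2$, which the two evaluations reproduce.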

So far we’ve just given names to some integrals we might find in amplitudes. But this is only the beginning. It turns out there are numerous interesting relations between different polylogs, which can be encoded by clever mathematical tools going by esoteric names – cluster algebras, motives and the symbol to name but a few. Erik warmed us up on some of these topics, while also mentioning that even generalized polylogs aren’t the whole story! Sometimes you need even wackier functions like elliptic polylogs.

All this gets rather technical quite quickly. In fact, complicated functions and swathes of algebra are a sad corollary of the traditional Feynman diagram approach to amplitudes. But thankfully there are new and powerful methods on the market. We heard about these so-called bootstraps from Dr. James Drummond and Dr. Matt von Hippel.

The term bootstrap is an old one, emerging in the 1960s to describe methods which use symmetry, locality and unitarity to determine amplitudes. It’s probably a humorous reference to the old English saying “pull yourself up by your bootstraps” to emphasise the achievement of lofty goals from meagre beginnings. Research efforts in the 60s had limited success, but the modern bootstrap programme is going from strength to strength. This is due in part to our much improved understanding of polylogarithms and their underlying mathematical structure.

The philosophy goes something like this. Assume that your answer can be written as a polylog (more precisely as a sum of polylogs, with the integrand expressed as $\prod_i d \log(R_i)$ for appropriate rational functions $R_i$). Now write down all the possible rational functions that could appear, based on your knowledge of the process. Treat these as alphabet bricks. Now put your alphabet bricks together in every way that seems sensible.

The reason the method works is that there’s only one way to make a meaningful “word” out of your alphabet bricks. Locality forces the first letter to be a kinematic invariant, or else your answer would have branch cuts which don’t correspond to physical particles. Take it from me, that isn’t allowed! Supersymmetry cuts down the possibilities for the final letter. A cluster algebra ansatz also helps keep the possibilities down, though a physical interpretation for this is as yet unknown. For $7$ particles this is more-or-less enough to get you the final answer. But weirdly $6$ particles is more complicated! Counter-intuitive, but hey – that’s research. To fix the six point result you must appeal to impressive all-loop results from integrability.
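
To see how quickly such constraints bite, here’s a toy Python count (the letters and conditions are invented for illustration – the real hexagon alphabet has nine letters and more refined entry conditions): build all candidate “words” from a symbol alphabet, then impose a first-entry condition.

```python
from itertools import product

# A made-up symbol alphabet of six letters
alphabet = ['u', 'v', 'w', '1-u', '1-v', '1-w']

# Toy locality constraint: only physical branch points allowed in the first entry
first_entries = {'u', 'v', 'w'}

candidates = list(product(alphabet, repeat=3))          # all weight-3 words
words = [w for w in candidates if w[0] in first_entries]

print(len(candidates), len(words))  # 216 candidates, 108 survive
```

Each extra condition (final-entry, integrability of the symbol, cluster structure) slashes the count further, until ideally a single combination remains.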

Next up for these bootstrap folk is higher loops. According to Matt, the $5$-loop result should be gettable. But beyond that the sheer number of functions involved might mean the method crashes. Naively one might expect that the problem lies with having insufficiently many constraints. But apparently the real issue is more prosaic – we just don’t have the computing power to whittle down the options beyond 5-loop.

With the afternoon came a return to Feynman diagrams, but with a twist. Professor Johannes Henn talked us through an ingenious evaluation method based on differential equations. The basic concept has been known for a long time, but relies heavily on choosing the correct basis of integrals for the diagram under consideration. Johannes’ great insight was to use conjectures about the dlog form of integrands to suggest a particularly nice set of basis integrals. This makes solving the differential equations a cinch – a significant achievement!

Now the big question is – when can this new method be applied? As far as I’m aware there’s no proof that this nice integral basis always exists. But it seems that it’s there for enough cases to be useful! The day closed with some experimentally relevant applications, the acid test. I’m now curious as to whether you can link the developments in symbology and cluster algebras with this differential equation technique to provide a mega-powerful amplitude machine…! And that’s where I ought to head to bed, before you readers start to worry about theoretical physicists taking over the world.

Conversations

It was a pleasure to chat all things form factors with Brenda Penante, Mattias Wilhelm and Dhritiman Nandan at lunchtime. Look out for an “on-shell” blog post soon.

I must also thank Lorenzo Magnea for an enlightening discussion on soft theorems. Time to bury my head in some old papers I’d previously overlooked!

# Conference Amplitudes 2015!

It’s conference season! I’m hanging out in very warm Zurich with the biggest names in my field – scattering amplitudes. Sure it’s good fun to be outside the office. But there’s serious work going on too! Research conferences are a vital forum for the exchange of ideas. Inspiration and collaboration flow far more easily in person than via email or telephone. I’ll be blogging the highlights throughout the week.

Monday | Morning Session

To kick-off we have some real physics from the Large Hadron Collider! Professor Nigel Glover‘s research provides a vital bridge between theory and experiment. Most physicists in this room are almost mathematicians, focussed on developing techniques rather than computing realistic quantities. Yet the motivation for this quest lies with serious experiments, like the LHC.

We’re currently entering an era where the theoretical uncertainty trumps experimental error. With the latest upgrade at CERN, particle smashers will reach unprecedented accuracy. This leaves us amplitudes theorists with a large task. In fact, the experimentalists regularly draw up a wishlist to keep us honest! According to Nigel, the challenge is to make our predictions twice as good within ten years.

At first glance, this 2x challenge doesn’t seem too hard! After all Moore’s Law guarantees us a doubling of computing power in the next few years. But the scale of the problem is so large that more computing power won’t solve it! We need new techniques to get to NNLO – that is, corrections that are multiplied by $\alpha_s^2$ the square of the strong coupling. (Of course, we must also take into account electroweak effects but we’ll concentrate on the strong force for now).

Nigel helpfully broke down the problem into three components. Firstly we must compute the missing higher order terms in the amplitude. The state of the art is lacking at present! Next we need better control of our input parameters. Finally we need to improve our model of how protons break apart when you smash them together in beams.

My research helps in a small part with the final problem. At present I’m finishing up a paper on subleading soft loop corrections, revealing some new structure and developing a couple of new ideas. The hope is that one day someone will use this to better eliminate some irritating low energy effects which can spoil the theoretical prediction.

In May, I was lucky enough to meet Bell Labs president Dr. Marcus Weldon in Murray Hill, New Jersey. He spoke about his vision for a 10x leap forward in every one of their technologies within a decade. This kind of game changing goal requires lateral thinking and truly new ideas.

We face exactly the same challenge in the world of scattering amplitudes. The fact that we’re aiming for only a 2x improvement is by no means a lack of ambition. Rather it underlines the fact that doubling our predictive power entails far more than a 10x increase in the complexity of calculations using current techniques.

I’ve talked a lot about accuracy so far, but notice that I haven’t mentioned precision. Nigel was at pains to distinguish the two, courtesy of this amusing cartoon.

Why is this so important? Well, many people believe that NNLO calculations will reduce the renormalization scale uncertainty in theoretical predictions. This is a big plus point! Many checks on known NNLO results (such as W boson production processes) confirm this hunch. This means the predictions are much more precise. But it doesn’t guarantee accuracy!

To hit the bullseye there’s still much work to be done. This week we’ll be sharpening our mathematical tools, ready to do battle with the complexities of the universe. And with that in mind – it’s time to get back to the next seminar. Stay tuned for further updates!

Update | Monday Evening

Only time for the briefest of bulletins, following a productive and enjoyable evening on the roof of the ETH main building. Fantastic to chat again to Tomek Lukowski (on ambitwistor strings), Scott Davies (on supergravity 4-loop calculations and soft theorems) and Philipp Haehnal (on the twistor approach to conformal gravity). Equally enlightening to meet many others, not least our gracious hosts from ETH Zurich.

My favourite moment of the day came in Xuan Chen’s seminar, where he discussed a simple yet powerful method to check the numerical stability of precision QCD calculations. It’s well known that these should factorize in appropriate kinematic regions, well described by imaginatively named antenna functions. By painstakingly verifying this factorization in a number of cases Xuan detected and remedied an important inaccuracy in a Higgs to 4 jet result.

Of course it was a pleasure to hear my second supervisor, Professor Gabriele Travaglini speak about his latest papers on the dilatation operator. The rederivation of known integrability results using amplitudes opens up an enticing new avenue for those intrepid explorers who yearn to solve $\mathcal{N}=4$ super-Yang-Mills!

Finally Dr. Simon Badger’s update on the Edinburgh group’s work was intriguing. One challenge for NNLO computations is to understand 2-loop corrections in QCD. The team have taken an important step towards this by analysing 5-point scattering of right-handed particles. In principle this is a deterministic procedure: draw some pictures and compute.

But to get a compact formula requires some ingenuity. First you need an appropriate integral reduction to identify the master integrals. Then you must apply KK and BCJ relations to weed out the dead wood that’s cluttering up the formula unnecessarily. Trouble is, neither of these procedures is uniquely defined – so intelligent guesswork is the order of the day!

That’s quite enough for now – time for some sleep in the balmy temperatures of central Europe.

# Scattering Without Scale, Or The S-Matrix In N=4

My research focuses on an unrealistic theory called massless $\mathcal{N}=4$ super Yang-Mills (SYM). This sounds pretty pointless, at least at first. But actually this model shares many features with more complete accounts of reality.  So it’s not all pie in the sky.

The reason I look at SYM is that it contains lots of symmetry. This simplifies matters a lot. Studying SYM is like going to an adventure playground – you can still have great fun climbing and jumping, but it’s a lot safer than roaming out into a nearby forest.

Famously SYM has a conformal symmetry. Roughly speaking, this means that the theory looks the same at every length scale. (Whether conformal symmetry is equivalent to scale invariance is in fact a hot topic!) Put another way, SYM has no real notion of length. I told you it was unrealistic.

This is a bit unfortunate for me, because I’d like to use SYM to think about particle scattering. To understand the problem, you need to know what I want to calculate. The official name for this quantity is the S-matrix.

The jargon is quite straightforward. “S” just stands for scattering. The “matrix” part tells you that this quantity encodes many possible scattering outcomes. To get an S-matrix, you have to assume you scatter particles from far away. That’s certainly the case in big particle accelerators – the LHC is huge compared to a proton!

But remember I said that SYM doesn’t have a length scale. So really you can’t get an S-matrix. And without an S-matrix, you can’t say anything about particle scattering. Things aren’t looking good.

Fortunately all is not lost. You can try to define an S-matrix using the usual techniques that worked in normal theories. All the calculations go through fine, unless there are any low energy particles around. Any of these so-called soft particles will cause your S-matrix to blow up to infinity!

But hey, we should expect our S-matrix to be badly behaved. After all, we’ve chosen a theory without a sense of scale! These irritating infinities go by the name of infrared divergences. Thankfully there’s a systematic way of eliminating them.

Remember that I said our SYM theory is massless. All the particles are like photons, constantly whizzing about at the speed of light. If you were a photon, life would be very dull. That’s because you’d move so fast through space you couldn’t move through time. This means that essentially our massless particles have no way of knowing about distances.

Viewed from this perspective it’s intuitive that this lack of mass yields the conformal symmetry. We can remove the troublesome divergences by destroying the conformal symmetry. We do this in a controlled way by giving some particles a small mass.

Technically our theory is now called Coulomb branch SYM. Who’s Coulomb, I hear you cry? He’s the bloke who developed electrostatics 250 years ago. And why’s he cropped up now? Because when we dispense with conformal symmetry, we’re left with some symmetries that match those of electromagnetism.

In Coulomb branch SYM it’s perfectly fine to define an S-matrix! You get sensible answers from all your calculations. Now imagine we try to recover our original theory by decreasing all masses to zero. Looking closely at the S-matrix, we see it split into two pieces – finite and infinite. Just ignore the infinite bit, and you’ve managed to extract useful scattering data for the original conformal theory!

You might think I’m a bit blasé in throwing away these divergences. But this is actually well-motivated physically. The reason is that such infinities cancel in any measurable quantity. You could say that they only appear in the first place because you’re doing the wrong sum!

This perspective has been formalized for the realistic theories as the KLN theorem. It may even be possible to get a rigorous version for our beloved massless $\mathcal{N}=4$ SYM.

So next time somebody tells you that you can’t do scattering in a conformal theory, you can explain why they’re wrong! Okay, I grant you, that’s an unlikely pub conversation. But stranger things have happened.

And if you’re planning to grab a pint soon, make it a scientific one!

# A Tale of Two Calculations

This post is mathematically advanced, but may be worth a skim if you’re a layman who’s curious how physicists do real calculations!

Recently I’ve been talking about the generalized unitarity method, extolling its virtues for $1$-loop calculations. Despite all this hoodoo, I have failed to provide a single example of a successful application. Now it’s time for that to change. I’m about to show you just how useful generalized unitarity can be, borrowing examples from $\mathcal{N}=4$ super-Yang-Mills (SYM) and $SU(3)$ Yang-Mills (YM).

We’ll begin by revising the general form of the generalized unitarity method. In picture form

What exactly does all the notation mean? On the left hand side, I’m referring to the residue of the integrand when all the loop momenta $l_i$ for $i = 1,2,3,4$ are taken on-shell. On the right hand side, I take a product of tree level diagrams with external lines as shown, and sum over the possible particle content of the $l_i$ lines. Implicit in each of the blobs in the equation is a sum over tree level diagrams.

We’d like to use this formula to calculate $1$-loop amplitudes. But hang on, doesn’t it only tell us about residues of integrands? Naively, it seems like that’s too little information to reconstruct the full result.

Fear not, however – help is at hand! Back in 1965, Don Melrose published his first paper. He presciently observed that loop diagrams in $D$ dimensions could be expressed as linear combinations of scalar loop diagrams with $\leq 4$ sides. Later Bern, Dixon and Kosower generalized this result to take account of regularization.

Let’s express those words mathematically. We have

$\displaystyle \mathcal{A}_n^{1\textrm{-loop}} = \sum_i D_i I_4(K^i) + \sum_j C_j I_3 (K^j) + \sum_m B_m I_2 (K^m) + R_n + O(\epsilon)\qquad (*)$

where the $I_a$ are integrals corresponding to particular scalar theory diagrams, the $K^i$, $K^j$, $K^m$ indicate the distributions of momenta on the external legs, $R_n$ is a rational function and $\epsilon$ a regulator.

The integrals $I_4$, $I_3$ and $I_2$ are referred to as box, triangle and bubble integrals respectively. This is an obvious homage to their structure as Feynman diagrams. For example a triangle diagram looks like

where $K_1$, $K_2$, $K_3$ label the sums of external momenta at each of the vertices. The Feynman rules give (in dimensional regularization)

$\displaystyle I_3(K_1, K_2, K_3) = \mu^{2 \epsilon}\int \frac{d^{4-2\epsilon}l}{(2\pi)^{4-2\epsilon}}\frac{1}{l^2 (l-K_1)^2 (l+K_3)^2}$

We call result $(*)$ above an integral basis expansion. It’s useful because the integrands of box, triangle and bubble diagrams have different pole structures. Thus we can reconstruct their coefficients by taking generalized unitarity cuts. Of course, the rational term cannot be determined this way. Theoretically we have reduced our problem to a simpler case, but not completely solved it.

Before we jump into a calculation, it’s worth taking a moment to consider the origin of the rational term. In Melrose’s original analysis, this term was absent. It appears in regularized versions, precisely because the act of regularization gives rise to extra rational terms at $O(\epsilon^0)$. Such terms will be familiar if you’ve studied anomalies.

We can therefore loosely say that rational terms are associated with theories requiring renormalization. (This is not quite true; see page 44 of this review). In particular we know that $\mathcal{N}=4$ SYM theory is UV finite, so no rational terms appear. In theory, all $1$-loop amplitudes are constructible from unitarity cuts alone!

Ignoring the subtleties of IR divergences, let’s press on and calculate an $\mathcal{N}=4$ SYM amplitude using unitarity. More precisely we’ll tackle the $4$-point $1$-loop superamplitude. It’s convenient to be conservative and cut only two propagators. To get the full result we need to sum over all channels in which we could make the cut, denoted $s = (12)$, $t = (13)$ and $u=(14)$.

To make our lives somewhat easier, we’ll work in the planar limit of $\mathcal{N}=4$ SYM. This means we can ignore any diagrams which would be impossible to draw in the plane, in particular the $u$-channel ones. We make this assumption since it simplifies our analysis of the color structure of the theory. In particular it’s possible to factor out all the color data as a single trace of generators in the planar limit.

Assuming this has been done, we’ll ignore color factors and calculate only the color-ordered amplitudes. We’ve got two channels to consider: $s$ and $t$. But since the trace is cyclic we can cyclically permute the external lines to equate the $s$ and $t$ channel cuts. Draw a picture if you are skeptical.

So we’re down to considering the $s$-channel unitarity cut. Explicitly the relevant formula is

where $\mathcal{A}_4$ is the tree level $4$-particle superamplitude. Now observe that by necessity $\mathcal{A}_4$ must be an MHV amplitude. Indeed it is only nonvanishing if exactly two external particles have negative helicity. Leaving the momentum conservation delta function implicit we quote the standard result

$\displaystyle \mathcal{A}_4(-l_1, 1, 2, l_2) = \frac{\delta^{(8)}(L)}{\langle l_1 1\rangle\langle 1 2\rangle\langle 2l_2\rangle\langle l_2 l_1 \rangle}$

where $\delta^{(8)}(L)$ is a supermomentum conservation delta function. We get a similar result for the other tree level amplitude, involving a delta function $\delta^{(8)}(R)$. Now by definition of the superamplitude, the sum over states can be effected as an integral over the Grassmann variables $\eta_{l_1}$ and $\eta_{l_2}$. Under the integral signs we may write

$\displaystyle \delta^{(8)}(L) \delta^{(8)}(R) = \delta^{(8)}(L+R)\delta^{(8)}(R) = \delta^{(8)}(\tilde{Q})\delta^{(8)}(R)$

where $\delta^{(8)}(\tilde{Q})$ is the overall supermomentum conservation delta function, which one can always factor out of a superamplitude in a supersymmetric theory. The remaining delta function gives a nonzero contribution in the integral. To evaluate this recall that the Grassmann delta function for a process with $n$ external particles has the form

$\displaystyle \delta^{(8)}(R) = \prod_{A=1}^4 \sum_{i<j} \langle ij \rangle \eta_i^A \eta_j^A$

We know that Grassmann integration is the same as differentiation, so

$\displaystyle \int d^4 \eta_{l_1} d^4 \eta_{l_2} \delta^{(8)}(R) = \langle l_1 l_2 \rangle ^4$

Now plugging this in to the pictured formula we find the $s$-channel residue to be

$\displaystyle \textrm{Res}_s = \frac{\delta^{(8)}(\tilde{Q})\langle l_1 l_2 \rangle^2}{\langle 12 \rangle\langle 34 \rangle \langle l_1 1 \rangle \langle 2 l_2 \rangle \langle l_2 4 \rangle \langle 3 l_1 \rangle} \qquad (\dagger)$

Now for the second half of our strategy. We must compare this to the residues from scalar box, triangle and bubble integrands. We aim to pull out a kinematic factor depending on the external momenta, letting the basis integrand residue absorb all factors of loop momenta $l_1$ and $l_2$. But which basis integrands contribute to the residue from our unitarity cut?

This is quite easy to spot. Suppose we consider the residue of a loop integrand after a generic unitarity cut. Any remaining dependence on loop momentum $l$ appears as factors of $(l-K)^{-2}$. These may be immediately matched with uncut loop propagators in the basis diagrams. Simple counting then establishes which basis diagram we want. As an example

$\displaystyle \textrm{factor of }(l-K_1)^{-2}(l-K_2)^{-2}\Rightarrow 2 \textrm{ uncut propagators} \Rightarrow \textrm{box diagram}$
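This counting is simple enough to automate. Here’s a toy bookkeeping helper (my own sketch, with hypothetical names, not standard amplitudes software) that maps the number of surviving $(l-K)^{-2}$ factors in a cut residue to the scalar basis integral it can match:

```python
# Toy bookkeeping for matching a unitarity-cut residue to a basis integral.
# The cut propagators plus the uncut (l-K)^{-2} factors must together
# reproduce the propagator count of a box (4), triangle (3) or bubble (2).

BASIS = {4: "box", 3: "triangle", 2: "bubble"}

def matching_basis_integral(n_uncut_factors, n_cut=2):
    """Name the scalar basis integral matched by a cut residue.

    n_uncut_factors: how many (l-K)^{-2} factors survive in the residue.
    n_cut: how many propagators were cut (2 for an ordinary unitarity cut).
    """
    return BASIS[n_uncut_factors + n_cut]

# The example from the text: two uncut factors => a box diagram.
print(matching_basis_integral(2))  # box
```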

We’ll momentarily see that this example is exactly the case for our calculation of $\mathcal{A}_4^{1\textrm{-loop}}$. To accomplish this, we must express the residue $(\dagger)$ in more familiar momentum space variables. Our tools are the trusty identities

$\displaystyle \langle ij \rangle [ij] =(p_i + p_j)^2$

$\displaystyle \sum_i \langle ri \rangle [ik] = 0$

The first follows from the definition of the spinor-helicity formalism. Think of it as a consequence of the Weyl equation if you like. The second encodes momentum conservation. We’ve in fact got three sets of momentum conservation relations to play with. There’s one each for the left and right hand tree diagrams, plus the overall $(1234)$ relation.
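The first identity is easy to test numerically. Below is a quick Python check (my own construction; the light-cone spinor parametrization and the sign conventions for $[ij]$ are one common choice among several) that $\langle ij \rangle [ij] = (p_i + p_j)^2$ for a pair of massless momenta:

```python
import numpy as np

def spinors(p):
    """Spinors (lambda, lambda-tilde) for a massless momentum p = (E, px, py, pz).

    Light-cone parametrization with p+ = E + pz and p_perp = px + i*py;
    assumes p+ > 0. Sign conventions for [ij] differ between references.
    """
    E, px, py, pz = p
    pp = E + pz
    pperp = px + 1j * py
    lam = np.array([np.sqrt(pp), pperp / np.sqrt(pp)])
    lamt = np.array([np.sqrt(pp), np.conj(pperp) / np.sqrt(pp)])
    return lam, lamt

def bracket(a, b):   # antisymmetric contraction, used for both <ij> and [ij]
    return a[0] * b[1] - a[1] * b[0]

def mink(p, q):      # Minkowski product, signature (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

# two explicitly massless momenta: E^2 = |three-momentum|^2
pi = np.array([5.0, 3.0, 4.0, 0.0])
pj = np.array([3.0, 0.0, 0.0, 3.0])

li, ti = spinors(pi)
lj, tj = spinors(pj)

lhs = bracket(li, lj) * bracket(ti, tj)   # <ij>[ij]
rhs = mink(pi + pj, pi + pj)              # (p_i + p_j)^2 = 2 p_i . p_j
print(lhs.real, rhs)                      # both approximately 30.0
```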

To start with we can deal with that pesky supermomentum conservation delta function by extracting a factor of the tree level amplitude $\mathcal{A}_4^{\textrm{tree}}$. This leaves us with

$\displaystyle \textrm{Res}_s = \mathcal{A}_4^{\textrm{tree}} \frac{\langle 23 \rangle \langle 41 \rangle \langle l_1 l_2 \rangle^2}{ \langle l_1 1 \rangle \langle 2 l_2 \rangle \langle l_2 4 \rangle \langle 3 l_1 \rangle}$

Those factors of loop momenta in the numerator are annoying, because we know there shouldn’t be any in the momentum space result. We can start to get rid of them by multiplying top and bottom by $[l_2 2]$. A quick round of momentum conservation leaves us with

$\displaystyle \textrm{Res}_s = \mathcal{A}_4^{\textrm{tree}} \frac{\langle 23 \rangle \langle 41 \rangle [12] \langle l_1 l_2 \rangle}{(l_2 + p_2)^2\langle l_2 4 \rangle \langle 3 l_1 \rangle}$

That seemed to be a success, so let’s try it again! This time the natural choice is $[3l_1]$. Again momentum conservation leaves us with

$\displaystyle \textrm{Res}_s = \mathcal{A}_4^{\textrm{tree}} \frac{\langle 23 \rangle \langle 41 \rangle [12] [34]}{(l_2 + p_2)^2 (l_1+p_3)^2}$

Overall momentum conservation in the numerator finally leaves us with

$\displaystyle \textrm{Res}_s = -\mathcal{A}_4^{\textrm{tree}} \frac{\langle 12 \rangle [12] \langle 23 \rangle [23]}{(l_2 + p_2)^2 (l_1+p_3)^2} = -\mathcal{A}_4^{\textrm{tree}} \frac{st}{(l_2 + p_2)^2 (l_1+p_3)^2}$

where $s$ and $t$ are the standard Mandelstam variables. Phew! That was a bit messy. Unfortunately it’s the price you pay for the beauty of spinor-helicity notation. And it’s a piece of cake compared with the Feynman diagram approach.

Now we can immediately read off the dependence of the residue on loop momenta. We have two factors of the form $(l-K)^{-2}$ so our result matches only the box integral. Therefore the $4$-point $1$-loop amplitude in $\mathcal{N}=4$ SYM takes the form

$\displaystyle \mathcal{A}_4^{1\textrm{-loop}} = DI_4(p_1,p_2,p_3,p_4)$

We determine the kinematic constant $D$ by explicitly computing the $I_4$ integrand residue on our unitarity cut. This computation quickly yields

$\displaystyle \mathcal{A}_4^{1\textrm{-loop}} = st \mathcal{A}_4^{\textrm{tree}}I_4(p_1,p_2,p_3,p_4)$

Hooray – we are finally done. Although this looks like a fair amount of work, each step was mathematically elementary. The entire calculation fits on much less paper than the equivalent Feynman diagram approach. Naively you’d need to draw $1$-loop diagrams for all the different particle scattering processes in $\mathcal{N}=4$ SYM, including possible ghost states in the loops. This itself would take a long time, and that’s before you’ve evaluated a single integral! In fact the first computation of this result didn’t come from classical Feynman diagrams, but rather as a limit of string theory.

A quick caveat is in order here. The eagle-eyed amongst you may have spotted that my final answer is wrong by a minus sign. Indeed, we’ve been very casual with our factors of $i$ throughout this post. Recall that Feynman rules usually assign a factor of $i$ to each propagator in a diagram. But we’ve completely ignored this prescription!

Sign errors and theorists are the best of enemies. So we’d better confront our nemesis and find that missing minus sign. In fact it’s not hard to see where it comes from. The only place in our calculation where extra factors of $i$ wouldn’t simply cancel is the cut propagators. Look back at the very first figure and observe that the left hand side has four more factors of $i$ than the right.

Of course we’ve only cut two propagators to obtain the amplitude. This means that we should pick up an extra factor of $(1/i)^2 = -1$. This precisely corrects the sign error that pedants (or experimentalists) would find irritating!

I promised an $SU(3)$ YM calculation, and I won’t disappoint. This will also provide a chance to show off generalized unitarity in all its glory. Explicitly we’re going to show that the NMHV gluon four-mass box coefficients vanish.

To start with, let’s disentangle some of that jargon. Remember that an $n$-particle NMHV gluon amplitude has $3$ negative helicity external gluons and $n-3$ positive helicity ones. The four-mass condition means that each corner of the box has at least two external legs, so that the outgoing momentum is a massive $4$-vector.

The coefficient of the box diagram will be given by a generalized unitarity cut of four loop propagators. Indeed triangle and bubble diagrams don’t even have four propagators available to cut, which mathematically translates into a zero contribution to the residue. The usual rules for computing residues tell us that we’ll always have a zero numerator factor left over for bubble and triangle integrands.

Now the generalized unitarity method tells us to compute the product of four tree diagrams. By our four-mass assumption, each of these has at least $4$ external gluons. We must have exactly $4$ negative helicity and $4$ positive helicity gluons from the cut propagators since all lines are assumed outgoing. We have exactly $3$ further negative helicity particles by our NMHV assumption, so $7$ negative helicity gluons to go round.

But a tree level diagram with $\geq 4$ legs must have at least $2$ negative helicity gluons to be non-vanishing, so our four corners would need at least $4 \times 2 = 8$ negative helicity gluons between them. This is not possible with our setup, since $7 < 8$. We conclude that the NMHV gluon four-mass box coefficients vanish.

Our result here is probably a little disappointing compared with the $\mathcal{N}=4$ SYM example above. There we were able to completely compute a $4$ point function at $1$-loop. But for ordinary YM there are many more subcases to consider. Heuristically we lack enough symmetry to constrain the amplitude fully, so we have to do more work ourselves! A full analysis would consider all box cases, then move on to nonzero contributions from triangle and bubble integrals. Finally we’d need to determine the rational part of the amplitude, perhaps using BCFW recursion at loop level.

Don’t worry – I don’t propose to go into any further detail now. Hopefully I’ve sketched the mathematical landscape of amplitudes clearly enough already. I leave you with the thought-provoking claim that the simplest QFTs are those with the most symmetry. As Arkani-Hamed, Cachazo and Kaplan explain, this is at odds with our childhood desire for simple Lagrangians!

# What Can Unitarity Tell Us About Amplitudes?

Let’s start by analysing the discontinuities in amplitudes, viewed as functions of the external momenta. The basic Feynman rules tell us that $1$-loop processes yield amplitudes of the form

$\displaystyle \int d^4 l \frac{A}{l^2(p+q-l)^2}$

where $A$ is some term independent of $l$. This yields a complex logarithm term, which thus gives a branch cut as a function of a Mandelstam variable $(p+q)^2$.

It’s easy to get a formula for the discontinuity across such a cut. Observe first that amplitudes are real unless some internal propagator goes on shell. Indeed when an internal line goes on shell the $i\epsilon$ prescription yields an imaginary contribution.

Now suppose we are considering some process as a function of an external momentum invariant $s$, like a Mandelstam variable. Consider the internal line whose energy is encoded by $s$. If $s$ is lower than the threshold for producing a multiparticle state, then the internal line cannot go on shell. In that case the amplitude and $s$ are both real so we may write

$\displaystyle \mathcal{A}(s) = \mathcal{A}(s^*)^*$

Now we analytically continue $s$ to the whole complex plane. This equation must still hold, since each side is an analytic function of $s$. Fix $s$ at some real value greater than the threshold for multiparticle state production, so that the internal line can go on shell. In this situation of course we expect a branch cut.

Our formula above enforces the relations

$\displaystyle \textrm{Re}\mathcal{A}(s+i\epsilon) = \textrm{Re}\mathcal{A}(s-i\epsilon)$

$\displaystyle \textrm{Im}\mathcal{A}(s+i\epsilon) = -\textrm{Im}\mathcal{A}(s-i\epsilon)$

Thus we must indeed have a branch cut for $s$ in this region, with discontinuity given by

$\displaystyle \textrm{Disc}\mathcal{A}(s) = 2i\,\textrm{Im}\mathcal{A}(s) \qquad (*)$
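As a concrete numerical illustration (my own, not from the derivation above), take $f(s) = \log(-s)$, the kind of term the bubble integral produces, and define the discontinuity as $f(s + i\epsilon) - f(s - i\epsilon)$. Just above the cut the real parts agree, the imaginary parts flip sign, and the jump is $2\pi i$:

```python
import numpy as np

s, eps = 2.5, 1e-9          # s above threshold; eps implements the i*epsilon prescription

above = np.log(complex(-s, +eps))   # approach the branch cut from above
below = np.log(complex(-s, -eps))   # ...and from below

print(above.real - below.real)      # ~0: real parts agree across the cut
print(above.imag + below.imag)      # ~0: imaginary parts flip sign
print(above - below)                # ~2*pi*i: the discontinuity, twice i times Im f
```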

Now we’ve got a formula for the discontinuity across a general amplitude branch cut, we’re in a position to answer our original question. What can unitarity tell us about discontinuities?

When I say unitarity, I specifically mean the unitarity of the $S$-matrix. Remember that we compute amplitudes by sandwiching the $S$-matrix between incoming states defined in the far past and outgoing states defined in the far future. In fact we usually discard non-interacting terms by considering instead the $T$-matrix defined by

$\displaystyle S = \mathbf{1}+iT$

The unitarity of the $S$-matrix, namely $S^\dagger S = \mathbf{1}$ yields for the $T$-matrix the relation

$\displaystyle 2\textrm{Im}(T) = T^\dagger T$

Okay, I haven’t quite been fair with that final line. In fact it may make little sense to you straight off! What on earth is the imaginary part of a matrix, after all? Before you think too deeply about any mathematical or philosophical issues, let me explain that the previous equation is simply shorthand notation. We understand it to hold when evaluated between any incoming and outgoing states. In other words

$\displaystyle 2 \textrm{Im} \langle \mathbf{p}_1 \dots \mathbf{p}_n | T | \mathbf{k}_1 \dots \mathbf{k}_m\rangle = \langle \mathbf{p}_1 \dots \mathbf{p}_n | T^\dagger T | \mathbf{k}_1 \dots \mathbf{k}_m\rangle$
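You can see this relation at work in a toy model. The snippet below (my own illustration, with a finite-dimensional matrix standing in for the operator $S$) builds a random unitary $S = e^{iH}$, extracts $T$ from $S = \mathbf{1} + iT$, and checks that the shorthand $2\,\textrm{Im}\,T$, meaning $-i(T - T^\dagger)$, equals $T^\dagger T$ element by element:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# random Hermitian H, so S = exp(iH) is unitary
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
w, V = np.linalg.eigh(H)
S = V @ np.diag(np.exp(1j * w)) @ V.conj().T

T = (S - np.eye(n)) / 1j              # S = 1 + iT

lhs = -1j * (T - T.conj().T)          # "2 Im T" read as an operator statement
rhs = T.conj().T @ T

print(np.allclose(S.conj().T @ S, np.eye(n)))  # True: S is unitary
print(np.allclose(lhs, rhs))                   # True: 2 Im T = T^dagger T
```

The identity is exact here: expanding $S = \mathbf{1} + iT$ in $S^\dagger S = \mathbf{1}$ gives $-i(T - T^\dagger) = T^\dagger T$ with no approximation.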

But there’s still a problem: how do you go about evaluating the $T^\dagger T$ term? Thinking back to the heady days of elementary quantum mechanics, perhaps you’re inspired to try inserting a completeness relation in the middle. That way you obtain a product of amplitudes, which are things we know how to compute. The final result looks like

$\displaystyle 2 \textrm{Im} \langle \mathbf{p}_1 \dots \mathbf{p}_n | T | \mathbf{k}_1 \dots \mathbf{k}_m\rangle = \sum_l \left(\prod_{i=1}^l \int\frac{d^3 \mathbf{q}_i}{(2\pi)^3 2E_i}\right) \langle \mathbf{p}_1 \dots \mathbf{p}_n | T^\dagger | \{\mathbf{q_i}\} \rangle \langle \{\mathbf{q_i}\} | T | \mathbf{k}_1 \dots \mathbf{k}_m\rangle$

Now we are in business. All the matrix elements in this formula correspond to amplitudes we can calculate. Using equation $(*)$ above we can then relate the left hand side to a discontinuity across a branch cut. Heuristically we have the equation

$\displaystyle \textrm{Disc}\mathcal{A}(1,\dots m \to 1,\dots n) = \sum_{\textrm{states}} \mathcal{A}(1,\dots m \to \textrm{state})\mathcal{A}(1,\dots n \to \textrm{state})^* \qquad (\dagger)$

Finally, after a fair amount of work, we can pull out some useful information! In particular we can make deductions based on a loop expansion in powers of $\hbar$ viz.

$\displaystyle \mathcal{A}(m,n) = \sum_{L=0}^\infty \hbar^L \mathcal{A}^{(L)}(m,n)$

where $\mathcal{A}^{(L)}(m,n)$ is the $L$-loop amplitude with $m$ incoming and $n$ outgoing particles. Expanding equation $(\dagger)$ order by order in $\hbar$ we obtain

$\displaystyle \textrm{Disc}\mathcal{A}^{(0)}(m,n) = 0$

$\displaystyle \textrm{Disc}\mathcal{A}^{(1)}(m,n) = \sum_{\textrm{states}} \mathcal{A}^{(0)}(m,\textrm{state})\mathcal{A}^{(0)}(n,\textrm{state})^*$

and so forth. The first equation says that tree amplitudes have no branch cuts, which is immediately obvious from the Feynman rules. The second equation is more interesting. It tells us that the discontinuities of $1$-loop amplitudes are given by products of tree level amplitudes! We can write this pictorially as

Here we have specialized to $m=2$, $n=3$ and have left implicit a sum over the possible intermediate states. This result is certainly curious, but it’s hard to see how it can be useful in its current form. In particular, the sum we left implicit involves an arbitrary number of states. We’d really like a simpler relation which involves a well-defined, finite number of Feynman diagrams.

It turns out that this can be done, provided we consider particular channels in which the loop discontinuities occur. For each channel, the associated discontinuity is computed as a product of tree level diagrams obtained by cutting two of the loop propagators. By momentum conservation, each channel is uniquely determined by a subset of external momenta. Thus we label channels by their external particle content.

How exactly does this simplification come about mathematically? To see this we must take a more detailed look at Feynman diagrams, and particularly at the on-shell poles of loop integrands. This approach yields a pragmatic method, at the expense of obscuring the overarching role of unitarity. The results we’ve seen here will serve as both motivation and inspiration for the pedestrian perturbative approach.

We leave those treats in store for a future post. Until then, take care, and please don’t violate unitarity.

# The Calculus of Particle Scattering

Quantum field theory allows us to calculate “amplitudes” for particle scattering processes. These are mathematical functions that encode the probability of particles scattering through various angles. Although the theory is quite complicated, miraculously the rules for calculating these amplitudes are pretty easy!

The key idea came from physicist Richard Feynman. To calculate a scattering amplitude, you draw a series of diagrams. The vertices and edges of the diagram come with particular factors relevant to the theory. In particular vertices usually carry coupling constants, external edges carry polarization vectors, and internal edges carry functions of momenta.

From the diagrams you can write down a mathematical expression for the scattering amplitude. All seems to be pretty simple. But what exactly are the diagrams you have to draw?

Well there are simple rules governing that too. Say you want to compute a scattering amplitude with $2$ incoming and $2$ outgoing particles. Then you draw four external lines, labelled with appropriate polarizations and momenta. Now you need to connect these lines up, so that they all become part of one diagram.

This involves adding internal lines, which connect to the external ones at vertices. The types and numbers of lines allowed to connect to a vertex are prescribed by the theory. For example in pure QCD the only particles are gluons. You are allowed to connect either three or four different lines to each vertex.

Here are a few different diagrams you’re allowed to draw – they each give different contributions to the overall scattering amplitude. Try to draw some more yourself if you’re feeling curious!

Now it’s immediately obvious that there are infinitely many possible diagrams you could draw. Sounds like this is a problem, because adding up infinitely many things is hard! Thankfully, we can ignore a lot of the diagrams.

So why’s that? Well it transpires that each loop in the diagram contributes an extra factor of Planck’s constant $\hbar$. This is a very small number, so the effect on the amplitude from diagrams with many loops is negligible. There are situations in which this analysis breaks down, but we won’t consider them here.

So we can get a good approximation to a scattering amplitude by evaluating diagrams with only a small number of loops. The simplest have $0$-loops, and are known as tree level diagrams because they look like trees. Here’s a QCD example from earlier

Next up you have $1$-loop diagrams. These are also known as quantum corrections, because they give the leading corrections to scattering results that were traditionally evaluated using classical fields. Here’s a nice QCD $1$-loop diagram from earlier

If you’ve been reading some of my recent posts, you’ll notice I’ve been talking about how to calculate tree level amplitudes. This is sensible because they give the most important contribution to the overall result. But the real reason for focussing on them is that the maths is quite easy.

Things get more complicated at $1$-loop level because Feynman’s rules tell us to integrate over the momentum in the loop. This introduces another curve-ball for us to deal with. In particular our arguments for the simple tree level recursion relations now fail. It seems that all the nice tricks I’ve been learning are dead in the water when it comes to quantum corrections.

But thankfully, all is not lost! There’s a new set of tools that exploits the structure of $1$-loop diagrams. Back in 1960 Richard Cutkosky noticed that $1$-loop diagrams can be split into tree level ones under certain circumstances. This means we can build up information about loop processes from simpler tree results. The underlying principle which makes this possible is called unitarity.

So what on earth is unitarity? To understand this we must return to the principles of quantum mechanics. In quantum theories we can’t say for definite what will happen. The best we can do is assign probabilities to different outcomes. Weird as this might sound, it’s how the universe seems to work at very small scales!

Probabilities measure the chances of different things happening. Obviously if you add up the chances of all possible outcomes you should get $1$. Let’s take an example. Suppose you’re planning a night out, deciding whether to go out or stay in. Thinking back over the past few weeks you can estimate the probability of each outcome. Perhaps you stay in $8$ times out of $10$ and go out $2$ times out of $10$. $8/10 + 2/10 = 10/10 = 1$, just as we’d expect for probability!

Now unitarity is just a mathsy way of saying that probabilities add up to $1$. It probably sounds a bit stupid to make up a word for such a simple concept, but it’s a useful shorthand! It turns out that unitarity is exactly what we need to derive Cutkosky’s useful result. The method of splitting loop diagrams into tree level ones has become known as the unitarity method.

The nicest feature of this method is that it’s easy to picture in terms of Feynman diagrams. Let’s plunge straight in and see what that looks like.

At first glance it’s not at all clear what this picture means. But it’s easy to explain step by step. Firstly observe that it’s an equation, just in image form. On the left hand side you see a loop diagram, accompanied by the word $\textrm{Disc}$. This indicates a certain technical property of a loop diagram that it’s useful to calculate. On the right you see two tree diagrams multiplied together.

Mathematically these diagrams represent formulae for scattering amplitudes. So all this diagram says is that some property of $1$-loop amplitudes is produced by multiplying together two tree level ones. This is extremely useful if you know about tree-level results but not about loops! Practically, people usually use this kind of equation to constrain the mathematical form of a $1$-loop amplitude.

If you’re particularly sharp-eyed you might notice something about the diagrams on the left and right sides of the equation. The two diagrams on the right come from cutting through the loop on the left in two places. This cutting rule enables us to define the unitarity method for all loop diagrams. This gives us the full result that Cutkosky originally found. He’s perhaps the most aptly named scientist of all time!

We’re approaching the end of our whirlwind tour of particle scattering. We’ve seen how Feynman diagrams give simple rules but difficult maths. We’ve mentioned the tree level tricks that keep calculations easy. And now we’ve observed that unitarity comes to our rescue at loop-level. But most of these ideas are actually quite old. There’s just time for a brief glimpse of a hot contemporary technique.

In our pictorial representation of the unitarity method, we gained information by cutting the loop in two places. It’s natural to ask whether you could make further such cuts, giving more constraints on the form of the scattering amplitude. It turns out that the answer is yes, so long as you allow the momentum in the loop to be a complex number!

You’d be forgiven for thinking that this is all a bit unphysical, but in the previous post we saw that using the complex numbers is actually a very natural and powerful mathematical trick. The results we get in the end are still real, but the quickest route there is via the complex domain.

So why do the complex numbers afford us the extra freedom to cut more lines? Well, the act of cutting a line is mathematically equivalent to taking the corresponding momentum $(l-K)$ to be on-shell; that is to say $(l-K)^2 =0$. We live in a four-dimensional world, so $l$ has four components. That means we can solve a maximum of four equations $(l-K_i)^2 =0$ simultaneously, one for each cut line. So generically we should be allowed to cut four lines!

However, the equations $(l-K_i)^2 =0$ are quadratic. This means we are only guaranteed a solution if the momentum $l$ is allowed to be complex. So to use a four-line cut, we must allow the loop momentum to be complex. With our simple two-line cuts there was enough freedom left over to keep $l$ real.
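You can watch this happen explicitly with a computer algebra system. In the sketch below (using sympy) the external momenta $K_i$ are made-up numbers I chose purely for illustration; the point is that the four on-shell conditions pin down the loop momentum $l$ completely, and for generic real external momenta the two solutions come out complex.

```python
import sympy as sp

# Minkowski dot product, signature (+,-,-,-)
def mdot(p, q):
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

l0, l1, l2, l3 = sp.symbols('l0 l1 l2 l3')
l = (l0, l1, l2, l3)

# Hypothetical real external momenta, chosen just to be generic
K1 = (1, 0, 0, 1)
K2 = (3, 2, 1, 0)
K3 = (2, 0, 1, 1)

# Quadruple-cut conditions: l^2 = 0 and (l - K_i)^2 = 0
eqs = [mdot(l, l)]
for K in (K1, K2, K3):
    d = tuple(a - b for a, b in zip(l, K))
    eqs.append(mdot(d, d))

# Four equations, four components of l: the cut fixes l completely,
# up to a discrete choice, and the solutions are complex.
sols = sp.solve(eqs, [l0, l1, l2, l3], dict=True)
for sol in sols:
    print(sol)
```

Subtracting $l^2 = 0$ from the other three conditions leaves linear equations, so three of $l$’s components get eliminated and the last condition becomes a single quadratic; its discriminant is negative for generic real $K_i$, which is exactly why the quadruple cut forces $l$ into the complex domain.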

The procedure of using several loop cuts is known as the generalized unitarity method. It’s been around since the late 90s, but is still actively used to determine scattering amplitudes. Much of our current knowledge about QCD loop corrections is down to the power of generalized unitarity!

That’s all for now folks. I’ll be covering the mathematical detail in a series of posts over the next few days.

My thanks to Binosi et al. for their excellent program JaxoDraw which eased the drawing of Feynman diagrams.