Tag Archives: qcd

Conference Amplitudes 2015 – Don’t Stop Me Now!

All too soon we’ve reached the end of a wonderful conference. Friday morning dawned with a glimpse of perhaps the most impressive calculation of the past twelve months – Higgs production at three loops in QCD. This high precision result is vital for checking our theory against the data mountain produced by the LHC.

It was fitting that Professor Falko Dulat’s presentation came at the end of the week. Indeed the astonishing computational achievement he outlined was only possible courtesy of the many mathematical techniques recently developed by the community. Falko illustrated this point rather beautifully with a word cloud.

Word Cloud Higgs Production

As amplitudeologists we are blessed with an incredibly broad field. In a matter of minutes, conversation can encompass hard experiment and abstract mathematics. The talks this morning were a case in point. Samuel Abreu followed up the QCD computation with research linking abstract algebra, graph theory and physics! More specifically, he introduced a Feynman diagram version of the coproduct structure often employed to describe multiple polylogarithms.

Dr. HuaXing Zhu got the ball rolling on the final mini-session with a topic close to my heart. As you may know I’m currently interested in soft theorems in gauge theory and gravity. HuaXing and Lance Dixon have made an important contribution in this area by computing the complete 2-loop leading soft factor in QCD. Maybe unsurprisingly the breakthrough comes off the back of the master integral and differential equation method which has dominated proceedings this week.

Last but by no means least we had an update from the supergravity mafia. In recent years Dr. Tristan Dennen and collaborators have discovered unexpected cancellations in supergravity theories which can’t be explained by symmetry alone. This raises the intriguing question of whether supergravity can play a role in a UV complete quantum theory of gravity.

The methods involved rely heavily on the color-kinematics story. Intriguingly Tristan suggested that the double copy connection between gauge theory and gravity could explain these miraculous results (in which roughly a billion terms combine to give zero)! The renormalizability of Yang-Mills theory could well go some way to taming gravity’s naive high energy tantrums.

There’s still some way to go before bottles of wine change hands. But it was fitting to end proceedings with an incomplete story. For all that we’ve thought hard this week, it is now that the graft really starts. I’m already looking forward to meeting in Stockholm next year. My personal challenge is to ensure that I’m among the speakers!

Particular thanks to all the organisers, and the many masters students, PhDs, postdocs and faculty members at ETH Zurich who made our stay such an enjoyable and productive one!

Note: this article was originally written on Friday 10th July.



Conference Amplitudes 2015!

It’s conference season! I’m hanging out in very warm Zurich with the biggest names in my field – scattering amplitudes. Sure it’s good fun to be outside the office. But there’s serious work going on too! Research conferences are a vital forum for the exchange of ideas. Inspiration and collaboration flow far more easily in person than via email or telephone. I’ll be blogging the highlights throughout the week.

Monday | Morning Session

To kick-off we have some real physics from the Large Hadron Collider! Professor Nigel Glover’s research provides a vital bridge between theory and experiment. Most physicists in this room are almost mathematicians, focussed on developing techniques rather than computing realistic quantities. Yet the motivation for this quest lies with serious experiments, like the LHC.

We’re currently entering an era where the theoretical uncertainty trumps experimental error. With the latest upgrade at CERN, particle smashers will reach unprecedented accuracy. This leaves us amplitudes theorists with a large task. In fact, the experimentalists regularly draw up a wishlist to keep us honest! According to Nigel, the challenge is to make our predictions twice as good within ten years.

At first glance, this 2x challenge doesn’t seem too hard! After all, Moore’s Law guarantees us a doubling of computing power in the next few years. But the scale of the problem is so large that more computing power won’t solve it! We need new techniques to get to NNLO – that is, corrections that are multiplied by \alpha_s^2, the square of the strong coupling. (Of course, we must also take into account electroweak effects, but we’ll concentrate on the strong force for now).
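To get a rough feel for why NNLO is the target, here’s a back-of-the-envelope sketch in Python. The value of \alpha_s and the naive power-counting are purely illustrative – the true size of higher order corrections is process-dependent:

```python
# Naive size of successive terms in the strong-coupling expansion.
# alpha_s evaluated at the Z mass is roughly 0.118; real NLO/NNLO
# corrections carry process-dependent coefficients, so these numbers
# only indicate the order of magnitude.
alpha_s = 0.118

lo = 1.0              # leading order contribution (normalized to 1)
nlo = alpha_s         # next-to-leading order: one extra power of the coupling
nnlo = alpha_s**2     # NNLO: two extra powers

print(f"NLO  ~ {nlo:.1%} of LO")
print(f"NNLO ~ {nnlo:.1%} of LO")
```

So pushing from NLO to NNLO naively takes the missing terms from the ten-percent level down to the percent level – exactly the territory the upgraded LHC measurements will probe.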

Nigel helpfully broke down the problem into three components. Firstly we must compute the missing higher order terms in the amplitude. The state of the art is lacking at present! Next we need better control of our input parameters. Finally we need to improve our model of how protons break apart when you smash them together in beams.

My research helps in a small part with the final problem. At present I’m finishing up a paper on subleading soft loop corrections, revealing some new structure and developing a couple of new ideas. The hope is that one day someone will use this to better eliminate some irritating low energy effects which can spoil the theoretical prediction.

In May, I was lucky enough to meet Bell Labs president Dr. Marcus Weldon in Murray Hill, New Jersey. He spoke about his vision for a 10x leap forward in every one of their technologies within a decade. This kind of game changing goal requires lateral thinking and truly new ideas.

We face exactly the same challenge in the world of scattering amplitudes. The fact that we’re aiming for only a 2x improvement is by no means a lack of ambition. Rather it underlines the problem: doubling our predictive power entails far more than a 10x increase in the complexity of calculations using current techniques.

I’ve talked a lot about accuracy so far, but notice that I haven’t mentioned precision. Nigel was at pains to distinguish the two, courtesy of this amusing cartoon.

Accuracy Vs Precision

Why is this so important? Well, many people believe that NNLO calculations will reduce the renormalization scale uncertainty in theoretical predictions. This is a big plus point! Many checks on known NNLO results (such as W boson production processes) confirm this hunch. This means the predictions are much more precise. But it doesn’t guarantee accuracy!

To hit the bullseye there’s still much work to be done. This week we’ll be sharpening our mathematical tools, ready to do battle with the complexities of the universe. And with that in mind – it’s time to get back to the next seminar. Stay tuned for further updates!

Update | Monday Evening

View from the Main Building at ETH Zurich

Only time for the briefest of bulletins, following a productive and enjoyable evening on the roof of the ETH main building. Fantastic to chat again to Tomek Lukowski (on ambitwistor strings), Scott Davies (on supergravity 4-loop calculations and soft theorems) and Philipp Haehnal (on the twistor approach to conformal gravity). Equally enlightening to meet many others, not least our gracious hosts from ETH Zurich.

My favourite moment of the day came in Xuan Chen’s seminar, where he discussed a simple yet powerful method to check the numerical stability of precision QCD calculations. It’s well known that these should factorize in appropriate kinematic regions, well described by imaginatively named antenna functions. By painstakingly verifying this factorization in a number of cases Xuan detected and remedied an important inaccuracy in a Higgs to 4 jet result.

Of course it was a pleasure to hear my second supervisor, Professor Gabriele Travaglini speak about his latest papers on the dilatation operator. The rederivation of known integrability results using amplitudes opens up an enticing new avenue for those intrepid explorers who yearn to solve \mathcal{N}=4 super-Yang-Mills!

Finally Dr. Simon Badger’s update on the Edinburgh group’s work was intriguing. One challenge for NNLO computations is to understand 2-loop corrections in QCD. The team have taken an important step towards this by analysing 5-point scattering of right-handed particles. In principle this is a deterministic procedure: draw some pictures and compute.

But to get a compact formula requires some ingenuity. First you need an appropriate integral reduction to identify master integrals. Then you must apply KK and BCJ relations to weed out the dead wood that’s cluttering up the formula unnecessarily. Trouble is, neither of these procedures is uniquely defined – so intelligent guesswork is the order of the day!

That’s quite enough for now – time for some sleep in the balmy temperatures of central Europe.

The Calculus of Particle Scattering

Quantum field theory allows us to calculate “amplitudes” for particle scattering processes. These are mathematical functions that encode the probability of particles scattering through various angles. Although the theory is quite complicated, miraculously the rules for calculating these amplitudes are pretty easy!

The key idea came from physicist Richard Feynman. To calculate a scattering amplitude, you draw a series of diagrams. The vertices and edges of the diagram come with particular factors relevant to the theory. In particular vertices usually carry coupling constants, external edges carry polarization vectors, and internal edges carry functions of momenta.

From the diagrams you can write down a mathematical expression for the scattering amplitude. All seems to be pretty simple. But what exactly are the diagrams you have to draw?

Well there are simple rules governing that too. Say you want to compute a scattering amplitude with 2 incoming and 2 outgoing particles. Then you draw four external lines, labelled with appropriate polarizations and momenta. Now you need to connect these lines up, so that they all become part of one diagram.

This involves adding internal lines, which connect to the external ones at vertices. The types and numbers of lines allowed to connect to a vertex is prescribed by the theory. For example in pure QCD the only particles are gluons. You are allowed to connect either three or four different lines to each vertex.

Here are a few different diagrams you are allowed to draw – they each give different contributions to the overall scattering amplitude. Try to draw some more yourself if you’re feeling curious!

5 gluons at tree level

5 Gluons at 1 loop

Now it’s immediately obvious that there are infinitely many possible diagrams you could draw. Sounds like this is a problem, because adding up infinitely many things is hard! Thankfully, we can ignore a lot of the diagrams.

So why’s that? Well it transpires that each loop in the diagram contributes an extra factor of Planck’s constant \hbar. This is a very small number, so the effect on the amplitude from diagrams with many loops is negligible. There are situations in which this analysis breaks down, but we won’t consider them here.

So we can get a good approximation to a scattering amplitude by evaluating diagrams with only a small number of loops. The simplest have 0-loops, and are known as tree level diagrams because they look like trees. Here’s a QCD example from earlier

5 gluons at tree level

Next up you have 1-loop diagrams. These are also known as quantum corrections because they give the first quantum correction to scattering processes that were traditionally evaluated using classical fields. Here’s a nice QCD 1-loop diagram from earlier

5 Gluons at 1 loop

If you’ve been reading some of my recent posts, you’ll notice I’ve been talking about how to calculate tree level amplitudes. This is sensible because they give the most important contribution to the overall result. But the real reason for focussing on them is that the maths is quite easy.

Things get more complicated at 1-loop level because Feynman’s rules tell us to integrate over the momentum in the loop. This introduces another curve-ball for us to deal with. In particular our arguments for the simple tree level recursion relations now fail. It seems that all the nice tricks I’ve been learning are dead in the water when it comes to quantum corrections.

But thankfully, all is not lost! There’s a new set of tools that exploits the structure of 1-loop diagrams. Back in 1960 Richard Cutkosky noticed that 1-loop diagrams can be split into tree level ones under certain circumstances. This means we can build up information about loop processes from simpler tree results. The underlying principle which makes this possible is called unitarity.

So what on earth is unitarity? To understand this we must return to the principles of quantum mechanics. In quantum theories we can’t say for definite what will happen. The best we can do is assign probabilities to different outcomes. Weird as this might sound, it’s how the universe seems to work at very small scales!

Probabilities measure the chances of different things happening. Obviously if you add up the chances of all possible outcomes you should get 1. Let’s take an example. Suppose you’re planning a night out, deciding whether to go out or stay in. Thinking back over the past few weeks you can estimate the probability of each outcome. Perhaps you stay in 8 times out of 10 and go out 2 times out of 10. 8/10 + 2/10 = 10/10 = 1, just as we’d expect for probability!

Now unitarity is just a mathsy way of saying that probabilities add up to 1. It probably sounds a bit stupid to make up a word for such a simple concept, but it’s a useful shorthand! It turns out that unitarity is exactly what we need to derive Cutkosky’s useful result. The method of splitting loop diagrams into tree level ones has become known as the unitarity method.
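Here’s a minimal numerical sketch of that statement. A toy two-state system evolves under a unitary matrix standing in for the S-matrix – the particular rotation matrix below is purely illustrative:

```python
import numpy as np

# Toy two-outcome quantum system. A unitary matrix plays the role of
# the S-matrix: it evolves the initial state into a superposition of outcomes.
theta = 0.4  # arbitrary mixing angle for this illustration
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

psi_in = np.array([1.0, 0.0])   # start in a definite state
psi_out = S @ psi_in

probs = np.abs(psi_out)**2      # Born rule: probability of each outcome
print(probs, probs.sum())       # the probabilities sum to 1 - that's unitarity
```

Because S satisfies S†S = 1, the total probability comes out as 1 whatever state we feed in – and it’s exactly this constraint on the S-matrix that Cutkosky’s rules exploit.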

The nicest feature of this method is that it’s easy to picture in terms of Feynman diagrams. Let’s plunge straight in and see what that looks like.


At first glance it’s not at all clear what this picture means. But it’s easy to explain step by step. Firstly observe that it’s an equation, just in image form. On the left hand side you see a loop diagram, accompanied by the word \textrm{Disc}. This indicates a certain technical property of a loop diagram that it’s useful to calculate. On the right you see two tree diagrams multiplied together.

Mathematically these diagrams represent formulae for scattering amplitudes. So all this diagram says is that some property of 1-loop amplitudes is produced by multiplying together two tree level ones. This is extremely useful if you know about tree-level results but not about loops! Practically, people usually use this kind of equation to constrain the mathematical form of a 1-loop amplitude.

If you’re particularly sharp-eyed you might notice something about the diagrams on the left and right sides of the equation. The two diagrams on the right come from cutting through the loop on the left in two places. This cutting rule enables us to define the unitarity method for all loop diagrams. This gives us the full result that Cutkosky originally found. He’s perhaps the most aptly named scientist of all time!

We’re approaching the end of our whirlwind tour of particle scattering. We’ve seen how Feynman diagrams give simple rules but difficult maths. We’ve mentioned the tree level tricks that keep calculations easy. And now we’ve observed that unitarity comes to our rescue at loop-level. But most of these ideas are actually quite old. There’s just time for a brief glimpse of a hot contemporary technique.

In our pictorial representation of the unitarity method, we gained information by cutting the loop in two places. It’s natural to ask whether you could make further such cuts, giving more constraints on the form of the scattering amplitude. It turns out that the answer is yes, so long as you allow the momentum in the loop to be a complex number!

You’d be forgiven for thinking that this is all a bit unphysical, but in the previous post we saw that using the complex numbers is actually a very natural and powerful mathematical trick. The results we get in the end are still real, but the quickest route there is via the complex domain.

So why do the complex numbers afford us the extra freedom to cut more lines? Well, the act of cutting a line is mathematically equivalent to taking the corresponding momentum (l-K) to be on-shell; that is to say (l-K)^2 = 0. We live in a four-dimensional world, so l has four components. That means we can solve a maximum of four equations (l-K)^2 = 0 simultaneously. So generically we should be allowed to cut four lines!

However, the equations (l-K)^2 =0 are quadratic. This means we are only guaranteed a solution if the momentum l can be complex. So to use a four line cut, we must allow the loop momentum to be complex. With our simple 2 line cuts there was enough freedom left to keep l real.
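We can see this happen concretely in a quick numerical sketch. The kinematics below – two back-to-back incoming massless momenta and an outgoing one at some angle – are just an illustrative example, but they show how three of the cut conditions are secretly linear while the last quadratic one forces the loop momentum into the complex domain:

```python
import numpy as np

def mdot(a, b):
    """Minkowski dot product, signature (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# Toy 2 -> 2 massless kinematics (illustrative values only):
theta = 0.7
s, c = np.sin(theta), np.cos(theta)
k1 = np.array([1.0, 0.0, 0.0,  1.0])   # incoming along +z
k2 = np.array([1.0, 0.0, 0.0, -1.0])   # incoming along -z
k4 = np.array([1.0,  -s, 0.0,  -c])    # one outgoing leg, sign-reversed

# The four cut conditions l^2 = (l-k1)^2 = (l-k1-k2)^2 = (l+k4)^2 = 0
# reduce (using k_i^2 = 0) to three LINEAR equations plus one quadratic:
#   l.k1 = 0,   l.(k1+k2) = (k1+k2)^2 / 2,   l.k4 = 0,   l^2 = 0.
# The linear ones fix three components of l...
l0 = l3 = 1.0
l1 = -(1.0 + c)/s
# ...and the remaining quadratic l^2 = 0 then forces the last component
# to be IMAGINARY for these real external momenta:
l2 = 1j*l1

l = np.array([l0, l1, l2, l3])
for prop in (l, l - k1, l - k1 - k2, l + k4):
    assert abs(mdot(prop, prop)) < 1e-12   # all four cut lines are on-shell
print("quadruple-cut loop momentum:", l)   # note the imaginary component
```

All four propagators go on-shell simultaneously, but only because l picked up an imaginary part – with real loop momentum there is simply no solution for generic angles.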

The procedure of using several loop cuts is known as the generalized unitarity method. It’s been around since the late 90s, but is still actively used to determine scattering amplitudes. Much of our current knowledge about QCD loop corrections is down to the power of generalized unitarity!

That’s all for now folks. I’ll be covering the mathematical detail in a series of posts over the next few days.

My thanks to Binosi et al. for their excellent program JaxoDraw which eased the drawing of Feynman diagrams.

Why I Like Supersymmetry

Supersymmetry can be variously described as beautiful, convenient, unphysical and contrived. The truth is that nobody really knows whether we’re likely to find it in our universe. Like most theoretical physicists I hope we do, but even if we don’t it can still be a useful mathematical tool.

There are tons of reasons to like supersymmetry, as well as a good many arguments against it. I can’t cover all of these in a brief post, so I’m just going to talk about one tiny yet pretty application I glanced at today.

Let’s talk about scattering processes again, my favourite topic of (physics) conversation. These are described by quantum field theory, which is itself based on very general principles of symmetry. In the standard formulation (imaginatively called the Standard Model) these symmetries involve physical motions in spacetime, as well as more abstract transformations internal to the theory. The spacetime symmetries are responsible for giving particles mass, spin and momentum, while the internal ones endow particles with various charges.

At the quantum level these symmetries actually provide some bonus information, in the form of certain identities that scattering processes have to satisfy. These go by the name of Ward identities. For example QED has both a gauge and a global U(1) symmetry. The Ward identity for the global symmetry tells you that charge must be conserved. The Ward identity for the gauge symmetry tells you that longitudinally polarized photons are unphysical.

If you’re a layman and got lost above then don’t worry. All you need to know is that Ward identities are cool because they tell you extra things about a theory. The more information you have, the more constrained the answer must be, so the less work you have to do yourself! And this is where supersymmetry comes into the picture.

Supersymmetry is another (very special) type of symmetry that pairs up fermions (matter) and bosons (forces). Because it’s a symmetry it has associated Ward identities. These relate different scattering amplitudes. The upshot is that once you compute one amplitude you get more for free. The more supersymmetry you have, the more relations there are, so the easier your job becomes.

So what’s the use if supersymmetry isn’t true then? Well, in general terms it’s still useful to look at these simplified situations because it might help us discover tools that would be hard to uncover otherwise. Take the analogy of learning a language, for example. One way to do it is just to plunge headlong in and try to pick things up as you go along. This way you tend to get lots of everyday phrases quickly, but won’t necessarily understand the structure of the language.

Alternatively you can go to classes that break things down into simpler building blocks. Okay spending one hour studying the subjunctive alone might not seem very useful at first, but when you go back to having a real conversation you’ll pick up new subtleties you never noticed before.

If you’re still unconvinced here’s a (somewhat trivial) concrete example. Recall that you can show that purely positive helicity gluon amplitudes must vanish at tree level in QCD. The proof is easy, but requires some algebraic fiddling. The SUSY Ward identity tells us immediately that in a Super-Yang-Mills (SYM) theory this amplitude must vanish to all orders in the loop expansion. So how do we connect back to QCD?
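Schematically, the identity in question comes from the fact that the supercharge Q annihilates the vacuum, so any correlator of a commutator with Q vanishes. Sketching the standard textbook argument (with \Lambda a gluino and g a gluon, all helicities outgoing):

0 = \langle [Q, \Lambda_1^+ g_2^+ \cdots g_n^+] \rangle \propto A_n(g_1^+, g_2^+, \dots, g_n^+) + (\textrm{terms with two same-helicity gluinos})

The supercharge turns the gluino into a gluon and vice versa. The leftover two-gluino terms vanish because a massless fermion line conserves helicity, so the all-plus gluon amplitude is forced to vanish too – and at any loop order, since the identity is exact.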

Well the gluon superpartners (gluinos) have quadratic coupling to the gluon, so an all gluon scattering amplitude in SYM can’t include any gluinos at tree level. (Just think about trying to draw the diagram if you’re confused)! In other words, at tree level the SYM amplitude is exactly the QCD amplitude, which proves our result.

Not sure what will be on the menu tomorrow – I’m guessing that either color-ordering or unitarity methods will feature. Drop me a comment if you have a preference.

The Parke-Taylor Formula

Unfortunately I’m not going to have time today to give you a full post, mostly due to an abortive mission to Barking! The completion of that mission tomorrow may impact on post length again, so stay tuned for the first full PhD installment.

Nonetheless, here’s a brief tidbit from my first day. Let’s think about the theory of the strong force, which binds quarks and nuclei together. Mathematically it’s governed by quantum chromodynamics (QCD). At its simplest we can study QCD with no matter, so just consider the scattering interactions of the force carrying gluon particles.

It turns out that even this is pretty complicated! At tree level in Feynman diagram calculations, the simplest possible approximation, there are about 12000 terms for a four gluon scattering event. Thankfully these almost all cancel, leaving a single, closed form expression for the scattering amplitude. But why?

There’s a simpler way that makes use of some clever tricks to prove the more general Parke-Taylor formula: the maximally helicity violating (MHV) n gluon amplitude is simply

\frac{\langle 12 \rangle^4}{\langle 12 \rangle \langle 23 \rangle \langle 34 \rangle \dots \langle n1 \rangle }
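To give a flavour of what those angle brackets are: each massless gluon i carries a two-component spinor \lambda_i, and \langle ij \rangle is just the determinant built from spinors i and j. Here’s a small numerical sketch, with random complex spinors standing in for real kinematics (couplings and momentum-conserving delta functions stripped off):

```python
import numpy as np

rng = np.random.default_rng(7)

def angle(li, lj):
    """Angle bracket <ij>: the 2x2 determinant of spinors lambda_i, lambda_j."""
    return li[0]*lj[1] - li[1]*lj[0]

def parke_taylor(lams):
    """Parke-Taylor MHV amplitude <12>^4 / (<12><23>...<n1>),
    with gluons 1 and 2 carrying negative helicity."""
    n = len(lams)
    denom = np.prod([angle(lams[i], lams[(i + 1) % n]) for i in range(n)])
    return angle(lams[0], lams[1])**4 / denom

# Five random complex spinors standing in for 5-gluon kinematics
lams = [rng.normal(size=2) + 1j*rng.normal(size=2) for _ in range(5)]
A = parke_taylor(lams)
print("5-gluon MHV amplitude:", A)
```

The brackets are antisymmetric, \langle ij \rangle = -\langle ji \rangle, and satisfy the Schouten identity – two facts that do much of the work in collapsing those 12000 terms into one line.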

What does this all mean?

Qualitatively, that there is a formalism in which these calculations come out very simply and naturally. This will be the starting point for my exploration of modern day amplitudology – a subject that ranges through twistor theory, complex analysis and high dimensional geometry!

For the real mathematics behind the formula above, I’m afraid you’ll have to wait until tomorrow or Wednesday!