Category Archives: Quantum Field Theory

(Chiral) Supersymmetry in Different Dimensions

This week I’m at the CERN winter school on supergravity, strings and gauge theory. Jonathan Heckman’s talks about a top-down approach to 6D SCFTs have particularly caught my eye. After all, the \mathcal{N}=(2,0) theory is in some sense the mother of my favourite playground, \mathcal{N}=4 in four dimensions.

Unless you’re a supersymmetry expert, the notation should already look odd to you! Why do I write down two numbers to classify supersymmetries in 6D, when one suffices in 4D? The answer comes from a subtlety in the definition of the superalgebra, which isn’t often discussed outside of lengthy (and dull) textbooks. Time to set the record straight!

At kindergarten we learn that supersymmetry adds fermionic generators to the Poincaré algebra, yielding a “unique” extension of the possible spacetime symmetries. Of course, this hides a possible choice – there are many fermionic representations of the Lorentz algebra one could choose for the supersymmetry generators.

Folk theorem: anything you want to prove can be found in Weinberg.

Fortunately, mathematical consistency restricts you to simple options. For the algebra to close, the generators must live in the lowest dimensional representations of the Lorentz algebra – check Weinberg III for a proof. You’re still free to take many independent copies of the supersymmetry generators (up to the restrictions placed by forbidding higher spin particles, which are usually imposed).

Therefore the classification of supersymmetries allowed in different dimensions reduces to the problem of understanding the possible spinor representations. Thankfully, there are tables of these.

[Table: which spinor types (Majorana, Weyl, Majorana-Weyl) exist in each spacetime dimension.]

Reading carefully, you notice that dimensions 2, 6 and 10 are particularly special, in that they admit Majorana-Weyl spinors. Put informally, this means you can have your cake and eat it! Normally, the minimal dimension spinor representation is obtained by imposing a Majorana (reality) or Weyl (chirality) condition. But in this case, you can have both!

This means that in D=2,\ 6 or 10, the supersymmetry generators can be chosen to be chiral. The stipulation \mathcal{N}=(1,0) says that Q should be a left-handed Majorana spinor, for instance. In D = 4 a Majorana spinor must by necessity contain both left-handed and right-handed pieces, so this choice would be impossible! Or, if you like, if I choose Q to be a left-handed Weyl spinor, then its conjugate Q^\dagger is forced to be right-handed.
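To see what goes wrong in four dimensions, it helps to write things in two-component language (standard conventions, included here as an illustration rather than something from the argument above). A 4D Majorana spinor packages a left-handed Weyl spinor \chi_\alpha together with its right-handed conjugate,

\displaystyle \Psi_{\textrm{Majorana}} = \begin{pmatrix} \chi_\alpha \\ \bar{\chi}^{\dot{\alpha}} \end{pmatrix}

so it inevitably contains both chiralities, and imposing a Weyl projection on top of the reality condition would kill the spinor altogether. In the special dimensions listed above the reality and chirality conditions are compatible, so the left-handed and right-handed supercharges can be counted separately – hence the two numbers in \mathcal{N}=(p,q).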


Black Holes and the Information Paradox: a Solution?

Gravity has a good sense of humour. On the one hand, it’s the weakest force we know. The upward push of your chair is more than enough to counteract the pull of the entire planet! Yet gravity has an ace up its sleeve – unlike all other forces, it’s always attractive. For larger objects, the other forces start bickering and cancelling out. But gravity just keeps on getting stronger, until it’s impossible to escape – a black hole!

As a theoretical physicist, I tend to carry a black hole whenever I’m travelling.

My “black hole” bucket.

As you can see, the top of this bucket is the surface of a black hole, otherwise known as the event horizon. When I release water from a glass above the black hole, it is attracted to the black hole, and falls inexorably towards it, never to be seen again. 

This water is doomed!

Okay, I suppose my bucket isn’t a real black hole. After all, it’s the gravity of the Earth that pulls the water in. And light can definitely escape because I can see inside it! But it does accurately represent the bending of space and time. Albert Einstein taught us that everything in the universe rolls around on the cosmic quilt of spacetime, like balls on an elastic sheet. A heavy ball distorts the sheet, creating my black hole bucket.

You might not feel too threatened by black holes – after all, the nearest one is probably 8 billion billion miles away. But in actual fact you could be falling into a black hole right now without noticing! Turns out that for a large enough black hole, the event horizon is so far away that gravity there is very weak. So there’s no reason why you should experience anything special.

Maybe we, and all this, have just passed the point of no return…

This disturbing fact has an unexpected consequence once we bring in the microscopic world of quantum mechanics. Every quantum theory must have a single vacuum, essentially the most boring and lazy state of affairs. If I stand still, the vacuum is just empty space. But as soon as I start accelerating something weird happens. Particles suddenly appear from nowhere!

What does that mean for our black hole? Well if you’re not falling in, you must be accelerating away to oppose the huge pull of gravity! This means that you should see the black hole glowing with particles called Hawking radiation. Remember my black hole full of water? Well, you haven’t fallen in. And that means I have to cover you with Hawking radiation!

It’s confetti! Er, I mean Hawking radiation.

Luckily for your computer, the Hawking confetti that came out isn’t the same as the water that went in. From your perspective the water has simply disappeared! Exactly the same thing seems to happen for real black holes.

This black hole magic trick has become infamous among scientists, resisting all efforts at explanation. But a solution might be at hand, courtesy of Hawking himself! What if you could slightly change the vacuum every time something dropped into the black hole? Then, if you’re very careful, you might just be able to reconstruct the original water from the confetti of Hawking radiation.

Is the event horizon a cosmic hairdresser?

Put another way, the event horizon takes a lock of soft hair from every passing particle as a memento of its existence. This information is eventually carried off by Hawking’s magic particles, reminding us of what we’d lost. It remains for soft experts, like myself, to work out the exact details.

This post is based on a talk given for the Famelab competition. You can read the full paper by Stephen Hawking, Malcolm Perry and Andy Strominger here.

Three Ways with Totally Positive Grassmannians

This week I’m down in Canterbury for a conference focussing on the positive Grassmannian. “What’s that?”, I hear you ask. Roughly speaking, it’s a mysterious geometrical object that seems to crop up all over mathematical physics, from scattering amplitudes to solitons, not to mention quantum groups. More formally we define

\displaystyle \mathrm{Gr}_{k,n} = \{k\mathrm{-planes}\subset \mathbb{C}^n\}

We can view this as the space of k\times n matrices modulo a GL(k) action, which has homogeneous “Plücker” coordinates given by the k \times k minors. Of course, these are not coordinates in the true sense, for they are overcomplete. In particular there exist quadratic Plücker relations between the minors. In principle then, you only need a subset of the homogeneous coordinates to cover the whole Grassmannian.

To get to the positive Grassmannian is easy, you simply enforce that every k \times k minor is positive. Of course, you only need to check this for some subset of the Plücker coordinates, but it’s tricky to determine which ones. In the first talk of the day Lauren Williams showed how you can elegantly extract this information from paths on a graph!
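If you like your definitions concrete, here is a minimal numerical sketch (my own toy example for \mathrm{Gr}_{2,4}, not something from the talk): list the Plücker coordinates of a 2 \times 4 matrix, verify the quadratic Plücker relation, and test total positivity.

import numpy as np
from itertools import combinations

k, n = 2, 4

# A 2x4 matrix representing a point of Gr(2,4); this particular choice
# happens to land in the totally positive part.
M = np.array([[1.0, 1.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  1.0]])

# Pluecker coordinates: all k x k minors, labelled by the columns used.
p = {cols: np.linalg.det(M[:, list(cols)]) for cols in combinations(range(n), k)}

# The single quadratic Pluecker relation for Gr(2,4): p13 p24 = p12 p34 + p14 p23.
lhs = p[(0, 2)] * p[(1, 3)]
rhs = p[(0, 1)] * p[(2, 3)] + p[(0, 3)] * p[(1, 2)]

print(p)
print("Pluecker relation holds:", np.isclose(lhs, rhs))
print("totally positive:", all(v > 0 for v in p.values()))

Even in this baby example you can see the redundancy that the Plücker relation encodes; exactly which subset of minors you really need to test is the combinatorial question answered by the graphs in Lauren’s talk.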

[Figure: a planar graph whose paths encode which minors need checking for positivity.]

In fact, this graph encodes much more information than that. In particular, it turns out that the positive Grassmannian naturally decomposes into cells (i.e. things homeomorphic to a closed ball). The graph can be used to exactly determine this cell decomposition.

And that’s not all! The same structure crops up in the study of quantum groups. Very loosely, these are algebraic structures that result from introducing non-commutativity in a controlled way. More formally, if you want to quantise a given integrable system, you’ll typically want to promote the coordinate ring of a Poisson-Lie group to a non-commutative algebra. This is exactly the sort of problem that Drinfeld et al. started studying 30 years ago, and the field is very much active today.

The link with the positive Grassmannian comes from defining a quantity called the quantum Grassmannian. The first step is to invoke a quantum plane, that is a 2-dimensional algebra generated by a,b with the relation that ab = qba for some parameter q different from 1. The matrices that linearly transform this plane are then constrained in their entries for consistency. There’s a natural way to build these up into higher dimensional quantum matrices. The quantum Grassmannian is constructed exactly as above, but with these new-fangled quantum matrices!

The theorem goes that the torus action invariant irreducible varieties in the quantum Grassmannian exactly correspond to the cells of the positive Grassmannian. The proof is fairly involved, but the ideas are rather elegant. I think you’ll agree that the final result is mysterious and intriguing!

And we’re not done there. As I’ve mentioned before, positive Grassmannia and their generalizations turn out to compute scattering amplitudes. Alright, at present this only works for planar \mathcal{N}=4 super-Yang-Mills. Stop press! Maybe it works for non-planar theories as well. In any case, it’s further evidence that Grassmannia are the future.

From a historical point of view, it’s not surprising that Grassmannia are cropping up right now. In fact, you can chronicle revolutions in theoretical physics according to changes in the variables we use. The calculus revolution of Newton and Leibniz is arguably about understanding properly the limiting behaviour of real numbers. With quantum mechanics came the entry of complex numbers into the game. By the 1970s it had become clear that projectivity was important, and twistor theory was born. And the natural step beyond projective space is the Grassmannian. Viva la revolución!

Romeo and Juliet, through a Wormhole

I spent last week at the Perimeter Institute in Waterloo, Ontario. Undoubtedly one of the highlights was Juan Maldacena’s keynote on resolving black hole paradoxes using wormholes. Matt’s review of the talk below is well worth a read!

4 gravitons

Perimeter is hosting this year’s Mathematica Summer School on Theoretical Physics. The school is a mix of lectures on a topic in physics (this year, the phenomenon of quantum entanglement) and tips and tricks for using the symbolic calculation program Mathematica.

Juan Maldacena is one of the lecturers, which gave me a chance to hear his Romeo and Juliet-based explanation of the properties of wormholes. While I’ve criticized some of Maldacena’s science popularization work in the past, this one is pretty solid, so I thought I’d share it with you guys.

You probably think of wormholes as “shortcuts” to travel between two widely separated places. As it turns out, this isn’t really accurate: while “normal” wormholes do connect distant locations, they don’t do it in a way that allows astronauts to travel between them, Interstellar-style. This can be illustrated with something called a Penrose…


Correlation Functions in Cosmology – What Do They Measure?

The cosmic microwave background (CMB) is a key observable in cosmology.  Experimentalists can precisely measure the temperature of microwave radiation left over from the big bang. The data shows very small differences in temperature across the sky. It’s up to theorists to figure out why!

The most popular explanation invokes a scalar field early in the universe. Quantum fluctuations in the field are responsible for the classical temperature distribution we see today. This argument, although naively plausible, requires some serious thought for full rigour.

Talks by cosmologists often parrot the received wisdom that the two-point correlation function of the scalar field can be observed on the sky. But how exactly is this done? In this post I’ll carefully explain the winding path from theory to observation.

First off, what really is a correlation function? Given two random variables X and Y we can (roughly speaking) determine their correlation as

\mathbb{E}(XY)

Intuitively this definition makes sense – in configurations where X and Y share the same sign there is a positive contribution to the correlation.

There’s another way of looking at correlation. You can think of it as a measure of the probability that for any random sample of X there will be a value of Y within some given distance. Hopefully this too feels intuitive. It can be proved more rigorously using Bayes’ theorem.

This second way of viewing correlation is particularly useful in cosmology. Here the random variables are usually position dependent fields. The correlation then becomes

\langle \chi(x)\chi(y) \rangle

where the average is over the whole sky with the direction of the vector x - y fixed. The magnitude of this vector provides a natural distance scale for the probabilistic interpretation of correlation. We see that the correlation is an avatar for the lumpiness of the distribution at a particular distance scale!

Now let’s focus on the CMB. The temperature fluctuations are defined as the percentage deviation from the average temperature at each point on the sky. Mathematically we write

\delta T / T (\hat{n})

where \hat{n} defines a point on the unit 2-sphere. We want to relate this to theoretical predictions. Given our discussion above, it’s not surprising that our first step is to compute the correlation function

C(\theta) = \displaystyle \langle \frac{\delta T}{ T}(\hat{n}_1) \frac{\delta T}{T}(\hat{n}_2)\rangle

where the average is over the whole sky with the angle \theta between \hat{n}_1 and \hat{n}_2 fixed. This average doesn’t lose any physical information since there’s no preferred direction in the sky! We can conveniently encode the correlation function using spherical harmonics

\displaystyle \delta T / T (\hat{n}) = \sum_{l,m} a_{l,m} Y_{l,m}(\hat{n})

The coefficients a_{l,m} are known as the multipole moments of the temperature distribution. Substituting this into the correlation function definition and using the addition theorem for spherical harmonics, we obtain

\displaystyle C(\theta) = \frac{1}{4\pi}\sum_l C_l P_l (\cos \theta)

where C_l = \sum_m |a_{l,m}|^2. We’re almost finished with our derivation! The final step is to convert from the correlation function to its momentum space representation, known as the power spectrum. With a little work, you can show that the power at multipole number l is given by

l(l+1)C_l

This is exactly the quantity we see plotted from sky map data on graphs comparing inflation theory to experiment!
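As a sanity check on that chain of definitions, here is a small numerical sketch (toy random multipoles, emphatically not real CMB data) going from the a_{l,m} to the C_l, the correlation function and the plotted power.

import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(0)
lmax = 32

# Toy multipole moments a_{l,m}, with 2l+1 values of m at each l.
alm = {l: rng.normal(size=2 * l + 1) for l in range(lmax + 1)}

# C_l as defined in the text: the sum over m of |a_{l,m}|^2.
Cl = np.array([np.sum(np.abs(alm[l]) ** 2) for l in range(lmax + 1)])

# Correlation function C(theta) = (1/4 pi) sum_l C_l P_l(cos theta).
def C(theta):
    x = np.cos(theta)
    return sum(Cl[l] * eval_legendre(l, x) for l in range(lmax + 1)) / (4 * np.pi)

# The quantity plotted against sky-map data: the power l(l+1) C_l at each multipole.
ell = np.arange(lmax + 1)
power = ell * (ell + 1) * Cl

print(C(0.0), C(np.pi / 4))
print(power[:5])

Swap the toy a_{l,m} for coefficients extracted from a real sky map and the same few lines give you the curve that appears on the plot below.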

[Figure: the measured CMB angular power spectrum plotted against the inflationary prediction.]

From the theory perspective, this quantity is fairly easy to extract. We must compute the power spectrum of the primordial fluctuations of the inflaton field. This is merely a matter of quantum field theory, albeit in de Sitter spacetime. Perhaps the most comprehensive account of this procedure is provided in Daniel Baumann’s notes.

Without going into any details, it’s worth mentioning a few theoretical models. The simplest option is to have a massless free inflaton field. This gives a scale-invariant power spectrum, which is almost correct! Adding a mass corrects this result, tilting the power spectrum away from exact scale invariance. This is a better approximation, but has been ruled out by Planck data.

Clearly we need a more general potential. Here’s where the fun starts for cosmologists. The buzzwords are effective field theory, string inflation, non-Gaussianity and multiple fields! But that’ll have to wait for another blog post.

Written at the Mathematica Summer School 2015, inspired by Juan Maldacena’s lecture.

What Gets Conserved at Vertices in Feynman Diagrams?

The simple answer is – everything! If there’s a symmetry in your theory then the associated Noether charge must be conserved at a Feynman vertex. A simple and elegant rule, and one of the great strengths of Feynman’s method.

Even better, it’s not hard to see why all charges are conserved at vertices. Remember, every vertex corresponds to an interaction term in the Lagrangian. These are automatically constructed to be Lorentz invariant so angular momentum and spin had better be conserved. Translation invariance is built in by virtue of the Lagrangian spacetime integral so momentum is conserved too.

Internal symmetries work in much the same way. Color or electric charge must be conserved at each vertex because the symmetry transformation exactly guarantees that contributions from interaction terms cancel transformations of the kinetic terms. If you ain’t convinced go and check this in any Feynman diagram!
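As a concrete example (my own illustration, using the standard QED interaction rather than anything specific from above): the electron-photon vertex comes from the term

\displaystyle \mathcal{L}_{\textrm{int}} = -e \, \bar{\psi}\gamma^\mu \psi A_\mu

Under the global U(1) transformation \psi \to e^{i\alpha}\psi the phase from \psi cancels against the opposite phase from \bar{\psi}, while A_\mu is neutral. The vertex carries no net charge, which is precisely the statement that the electric charge flowing in equals the electric charge flowing out.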

But watch out, there’s a subtlety! Suppose we’re interested in scalar QED for instance. One diagram for pair creation and annihilation looks like

[Feynman diagram: two scalars annihilating into a virtual photon, which then produces another scalar pair.]

Naively you might be concerned that angular momentum and momentum can’t possibly be conserved. After all, doesn’t the photon have spin 1 and mass squared equal to zero? The resolution of this apparent paradox is provided by the realization that the virtual photon is off-shell. This is a theorist’s way of saying that it doesn’t obey the equations of motion. Therefore the usual restrictions from symmetries do not apply to the virtual photon! Thought of another way, the virtual photon is a manifestation of a quantum fluctuation.
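To see why the virtual photon must be off-shell, here’s a quick check (not in the original post, but standard kinematics). Momentum conservation at the first vertex fixes the photon momentum to be

\displaystyle q^\mu = p_1^\mu + p_2^\mu \quad \Rightarrow \quad q^2 = (p_1 + p_2)^2 = s \geq 4m^2

where m is the scalar mass, whereas a real photon would need q^2 = 0. The internal line simply cannot satisfy the photon’s equation of motion; only the external, measurable particles are required to be on-shell.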

Erratum: a previous version of this article erroneously claimed that Noether’s second theorem is related to Ward identities that guarantee gauge invariance at the quantum level. This is not the case, to our knowledge. Indeed, the Ward identity is a statement about averaging over field configurations, which necessarily depends on the behaviour of the path integral measure, a quantity that Noether never concerned herself with! Interestingly, there is a connection between the second theorem and large residual gauge symmetries, as pointed out in https://arxiv.org/abs/1510.07038.

Conference Amplitudes 2015 – Don’t Stop Me Now!

All too soon we’ve reached the end of a wonderful conference. Friday morning dawned with a glimpse of perhaps the most impressive calculation of the past twelve months – Higgs production at three loops in QCD. This high precision result is vital for checking our theory against the data mountain produced by the LHC.

It was apt that Professor Falko Dulat’s presentation came at the end of the week. Indeed the astonishing computational achievement he outlined was only possible courtesy of the many mathematical techniques recently developed by the community. Falko illustrated this point rather beautifully with a word cloud.

[Image: a word cloud of the techniques that went into the Higgs production calculation.]

As amplitudeologists we are blessed with an incredibly broad field. In a matter of minutes conversation can encompass hard experiment and abstract mathematics. The talks this morning were a case in point. Samuel Abreu followed up the QCD computation with research linking abstract algebra, graph theory and physics! More specifically, he introduced a Feynman diagram version of the coproduct structure often employed to describe multiple polylogs.

Dr. HuaXing Zhu got the ball rolling on the final mini-session with a topic close to my heart. As you may know I’m currently interested in soft theorems in gauge theory and gravity. HuaXing and Lance Dixon have made an important contribution in this area by computing the complete 2-loop leading soft factor in QCD. Maybe unsurprisingly the breakthrough comes off the back of the master integral and differential equation method which has dominated proceedings this week.

Last but by no means least we had an update from the supergravity mafia. In recent years Dr. Tristan Dennen and collaborators have discovered unexpected cancellations in supergravity theories which can’t be explained by symmetry alone. This raises the intriguing question of whether supergravity can play a role in a UV complete quantum theory of gravity.

The methods involved rely heavily on the color-kinematics story. Intriguingly Tristan suggested that the double copy connection between gauge theory and gravity could form an explanation for the miraculous results (in which roughly a billion terms combine to give zero)! The renormalizability of Yang-Mills theory could well go some way to taming gravity’s naive high energy tantrums.

There’s still some way to go before bottles of wine change hands. But it was fitting to end proceedings with an incomplete story. For all that we’ve thought hard this week, it is now that the graft really starts. I’m already looking forward to meeting in Stockholm next year. My personal challenge is to ensure that I’m among the speakers!

Particular thanks to all the organisers, and the many masters students, PhDs, postdocs and faculty members at ETH Zurich who made our stay such an enjoyable and productive one!

Note: this article was originally written on Friday 10th July.

 

Conference Amplitudes 2015 – Air on the Superstring

One of the first pieces of Bach ever recorded was August Wilhelmj’s arrangement of the Orchestral Suite in D major. Today the transcription for violin and piano goes by the moniker Air on the G String. It’s an inspirational and popular work in all its many incarnations, not least this one featuring my favourite cellist Yo-Yo Ma.

This morning we heard the physics version of Bach’s masterpiece. Superstrings are nothing new, of course. But recently they’ve received a reboot courtesy of Dr. David Skinner among others. The ambitwistor string is an infinite tension version which only admits right-moving vibrations! At first the formalism looks a little daunting, until you realise that many calculations follow the well-trodden path of the superstring.

Now superstring amplitudes are quite difficult to compute. So hard, in fact, that Dr. Oliver Schlotterer devoted an entire talk to understanding particular functions that emerge when scattering just 4 strings at next-to-leading order. Mercifully, the ambitwistor string is far more well-behaved. The resulting amplitudes are rather beautiful and simple. To some extent, you trade off the geometrical aesthetics of the superstring for the algebraic compactness emerging from the ambitwistor approach.

This isn’t the first time that twistors and strings have been combined to produce quantum field theory. The first attempt dates back to 2003 and work of Edward Witten (of course). Although hugely influential, Witten’s theory was esoteric to say the least! In particular nobody knows how to encode quantum corrections in Witten’s language.

Ambitwistor strings have no such issues! Adding a quantum correction is easy – just put your theory on a donut. But this conceptually simple step threatened a roadblock for the research. Trouble was, nobody actually knew how to evaluate the resulting formulae.

Nobody, that was, until last week! Talented folk at Oxford and Cambridge managed to reduce the donutty problem to the original spherical case. This is an impressive feat – nobody much suspected that quantum corrections would be as easy as a classical computation!

There’s a great deal of hope that this idea can be rigorously extended to higher loops and perhaps even break the deadlock on maximal supergravity calculations at 7-loop level. The resulting concept of off-shell scattering equations piqued my interest – I’ve set myself a challenge to use them in the next 12 months!

Scattering equations, you say? What are these beasts? For that we need to take a closer look at the form of the ambitwistor string amplitude. It turns out to be a sum over the solutions of the following equations

\displaystyle \sum_{j\neq i}\frac{s_{ij}}{z_i - z_j}=0 \qquad \textrm{for each } i

The s_{ij} are just two particle invariants – encoding things you can measure about the speed and angle of particle scattering. And the z_i are just some bonus variables. You’d never dream of introducing them unless somebody told you to! And yet they’re exactly what’s required for a truly elegant description.
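To get a feel for these equations, here is a minimal numerical sketch (my own toy four-point example). Möbius invariance lets you fix three of the z_i, leaving n-3 equations; for n=4 that is a single equation with a single solution, and the remaining equations then vanish automatically.

import numpy as np
from scipy.optimize import brentq

# Four-point massless kinematics: Mandelstam invariants with s + t + u = 0.
s, t, u = 1.0, -0.3, -0.7

# Two-particle invariants s_ij = (p_i + p_j)^2 in all-outgoing conventions,
# so s_12 = s_34 = s, s_13 = s_24 = u, s_14 = s_23 = t.
S = np.array([[0, s, u, t],
              [s, 0, t, u],
              [u, t, 0, s],
              [t, u, s, 0]])

# Fix three punctures using Moebius invariance; one equation is left to solve.
z1, z2, z3 = 0.0, 1.0, 2.0

def eq4(z4):
    # Scattering equation for i = 4: sum over j != 4 of s_4j / (z_4 - z_j).
    return S[3, 0] / (z4 - z1) + S[3, 1] / (z4 - z2) + S[3, 2] / (z4 - z3)

z4 = brentq(eq4, 0.1, 0.9)  # bracket chosen between the fixed punctures at 0 and 1

# With z_4 determined, all four equations vanish: three of them are redundant.
z = [z1, z2, z3, z4]
residuals = [sum(S[i, j] / (z[i] - z[j]) for j in range(4) if j != i) for i in range(4)]
print(z4)          # 6/13 for these invariants
print(residuals)   # all of order 1e-16

Beyond four points the number of solutions grows as (n-3)!, which is where both the fun and the computational effort really start.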

And these scattering equations don’t just crop up in one special theory. Like spies in a Cold War era film, they seem to be everywhere! Dr. Freddy Cachazo alerted us to this surprising fact in a wonderfully engaging talk. We all had a chance to play detective and identify bits of physics from telltale clues! By the end we’d built up an impressive spider’s web of connections, held together by the scattering equations.

[Diagram: the web of theories we built up, all linked by the scattering equations.]

Freddy’s talk put me in mind of an interesting leadership concept espoused by the conductor Itay Talgam. Away from his musical responsibilities he’s carved out a niche as a business consultant, teaching politicians, researchers, generals and managers how to elicit maximal productivity and creativity from their colleagues and subordinates. Critical to his philosophy is the concept of keynote listening – sharing ideas in a way that maximises the response of your audience. This elusive quality pervaded Freddy’s presentation.

Following this masterclass was no mean feat, but one amply performed by my colleague Brenda Penante. We were transported to the world of on-shell diagrams – a modern alternative to Feynman’s ubiquitous approach. These diagrams are known to produce the integrand in planar \mathcal{N}=4 super-Yang-Mills theory to all orders! What’s more, the answer comes out in an attractive d \log form, ripe for integration to multiple polylogarithms.

Cunningly, I snuck the word planar into the paragraph above. This approximation means that the diagrams can be drawn on a sheet of paper rather than requiring 3 dimensions. For technical reasons this is equivalent to working in the theory with an infinite number of color charges, not just the usual 3 we find for the strong force.

Obviously, it would be helpful to move beyond this limit. Brenda explained a decisive step in this direction, providing a mechanism for computing all leading singularities of non-planar amplitudes. By examining specific examples the collaboration uncovered new structure invisible in the planar case.

Technically, they observed that the boundary operation on a reduced graph identified non-trivial singularities which can’t be understood as the vanishing of minors. At present, there’s no proven geometrical picture of these new relations. Amazingly they might emerge from a 1,700-year-old theorem of Pappus!

Bootstraps were back on the agenda to close the session. Dr. Agnese Bissi is a world-expert on conformal field theories. These models have no sense of distance and only know about angles. Not particularly useful, you might think! But they crop up surprisingly often as approximations to realistic physics, both in particle smashing and modelling materials.

Agnese took a refreshingly rigorous approach, walking us through her proof of the reciprocity principle. Until recently this vital tool was little more than an ad hoc assumption, albeit backed up by considerable evidence. Now Agnese has placed it on firmer ground. From here she was able to “soup up” the method. The supercharged variant can compute OPE coefficients as well as dimensions.

Alas, it’s already time for the conference dinner and I haven’t mentioned Dr. Christian Bogner‘s excellent work on the sunrise integral. This charmingly named function is the simplest case where hyperlogarithms are not enough to write down the answer. But don’t just take it from me! You can now hear him deliver his talk by visiting the conference website.

Conversations

I’m very pleased to have chatted with Professor Rutger Boels (on the Lagrangian origin of Yang-Mills soft theorems and concerning the universality of subleading collinear behaviour) and Tim Olson (about determining the relative sign between on-shell diagrams to ensure cancellation of spurious poles).

Note: this post was originally written on Thursday 9th July but remained unpublished. I blame the magnificent food, wine and bonhomie at the conference dinner!

Conference Amplitudes 2015 – Integrability, Colorful Duality and Hiking

The middle day of a conference. So often this is the graveyard slot – when initial hysteria has waned and the final furlong seems far off. The organisers should take great credit that today was, if anything, the most engaging thus far! Even the weather was well-scheduled, breaking overnight to provide us with more conducive working conditions.

Integrability was our wake-up call this morning. I mentioned this hot topic a while back. Effectively it’s an umbrella term for techniques that give you exact answers. For amplitudes folk, this is the stuff of dreams. Up until recently the best we could achieve was an expansion in small or large parameters!

So what’s new? Dr. Amit Sever brought us up to date on developments at the Perimeter Institute, where the world’s most brilliant minds have found a way to map certain scattering amplitudes in 4 dimensions onto a 2 dimensional model which can be exactly solved. More technically, they’ve created a flux tube representation for planar amplitudes in \mathcal{N}=4 super-Yang-Mills, which can then be solved using spin chain methods.

The upshot is that they’ve calculated 6 particle scattering amplitudes at all values of the (’t Hooft) coupling. Their method makes no mention of Feynman diagrams or string theory – the old-fashioned ways of computing this amplitude for weak and strong coupling respectively. Nevertheless the answer matches exactly known results in both of these regimes.

There’s more! By putting their computation under the microscope they’ve unearthed unexpected new physics. Surprisingly the multiparticle poles familiar from perturbative quantum field theory disappear. Doing the full calculation smoothes out divergent behaviour in each perturbative term. This is perhaps rather counterintuitive, given that we usually think of higher-loop amplitudes as progressively less well-behaved. It reminds me somewhat of Regge theory, in which the UV behaviour of a tower of higher spin states is much better than that of each one individually.

The smorgasbord of progress continued in Mattias Wilhelm’s talk. The Humboldt group have a completely orthogonal approach linking integrability to amplitudes. By computing form factors using unitarity, they’ve been able to determine loop-corrections to anomalous dimensions. Sounds technical, I know. But don’t get bogged down! I’ll give you the upshot as a headline – New Link between Methods, Form Factors Say.

Coffee consumed, and it was time to get colorful. You’ll hopefully remember that the quarks holding protons and neutrons together come in three different shades. These aren’t really colors that you can see. But they are internal labels attached to the particles which seem vital for our theory to work!

About 30 years ago, people realised you could split off the color-related information and just deal with the complicated issues of particle momentum. Once you’ve sorted that out, you write down your answer as a sum. Each term involves some color stuff and a momentum piece. Schematically

\displaystyle \textrm{gluon amplitude}=\sum \textrm{color}\times \textrm{kinematics}

What they didn’t realise was that you can shuffle momentum dependence between terms to force the kinematic parts to satisfy the same equations as the color parts! This observation, made back in 2010 by Zvi Bern, John Joseph Carrasco and Henrik Johansson has important consequences for gravity in particular.

Why’s that? Well, if you arrange your Yang-Mills kinematics in the form suggested by those gentlemen then you get gravity amplitudes for free. Merely strip off the color bit and replace it by another copy of the kinematics! In my super-vague language above

\displaystyle \textrm{graviton amplitude}=\sum \textrm{kinematics}\times \textrm{kinematics}
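For the record, the sharper version of those schematic sums (the standard notation in the literature, not something I’ve spelled out above) organises a tree amplitude over trivalent graphs i with propagator denominators D_i, colour factors c_i and kinematic numerators n_i:

\displaystyle \mathcal{A}_n = \sum_i \frac{c_i \, n_i}{D_i}, \qquad \mathcal{M}_n = \sum_i \frac{n_i \, \tilde{n}_i}{D_i}

The magic is that the n_i can be arranged to obey the same Jacobi identities as the c_i; once they do, swapping c_i \to \tilde{n}_i turns the gauge theory amplitude into the gravity one, and the same game extends to loop integrands.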

Dr. John Joseph Carrasco himself brought us up to date with a cunning method of determining the relevant kinematic choice at loop level. I can’t help but mention his touching modesty. Even though the whole community refers to the relations by the acronym BCJ, he didn’t do so once!

Before that Dr. Donal O’Connell took us on an intriguing detour through classical solutions of gravity theories that have an appropriate dual Yang-Mills description, obtainable via a BCJ-style procedure. The idea is beautiful, and seems completely obvious once you’ve been told! Kudos to the authors for thinking of it.

After lunch we enjoyed a well-earned break with a hike up the Uetliberg mountain. I learnt that this large hill is colloquially called Gmuetliberg. Yvonne Geyer helpfully explained that this is a derogatory reference to the tame nature of the climb! Nevertheless the scenery was very pleasant, particularly given that we were mere minutes away from the centre of a European city. What I wouldn’t give for an Uetliberg in London!

Evening brought us to Heidi and Tell, a touristic yet tasty burger joint. Eager to offset some of my voracious calorie consumption I took a turn around the Altstadt. If you’re ever in Zurich it’s well worth a look – very little beats medieval streets, Alpine water and live swing music in the evening light.

Conversations

It was fantastic to meet Professor Lionel Mason and discuss various ideas for extending the ambitwistor string formalism to form factors. I also had great fun chatting to Julio Martinez about linking CHY and BCJ. Finally huge thanks to Dr. Angnis Schmidt-May for patiently explaining the latest research in the field of massive gravity. The story is truly fascinating, and could well be a good candidate for a tractable quantum gravity model!

Erratum: An earlier version of this post mistakenly claimed that Chris White spoke about BCJ for equations of motion. Of course, it was his collaborator Donal O’Connell who delivered the talk. Many thanks to JJ Carrasco for pointing out my error!

Conference Amplitudes 2015 – Integration Ahoy!

I recall fondly a maths lesson from my teenage years. Dr. Mike Wade – responsible as much an anyone for my scientific passion – was introducing elementary concepts of differentiation and integration. Differentiation is easy, he proclaimed. But integration is a tricky beast.

That prescient warning perhaps foreshadowed my entry into the field of amplitudes. For indeed integration is of fundamental importance in determining the outcome of scattering events. To compute precise “loop corrections” necessarily requires integration. And this is typically a hard task.

Today we were presented with a smorgasbord of integrals. Polylogarithms were the catch of the day. This broad class of functions covers pretty much everything you can get when computing amplitudes (provided your definition is generous)! So what are they? It fell to Dr. Erik Panzer to remind us.

Laymen will remember logarithms from school. These magic quantities turn multiplication into addition, giving rise to the ubiquitous schoolroom slide rules predating electronic calculators. Depending on your memory of math class, logarithms are either curious and fascinating or strange and terrifying! But boring they most certainly aren’t.

One of the most amusing properties of a logarithm comes about from (you guessed it) integration. Integrating x^{a-1} is easy, you might recall. You’ll end up with x^a/a plus some constant. But what happens when a is zero? Then the formula makes no sense, because dividing by zero simply isn’t allowed.

And here’s where the logarithm comes to the rescue. As if by witchcraft it turns out that

\displaystyle \int_0^x \frac{dt}{1-t} = -\log (1-x)

This kind of integral crops up when you compute scattering amplitudes. The traditional way to work out an amplitude is to draw Feynman diagrams – effectively pictures representing the answer. Every time you get a loop in the picture, you get an integration. Every time a particle propagates from A to B you get a fraction. Plug through the maths and you sometimes see integrals that give you logarithms!

But logarithms aren’t the end of the story. When you’ve got many loop integrations involved, and perhaps many propagators too, things can get messy. And this is where polylogarithms come in. They’ve got an integral form like logarithms, only instead of one integration there are many!

\displaystyle \textrm{Li}_{\sigma_1,\dots, \sigma_n}(x) = \int_0^x \frac{dz_1}{z_1- \sigma_1}\int_0^{z_1} \frac{dz_2}{z_2-\sigma_2} \dots \int_0^{z_{n-1}}\frac{dz_n}{z_n-\sigma_n}

It’s easy to check that our beloved \log function emerges from setting n=1 and \sigma_1=0. There’s some interesting sociology underlying polylogs. The polylogs I’ve defined are variously known as hyperlogs, generalized polylogs and Goncharov polylogs depending on who you ask. This confusion stems from the fact that these functions have been studied in several fields besides amplitudes, and predictably nobody can agree on a name! One name that is universally accepted is classical polylogs – the simpler functions \textrm{Li}_n emerging when all but one of the \sigmas are set to zero.
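To make this less abstract, here is a quick numerical check (my own toy example) that the iterated integral with \sigma = (0, 1) reproduces – up to a sign – the classical dilogarithm \textrm{Li}_2.

from scipy.integrate import quad

x = 0.5

# Inner integration: int_0^{z1} dz2 / (z2 - 1), which equals log(1 - z1).
def inner(z1):
    return quad(lambda z2: 1.0 / (z2 - 1.0), 0.0, z1)[0]

# Outer integration: int_0^x dz1 / z1 times the inner result.
# The integrand tends to -1 as z1 -> 0, so there is no actual singularity.
iterated, _ = quad(lambda z1: inner(z1) / z1, 0.0, x)

# Classical dilogarithm from its series: Li_2(x) = sum_{k >= 1} x^k / k^2.
Li2 = sum(x ** k / k ** 2 for k in range(1, 200))

print(iterated)   # about -0.5822
print(-Li2)       # about -0.5822, so the iterated integral equals -Li_2(x)

The minus sign is just the price of writing the kernels as 1/(z-\sigma); otherwise the nested integrals really are the familiar special functions.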

So far we’ve just given names to some integrals we might find in amplitudes. But this is only the beginning. It turns out there are numerous interesting relations between different polylogs, which can be encoded by clever mathematical tools going by esoteric names – cluster algebras, motives and the symbol to name but a few. Erik warmed us up on some of these topics, while also mentioning that even generalized polylogs aren’t the whole story! Sometimes you need even wackier functions like elliptic polylogs.

All this gets rather technical quite quickly. In fact, complicated functions and swathes of algebra are a sad corollary of the traditional Feynman diagram approach to amplitudes. But thankfully there are new and powerful methods on the market. We heard about these so-called bootstraps from Dr. James Drummond and Dr. Matt von Hippel.

The term bootstrap is an old one, emerging in the 1960s to describe methods which use symmetry, locality and unitarity to determine amplitudes. It’s probably a humorous reference to the old English saying “pull yourself up by your bootstraps” to emphasise the achievement of lofty goals from meagre beginnings. Research efforts in the 60s had limited success, but the modern bootstrap programme is going from strength to strength. This is due in part to our much improved understanding of polylogarithms and their underlying mathematical structure.

The philosophy goes something like this. Assume that your answer can be written as a polylog (more precisely as a sum of polylogs, with the integrand expressed as \prod_i d \log(R_i) for appropriate rational functions R_i). Now write down all the possible rational functions that could appear, based on your knowledge of the process. Treat these as alphabet bricks. Now put your alphabet bricks together in every way that seems sensible.

The reason the method works is that there’s only one way to make a meaningful “word” out of your alphabet bricks. Locality forces the first letter to be a kinematic invariant, or else your answer would have branch cuts which don’t correspond to physical particles. Take it from me, that isn’t allowed! Supersymmetry cuts down the possibilities for the final letter. A cluster algebra ansatz also helps keep the possibilities down, though a physical interpretation for this is as yet unknown. For 7 particles this is more-or-less enough to get you the final answer. But weirdly 6 particles is more complicated! Counter-intuitive, but hey – that’s research. To fix the six point result you must appeal to impressive all-loop results from integrability.

Next up for these bootstrap folk is higher loops. According to Matt, the 5-loop result should be gettable. But beyond that the sheer number of functions involved might mean the method crashes. Naively one might expect that the problem lies with having insufficiently many constraints. But apparently the real issue is more prosaic – we just don’t have the computing power to whittle down the options beyond 5-loop.

With the afternoon came a return to Feynman diagrams, but with a twist. Professor Johannes Henn talked us through an ingenious evaluation method based on differential equations. The basic concept has been known for a long time, but relies heavily on choosing the correct basis of integrals for the diagram under consideration. Johannes’ great insight was to use conjectures about the dlog form of integrands to suggest a particularly nice set of basis integrals. This makes solving the differential equations a cinch – a significant achievement!

Now the big question is – when can this new method be applied? As far as I’m aware there’s no proof that this nice integral basis always exists. But it seems that it’s there for enough cases to be useful! The day closed with some experimentally relevant applications, the acid test. I’m now curious as to whether you can link the developments in symbology and cluster algebras with this differential equation technique to provide a mega-powerful amplitude machine…! And that’s where I ought to head to bed, before you readers start to worry about theoretical physicists taking over the world.

Conversations

It was a pleasure to chat all things form factors with Brenda Penante, Mattias Wilhelm and Dhritiman Nandan at lunchtime. Look out for an “on-shell” blog post soon.

I must also thank Lorenzo Magnea for an enlightening discussion on soft theorems. Time to bury my head in some old papers I’d previously overlooked!