Category Archives: Math

The Post-Truth Society is a Failure to Teach Probability

Many commentators have suggested that recent unexpected events, like the continuing support of US voters for Donald Trump, or the victory of the UK Brexit campaign, should be attributed to a new era of human reasoning, where truth is no longer important. Ironically, the real truth is much more subtle, and considerably more damning of the very media who peddle the post-truth myth.

Probabilistically, we may quantify our trust in expert opinion by measuring the chances that some event happens, given that experts have predicted it. Mathematically we write

\displaystyle P(\textrm{trust}) = P(A \textrm{ happens } | \textrm{ experts predict } A)

Now the media advocate estimating this directly. Just think about all the times that experts have made a prediction, and work out how often they got it right. Although this is mathematically correct, it isn’t of much practical use. After all, who wanders round reciting all the predictions that experts have ever made? The times we tend to remember predictions are when they were wrong!

This problem is inherent in the traditional frequentist interpretation of probability, taught in schools. This says that probabilities can be calculated empirically by repeating an experiment many times, and looking at the outcomes. While this works fine for tossing coins, it doesn’t intuitively extend to more complicated situations.
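
To see the recipe in miniature, here’s a quick Python sketch (the fair coin and the number of trials are my own illustrative choices):

import random

# Frequentist recipe: estimate P(heads) by repeating the experiment many times.
trials = 100_000
heads = sum(random.random() < 0.5 for _ in range(trials))
print(heads / trials)  # ~0.5: easy for coins, impractical for one-off expert predictions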

So when the media (and to some extent experts themselves) encourage people to make the estimate directly, we’re bound to make mistakes. In particular, the predictions we remember are disproportionately the ones that turned out to be wrong! This artificially bumps up the importance of outlier events like the financial crash of 2008. The result is that we put a very low value on our trust in the experts.

Fortunately, there’s a better way of looking at the problem, which is sadly unknown to the vast majority of people. It goes by the name of Bayes’ Theorem. Despite the technical name, it represents a far easier way to quantify uncertainty. What’s more, it lies closer to our intuition that probability should measure how much we believe in something.

Put simply, Bayes’ theorem says that our belief that A will happen, given supporting evidence from experts, is proportional to the chance that experts predicted A given that it happens, multiplied by our prior belief in A. (Strictly speaking, there’s another factor involved, but we won’t need it.) In maths we can write

\displaystyle P(\textrm{trust}) \propto P(\textrm{experts predicted } A \ | \ A \textrm{ happens}) \times P(A \textrm{ happens})

This makes sense – we should update our prior expectations, based on the likelihood that A happens. Of course, some people are so entrenched in their view as to never change their minds. But there are often many “floating voters” who might adopt just such a strategy in forming an opinion, if encouraged to do so! The undecided segment of the population played an important role in the Brexit vote, and may yet become a crucial factor in the US election.

So what’s the advantage of this formula over the frequentist interpretation? Because it sits closer to our intuition, it’s much easier for us to make a good guess. The first term in the product asks us to consider all the times that events have happened, and determine the proportion of the time that experts correctly predicted them. Notice that this reverses the perspective we had in the original direct formula!

This change of viewpoint is great news for our intuition. Most of us can think of various economic or political events that have happened over our lifetime. And while some of these were unexpected (like the financial crash), many others were not – witness the long periods of economic stability and the foregone conclusions of many elections over the past 20 years. So we are likely to say that when an event happens, experts have often predicted it first.

Let’s look at this mathematically. Rather than calculating the probability of trust, which is a slightly wishy-washy concept, it’s better to compare our trust and our doubt. We do this automatically in everyday life, as a sanity check. If we have niggling doubts, it’s probably because we’ve overestimated our trust, and vice versa. In equation form we’ll determine

\displaystyle \textrm{trust factor} = \frac{ P(\textrm{trust}) }{ P (\textrm{doubt}) }

If this is (say) bigger than 3, we should be happy to trust the experts, since it far outweighs any doubts we might have. Let’s suppose that prior to the expert opinion, we have no strong view as to whether A will happen or not. In other words

\displaystyle P(A \textrm{ happens}) = P(A \textrm{ doesn't happen}) = 0.5

Then using Bayes we find that our trust factor is a so-called Bayes factor, namely

\displaystyle \textrm{trust factor} = \frac{P(\textrm{experts predicted } A \ | \ A \textrm{ happens})}{P(\textrm{experts predicted } A \ | \ A \textrm{ doesn't happen})}
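
To see where the priors have gone, write out Bayes’ theorem for our trust and our doubt separately, and take the ratio:

\displaystyle \textrm{trust factor} = \frac{P(\textrm{experts predicted } A \ | \ A \textrm{ happens}) \times P(A \textrm{ happens})}{P(\textrm{experts predicted } A \ | \ A \textrm{ doesn't happen}) \times P(A \textrm{ doesn't happen})}

With our equal priors of 0.5, the final factors cancel top and bottom, leaving exactly the Bayes factor above.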

We’ve already argued that the term in the numerator is plausibly large. It is also sensible to think that the term in the denominator is relatively small. We’d all agree that it’s rather common for major events not to happen. And of all the events that don’t happen, experts rarely claim they will. Of course, there are some doomsayers who frequently forecast disaster, but they’re mostly on the periphery of expert opinion.

So if the numerator is large and the denominator is small, we can conclude that our trust factor is quite large indeed. It’s not unreasonable to suspect it’ll comfortably exceed 3. With the right intuitive tools, we’ve arrived at a more reasonable level of trust in our experts. Sadly, such arguments are few and far between in a media hell-bent on a “keep it simple, stupid” attitude, and expert spokespeople convinced that “dumbing down” is the only way to communicate. Ironically, this is more alienating and less intuitive than Bayesian logic!
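
If you’d like to play with the numbers yourself, here’s a minimal sketch – the two probabilities are illustrative guesses of mine, not measured data:

# Guessed inputs to the Bayes factor argument (illustrative, not measured).
p_predicted_given_happened = 0.7   # events that happen were often predicted first
p_predicted_given_didnt = 0.1      # events that never happen were rarely predicted

trust_factor = p_predicted_given_happened / p_predicted_given_didnt
print(trust_factor)  # 7.0 -- comfortably above our threshold of 3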

The post-truth society is a myth, created by a media who themselves are confused about probability. More accurately, we are currently living in a pre-inference society. Bayesian estimation has never been adequately taught in schools, and for many years this was no great loss. But in the modern world, with the ubiquitous availability of data, it is imperative that we provide people with adequate logical tools for inferring reasonable decisions.

P.S. For balance, here’s why Bayes isn’t always best!

A Second Course in String Theory

I’ve been lucky enough to have funding from SEPnet to create a new lecture course recently, following on from David Tong’s famous Part III course on string theory. The notes are intended for the beginning PhD student, bridging the gap between Masters level and the daunting initial encounters with academic seminars. If you’re a more experienced journeyman, I hope they’ll provide a useful reminder of certain terminology. Remember what twist is? What does a D9-brane couple to? Curious about the scattering equations? Now you can satisfy your inner quizmaster!

Here’s a link to the notes. Comments, questions and corrections are more than welcome.

Thanks are particularly due to Andy O’Bannon for his advice and support throughout the project. 

Collabor8 – A New Type of Conference

Next month, I’m running a day-long conference here at QMUL. The meeting is intended to give early career researchers the chance to seek possible collaborations. Despite living in this globalised age, all too often PhD students and postdocs are restricted to working with faculty members in their current institution. This is no surprise – at the conferences and meetings where networking opportunities arise, we’re usually talking about completed work, rather than discussing new problems.

We’re shaking up the status quo by asking our participants to speak about ongoing research, and in particular to outline roadblocks where they need input from theorists with different expertise. What’s more, we’re throwing together random teams for speed collaboration sessions on the issues presented, getting the ball rolling for possible acknowledgements and group projects. We’re extremely fortunate to have the inspirational Fernando Alday as our guest speaker, a serial collaborator himself.

The final novelty of this conference comes in digital form. The conference website doubles as a social network, making it easy to keep track of your connections and maintain interactions after the meeting. We hope to generate good content on the site during the day, where some participants will be invited to act as scribes and note down any interesting ideas that arise. This way, there’ll be a valuable and evolving database of ideas ready for future collaborations to draw on.

Over to you! If you’re doing a PhD or a postdoc in the UK, or you know someone who is, send them a link to the website

http://www.collabor8research.org

If you’re further afield, feel free to follow developments from afar. In the long term we’re hoping to roll out the social network to other conferences and institutions – watch this space!

Tidbits from the High Table of Physics

This evening, I was lucky enough to dine with Brenda Penante, Stephane Launois, Lionel Mason, Nima Arkani-Hamed, Tom Lenagan and David Hernandez. Here for your delectation are some tidbits from the conversation.

  • The power of the renormalisation group comes from the fact that the 1-loop leading logarithm suffices to fix the leading logarithm at all loops. Here’s a reference.
  • The BPHZ renormalisation scheme (widely seen in the physics community as superseded by the Wilsonian renormalisation group) has a fascinating Hopf algebra structure.
  • The central irony of QFT is this: IR divergences were discovered before UV divergences and “solved” almost instantly. Theorists then wrangled for a couple of decades over the UV divergences, before finally Wilson laid their qualms to rest. At this point the experimentalists came back and told them that it was the IR divergences that were the real problem again. (This remains true today, hence the motivation behind my work on soft theorems.)
  • IR divergences are a consequence of the Lorentzian signature. In Euclidean spacetime you have a clean separation of scales, but not so in our world. (Struggling to find a reference for this, anybody know of one?)
  • The next big circular collider will probably have a circumference of 100km, reach energies 7 times that of the LHC and cost at least £20 billion.
  • The Fourier transform of any polynomial in cos(x) with roots at \pm (2i+1) / (2n+1) for 1 \leq i \leq n-1 has all positive coefficients. This is equivalent to the no-ghost theorem in string theory, proved by Peter Goddard, and seems to require some highly non-trivial machinery. (Again, does anyone have a reference?)

and finally

  • Never, ever try to copy algebra into Mathematica late at night!

Three Ways with Totally Positive Grassmannians

This week I’m down in Canterbury for a conference focussing on the positive Grassmannian. “What’s that?”, I hear you ask. Roughly speaking, it’s a mysterious geometrical object that seems to crop up all over mathematical physics, from scattering amplitudes to solitons, not to mention quantum groups. More formally we define

\displaystyle \mathrm{Gr}_{k,n} = \{k\mathrm{-planes}\subset \mathbb{C}^n\}

We can view this as the space of k\times n matrices modulo a GL(k) action, which has homogeneous “Plücker” coordinates given by the k \times k minors. Of course, these are not coordinates in the true sense, for they are overcomplete. In particular there exist quadratic Plücker relations between the minors. In principle then, you only need a subset of the homogeneous coordinates to cover the whole Grassmannian.
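
If you’d like to see this concretely, here’s a small numerical sketch for the simplest case Gr(2,4), which has a single quadratic Plücker relation (numpy assumed; columns labelled 0 to 3):

import numpy as np

# A random 2x4 matrix represents a point of Gr(2,4).
M = np.random.rand(2, 4)

# Pluecker coordinate p_ij: the 2x2 minor built from columns i and j.
def p(i, j):
    return np.linalg.det(M[:, [i, j]])

# The single quadratic Pluecker relation: p01*p23 - p02*p13 + p03*p12 = 0.
print(p(0, 1) * p(2, 3) - p(0, 2) * p(1, 3) + p(0, 3) * p(1, 2))  # ~0 up to rounding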

Getting to the positive Grassmannian is easy: you simply enforce that every k \times k minor is positive. Of course, you only need to check this for some subset of the Plücker coordinates, but it’s tricky to determine which ones. In the first talk of the day Lauren Williams showed how you can elegantly extract this information from paths on a graph!


In fact, this graph encodes much more information than that. In particular, it turns out that the positive Grassmannian naturally decomposes into cells (i.e. things homeomorphic to a closed ball). The graph can be used to exactly determine this cell decomposition.

And that’s not all! The same structure crops up in the study of quantum groups. Very loosely, these are algebraic structures that result from introducing non-commutativity in a controlled way. More formally, if you want to quantise a given integrable system, you’ll typically want to promote the coordinate ring of a Poisson-Lie group to a non-commutative algebra. This is exactly the sort of problem that Drinfeld et al. started studying 30 years ago, and the field is very much active today.

The link with the positive Grassmannian comes from defining a quantity called the quantum Grassmannian. The first step is to invoke a quantum plane, that is a 2-dimensional algebra generated by a,b with the relation that ab = qba for some parameter q different from 1. The matrices that linearly transform this plane are then constrained in their entries for consistency. There’s a natural way to build these up into higher dimensional quantum matrices. The quantum Grassmannian is constructed exactly as above, but with these new-fangled quantum matrices!
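
To get a feel for how ab = qba controls the non-commutativity, here’s a toy sketch (names and conventions my own) that normal-orders any word in a and b, tracking the power of q picked up along the way:

# Toy quantum plane: since ab = q ba, each swap ba -> ab costs a factor q^(-1).

def normal_order(word):
    """Return (q exponent, #a, #b) such that word = q^exponent * a^m * b^n."""
    letters = list(word)
    swaps = 0
    # Bubble each 'a' leftwards past any 'b' standing in front of it.
    for i in range(1, len(letters)):
        j = i
        while j > 0 and letters[j] == 'a' and letters[j - 1] == 'b':
            letters[j - 1], letters[j] = letters[j], letters[j - 1]
            swaps += 1
            j -= 1
    return -swaps, letters.count('a'), letters.count('b')

print(normal_order('bab'))  # (-1, 1, 2), i.e. bab = q^(-1) a b^2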

The theorem goes that the torus action invariant irreducible varieties in the quantum Grassmannian exactly correspond to the cells of the positive Grassmannian. The proof is fairly involved, but the ideas are rather elegant. I think you’ll agree that the final result is mysterious and intriguing!

And we’re not done there. As I’ve mentioned before, positive Grassmannia and their generalizations turn out to compute scattering amplitudes. Alright, at present this only works for planar \mathcal{N}=4 super-Yang-Mills. Stop press! Maybe it works for non-planar theories as well. In any case, it’s further evidence that Grassmannia are the future.

From a historical point of view, it’s not surprising that Grassmannia are cropping up right now. In fact, you can chronicle revolutions in theoretical physics according to changes in the variables we use. The calculus revolution of Newton and Leibniz is arguably about understanding properly the limiting behaviour of real numbers. With quantum mechanics came the entry of complex numbers into the game. By the 1970s it had become clear that projectivity was important, and twistor theory was born. And the natural step beyond projective space is the Grassmannian. Viva la revolución!

Academia: Leadership and Training Required

If you know someone who works in academia, chances are they’ve told you that research takes time. A lot of time, that is. But does it have to?

It’s 3:15pm on an overcast Wednesday afternoon. A group of PhD students, postdocs and senior academics sit down to discuss their latest paper. They’re “just finishing” the write-up, almost ready to submit to a journal. This phase has already been in motion for months, and could well take another week or two.

Of course, this sounds monumentally inefficient to corporate ears. In a world where time is money, three months of tweaking and wrangling could not be tolerated. So why is it acceptable in academia? Because nobody is in charge! Many universities suffer from a middle management leadership vacuum; the combined result of lack of training and unwise promotions.

It is ironic that renowned bastions of learning have fallen so far behind their industrial counterparts when it comes to research efficiency. When you consider that lecturers need no teacher training, supervisors no management expertise, and interviewers no subconscious bias training, the problem becomes less surprising. No wonder academia is leading the way on gender inequality.

The solution – a cultural shake-up. Universities must offer more teaching-only posts, breaking the vicious cycle which sees disgruntled researchers forced to lecture badly, and excellent teachers forced out from lack of research output. Senior management should mandate leadership training for group leaders and supervisors, empowering them to manage effectively and motivate their students. Doctoral candidates, for that matter, might also benefit from a course in followership, the latest business fashion. Perhaps most importantly, higher education needs to stop hiring, firing and promoting based purely on research brilliance, with no regard for leadership, teamwork and communication skills.

Conveniently, higher education has ready made role-models in industrial research organisations. Bell Labs is a good example. Not long ago this once famous institution was in the doldrums, even forced to shut down for a period. But under the inspirational leadership of Marcus Weldon, the laboratory is undergoing a renaissance. Undoubtedly much of this progress is built on Marcus’ clear strategic goals and emphasis on well-organised collaboration.

Universities might even find inspiration closer to home. Engineering departments worldwide are developing ever-closer ties to industry, with beneficial effects on research culture. From machine learning to aerospace, corporate backing provides not only money but also business sense and career development. These links favour researchers with client-facing skills, rather than the stuffy, chalk-covered supermind of yesteryear. That doesn’t mean there isn’t a place for pure research – far from it. But insular positions ought to be exceptional, rather than the norm.

At the Scopus Early Career Researcher Awards a few weeks ago, Elsevier CEO Ron Mobed rightly bemoaned the loss of young research talent from academia. The threefold frustrations of poor job security, hackneyed management and desultory training hung unspoken in the air. If universities, journals and learned societies are serious about tackling this problem they’ll need a revolution. It’s time for the 21st century university. Let’s get down to the business of research.


(Not) How to Write your First Paper

18 months ago I embarked on a PhD, fresh-faced and enthusiastic. I was confident I could learn enough to understand research at the coal-face. But when faced with the prospect of producing an original paper, I was frankly terrified. How on earth do you turn vague ideas into concrete results?

In retrospect, my naive brain was crying out for an algorithm. Nowadays we’re so trained to jump through examination hoops that the prospect of an open-ended project terrifies many. Here’s the bad news – there’s no well-marked footpath leading from academic interest to completed write-up.

So far, so depressing. Or is it? After all, a PhD is meant to be a voyage of discovery. Sure, if I put you on a mountain-top with no map you’d likely be rather scared. But there’s also something exhilarating about striking out into the unknown. Particularly, that is, if you’re armed with a few orienteering skills.

Where next?

I’m about to finish my first paper. I can’t and won’t tell you how to write one! Instead, here’s a few items for your kit-bag on the uncharted mountain of research. With these in hand, you’ll be well-placed to forge your own route towards the final draft.

1. Your Supervisor

Any good rambler takes a compass. Your supervisor is your primary resource for checking direction. Use them!

Yes, I really am going to keep pursuing this analogy.

It took me several months (and one overheard moan) to start asking for pointers. Nowadays, if I’m completely lost for more than a few hours, I’ll seek guidance.

2. Your Notes

You start off without a map. It’s tempting to start with unrecorded cursory reconnaissance, just to get the lie of the land. Although initially speedy, you have to be super-disciplined lest you go too far and can’t retrace your steps. You’d be better off making notes as you go. Typesetting languages and subversion repositories can help you keep track of where you are.

Don’t forget to make a map!

Your notes will eventually become your paper – hence their value! But there’s a balance to be struck. It’s tempting to spend many hours on pretty formatting for content which ends up in Appendix J. If in doubt, consult your compass.

3. Your Colleagues

Some of them have written papers before. All of them have made research mistakes. Mutual support is almost as important in a PhD programme as on a polar expedition! But remember that research isn’t a race. If your colleague has produced three papers in the time it’s taken you to write one, that probably says more about their subfield and style of work than your relative ability.

You’ll need your colleagues just as much as Shackleton’s Southern Party did.

4. Confidence

Aka love of failure. If you sit on top of the mountain and never move then you’ll certainly stay away from dangerous edges. But you’ll also never get anywhere! You will fail much more than you succeed during your PhD. Every time you pursue an idea which doesn’t work, you are homing in on the route which will.

Be brave! (Though maybe you should wait for the snow to melt first).

In this respect, research is much like sport – positive psychology is vital. Bottling up frustration is typically unhelpful. You’d be much better off channelling that into…

5. Not Writing Your Paper

You can’t write a paper 24/7 and stay sane. Thankfully a PhD is full of other activities that provide mental and emotional respite. My most productive days have coincided with seminars, teaching commitments and meetings. You should go to these, especially if you’re feeling bereft of motivation.

Why not join a choir?


And your non-paper pursuits needn’t be limited to the academic sphere. A regular social hobby, like sports, music or debating, can provide a much needed sense of achievement. Many PhDs I know also practise some form of religion, spirituality or meditation. Time especially set aside for mental rest will pay dividends later.

6. Literature 

No, I don’t mean related papers in your field (though these are important). I’ve found fiction, particularly that with intrigue and character development, hugely helpful when I’m struggling to cross an impasse. Perhaps surprisingly, some books aimed at startups are also worth a read. A typical startup faces open-ended problems akin to those of academic research.


Finally, remember that research is by definition iterative! You cannot expect your journey to end within a month. As you chart the territory around you, try to enjoy the freedom of exploring. Who knows, you might just spot a fascinating detour that leads directly to an unexpected paper.

My thanks to Dr. Inger Mewburn and her wonderful Thesis Whisperer blog for inspiring this post.

Bad Science: Thomson-Reuters Publishes Flawed Ranking of Hottest Research

Thomson-Reuters has reportedly published its yearly analysis of the hottest trends in science research. Increasingly, governments and funding organisations use such documents to identify strategic priorities. So it’s profoundly disturbing that their conclusions are based on shoddy methodology and bad science!

The researchers first split recent papers into 10 broad areas, of which Physics was one. And then the problems began. According to the official document

Research fronts assigned to each of the 10 areas were ranked by total citations and the top 10 percent of the fronts in each area were extracted.

Already the authors have fallen into two fallacies. First, they have failed to normalise for the size of the field. Many fields (like Higgs phenomenology) will necessarily generate large quantities of citations due to their high visibility and current funding. Of course, this doesn’t mean that we’ve cracked naturalness all of a sudden!

Second, their analysis is far too coarse-grained. Physics contains many disciplines, with vastly different publication rates and average numbers of citations. Phenomenologists publish swiftly and regularly, while theorists write longer papers with slower turnover. Experimentalists often fall somewhere in the middle. Clearly the Thomson-Reuters methodology favours phenomenology over all else.

But wait, the next paragraph seems to address these concerns. To some extent they “cherry pick” the hottest research fronts to account for these issues. According to the report

Due to the different characteristics and citation behaviors in various disciplines, some fronts are much smaller than others in terms of number of core and citing papers.

Excellent, I hear you say – tell me more! But here comes more bad news. It seems there’s no information on how this cherry picking was done! There’s no mention of experts consulted in each field. No mathematical detail about how vastly different disciplines were fairly compared. Thomson-Reuters have decided that all the reader deserves is a vague placatory paragraph.

And it gets worse. It turns out that the scientific analysis wasn’t performed by a balanced international committee. It was handed off to a single country – China. Who knows, perhaps they were the lowest bidder? Of course, I couldn’t possibly comment. But it seems strange to me to pick a country famed for its grossly flawed approach to scientific funding.

Governments need to fund science based on quality and promise, not merely quantity. Thomson-Reuters’ simplistic analysis is bad science at its very worst. It seems to offer intelligent information but in fact is misleading, flawed and ultimately dangerous.

Conference Amplitudes 2015 – Don’t Stop Me Now!

All too soon we’ve reached the end of a wonderful conference. Friday morning dawned with a glimpse of perhaps the most impressive calculation of the past twelve months – Higgs production at three loops in QCD. This high precision result is vital for checking our theory against the data mountain produced by the LHC.

It was well that Professor Falko Dulat‘s presentation came at the end of the week. Indeed the astonishing computational achievement he outlined was only possible courtesy of the many mathematical techniques recently developed by the community. Falko illustrated this point rather beautifully with a word cloud.

Word Cloud Higgs Production

As amplitudeologists we are blessed with an incredibly broad field. In a matter of minutes conversation can encompass hard experiment and abstract mathematics. The talks this morning were a case in point. Samuel Abreu followed up the QCD computation with research linking abstract algebra, graph theory and physics! More specifically, he introduced a Feynman diagram version of the coproduct structure often employed to describe multiple polylogs.

Dr. HuaXing Zhu got the ball rolling on the final mini-session with a topic close to my heart. As you may know I’m currently interested in soft theorems in gauge theory and gravity. HuaXing and Lance Dixon have made an important contribution in this area by computing the complete 2-loop leading soft factor in QCD. Maybe unsurprisingly the breakthrough comes off the back of the master integral and differential equation method which has dominated proceedings this week.

Last but by no means least we had an update from the supergravity mafia. In recent years Dr. Tristan Dennen and collaborators have discovered unexpected cancellations in supergravity theories which can’t be explained by symmetry alone. This raises the intriguing question of whether supergravity can play a role in a UV complete quantum theory of gravity.

The methods involved rely heavily on the color-kinematics story. Intriguingly Tristan suggested that the double copy connection between gauge theory and gravity could explain the miraculous results (in which roughly a billion terms combine to give zero)! The renormalizability of Yang-Mills theory could well go some way to taming gravity’s naive high energy tantrums.

There’s still some way to go before bottles of wine change hands. But it was fitting to end proceedings with an incomplete story. For all that we’ve thought hard this week, it is now that the graft really starts. I’m already looking forward to meeting in Stockholm next year. My personal challenge is to ensure that I’m among the speakers!

Particular thanks to all the organisers, and the many masters students, PhDs, postdocs and faculty members at ETH Zurich who made our stay such an enjoyable and productive one!

Note: this article was originally written on Friday 10th July.


Conference Amplitudes 2015 – Air on the Superstring

One of the first pieces of Bach ever recorded was August Wilhelmj’s arrangement of the Orchestral Suite in D major. Today the transcription for violin and piano goes by the moniker Air on the G String. It’s an inspirational and popular work in all its many incarnations, not least this one featuring my favourite cellist Yo-Yo Ma.

This morning we heard the physics version of Bach’s masterpiece. Superstrings are nothing new, of course. But recently they’ve received a reboot courtesy of Dr. David Skinner among others. The ambitwistor string is an infinite tension version which only admits right-moving vibrations! At first the formalism looks a little daunting, until you realise that many calculations follow the well-trodden path of the superstring.

Now superstring amplitudes are quite difficult to compute. So hard, in fact, that Dr. Oliver Schlotterer devoted an entire talk to understanding particular functions that emerge when scattering just 4 strings at next-to-leading order. Mercifully, the ambitwistor string is far more well-behaved. The resulting amplitudes are rather beautiful and simple. To some extent, you trade off the geometrical aesthetics of the superstring for the algebraic compactness emerging from the ambitwistor approach.

This isn’t the first time that twistors and strings have been combined to produce quantum field theory. The first attempt dates back to 2003 and work of Edward Witten (of course). Although hugely influential, Witten’s theory was esoteric to say the least! In particular nobody knows how to encode quantum corrections in Witten’s language.

Ambitwistor strings have no such issues! Adding a quantum correction is easy – just put your theory on a donut. But this conceptually simple step threatened a roadblock for the research. Trouble was, nobody actually knew how to evaluate the resulting formulae.

Nobody, that was, until last week! Talented folk at Oxford and Cambridge managed to reduce the donutty problem to the original spherical case. This is an impressive feat – nobody much suspected that quantum corrections would be as easy as a classical computation!

There’s a great deal of hope that this idea can be rigorously extended to higher loops and perhaps even break the deadlock on maximal supergravity calculations at 7-loop level. The resulting concept of off-shell scattering equations piqued my interest – I’ve set myself a challenge to use them in the next 12 months!

Scattering equations, you say? What are these beasts? For that we need to take a closer look at the form of the ambitwistor string amplitude. It turns out to be a sum over the solutions of the following equations, one for each particle i

\displaystyle \sum_{j\neq i}\frac{s_{ij}}{z_i - z_j}=0

The s_{ij} are just two-particle invariants – encoding things you can measure about the speed and angle of particle scattering. And the z_i are just some bonus variables. You’d never dream of introducing them unless somebody told you to! And yet they’re exactly what’s required for a truly elegant description.
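
To see just how rigid these equations are, here’s a minimal sympy sketch for n = 4, fixing z_1 = 0, z_2 = 1 and sending z_4 to infinity by Möbius invariance (variable names my own):

import sympy as sp

# Scattering equation for particle 3; terms involving z4 drop out as z4 -> infinity.
z3, s13, s23 = sp.symbols('z3 s13 s23')
eq = s13 / z3 + s23 / (z3 - 1)

print(sp.solve(sp.Eq(eq, 0), z3))  # [s13/(s13 + s23)]: the (n-3)! = 1 expected solution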

And these scattering equations don’t just crop up in one special theory. Like spies in a Cold War era film, they seem to be everywhere! Dr. Freddy Cachazo alerted us to this surprising fact in a wonderfully engaging talk. We all had a chance to play detective and identify bits of physics from telltale clues! By the end we’d built up an impressive spider’s web of connections, held together by the scattering equations.

Scattering Equation Theory Web

Freddy’s talk put me in mind of an interesting leadership concept espoused by the conductor Itay Talgam. Away from his musical responsibilities he’s carved out a niche as a business consultant, teaching politicians, researchers, generals and managers how to elicit maximal productivity and creativity from their colleagues and subordinates. Critical to his philosophy is the concept of keynote listening – sharing ideas in a way that maximises the response of your audience. This elusive quality pervaded Freddy’s presentation.

Following this masterclass was no mean feat, but one amply performed by my colleague Brenda Penante. We were transported to the world of on-shell diagrams – a modern alternative to Feynman’s ubiquitous approach. These diagrams are known to produce the integrand in planar \mathcal{N}=4 super-Yang-Mills theory to all orders! What’s more, the answer comes out in an attractive d \log form, ripe for integration to multiple polylogarithms.

Cunningly, I snuck the word planar into the paragraph above. This approximation means that the diagrams can be drawn on a sheet of paper rather than requiring 3 dimensions. For technical reasons this is equivalent to working in the theory with an infinite number of color charges, not just the usual 3 we find for the strong force.

Obviously, it would be helpful to move beyond this limit. Brenda explained a decisive step in this direction, providing a mechanism for computing all leading singularities of non-planar amplitudes. By examining specific examples the collaboration uncovered new structure invisible in the planar case.

Technically, they observed that the boundary operation on a reduced graph identified non-trivial singularities which can’t be understood as the vanishing of minors. At present, there’s no proven geometrical picture of these new relations. Amazingly they might emerge from a 1,700-year-old theorem of Pappus!

Bootstraps were back on the agenda to close the session. Dr. Agnese Bissi is a world-expert on conformal field theories. These models have no sense of distance and only know about angles. Not particularly useful, you might think! But they crop up surprisingly often as approximations to realistic physics, both in particle smashing and modelling materials.

Agnese took a refreshingly rigorous approach, walking us through her proof of the reciprocity principle. Until recently this vital tool was little more than an ad hoc assumption, albeit backed up by considerable evidence. Now Agnese has placed it on firmer ground. From here she was able to “soup up” the method. The supercharged variant can compute OPE coefficients as well as dimensions.

Alas, it’s already time for the conference dinner and I haven’t mentioned Dr. Christian Bogner‘s excellent work on the sunrise integral. This charmingly named function is the simplest case where hyperlogarithms are not enough to write down the answer. But don’t just take it from me! You can now hear him deliver his talk by visiting the conference website.

Conversations

I’m very pleased to have chatted with Professor Rutger Boels (on the Lagrangian origin of Yang-Mills soft theorems and concerning the universality of subleading collinear behaviour) and Tim Olson (about determining the relative sign between on-shell diagrams to ensure cancellation of spurious poles).

Note: this post was originally written on Thursday 9th July but remained unpublished. I blame the magnificent food, wine and bonhomie at the conference dinner!