# Conference Amplitudes 2015 – Integrability, Colorful Duality and Hiking

The middle day of a conference. So often this is the graveyard slot – when initial hysteria has waned and the final furlong seems far off. The organisers should take great credit that today was, if anything, the most engaging thus far! Even the weather was well-scheduled, breaking overnight to provide us with more conducive working conditions.

Integrability was our wake-up call this morning. I mentioned this hot topic a while back. Effectively it’s an umbrella term for techniques that give you exact answers. For amplitudes folk, this is the stuff of dreams. Up until recently the best we could achieve was an expansion in small or large parameters!

So what’s new? Dr. Amit Sever brought us up to date on developments at the Perimeter Institute, where the world’s most brilliant minds have found a way to map certain scattering amplitudes in $4$ dimensions onto a $2$ dimensional model which can be exactly solved. More technically, they’ve created a flux tube representation for planar amplitudes in $\mathcal{N}=4$ super-Yang-Mills, which can then be solved using spin chain methods.

The upshot is that they’ve calculated $6$ particle scattering amplitudes for all values of the (’t Hooft) coupling. Their method makes no mention of Feynman diagrams or string theory – the old-fashioned ways of computing this amplitude for weak and strong coupling respectively. Nevertheless the answer exactly matches known results in both of these regimes.

There’s more! By putting their computation under the microscope they’ve unearthed unexpected new physics. Surprisingly the multiparticle poles familiar from perturbative quantum field theory disappear. Doing the full calculation smooths out divergent behaviour in each perturbative term. This is perhaps rather counterintuitive, given that we usually think of higher-loop amplitudes as progressively less well-behaved. It reminds me somewhat of Regge theory, in which the UV behaviour of a tower of higher spin states is much better than that of each one individually.

The smorgasbord of progress continued in Matthias Wilhelm’s talk. The Humboldt group have a completely orthogonal approach linking integrability to amplitudes. By computing form factors using unitarity, they’ve been able to determine loop-corrections to anomalous dimensions. Sounds technical, I know. But don’t get bogged down! I’ll give you the upshot as a headline – New Link between Methods, Form Factors Say.

Coffee consumed, and it was time to get colorful. You’ll hopefully remember that the quarks holding protons and neutrons together come in three different shades. These aren’t really colors that you can see. But they are internal labels attached to the particles which seem vital for our theory to work!

About 30 years ago, people realised you could split off the color-related information and just deal with the complicated issues of particle momentum. Once you’ve sorted that out, you write down your answer as a sum. Each term involves some color stuff and a momentum piece. Schematically

$\displaystyle \textrm{gluon amplitude}=\sum \textrm{color}\times \textrm{kinematics}$

What they didn’t realise was that you can shuffle momentum dependence between terms to force the kinematic parts to satisfy the same equations as the color parts! This observation, made back in 2010 by Zvi Bern, John Joseph Carrasco and Henrik Johansson, has important consequences for gravity in particular.

Why’s that? Well, if you arrange your Yang-Mills kinematics in the form suggested by those gentlemen then you get gravity amplitudes for free. Merely strip off the color bit and replace it by another copy of the kinematics! In my super-vague language above

$\displaystyle \textrm{graviton amplitude}=\sum \textrm{kinematics}\times \textrm{kinematics}$

Dr. John Joseph Carrasco himself brought us up to date with a cunning method of determining the relevant kinematic choice at loop level. I can’t help but mention his touching modesty. Even though the whole community refers to the relations by the acronym BCJ, he didn’t do so once!

Before that Dr. Donal O’Connell took us on an intriguing detour through classical solutions of gravity theories that have an appropriate dual Yang-Mills description, obtainable via a BCJ procedure. The idea is beautiful, and seems completely obvious once you’ve been told! Kudos to the authors for thinking of it.

After lunch we enjoyed a well-earned break with a hike up the Uetliberg mountain. I learnt that this large hill is colloquially called Gmuetliberg. Yvonne Geyer helpfully explained that this is a derogatory reference to the tame nature of the climb! Nevertheless the scenery was very pleasant, particularly given that we were mere minutes away from the centre of a European city. What I wouldn’t give for an Uetliberg in London!

Evening brought us to Heidi and Tell, a touristic yet tasty burger joint. Eager to offset some of my voracious calorie consumption I took a turn around the Altstadt. If you’re ever in Zurich it’s well worth a look – very little beats medieval streets, Alpine water and live swing music in the evening light.

Conversations

It was fantastic to meet Professor Lionel Mason and discuss various ideas for extending the ambitwistor string formalism to form factors. I also had great fun chatting to Julio Martinez about linking CHY and BCJ. Finally huge thanks to Dr. Angnis Schmidt-May for patiently explaining the latest research in the field of massive gravity. The story is truly fascinating, and could well be a good candidate for a tractable quantum gravity model!

Erratum: An earlier version of this post mistakenly claimed that Chris White spoke about BCJ for equations of motion. Of course, it was his collaborator Donal O’Connell who delivered the talk. Many thanks to JJ Carrasco for pointing out my error!

# Conference Amplitudes 2015 – Integration Ahoy!

I recall fondly a maths lesson from my teenage years. Dr. Mike Wade – responsible as much as anyone for my scientific passion – was introducing elementary concepts of differentiation and integration. Differentiation is easy, he proclaimed. But integration is a tricky beast.

That prescient warning perhaps foreshadowed my entry into the field of amplitudes. For indeed integration is of fundamental importance in determining the outcome of scattering events. To compute precise “loop corrections” necessarily requires integration. And this is typically a hard task.

Today we were presented with a smorgasbord of integrals. Polylogarithms were the catch of the day. This broad class of functions covers pretty much everything you can get when computing amplitudes (provided your definition is generous)! So what are they? It fell to Dr. Erik Panzer to remind us.

Laymen will remember logarithms from school. These magic quantities turn multiplication into addition, giving rise to the ubiquitous schoolroom slide rules predating electronic calculators. Depending on your memory of math class, logarithms are either curious and fascinating or strange and terrifying! But boring they most certainly aren’t.

One of the most amusing properties of a logarithm comes about from (you guessed it) integration. Integrating $x^{a-1}$ is easy, you might recall. You’ll end up with $x^a/a$ plus some constant. But what happens when $a$ is zero? Then the formula makes no sense, because dividing by zero simply isn’t allowed.

And here’s where the logarithm comes to the rescue. As if by witchcraft it turns out that

$\displaystyle \int_0^x \frac{dz}{1-z} = -\log (1-x)$
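For the sceptical, the identity $\int_0^x \frac{dz}{1-z} = -\log(1-x)$ is easy to check numerically. Here’s a minimal stdlib-only sketch using a simple midpoint rule:

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Approximate int_a^b f(z) dz with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x = 0.5
numeric = midpoint_integral(lambda z: 1.0 / (1.0 - z), 0.0, x)
exact = -math.log(1.0 - x)
print(numeric, exact)  # the two values agree to high accuracy
```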

This kind of integral crops up when you compute scattering amplitudes. The traditional way to work out an amplitude is to draw Feynman diagrams – effectively pictures representing the answer. Every time you get a loop in the picture, you get an integration. Every time a particle propagates from A to B you get a fraction. Plug through the maths and you sometimes see integrals that give you logarithms!

But logarithms aren’t the end of the story. When you’ve got many loop integrations involved, and perhaps many propagators too, things can get messy. And this is where polylogarithms come in. They’ve got an integral form like logarithms, only instead of one integration there are many!

$\displaystyle \textrm{Li}_{\sigma_1,\dots, \sigma_n}(x) = \int_0^x \frac{dz_1}{z_1- \sigma_1}\int_0^{z_1} \frac{dz_2}{z_2-\sigma_2} \dots \int_0^{z_{n-1}}\frac{dz_n}{z_n-\sigma_n}$

It’s easy to check that our beloved $\log$ function emerges from setting $n=1$ and $\sigma_1=0$. There’s some interesting sociology underlying polylogs. The polylogs I’ve defined are variously known as hyperlogs, generalized polylogs and Goncharov polylogs depending on who you ask. This confusion stems from the fact that these functions have been studied in several fields besides amplitudes, and predictably nobody can agree on a name! One name that is universally accepted is classical polylogs – the simpler functions that emerge when all but the last of the $\sigma$s are set to zero.
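To make the conventions concrete, here’s a numerical sanity check (a stdlib-only sketch, assuming the iterated-integral definition above): taking weights $(\sigma_1,\sigma_2)=(0,1)$ should reproduce the classical dilogarithm up to a sign, $\textrm{Li}_{0,1}(x) = -\textrm{Li}_2(x)$, where $\textrm{Li}_2(x)=\sum_{k\geq 1} x^k/k^2$.

```python
import math

def iterated_integral(sigmas, x, n=400):
    """Evaluate int_0^x dz1/(z1-s1) int_0^{z1} dz2/(z2-s2) ... recursively,
    via the midpoint rule (midpoints conveniently avoid the endpoint poles)."""
    if not sigmas:
        return 1.0
    s, rest = sigmas[0], sigmas[1:]
    h = x / n
    return sum(h * iterated_integral(rest, (i + 0.5) * h, n) / ((i + 0.5) * h - s)
               for i in range(n))

def dilog_series(x, terms=200):
    """Classical dilogarithm Li_2(x) = sum_{k>=1} x^k / k^2, valid for |x| < 1."""
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

x = 0.5
print(iterated_integral([0.0, 1.0], x), -dilog_series(x))  # both approx -0.5822
```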

So far we’ve just given names to some integrals we might find in amplitudes. But this is only the beginning. It turns out there are numerous interesting relations between different polylogs, which can be encoded by clever mathematical tools going by esoteric names – cluster algebras, motives and the symbol to name but a few. Erik warmed us up on some of these topics, while also mentioning that even generalized polylogs aren’t the whole story! Sometimes you need even wackier functions like elliptic polylogs.

All this gets rather technical quite quickly. In fact, complicated functions and swathes of algebra are a sad corollary of the traditional Feynman diagram approach to amplitudes. But thankfully there are new and powerful methods on the market. We heard about these so-called bootstraps from Dr. James Drummond and Dr. Matt von Hippel.

The term bootstrap is an old one, emerging in the 1960s to describe methods which use symmetry, locality and unitarity to determine amplitudes. It’s probably a humorous reference to the old English saying “pull yourself up by your bootstraps” to emphasise the achievement of lofty goals from meagre beginnings. Research efforts in the 60s had limited success, but the modern bootstrap programme is going from strength to strength. This is due in part to our much improved understanding of polylogarithms and their underlying mathematical structure.

The philosophy goes something like this. Assume that your answer can be written as a polylog (more precisely as a sum of polylogs, with the integrand expressed as $\prod_i d \log(R_i)$ for appropriate rational functions $R_i$). Now write down all the possible rational functions that could appear, based on your knowledge of the process. Treat these as alphabet bricks. Now put your alphabet bricks together in every way that seems sensible.

The reason the method works is that there’s only one way to make a meaningful “word” out of your alphabet bricks. Locality forces the first letter to be a kinematic invariant, or else your answer would have branch cuts which don’t correspond to physical particles. Take it from me, that isn’t allowed! Supersymmetry cuts down the possibilities for the final letter. A cluster algebra ansatz also helps keep the possibilities down, though a physical interpretation for this is as yet unknown. For $7$ particles this is more-or-less enough to get you the final answer. But weirdly $6$ particles is more complicated! Counter-intuitive, but hey – that’s research. To fix the six point result you must appeal to impressive all-loop results from integrability.

Next up for these bootstrap folk is higher loops. According to Matt, the $5$-loop result should be gettable. But beyond that the sheer number of functions involved might mean the method crashes. Naively one might expect that the problem lies with having insufficiently many constraints. But apparently the real issue is more prosaic – we just don’t have the computing power to whittle down the options beyond 5-loop.

With the afternoon came a return to Feynman diagrams, but with a twist. Professor Johannes Henn talked us through an ingenious evaluation method based on differential equations. The basic concept has been known for a long time, but relies heavily on choosing the correct basis of integrals for the diagram under consideration. Johannes’ great insight was to use conjectures about the dlog form of integrands to suggest a particularly nice set of basis integrals. This makes solving the differential equations a cinch – a significant achievement!

Now the big question is – when can this new method be applied? As far as I’m aware there’s no proof that this nice integral basis always exists. But it seems that it’s there for enough cases to be useful! The day closed with some experimentally relevant applications, the acid test. I’m now curious as to whether you can link the developments in symbology and cluster algebras with this differential equation technique to provide a mega-powerful amplitude machine…! And that’s where I ought to head to bed, before you readers start to worry about theoretical physicists taking over the world.

Conversations

It was a pleasure to chat all things form factors with Brenda Penante, Matthias Wilhelm and Dhritiman Nandan at lunchtime. Look out for an “on-shell” blog post soon.

I must also thank Lorenzo Magnea for an enlightening discussion on soft theorems. Time to bury my head in some old papers I’d previously overlooked!

# Conference Amplitudes 2015!

It’s conference season! I’m hanging out in very warm Zurich with the biggest names in my field – scattering amplitudes. Sure it’s good fun to be outside the office. But there’s serious work going on too! Research conferences are a vital forum for the exchange of ideas. Inspiration and collaboration flow far more easily in person than via email or telephone. I’ll be blogging the highlights throughout the week.

Monday | Morning Session

To kick off we have some real physics from the Large Hadron Collider! Professor Nigel Glover’s research provides a vital bridge between theory and experiment. Most physicists in this room are almost mathematicians, focussed on developing techniques rather than computing realistic quantities. Yet the motivation for this quest lies with serious experiments, like the LHC.

We’re currently entering an era where the theoretical uncertainty trumps experimental error. With the latest upgrade at CERN, particle smashers will reach unprecedented accuracy. This leaves us amplitudes theorists with a large task. In fact, the experimentalists regularly draw up a wishlist to keep us honest! According to Nigel, the challenge is to make our predictions twice as good within ten years.

At first glance, this 2x challenge doesn’t seem too hard! After all Moore’s Law guarantees us a doubling of computing power in the next few years. But the scale of the problem is so large that more computing power won’t solve it! We need new techniques to get to NNLO – that is, corrections that are multiplied by $\alpha_s^2$ the square of the strong coupling. (Of course, we must also take into account electroweak effects but we’ll concentrate on the strong force for now).

Nigel helpfully broke down the problem into three components. Firstly we must compute the missing higher order terms in the amplitude. The state of the art is lacking at present! Next we need better control of our input parameters. Finally we need to improve our model of how protons break apart when you smash them together in beams.

My research helps in a small part with the final problem. At present I’m finishing up a paper on subleading soft loop corrections, revealing some new structure and developing a couple of new ideas. The hope is that one day someone will use this to better eliminate some irritating low energy effects which can spoil the theoretical prediction.

In May, I was lucky enough to meet Bell Labs president Dr. Marcus Weldon in Murray Hill, New Jersey. He spoke about his vision for a 10x leap forward in every one of their technologies within a decade. This kind of game changing goal requires lateral thinking and truly new ideas.

We face exactly the same challenge in the world of scattering amplitudes. The fact that we’re aiming for only a 2x improvement is by no means a lack of ambition. Rather, it underlines the problem that doubling our predictive power entails far more than a 10x increase in the complexity of calculations using current techniques.

I’ve talked a lot about accuracy so far, but notice that I haven’t mentioned precision. Nigel was at pains to distinguish the two, courtesy of this amusing cartoon.

Why is this so important? Well, many people believe that NNLO calculations will reduce the renormalization scale uncertainty in theoretical predictions. This is a big plus point! Many checks on known NNLO results (such as W boson production processes) confirm this hunch. This means the predictions are much more precise. But it doesn’t guarantee accuracy!

To hit the bullseye there’s still much work to be done. This week we’ll be sharpening our mathematical tools, ready to do battle with the complexities of the universe. And with that in mind – it’s time to get back to the next seminar. Stay tuned for further updates!

Update | Monday Evening

Only time for the briefest of bulletins, following a productive and enjoyable evening on the roof of the ETH main building. Fantastic to chat again to Tomek Lukowski (on ambitwistor strings), Scott Davies (on supergravity 4-loop calculations and soft theorems) and Philipp Haehnal (on the twistor approach to conformal gravity). Equally enlightening to meet many others, not least our gracious hosts from ETH Zurich.

My favourite moment of the day came in Xuan Chen’s seminar, where he discussed a simple yet powerful method to check the numerical stability of precision QCD calculations. It’s well known that these should factorize in appropriate kinematic regions, well described by imaginatively named antenna functions. By painstakingly verifying this factorization in a number of cases Xuan detected and remedied an important inaccuracy in a Higgs to 4 jet result.

Of course it was a pleasure to hear my second supervisor, Professor Gabriele Travaglini speak about his latest papers on the dilatation operator. The rederivation of known integrability results using amplitudes opens up an enticing new avenue for those intrepid explorers who yearn to solve $\mathcal{N}=4$ super-Yang-Mills!

Finally Dr. Simon Badger’s update on the Edinburgh group’s work was intriguing. One challenge for NNLO computations is to understand 2-loop corrections in QCD. The team have taken an important step towards this by analysing 5-point scattering of right-handed particles. In principle this is a deterministic procedure: draw some pictures and compute.

But to get a compact formula requires some ingenuity. First you need a suitable integral reduction to identify the master integrals. Then you must apply KK and BCJ relations to weed out the dead wood that’s cluttering up the formula unnecessarily. Trouble is, neither of these procedures is uniquely defined – so intelligent guesswork is the order of the day!

That’s quite enough for now – time for some sleep in the balmy temperatures of central Europe.

# Science and Faith – The Arts of the Unknown

I spent this morning singing a Sunday service at St. George’s Church in Borough. An odd occupation for a scientist perhaps, especially given the high profile of several atheist researchers! Yet a large number of scientists see no contradiction between faith and science. In fact, my Christian faith is only deepened by my fascination with the natural world.

Picture a scientist. Chances are you’ve already got in your mind a geeky, rational person, calibrating a precise experiment or poring over a dry mathematical formula! As with any stereotype, it has its merits. But it misses a vital quality in research – imagination.

To succeed as a scientist, you must be creative above all else. It’s no use just learning experimental techniques or memorising formulae. Every new idea must necessarily start off as a fantasy. Great painters are not merely lauded for their 10,000 hours of practice with a paintbrush. It is their capacity to conceive and relay vivid scenes which ensures their place in history. And so it is with science.

So why are scientists seen as cold and calculating and exact, rather than passionate and original? The problem lies in education. While young children are encouraged to express themselves in Literacy, Numeracy is all too often a trudge through tedious and predictable sums. In “arts” subjects, questions are a magical tool enabling discussion, debate and opinion. In “sciences” they merely distinguish right from wrong.

After 15 years of schooling, no wonder the stereotype is embedded! As a teenager, I very nearly ditched the sciences in favour of subjects where expression was free and original arguments rewarded. I’m eternally thankful to my teachers, parents and bookshelf for convincing me that the curriculum was utterly unrepresentative of real science.

So what’s to be done? For any budding scientists out there, your best bet is to read some books. Not your school textbooks – chances are they are dull as ditchwater and require no creative input at all. I mean books written by real life mathematicians, physicists, biologists… These will give you an insight into the imagination that drives research, the contentious debates and the lively exchanges of ideas.

You might not understand everything, but that’s the whole point – science is about the unknown, just as much as art or faith. It is exactly this point which we must evangelise again and again. Perhaps then fewer people will write negative reviews criticising science for being complex, poetic and beautiful.

As a wider society, we can take action too! We must demand better science teaching from a young age. Curricula should emphasise problem solving over knowledge, ideas over techniques and originality over regurgitation. This is already the mantra for many traditionally “artistic” disciplines. It must be the battle cry for scientists also!

A better approach to science would democratize opportunity for the next generation. No longer will the relative creativity of girls be arbitrarily punished – an approach which can only discourage women from entering science in the long run. No longer will there be a tech skills gap threatening to undermine the thriving software industry. The UK has a uniquely privileged scientific pedigree. For future equality, economy and diversity, we must use it.

In the service this morning Fr Jonathan Sedgwick talked of the danger of applying scientific laws to the world at large. The concepts of “cause and effect” and “zero sum games” may well work in vacuo, but they are artificial and burdensome when applied to interpersonal relationships. Quite right – as Christians we must question these human rules, and look for a divine inspiration to guide our lives!

But this is also precisely what we must do as scientists. A good scientist always questions their models, constantly listening for the voice of intuition. For science – like our own existence – is ever changing. And it’s our job to search for the way, the truth and the life.

My thanks to Margaret Widdess, who prepared me for confirmation two years ago at St. Catharine’s College, Cambridge and with whom I first talked deeply about the infinity of science and faith.

# Why does feedback hurt sometimes?

Research is hard. And not for the reasons you might expect! Sure, my daily life involves equations which look impenetrable to the layman. But by the time you’ve spent years studying them, they aren’t so terrifying!

The real difficulty in research is psychological. The natural state for a scientist is failure – most ideas simply do not succeed! Developing the resilience, maturity and sheer bloody mindedness to just keep on plugging away is a vital but tough skill.

This letter, written by an experienced academic to her PhD student, is a wonderfully candid account of the minefield of academic criticism, both professional and personal. What’s more, it lays bare some important coping strategies – I certainly wish I’d read it before embarking on my PhD.

Above all, this letter is an admission of humanity. As researchers, we face huge challenges in our careers. But the very personal process of responding to them is precisely what makes us better scientists, and perhaps even improves us as people.

This letter was written by an experienced academic at ANU to her PhD student, who had just presented her research to a review panel and was still licking her wounds.

The student sent it to me and I thought it was a great response. I asked the academic in question, and the student who received it, if I could publish it. I wish all of us could have such nuanced and thoughtful feedback during the PhD. I hope you enjoy it.

A letter to… My PhD student after her upgrade

Well you did it. You got your upgrade. But from the look on your face I could tell you thought it was a hollow victory. The professors did their job and put the boot in. I remember seeing that look in the mirror after my own viva. Why does a win in academia always have the sting of defeat?

Yeah, it’s a…


# Mathematica for Physicists

I’ve just finished writing a lecture course for SEPnet, a consortium of leading research universities in the South East of Britain. The course comprises a series of webcasts introducing Mathematica – check it out here!

Although the course starts from the basics, I hope it’ll be useful to researchers at all levels of academia. Rather than focussing on computations, I relay the philosophy of Mathematica. This uncovers some tips, tricks and style maxims which even experienced users might not have encountered.

I ought to particularly thank the Mathematica Summer School for inspiring this project, and demonstrating that Mathematica is so much more than just another programming language. If you’re a theorist who uses computer algebra on a daily basis, I thoroughly recommend you come along to the next edition of the school in September.

# Integrating Differentials in Thermodynamics

I’ve just realised I made a mistake when teaching my statistical physics course last term. Fortunately it was a minor and careless maths mistake, rather than any lack of physics clarity. But it’s time to set the record straight!

Often in thermodynamics you will derive equations in differential form. For example, you might be given some equations of state and asked to derive the entropy of a system using the first law

$\displaystyle dE = TdS - pdV$

My error pertained to exactly such a situation. My students had derived the equation

$\displaystyle dS = (V/E)^{1/2}dE+(E/V)^{1/2}dV$

and were asked to integrate this up to find $S$. Naively you simply integrate each term separately and add the answers. But of course this is wrong! Or more precisely this is only correct if you get the limits of integration exactly right.

Let’s return to my cryptic comment about limits of integration later, and for now I’ll recap the correct way to go about the problem. There are four steps.

1. Rewrite it as a system of partial DEs

This is easy – we just have

$\displaystyle \partial S/\partial E = (V/E)^{1/2} \textrm{ and } \partial S / \partial V = (E/V)^{1/2}$

2. Integrate w.r.t. E adding an integration function $g(V)$

Again we do what it says on the tin, namely

$\displaystyle S(E,V) = 2 (EV)^{1/2} + g(V)$

3. Substitute in the $\partial S/\partial V$ equation to derive an ODE for $g$

We get $dg/dV = 0$ in this case, easy as.

4. Solve this ODE and write down the full answer

Immediately we know that $g$ is just a constant function, so we can write

$\displaystyle S(E,V) = 2 (EV)^{1/2} + \textrm{const}$
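As a sanity check on the four steps, central finite differences confirm that $S(E,V) = 2(EV)^{1/2}$ reproduces both partial derivatives (a stdlib-only sketch; the numerical values of $E$ and $V$ are arbitrary):

```python
import math

def S(E, V):
    """Candidate entropy from the four-step procedure (additive constant dropped)."""
    return 2.0 * math.sqrt(E * V)

E, V, h = 3.0, 5.0, 1e-6
dS_dE = (S(E + h, V) - S(E - h, V)) / (2 * h)  # central difference in E
dS_dV = (S(E, V + h) - S(E, V - h)) / (2 * h)  # central difference in V

print(dS_dE, math.sqrt(V / E))  # should agree: dS/dE = (V/E)^(1/2)
print(dS_dV, math.sqrt(E / V))  # should agree: dS/dV = (E/V)^(1/2)
```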

Contrast this with the wrong answer from naively integrating up and adding each term. This would have produced $4(EV)^{1/2}$, a factor of $2$ out!

So what of my mysterious comment about limits above? Well, because $dS$ is an exact differential, we know that we can integrate it over any path and will get the same answer. This path independence is an important reason that the entropy is a genuine physical quantity, whereas there’s no absolute notion of heat. In particular we can find $S$ by integrating along the $E'$ axis to $E' = E$, then in the $V'$ direction from $(E',V')=(E,0)$ to $(E',V')=(E,V)$.

Mathematically this looks like

$\displaystyle S(E,V) = \int_{(0,0)}^{(E,V)} dS = \int_{(0,0)}^{(E,0)}(V'/E')^{1/2}dE' + \int_{(E,0)}^{(E,V)}(E'/V')^{1/2}dV'$

The first integral now gives $0$ since $V=0$ along the $E$ axis. The second term gives the correct answer $S(E,V) = 2(EV)^{1/2}$ as required.
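The path independence can also be checked numerically (a stdlib-only sketch; note the second leg of the first path has an integrable singularity at $V'=0$, so the midpoint rule needs many points there):

```python
import math

def midpoint(f, a, b, n):
    """Midpoint-rule approximation of int_a^b f du."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

E, V = 2.0, 3.0
exact = 2.0 * math.sqrt(E * V)

# Path 1: along the E axis, where the dE-coefficient (V'/E')^(1/2) vanishes,
# then straight up in V at fixed E. The integrand (E/V')^(1/2) blows up
# (integrably) at V' = 0, hence the large point count.
path1 = midpoint(lambda Vp: math.sqrt(E / Vp), 0.0, V, 200_000)

# Path 2: the straight line (E', V') = (tE, tV), along which the integrand
# is the constant (V/E)^(1/2) E + (E/V)^(1/2) V = 2 (EV)^(1/2).
path2 = midpoint(lambda t: math.sqrt(V / E) * E + math.sqrt(E / V) * V, 0.0, 1.0, 100)

print(path1, path2, exact)  # all three agree
```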

In case you want a third way to solve this problem correctly, check out this webpage which provides another means of integrating differentials correctly!

So there you have it – your health warning for today. Take care when integrating your differentials!

# Bibliographies and The arXiv

I’m currently writing up my first paper! The hope is that my collaborators and I will release the paper in the next couple of months. When we do, it’ll go on the arXiv – a publicly accessible preprint server.

This open-access policy is adopted pretty much universally throughout mathematics and theoretical physics. I think it’s extremely good for science to be freely accessible to all. There’s still a place for journals, allowing research to be ranked by quality and rigorously peer reviewed. But the arXiv is vital in maintaining the pace of research, particularly in hot topic areas.

Every piece of work on the arXiv gets its own unique identifier. I rather like codes, so I tend to remember these numbers for my favourite papers. Just typing the number into Google search immediately takes you to the relevant document.

My current paper draft is peppered with arXiv numbers referring to important papers we need to cite. When we come to making a bibliography I’ll need to convert these into a standard form. Technically this involves making a BibTeX file, and referring to it in my typesetting program.

I thought this would take ages, but it turns out that there’s an online Easter Egg solving the problem in a flash. Inspire HEP is a database of physics papers, providing all the metadata you could ever need, including ready-formatted BibTeX. And it even has a feature which automatically generates a bibliography for you – check it out!
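For anyone who hasn’t seen it, the workflow looks roughly like this (the entry key and every field below are invented for illustration – Inspire’s “BibTeX” button generates the real thing):

```latex
% references.bib -- one entry, in the style Inspire produces
% (the key and all fields here are purely illustrative)
@article{Author:2015abc,
    author        = "Author, A. N.",
    title         = "{A Paper About Amplitudes}",
    eprint        = "1507.00000",
    archivePrefix = "arXiv",
    year          = "2015"
}
```

In the main document you then write `\cite{Author:2015abc}` wherever the reference is needed and add `\bibliography{references}` at the end, and BibTeX assembles the reference list for you.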

If you’re writing up your first paper and this tip helped you out, do drop me a line in the comments! And to the curators of arXiv and Inspire HEP – a huge thank you from me and the whole physics community.

# Financial Times Christmas Carol!

Away from my physics life I spend a lot of time singing. About 6 months ago I co-founded a choir. The development of the group has been phenomenally rapid, culminating in recording a new Christmas carol for the Financial Times, composed by the acclaimed baritone Roderick Williams. Do have a listen and let me know what you think!

I see mathematics and music as natural intellectual cousins. Both involve artistic creativity within certain constraints. As a researcher, I must learn the subject and find new concepts. As a musician I’ll certainly study the notes, but it’s that spark of original interpretation which brings the music alive.

If you liked the FT carol, have a listen to our other recent recording below!

I’m now about to finish my first year as a PhD student. Along the way I’ve done a lot of physics! Some of the concepts are very hard. I’ve sure spent my fair share of hours battling with abstract maths! But I’ve learnt something much tougher and infinitely more valuable in the past 12 months – how to do research.

The blessing and curse of research is it’s very hard to teach. You need just the right combination of perseverance, creativity and inspiration. Unlike most forms of employment, science is wonderfully, frustratingly unpredictable!

There’s one principle that stands out through every success and failure this year. Ask the obvious question! Whether in a seminar, a conversation with colleagues or in front of your desk, never be afraid to say something stupid. Often it’s the most basic idea which leads to the richest consequences.

At the end of the day, research is something of a confidence game. It’s a bit similar to my limited experience on a snooker table. If I think I’m going to win, I usually do. But when those doubts creep in, it’s much harder to keep the break going!

That’s why it’s so important that scientists communicate. Sadly the human brain doesn’t seem to be wired up to think deeply and laterally simultaneously. Regular breaks for discussion, evaluation and presentation of your work are vital!

I’ve had my clearest thoughts on walks to the tube, after chatting over coffee or writing a blog post. Although the life of a scientist might appear relaxed, ours is not a job where you can just clock in and out!

Asking the obvious question is not just important for researchers. Students, journalists, politicians, civil servants, lawyers, managers, even executives pose simple questions every day. In fact, it’s when public figures disguise their questions and answers with complex language that we struggle to relate.

A stupid, obvious question can do no harm. And more often than not, it’s exactly what you need to say.