Unless you’re a supersymmetry expert, the notation should already look odd to you! Why do I write down two numbers to classify supersymmetries in 6D, when one suffices in 4D? The answer comes from a subtlety in the definition of the superalgebra, which isn’t often discussed outside of lengthy (and dull) textbooks. Time to set the record straight!
At kindergarten we learn that supersymmetry adds fermionic generators to the Poincaré algebra, yielding a “unique” extension of the possible spacetime symmetries. Of course, this hides a possible choice – there are many fermionic representations of the Lorentz algebra one could choose for the supersymmetry generators.
Fortunately, mathematical consistency restricts you to simple options. For the algebra to close, the generators must live in the lowest dimensional representations of the Lorentz algebra – check Weinberg III for a proof. You’re still free to take many independent copies of the supersymmetry generators (up to the restrictions placed by forbidding higher spin particles, which are usually imposed).
Therefore the classification of supersymmetries allowed in different dimensions reduces to the problem of understanding the possible spinor representations. Thankfully, there are tables of these.
Reading carefully, you notice that dimensions $2$, $6$ and $10$ are particularly special, in that they admit Majorana-Weyl spinors. Put informally, this means you can have your cake and eat it! Normally, the minimal-dimension spinor representation is obtained by imposing a Majorana (reality) or Weyl (chirality) condition. But in this case, you can have both!
This means that in $d = 6$ or $d = 10$, the supersymmetry generators can be chosen to be chiral. The stipulation $\mathcal{N} = (1, 0)$ says that $Q$ should be a left-handed Majorana spinor, for instance. In $d = 4$ a Majorana spinor must by necessity contain both left-handed and right-handed pieces, so this choice would be impossible! Or, if you like: should I choose $Q$ to be a left-handed Weyl spinor, then its conjugate is forced to be right-handed.
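Schematically (in one common convention, with $\Gamma_7$ the $d=6$ chirality matrix), the chirality choice looks like this:

```latex
% Weyl projections on the supercharges in d = 6 (schematic; conventions vary)
\Gamma_7\, Q^{I}_{+} = +\, Q^{I}_{+}, \quad I = 1, \dots, p,
\qquad
\Gamma_7\, Q^{J}_{-} = -\, Q^{J}_{-}, \quad J = 1, \dots, q.
```

The two chiralities are independent, so the algebra is labelled by a pair $\mathcal{N} = (p, q)$; in $d = 4$ the Majorana condition ties the two chiralities together, and a single integer $\mathcal{N}$ suffices.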
Gravity has a good sense of humour. On the one hand, it’s the weakest force we know. The upward push of your chair is more than enough to counteract the pull of the entire planet! Yet gravity has an ace up its sleeve – unlike all other forces, it’s always attractive. For larger objects, the other forces start bickering and cancelling out. But gravity just keeps on getting stronger, until it’s impossible to escape – a black hole!
As a theoretical physicist, I tend to carry a black hole whenever I’m travelling.
As you can see, the top of this bucket is the surface of a black hole, otherwise known as the event horizon. When I release water from a glass above the black hole, it is attracted to the black hole, and falls inexorably towards it, never to be seen again.
Okay, I suppose my bucket isn’t a real black hole. After all, it’s the gravity of the Earth that pulls the water in. And light can definitely escape because I can see inside it! But it does accurately represent the bending of space and time. Albert Einstein taught us that everything in the universe rolls around on the cosmic quilt of spacetime, like balls on an elastic sheet. A heavy ball distorts the sheet, creating my black hole bucket.
You might not feel too threatened by black holes – after all, the nearest one is probably 8 billion billion miles away. But in actual fact you could be falling into a black hole right now without noticing! Turns out that for a large enough black hole, the event horizon is so far away that gravity there is very weak. So there’s no reason why you should experience anything special.
This disturbing fact has an unexpected consequence from the microscopic world of quantum mechanics. Every quantum theory must have a single vacuum, essentially the most boring and lazy state of affairs. If I stand still the vacuum is just empty space. But as soon as I start accelerating something weird happens. Particles suddenly appear from nowhere!
What does that mean for our black hole? Well if you’re not falling in, you must be accelerating away to oppose the huge pull of gravity! This means that you should see the black hole glowing with particles called Hawking radiation. Remember my black hole full of water? Well, you haven’t fallen in. And that means I have to cover you with Hawking radiation!
Luckily for your computer, the Hawking confetti that came out isn’t the same as the water that went in. From your perspective the water has simply disappeared! Exactly the same thing seems to happen for real black holes.
This black hole magic trick has become infamous among scientists, resisting all efforts at explanation. But a solution might be at hand, courtesy of Hawking himself! What if you could slightly change the vacuum every time something dropped into the black hole? Then, if you’re very careful, you might just be able to reconstruct the original water from the confetti of Hawking radiation.
Put another way, the event horizon takes a lock of soft hair from every passing particle as a memento of its existence. This information is eventually carried off by Hawking’s magic particles, reminding us of what we’d lost. It remains for soft experts, like myself, to work out the exact details.
This post is based on a talk given for the Famelab competition. You can read the full paper by Stephen Hawking, Malcolm Perry and Andy Strominger here.
Next month, I’m running a day-long conference here at QMUL. The meeting is intended to give early career researchers the chance to seek possible collaborations. Despite living in this globalised age, all too often PhD students and postdocs are restricted to working with faculty members in their current institution. This is no surprise – at the conferences and meetings where networking opportunities arise, we’re usually talking about completed work, rather than discussing new problems.
We’re shaking up the status quo by asking our participants to speak about ongoing research, and in particular to outline roadblocks where they need input from theorists with different expertise. What’s more, we’re throwing together random teams for speed collaboration sessions on the issues presented, getting the ball rolling for possible acknowledgements and group projects. We’re extremely fortunate to have the inspirational Fernando Alday as our guest speaker, a serial collaborator himself.
The final novelty of this conference comes in digital form. The conference website doubles as a social network, making it easy to keep track of your connections and maintain interactions after the meeting. We hope to generate good content on the site during the day, where some participants will be invited to act as scribes and note down any interesting ideas that arise. This way, there’ll be a valuable and evolving database of ideas ready for future collaborations to draw on.
Over to you! If you’re doing a PhD or a postdoc in the UK, sign up via the website – and if you know someone who is, send them the link!
The power of the renormalisation group comes from the fact that the $1$-loop leading logarithm suffices to fix the leading logarithm at all loops. Here’s a reference.
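To see why, here’s a schematic sketch: integrating the one-loop beta function $\mu\, \mathrm{d}g/\mathrm{d}\mu = \beta_0 g^3$ and expanding the result resums the leading logarithms to all orders,

```latex
% One-loop running resums the leading logarithms (schematic)
g^2(\mu) = \frac{g^2(\mu_0)}{1 - 2\beta_0\, g^2(\mu_0) \ln(\mu/\mu_0)}
         = g^2(\mu_0) \sum_{n \ge 0} \left( 2\beta_0\, g^2(\mu_0) \ln\frac{\mu}{\mu_0} \right)^n ,
```

so the single one-loop coefficient $\beta_0$ fixes the coefficient of $\big(g^2 \ln \mu\big)^n$ at $n$ loops.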
The BPHZ renormalisation scheme (widely seen in the physics community as superseded by the Wilsonian renormalisation group) has a fascinating Hopf algebra structure.
The central irony of QFT is this: IR divergences were discovered before UV divergences and “solved” almost instantly. Theorists then wrangled for a couple of decades over the UV divergences, before finally Wilson laid their qualms to rest. At this point the experimentalists came back and told them that it was the IR divergences that were the real problem again. (This remains true today, hence the motivation behind my work on soft theorems.)
IR divergences are a consequence of the Lorentzian signature. In Euclidean spacetime you have a clean separation of scales, but not so in our world. (Struggling to find a reference for this, anybody know of one?)
The next big circular collider will probably have a circumference of 100km, reach energies 7 times that of the LHC and cost at least £20 billion.
The Fourier transform of any polynomial in with roots at for has all positive coefficients. This is equivalent to the no-ghost theorem in string theory, proved by Peter Goddard, and seems to require some highly non-trivial machinery. (Again, does anyone have a reference?)
Never, ever try to copy algebra into Mathematica late at night!
This week I’m down in Canterbury for a conference focussing on the positive Grassmannian. “What’s that?”, I hear you ask. Roughly speaking, it’s a mysterious geometrical object that seems to crop up all over mathematical physics, from scattering amplitudes to solitons, not to mention quantum groups. More formally, we define the Grassmannian $\mathrm{Gr}(k, n)$ as the space of $k$-dimensional linear subspaces of $\mathbb{R}^n$.
We can view this as the space of full-rank $k \times n$ matrices modulo a $GL(k)$ action, which has homogeneous “Plücker” coordinates given by the $k \times k$ minors. Of course, these are not coordinates in the true sense, for they are overcomplete. In particular there exist quadratic Plücker relations between the minors. In principle then, you only need a subset of the homogeneous coordinates to cover the whole Grassmannian.
To get to the positive Grassmannian is easy, you simply enforce that every minor is positive. Of course, you only need to check this for some subset of the Plücker coordinates, but it’s tricky to determine which ones. In the first talk of the day Lauren Williams showed how you can elegantly extract this information from paths on a graph!
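As a concrete sketch, here’s the $k=2$, $n=4$ case in plain Python (the example matrix is made up): the Plücker coordinates are the $2 \times 2$ minors, positivity is a check on all of them, and they satisfy the single quadratic Plücker relation.

```python
from itertools import combinations

def plucker(M):
    """All 2x2 minors of a 2xn matrix, indexed by column pairs (i, j)."""
    n = len(M[0])
    return {(i, j): M[0][i] * M[1][j] - M[0][j] * M[1][i]
            for i, j in combinations(range(n), 2)}

# A made-up point of Gr(2,4), written as a 2x4 matrix
M = [[1, 2, 3, 4],
     [0, 1, 3, 6]]

p = plucker(M)
print(all(v > 0 for v in p.values()))  # True: this point is totally positive

# The single quadratic Plucker relation for Gr(2,4):
#   p_{12} p_{34} - p_{13} p_{24} + p_{14} p_{23} = 0   (0-indexed below)
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
print(rel)  # 0
```

In this tiny case checking all six minors is easy; the subtlety mentioned above is that in general a much smaller subset of minors suffices.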
In fact, this graph encodes much more information than that. In particular, it turns out that the positive Grassmannian naturally decomposes into cells (i.e. things homeomorphic to a closed ball). The graph can be used to exactly determine this cell decomposition.
And that’s not all! The same structure crops up in the study of quantum groups. Very loosely, these are algebraic structures that result from introducing non-commutativity in a controlled way. More formally, if you want to quantise a given integrable system, you’ll typically want to promote the coordinate ring of a Poisson-Lie group to a non-commutative algebra. This is exactly the sort of problem that Drinfeld et al. started studying 30 years ago, and the field is very much active today.
The link with the positive Grassmannian comes from defining a quantity called the quantum Grassmannian. The first step is to invoke a quantum plane, that is a $2$-dimensional algebra generated by $x$ and $y$ with the relation $xy = q\,yx$ for some parameter $q$ different from $1$. The matrices that linearly transform this plane are then constrained in their entries for consistency. There’s a natural way to build these up into higher dimensional quantum matrices. The quantum Grassmannian is constructed exactly as above, but with these new-fangled quantum matrices!
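In one common convention (the Manin relations; signs and $q \leftrightarrow q^{-1}$ vary between references), the quantum plane and the entries of a $2 \times 2$ quantum matrix satisfy

```latex
% Quantum plane and 2x2 quantum matrix relations (one common convention)
xy = q\, yx, \qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix} :
\quad ab = q\, ba, \quad cd = q\, dc, \quad ac = q\, ca, \quad bd = q\, db,
\quad bc = cb, \quad ad - da = (q - q^{-1})\, bc,
```

with the quantum determinant $\mathrm{det}_q = ad - q\,bc$ central in the algebra.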
The theorem goes that the torus action invariant irreducible varieties in the quantum Grassmannian exactly correspond to the cells of the positive Grassmannian. The proof is fairly involved, but the ideas are rather elegant. I think you’ll agree that the final result is mysterious and intriguing!
And we’re not done there. As I’ve mentioned before, positive Grassmannia and their generalizations turn out to compute scattering amplitudes. Alright, at present this only works for planar super-Yang-Mills. Stop press! Maybe it works for non-planar theories as well. In any case, it’s further evidence that Grassmannia are the future.
From a historical point of view, it’s not surprising that Grassmannia are cropping up right now. In fact, you can chronicle revolutions in theoretical physics according to changes in the variables we use. The calculus revolution of Newton and Leibniz is arguably about understanding properly the limiting behaviour of real numbers. With quantum mechanics came the entry of complex numbers into the game. By the 1970s it had become clear that projectivity was important, and twistor theory was born. And the natural step beyond projective space is the Grassmannian. Viva la revolución!
If you know someone who works in academia, chances are they’ve told you that research takes time. A lot of time, that is. But does it have to?
It’s 3:15pm on an overcast Wednesday afternoon. A group of PhD students, postdocs and senior academics sit down to discuss their latest paper. They’re “just finishing” the write-up, almost ready to submit to a journal. This phase has already been in motion for months, and could well take another week or two.
Of course, this sounds monumentally inefficient to corporate ears. In a world where time is money, three months of tweaking and wrangling could not be tolerated. So why is it acceptable in academia? Because nobody is in charge! Many universities suffer from a middle-management leadership vacuum – the combined result of a lack of training and unwise promotions.
The solution – a cultural shake-up. Universities must offer more teaching-only posts, breaking the vicious cycle which sees disgruntled researchers forced to lecture badly, and excellent teachers forced out from lack of research output. Senior management should mandate leadership training for group leaders and supervisors, empowering them to manage effectively and motivate their students. Doctoral candidates, for that matter, might also benefit from a course in followership, the latest business fashion. Perhaps most importantly, higher education needs to stop hiring, firing and promoting based purely on research brilliance, with no regard for leadership, teamwork and communication skills.
Conveniently, higher education has ready-made role models in industrial research organisations. Bell Labs is a good example. Not long ago this once-famous institution was in the doldrums, even forced to shut down for a period. But under the inspirational leadership of Marcus Weldon, the laboratory is undergoing a renaissance. Undoubtedly much of this progress is built on Marcus’ clear strategic goals and emphasis on well-organised collaboration.
Universities might even find inspiration closer to home. Engineering departments worldwide are developing ever-closer ties to industry, with beneficial effects on research culture. From machine learning to aerospace, corporate backing provides not only money but also business sense and career development. These links favour researchers with some client-facing polish, rather than the stuffy, chalk-covered recluse of yesteryear. That doesn’t mean there isn’t a place for pure research – far from it. But insular positions ought to be exceptional, rather than the norm.
At the Scopus Early Career Researcher Awards a few weeks ago, Elsevier CEO Ron Mobed rightly bemoaned the loss of young research talent from academia. The threefold frustrations of poor job security, hackneyed management and desultory training hung unspoken in the air. If universities, journals and learned societies are serious about tackling this problem they’ll need a revolution. It’s time for the 21st century university. Let’s get down to the business of research.
18 months ago I embarked on a PhD, fresh-faced and enthusiastic. I was confident I could learn enough to understand research at the coal-face. But when faced with the prospect of producing an original paper, I was frankly terrified. How on earth do you turn vague ideas into concrete results?
In retrospect, my naive brain was crying out for an algorithm. Nowadays we’re so trained to jump through examination hoops that the prospect of an open-ended project terrifies many. Here’s the bad news – there’s no well-marked footpath leading from academic interest to completed write-up.
So far, so depressing. Or is it? After all, a PhD is meant to be a voyage of discovery. Sure, if I put you on a mountain-top with no map you’d likely be rather scared. But there’s also something exhilarating about striking out into the unknown. Particularly, that is, if you’re armed with a few orienteering skills.
I’m about to finish my first paper. I can’t and won’t tell you how to write one! Instead, here’s a few items for your kit-bag on the uncharted mountain of research. With these in hand, you’ll be well-placed to forge your own route towards the final draft.
1. Your Supervisor

Any good rambler takes a compass. Your supervisor is your primary resource for checking direction. Use them!
It took me several months (and one overheard moan) to start asking for pointers. Nowadays, if I’m completely lost for more than a few hours, I’ll seek guidance.
2. Your Notes
You start off without a map. It’s tempting to start with unrecorded cursory reconnaissance, just to get the lie of the land. Although initially speedy, you have to be super-disciplined lest you go too far and can’t retrace your steps. You’d be better off making notes as you go. Typesetting languages and subversion repositories can help you keep track of where you are.
Your notes will eventually become your paper – hence their value! But there’s a balance to be struck. It’s tempting to spend many hours on pretty formatting for content which ends up in Appendix J. If in doubt, consult your compass.
3. Your Colleagues
Some of them have written papers before. All of them have made research mistakes. Mutual support is almost as important in a PhD programme as on a polar expedition! But remember that research isn’t a race. If your colleague has produced three papers in the time it’s taken you to write one, that probably says more about their subfield and style of work than your relative ability.
4. Resilience

Aka love of failure. If you sit on top of the mountain and never move then you’ll certainly stay away from dangerous edges. But you’ll also never get anywhere! You will fail much more than you succeed during your PhD. Every time you pursue an idea which doesn’t work, you are homing in on the route which will.
In this respect, research is much like sport – positive psychology is vital. Bottling up frustration is typically unhelpful. You’d be much better off channelling that into…
5. Not Writing Your Paper
You can’t write a paper 24/7 and stay sane. Thankfully a PhD is full of other activities that provide mental and emotional respite. My most productive days have coincided with seminars, teaching commitments and meetings. You should go to these, especially if you’re feeling bereft of motivation.
And your non-paper pursuits needn’t be limited to the academic sphere. A regular social hobby, like sports, music or debating, can provide a much needed sense of achievement. Many PhDs I know also practice some form of religion, spirituality or meditation. Time especially set aside for mental rest will pay dividends later.
6. Books

No, I don’t mean related papers in your field (though these are important). I’ve found fiction, particularly that with intrigue and character development, hugely helpful when I’m struggling to cross an impasse. Perhaps surprisingly, some books aimed at startups are also worth a read. A typical startup faces research problems akin to academic difficulties.
Finally, remember that research is by definition iterative! You cannot expect your journey to end within a month. As you chart the territory around you, try to enjoy the freedom of exploring. Who knows, you might just spot a fascinating detour that leads directly to an unexpected paper.
My thanks to Dr. Inger Mewburn and her wonderful Thesis Whisperer blog for inspiring this post.
Thomson-Reuters has reportedly published their yearly analysis of the hottest trends in science research. Increasingly, governments and funding organisations use such documents to identify strategic priorities. So it’s profoundly disturbing that their conclusions are based on shoddy methodology and bad science!
The researchers first split recent papers into 10 broad areas, of which Physics was one. And then the problems began. According to the official document
Research fronts assigned to each of the 10 areas were ranked by total citations and the top 10 percent of the fronts in each area were extracted.
Already the authors have fallen into two fallacies. First, they have failed to normalise for the size of the field. Many fields (like Higgs phenomenology) will necessarily generate large quantities of citations due to their high visibility and current funding. Of course, this doesn’t mean that we’ve cracked naturalness all of a sudden!
Second, their analysis is far too coarse-grained. Physics contains many disciplines, with vastly different publication rates and average numbers of citations. Phenomenologists publish swiftly and regularly, while theorists write longer papers with slower turnover. Experimentalists often fall somewhere in the middle. Clearly the Thomson-Reuters methodology favours phenomenology over all else.
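To make the point concrete, here’s a toy sketch with entirely made-up numbers: a large, fast-publishing field can dominate raw citation totals even when a smaller field wins on citations per paper.

```python
# Made-up toy numbers purely for illustration (fields and counts are invented)
fields = {
    # field: (papers, total_citations)
    "higgs_phenomenology": (5000, 150_000),
    "formal_theory":       (800,   40_000),
}

raw = {f: cites for f, (papers, cites) in fields.items()}
per_paper = {f: cites / papers for f, (papers, cites) in fields.items()}

print(max(raw, key=raw.get))              # higgs_phenomenology: wins on volume
print(max(per_paper, key=per_paper.get))  # formal_theory: wins per paper (50 vs 30)
```

Any ranking that skips the per-paper normalisation step will systematically reward the biggest, busiest subfields.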
But wait, the next paragraph seems to address these concerns. To some extent they “cherry pick” the hottest research fronts to account for these issues. According to the report
Due to the different characteristics and citation behaviors in various disciplines, some fronts are much smaller than others in terms of number of core and citing papers.
Excellent, I hear you say – tell me more! But here comes more bad news. It seems there’s no information on how this cherry picking was done! There’s no mention of experts consulted in each field. No mathematical detail about how vastly different disciplines were fairly compared. Thomson-Reuters have decided that all the reader deserves is a vague placatory paragraph.
And it gets worse. It turns out that the scientific analysis wasn’t performed by a balanced international committee. It was handed off to a single country – China. Who knows, perhaps they were the lowest bidder? Of course, I couldn’t possibly comment. But it seems strange to me to pick a country famed for its grossly flawed approach to scientific funding.
Governments need to fund science based on quality and promise, not merely quantity. Thomson-Reuters’ simplistic analysis is bad science at its very worst. It seems to offer intelligent information but is in fact misleading, flawed and ultimately dangerous.
I spent last week at the Perimeter Institute in Waterloo, Ontario. Undoubtedly one of the highlights was Juan Maldacena’s keynote on resolving black hole paradoxes using wormholes. Matt’s review of the talk below is well worth a read!
The cosmic microwave background (CMB) is a key observable in cosmology. Experimentalists can precisely measure the temperature of microwave radiation left over from the big bang. The data shows very small differences in temperature across the sky. It’s up to theorists to figure out why!
The most popular explanation invokes a scalar field early in the universe. Quantum fluctuations in the field are responsible for the classical temperature distribution we see today. This argument, although naively plausible, requires some serious thought for full rigour.
Talks by cosmologists often parrot the received wisdom that the two-point correlation function of the scalar field can be observed on the sky. But how exactly is this done? In this post I’ll carefully explain the winding path from theory to observation.
First off, what really is a correlation function? Given two random variables $X$ and $Y$ we can (roughly speaking) determine their correlation as

$$\mathrm{corr}(X, Y) = \langle XY \rangle$$

where the angle brackets denote an average over many samples. Intuitively this definition makes sense – in configurations where $X$ and $Y$ share the same sign there is a positive contribution to the correlation.
There’s another way of looking at correlation. You can think of it as a measure of the probability that for any random sample of there will be a value of within some given distance. Hopefully this too feels intuitive. It can be proved more rigorously using Bayes’ theorem.
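Here’s a minimal numerical sketch of the idea using only Python’s standard library (I use the mean-subtracted, i.e. connected, version of the correlation):

```python
import random

random.seed(1)

def correlation(pairs):
    """Estimate <XY> - <X><Y> from paired samples."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    return sum((x - mx) * (y - my) for x, y in pairs) / n

xs = [random.gauss(0, 1) for _ in range(100_000)]

# Correlated pair: Y tracks X up to small independent noise
corr_pairs = [(x, x + random.gauss(0, 0.1)) for x in xs]
# Independent pair: Y drawn afresh, ignoring X
indep_pairs = [(x, random.gauss(0, 1)) for x in xs]

print(correlation(corr_pairs))   # close to Var(X) = 1
print(correlation(indep_pairs))  # close to 0
```

When $Y$ tracks $X$, knowing one sample tells you a lot about the other nearby value – exactly the probabilistic reading above.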
This second way of viewing correlation is particularly useful in cosmology. Here the random variables are usually position-dependent fields $\phi(\mathbf{x})$. The correlation then becomes

$$C(\mathbf{r}) = \langle \phi(\mathbf{x})\, \phi(\mathbf{x} + \mathbf{r}) \rangle$$

where the average is over the whole sky with the separation vector $\mathbf{r}$ fixed. The magnitude of this vector provides a natural distance scale for the probabilistic interpretation of correlation. We see that the correlation is an avatar for the lumpiness of the distribution at a particular distance scale!
Now let’s focus on the CMB. The temperature fluctuations are defined as the fractional deviation from the average temperature at each point on the sky. Mathematically we write

$$\Theta(\hat{\mathbf{n}}) = \frac{T(\hat{\mathbf{n}}) - \bar{T}}{\bar{T}}$$

where $\hat{\mathbf{n}}$ defines a point on the unit $2$-sphere. We want to relate this to theoretical predictions. Given our discussion above, it’s not surprising that our first step is to compute the correlation function

$$C(\theta) = \langle \Theta(\hat{\mathbf{n}}_1)\, \Theta(\hat{\mathbf{n}}_2) \rangle$$
where the average is over the whole sky with the angle $\theta$ between $\hat{\mathbf{n}}_1$ and $\hat{\mathbf{n}}_2$ fixed. This average doesn’t lose any physical information since there’s no preferred direction in the sky! We can conveniently encode the correlation function using spherical harmonics, expanding

$$\Theta(\hat{\mathbf{n}}) = \sum_{\ell, m} a_{\ell m}\, Y_{\ell m}(\hat{\mathbf{n}})$$

The coefficients $a_{\ell m}$ are known as the multipole moments of the temperature distribution. Substituting this in the correlation function definition we obtain

$$C(\theta) = \frac{1}{4\pi} \sum_{\ell} (2\ell + 1)\, C_\ell\, P_\ell(\cos\theta)$$

where $C_\ell = \langle |a_{\ell m}|^2 \rangle$, the average being over $m$, and $P_\ell$ are the Legendre polynomials. We’re almost finished with our derivation! The final step is to convert from the correlation function to its momentum space representation, known as the power spectrum. With a little work, you can show that the power at multipole number $\ell$ is given by

$$C_\ell = 2\pi \int_{-1}^{1} C(\theta)\, P_\ell(\cos\theta)\, \mathrm{d}(\cos\theta)$$
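The relation between the multipole coefficients $C_\ell$ and the angular correlation function, $C(\theta) = \frac{1}{4\pi}\sum_\ell (2\ell+1)\, C_\ell\, P_\ell(\cos\theta)$, is easy to evaluate numerically. Here’s a sketch using a hypothetical toy spectrum (not real Planck data), with the Legendre polynomials generated by Bonnet’s recurrence:

```python
import math

def legendre(lmax, x):
    """Values P_0(x), ..., P_lmax(x) via Bonnet's recurrence."""
    P = [1.0, x]
    for l in range(1, lmax):
        P.append(((2 * l + 1) * x * P[l] - l * P[l - 1]) / (l + 1))
    return P[:lmax + 1]

def corr_from_spectrum(C_ell, theta):
    """C(theta) = (1/4pi) * sum_l (2l+1) * C_l * P_l(cos theta)."""
    P = legendre(len(C_ell) - 1, math.cos(theta))
    return sum((2 * l + 1) * c * P[l]
               for l, c in enumerate(C_ell)) / (4 * math.pi)

# Hypothetical toy spectrum: flat in l(l+1)C_l, with monopole and dipole removed
C_ell = [0.0, 0.0] + [1.0 / (l * (l + 1)) for l in range(2, 50)]
print(corr_from_spectrum(C_ell, 0.1))  # correlation at a separation of 0.1 rad
```

Going the other way – extracting the $C_\ell$ from a measured $C(\theta)$ – is the Legendre-transform integral above.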
This is exactly the quantity we see plotted from sky map data on graphs comparing inflation theory to experiment!
From the theory perspective, this quantity is fairly easy to extract. We must compute the power spectrum of the primordial fluctuations of the inflaton field. This is merely a matter of quantum field theory, albeit in de Sitter spacetime. Perhaps the most comprehensive account of this procedure is provided in Daniel Baumann’s notes.
Without going into any details, it’s worth mentioning a few theoretical models. The simplest option is to have a massless free inflaton field. This gives a scale-invariant power spectrum, which is almost correct! Adding a mass term tilts the spectrum away from exact scale invariance. This is a better approximation, but the simplest massive models have since been ruled out by Planck data.
Clearly we need a more general potential. Here’s where the fun starts for cosmologists. The buzzwords are effective field theory, string inflation, non-Gaussianity and multiple fields! But that’ll have to wait for another blog post.