Tag Archives: renormalization

Tidbits from the High Table of Physics

This evening, I was lucky enough to dine with Brenda Penante, Stephane Launois, Lionel Mason, Nima Arkani-Hamed, Tom Lenagan and David Hernandez. Here for your delectation are some tidbits from the conversation.

  • The power of the renormalisation group comes from the fact that the 1-loop leading logarithm suffices to fix the leading logarithm at all loops. Here’s a reference.
  • The BPHZ renormalisation scheme (widely seen in the physics community as superseded by the Wilsonian renormalisation group) has a fascinating Hopf algebra structure.
  • The central irony of QFT is this: IR divergences were discovered before UV divergences and “solved” almost instantly. Theorists then wrangled for a couple of decades over the UV divergences, before Wilson finally laid their qualms to rest. At that point the experimentalists came back and told them that the IR divergences were the real problem all along. (This remains true today, hence the motivation behind my work on soft theorems.)
  • IR divergences are a consequence of the Lorentzian signature. In Euclidean spacetime you have a clean separation of scales, but not so in our world. (Struggling to find a reference for this, anybody know of one?)
  • The next big circular collider will probably have a circumference of 100km, reach energies 7 times that of the LHC and cost at least £20 billion.
  • The Fourier transform of any polynomial in cos(x) with roots at \pm (2i+1) / (2n+1) for 1 \leq i \leq n-1 has all positive coefficients. This is equivalent to the no-ghost theorem in string theory, proved by Peter Goddard, and seems to require some highly non-trivial machinery. (Again, does anyone have a reference?)

and finally

  • Never, ever try to copy algebra into Mathematica late at night!

Renormalization and Super Yang Mills Theory

It’s well known that \mathcal{N}=4 super Yang-Mills theory is perturbatively finite. This means that there’s no need to introduce a regulating cutoff to get sensible answers for scattering amplitude computations. In particular the \beta and \gamma functions for the theory vanish.

Recall that the \gamma function tells us about the anomalous dimensions of elementary fields. More specifically, if \phi is some field appearing in the Lagrangian, it must be rescaled by a factor Z^{1/2} during renormalization. The \gamma function then satisfies

\displaystyle \gamma(g)=\frac{1}{2}\mu\frac{d}{d\mu}\log Z(g,\mu)

where g is the coupling and \mu the renormalization scale. At a fixed point g_* of the renormalization group flow, it can be shown that \gamma(g_*) exactly encodes the difference between the classical dimension of \phi and its quantum scaling dimension.
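A standard way to see this is through the two-point function. At the fixed point, renormalization group arguments give (schematically, for a scalar field of classical dimension \Delta_0)

\displaystyle \langle \phi(x)\phi(0)\rangle \sim \frac{1}{|x|^{2(\Delta_0+\gamma(g_*))}}

so the power law is shifted from its classical value by precisely \gamma(g_*).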

Thankfully we can replace all that dense technical detail with the simple picture of a river above. This represents the space of all possible theories, with the mass scale \mu taking the place of the usual time evolution. An elementary field operator travelling downstream experiences a change in scaling dimension. If it happens to get drawn into the fixed point in the middle of the whirlpool(!), the anomaly is exactly encoded by the \gamma function.

For our beloved \mathcal{N}=4 though the river doesn’t flow at all. The theory just lives in one spot all the time, so the elementary field operators just keep their simple, classical dimensions forever!

But there’s a subtle twist in the tale when you start considering composite operators. These are built as products of elementary fields (and their derivatives) at the same spacetime point. Naively you might expect that these don’t get renormalized either, but there you would be wrong!

So what’s the problem? Well, we know that propagators are singular at short distances: they blow up as the separation of the two fields goes to zero. To get sensible answers for the expectation values of composite operators we must regulate these singularities. And that brings back the pesky problem of renormalization with a vengeance.
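The simplest composite operator already shows this. Even in free scalar theory the naive product \phi(x)^2 has a divergent expectation value, so one defines it by point splitting and subtracting the offending propagator (a standard construction, sketched schematically):

\displaystyle [\phi^2](x) = \lim_{y\to x}\Big(\phi(x)\phi(y) - \langle \phi(x)\phi(y)\rangle\Big)

In an interacting theory the required subtractions become scale dependent, and that scale dependence is exactly where the anomalous dimensions of composite operators come from.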

The punchline is that although \mathcal{N}=4 is finite, the full spectrum of primary operators does contain some with non-trivial scaling dimensions. And that’s just as well really, because otherwise the AdS/CFT correspondence wouldn’t be quite as interesting!

Non-Perturbative QFT: A Loophole?

I’ve been musing on yesterday’s post, and in particular a potential loophole in my argument. Recall that the whole shebang hinges on the fact that e^{-1/g^2} is smooth but not analytic. We were interpreting g as the coupling constant in some theory, say QED. But hang about: surely we could just do a rescaling A_\mu \mapsto A_\mu/g^2 to remove this behaviour? After all, the theory is classically invariant under such a transformation.

But it turns out that this kind of rescaling is anomalous in (most) quantum field theories. Recall that renormalization endows quantum field theories with a \beta function, which determines the evolution of coupling constants as energy changes. The rescaling will only remain a symmetry if \beta(g) is globally zero. Otherwise the rescaling only superficially eliminates the non-perturbative effect – it will reappear at different energies!
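To make the “reappears at different energies” point concrete, here’s a minimal numerical sketch of a running coupling. The one-loop QED beta function \beta(e) = e^3/12\pi^2 is standard; the starting value e_0 and the scale range are illustrative choices of mine, not data.

```python
import math

def beta(e):
    # one-loop QED beta function: de/d(ln mu) = e^3 / (12 pi^2)
    return e**3 / (12 * math.pi**2)

def run_coupling(e0, t_final, steps=100_000):
    """Integrate de/dt = beta(e) from t = 0 up to t_final = ln(mu/mu_0)."""
    e, dt = e0, t_final / steps
    for _ in range(steps):
        # classic 4th-order Runge-Kutta step
        k1 = beta(e)
        k2 = beta(e + 0.5 * dt * k1)
        k3 = beta(e + 0.5 * dt * k2)
        k4 = beta(e + dt * k3)
        e += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return e

e0 = 0.3          # illustrative coupling at the reference scale mu_0
t = 50.0          # ln(mu/mu_0)
e_numeric = run_coupling(e0, t)

# analytic one-loop solution: 1/e^2(mu) = 1/e0^2 - ln(mu/mu_0) / (6 pi^2)
e_analytic = math.sqrt(1 / (1 / e0**2 - t / (6 * math.pi**2)))

# the coupling diverges (Landau pole) at t = 6 pi^2 / e0^2
t_pole = 6 * math.pi**2 / e0**2
```

The coupling grows monotonically with energy, so a non-perturbative factor like e^{-1/g^2} rescaled away at one scale simply reasserts itself at another.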

This raises a natural question: can you have instantons in finite quantum field theories? By definition these have vanishing \beta function. Naively we might expect scale invariance to kill non-perturbative physics. A popular finite theory is \mathcal{N}=4 SYM, which crops up in AdS/CFT. A quick Google search suggests that my naive thinking is wrong: there are plenty of papers on instantons in this theory!

There must be a still deeper level to non-perturbative understanding. Sadly most physics papers gloss over the details. Let’s keep half an eye out for an explanation!

Non-Perturbative Effects in Quantum Field Theory

In elementary QFT we only really know how to solve free theories. These require quantizing infinitely many harmonic oscillators, which is an easy problem. But the real world is described by interacting theories with Lagrangians like

\mathcal{L}_{\textrm{QED}} = \overline{\psi}(i\not D-m)\psi - \frac{1}{4}F^2

in which the fields are mixed up together. The standard way of dealing with these is to write them as

\mathcal{L} = \mathcal{L}_{\textrm{free}}+\mathcal{L}_{\textrm{int}}

and then split the path integral as

\exp(i\int d^4x \mathcal{L}) = \exp(i\int d^4 x \mathcal{L}_{\textrm{free}})(1+i\int d^4x\mathcal{L}_{\textrm{int}}+\dots)

The first factor explicitly yields free theory propagators. The second factor may now be expanded as a Taylor series in some coupling constant g. Often this is trivial since g appears in \mathcal{L}_{\textrm{int}} precisely to first order. For example in QED we have

\mathcal{L}_{\textrm{int}} = g\overline{\psi}\gamma^{\mu}A_{\mu}\psi

with g being the electric charge. This means that the perturbation expansion is exactly the usual exponential series.
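As a sanity check, one can verify this exponential structure symbolically. Here V is just a placeholder symbol for \int d^4x\,\mathcal{L}_{\textrm{int}} (the functional integration is suppressed); this only checks the series bookkeeping, nothing more.

```python
import sympy as sp

g, V = sp.symbols('g V')  # V stands in for the spacetime integral of L_int
expansion = sp.series(sp.exp(sp.I * g * V), g, 0, 5).removeO()

# the order-n term in the coupling g should be (i V)^n / n!, the usual
# exponential series, since g enters L_int precisely to first order
third_order = expansion.coeff(g, 3)
```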

At present it looks like perturbation theory must capture all the information in a QFT, provided you sum up all the terms. However, we’ve missed a crucial mathematical point. We’ll view the scattering amplitude \Gamma for a given process as a complex-valued function of a real coupling g; that is to say, \Gamma \equiv \Gamma(g). We construct this function in a (somewhat) trivial way from smooth functions, so it should be smooth.

However, smoothness does not guarantee analyticity! Put another way, we cannot be sure that the Taylor series in g will converge for g \neq 0. Even if it does, it’s not guaranteed to converge to the function \Gamma itself. This means that there can be effects lying outside the realm of perturbation theory, even if you could sum every term.

Let’s take an example. Suppose for instance that \mathcal{L}_{\textrm{int}} has a 1/g^2 dependence. If you’re balking at this “unphysical” choice, then rest assured that such Lagrangians do crop up. Now try to compute the Taylor expansion of \exp(-1/g^2) around g = 0. A little thought shows that it is identically zero.
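If you don’t fancy doing that little thought yourself, a computer algebra system will happily confirm it. A quick sympy check that every derivative of this function vanishes at the origin:

```python
import sympy as sp

g = sp.symbols('g', real=True)
f = sp.exp(-1 / g**2)

# each derivative of exp(-1/g^2) tends to 0 as g -> 0 (the exponential
# beats any power of 1/g), so the Taylor series at the origin vanishes
# identically even though the function is nonzero for every g != 0
flat_derivatives = [sp.limit(sp.diff(f, g, n), g, 0) for n in range(5)]
```

So the function is smooth but emphatically not analytic at g = 0.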

Such non-perturbative effects require new approaches. One fruitful method involves solving the full theory classically, then considering quantum perturbations around such solutions. For Yang-Mills theories in particular this yields the notion of an instanton.

Finally let’s go back to QED and work out whether we should see non-perturbative effects there. Although not as immediately obvious as the simple \exp(-1/g^2) example, one can argue that the perturbation series has zero radius of convergence. We follow Dyson and observe that if the radius of convergence were nonzero, we’d be able to “reverse” the sign of the coupling g^2 and still get sensible answers. But in that world like charges attract, so the vacuum is unstable: a “Heisenberg allowed” fluctuation can nucleate a cluster of electrons and a distant cluster of positrons, which then lower the energy by flying apart rather than annihilating. An expansion around an unstable vacuum cannot converge, so the radius of convergence must be zero.
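Dyson’s instability is easy to see in a zero-dimensional toy model (a standard illustration, not QED itself): take Z(g) = \int dx\, e^{-x^2/2 - gx^4}/\sqrt{2\pi}. For g < 0 the “action” is unbounded below and the integral diverges, the analogue of the unstable vacuum, so the series in g cannot converge. A numerical sketch:

```python
import math
import numpy as np

g = 0.01  # illustrative coupling

# the toy "path integral" Z(g) = int dx exp(-x^2/2 - g x^4) / sqrt(2 pi),
# computed by quadrature; for g < 0 the integrand blows up at large x --
# the zero-dimensional analogue of Dyson's unstable vacuum
x = np.linspace(-8.0, 8.0, 4001)
Z = np.exp(-x**2 / 2 - g * x**4).sum() * (x[1] - x[0]) / math.sqrt(2 * math.pi)

# perturbative expansion: expanding e^{-g x^4} term by term gives
#   Z ~ sum_n (-g)^n <x^{4n}> / n!   with Gaussian moment <x^{4n}> = (4n-1)!!
# and these coefficients grow factorially with n
def term(n):
    moment = math.factorial(4 * n) // (2**(2 * n) * math.factorial(2 * n))
    return (-g)**n * moment / math.factorial(n)

partial_sum = sum(term(n) for n in range(7))  # truncated near the optimal order
late_term = abs(term(25))  # eventually the terms blow up: zero radius of convergence
```

Truncated at low order the series approximates Z beautifully; kept going, the terms explode. That is exactly the asymptotic-series behaviour expected of the QED expansion.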

This argument is not mathematically rigorous, and I don’t know whether it can be made so. Please comment if you know! Thankfully there’s a more modern perspective that sheds new light. We can view the sickness in QED in the context of Wilsonian renormalization. In particular, the theory has no well-defined high energy limit: the coupling constant flows to an infinite value at finite energy. This Landau pole behaviour is perhaps evidence that non-perturbative effects rule high energy QED. Again, I don’t know whether this argument can be made watertight, so please take a rain check on that one!