## Burning bright

Bengal tiger portrait shoot, with all four varieties.

## Renormalization and Computation 4

This is the fourth in a series of posts on Yuri Manin’s pair of papers. In the previous posts, I laid out the background; this time I’ll actually get around to his result.

A homomorphism from the Hopf algebra into a target algebra is called a *character*. The functor that assigns an action to a path, whether classical or quantum, is a character. In the classical case, it's into the rig $(\mathbb{R}\cup\{\infty\}, \min, +)$ and we take an infimum over paths; in the quantum case, it's into the rig $(\mathbb{C}, +, \times)$ and we take an integral over paths. Moving from the quantum to the classical case is called Maslov dequantization.
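Here's a quick Python sketch (my own toy numbers, nothing from Manin's papers) of Maslov dequantization: a log-sum-exp over path actions collapses to the infimum as the "temperature" parameter goes to zero.

```python
import math

def soft_min(values, h):
    """'Quantum' combination at temperature h: a log-sum-exp, i.e. addition in
    (R, +, *) transported along x -> exp(-x/h)."""
    return -h * math.log(sum(math.exp(-v / h) for v in values))

paths = [3.0, 5.0, 4.0]  # actions of three hypothetical paths

# As h -> 0, the log-sum-exp collapses to the infimum over paths:
for h in (1.0, 0.1, 0.01):
    print(h, soft_min(paths, h))

print(min(paths))  # the classical (tropical) value
```

At `h = 0.01` the softened sum already agrees with `min(paths)` to many decimal places.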

Manin mentions that the runtime of a parallel program is a character akin to the classical action: the runtime of the composition of two programs is the sum of their respective runtimes, while the runtime of two programs run in parallel is the maximum of the two. A similar result holds for nearly any cost function. He also points out that the computably enumerable reals form a rig. He examines Rota-Baxter operators as a way to generalize what "polar part" means and to extend the theorems on Hopf algebra renormalization to such rigs.
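The runtime character is easy to play with concretely. A toy sketch, with class and method names of my own invention:

```python
# Runtime as a character into the (max, +) rig: sequential composition
# maps to +, parallel composition maps to max.

class Prog:
    def __init__(self, runtime):
        self.runtime = runtime

    def then(self, other):       # run self, then other: runtimes add
        return Prog(self.runtime + other.runtime)

    def alongside(self, other):  # run both in parallel: runtimes max
        return Prog(max(self.runtime, other.runtime))

a, b, c = Prog(3), Prog(5), Prog(2)
assert a.then(b).runtime == 8
assert a.alongside(b).runtime == 5
# The character respects the rig laws, e.g. distributivity of + over max:
assert a.then(b.alongside(c)).runtime == a.then(b).alongside(a.then(c)).runtime
```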

In the second paper, he looks at my work with Calude as an example of a character. He uses our same argument to show that many measures of program behavior have the property that if the measure hasn't stopped growing after reaching a certain large amount with respect to the program size, then the density of finite values the measure could still take decreases rapidly. Surprisingly, though he referred to these results as cutoffs, he didn't actually use them anywhere for doing regularization.

Reading between the lines, he might be suggesting something like approximating the Kolmogorov complexity that he uses later by using a time cutoff, motivated by results from our paper: there's a constant, depending only on the programming language, such that if you run the $n$th program for a large number of steps and it hasn't stopped, then the density of later times at which it could stop is small.

Levin suggested using a computable complexity that's the sum of the program length and the log of the number of time steps; I suppose you could "regularize" the Kolmogorov complexity by adding $\varepsilon$ times the log of the number of time steps to the length of the program, renormalize, and then let $\varepsilon$ go to zero, but that's not something Manin does.
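Levin's cost function is easy to sketch; the "programs" and step counts below are a made-up table, just to show the arithmetic.

```python
import math

# Levin's computable complexity: Kt(x) is the minimum over programs p
# printing x of len(p) + log2(steps taken by p). The table of programs
# here is hypothetical.
programs_for_x = [
    ("print'x'*8", 10),    # longer source, runs fast
    ("loop8:x", 1 << 20),  # shorter source, but much slower
]

def levin_cost(source, steps):
    return len(source) + math.log2(steps)

kt = min(levin_cost(s, t) for s, t in programs_for_x)
# The fast program wins despite being longer: its step count is tiny.
```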

Instead, he proposed two other constructions suitable for renormalization; here's the simplest. Given a partial computable function $f$, define a computably enumerable function $g$ that takes one value when $f(k)$ is defined and $0$ otherwise, and from $g$ build a series in a complex parameter.

When $f(k)$ is undefined, the series has a pole; when $f(k)$ is defined, it converges everywhere except at a single point. Birkhoff decomposition would separate these two cases, though I'm not sure what value it would take or what it would mean.

The other construction involves turning $f$ into a permutation and inventing a function that has poles where the permutation has fixed points.

So Manin’s idea of renormalizing the halting problem is to do some uncomputable stuff to get an easy-to-renormalize function and then throw the Birkhoff decomposition at it; since we know the halting problem is undecidable, perhaps the fact that he didn’t come up with a new technique for extracting information about the problem is unsurprising, but after putting in so much effort to understand it, I was left rather disappointed: if you’re going to allow yourself to do uncomputable things, why not just solve the halting problem directly?

I must suppose that his intent was not to tackle this hard problem, but simply to play with the analogy he’d noticed; it’s what I’ve done in other papers. And being forced to learn renormalization was exhilarating! I have a bunch of ideas to follow up; I’ll write them up as I get a chance.

## Renormalization and Computation 3

This is the third in a series of posts on Yuri Manin’s recent pair of papers applying Hopf algebra renormalization to the Halting problem. Last time I talked about the way people usually do renormalization; this time I’ll talk about the recent work by Kreimer, Connes, and others in exposing the underlying Hopf algebra in this process.

A Hopf algebra is

- An $R$-module for a commutative rig $R$, which means you can add vectors and multiply them by a scalar.
- An algebra, which means you can take two vectors and multiply them. This operation is associative; there’s also a unit vector that satisfies left- and right-unit laws.
- A bialgebra, which means there’s also a coassociative comultiplication and counit, and the structures all work together. When the tensor product is the cartesian product, the comultiplication duplicates the vector and the counit is the constant map to 1 in the base field. Even when the tensor product isn’t the cartesian product, it can still be useful to think of it that way.
- A bialgebra with an involution, called the antipode.

A group is very like a Hopf algebra; in fact, a group object in the category of vector spaces and linear maps is a cocommutative Hopf algebra. You can multiply group elements and there’s a multiplicative unit; you can duplicate and delete them in equations; and you can invert them.

It turns out that Feynman diagrams form a Hopf algebra if you poke yourself in one eye and squint with the other. First, a *cut* of an oriented graph (i.e. a directed graph with no parallel edges) picks an upper set and a lower set of vertices such that

- given an oriented wheel in the graph, its vertices either all belong to the upper set or all belong to the lower set, and
- any edge connecting a vertex in the upper set to a vertex in the lower set must be directed from the upper set to the lower set.
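For intuition, here's a little Python sketch (my own toy code) that enumerates the cuts of a small directed acyclic graph; a DAG has no oriented wheels, so the only condition to check is the one on edge directions.

```python
from itertools import combinations

def cuts(vertices, edges):
    """Enumerate cuts of an oriented graph: splits of the vertices into an
    upper set U and a lower set, such that no edge runs from the lower set
    up into U (equivalently, U is closed under taking predecessors).
    For graphs with oriented cycles you'd also require each cycle to lie
    wholly on one side; a DAG has none."""
    result = []
    vs = list(vertices)
    for r in range(len(vs) + 1):
        for upper in combinations(vs, r):
            U = set(upper)
            if all(not (a not in U and b in U) for a, b in edges):
                result.append((frozenset(U), frozenset(vertices) - U))
    return result

# The diamond graph a -> b, a -> c, b -> d, c -> d:
V = {"a", "b", "c", "d"}
E = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
cs = cuts(V, E)  # six cuts, counting the two trivial ones
```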

Now, given a set of Feynman diagrams, consider all formal linear combinations of graph cuts. This is a vector space because you can add these things pointwise and multiply them by a scalar. We can make it into a bialgebra by defining multiplication to be a linear map

$$m: H \otimes H \to H,$$

given on diagrams by disjoint union, with unit

$$e: R \to H$$

sending $1$ to the empty diagram, and comultiplication to be a linear map

$$\Delta: H \to H \otimes H, \qquad \Delta(\Gamma) = \sum_{c} \Gamma^{\mathrm{upper}}_c \otimes \Gamma^{\mathrm{lower}}_c,$$

where $c$ ranges over all cuts of $\Gamma$, with counit

$$\varepsilon: H \to R$$

sending the empty diagram to $1$ and every other diagram to $0$.

It's graded: just count the number of vertices. And we can turn it into a Hopf algebra by defining the antipode to be a linear map $S: H \to H$ such that

$$m \circ (S \otimes \mathrm{id}) \circ \Delta = e \circ \varepsilon.$$

Each algebra homomorphism (not necessarily preserving the Hopf algebra structure) from $H$ to an algebra $A$ defines a way to assign a (generalized) probability amplitude to each diagram. The set of such homomorphisms becomes a group when we note that the functor $\mathrm{Hom}(-, A)$ is contravariant, so the comultiplication in $H$ gets mapped to a multiplication.

Next: given a complex group $G$ (that is, a group that's also a complex manifold, so that multiplication and inverse are complex-analytic functions), a *Birkhoff decomposition* of a loop $\gamma$ defined on the unit circle is an analytic continuation of the loop to

- a holomorphic function $\gamma_+$ on the standard disk inside the circle,
- a holomorphic function $\gamma_-$ on the complement of this disk in the projective complex plane,
- such that on the unit circle the original loop is reproduced as $\gamma = \gamma_-^{-1}\,\gamma_+$, where the product and the inverse on the right are taken in the group $G$. Notice that $\gamma_-(\infty)$ is a well-defined element of $G$.

Take $G = \mathrm{Hom}(H, A)$. Now imagine our regularization parameter is a complex number $z$ and we have some loop of characters $\gamma(z)$ that's singular at $z = 0$. Then the Connes-Kreimer theorem says that the Birkhoff decomposition always exists and gives an explicit formula. Hopf algebra renormalization is simply rearranging the terms in the Birkhoff decomposition:

$$\gamma_+ = \gamma_- \star \gamma,$$

where $\star$ is the convolution product.

As I understand this, $A$ is isomorphic to an algebra of Laurent series in $z$. Given a linear combination of graphs, a character gives you back a Laurent series in $z$, which you can split into terms with negative exponents (the polar part) and those with nonnegative exponents (the renormalized part).
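The splitting step is simple enough to sketch in code; here's a toy version (my own representation of a Laurent series as a dict from exponents to coefficients):

```python
def split_laurent(series):
    """Minimal-subtraction-style split of a Laurent series {exponent: coeff}
    into its polar part (negative exponents) and renormalized part
    (nonnegative exponents)."""
    polar = {n: c for n, c in series.items() if n < 0}
    renorm = {n: c for n, c in series.items() if n >= 0}
    return polar, renorm

# e.g. a character value 2/z^2 - 3/z + 5 + 7z for some diagram:
polar, renorm = split_laurent({-2: 2, -1: -3, 0: 5, 1: 7})
```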

## Renormalization and Computation 2

This is the second in a series of posts covering Yuri Manin’s ideas involving Hopf algebra renormalization of the Halting problem. Last time I showed how perturbing a quantum harmonic oscillator gave a sum over integrals involving interactions with the perturbation; we can keep track of the integrals using Feynman diagrams, though in the case of a single QHO they weren’t very interesting.

One point about the QHO needs emphasis here. Given a wavefunction $\psi = \sum_n c_n |n\rangle$ describing the state of the QHO, it must be the case that we get *some* value when we measure the energy; so if we sum up the square norms of the probability amplitudes, we should get unity:

$$\sum_n |c_n|^2 = 1.$$

This is called the *normalization* condition.

When we perturb the QHO, the states $|n\rangle$ are no longer the energy eigenvectors of the new Hamiltonian. We can express the new eigenvectors in terms of the old ones:

$$|n'\rangle = \sum_m a_{nm}(\lambda)\,|m\rangle,$$

where $\lambda$ is the strength of the perturbation, and we reexpress our wavefunction in this new basis:

$$\psi = \sum_n c'_n\,|n'\rangle.$$

Since we're working with a new set of coefficients, we have to make sure their square norms sum up to unity, too:

$$\sum_n |c'_n|^2 = 1.$$

This is the *renormalization* condition. So renormalization is about making sure things sum up right once you perturb the system.
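As a toy sketch (my own example coefficients), renormalizing here just means rescaling the new coefficients so their squared norms again sum to one:

```python
import math

def renormalize(coeffs):
    """Rescale a list of (possibly complex) amplitudes so the squared
    norms sum to unity."""
    norm = math.sqrt(sum(abs(c) ** 2 for c in coeffs))
    return [c / norm for c in coeffs]

new_coeffs = renormalize([0.5, 0.5j, 0.7])
total = sum(abs(c) ** 2 for c in new_coeffs)  # back to unity
```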

I want to talk about renormalization in quantum field theory; the trouble is, I don’t actually know quantum field theory, so I’ll just be writing up what little I’ve gathered from reading various things and conversations with Dr. Baez. I’ve likely got some things wrong, so please let me know and I’ll fix them.

A field is a function defined on spacetime. Scalar fields are functions with a single output, whereas vector fields are functions with several outputs. The electromagnetic field assigns a vector, called the electric field, and another vector, called the magnetic field, to every point in spacetime. When you have two electrons and move one of them, it feels a reaction force and loses momentum; the other electron doesn't move until the influence, traveling at the speed of light, reaches it. Conservation of momentum says that the momentum has to be somewhere; it's useful to consider it to be stored in the electromagnetic field.

When you take the Fourier transform of the field, you get a function that assigns values to harmonics of the field; in the case of electromagnetism, the transformed field assigns a value to each color of light. Quantizing this transformed field amounts to making the amplitude at each frequency into a creation operator, just like in the QHO example from last time. So we have a continuum of QHOs, each indexed by a color. (By the way—the zero-dimensional Fourier transform is the identity function, so the QHO example from last time can be thought of both as the field at the unique point in spacetime and the field at the unique frequency.)

When we move to positive-dimensional fields, we get more interesting pictures, like these from quantum electrodynamics:

Here, our coupling constant is the fine structure constant $\alpha = e^2/\hbar c \approx 1/137$ (in Gaussian units), where $e$ is the charge of the electron. For each vertex, we write down our coupling constant times a delta function saying that the incoming momentum minus the outgoing momentum equals zero. For each internal line, we write down a propagator—a function representing the transfer of momentum from one point to another; it's a function of the four-momentum $k$—and multiply all this stuff together. Then we integrate over all four-momenta.

The trouble is, this integral usually gives infinity for an answer. We try to work around this in two steps: first, we *regularize* the integral by introducing a scale $\Lambda$. This represents the point at which gravity starts being important and we need to move to a more fundamental theory. In the quantum field theory of magnetic domains in iron crystals, the corresponding length scale is the inter-atom distance in the lattice. Regularization makes the integral finite for $\Lambda$ away from some singularity.

There are a few different ways of regularizing; one is to use $\Lambda$ as a momentum cutoff, integrating only over four-momenta with $|k| \le \Lambda$. This obviously converges, and the result is always a sum of three parts:

- The first part diverges as $\Lambda \to \infty$, either logarithmically or as a power of $\Lambda$.
- The second part is finite and independent of $\Lambda$.
- The third part vanishes as $\Lambda \to \infty$.
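A toy example of this decomposition (my own integrand, far simpler than anything in QED): the log-divergent integral $\int_0^\Lambda dk/(k+m)$ splits exactly into the three advertised pieces.

```python
import math

m = 2.0  # a toy "mass" scale

def regulated(Lam):
    # closed form of the toy integral \int_0^Lam dk / (k + m)
    return math.log(Lam + m) - math.log(m)

def pieces(Lam):
    divergent = math.log(Lam)          # diverges logarithmically as Lam -> oo
    finite = -math.log(m)              # independent of Lam
    vanishing = math.log(1 + m / Lam)  # goes to 0 as Lam -> oo
    return divergent, finite, vanishing

Lam = 1e6
assert abs(regulated(Lam) - sum(pieces(Lam))) < 1e-9
```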

Renormalization in this case amounts to getting rid of the first part.

These three parts represent three different length scales: at lengths larger than the scale we're observing, all quantum or statistical fluctuations are negligible, and we can use the mean field approximation and do classical physics. At lengths between that scale and the cutoff length $1/\Lambda$, we use QFT to calculate what's going on. Finally, at lengths smaller than $1/\Lambda$, we need a new theory to describe what's going on. In the case of QED, the new theory is quantum gravity; string theory and loop quantum gravity are the serious contenders for the correct theory.

The problem with this regularization scheme is that it doesn't preserve gauge invariance, so usually physicists use another regularization scheme, called "dimensional regularization". Here, we compute the integral in $4 - \varepsilon$ dimensions,

$$\int d^{4-\varepsilon}k\; f(k),$$

which gives us an expression involving gamma functions of $\varepsilon$, where gamma is the continuous generalization of the factorial function, and then we let $\varepsilon$ go to zero. The solutions are again a sum of three terms—a divergent part, a finite part, and a vanishing part—and then renormalization gets rid of the divergent part.
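You can see the characteristic pole numerically: near zero, $\Gamma(\varepsilon) = 1/\varepsilon - \gamma_E + O(\varepsilon)$, where $\gamma_E$ is the Euler–Mascheroni constant. A quick check with the standard library:

```python
import math

def gamma_pole_remainder(eps):
    """Gamma(eps) minus its pole 1/eps; approaches -gamma_E as eps -> 0."""
    return math.gamma(eps) - 1 / eps

euler_gamma = 0.5772156649015329
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, gamma_pole_remainder(eps))  # approaches -euler_gamma
```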

Assume we have some theory with a single free parameter $g_0$. We'd like to calculate a function $F(x)$ perturbatively in terms of $g_0$, where $F$ represents some physical quantity, and we know its measured value $g_R = F(x_0)$ at one point $x_0$. We assume $F$ takes the form

$$F(x) = g_0 + g_0^2\,F_1(x) + g_0^3\,F_2(x) + \cdots$$

and assume that this definition gives us divergent integrals for the $F_i$. The first step is regularization: instead of $F$, we have a new function

$$F(x, \Lambda) = g_0 + g_0^2\,F_1(x, \Lambda) + g_0^3\,F_2(x, \Lambda) + \cdots$$

Now we get to the business of renormalization! We solve this problem at each order; if the theory is renormalizable, knowing the solution at the previous order will give us a constraint for the next order, and we can subtract off all the divergent terms in a consistent way:

- Order $g_0$:
Here, $F(x) \approx g_0$. Since it's a constant, it has to match $g_R$, so $g_0 \approx g_R$. In this approximation, the coupling constant takes the classical value.

- Order $g_0^2$:
Let $g_0 = g_R + \delta g$, where $\delta g$ is of order $g_R^2$. Plugging this into the definition of $F(x, \Lambda)$, we get

$$F(x, \Lambda) \approx g_R + \delta g + g_R^2\,F_1(x, \Lambda).$$

Using $F(x_0, \Lambda) = g_R$, we get $\delta g = -g_R^2\,F_1(x_0, \Lambda)$, which diverges as $\Lambda \to \infty$. In the case of QED, this says that the bare charge on the electron is infinite. While the preferred interpretation these days is that quantum gravity is a more fundamental theory that takes precedence on very small scales (a Planck length is to a proton roughly as a proton is to a meter), when the theory was first introduced, there was no reason to think that we'd need another theory. So the interpretation was that with an infinite charge, an electron would be able to extract an infinite amount of energy from the electromagnetic field. Then the uncertainty principle would create virtual particles of all energies, which would exist for a time inversely proportional to their energy. The particles can be charged, so they line up with the field and damp its strength, just like a dielectric. In this interpretation, the charge on the electron depends on the energy of the particles you're probing it with.

So to second order,

$$F(x, \Lambda) \approx g_R + g_R^2\,\bigl(F_1(x, \Lambda) - F_1(x_0, \Lambda)\bigr).$$

A theory is therefore only renormalizable if the divergent part of $F_1(x, \Lambda)$ is independent of $x$. In QED it is. We can now define the renormalized $F(x)$ as the limit

$$F(x) = \lim_{\Lambda \to \infty} \bigl[\, g_R + g_R^2\,\bigl(F_1(x, \Lambda) - F_1(x_0, \Lambda)\bigr) \,\bigr].$$

- Higher orders.
In a renormalizable theory, the process continues, with the counterterms at each order entirely specified by knowing the terms of lower order.
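Here's a toy numerical model of the whole procedure, with my own choice of first-order term $F_1$ and measured value $g_R$, picked so that the divergent part is independent of the measurement point $x$:

```python
import math

def F1(x, Lam):
    """Toy first-order correction: the log-divergent integral
    \int_x^Lam dk / k = log(Lam / x). Its divergent part log(Lam)
    doesn't depend on x, so this toy theory is 'renormalizable'."""
    return math.log(Lam / x)

def F_renormalized(x, x0, gR):
    """Second-order renormalized value: the cutoff dependence cancels in
    F1(x, Lam) - F1(x0, Lam) = log(x0 / x), so the limit Lam -> oo exists."""
    return gR + gR ** 2 * math.log(x0 / x)

x, x0, gR = 2.0, 1.0, 0.1
# The regulated difference is already cutoff-independent:
for Lam in (1e3, 1e6, 1e9):
    assert abs((F1(x, Lam) - F1(x0, Lam)) - math.log(x0 / x)) < 1e-9
```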

## Renormalization and Computation 1

Yuri Manin recently put two papers on the arXiv applying the methods of renormalization to computation and the Halting problem. Grigori Mints invited me to speak on Manin's results at the weekly Stanford logic seminar, because in the second paper, Manin expands on some of my work.

In these next few posts, I’m going to cover the idea of Feynman diagrams (mostly taken from the lecture notes for the spring 2004 session of John Baez’s Quantum Gravity seminar); next I’ll talk about renormalization (mostly taken from Andrew Blechman’s overview and B. Delamotte’s “hint”); third, I’ll look at the Hopf algebra approach to renormalization (mostly taken from this post by Urs Schreiber on the n-Category Café); and finally I’ll explain how Manin applies this to computation by exploiting the fact that Feynman diagrams and lambda calculus are both examples of symmetric monoidal closed categories (which John Baez and I tried to make easy to understand in our Rosetta stone paper), together with some results on the density of halting times from my paper “Most programs stop quickly or never halt” with Cris Calude. I doubt all of this will make it into the talk, but writing it up will make it clearer for me.

Renormalization is a technique for dealing with the divergent integrals that arise in quantum field theory. The quantum harmonic oscillator is quantum field theory in 0+1 dimensions—it describes what quantum field theory would be like if space consisted of a single point. It doesn’t need renormalization, but I’m going to talk about it first because it introduces the notion of a Feynman diagram.

“Harmonic oscillator” is a fancy name for a rock on a spring. The force exerted by a spring is proportional to how far you stretch it:

$$F = -kx.$$

The potential energy stored in a stretched spring is the integral of that:

$$V = \tfrac{1}{2}\,k x^2,$$

and to make things work out nicely, we're going to choose $k = 1$. The total energy is the sum of the potential and the kinetic energy:

$$H = \frac{p^2}{2m} + \frac{x^2}{2}.$$

By choosing units so that $m = 1$, we get

$$H = \frac{p^2}{2} + \frac{x^2}{2},$$

where $p$ is momentum.

Next we quantize, getting a quantum harmonic oscillator, or QHO. We set $p = -i\,\frac{d}{dx}$, taking units where $\hbar = 1$. Now

$$H = \frac{1}{2}\left(x^2 - \frac{d^2}{dx^2}\right).$$

If we define a new observable $z = \frac{1}{\sqrt{2}}(x - ip)$, then

$$H = z z^* + \tfrac{1}{2}.$$

We can think of $z^*$ as $\frac{d}{dz}$ and write the energy eigenvectors as polynomials in $z$:

$$|n\rangle = z^n.$$

The creation operator $z$ adds a photon to the mix; there's only one way to do that, so $z \cdot z^n = z^{n+1}$. The annihilation operator $\frac{d}{dz}$ destroys one of the photons; in the state $z^n$, there are $n$ photons to choose from, so $\frac{d}{dz}\,z^n = n\,z^{n-1}$.
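This bookkeeping is concrete enough to code up. A small sketch (my own representation, with a state stored as its list of coefficients of powers of $z$): creation is multiplication by $z$, a shift of the coefficients, and annihilation is $\frac{d}{dz}$.

```python
def create(state):
    """Multiply by z: shift coefficients of z^0, z^1, ... up by one."""
    return [0.0] + state

def annihilate(state):
    """Differentiate with respect to z."""
    return [n * c for n, c in enumerate(state)][1:] or [0.0]

# On the 2-photon state z^2: create gives z^3, annihilate gives 2z.
two_photons = [0.0, 0.0, 1.0]
assert create(two_photons) == [0.0, 0.0, 0.0, 1.0]
assert annihilate(two_photons) == [0.0, 2.0]

def commutator(state):
    """[d/dz, z] applied to a state; should act as the identity."""
    ad = annihilate(create(state))
    da = create(annihilate(state))
    n = max(len(ad), len(da))
    ad += [0.0] * (n - len(ad))
    da += [0.0] * (n - len(da))
    return [x - y for x, y in zip(ad, da)]

assert commutator(two_photons) == two_photons
```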

Schrödinger’s equation says $i\,\frac{d}{dt}\psi = H\psi$, so

$$\psi(t) = e^{-iHt}\,\psi(0).$$

This way of representing the state of a QHO is known as the “Fock basis”.

Now suppose that we don't have the ideal system, that the quadratic potential is only a good local approximation to the real potential. Then we can write the total Hamiltonian as $H = H_0 + \lambda V$, where the perturbation $V$ is a function of position and momentum, or equivalently of $z$ and $\frac{d}{dz}$, and $\lambda$ is small.

Now we solve Schrödinger's equation perturbatively. We know that

$$\psi(t) = e^{-iHt}\,\psi(0),$$

and we assume that $\lambda$ is small, so that it makes sense to solve it perturbatively. Define

$$\psi_I(t) = e^{iH_0 t}\,\psi(t)$$

and

$$V(t) = e^{iH_0 t}\,V\,e^{-iH_0 t}.$$

After a little work, we find that

$$\frac{d}{dt}\psi_I(t) = -i\lambda\,V(t)\,\psi_I(t),$$

and integrating, we get

$$\psi_I(t) = \psi_I(0) - i\lambda \int_0^t V(t_1)\,\psi_I(t_1)\,dt_1.$$

We feed this equation back into itself recursively to get

$$\psi_I(t) = \sum_{n=0}^{\infty} (-i\lambda)^n \int_{0 \le t_1 \le \cdots \le t_n \le t} V(t_n) \cdots V(t_1)\,dt_1 \cdots dt_n \;\; \psi_I(0).$$

So here we have a sum of a bunch of terms; the $n$th term involves $n$ interactions with the potential, interspersed with evolving freely between the interactions, and we integrate over all possible times at which those interactions could occur.

Here’s an example Feynman diagram for this simple system, representing the fourth term in the sum above:

The lines represent evolving under the free Hamiltonian $H_0$, while the dots are interactions with the potential $V$.

As an example, let's consider $V = z - \frac{d}{dz}$ and choose units so that $\lambda = 1$. When $V$ acts on a state $z^n$, we get $z^{n+1} - n\,z^{n-1}$. So at each interaction, the system either gains a photon or changes phase and loses a photon.

A particle moving in a quadratic potential in $n$-dimensional space gives the tensor product of $n$ QHOs, which is QFT in a space where there are $n$ possible harmonics. Quantum electrodynamics (QED) amounts to considering infinitely many QHOs, one for each possible energy-momentum, which form a continuum. The diagrams for QED start to look more familiar:

The vertices are interactions with the electromagnetic field. The straight lines are electrons and the wiggly ones are photons; between interactions, they propagate under the free Hamiltonian.

## Transplants without rejection

Cured diabetes in mice via pancreas transplant, but probably works on every other organ, too.
