## Astounding protein folding paper

By assuming that to get from point A to point B you don’t have to hit a sequence of points in between—i.e. the way quantum particles work—these guys accurately predict protein folding rates in 15 different real proteins. This is huge.

## Regular tilings of three-dimensional spaces

If you start at the north pole and make an equilateral triangle 6000 miles on a side, the bottom will lie on the equator, each of the angles will be 90 degrees, and only four of them will fit around the pole.

In a similar way, large enough tetrahedra would tile the surface of a hypersphere. This paper identifies the eleven regular tilings of three-dimensional spaces and whether they’re spherical, Euclidean, or hyperbolic tilings, and then looks at the geometry of spacetime to see how it might be tiled.
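One way to see why regular tetrahedra can't tile flat three-dimensional space, but can tile a hypersphere, is to look at the dihedral angle. A quick sketch (mine, not from the paper):

```python
import math

# Dihedral angle of a regular Euclidean tetrahedron: arccos(1/3).
dihedral = math.degrees(math.acos(1 / 3))

# The angle doesn't divide 360 degrees evenly, so Euclidean tetrahedra
# can't fit snugly around an edge.
print(f"dihedral angle: {dihedral:.2f} degrees")       # 70.53
print(f"copies around an edge: {360 / dihedral:.2f}")  # 5.10
```

On a hypersphere the dihedral angle grows with the tetrahedron's size, so it can be made exactly 120°, 90°, or 72°, giving the spherical tilings with three, four, or five tetrahedra around each edge (the 5-cell, the 16-cell, and the 600-cell).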

The “cubic” tilings (where eight polyhedra meet around a vertex, like cubes do in Euclidean space) are amenable to taking cross-sections; the tiling of hyperbolic space with dodecahedra, for example, has a cross-section tiling the hyperbolic plane with pentagons.

## The Word of God

From desert cliff and mountaintop we trace the wide design,

Strike-slip fault and overthrust and syn and anticline…

We gaze upon creation where erosion makes it known,

And count the countless aeons in the banding of the stone.

Odd, long-vanished creatures and their tracks & shells are found;

Where truth has left its sketches on the slate below the ground.

The patient stone can speak, if we but listen when it talks.

Humans wrote the Bible; God wrote the rocks.

There are those who name the stars, who watch the sky by night,

Seeking out the darkest place, to better see the light.

Long ago, when torture broke the remnant of his will,

Galileo recanted, but the Earth is moving still.

High above the mountaintops, where only distance bars,

The truth has left its footprints in the dust between the stars.

We may watch and study or may shudder and deny,

Humans wrote the Bible; God wrote the sky.

By stem and root and branch we trace, by feather, fang and fur,

How the living things that are descend from things that were.

The moss, the kelp, the zebrafish, the very mice and flies,

These tiny, humble, wordless things–how shall they tell us lies?

We are kin to beasts; no other answer can we bring.

The truth has left its fingerprints on every living thing.

Remember, should you have to choose between them in the strife,

Humans wrote the Bible; God wrote life.

And we who listen to the stars, or walk the dusty grade,

Or break the very atoms down to see how they are made,

Or study cells, or living things, seek truth with open hand.

The profoundest act of worship is to try to understand.

Deep in flower and in flesh, in star and soil and seed,

The truth has left its living word for anyone to read.

So turn and look where best you think the story is unfurled.

Humans wrote the Bible; God wrote the world.

– Catherine Faber, *The Word of God*

## Imaginary Time 2

Another part of the analogy I started here, but this time using inverse temperature instead of imaginary time. It describes a thermometer where a mass changes position with temperature. I’m guessing this stuff only applies when the temperature is changing adiabatically.

**Thermometer (unitless temperature):**

| units | quantity |
| --- | --- |
| [1] | inverse temperature (unitless) |
| [m] | y coordinate |
| [kg/s^2 K] | spring constant * temperature unit conversion |
| [m] | how position changes with (inverse) temperature |
| [kg m/s^2 K] | force per kelvin |
| [kg m^2/s^2 K] | stretching energy per kelvin |
| [kg m^2/s^2 K] | potential energy per kelvin |
| [kg m^2/s^2 K] | entropy |

**Thermometer:**

| units | quantity |
| --- | --- |
| [1/K] | inverse temperature |
| [m] | y coordinate |
| [kg/s^2 K^2 = bits/m^2 K] | how information density changes with temperature |
| [m K] | how position changes with (inverse) temperature |
| [kg m/s^2 K = bits/m] | force per kelvin |
| [kg m^2/s^2 = bits K] | stretching energy = change in stretching information with inverse temperature |
| [kg m^2/s^2 = bits K] | potential energy = change in potential information with inverse temperature |
| [bits] | entropy |

I assume that the dynamics of such a system would follow an extremal path of the entropy; is that a minimum-entropy path or a maximum?

## Entropic gravity

Erik Verlinde has been in the news recently for revisiting Ted Jacobson’s suggestion that gravity is an entropic force rather than a fundamental one. The core of the argument is as follows:

Say we have two boxes, one inside the other:

```
+---------------+
|               |
|  +----------+ |
|  |          | |
|  |          | |
|  |          | |
|  +----------+ |
+---------------+
```

Say the inner box has room for ten bits on its surface and the outer one room for twenty. Each box can use as many “1”s as there are particles inside it:

```
+---------------+
|       X       |
|  +----------+ |
|  |          | |
|  |    X     | |
|  |          | |
|  +----------+ |
+---------------+
```

In this case, the inner box has only one particle inside, so there are 10 choose 1 = 10 ways to choose a labeling of the inner box; the outer box has two particles inside, so there are 20 choose 2 = 190 ways. Thus there are 1900 ways to label the system in all.

If both particles are in the inner box, though, the number of ways increases:

```
+---------------+
|               |
|  +----------+ |
|  |          | |
|  |   X  X   | |
|  |          | |
|  +----------+ |
+---------------+
```

The inner box now has 10 choose 2 = 45 labelings, while the outer box still has 190, giving 8550 labelings in all. So, using the standard assumption that all labelings are equally likely, it’s 8550/1900 = 4.5 times as likely for both particles to be in the inner box, and we get an entropic force drawing them together.
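A quick script (a sketch using the numbers from this example) verifies the counting:

```python
from math import comb

INNER_BITS, OUTER_BITS = 10, 20  # room for bits on each box's surface

def labelings(particles_in_inner: int, particles_total: int) -> int:
    # Each surface may use as many "1"s as there are particles inside it;
    # every particle is always inside the outer box.
    return comb(INNER_BITS, particles_in_inner) * comb(OUTER_BITS, particles_total)

apart = labelings(1, 2)      # one particle in each region: 10 * 190
together = labelings(2, 2)   # both particles in the inner box: 45 * 190

print(apart, together, together / apart)  # 1900 8550 4.5
```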

The best explanation of Verlinde’s paper I’ve seen is Sabine Hossenfelder’s Comments on and Comments on Comments on Verlinde’s paper “On the Origin of Gravity and the Laws of Newton”.

## Simple activity for teaching about radiometric dating

This works best with small groups of about 5-10 students and at least thirty dice. Divide the dice evenly among the students.

1. Count the number of dice held by the students and write it on the board.
2. Have everyone roll each of their dice once.
3. Collect all the dice that show a ‘one’, count them, write that number on the board, then set them aside.
4. Go back to step 1.

A run with 30 dice will look something like this:

| dice | number of ones |
| --- | --- |
| 30 | 5 |
| 25 | 4 |
| 21 | 4 |
| 17 | 3 |
| 14 | 1 |
| 13 | 3 |
| 10 | 2 |
| 8 | 1 |
| 7 | 1 |
| 6 | 0 |
| 6 | 1 |
| 5 | 0 |
| 5 | 1 |
| 4 | 1 |
| 3 | 0 |
| 3 | 0 |
| 3 | 0 |
| 3 | 1 |
| 2 | 1 |
| 1 | 0 |
| 1 | 0 |
| 1 | 0 |
| 1 | 0 |
| 1 | 1 |

Point out how the number of dice rolling a one on each turn is about one sixth of the dice that hadn’t yet rolled a one on the previous turn. Also point out that about half of the remaining dice are eliminated every four turns or so.

Send someone out of the room; do either four or eight turns, then bring them back and ask them to guess how many turns the group took. The student should be able to see that if half the dice are left, there were only four turns, but if a quarter of the dice are left, there were eight turns.

If the students are advanced enough to use logarithms, try the above with some number other than four or eight and have the student use logarithms to calculate the number of turns:

turns = log(number remaining/total) / log(5/6),

or, equivalently, in terms of the half-life (which is really closer to 3.8 than 4):

turns = 3.8 * log(number remaining/total) / log(1/2).
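A short simulation (a sketch, not part of the activity itself) reproduces both rules of thumb: about a sixth of the remaining dice “decay” each turn, and the half-life is ln(1/2)/ln(5/6) ≈ 3.8 turns:

```python
import math
import random

def run(dice: int = 30, seed: int = 0) -> list[int]:
    """Simulate the activity: roll all remaining dice each turn and
    set aside those that show a one. Returns the count after each turn."""
    rng = random.Random(seed)
    history = [dice]
    while dice > 0:
        ones = sum(1 for _ in range(dice) if rng.randint(1, 6) == 1)
        dice -= ones
        history.append(dice)
    return history

# Half-life of a die, in turns: solve (5/6)**t == 1/2 for t.
half_life = math.log(1 / 2) / math.log(5 / 6)
print(f"half-life: {half_life:.2f} turns")  # 3.80
print(run())
```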

When zircon crystals form, they strongly reject lead atoms: new zircon crystals have no lead in them. They easily accept uranium atoms, though. Each die represents a uranium atom, and rolling a one represents decaying into a lead atom: because uranium atoms are radioactive, they can lose bits of their nucleus and turn into lead–but only randomly, like rolling a die. Instead of a half-life of four turns, U-238 has a half-life of 4.5 billion years.

Zircon forms in almost all rocks and is hard to break down. So to judge the age of a rock, you get the zircon out, throw it in a mass spectrometer, look at the proportion of uranium to (lead plus uranium) and calculate

years = 4.5 billion * log(mass of uranium/mass of (lead+uranium)) / log(1/2).

Problem: given a zircon crystal where there’s one lead atom for every ninety-nine uranium atoms, how long ago was it formed?

4.5 billion * log(99/100) / log(1/2) = 65 million years ago.
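Checking that arithmetic (a sketch; `zircon_age` is a hypothetical helper, not a standard function):

```python
import math

def zircon_age(uranium_fraction: float, half_life: float = 4.5e9) -> float:
    """Years since the crystal formed, given the fraction of the original
    uranium remaining: fraction = (1/2)**(age / half_life)."""
    return half_life * math.log(uranium_fraction) / math.log(1 / 2)

# One lead atom for every ninety-nine uranium atoms: 99% of the uranium remains.
age = zircon_age(99 / 100)
print(f"{age / 1e6:.0f} million years")  # 65
```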

In reality, it’s slightly more complicated: there are two isotopes of uranium and several of lead. But this is a good thing, since we know the half-lives of both isotopes and can use them to cross-check each other; it’s as though each student had both six- and twenty-sided dice, and the student guessing the number of turns could use information from both groups to refine her guess.

## Lazulinos

Lazulinos are quasiparticles in a naturally occurring Bose-Einstein condensate first described in 1977 by the Scottish physicist Alexander Craigie while at the University of Lahore [3]. The quasiparticles are weakly bound by an interaction for which neither the position nor number operator commutes with the Hamiltonian. A measurement of a lazulino’s position will cause the condensate to go into a superposition of number states, and a subsequent measurement of the population will return a random number; also, counting the lazulinos at two different times will likely give different results.

Their name derives from the stone *lapis lazuli* and means, roughly, “little blue stone”. Lazulinos are so named because even though the crystals in which they arise absorb visible light, and would otherwise be jet black, they lose energy through surface plasmons in the form of near-ultraviolet photons, with visible peaks at 380, 402, and 417nm. Optical interference imparts a “laser speckle” quality to the emitted light; Craigie described the effect in a famously poetic way: “Their colour is the blue that we are permitted to see only in our dreams”. What makes lazulinos particularly interesting is that they are massive and macroscopic. Since the number operator does not commute with the Hamiltonian, lazulinos themselves do not have a well-defined mass; if the population is *N*, then the mass of any particular lazulino is *m*/*N*, where *m* is the total mass of the condensate.

In a recent follow-up to the “quantum mirage” experiment [2], Don Eigler’s group at IBM used a scanning tunneling microscope to implement “quantum mancala”—picking up the lazulino ‘stones’ in a particular location usually changes the number of stones, so the strategy for winning becomes much more complicated. In order to pick up a fixed number of stones, you must choose a superposition of locations [1].

1. C.P. Lutz and D.M. Eigler, “Quantum Mancala: Manipulating Lazulino Condensates,” Nature 465, 132 (2010).
2. H.C. Manoharan, C.P. Lutz and D.M. Eigler, “Quantum Mirages: The Coherent Projection of Electronic Structure,” Nature 403, 512 (2000). Images available at http://www.almaden.ibm.com/almaden/media/image_mirage.html
3. A. Craigie, “Surface plasmons in cobalt-doped Y<sub>3</sub>Al<sub>5</sub>O<sub>12</sub>,” Phys. Rev. D 15 (1977). Also available at http://tinyurl.com/35oyrnd.

## Renormalization and Computation 2

This is the second in a series of posts covering Yuri Manin’s ideas involving Hopf algebra renormalization of the Halting problem. Last time I showed how perturbing a quantum harmonic oscillator gave a sum over integrals involving interactions with the perturbation; we can keep track of the integrals using Feynman diagrams, though in the case of a single QHO they weren’t very interesting.

One point about the QHO needs emphasis here. Given a wavefunction $\psi = \sum_n c_n\,|n\rangle$ describing the state of the QHO as a combination of energy eigenstates, we must get *some* value when we measure the energy; so if we sum up the squared norms of the probability amplitudes, we should get unity:

$$\sum_n |c_n|^2 = 1.$$

This is called the *normalization* condition.

When we perturb the QHO, the old states are no longer the energy eigenvectors of the new Hamiltonian. We can express the new eigenvectors in terms of the old ones,

$$|n'\rangle = \sum_m c_{mn}(\lambda)\,|m\rangle,$$

where $\lambda$ is the strength of the perturbation, and we reexpress our wavefunction in this new basis:

$$\psi = \sum_n c'_n\,|n'\rangle.$$

Since we’re working with a new set of coefficients, we have to make sure they sum up to unity, too:

$$\sum_n |c'_n|^2 = 1.$$

This is the *renormalization* condition. So renormalization is about making sure things sum up right once you perturb the system.

I want to talk about renormalization in quantum field theory; the trouble is, I don’t actually know quantum field theory, so I’ll just be writing up what little I’ve gathered from reading various things and conversations with Dr. Baez. I’ve likely got some things wrong, so please let me know and I’ll fix them.

A field is a function defined on spacetime. Scalar fields are functions with a single output, whereas vector fields are functions with several outputs. The electromagnetic field, for example, can be described by assigning a number (the scalar potential) and a vector (the vector potential) to every point in spacetime. When you have two electrons and move one of them, it feels a reaction force and loses momentum; the other electron doesn’t move until the influence, traveling at the speed of light, reaches it. Conservation of momentum says that the momentum has to be somewhere in the meantime; it’s useful to consider it to be stored in the electromagnetic field.

When you take the Fourier transform of the field, you get a function that assigns values to harmonics of the field; in the case of electromagnetism, the transformed field assigns a value to each color of light. Quantizing this transformed field amounts to making each Fourier coefficient into a creation operator, just like in the QHO example from last time. So we have a continuum of QHOs, one indexed by each color. (By the way, the zero-dimensional Fourier transform is the identity function, so the QHO example from last time can be thought of both as the field at the unique point in spacetime and as the field at the unique frequency.)

When we move to positive-dimensional fields, we get more interesting pictures, like these from quantum electrodynamics:

Here, our coupling constant is the fine structure constant $\alpha = e^2/\hbar c \approx 1/137$, where $e$ is the charge of the electron. For each vertex, we write down our coupling constant times a delta function saying that the incoming momentum minus the outgoing momentum equals zero. For each internal line, we write down a propagator—a function representing the transfer of momentum from one point to another; it’s a function of the four-momentum $k$—and multiply all this stuff together. Then we integrate over all the internal four-momenta and get something that looks like

$$\int d^4k\, \frac{N(k)}{(k^2 - m^2)\,\left((k + p)^2 - m^2\right)}.$$

The trouble is, this integral usually gives infinity for an answer. We try to work around this in two steps: first, we *regularize* the integral by introducing a length scale $\Lambda$. This represents the scale at which gravity starts being important and we need to move to a more fundamental theory. In the quantum field theory of magnetic domains in iron crystals, the length scale is the inter-atom distance in the lattice. Regularization makes the integral finite for $\Lambda$ away from some singularity.

There are a few different ways of regularizing; one is to use the length scale directly as a cutoff, only integrating over momenta below $1/\Lambda$. The cutoff integral obviously converges, and its solutions are always a sum of three parts:

- The first part diverges as $\Lambda \to 0$, either logarithmically or as a power of $1/\Lambda$.
- The second part is finite and independent of $\Lambda$.
- The third part vanishes as $\Lambda \to 0$.

Renormalization in this case amounts to getting rid of the first part.

These three parts represent three different length scales: at lengths larger than the system’s characteristic scale, all quantum or statistical fluctuations are negligible, and we can use the mean field approximation and do classical physics. At lengths between that scale and $\Lambda$, we use QFT to calculate what’s going on. Finally, at lengths smaller than $\Lambda$, we need a new theory to describe what’s going on. In the case of QED, the new theory is quantum gravity; string theory and loop quantum gravity are the serious contenders for the correct theory.

The problem with this regularization scheme is that it doesn’t preserve gauge invariance, so physicists usually use another scheme, called “dimensional regularization”. Here, we compute the integral in $d = 4 - \varepsilon$ dimensions, which gives us an expression involving gamma functions of $\varepsilon$ (where gamma is the continuous extension of the factorial function), and then we take the limit $\varepsilon \to 0$. The solutions to this are also a sum of three terms—a divergent part, a finite part, and a vanishing part—and then renormalization gets rid of the divergent part.
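Numerically, the pole of the gamma function at zero is easy to see; near $\varepsilon = 0$ it behaves like $1/\varepsilon - \gamma$, with $\gamma$ the Euler–Mascheroni constant (a quick check, not from the post):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

for eps in (0.1, 0.01, 0.001):
    exact = math.gamma(eps)         # blows up as eps -> 0
    approx = 1 / eps - EULER_GAMMA  # leading terms of the Laurent series
    print(f"{eps:6}  {exact:12.4f}  {approx:12.4f}")
```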

Assume we have some theory with a single free parameter $a$. We’d like to calculate a function $F(x)$ perturbatively in terms of $a$, where $F$ represents some physical quantity, and we know its measured value $\alpha$ at one point: $F(\mu) = \alpha$. We assume $F$ takes the form

$$F(x) = a + F_1(x)\,a^2 + F_2(x)\,a^3 + \cdots$$

and assume that this definition gives us divergent integrals for the $F_k$. The first step is regularization: instead of $F$ we have a new function

$$F^\Lambda(x) = a + F_1^\Lambda(x)\,a^2 + F_2^\Lambda(x)\,a^3 + \cdots,$$

whose coefficients $F_k^\Lambda$ are finite.

Now we get to the business of renormalization! We solve this problem at each order; if the theory is renormalizable, knowing the solution at the previous order will give us a constraint for the next order, and we can subtract off all the divergent terms in a consistent way:

- Order $a$.
Here, $F^\Lambda(x) = a$. Since it’s a constant, it has to match the measured value, so $a = \alpha$. In this approximation, the coupling constant takes the classical value.

- Order $a^2$.
Let $a = \alpha + \delta a$, where $\delta a = O(\alpha^2)$. Plugging this into the definition of $F^\Lambda$, we get

$$F^\Lambda(x) = \alpha + \delta a + F_1^\Lambda(x)\,\alpha^2 + O(\alpha^3).$$

Using $F^\Lambda(\mu) = \alpha$, we get $\delta a = -F_1^\Lambda(\mu)\,\alpha^2$, which diverges as the cutoff is removed. In the case of QED, this says that the bare charge on the electron is infinite. While the preferred interpretation these days is that quantum gravity is a more fundamental theory that takes precedence on very small scales (the Planck length is about as many orders of magnitude smaller than a proton as a proton is smaller than a hundred kilometers), when the theory was first introduced, there was no reason to think that we’d need another theory. So the interpretation was that with an infinite charge, an electron would be able to extract an infinite amount of energy from the electromagnetic field. The uncertainty principle would then create virtual particles of all energies, which would exist for a time inversely proportional to their energy. The particles can be charged, so they line up with the field and damp its strength, just like a dielectric. In this interpretation, the charge on the electron depends on the energy of the particles you’re probing it with.

So to second order,

$$F^\Lambda(x) = \alpha + \left(F_1^\Lambda(x) - F_1^\Lambda(\mu)\right)\alpha^2 + O(\alpha^3).$$

A theory is therefore only renormalizable if the divergent part of $F_1^\Lambda$ is independent of $x$; in QED it is. We can now define the renormalized coefficient $F_1$ as the limit

$$F_1(x) = \lim_{\Lambda \to 0}\left(F_1^\Lambda(x) - F_1^\Lambda(\mu)\right).$$
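The subtraction can be played with numerically. Assume, purely for illustration, a toy coefficient that diverges logarithmically as the length cutoff goes to zero; the subtracted combination is then cutoff-independent:

```python
import math

def F1(x: float, cutoff: float) -> float:
    # Toy divergent coefficient (illustrative, not from any real QFT):
    # log(x / cutoff) diverges as the length cutoff goes to zero.
    return math.log(x / cutoff)

x, mu = 2.0, 1.0
for cutoff in (1e-2, 1e-6, 1e-12):
    divergent = F1(x, cutoff)                      # grows without bound
    renormalized = F1(x, cutoff) - F1(mu, cutoff)  # stays at log(x/mu)
    print(f"{cutoff:8.0e}  {divergent:10.4f}  {renormalized:.6f}")
```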

- Higher orders.
In a renormalizable theory, the process continues to all orders, with the counterterms at each order entirely specified by the lower-order solutions and the measured value $\alpha$.

## My talk at Perimeter Institute

I spent last week at the Perimeter Institute, a Canadian institute founded by Mike Lazaridis (CEO of RIM, maker of the BlackBerry) that sponsors research in cosmology, particle physics, quantum foundations, quantum gravity, quantum information theory, and superstring theory. The conference, Categories, Quanta, Concepts, was organized by Bob Coecke and Andreas Döring. There were lots of great talks, all of which can be found online, and lots of good discussion and presentations, which unfortunately can’t. (But see Jeff Morton’s comments.) My talk was on the Rosetta Stone paper I co-authored with Dr. Baez.

## Imaginary time

**Statics (geometric = no time):**

| units | quantity |
| --- | --- |
| [x] | x coordinate |
| [y] | y coordinate |
| [k] | proportionality constant |
| [y/x] | slope |
| [k y/x] | proportional to slope |
| [k y^2/x^2] | distortion |
| [k y^2/x^2] | original shape |
| [k y^2/x] | least S at equilibrium |

**Statics (with energy):**

| units | quantity |
| --- | --- |
| [x] | parameterization of curve |
| [y] | y coordinate |
| [kg x/s^2] | spring constant at x |
| [y/x] | slope |
| [kg y/s^2] | force due to stretching |
| [kg y^2/s^2 x = J/x] | stretching energy density |
| [kg y^2/s^2 x = J/x] | gravitational energy density |
| [kg y^2/s^2 = J] | energy (least energy at equilibrium) |

**Statics (unitless distance):**

| units | quantity |
| --- | --- |
| [1] | parameterization of curve |
| [m] | y coordinate |
| [kg/s^2] | spring constant |
| [m] | relative displacement |
| [kg m/s^2 = N] | force at x due to stretching |
| [kg m^2/s^2 = J] | stretching energy at x |
| [kg m^2/s^2 = J] | gravitational energy at x |
| [kg m^2/s^2 = J] | energy (least energy at equilibrium) |

**Dynamics:**

| units | quantity |
| --- | --- |
| [s] | time |
| [m] | y coordinate |
| [kg] | mass |
| [m/s] | velocity |
| [kg m/s] | momentum |
| [-kg m^2/s^2 = -J] | negative kinetic energy |
| [kg m^2/s^2 = J] | potential energy |
| [kg m^2/s] | i * action |

See also Toby Bartels’ sci.physics post.
