# reperiendi

## Cartesian categories and the problem of evil

Posted in Category theory, Math by Mike Stay on 2007 September 19

How many one-element sets are there? Well, given any set $S,$ we can construct the one-element set $\{S\},$ so the collection of one-element sets has to be a proper class, a mindbogglingly enormous collection far larger than any mere set could be. However, they’re all the same from the point of view of functions coming out of them: the one element, no matter what it is, maps to a point in the range. The internal nature of a given one-element set is completely irrelevant to the way it behaves as a member of the category Set of all sets and functions.

For a category theorist, making a distinction between one-element sets is evil. Instead of looking inside an object to see how it’s made, we should only care about how it interacts with the world around it. There are certain kinds of objects that are naturally special because of the way they interact with everything else; we say they satisfy universal properties.

Just as it is evil to dwell on the differences between isomorphic one-element sets, it is evil to care about the inner workings of ordered pairs. Category theory elevates an ordered pair to a primitive concept by ignoring all details about the implementation of an ordered pair except how it interacts with the rest of the world.  Ordered pairs are called “products” in category theory.

A product of the objects $G$ and $H$ in the category $C$ is

• an object of $C$, which we’ll label $G\times H,$ together with
• two maps, called projections
• $\pi_G:G\times H\to G$
• $\pi_H:G\times H\to H$

that satisfy the following universal property: for any triple $(X, f_G:X\to G, f_H:X \to H),$ there is a unique map $u:X\to G\times H$ such that $\pi_G \circ u = f_G$ and $\pi_H \circ u = f_H.$

In particular, given two different representations of ordered pairs, the universal property gives a unique map each way between them, and the composites of those maps must be identities, so the two representations are isomorphic.
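To make that isomorphism concrete, here is a small Python sketch (my own, not from the post): two different implementations of the product of two sets, each with its projections, and the mediating map that the universal property determines.

```python
# Implementation 1 of the product: ordinary tuples.
prod1 = lambda a, b: (a, b)
pi1_G = lambda p: p[0]
pi1_H = lambda p: p[1]

# Implementation 2: tuples in the opposite order.
prod2 = lambda a, b: (b, a)
pi2_G = lambda p: p[1]
pi2_H = lambda p: p[0]

# The mediating map from implementation 1 to implementation 2 is forced by
# the projections: send p to the unique pair whose projections agree with p's.
mediate = lambda p: prod2(pi1_G(p), pi1_H(p))

p = ("x", "y")
assert pi2_G(mediate(p)) == pi1_G(p)  # projections commute with the map
assert pi2_H(mediate(p)) == pi1_H(p)
```

The map in the other direction is built the same way, and the two composites fix every pair, which is the isomorphism.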

A category will either have products or it won’t:

——————

1. The category Set has the obvious cartesian product.

2. The trivial category has one object and one morphism, the identity, so there’s only one choice for a triple like the one in the definition:

$(X, 1_X, 1_X),$

and the unique map required by the universal property is the identity morphism, so the trivial category has products.

3. A preorder is a set $S$ equipped with a $\le$ relation on pairs of elements of $S$ that is transitive and reflexive. Given any preorder, we can construct a category whose

• objects are the elements of $S$ and
• there is an arrow from $x$ to $y$ if $x \le y.$

So a product in a preorder is

• an element $z = x \times y$ of $S$ together with maps
• $\pi_x:z\to x$ (that is, $z \le x$)
• $\pi_y:z \to y$ (that is, $z \le y$)

such that for any element $w \in S$ with $w\le x$ and $w \le y,$ we have $w \le z.$

In other words, a product $x \times y$ in a preorder is the greatest lower bound of $x$ and $y$.  For example, in the preorder $(\mathbb{R}, \le)$, the product of two numbers $x, y$ is $\min(x, y)$.  In the preorder $(\mbox{Set}, \subseteq)$, the product is $x \cap y$.  In the preorder $(\mathbb{N}, |)$, where “|” is “divides”, the product is $\gcd(x, y)$.
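The three examples can be checked directly in Python (a quick sketch, not part of the post):

```python
from math import gcd

# Products (greatest lower bounds) in three preorders.

# In (R, <=), the product is min.
assert min(3.0, 7.0) == 3.0

# In (Set, ⊆), the product is intersection.
assert {1, 2, 3} & {2, 3, 4} == {2, 3}

# In (N, |), where x -> y means x divides y, the product is gcd.
assert gcd(12, 18) == 6
```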

Exercise: in what preorder over $\mathbb{R}$ is the cartesian product the same as multiplication?

——————

A cartesian category is a category with products. (Note the lowercase ‘c’: you know someone’s famous if things named after them aren’t capitalized. Cf. ‘abelian.’) You can think of the cartesian coordinate system for the plane, with its ordered pairs of coordinates, to remind you that you can make ordered pairs of objects in a cartesian category.

## Functors

Posted in Category theory, Math by Mike Stay on 2007 September 15

The last example in the previous post said that the collection of all algebraic gadgets of a given kind and structure-preserving maps between them forms a category. The example given was the category of rings. It’s also true that a category itself is an algebraic gadget with structure (the ability to compose morphisms); a structure-preserving map between categories is called a functor. A functor $F:C\to D$ between categories $C$ and $D$ maps

• objects of $C$ to objects of $D$
• morphisms of $C$ to morphisms of $D$

such that identities and composition are preserved.

One way of thinking about a functor $F:C\to D$ from the category $C$ to the category $D$ is as a “copy” or a “shadow” of $C$ in $D.$ For example, recall that graphs and manifolds both give rise to categories. If we let $C$ be the graph of a cube, where

• objects are the eight vertices and
• morphisms are paths along the edges of the cube,

and $D=\mathbb{R}^2$ be the plane, where

• objects are points of the plane and
• morphisms are paths in the plane,

then a functor $F:C\to D$ is a two-dimensional picture of a cube, a shadow cast by the cube. The functor $F$ takes

• each vertex to a point in the plane, and
• each path on the cube to a path in the plane

such that

• composing paths is associative and
• identity paths on the cube map to constant paths in the plane.

These last two requirements imply that the paths in the shadow of the cube are generated by the shadows of each edge.

Illustrated are the images of four functors from the cube $C$ to the plane $D$. Figure (a) maps each vertex to a distinct point and each edge to a distinct path. Figure (b) maps the front face of the cube and the back face of the cube to the same square, and maps the edges running from the front to the back to the constant paths at the corners. This is a degenerate functor, i.e. a functor that maps multiple vertices to the same point in $\mathbb{R}^2.$ Figure (c) maps all the points of the cube to the same point and all the edges to the constant path at that point. This functor is totally degenerate, since there’s no way to map to a smaller subset of points in $\mathbb{R}^2.$

Exercise: define a functor $F:C\to D$ such that figure (d) is the image of $F$.

## Multiplication:composition::monoid:category

Posted in Category theory, Math by Mike Stay on 2007 September 12

The last example in the previous post was the monoid consisting of all functions from a set $T$ to itself under composition. We could multiply the elements (i.e. compose them) in any order because the source and the target were the same, the set $T$.

$\displaystyle \stackrel{T}{\mathop{\circlearrowleft}_f}$

For arbitrary sets, we still know how to compose, but we have some restrictions on what functions are composable: the target of the first has to match the source of the second. Given $f:T\to U$ and $g:U\to V$, we can compose them:

$\displaystyle T \stackrel{f}{\to} U \stackrel{g}{\to} V.$

So where multiplication had to start and end at the same place, composition can wander around.

Just as the essence of a monoid is multiplication, the essence of a category is composition.

A category consists of

• a collection of objects and
• for each pair of objects $x,y$ a set ${\rm hom}(x,y)$ of morphisms or arrows from $x$ to $y$ (if $f \in {\rm hom}(x,y),$ we write $f:x\to y$)

such that

• for every object $x$ there is a morphism $1_x:x\to x$,
• for every triple of objects $x,y,z$ and pair of morphisms $f:x\to y,$ $g:y\to z$ there is a composite morphism

$g\circ f:x\to z,$

and

• composition is associative

$(h \circ g) \circ f = h \circ (g \circ f)$

• and obeys the left- and right-unit laws

$f \circ 1_x = f = 1_y \circ f.$
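As a quick sanity check on these axioms, here is a sketch of mine (not from the post) of a tiny finite category: two objects, their identities, and one nontrivial arrow, with composition determined entirely by the unit laws.

```python
# A category with objects S, T; arrows named by strings.
src = {"1_S": "S", "1_T": "T", "a": "S"}
tgt = {"1_S": "S", "1_T": "T", "a": "T"}

def compose(g, f):
    """Return g∘f, defined only when tgt(f) == src(g)."""
    assert tgt[f] == src[g], "arrows not composable"
    if f.startswith("1_"):   # right-unit law: g ∘ 1_x = g
        return g
    if g.startswith("1_"):   # left-unit law: 1_y ∘ f = f
        return f
    raise ValueError("no such composite in this category")

assert compose("a", "1_S") == "a"   # a ∘ 1_S = a
assert compose("1_T", "a") == "a"   # 1_T ∘ a = a
```

With only one nontrivial arrow there are no composable nontrivial pairs, so associativity holds vacuously.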

One simple structure on which something can wander is a directed graph. It has vertices and directed edges between them. We can’t compose two edges and get a new edge, but we can compose two paths and get a new path. Every directed graph gives rise to a category whose objects are the vertices of the graph and whose morphisms are paths in the graph, including “identity paths” that start and end at the same point without going anywhere in between.

Whereas a graph is discrete, a manifold (like a circle or Euclidean space) is continuous. Any manifold gives rise to a category whose objects are the points of the space and whose morphisms are paths between them.

Here are more examples of categories:

1. The empty category:

• no objects
• no morphisms

2. The category “1”:

• one object $T$
• one trivial morphism $1_T:T\to T$

3. The category “2”:

• two objects $S, T$
• the trivial morphisms $1_S, 1_T$ and one nontrivial morphism $a:S\to T$

4. The category Set of all sets:

• objects are sets
• morphisms are functions
• For example, $S = \{1,2\}$ and $T = \{\mbox{purple}, \mbox{green}, \mbox{yellow}\}$ are two objects in Set, and the function $f:S\to T$, where $f(1) ={\rm purple}$ and $f(2)={\rm green},$ is a morphism in ${\rm hom}(S,T).$

5. Any monoid “is” a one-object category:

• one object $T$
• the set of morphisms from $T$ to itself is equipped with extra structure that makes it a monoid. We define the category in terms of the monoid as follows:
• ${\rm hom}(T,T):= S$
• $1_T:= e(\bullet)$
• $b \circ a := m(a,b)$

For example, the real numbers under multiplication form a monoid, so we get a category where

• there’s one object $T,$ and
• ${\rm hom}(T,T) = \mathbb{R}$.
• Given $r, s \in \mathbb{R},$ we compose them by multiplying them:

$T \stackrel{r}{\to} T \stackrel{s}{\to} T = T \stackrel{sr}{\to} T$

Note: $T$ could be anything, as long as it makes sense to multiply it by a real number.

For example, $T$ could be the set of scalings of a picture of a dog. Then a morphism from $T$ to itself would be a function, since functions are the things that go between sets, and the function would be parameterized by a real number $r \in \mathbb{R}.$ We define

$f_r(t) =$ the picture $t$ scaled by a factor $r,$ and

$(f_s \circ f_r)(t) = f_s(f_r(t)) = f_{rs}(t).$

The identity morphism in this case is

$1_T(t) = f_1(t) = t.$

Or, $T$ could be the lone dot in a one-dot graph with infinitely many self-loops, each labelled with a real number. We then say the weight of a path on the graph is the product of the loop labels. The identity morphism is the self-loop labelled with the number 1.
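Here is a runnable sketch of the scaling example (my own; I treat “pictures” as plain numbers so that scaling is just multiplication):

```python
# f(r) is the morphism "scale by r"; composing scalings multiplies parameters.
def f(r):
    return lambda t: r * t   # hypothetical stand-in: "the picture t scaled by r"

t = 5.0
assert f(3.0)(f(2.0)(t)) == f(6.0)(t)   # f_3 ∘ f_2 = f_6, i.e. f_s ∘ f_r = f_rs
assert f(1.0)(t) == t                   # f_1 is the identity morphism 1_T
```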

6. Any collection of algebraic gadgets and the structure-preserving maps between them. For example, a ring is an algebraic gadget with two monoids that work together:

• there’s a set $S$
• two different special points, or identities: the multiplicative identity $e_{\cdot}(\bullet) = 1$ and the additive identity $e_+(\bullet) = 0$
• two different associative binary operations
• multiplication:

$(ab)c = a(bc)$

• addition, which is also commutative:

$(a+b)+c = a+(b+c)$
$a+b=b+a$

• and multiplication distributes over addition:

$a(b+c) = ab + ac$

There is a category Ring consisting of all rings and maps between them that preserve the ring structure.

• objects: rings
• morphisms: ring homomorphisms
• A ring homomorphism $F:R_1 \to R_2$ is a map between rings that preserves the ring structure: identities, multiplication and addition. That is, it doesn’t matter which way you go around the squares below; the answer will be the same either way:

$\begin{array}{ccc}R_1 \times R_1 & \stackrel{F \times F}{\to} & R_2 \times R_2\\ \cdot_1 \downarrow & & \downarrow \cdot_2 \\ R_1 & \stackrel{F}{\to} & R_2\end{array}$


$\begin{array}{ccc}R_1 \times R_1 & \stackrel{F \times F}{\to} & R_2 \times R_2\\ +_1 \downarrow & & \downarrow +_2 \\ R_1 & \stackrel{F}{\to} & R_2\end{array}$

• For example, the ring of integers $\mathbb{Z}$ is an object in Ring, as is the ring $\mathbb{Z}_2$ (where $\mathbb{Z}_n$ denotes the integers modulo $n$). The set ${\rm hom}(\mathbb{Z}, \mathbb{Z}_2) = \{ 0, p \},$ where 0 is the ring homomorphism that takes all integers to zero and $p$ is the map that takes each integer to its remainder modulo 2.

$\begin{array}{ccc}(-4, 7) & \stackrel{p \times p}{\mapsto} & (0, 1) \\ \cdot \downarrow & & \downarrow \cdot_2 \\ -28 & \stackrel{p}{\mapsto} & 0\end{array}$


$\begin{array}{ccc}(-4, 7) & \stackrel{p \times p}{\mapsto} & (0, 1)\\ + \downarrow & & \downarrow +_2 \\ 3 & \stackrel{p}{\mapsto} & 1\end{array}$

## Monoids

Posted in Category theory, Math by Mike Stay on 2007 September 12

A set has no structure. It’s just a collection of things, all of them equally unimportant.

$\{ \blacksquare, \blacktriangle, \bigstar \}$

Figure 1. A set.

$1 = \{ \bullet \}$

Figure 2. Another set, the one-element set we’ll call “1.”

A function, or map, “goes between” sets. It has a source set (also called the domain) and a target set (also called the range). To each element of the source set, it assigns an element of the target set. We’ll use the usual shorthand for denoting functions, with the name of the function, a colon, the source set, an arrow, and the target set. Underneath, we’ll write a typical element of the source, an arrow with a small bar at the start, and the element of the target it goes to. For example, consider the function $d$ mapping each integer to its double. The integers are denoted by the symbol $\mathbb{Z}$ (German Zahlen, “numbers”), so we have

$\begin{array}{rl} d:\mathbb{Z} & \to \mathbb{Z} \\ x & \mapsto 2x\end{array}$

Perhaps the simplest structure we can add to a set is to pick an element of it and make that element important. A pointed set $S$ is

• a set $S$ equipped with
• a function $e:1\to S$, where 1 is the one-element set $\{\bullet\}.$

The special element of $S$ is given by $e(\bullet).$ The name $e$ should bring to mind the word “element,” since there’s a different choice for what this map should be for each element of $S.$ For example, given the set $S=\{ \blacksquare, \blacktriangle, \bigstar \}$, there are three possible choices for the function $e,$ corresponding to the three different elements we could pick:

1. $e(\bullet)=\blacksquare$
2. $e(\bullet)=\blacktriangle$
3. $e(\bullet)=\bigstar$

A monoid is a pointed set with a little more structure. Way back in kindergarten, they taught us how to count. We counted fingers and marbles and cookies. We used base 1, unary, tally marks:

$\blacksquare\blacksquare+\blacksquare\blacksquare\blacksquare=\blacksquare\blacksquare\blacksquare\blacksquare\blacksquare.$

$\blacksquare \blacktriangle \bigstar + \clubsuit \blacksquare = \blacksquare \blacktriangle \bigstar \clubsuit \blacksquare.$

Note that when we add this way, the order matters:

$\blacksquare \blacktriangle \blacksquare \blacktriangle \blacksquare = \blacksquare \blacktriangle \blacksquare + \blacktriangle \blacksquare \ne \blacktriangle \blacksquare + \blacksquare \blacktriangle \blacksquare = \blacktriangle \blacksquare \blacksquare \blacktriangle \blacksquare.$

While the number of symbols is the same, the result is not!

It takes a long time to teach kids to abstract the concept of a number that’s independent of the order of the symbols in the result. Since that’s one of the earliest things we learn in math, it’s particularly hard to forget, so when anyone says “addition,” we automatically assume it’s commutative. When we get to sticking things like matrices together, there are a couple of natural ways to do it; the commutative way we call “addition,” and the noncommutative one we call “multiplication.”

The only thing a generic monoid knows how to do is stick things together like a kindergartener who hasn’t abstracted the concept of a number yet. Since (in the general case) that’s noncommutative, we call it “multiplication.” When we learned about multiplication in school, we had to memorize our “times table.” The table had numbers across the top and numbers down the sides. Given a row and a column of the table, we could look up the result of multiplying those two numbers.

A “times table” is a function $m:S\times S\to S;$ the set $S\times S$ consists of ordered pairs of elements of the pointed set $S,$ a row and a column. The output of the function is the value in that cell of the table. When we were memorizing the times table, the easiest row was multiplying by 1. You just got the same thing back! So one of the elements of $S$ should behave like 1. And the function $e$ tells us which one we should pick!

So that’s one constraint on the table: multiplying by $e(\bullet)$ shouldn’t do anything. We call this “obeying the left- and right-unit laws.” When we use real numbers, we write it like this:

$1a=a=a1,$

so in our multiplication table, we have

$\displaystyle m(e(\bullet), a) = a = m(a, e(\bullet)).$

The other constraint on the table is that if you’re multiplying three things, it shouldn’t matter in what order you multiply the pairs. We call this “being associative.” For real numbers, it looks like this:

$a(bc) = (ab)c,$

so in our multiplication table, we have

$\displaystyle m(a, m(b,c)) = m(m(a,b), c).$
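Both laws can be checked by brute force whenever the carrier is finite. A minimal sketch (my own; the mod-4 example is just an illustration, not from the post):

```python
from itertools import product

def is_monoid(S, e, m):
    """Check the unit and associativity laws for a finite times table m."""
    S = list(S)
    unit_ok = all(m(e, a) == a == m(a, e) for a in S)
    assoc_ok = all(m(a, m(b, c)) == m(m(a, b), c)
                   for a, b, c in product(S, repeat=3))
    return unit_ok and assoc_ok

# Addition mod 4 is a monoid with unit 0.
assert is_monoid(range(4), 0, lambda a, b: (a + b) % 4)
```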

Now we’re ready for the definition: a monoid consists of a pointed set $S,$ where the special point $e(\bullet)$ is called the “multiplicative identity,” and a times table $m$ that’s associative and obeys the left- and right-unit laws above. Here are several more examples of monoids:

1. The trivial monoid:

• $S=\{1\}$
• $e(\bullet)=1$
• $m(1,1)=1$

2. The free monoid on one element (tally marks under concatenation)

• $S=\{\mbox{(blank)},1,11,111,1111,\ldots\}$
• $e(\bullet)=$ (blank)
• $m(\underbrace{1\ldots 1}_{a},\underbrace{1\ldots 1}_{b}) = \underbrace{1\ldots 1}_{a+b}$

3. The whole numbers under addition:

• $S=\{0,1,2,3,\ldots\}$
• $e(\bullet)=0$
• $m(a,b)=a+b$

4. The free monoid on two elements (binary strings under concatenation):

• $S=\{\mbox{(blank)},0,1,00,01,10,11,000,001,\ldots\}$
• $e(\bullet)=$ (blank)
• $m(a,b)=ab,$ where $ab$ is the string $a$ followed by the string $b$

5. The reals under multiplication:

• $S=\mathbb{R}$
• $e(\bullet)=1$
• $m(a,b)=ab$
• Note: the multiplicative group of real numbers excludes 0, because a group has to have inverses. A monoid does not have to have inverses, so we can include it here.

6. Functions from a set $T$ to itself under composition

• $S=$ all functions of the form $a:T\to T$
• $e(\bullet) =$ the identity function
• $m(a,b) = b\circ a,$ where $\circ$ is composition of functions:

$(b \circ a)(x) = b(a(x)).$

So, for example, given the set $T=\{0,1\},$ we have

• $S$ contains the four possible functions from $T$ back to itself:
00) The constant function mapping everything to zero:

$\begin{array}{rl}k_0:T & \to T\\ t & \mapsto 0\end{array}$

01) The identity function:

$\begin{array}{rl}1_T:T & \to T\\ t & \mapsto t\end{array}$

10) The function that toggles the input:

$\begin{array}{rl}NOT:T & \to T\\ t & \mapsto (1-t)\end{array}$

11) The constant function mapping everything to one:

$\begin{array}{rl}k_1:T & \to T\\ t & \mapsto 1\end{array}$

• $e(\bullet) = 1_T,$ the identity function
• $m(a,b) = b \circ a$

The identity function is the unit for composition:

$(a \circ 1_T)(t) = a(1_T(t)) = a(t)$

$(1_T \circ a)(t) = 1_T(a(t)) = a(t)$

Exercise: verify that composition is associative.
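The exercise can also be checked by brute force; here is a sketch of mine that compares the four functions pointwise on $T = \{0, 1\}$:

```python
from itertools import product

k0  = lambda t: 0      # 00) constant zero
idT = lambda t: t      # 01) identity
NOT = lambda t: 1 - t  # 10) toggle
k1  = lambda t: 1      # 11) constant one
S = [k0, idT, NOT, k1]
T = [0, 1]

def eq(f, g):          # extensional equality of functions on T
    return all(f(t) == g(t) for t in T)

def m(a, b):           # m(a, b) = b ∘ a, as in the monoid's times table
    return lambda t: b(a(t))

# Unit laws: composing with the identity changes nothing.
assert all(eq(m(idT, a), a) and eq(m(a, idT), a) for a in S)
# Associativity, checked over all 64 triples.
assert all(eq(m(a, m(b, c)), m(m(a, b), c)) for a, b, c in product(S, repeat=3))
```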

## The partition function and Wick rotation

Posted in Math, Quantum by Mike Stay on 2007 September 6

I was trying to understand Wick rotation by applying it in the case of a finite-dimensional Hilbert space, and came up with something strange. The way I’ve worked it out, it seems to map classical observables to quantum states! I’ve never heard anything like that before.

Say we have an $n$-qubit Hilbert space $X=(\mathbb{C}^2)^{\otimes n}$. This has the set of $n$-bit binary strings $|0\ldots 00\rangle,$ $|0\ldots 01\rangle,$ $|0\ldots 10\rangle, \ldots$ $|1\ldots 11\rangle$ as a basis. For brevity’s sake, I’ll write these as $|0\rangle, |1\rangle, \ldots |2^n-1\rangle.$ Let

$\displaystyle H = \sum_{j=0}^{2^n-1} E_j |j\rangle\langle j|,$

where the $E_j$ are real.

Now define the operators

$\displaystyle V=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\1&-1\end{array}\right)$

and $W=V^{\otimes n}$. $V$ is the discrete Fourier transform on a 2-dimensional space; $W$ is called the Walsh-Hadamard transform. $W$ defines a conjugate basis to the qubit basis. The most important property of $W$ for my purposes here is the fact that

$\displaystyle |\phi\rangle := W|0\rangle = N_\phi\sum_{j=0}^{2^n-1} |j\rangle,$

where the normalizing factor $N_\phi = 2^{-n/2}.$ Modulo the normalizing factor, this is a sum over all possible states.

• What’s the probability amplitude that when you start in the state $|\phi\rangle$ and evolve according to $H$, the system will still be in the state $|\phi\rangle ?$

$\displaystyle \langle \phi|e^{-iHt}|\phi\rangle = \langle 0|W \; e^{-iHt} \; W|0\rangle = N_\phi^2 \sum_{j=0}^{2^n-1} \langle j| \; e^{-iHt} \; |j\rangle$ $\displaystyle = N_\phi^2 \sum_{j=0}^{2^n-1} e^{-iE_jt}.$

Except for a factor of $i$ in the exponent and some normalization, this is the partition function for $H$. It’s been “Wick rotated.”
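Here is a numerical sketch of that identity (my own check, with made-up energies $E_j$): for diagonal $H$, the amplitude $\langle \phi|e^{-iHt}|\phi\rangle$ is exactly the Wick-rotated partition function divided by $2^n$.

```python
import cmath

n = 2
E = [0.0, 1.0, 1.0, 2.0]   # assumed sample energies, one per basis state
t = 0.7

# |phi> = W|0> has uniform amplitudes 2^{-n/2} in the qubit basis.
phi = [2 ** (-n / 2)] * 2**n
# e^{-iHt} is diagonal, with entries e^{-i E_j t}.
U = [cmath.exp(-1j * Ej * t) for Ej in E]

amplitude = sum(phi[j] * U[j] * phi[j] for j in range(2**n))

# The partition function at imaginary inverse temperature beta = i t.
Z = sum(cmath.exp(-1j * Ej * t) for Ej in E)
assert abs(amplitude - Z / 2**n) < 1e-12
```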

• What’s the probability amplitude that when you start in the state $|\phi\rangle$ and evolve according to H, the system will move to the arbitrary state $\displaystyle |\psi\rangle ?$ Well,

$\displaystyle |\psi\rangle = N_\psi \sum_{j=0}^{2^n-1} \psi_j |j\rangle,$

where each $\psi_j$ is an arbitrary complex number and $N_\psi$ is a normalizing factor. So

$\displaystyle \langle \psi| \; e^{-iHt} \; |\phi\rangle = N_\psi N_\phi \sum_{j=0}^{2^n-1} \langle j|\psi_j \; e^{-iHt} \; |j\rangle$ $\displaystyle = N_\psi N_\phi \sum_{j=0}^{2^n-1} \psi_j e^{-iE_jt}.$

If we divide this by the quantity above, we get the expectation value of a classical observable $\psi$ at “temperature” $1/it:$

$\displaystyle \frac{N_\psi N_\phi \sum\limits_{j=0}^{2^n-1} \psi_j e^{-iE_jt}}{N_\phi N_\phi \sum\limits_{j=0}^{2^n-1} e^{-iE_jt}} = \frac{N_\psi}{N_\phi} \langle\psi\rangle.$

This mapping from classical to quantum is not quantization. That maps classical observables to Hermitian operators, not to states—although, one might hit the state with the “Currying” isomorphism between states and linear transformations and get something useful.

I’m trying to work out how to connect this to a sum over paths instead of a sum over states; there’s some interesting stuff there, but I haven’t grokked it yet.