# reperiendi

## Coends

Posted in Category theory, Math, Quantum by Mike Stay on 2010 April 11

Coends are a categorified version of “summing over repeated indices”. We do that when we’re computing the trace of a matrix and when we’re multiplying two matrices. It’s categorified because we’re summing over a bunch of sets instead of a bunch of numbers.

Let $C$ be a small category. The functor $\mbox{hom}:C^{\mbox{op}} \times C \to \mbox{Set}$ assigns

• to each pair of objects the set of morphisms between them, and
• to each pair of morphisms $(f:c \to c', h:d\to d')$ a function that takes a morphism $g \in \mbox{hom}(c', d)$ and returns the composite morphism $h \circ g \circ f \in \mbox{hom}(c, d')$, where $c, c', d, d' \in \mbox{Ob}(C).$
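The action on morphisms can be sketched in a few lines of Python, treating morphisms as plain functions and composition as function composition. This is only an illustration; the names `hom_action` and `compose` are my own, not from any library.

```python
# Sketch of the hom functor's action on a pair of morphisms (f, h):
# hom(f, h) : hom(c', d) -> hom(c, d'),  g |-> h . g . f.

def compose(p, q):
    """Diagrammatic-order composition: (p ; q)(x) = q(p(x))."""
    return lambda x: q(p(x))

def hom_action(f, h):
    """Precompose with f, postcompose with h."""
    return lambda g: compose(compose(f, g), h)

# Toy example with all four objects taken to be the integers:
f = lambda x: x + 1      # f : c -> c'
g = lambda x: 2 * x      # g : c' -> d
h = lambda x: x - 3      # h : d -> d'

composite = hom_action(f, h)(g)   # h . g . f
print(composite(5))               # 2 * (5 + 1) - 3 = 9
```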

It turns out that given any functor $S:C^{\mbox{op}} \times D \to \mbox{Set},$ we can make a new category where $C$ and $D$ are subcategories and $S$ is actually the hom functor; some keywords for more information on this are “collages” and “Artin glueing”. So we can also think of $S$ as assigning

• to each pair of objects a set of morphisms between them, and
• to each pair of morphisms $(f:c \to c', h:d\to d')$ a function that takes a morphism $g \in S(c', d)$ and returns the composite morphism $h \circ g \circ f \in S(c, d')$, where $c,c' \in \mbox{Ob}(C)$ and $d,d' \in \mbox{Ob}(D).$

We can think of these functors as adjacency matrices, where the two parameters are the row and column, except that instead of counting the number of paths, we’re taking the set of paths. So $S$ is kind of like a matrix whose elements are sets, and we want to do something like sum the diagonals.

The coend of $S$ is the coequalizer of the diagram

$\begin{array}{c}\displaystyle \coprod_{f:c \to c'} S(c', c) \\ \\ \displaystyle S(f, c) \downarrow \quad \quad \downarrow S(c', f) \\ \\ \displaystyle \coprod_c S(c, c) \end{array}$
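For reference, the literature usually packages this same coequalizer into “integral” notation, writing the coend of $S$ as

$\displaystyle \int^{c \in C} S(c, c) = \mathrm{coeq}\left( \coprod_{f:c \to c'} S(c', c) \rightrightarrows \coprod_c S(c, c) \right).$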

The top set consists of all the pairs where

• the first element is a morphism $f \in \mbox{hom}(c, c')$ and
• the second element is a morphism $g \in S(c', c).$

The bottom set is the set of all the endomorphisms in $S.$

The coequalizer of the diagram, the coend of $S,$ is the bottom set modulo a relation. Starting at the top with a pair $(f, g),$ the two arrows give the relation

$\displaystyle c \stackrel{f}{\to} c' \stackrel{g}{\multimap} c \stackrel{c}{\to} c \quad \sim \quad c' \stackrel{c'}{\to} c' \stackrel{g}{\multimap} c \stackrel{f}{\to} c',$

where I’m using the lollipop to mean a morphism from $S,$ and writing each identity morphism with the name of its object.

So this says: take all the endomorphisms that can be chopped up into a morphism $f$ from $\mbox{hom}$ going one way and a morphism $g$ from $S$ going the other, and set $fg \sim gf.$ Note that the two sides of the relation are endomorphisms of different objects ($c$ and $c'$), so it identifies endomorphisms sitting on any two objects related by such a pair. The coend, then, is the sum over all the endomorphisms up to this equivalence.

To get the trace of the hom functor, use $S = \mbox{hom}$ in the analysis above and replace the lollipop with a real arrow. If that category is just a group, this is the set of conjugacy classes. If that category is a preorder, then we’re computing the set of isomorphism classes.
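The group case is easy to check by brute force. Here is a hedged Python sketch (the helper names `compose`, `find`, and `union` are my own) that quotients the symmetric group $S_3$ by the relation $fg \sim gf$ using union-find and counts the resulting classes; since $fg$ and $gf$ are always conjugate, this should recover the conjugacy classes.

```python
# Compute the coend of hom for a one-object category (a group):
# quotient the endomorphisms (= group elements) by g.f ~ f.g.

from itertools import permutations

def compose(p, q):
    """Compose permutations given as tuples, p then q."""
    return tuple(q[i] for i in p)

G = list(permutations(range(3)))   # the six elements of S3

# Union-find over the group elements.
parent = {g: g for g in G}
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x
def union(x, y):
    parent[find(x)] = find(y)

# The coend relation: for every pair (f, g), identify fg with gf.
for f in G:
    for g in G:
        union(compose(f, g), compose(g, f))

classes = {find(g) for g in G}
print(len(classes))   # S3 has 3 conjugacy classes
```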

The coend is also used when “multiplying matrices”. Let $S(c', c) = T(b, c) \times U(c', d).$ Then the top set consists of triples $(f: c\to c',\quad g:b \multimap c,\quad h:c' \multimap d),$ the bottom set of pairs $(g:b \multimap c, \quad h:c \multimap d),$ and the coend is the bottom set modulo

$(\displaystyle b \stackrel{g}{\multimap} c \stackrel{c}{\to} c, \quad c \stackrel{f}{\to} c' \stackrel{h}{\multimap} d) \quad \sim \quad (\displaystyle b \stackrel{g}{\multimap} c \stackrel{f}{\to} c', \quad c' \stackrel{c'}{\to} c' \stackrel{h}{\multimap} d)$

That is, it doesn’t matter if you think of $f$ as connected to $g$ or to $h$; the connection is associative, so you can go all the way from $b$ to $d.$

Notice here how a morphism can turn “inside out”: when $f$ and the identities surround a morphism in $S$, it’s the same as being surrounded by morphisms in $T$ and $U$; this is the difference between a trace, where the repeated index is the row and column of the same matrix, and matrix multiplication, where the repeated index is the column of the first matrix and the row of the second.
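In the special case where $C$ is discrete (identity morphisms only), there are no nonidentity $f$, the relation is trivial, and the coend is just the coproduct over $c$: ordinary matrix multiplication with sets in place of numbers. A hedged Python sketch of that special case (the function name `set_matmul` and the matrices are made up for illustration):

```python
# Discrete case of the coend: the composite TU(b, d) is the disjoint
# union over c of the products T(b, c) x U(c, d). Taking cardinalities
# turns this into ordinary matrix multiplication.

def set_matmul(T, U, objs):
    """T[b][c] and U[c][d] are sets; return the set-valued product TU."""
    return {
        b: {
            d: {(c, g, h) for c in objs for g in T[b][c] for h in U[c][d]}
            for d in objs
        }
        for b in objs
    }

objs = [0, 1]
T = {0: {0: {'g1'}, 1: {'g2', 'g3'}}, 1: {0: set(), 1: {'g4'}}}
U = {0: {0: {'h1'}, 1: set()}, 1: {0: {'h2'}, 1: {'h3'}}}

TU = set_matmul(T, U, objs)
# Cardinalities obey the usual formula |TU[b][d]| = sum_c |T[b][c]|*|U[c][d]|:
print(len(TU[0][0]))   # 1*1 + 2*1 = 3
```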