## HDRA part 1a

Again, comments are appreciated.

### 2-categories and lambda calculus

This is a follow-up post, but not the promised sequel, to A 2-Categorical Approach to the Pi Calculus, where I’ll try to correct various errors I made and clarify some things.

First, lambda calculus was invented by Church to address Hilbert’s decision problem, also known as the *Entscheidungsproblem*. The decision problem was third on a list of problems Hilbert presented at a conference in 1928, but was not, as I wrote last time, “Hilbert’s third problem”, which was third on the list of problems he laid out in 1900.

Second, the problem Greg Meredith and I were trying to address was how to prevent reduction under a “receive” prefix in the pi calculus, which is intimately connected with preventing reduction under a lambda in lambda calculus. No programming language that I know of, whether eager or lazy, reduces under a lambda.
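As a concrete check (my example, not from the post), here’s a tiny Haskell program: `undefined` crashes if evaluated, so if the program prints “ok”, the compiler never looked under the lambda.

```haskell
-- A sketch, not from the post: even lazy Haskell evaluates a term only
-- *to* a lambda, never under it.
main :: IO ()
main = do
  let f = \x -> (undefined :: Int) + x  -- a stuck computation under the lambda
  f `seq` putStrLn "ok"                 -- forces f only as far as the lambda itself
  -- print (f 1)                        -- applying f would force the body and crash
```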

There are two approaches in the literature to the semantics of lambda calculus. The first is *denotational semantics*, which was originally concerned with *what* function a term computes. This is where computability and type theory live. Denotational semantics treats alpha-beta-eta equivalence classes of lambda terms as *morphisms* in a category. The objects in the category are called “domains”, and are usually “CPOs”, a special kind of poset. Lambek and Scott used this approach to show that alpha-beta-eta equivalence classes of lambda terms with one free variable form a cartesian closed category where composition is given by substitution.
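For instance (my notation, not the post’s), write a morphism as a term with one free variable $x$; composition substitutes one term into the other’s variable, and the identity morphism is the term $x$ itself:

$$M = \lambda y.\, x, \qquad N = \lambda z.\, x \quad\Longrightarrow\quad N \circ M \;=\; N[M/x] \;=\; \lambda z.\, \lambda y.\, x.$$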

The type of a term describes the structure of its normal form. *Values* are terms on which no beta reduction remains to be done; they’re either constants or lambda abstractions. The types in lambda calculus are, respectively, either base types or function types. (While Lambek and Scott introduced new term constructors for products, they’re not strictly necessary, because the lambda term $\lambda z.\, z\,M\,N$ can be used for the pair of $M$ and $N$, with projections $\lambda p.\, p\,(\lambda x.\lambda y.\, x)$ and $\lambda p.\, p\,(\lambda x.\lambda y.\, y)$.)
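Assuming that encoding, a quick beta computation confirms the first projection recovers $M$:

$$(\lambda p.\, p\,(\lambda x.\lambda y.\, x))\,(\lambda z.\, z\,M\,N) \;\to_\beta\; (\lambda z.\, z\,M\,N)\,(\lambda x.\lambda y.\, x) \;\to_\beta\; (\lambda x.\lambda y.\, x)\,M\,N \;\to_\beta\; (\lambda y.\, M)\,N \;\to_\beta\; M.$$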

The second is *operational semantics*, which is more concerned with *how* a function is computed than with *what* it computes. All of computational complexity theory and algorithmic analysis lives here, since we have to count the number of steps it takes to complete a computation. Operational semantics treats lambda terms as *objects* in a category and rewrite rules as morphisms. It is a very syntactic approach: the set of terms is the initial algebra of some endofunctor, usually presented in Backus-Naur Form. This set of terms is then equipped with some equivalence relations and reduction relations. For lambda calculus, we mod out terms by alpha equivalence, but not beta. The reduction relations often involve specifying a reduction context, which means that the rewrites can’t occur just anywhere in a term.
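Here’s one way to make that concrete (a Haskell sketch with names of my choosing, not code from the post): the BNF grammar becomes an algebraic datatype, and de Bruijn indices mod out alpha on the nose.

```haskell
-- A sketch, not from the post.  BNF  M ::= x | \x.M | M M  as a datatype;
-- de Bruijn indices make alpha-equivalent terms equal on the nose.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- shift d c t: add d to every variable index in t at or above the cutoff c
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- subst j s t: replace variable j with s in t, shifting to avoid capture
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- one beta step at the root:  (\ . b) a  ~>  b[0 := a]
betaRoot :: Term -> Maybe Term
betaRoot (App (Lam b) a) = Just (shift (-1) 0 (subst 0 (shift 1 0 a) b))
betaRoot _               = Nothing
```

A reduction relation then chooses where `betaRoot` may fire; the reduction-context machinery below is exactly that choice.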

In pi calculus, terms don’t have a normal form, so we can’t define an equivalence on pi calculus terms by comparing normal forms. Instead, we say two terms are equivalent if they behave the same in all contexts, *i.e.* they’re *bisimilar*. Typing becomes rather more complicated: pi calculus types describe the structure of the current term, and rewrites should do something weaker than preserve types on the nose. This suggests using a double category, but that’s for next time.

Seely suggested modeling rewrites with 2-morphisms in a 2-category and showed how beta and eta become lax adjoints. We’re suggesting a different, more operational way of modeling rewrites with 2-morphisms. Define a 2-category (sketched in Haskell after the list) with

- an object of variables and an object of terms as generating objects,
- the term constructors (lambda abstraction and application) as generating 1-morphisms,
- alpha equivalence as an identity 2-morphism, and
- beta reduction as a generating 2-morphism.
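Here’s a toy Haskell shadow of those 2-morphisms (my encoding, reusing `Term`, `shift`, and `subst` from the sketch above): beta is the only generating 2-cell, alpha needs no cell because de Bruijn terms are already alpha-normal, and the whiskering constructors let a cell act inside a larger term.

```haskell
-- A toy encoding, not the post's: 2-morphisms as formal reductions.
data TwoCell
  = Beta Term Term        -- generating cell:  (\ . b) a  ==>  b[0 := a]
  | Vert TwoCell TwoCell  -- vertical composite (well-formed when tgt of the
                          -- first cell equals src of the second)
  | UnderLam TwoCell      -- whisker under a lambda
  | InFun TwoCell Term    -- whisker in the function position of an application
  | InArg Term TwoCell    -- whisker in the argument position

-- source and target terms (1-morphisms) of a 2-cell
src, tgt :: TwoCell -> Term
src (Beta b a)   = App (Lam b) a
src (Vert f _)   = src f
src (UnderLam f) = Lam (src f)
src (InFun f a)  = App (src f) a
src (InArg m f)  = App m (src f)

tgt (Beta b a)   = shift (-1) 0 (subst 0 (shift 1 0 a) b)
tgt (Vert _ g)   = tgt g
tgt (UnderLam f) = Lam (tgt f)
tgt (InFun f a)  = App (tgt f) a
tgt (InArg m f)  = App m (tgt f)
```

The `UnderLam` constructor is precisely the surplus: it manufactures reductions under a lambda, the 2-morphisms the construction below is designed to exclude.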

A *closed* term is one with no free variables, and the hom category consisting of closed lambda terms and the beta reductions between them is the typical category you’d get when looking for an operational semantics of lambda calculus.

This 2-category is a fine semantics for the variant of lambda calculus where beta can apply anywhere within a term, but it has too many 2-morphisms if you want to model lambda calculus without reduction under a lambda. Given closed terms $M$ and $M'$ such that $M$ beta reduces to $M'$, we can whisker beta by any 1-morphism $C$ (a *precontext*) to get a 2-morphism from $C[M]$ to $C[M']$, a rewrite buried arbitrarily deep in a larger term.
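For example (my notation): take the beta cell reducing the identity applied to the identity, and whisker it by the precontext $C = \lambda z.\,[-]$. The result is a reduction under a lambda, exactly the kind of 2-morphism we want to exclude:

$$\beta : (\lambda x.\, x)\,(\lambda y.\, y) \Rightarrow \lambda y.\, y \qquad\leadsto\qquad C[\beta] : \lambda z.\,(\lambda x.\, x)\,(\lambda y.\, y) \Rightarrow \lambda z.\,\lambda y.\, y.$$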

To cut down the extra 2-morphisms, we need to model reduction contexts. The difference between the precontexts above and the contexts we need is the notion of the “topmost” context, where we can see enough of the surrounding term to determine that reduction is possible. To model a reduction context, we make a new 2-category by introducing a new generating 1-morphism that marks the topmost context, and say that a 1-morphism that factors as a precontext followed by this marker is an *n-hole term context*. We also interpret beta as a 2-morphism between reduction contexts rather than between precontexts. The hom category of closed terms and the reductions between them is then the operational semantics of lambda calculus without reduction under a lambda.
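In code, the restriction is the difference between letting `betaRoot` fire anywhere and walking only the spine of the term. Here’s a sketch using one common reduction-context grammar, call by name ($E ::= [-] \mid E\,M$); the post’s 2-categorical contexts needn’t be exactly this grammar.

```haskell
-- A sketch, not the post's definition: one-step call-by-name reduction.
-- Beta fires at the top or in the function position of an application,
-- never inside a Lam, so there is no reduction under a lambda.
cbn :: Term -> Maybe Term
cbn (App (Lam b) a) = Just (shift (-1) 0 (subst 0 (shift 1 0 a) b))
cbn (App f a)       = fmap (\f' -> App f' a) (cbn f)
cbn _               = Nothing  -- variables and lambdas are stuck: they're values
```

Iterating `cbn` on a closed term computes its weak head normal form, matching the hom category of closed terms and reductions just described.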

In the next post, I’ll relate Seely’s 2-categorical approach and our 2-categorical approach by extending both to double categories and using Melliès and Zeilberger’s notion of type refinement.
