## Syntactic string diagrams

I hit on the idea of making lambda a node in a string diagram, where its inputs are an antivariable and a term in which the variable is free, and its output is the same term, but in which the variable is bound. This allows a string diagram notation for lambda calculus that is much closer to the syntactical description than the stuff in our Rosetta Stone paper. Doing it this way makes it easy to also do pi calculus and blue calculus.

There are two types, V (for variable) and T (for term). I’ve done untyped lambda calculus, but it’s straightforward to add subscripts to the types V and T to do typed lambda calculus.

There are six function symbols:

- (Lambda) This takes an antivariable and a term that may use the corresponding variable, and produces the term with that variable bound.
- (Variable) This turns an antivariable “x” introduced by lambda into the term “x”.
- (Application) This takes two terms and produces a term, their application.
- (Duplication and deletion) These two mean we can duplicate and delete terms (see the sketch after this list).
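To make the signature concrete, here is a minimal sketch in Haskell of the term syntax these nodes generate. The names (`Term`, `k`, and so on) are mine, not part of the diagrams; duplication and deletion show up implicitly as a bound variable being used several times or not at all.

```haskell
-- A sketch of the term syntax the diagram nodes build up.
data Term
  = Var String      -- an antivariable used as a term (the variable node)
  | App Term Term   -- the application node: takes two terms, gives a term
  | Lam String Term -- the lambda node: binds the antivariable in the body
  deriving (Show, Eq)

-- Example: \x. \y. x deletes y and uses x exactly once.
k :: Term
k = Lam "x" (Lam "y" (Var "x"))

-- Example: \x. x x duplicates its argument.
selfApply :: Term
selfApply = Lam "x" (App (Var "x") (Var "x"))
```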

The rewrite rule (beta reduction) is the real meat of the computation. The “P” is an arbitrary subdiagram. The effect of the rule is to replace the “A” application node with the “P” subdiagram, modulo some wiring.
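On the term side, the same step is ordinary beta reduction. Here is a minimal sketch using the `Term` type above; the helper names (`subst`, `betaStep`) are my own, and this is plain capture-avoiding substitution rather than the sharing-style rewiring the diagram performs.

```haskell
import Data.List (delete, union)

-- Free variables of a term.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (App t u) = freeVars t `union` freeVars u
freeVars (Lam x t) = delete x (freeVars t)

-- A name not occurring in the given list, made by priming.
fresh :: String -> [String] -> String
fresh x used = head [ y | y <- iterate (++ "'") x, y `notElem` used ]

-- Capture-avoiding substitution: subst x s t is t[s/x].
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | y == x    = s
  | otherwise = Var y
subst x s (App t u) = App (subst x s t) (subst x s u)
subst x s (Lam y t)
  | y == x              = Lam y t                  -- x is shadowed, stop here
  | y `elem` freeVars s =
      let y' = fresh y (x : (freeVars s `union` freeVars t))
      in Lam y' (subst x s (subst y (Var y') t))   -- rename to avoid capture
  | otherwise           = Lam y (subst x s t)

-- One step of beta reduction at the root, if the term is a redex.
betaStep :: Term -> Maybe Term
betaStep (App (Lam x body) arg) = Just (subst x arg body)
betaStep _                      = Nothing
```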

I label the upwards arrows out of lambdas with a variable name in parentheses; this is just to assist in matching up the syntactical representation with the string diagram.

In the example, I surround part of the diagram with a dashed line; this is the part to which the rule applies. Within that, I surround part with a dash-dot line; this is the subdiagram P in the rule.

When I do blue calculus this way, there are a few more function symbols and the relations aren’t confluent, but the flavor is very much the same.

reperiendi said, on 2009 January 27 at 12:44 pm: Peter Selinger was kind enough to send this response:

Hi Mike,

Your string diagrams are known as “sharing graphs” in the literature, and are extremely well-studied. See e.g.

[1] Stefano Guerrini, A general theory of sharing graphs (1997)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.52.530

There is a reduction strategy on sharing graphs that is provably the most efficient possible reduction strategy for lambda calculus; this goes back to the work of Levy and Lamping (both cited in the above paper).

Guerrini also gives an algebraic semantics of these sharing graphs, see section 7. I don’t know whether it is related to your categorical view.

I also like the book by Peyton Jones:

[2] Simon Peyton Jones, The Implementation of Functional Programming Languages (1987)
http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/

It shows how to use sharing graphs as the basis for a practical implementation of lazy programming languages. As far as I know, this is still state-of-the-art and is used in the implementation of Haskell. In keeping with the practical nature of the book, the sharing graphs are represented in slightly different form (with syntactic variables rather than backpointers), but this is of course equivalent.

As for the categorical semantics, what you have in mind is a kind of abstract syntax with variable binding. To put this into perspective, the semantics of ordinary abstract syntax (i.e., without variable binding) is given by an object A in a cartesian category, together with interpretations for each n-ary function symbol f : A^n -> A. One can then inductively define the interpretation of terms, speak of the free such object, etc.
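(To see the shape of this concretely, here is a tiny Haskell sketch of my own for a signature with one constant and one binary symbol: terms are interpreted by structural recursion in any carrier that supplies the matching operations. None of the names come from Peter’s message.)

```haskell
-- A toy signature: one 0-ary symbol and one 2-ary symbol.  Interpreting it
-- in a carrier requires one element and one binary map carrier^2 -> carrier.
data SimpleTerm
  = Leaf
  | Node SimpleTerm SimpleTerm

interpret :: a -> (a -> a -> a) -> SimpleTerm -> a
interpret leaf node = go
  where
    go Leaf       = leaf
    go (Node t u) = node (go t) (go u)

-- Example: interpreting in Int with Leaf as 1 and Node as (+) counts leaves.
leafCount :: SimpleTerm -> Int
leafCount = interpret 1 (+)
```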

Things get slightly more complicated if one adds variable binding to this picture. This has also been studied, though perhaps not in the same form as you are proposing. Perhaps the closest to your approach is

[3] Martin Hofmann, Semantical analysis of higher-order abstract syntax (1999)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.4082

Here one has var : V -> T, app : T x T -> T, and lam : (V => T) -> T (see top of p.9, with the obvious changes in notation). You will note the use of a function space (V => T) in place of your cartesian product V^* x T.
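(For comparison, this higher-order style is easy to write down in Haskell, where the binder takes a meta-level function. The sketch below is my own; it uses T => T rather than V => T, the usual shortcut in Haskell encodings, and the names are not from [3].)

```haskell
-- Higher-order abstract syntax: the object-level binder is a meta-level
-- function, so beta reduction needs no substitution machinery at all.
data HTerm
  = HApp HTerm HTerm
  | HLam (HTerm -> HTerm)

hBeta :: HTerm -> Maybe HTerm
hBeta (HApp (HLam f) arg) = Just (f arg)
hBeta _                   = Nothing

-- Example: (\x. x x) applied to the identity.
example :: Maybe HTerm
example = hBeta (HApp (HLam (\x -> HApp x x)) (HLam id))
```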

Another very similar paper is

[4] M. P. Fiore, G. D. Plotkin and D. Turi, Abstract syntax and variable binding (1999)
http://citeseerx.ist.psu.edu/showciting?cid=198434
http://www.cl.cam.ac.uk/~mpf23/papers/Types/Types.html

Here, one has var : V -> T, app : T x T -> T, and lam : delta T -> T (as contained e.g. in the commutative diagram on p.6). Again, delta T is something akin to the function space V => T, but is also isomorphic, in a suitable sense, to T => T, as far as I remember (this is important for substitution, see below).
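(The delta construction also has a well-known rendering as a nested datatype in Haskell, which I sketch here with my own names: going under a binder extends the supply of free variables by one.)

```haskell
-- NTerm v: a term whose free variables are drawn from v.  The binder NLam
-- extends v by one extra variable (Nothing) for the bound position, which is
-- the "delta" of the paper in miniature.
data NTerm v
  = NVar v
  | NApp (NTerm v) (NTerm v)
  | NLam (NTerm (Maybe v))

-- Example: the identity \x. x, where the bound variable is the new one.
identityTerm :: NTerm v
identityTerm = NLam (NVar Nothing)

-- Example: \x. \y. x refers to the outer binder's variable.
constTerm :: NTerm v
constTerm = NLam (NLam (NVar (Just Nothing)))
```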

A third, technically slightly different (but conceptually similar) approach to abstract syntax with variable binders is:

[5] Murdoch J. Gabbay, Andrew M. Pitts: A new approach to abstract syntax with variable binding (1999)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.9383

[6] Murdoch J. Gabbay, Andrew M. Pitts: A new approach to abstract syntax with variable binding (2002)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.62.9845

See especially the second-to-last line of [6, p.356], and you immediately see the similarity. Their [A]T operation is akin to V => T in [3] and delta T in [4].
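(The nominal picture can be sketched as well: name abstraction is a (name, term) pair compared up to swapping of names. The encoding below is my own illustration, not the construction of [5, 6].)

```haskell
type Name = String

-- First-order terms with explicit bound names.
data GTerm = GVar Name | GApp GTerm GTerm | GLam Name GTerm
  deriving Show

-- Every name occurring in a term, free or bound.
names :: GTerm -> [Name]
names (GVar x)   = [x]
names (GApp t u) = names t ++ names u
names (GLam x t) = x : names t

-- Swap two names everywhere, including binding positions: the basic
-- operation of the nominal approach.
swapName :: Name -> Name -> GTerm -> GTerm
swapName a b = go
  where
    sw x | x == a    = b
         | x == b    = a
         | otherwise = x
    go (GVar x)   = GVar (sw x)
    go (GApp t u) = GApp (go t) (go u)
    go (GLam x t) = GLam (sw x) (go t)

-- Two abstractions are alpha-equivalent when swapping their bound names to a
-- common fresh name makes the bodies alpha-equivalent.
alphaEq :: GTerm -> GTerm -> Bool
alphaEq (GVar x)     (GVar y)     = x == y
alphaEq (GApp t1 u1) (GApp t2 u2) = alphaEq t1 t2 && alphaEq u1 u2
alphaEq (GLam a t)   (GLam b u)   =
  let avoid = a : b : names t ++ names u
      c     = head [ n | i <- [0 :: Int ..], let n = "v" ++ show i
                       , n `notElem` avoid ]
  in alphaEq (swapName a c t) (swapName b c u)
alphaEq _ _ = False
```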

It is fun to note that [3-5] all appeared (independently) in the same conference in 1999. Semantics of variable binders was clearly a pressing problem that year.

I can comment briefly on the main difference between these works and what you are proposing. One thing that is important in syntax is substitution (of a term for a variable). In fact, in the usual abstract syntax (without binders), a term with n variables is represented as a morphism A^n -> A. This is useful for substitution: namely, if f : A x A^n -> A represents the term t(x,y_1,…,y_n) and g : A^n -> A represents s(y_1,…,y_n), then f o <g, id> represents t[s/x].
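(That composition is easy to see in a toy model where the carrier is a plain type: interpret t and s as functions, pair g with the identity, and post-compose with f. The miniature below, with made-up terms, is my own.)

```haskell
-- f interprets t(x, y) and g interprets s(y); composing f with the pairing
-- <g, id> interprets the substituted term t[s/x](y).
f :: (Int, Int) -> Int   -- say t(x, y) = x + 2*y
f (x, y) = x + 2 * y

g :: Int -> Int          -- say s(y) = y * y
g y = y * y

tSubstituted :: Int -> Int   -- t[s/x](y) = y*y + 2*y
tSubstituted y = f (g y, y)
```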

In the presence of variables, a term with n variables is represented as 1 -> V*^n x T (in your notation). However, for reasons of substitution, one would still like this hom-set to be in 1-1 correspondence with T^n -> T. Somehow this is what the papers [3-6] manage to do, each in their own way.
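(In the nested-datatype sketch from earlier, that correspondence shows up as substitution of terms for variables being exactly the monadic bind of NTerm; this gloss and the instances below are my own.)

```haskell
import Control.Monad (ap)

instance Functor NTerm where
  fmap f (NVar v)   = NVar (f v)
  fmap f (NApp t u) = NApp (fmap f t) (fmap f u)
  fmap f (NLam t)   = NLam (fmap (fmap f) t)

instance Applicative NTerm where
  pure  = NVar
  (<*>) = ap

-- Bind is simultaneous substitution: every free variable is replaced by a
-- term, and under a binder the substitution is weakened so it leaves the
-- newly bound variable alone.
instance Monad NTerm where
  NVar v   >>= s = s v
  NApp t u >>= s = NApp (t >>= s) (u >>= s)
  NLam t   >>= s = NLam (t >>= weaken)
    where
      weaken Nothing  = NVar Nothing
      weaken (Just v) = fmap Just (s v)

-- Example: substituting the identity for the one free variable of "x x"
-- gives (\y. y) (\y. y).
substituted :: NTerm v
substituted = NApp (NVar ()) (NVar ()) >>= \() -> identityTerm
```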

I hope these references will be useful. It would be great if you had a more abstract monoidal framework of which the particular constructions in [3-6] are concrete examples. I have always wondered about the precise connection between [3-6], and whether there is a bigger picture.

Good luck, and let me know how it goes! — Peter