# reperiendi

## The best of the worst

Posted in Fun links by Mike Stay on 2008 August 13

## Blue cell

Posted in Programming by Mike Stay on 2008 August 11

It’s not immediately obvious how the blue calculus lets you do object references, so here’s a worked example.

I’m going to define a process, called Reference, that takes a new variable and turns it into an object reference. Because it’s easier to type, I’m going to use \ for lambda and v for nu, and when I apply a term to something other than a variable, you’re supposed to do the embedding from lambda calculus into blue calculus in your head. I’m leaving it out because the embedding behaves the same as lambda calculus and things are confusing enough as it is.

Reference = \ref.vc.( c | c = \x.(ref <= Pair c (x | cx)) )

Notice the Pair combinator (Pair = \xyz.zxy); it’s being applied to two processes that function like write and read, respectively, and is waiting for a third; if the third input is K, we get Kxy = x, and if it’s KI, we get KIxy = Iy = y.
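Pair, K, and KI are ordinary Church encodings, so the selector behavior is easy to sanity-check outside the calculus. A quick check in Python (illustrative only, not blue calculus):

```python
# Church encodings from the text: Pair = \x.\y.\z.zxy, K = \x.\y.x, I = \x.x
I = lambda x: x
K = lambda x: lambda y: x
KI = K(I)  # KI x y = I y = y, so KI selects the second component
Pair = lambda x: lambda y: lambda z: z(x)(y)

p = Pair("write")("read")
print(p(K))   # first component: write
print(p(KI))  # second component: read
```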

You create an object by feeding it a variable for the reference and an initial state:

Reference ref u = vc.(cu | c = \x.(ref <= Pair c (x | cx)))

Now if you have a reference with some state in parallel with a writer process, it updates the state:

(Reference ref u | ref K w)
= ( vc.(cu | c = \x.(ref <= Pair c (x | cx))) | ref K w )
= vc.(cu | c = \x.(ref <= Pair c (x | cx)) | ref K w ) [by scope migration]
= vc.(\x.(ref <= Pair c (x | cx)) u | c = \x.(ref <= Pair c (x | cx)) | ref K w ) [by resource fetching on c]
= vc.(ref <= Pair c (u | cu) | c = \x.(ref <= Pair c (x | cx)) | ref K w ) [by small beta reduction on u]
= vc.(c = \x.(ref <= Pair c (x | cx)) | Pair c (u | cu) K w ) [by resource fetching on ref]
= vc.(c = \x.(ref <= Pair c (x | cx)) | K c (u | cu) w ) [by definition of Pair = \xyz.zxy]
= vc.(c = \x.(ref <= Pair c (x | cx)) | cw ) [by definition of K = \xy.x]
= vc.(cw | c = \x.(ref <= Pair c (x | cx))) [by symmetry of |]
= Reference ref w

Similarly, if you put it in parallel with a reader process,

(Reference ref u | ref KI w x y … z )
= vc.(c = \x.(ref <= Pair c (x | cx)) | K I c (u | cu) w x y … z) [by the same as above]
= vc.(c = \x.(ref <= Pair c (x | cx)) | I (u | cu) w x y … z) [by the definition of K]
= vc.(c = \x.(ref <= Pair c (x | cx)) | (u | cu) w x y … z) [by the definition of I]
= vc.(c = \x.(ref <= Pair c (x | cx)) | uwxy…z | cuwxy…z) [by distributivity]
= vc.(c = \x.(ref <= Pair c (x | cx)) | uwxy…z | \x.(ref <= Pair c (x | cx)) uwxy…z) [by resource fetching on c]
= vc.(c = \x.(ref <= Pair c (x | cx)) | uwxy…z | (ref <= Pair c (u | cu)) wxy…z) [by small beta reduction on u]
= vc.(c = \x.(ref <= Pair c (x | cx)) | uwxy…z | (ref <= Pair c (u | cu))) [by small beta reduction on w, x, y, …, z]
= vc.(c = \x.(ref <= Pair c (x | cx)) | uwxy…z | \x.(ref <= Pair c (x | cx)) u) [by small beta reduction, run in reverse]
= vc.(c = \x.(ref <= Pair c (x | cx)) | uwxy…z | cu) [by resource fetching on c, run in reverse]
= vc.(cu | c = \x.(ref <= Pair c (x | cx)) | uwxy…z) [by symmetry of |]
= (Reference ref u | uwxy…z)
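The two derivations together say that Reference behaves like a mutable cell: a K-message overwrites the state and a KI-message reads it, leaving the cell intact. As a loose analogy only (plain Python closures with names of my own choosing, not blue calculus terms):

```python
K = lambda x: lambda y: x   # selects the write action
KI = lambda x: lambda y: y  # selects the current state

def reference(state):
    """Rough analogue of `Reference ref u`: a cell that dispatches on a
    selector, the way `Pair c (u | cu)` waits for a K or KI message."""
    return lambda sel: sel(lambda w: reference(w))(state)

cell = reference("u")
cell = cell(K)("w")   # writer: Reference ref u | ref K w  ~>  Reference ref w
print(cell(KI))       # reader sees the new state: w
print(cell(KI))       # and the cell survives the read: w
```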

## Blue calculus

Posted in Programming by Mike Stay on 2008 August 3

The lambda calculus is a Turing-complete language, but doesn’t handle concurrency very well; in particular, there’s no concept of a reference as opposed to a value.  The lambda calculus has three productions:

$P := x \;|\; (P P) \;|\; \lambda x.P$

The first production, $x$, is a variable; the second is an application of one term to another; and the third is abstraction.

The beta-reduction rule says

$((\lambda x.P)\; P')$

$\Downarrow$

$P[P'/x]$, provided no free variable of $P'$ is captured by a binder in $P$.

The part in brackets reads “with $P'$ substituted for $x$.”  There are several evaluation strategies one can use with the lambda calculus; one strategy that always reaches a term’s normal form, if it has one, is normal-order reduction, often called “lazy evaluation.”  With this strategy, you only reduce a term when its value is actually needed.
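A thunk-based sketch in Python (my own illustration) of why the lazy strategy can find normal forms that eager evaluation misses: if an argument is never demanded, it is never reduced.

```python
def omega():
    """A thunk that loops forever if it is ever forced, standing in
    for the divergent term (\\x.xx)(\\x.xx)."""
    return omega()

const = lambda x: "done"  # like \x.P where P never uses x

# Eager evaluation would reduce the argument first and diverge;
# passing the unforced thunk mimics lazy evaluation and terminates.
print(const(omega))  # done
```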

The blue calculus builds this strategy into the language.  Here is an application term in the blue calculus:

$\nu x.((P\; x) \;|\; (x \Leftarrow P'))$

Blue calculus splits application into four parts, or “processes”:

1. a “small” application

$(P\; x)$

in which a term $P$ may only be applied to a variable $x$;

2. a resource assignment

$x \Leftarrow P$

in which a variable $x$ is associated to a one-time use of the term $P$;

3. a new variable declaration

$\nu x.P$

that introduces a new bound variable into scope; and

4. a symmetric monoidal product of terms, thought of as “parallel processes”

$P \;|\; P$

The monoidal unit here is the “do-nothing” term $(\nu x.x)$.
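One way to transcribe the four process constructs (plus variables) into code is as plain data; the constructor names here are my own:

```python
from dataclasses import dataclass

@dataclass
class Var:        # x
    name: str

@dataclass
class App:        # (P x): a term applied only to a variable
    fun: object
    arg: str

@dataclass
class Resource:   # x <= P: a one-time resource assignment
    var: str
    term: object

@dataclass
class Nu:         # vx.P: new variable declaration
    var: str
    body: object

@dataclass
class Par:        # P | Q: parallel composition
    left: object
    right: object

# The monoidal unit, the do-nothing term vx.x:
unit = Nu("x", Var("x"))
print(unit)
```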

Blue calculus also splits beta reduction into two parts:

1. a “small” beta reduction

$((\lambda x.P)\; x')$

$\Downarrow$

$P[x'/x]$, provided $x'$ does not get captured by a binder in $P$;

2. and a resource fetching rule that gets the value of a variable only when it’s needed

$(\ldots ((x\;y_0)\; y_1)\; \ldots \;y_{n-1}) \;|\; (x \Leftarrow P)$

$\Downarrow$

$(\ldots ((P\; y_0)\; y_1)\; \ldots \; y_{n-1})$.
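A toy model of resource fetching, treating the environment as a pool of one-shot assignments (my own simplification, not the paper’s formal semantics):

```python
def fetch(head, args, pool):
    """Reduce (x y0 ... yn-1) | (x <= P) to (P y0 ... yn-1):
    consume one assignment for `head` from the pool, if any."""
    if not pool.get(head):
        return None            # no resource: the application is stuck
    P = pool[head].pop(0)      # the assignment x <= P is used up
    return (P, args)

pool = {"x": ["P"]}
print(fetch("x", ["y0", "y1"], pool))  # ('P', ['y0', 'y1'])
print(fetch("x", ["y0"], pool))        # None: the one-shot resource is gone
```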

By splitting things up this way, the blue calculus allows patterns like nondeterministic choice.  The blue calculus term

$(x\; y) \;|\; (x \Leftarrow P) \;|\; (x \Leftarrow Q)$

has two valid reductions,

$(P\; y) \;|\; (x \Leftarrow Q)$

or

$(Q\; y) \;|\; (x \Leftarrow P)$.
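A small simulation of that choice, in my own pool model: the two assignments sit in a pool, and a fetch picks one of them at random.

```python
import random

def fetch_any(pool):
    """(x y) | (x <= P) | (x <= Q): either assignment may answer the fetch."""
    i = random.randrange(len(pool))
    return pool[i], pool[:i] + pool[i + 1:]  # chosen term, leftover resource

outcomes = {fetch_any(["P", "Q"])[0] for _ in range(50)}
print(outcomes)  # over many runs, both P and Q typically show up
```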

It also allows for resource contention; here, the resource assignment acts as a mutex:

$(x\; y) \;|\; (x\; z) \;|\; (x \Leftarrow P)$

$\Downarrow$

$(P\; y) \;|\; (x\; z)$ or $(x\; y) \;|\; (P\; z)$.
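The same point in code form (again my own pool model): two consumers race for one assignment, and whichever fetches first leaves the other blocked.

```python
# (x y) | (x z) | (x <= P): one resource, two waiting applications.
pool = {"x": ["P"]}
waiting = [("x", "y"), ("x", "z")]

after = []
for head, arg in waiting:            # scan in one (arbitrary) order
    if pool.get(head):
        after.append((pool[head].pop(0), arg))  # acquired the "mutex"
    else:
        after.append((head, arg))    # still blocked on x
print(after)  # [('P', 'y'), ('x', 'z')] -- the other order is equally valid
```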

The application term $\nu k.(((\lambda x.P)\; k) \;|\; (k \Leftarrow P'))$ is a linear term. If the term $P$ uses $x$ more than once, then only the first usage will be able to fetch the resource: a resource assignment is consumed by the resource-fetching rule.  However, the blue calculus also supports resource replication:

$x = P$

$\Updownarrow$

$(x \Leftarrow P) \;|\; (x = P)$
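In a pool picture (my own simplification), the difference between $\Leftarrow$ and $=$ is just whether a fetch consumes the entry:

```python
one_shot = {"x": ["P"]}   # x <= P: consumed by the first fetch
replicated = {"x": "P"}   # x = P: unfolds to (x <= P) | (x = P) on demand

def fetch_once(pool, var):
    return pool[var].pop(0) if pool.get(var) else None

def fetch_replicated(decls, var):
    return decls.get(var)  # the declaration itself is never used up

print(fetch_once(one_shot, "x"), fetch_once(one_shot, "x"))  # P None
print(fetch_replicated(replicated, "x"),
      fetch_replicated(replicated, "x"))                     # P P
```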

This allows the blue calculus to subsume both the lambda calculus and the pi calculus in a nice way. Blue terms are formed as follows:

$P := x \;|\; (P\; x) \;|\; \lambda x.P \;|\; \nu x.P \;|\; (x \Leftarrow P) \;|\; (x = P) \;|\; (P|P)$

Lambda terms embed like this:

$\begin{array}{rcl} [x] & \Rightarrow & x \\ ~[\lambda x.M] & \Rightarrow & \lambda x.[M] \\ ~[MN] &\Rightarrow & \nu u.([M]u \;|\; (u = [N])) \quad \mbox{where } u \mbox{ does not appear in } M \mbox{ or } N\end{array}$
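The embedding is mechanical enough to write down as a string-level sketch; the tuple encoding of lambda terms and the fresh-name scheme here are my own:

```python
import itertools

fresh = (f"u{i}" for i in itertools.count())

def embed(term):
    """[x] = x;  [\\x.M] = \\x.[M];  [M N] = vu.([M] u | (u = [N])),
    with u fresh.  Terms: ('var', x), ('lam', x, M), ('app', M, N)."""
    tag = term[0]
    if tag == "var":
        return term[1]
    if tag == "lam":
        return f"\\{term[1]}.{embed(term[2])}"
    u = next(fresh)  # u appears in neither M nor N
    return f"v{u}.({embed(term[1])} {u} | ({u} = {embed(term[2])}))"

# (\x.x) y becomes vu0.(\x.x u0 | (u0 = y))
result = embed(("app", ("lam", "x", ("var", "x")), ("var", "y")))
print(result)
```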

Pi terms embed like this:

$\begin{array}{rcl} [\bar{u}x_1\ldots x_n] & \Rightarrow & ux_1\ldots x_n \\ ~[ux_1\ldots x_n.P] & \Rightarrow & (u \Leftarrow \lambda x_1\ldots x_n.[P]) \\ ~[(P \;|\; Q)] & \Rightarrow & ([P] \;|\; [Q]) \\ ~[\nu u.P] & \Rightarrow & \nu u.[P] \end{array}$
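And the pi side, in the same string-level style (the tuple encoding is mine); note how outputs become applications and inputs become resources:

```python
def embed_pi(term):
    """[u<x1...xn>] = u x1 ... xn;  [u(x1...xn).P] = (u <= \\x1...xn.[P]);
    [P | Q] = ([P] | [Q]);  [vu.P] = vu.[P].
    Terms: ('out', u, xs), ('in', u, xs, P), ('par', P, Q), ('nu', u, P)."""
    tag = term[0]
    if tag == "out":
        return " ".join([term[1]] + term[2])
    if tag == "in":
        return f"({term[1]} <= \\{' '.join(term[2])}.{embed_pi(term[3])})"
    if tag == "par":
        return f"({embed_pi(term[1])} | {embed_pi(term[2])})"
    return f"v{term[1]}.{embed_pi(term[2])}"

# u<x> | u(y).y<>  becomes  (u x | (u <= \y.y))
print(embed_pi(("par", ("out", "u", ["x"]),
                       ("in", "u", ["y"], ("out", "y", [])))))
```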

There are reduction rules for scope migration, associativity, distributivity, replication, and so forth; see the paper for more details.

The upshot of all this is that blue calculus variables are really references, and so one can build the whole theory of concurrent objects on top of it.