Analysis says there are four different kinds of sepsis that need to be treated differently

Posted in Uncategorized by Mike Stay on 2019 May 21

Using a combination of statistical, machine learning, and simulation tools, the researchers combed through data relating to 20,000 past hospital patients who had developed sepsis within six hours of admission.

In particular, they looked for clusters of symptoms and patterns of organ dysfunction, then correlated them against biomarkers and mortality.

The findings revealed not one discernible type of sepsis, but four, which the researchers label alpha, beta, gamma and delta.

Alpha sepsis was the most common, affecting 33% of patients, and carried the lowest fatality rate, about 2%. At the other end of the scale, delta sepsis occurred in only 13% of patients, but had an in-hospital death rate of 32%.

To further test their discovery, Seymour and colleagues ran the same analysis on records for a further 43,000 patients, all of whom had been treated for sepsis between 2012 and 2014. The finding held.


Pluto may have subsurface ocean insulated by clathrates

Posted in Uncategorized by Mike Stay on 2019 May 21

Popular account: “And if Pluto has such materials, it stands to reason that other outer solar system bodies also have them, including not only the moons of Jupiter and Saturn, but possibly other large Kuiper Belt objects, even farther out from the sun than Pluto.”

Nature article:

Geordi LaForge-style visor in clinical trials

Posted in Uncategorized by Mike Stay on 2019 May 21

Latest claim: Voynich manuscript was written in proto-Romance

Posted in Uncategorized by Mike Stay on 2019 May 15

Prop rental for old laboratories

Posted in Uncategorized by Mike Stay on 2019 May 14

Medical and Scientific Props

Ghostly transparent eel

Posted in Uncategorized by Mike Stay on 2019 May 3

This is a leptocephalus, the transparent larval form of an eel, filmed at Seraya in Bali:

More strange fish larvae.

Each one had six wings; with twain he covered his face, and with twain he covered his feet, and with twain he did fly.

Posted in Uncategorized by Mike Stay on 2019 April 28

Winged insects have two pairs of wings, sprouting from the second and third segments of the thorax.  Even insects that appear to have only one pair, like flies and beetles, really have two: in flies, the second pair diminished into small balancing structures called halteres, while in beetles, the first pair became the hard elytra. In true bugs, the first pair only went halfway to elytra and are called hemelytra.

But!  A 2011 paper in Nature gives morphological and genetic evidence that the helmet of treehoppers is derived from a third pair of wing buds on the first thoracic segment.  Hox genes, which control the repetition of certain body parts, have suppressed wing expression on every segment but the second and third for 300 million years.  They are very strongly conserved, since they govern the gross body plan of a creature; in humans, for instance, they control how many vertebrae we have.  The authors suspected that the treehopper hox gene had lost the ability to suppress wing production on the first segment, so they inserted it into Drosophila flies to see whether wing primordia formed, but they didn’t.  So somehow, despite the hox genes being preserved, treehoppers found a way around the inhibition of wing production and were able to evolve their fantastic helmets.


More coverage here.


TIL: Till is not a shortening of until

Posted in Uncategorized by Mike Stay on 2019 April 26

‘Til is a corruption of till, coined under the mistaken assumption that till is a shortening of until; in fact, till is the older word.  Both till and until mean the same thing.

Journey to the black hole at the center of the galaxy

Posted in Uncategorized by Mike Stay on 2019 April 24

Here’s some context for the recent black hole image.

Shock diamonds and aerospikes

Posted in Uncategorized by Mike Stay on 2019 April 23

I was curious what caused the oscillations in rocket exhaust.  I learned they’re called “shock diamonds”, among other names, and are due to a mismatch between the lateral pressure of the exhaust and the atmospheric pressure.  At low altitudes / high ambient pressure, the exhaust is immediately pinched, bounces off itself as its pressure rises, then gets pinched again as it re-expands.  At high altitudes / low ambient pressure, the exhaust expands first, then gets pinched at the lower pressure.  A bell-style rocket nozzle reaches its maximum efficiency at only one specific altitude; this is one reason for multistage rockets.


There’s a rocket design called an aerospike that effectively uses air for half the bell and then controls the flow to match the air pressure.  It’s more efficient, but has had bad luck as far as deployment goes.  Since fuel is about the cheapest part of a rocket, new space companies have opted for known, proven designs instead.  This is a good video about the history and development of the aerospike.

While looking for the explanation, I came across this wonderful paper that examines what happens when two laminar flows collide.  Depending on the viscosity, surface tension, and velocity of the streams, you get several different “phases” of interaction.


A modest proposal for overloading arithmetic operators in JavaScript

Posted in Uncategorized by Mike Stay on 2019 April 19

Bignum usage:

var x, y;
with (BN) {
  y = L5;
  x = BN()(L282376418273964982736149872 * L29618763948127639 - y);
}
console.log('' + x);
// 8.363640477374324661331988841549874089512203e42

Complex usage:

console.log(Complex()({r:2, i:0} / {r:1, i:1} + {r:-3, i:2}))
// {r:-2, i:1}


Note that JavaScript does not support overloading operators the way C++ does, yet here they work fine.  Note also that it would be easy to write something like eval(translateToJS(myDSLstring)), but here I’m using raw JavaScript arithmetic operators on objects, something that usually just gets you concatenations of “NaN” and “[object Object]”.

The setup for operator overloading enumerates all expressions involving the binary operations +, -, *, / with up to five variables indexed from left to right.  It calls Math.random() five times and stores the results in a list R.  It substitutes the nth random number for the nth variable in each expression, then stores the expression in a table, keyed by the result of evaluating it.  Real numbers chosen uniformly at random in [0,1) are transcendental with probability 1 and therefore satisfy no algebraic equation; while we don’t have true transcendentals, the number of bits in an IEEE 754 double easily suffices to distinguish the set of expressions we want to handle.
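The enumerate-and-look-up idea can be sketched in a few lines (a toy version with two variables and one binary operation per expression; the names ops, R, and table are mine, not from the library):

```javascript
// Toy version of the expression table: enumerate expressions, evaluate each
// on random numbers, and key the table by the numeric result.
const ops = {
  'a+b': (a, b) => a + b,
  'a-b': (a, b) => a - b,
  'a*b': (a, b) => a * b,
  'a/b': (a, b) => a / b,
};
const R = [Math.random(), Math.random()]; // stand-ins for the variables a, b
const table = new Map();
for (const [name, f] of Object.entries(ops)) {
  table.set(f(R[0], R[1]), name); // key: the expression's value on R
}
// Later, when coerced arithmetic on the stand-ins produces some number,
// looking it up recovers which expression the user wrote:
console.log(table.get(R[0] * R[1])); // 'a*b'
```

Two doubles drawn from Math.random() make these four expressions collide only with vanishing probability, which is the point of the transcendental-numbers argument above.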

BN is a Proxy object with a get handler that returns, for any property beginning with ‘L’, a Bignum object from your favorite bignum library encoding the rest of the property name as its value.  BN temporarily replaces Bignum.prototype.valueOf so that it pushes “this” onto a list V and returns the corresponding random number at the same index from the list R.  The function BN() looks up the appropriate expression in the table, gets the actual values from V and returns the evaluation of the expression with the variables bound to the values, emptying the list V and replacing the original valueOf before returning.
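The identifier-parsing half of that can be sketched with a Proxy (an assumption-laden toy: native BigInt stands in for the bignum library, and the valueOf bookkeeping is omitted):

```javascript
// A get handler that parses property names like L5 or L29618763948127639
// into numeric values.  The real BN also needs a `has` trap so that
// `with (BN) { ... }` routes bare identifiers through this handler.
const BN = new Proxy({}, {
  get(_, name) {
    if (typeof name === 'string' && name[0] === 'L') {
      return { value: BigInt(name.slice(1)) }; // wrap the parsed value
    }
    return undefined;
  }
});

console.log(BN.L29618763948127639.value); // 29618763948127639n
```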

Complex() replaces the Object.prototype.valueOf with one that does the right thing with an object like {r:-5, i:-3}.  It’s also straightforward to implement automatic differentiation in a similar way.
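Here’s a minimal sketch of the valueOf trick for the complex case, cut down to a single + per expression (the names C, stack, and cplxAdd are mine; the real Complex() handles arbitrary expressions via the lookup table):

```javascript
// Each operand's valueOf pushes the real object onto a stack and returns a
// dummy number; the wrapper ignores the numeric result of the coerced
// arithmetic and redoes the operation on the captured objects.
const stack = [];

function C(r, i) {
  return {
    r, i,
    valueOf() { stack.push(this); return 0; } // capture operand
  };
}

function cplxAdd(_ignoredNumericResult) {
  const [a, b] = stack.splice(0, stack.length); // empty the stack
  return C(a.r + b.r, a.i + b.i);
}

const z = cplxAdd(C(1, 2) + C(3, 4)); // the + coerces via valueOf
console.log(z.r, z.i); // 4 6
```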

We don’t necessarily need to stick to mere objects; we can even do things like monadic bind by replacing Function.prototype.valueOf.
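A sketch of that last idea, in the single-operator style Bilby used (assumptions are mine: one >= per expression, and the array “monad”, where bind is flatMap; bindAll and fns are invented names):

```javascript
// Relational >= coerces both functions via valueOf, which we replace so it
// captures the functions themselves; we then chain them with flatMap.
const fns = [];
const origValueOf = Function.prototype.valueOf;
Function.prototype.valueOf = function () { fns.push(this); return 0; };

function bindAll(_ignoredBoolean) {
  const captured = fns.splice(0, fns.length);
  Function.prototype.valueOf = origValueOf; // restore, as BN() does
  return xs => captured.reduce((acc, f) => acc.flatMap(f), xs);
}

const chained = bindAll((x => [x, -x]) >= (x => [x * x]));
console.log(chained([2])); // [ 4, 4 ]
```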

Current developers are sadly undereducated in the fantastic tools that JavaScript provides for enriching the lives of future developers tasked with maintaining their code.  I encourage everyone to make broader use of them.

The interesting code starts at line 956.
Start at line 1044 to see how to define your own overloadings.

The “valueOf pushes objects into a list” trick for operator overloading came from Brian McKenna’s Bilby functional library, but Bilby only allowed a single kind of operator in an expression: when applied to two functions, >= returns true, > returns 0, * returns NaN, and + returns a string (the concatenation of their source code). I came up with the “enumerate expressions and store them under the result of evaluating them on transcendental numbers” trick to allow multiple operators, as well as the “proxy handler to parse the identifier and construct an object” trick.

The modern Sisyphus

Posted in Uncategorized by Mike Stay on 2019 April 3

Lightfield 3d photos from an inkjet!

Posted in Uncategorized by Mike Stay on 2019 April 3

Lumii uses multilayer Moire patterns to simulate lightfields.  You print out the layers on a regular transparency from an inkjet printer!

Amazing water spout photo

Posted in Uncategorized by Mike Stay on 2019 April 3

(Found on Reddit)

Fantastic fossil find from dinosaur killer asteroid

Posted in Uncategorized by Mike Stay on 2019 April 3

Seismic waves from the Chicxulub impact set up enormous standing waves in a large inland sea in North Dakota.  The waves hit the mouth of an incoming river, which focused the waves into a 30 foot wall of water that knocked down trees and put fish next to triceratops and hadrosaurs in the fossil bed.  The fish’s gills were also clogged with tektites, ejecta from the impact made of hot glass spherules that set the world’s forests on fire.

Seismic metamaterials can “cloak” buildings

Posted in Uncategorized by Mike Stay on 2019 April 2


Posted in Uncategorized by Mike Stay on 2019 March 7

The whitepaper for my company’s new cryptocurrency project is out!


JavaScript can’t be fixed

Posted in Uncategorized by Mike Stay on 2019 January 18

In 2015, Andreas Rossberg of the V8 team proposed making JavaScript a little saner by making classes, objects, and functions unalterable after their construction so that a sound type system becomes possible.

A year later, his team reported on their progress and said that there’s basically no way to do that without breaking compatibility with everything that came before.

Any sound type system that’s compatible with old code will have to admit that JavaScript “functions” really aren’t and will have to model a lot more of JavaScript’s insanity.


A whale-eating whale

Posted in Uncategorized by Mike Stay on 2019 January 16

The Basilosaurus is not, as its name suggests, a lizard, but a prehistoric cetacean, a mammal.  It grew up to 60 feet (18 m) long.  It could bite with a pressure of 3,600 pounds per square inch (25,000 kPa), chewing and then swallowing its food.

The Gnostic Brotherhood of Mathematicians

Posted in Uncategorized by Mike Stay on 2019 January 15

3D printed cam system for drawing pictures with a laser

Posted in Uncategorized by Mike Stay on 2019 January 14

Bob Pepper’s Dragonmaster Cards

Posted in Uncategorized by Mike Stay on 2019 January 6

I got these cards for my birthday one year in the early 80s. We didn’t play the game very much—it was too hard for me as a little kid, and I was the oldest of my siblings—but I was always impressed by the artwork and would pull the cards out just to look at them.

New alternative to platinum in catalytic converters uses gelatin and molybdenum, tungsten or cobalt

Posted in Uncategorized by Mike Stay on 2018 December 26

These fossilized clams turned into opals

Posted in Uncategorized by Mike Stay on 2018 December 18

New Zealand batflies

Posted in Uncategorized by Mike Stay on 2018 December 18

South American batflies are parasitic on their bats, but the ones in New Zealand are vegetarian coprophages, eating the pre-digested fruit in the bats’ guano.

New cervical cancer test has 100% accuracy

Posted in Uncategorized by Mike Stay on 2018 December 17

“A new test for cervical cancer was found to detect all of the cancers in a randomised clinical screening trial of 15,744 women, outperforming both the current Pap smear and human papillomavirus (HPV) test at a reduced cost, according to a study led by Queen Mary University of London.”

Happy Quantum Day!

Posted in Uncategorized by Mike Stay on 2018 December 14

On this day in 1900, Max Planck presented his theory of blackbody radiation to the German Physical Society (Deutsche Physikalische Gesellschaft, DPG).

Sometime during December of 1925, Schrödinger came up with the wave equation of a nonrelativistic electron; he wrote about it in a letter to Wien on 1925-12-27. So we could say that Quantum Day has a half-life on the order of 10 days!

No fees plus 3% interest on checking & savings balances

Posted in Uncategorized by Mike Stay on 2018 December 13

A recording of Mark Twain (sort of)

Posted in Uncategorized by Mike Stay on 2018 December 5

Not a recording of Twain, but a recording of a professional imitator who lived next to Twain for years.

The Spooky instrument

Posted in Uncategorized by Mike Stay on 2018 November 30

Scarab beetles and circularly polarized light

Posted in Uncategorized by Mike Stay on 2018 November 27

Time lapse of rocket launch as seen from ISS

Posted in Uncategorized by Mike Stay on 2018 November 26

Using microfluidics to direct T-cells to cancer cells, learn what antigens are on their surface, and create tumor-specific treatments

Posted in Uncategorized by Mike Stay on 2018 November 7

Hackberries apparently taste like peanut M&Ms

Posted in Uncategorized by Mike Stay on 2018 November 7

Lovecraftian tentacled horrors bursting from their leathery ova

Posted in Uncategorized by Mike Stay on 2018 November 7


Posted in Uncategorized by Mike Stay on 2018 October 29

I started parsing DFS as “discrete finite something-or-other” then realized I had to backtrack and try a different sibling of “discrete”.

Boys’ Life Tripods trilogy

Posted in Uncategorized by Mike Stay on 2018 October 26

This comic adaptation of John Christopher’s Tripods trilogy was the best part of Boys’ Life magazine for me. It started coming out the month after I became a boy scout.

Piano Genie

Posted in Uncategorized by Mike Stay on 2018 October 19

This project trained a neural net to predict, essentially, high res piano from low res input. You get to choose whether the note is higher or lower than the last one, but not the note itself; that’s up to the neural net to decide. There’s a version you can try out on the web!

Grail coin by romanbooteen

Posted in Uncategorized by Mike Stay on 2018 October 19

Custom coin carving with a knight, a removable sword, and a hidden grail!

Making of:

Meteor shower flows in 3d

Posted in Uncategorized by Mike Stay on 2018 October 19

A 3d interactive rendering of the solar system illustrating orbits of the meteors for each of the major showers.

G+ and website gone

Posted in Uncategorized by Mike Stay on 2018 October 10

I got rid of my Facebook account a month ago.  I used Google+ until it was announced that they’re shutting it down.  I also had a website elsewhere, but that server recently kicked the bucket, and it’s unclear whether the content is coming back.  The wayback machine has a copy from earlier this year, so not much would be lost if it doesn’t.  But given those losses, it looks like I’ll be posting here more often.

So to celebrate, here’s a fantastic video of a zebrafish embryo growing a nervous system.


Capability Myths Demolished

Posted in Uncategorized by Mike Stay on 2016 February 16

A nice summary of erights’ Capability Myths Demolished paper.

the morning paper

Capability Myths Demolished – Miller et al., 2003

Pretty much everyone is familiar with an ACL-based approach to security. Despite having been around for a very long time, the capabilities approach to security is less well-known. Today’s paper choice provides an excellent introduction to the capabilities model and how it compares to ACLs. Along the way we’ll learn about 7 fundamental properties of security systems, and which combinations of those are required to offer certain higher-level guarantees. Capabilities are central to the type system of the Pony language which we’ll be looking at tomorrow.

Let’s start out by looking at one of the fundamental differences between ACLs and capabilities, the direction of the relationship between subject and resource. Consider a classic access matrix such as the one below. Each row is a subject, and each column a resource. The entry in a given cell describes the permissions the subject has…


Serializing Javascript Closures

Posted in Uncategorized by Mike Stay on 2016 January 21

In JavaScript, the eval function lets you dynamically look up variables in scope, so you can do a terrible hack like this to sort-of serialize a closure if you’re in the same scope as the variables being closed over:

Given StringMap.js and atLeastFreeVarNames.js from SES, one can define the following:

var s = function(f) {
  // Find free vars in f. (This depends, of course, on
  // Function.prototype.toString being unchanged.)
  var code = f.toString();
  var free = ses.atLeastFreeVarNames(code);
  // Construct code that evaluates to an environment object,
  // mapping each free name to its current value (or {} if the
  // name doesn't evaluate in this scope).
  var env = ["({"];
  for (var i = 0, len = free.length; i < len; ++i) {
    env.push('"');
    env.push(free[i]);
    env.push('":(function(){try{return eval("(');
    env.push(free[i]);
    env.push(')")}catch(_){return {}}})(),');
  }
  env.push("})");
  return "({code:" + JSON.stringify(code) + ",env:" + env.join("") + "})";
};

// Substitute a version of JSON.stringify that handles cycles
// if your environment needs one.
var t = function(x) { return '(' + JSON.stringify(x) + ')'; };

Then you can use these definitions to serialize inline definitions that only close over “stringifiable” objects or objects behind cut points (see deserialization below):

var baz = {x: 1};
var serializedClosure = t(eval(s(
  function bar(foo) { console.log('hi'); return ++(baz.x)+foo /*fiddle*/; }
)));
// serializedClosure === '({"code":"function bar(foo) { console.log('hi'); return ++(baz.x)+foo /*fiddle*/; }","env":{"function":{},"bar":{},"foo":{},"console":{},"log":{},"hi":{},"return":{},"baz":{"x":1},"x":{},"fiddle":{}}})'

The string serializedClosure can then be stashed somewhere.  When it’s time to deserialize, do the following:

var d = function(closure, localBindings) {
  localBindings = localBindings || {};
  return function() {
    with (closure.env) {
      with (localBindings) {
        return eval("(" + closure.code + ")").apply(this, arguments);
      }
    }
  };
};

var closure = eval(serializedClosure);
// Hook up local values if you want them.
var deserializedFn = d(closure, {console: console});
console.log(deserializedFn('foo')); // Prints "hi", then "2foo".
console.log(deserializedFn('bar')); // Prints "hi", then "3bar".

If you want to store the updated state, just re-stringify the closure:

var newSerializedClosure = t(closure);
// newSerializedClosure === '({"code":"function bar(foo) { console.log('hi'); return ++(baz.x)+foo /*fiddle*/; }","env":{"function":{},"bar":{},"foo":{},"console":{},"log":{},"hi":{},"return":{},"baz":{"x":3},"x":{},"fiddle":{}}})'
// Note that baz.x is now 3.

As I said, a very ugly hack, but still might be useful somewhere.

HDRA part 1a

Posted in Uncategorized by Mike Stay on 2015 May 30

Again, comments are appreciated.

2-categories and lambda calculus

This is a follow up post, but not the promised sequel, from A 2-Categorical Approach to the Pi Calculus where I’ll try to correct various errors I made and try to clarify some things.

First, lambda calculus was invented by Church to solve Hilbert’s decision problem, also known as the Entscheidungsproblem. The decision problem was third on a list of problems he presented at a conference in 1928, but was not, as I wrote last time, “Hilbert’s third problem”, which was third on a list of problems he laid out in 1900.

Second, the problem Greg Meredith and I were trying to address was how to prevent reduction under a “receive” prefix in the pi calculus, which is intimately connected with preventing reduction under a lambda in lambda calculus. All programming languages that I know of, whether eager or lazy, do not reduce under a lambda.

There are two approaches in the literature to the semantics of lambda calculus. The first is denotational semantics, which was originally concerned with what function a term computes. This is where computability and type theory live. Denotational semantics treats alpha-beta-eta equivalence classes of lambda terms as morphisms in a category. The objects in the category are called “domains”, and are usually “\omega-CPOs“, a special kind of poset. Lambek and Scott used this approach to show that alpha-beta-eta equivalence classes of lambda terms with one free variable form a cartesian closed category where composition is given by substitution.

The type of a term describes the structure of its normal form. Values are terms that admit no beta reduction; they’re either constants or lambda abstractions. The types in lambda calculus are correspondingly either base types or arrow types. (While Lambek and Scott introduced new term constructors for products, they’re not strictly necessary, because the lambda term \lambda z.ztu can be used for the pair \langle t,u\rangle with projections \lambda p.p(\lambda xy.x) and \lambda p.p(\lambda xy.y).)
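As a sanity check on that encoding, applying the first projection to the pair beta-reduces to the first component:

\displaystyle (\lambda p.p(\lambda xy.x))\,\langle t,u\rangle \;\to_\beta\; (\lambda z.ztu)(\lambda xy.x) \;\to_\beta\; (\lambda xy.x)\,t\,u \;\to_\beta\; (\lambda y.t)\,u \;\to_\beta\; t.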

The second is operational semantics, which is more concerned with how a function is computed than what it computes. All of computational complexity theory and algorithmic analysis lives here, since we have to count the number of steps it takes to complete a computation. Operational semantics treats lambda terms as objects in a category and rewrite rules as morphisms. It is a very syntactical approach; the set of terms is an algebra of some endofunctor, usually presented in Backus-Naur Form. This set of terms is then equipped with some equivalence relations and reduction relations. For lambda calculus, we mod out terms by alpha, but not beta. The reduction relations often involve specifying a reduction context, which means that the rewrites can’t occur just anywhere in a term.

In pi calculus, terms don’t have a normal form, so we can’t define an equivalence on pi calculus terms by comparing their normal forms. Instead, we say two terms are equivalent if they behave the same in all contexts, i.e. they’re bisimilar. Typing becomes rather more complicated; pi calculus types describe the structure of the current term and rewrites should do something weaker than preserve types on the nose; it suggests using a double category, but that’s for next time.

Seely suggested modeling rewrites with 2-morphisms in a 2-category and showed how beta and eta were lax adjoints.  We’re suggesting a different, more operational way of modeling rewrites with 2-morphisms.  Define a 2-category L with

  • V and T as generating objects,
  • -, -(=), and \lambda as generating 1-morphisms,
  • alpha equivalence as an identity 2-morphism, and
  • beta reduction as a generating 2-morphism.

A closed term is one with no free variables, and the hom category L(I, T) consisting of closed lambda terms and beta reductions between them is a typical category you’d get from looking for the operational semantics of lambda calculus.

The 2-category L is a fine semantics for the variant of lambda calculus where beta can apply anywhere within a term, but there are too many morphisms if you want to model the lambda calculus without reduction under a prefix. Given closed terms t, t'\colon I \to T such that t beta reduces to t', we can whisker beta by \lambda x to get

\displaystyle \lambda x.\beta\colon \lambda x.t \Rightarrow \lambda x.t'.

To cut down the extra 2-morphisms, we need to model reduction contexts. Call a 1-morphism built out of the original generators a precontext; the difference between precontexts and the contexts we need is the notion of the “topmost” context, where we can see enough of the surrounding term to determine that reduction is possible. To model a reduction context, we make a new 2-category L' by introducing a new generating 1-morphism

\displaystyle [-]\colon T \to T

and say that a 1-morphism with signature T^{\otimes n} \to T that factors as a precontext followed by [-] is an n-hole term context. We also interpret beta with a 2-morphism between reduction contexts rather than between precontexts. The hom category L'(I, T) is the operational semantics of lambda calculus without reduction under lambda.
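Concretely, in the term syntax the reduction contexts for the no-reduction-under-lambda variant can be generated by a grammar like (my reconstruction, not from the paper; a lazy, call-by-name strategy would keep only the first two productions):

\displaystyle C ::= [-] \;\mid\; C\,t \;\mid\; t\,C.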

In the next post, I’ll relate Seely’s 2-categorical approach and ours by extending to double categories and using Melliès and Zeilberger’s notion of type refinement.


Posted in Uncategorized by Mike Stay on 2015 May 18

Greg Meredith and I have a short paper that’s been accepted to Higher-Dimensional Rewriting and Applications (HDRA) 2015 on modeling the asynchronous polyadic pi calculus with 2-categories. We avoid domain theory entirely and model the operational semantics directly; full abstraction is almost trivial. As a nice side-effect, we get a new tool for reasoning about consumption of resources during a computation.

It’s a small piece of a much larger project, which I’d like to describe here in a series of posts. This post will talk about lambda calculus for a few reasons. First, lambda calculus is simpler, but complex enough to illustrate one of our fundamental insights. Lambda calculus is to serial computation what pi calculus is to concurrent computation; lambda calculus talks about a single machine doing a computation, while pi calculus talks about machines communicating over a network with potentially random delays. There is at most one possible outcome for a computation in the lambda calculus, while there are many possible outcomes in a computation in the pi calculus. Both the lazy lambda calculus and the pi calculus, however, have as an integral part of their semantics the notion of waiting for a sub-computation to complete before moving on to another one. Second, the denotational semantics of lambda calculus in Set is well understood, as is its generalization to cartesian closed categories; this semantics is far simpler than the denotational semantics of pi calculus and serves as a good introduction. The operational semantics of lambda calculus is also simpler than that of pi calculus, and there is previous work on modeling it using higher categories.


Alonzo Church invented the lambda calculus as part of his attack on Hilbert’s third problem, also known as the Entscheidungsproblem, which asked for an algorithm to solve any mathematical problem. Church published his proof that no such algorithm exists in 1936. Turing invented his eponymous machines, also to solve the Entscheidungsproblem, and published his independent proof a few months after Church. When he discovered that Church had beaten him to it, Turing proved in 1937 that the two approaches were equivalent in power. Since Turing machines were much more “mechanical” than the lambda calculus, the development of computing machines relied far more on Turing’s approach, and it was only decades later that people started writing compilers for more friendly programming languages. I’ve heard it quipped that “the history of programming languages is the piecemeal rediscovery of the lambda calculus by computer scientists.”

The lambda calculus consists of a set of “terms” together with some relations on the terms that tell how to “run the program”. Terms are built up out of “term constructors”; in the lambda calculus there are three: one for variables, one for defining functions (Church denoted this operation with the Greek letter lambda, hence the name of the calculus), and one for applying those functions to inputs. I’ll talk about these constructors and the relations more below.

Church introduced the notion of “types” to avoid programs that never stop. Modern programming languages also use types to avoid programmer mistakes and encode properties about the program, like proving that secret data is inaccessible outside certain parts of the program. The “simply-typed” lambda calculus starts with a set of base types and takes the closure under the binary operation \to to get a set of types. Each term is assigned a type; from this one can deduce the types of the variables used in the term. An assignment of types to variables is called a typing context.

The search for a semantics for variants of the lambda calculus has typically been concerned with finding sets or “domains” such that the interpretation of each lambda term is a function between domains. Scott worked out a domain D such that the continuous functions from D to itself are precisely the computable ones. Lambek and Scott generalized the category where we look for semantics from Set to arbitrary cartesian closed categories (CCCs).

Lambek and Scott constructed a CCC out of lambda terms; we call this category the syntactical category. Then a structure-preserving functor from the syntactical category to Set or some other CCC would provide the semantics. The syntactical category has types as objects and equivalence classes of certain terms as morphisms. A morphism in the syntactical category goes from a typing context to the type of the term.

John Baez has a set of lecture notes from Fall 2006 through Spring 2007 describing Lambek and Scott’s approach to the category theory of lambda calculus and generalizing it from cartesian closed categories to symmetric monoidal closed categories so it can apply to quantum computation as well: rather than taking a functor from the syntactical category into Set, we can take a functor into Hilb instead. He and I also have a “Rosetta stone” paper summarizing the ideas and connecting them with the corresponding generalization of the Curry-Howard isomorphism.

The Curry-Howard isomorphism says that types are to propositions as programs are to proofs. In practice, types are used in two different ways: one as propositions about data and the other as propositions about code. Programming languages like C, Java, Haskell, and even dynamically typed languages like JavaScript and Python use types to talk about propositions that data satisfies: is it a date or a name? In these languages, equivalence classes of programs constitute constructive proofs. Concurrent calculi are far more concerned about propositions that the code satisfies: can it reach a deadlocked state? In these languages, it is the rewrite rules taking one term to another that behave like proofs. Melliès and Zeilberger’s excellent paper “Functors are Type Refinement Systems” relates these two approaches to typing to each other.

Note that Lambek and Scott’s approach does not have the sets of terms or variables as objects! The algebra that defines the set of terms plays only a minor role in the category; there’s no morphism in the CCC, for instance, that takes a term t and a variable x to produce the term \lambda x.t. This failure to capture the structure of the term in the morphism wasn’t a big deal for lambda calculus because of “confluence” (see below), but it turns out to matter a lot more in calculi like Milner’s pi calculus that describe communicating over a network, where messages can be delayed and arrival times matter for the end result (consider, for instance, two people trying to buy online the last ticket to a concert).

The last few decades have seen domains becoming more and more complicated in order to try to “unerase” the information about the structure of terms that gets lost in the domain theory approach and recover the operational semantics. Fiore, Moggi, and Sangiorgi; Stark; and Cattani, Stark, and Winskel all present domain models of the pi calculus that recursively involve the powerset in order to talk about all the possible futures for a term. Industry has never cared much about denotational semantics: the Java Virtual Machine is an operational semantics for the Java language.

What we did

Greg Meredith and I set out to model the operational semantics of the pi calculus directly in a higher category rather than using domain theory. An obvious first question is, “What about types?” I was particularly worried about how to relate this approach to the kind of thing Scott and Lambek did. Though it didn’t make it into the HDRA paper and the details won’t make it into this post, we found that we’re able to use the “type-refinement-as-a-functor” idea of Melliès and Zeilberger to show how the algebraic term-constructor functions relate to the morphisms in the syntactical category.

We’re hoping that this categorical approach to modeling process calculi will help with reasoning about practical situations where we want to compose calculi; for instance, we’d like to put a hundred pi calculus engines around the edges of a chip and some ambient calculus engines, which have nice features for managing the location of data, in the middle to distribute work among them.

Lambda calculus

The lambda calculus consists of a set of “terms” together with some relations on the terms. The set T of terms is defined recursively, parametric in a countably infinite set V of variables. The base terms are the variables: if x is an element of V, then x is a term in T. Next, given any two terms t, t' \in T, we can apply one to the other to get t(t'). We say that t is in the head position of the application and t' in the tail position. (When the associativity of application is unclear, we’ll also use parentheses around subterms.) Finally, we can abstract out a variable from a term: given a variable x and a term t, we get a term \lambda x.t.

The term constructors define an algebra, a functor LC from Set to Set that takes any set of variables V to the set of terms T = LC(V). The term constructors themselves become functions:

\displaystyle   \begin{array}{rll}  -\colon & V \to T &\mbox{variable}\\  -(-)\colon & T \times T \to T &\mbox{application}\\  \lambda\colon & V \times T \to T &\mbox{abstraction}  \end{array}

Church described three relations on terms. The first relation, alpha, relates any two lambda abstractions that differ only in the variable name. This is exactly the same as when we consider the function f(x) = x^2 to be identical to the function f(y) = y^2. The third relation, eta, says that there’s no difference between a function f and a “middle-man” function that gets an input x and applies the function f to it: \lambda x.f(x) = f. Both alpha and eta are equivalences.

The really important relation is the second one, “beta reduction”. In order to define beta reduction, we have to define the free variables of a term: a variable occurring by itself is free; the set of free variables in an application is the union of the free variables in its subterms; and the free variables in a lambda abstraction are the free variables of the subterm except for the abstracted variable.

\displaystyle   \begin{array}{rl}  \mathrm{FV}(x) = & \{x\} \\  \mathrm{FV}(t(t')) = & \mathrm{FV}(t) \cup \mathrm{FV}(t') \\  \mathrm{FV}(\lambda x.t) = & \mathrm{FV}(t) \setminus \{x\} \\  \end{array}

Beta reduction says that when we have a lambda abstraction \lambda x.t applied to a term t', then we replace every free occurrence of x in t by t':

\displaystyle  (\lambda x.t)(t') \downarrow_\beta t\{t' / x\},

where we read the right hand side as “t with t' replacing x.” We see a similar replacement of y in action when we compose the following functions:

\displaystyle   \begin{array}{rl}  f(x) = & x + 1 \\  g(y) = & y^2 \\  g(f(x)) = & (x + 1)^2 \\  \end{array}
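To make the definitions concrete, here's a small JavaScript sketch of free variables and substitution; the plain-object encoding and the names `Var`, `App`, `Lam` are mine, invented for illustration:

```javascript
// Hypothetical plain-object encoding of lambda terms.
var Var = function (x) { return { tag: 'var', name: x }; };
var App = function (t, u) { return { tag: 'app', head: t, tail: u }; };
var Lam = function (x, t) { return { tag: 'lam', name: x, body: t }; };

// FV, following the three clauses above (the result may repeat names).
var freeVars = function (t) {
  switch (t.tag) {
    case 'var': return [t.name];
    case 'app': return freeVars(t.head).concat(freeVars(t.tail));
    case 'lam': return freeVars(t.body).filter(function (y) {
      return y !== t.name;
    });
  }
};

// Naive substitution t{u/x}; it assumes all bound names are distinct,
// so it skips the capture-avoiding renaming a real implementation needs.
var subst = function (t, u, x) {
  switch (t.tag) {
    case 'var': return t.name === x ? u : t;
    case 'app': return App(subst(t.head, u, x), subst(t.tail, u, x));
    case 'lam': return t.name === x ? t : Lam(t.name, subst(t.body, u, x));
  }
};

freeVars(Lam('x', App(Var('x'), Var('z'))));   // ['z']
subst(App(Var('x'), Var('x')), Var('y'), 'x'); // App(Var('y'), Var('y'))
```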

We say a term has a normal form if there’s some sequence of beta reductions that leads to a term where no beta reduction is possible. When the beta rule applies in more than one place in a term, it doesn’t matter which one you choose to do first: any sequence of betas that leads to a normal form will lead to the same normal form. This property of beta reduction is called confluence. Confluence means that the order of performing various subcomputations doesn’t matter so long as they all finish: in the expression (2 + 5) * (3 + 6) it doesn’t matter which addition you do first or whether you distribute the expressions over each other; the answer is the same.

“Running” a program in the lambda calculus is the process of computing the normal form by repeated application of beta reduction, and the normal form itself is the result of the computation. Confluence, however, does not mean that when there is more than one place we could apply beta reduction, we can choose any beta reduction and be guaranteed to reach a normal form. The following lambda term, customarily denoted \omega, takes an input and applies it to itself:

\displaystyle \omega = \lambda x.x(x)

If we apply \omega to itself, then beta reduction produces the same term, customarily called \Omega:

\displaystyle \Omega = \omega(\omega)

\displaystyle \Omega \downarrow_\beta \Omega.

It’s an infinite loop! Now consider this lambda term that has \Omega as a subterm:

\displaystyle (\lambda x.\lambda y.x)(\lambda x.x)(\Omega)

It says, “Return the first element of the pair (identity function, \Omega)”. If it has an answer at all, the answer should be “the identity function”. The question of whether it has an answer becomes, “Do we try to calculate the elements of the pair before applying the projection to it?”

Lazy lambda calculus

Many programming languages, like Java, C, JavaScript, Perl, Python, and Lisp are “eager”: they calculate the normal form of inputs to a function before calculating the result of the function on the inputs; the expression above, implemented in any of these languages, would be an infinite loop. Other languages, like Miranda, Lispkit, Lazy ML, and Haskell and its predecessor Orwell are “lazy” and only apply beta reduction to inputs when they are needed to complete the computation; in these languages, the result is the identity function. Abramsky wrote a 48-page paper about constructing a domain that captures the operational semantics of lazy lambda calculus.

The idea of representing operational semantics directly with higher categories originated with R. A. G. Seely, who suggested that beta reduction should be a 2-morphism; Barney Hilken and Tom Hirschowitz have also contributed to looking at lambda calculus from this perspective. In the “Rosetta stone” paper that John Baez and I wrote, we made an analogy between programs and Feynman diagrams. The analogy is precise as far as it goes, but it’s unsatisfactory in the sense that Feynman diagrams describe processes happening over time, while Lambek and Scott mod out by the process of computation that occurs over time. If we use 2-categories that explicitly model rewrites between terms, we get something that could potentially be interpreted with concepts from physics: types would become analogous to strings, terms would become analogous to space, and rewrites would happen over time.

The idea from the “algebra of terms” perspective is that we have objects V and T for variables and terms, term constructors as 1-morphisms, and the nontrivial 2-morphisms generated by beta reduction. Seely showed that this approach works fine when you’re unconcerned with the context in which reduction can occur.

This approach, however, doesn’t work for lazy lambda calculus! Horizontal composition in a 2-category is a functor, so if a term t reduces to a term t', then by functoriality, \lambda x.t must reduce to \lambda x.t'—but this is forbidden in the lazy lambda calculus! Functoriality of horizontal composition is a “relativity principle” in the sense that reductions in one context are the same as reductions in any other context. In lazy programming languages, on the other hand, the “head” context is privileged: reductions only happen here. It’s somewhat like believing that measuring differences in temperature is like measuring differences in space, that only the difference is meaningful—and then discovering absolute zero. When beta reduction can happen anywhere in a term, there are too many 2-morphisms to model lazy lambda calculus.

In order to model this special context, we reify it: we add a special unary term constructor [-]\colon T \to T that marks contexts where reduction is allowed, then redefine beta reduction so that the term constructor [-] behaves like a catalyst that enables the beta reduction to occur. This lets us cut down the set of 2-morphisms to exactly those that are allowed in the lazy lambda calculus; Greg and I did essentially the same thing in the pi calculus.

More concretely, we have two generating rewrite rules. The first propagates the reduction context to the head position in an application; the second is beta reduction restricted to a reduction context.

\displaystyle  [t(t')]\, \downarrow_{ctx}\, [[t](t')]

\displaystyle  [[\lambda x.t](t')]\, \downarrow_\beta\, [t]\, \{t'/x\}

When we surround the example term from the previous section with a reduction context marker, we get the following sequence of reductions:

\displaystyle   \begin{array}{rl}  & [(\lambda x.\lambda y.x)(\lambda x.x)(\Omega)] \\    \downarrow_{ctx}& [[(\lambda x.\lambda y.x)(\lambda x.x)](\Omega)] \\    \downarrow_{ctx}& [[[\lambda x.\lambda y.x](\lambda x.x)](\Omega)] \\    \downarrow_{\beta}& [[\lambda y.(\lambda x.x)](\Omega)]\\    \downarrow_{\beta}& [\lambda x.x] \\  \end{array}

At the start, none of the subterms were of the right shape for beta reduction to apply. The first two reductions propagated the reduction context down to the projection in head position. At that point, the only reduction that could occur was at the application of the projection to the first element of the pair, and after that to the second element. At no point was \Omega ever in a reduction context.
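Here's a runnable JavaScript sketch of the two rewrite rules; the array encoding is made up for illustration (a variable is a string, ['app', t, u] is an application, ['lam', 'x', t] is an abstraction, and ['ctx', t] is [t]). It reduces the example above to the identity without ever putting Ω in a reduction context:

```javascript
// Naive substitution; fine here because we only substitute closed terms.
var sub = function (t, u, x) {
  if (typeof t === 'string') { return t === x ? u : t; }
  if (t[0] === 'app') { return ['app', sub(t[1], u, x), sub(t[2], u, x)]; }
  if (t[0] === 'lam') { return t[1] === x ? t : ['lam', t[1], sub(t[2], u, x)]; }
  return ['ctx', sub(t[1], u, x)];
};

// One rewrite step under the outermost [-], or null if no rule applies.
var step = function (t) {
  if (typeof t === 'string' || t[0] !== 'ctx') { return null; }
  var body = t[1];
  if (body[0] !== 'app') { return null; }
  var head = body[1], tail = body[2];
  if (head[0] === 'ctx') {
    if (head[1][0] === 'lam') {
      // [[λx.s](u)] ↓β [s]{u/x}
      return sub(['ctx', head[1][2]], tail, head[1][1]);
    }
    // otherwise keep rewriting inside the head context
    var h = step(head);
    return h === null ? null : ['ctx', ['app', h, tail]];
  }
  // [s(u)] ↓ctx [[s](u)]
  return ['ctx', ['app', ['ctx', head], tail]];
};

var run = function (t) {
  for (var next = step(t); next !== null; next = step(t)) { t = next; }
  return t;
};

var id = ['lam', 'x', 'x'];
var fst = ['lam', 'x', ['lam', 'y', 'x']];
var w = ['lam', 'x', ['app', 'x', 'x']];
var Omega = ['app', w, w];

// Ω is discarded before it is ever marked, so this terminates:
run(['ctx', ['app', ['app', fst, id], Omega]]); // ['ctx', ['lam', 'x', 'x']]
```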

Compute resources

In order to run a program that does anything practical, you need a processor, time, memory, and perhaps disk space or a network connection or a display. All of these resources have a cost, and it would be nice to keep track of them. One side-effect of reifying the context is that we can use it as a resource.

The rewrite rule \downarrow_{ctx} increases the number of occurrences of [-] in a term while \downarrow_\beta decreases the number. If we replace \downarrow_{ctx} by the rule

\displaystyle   [t(t')]\, \downarrow_{ctx'}\, [t](t')

then the number of occurrences of [-] can never increase. By forming the term [[\cdots[t]\cdots]], we can bound the number of beta reductions that can occur in the computation of t.

If we have a nullary constructor c\colon 1 \to T, then we can define [t] = c(t), where the application is the ordinary application constructor, and let the program dynamically decide whether to evaluate an expression eagerly or lazily.

In the pi calculus, we have the ability to run multiple processes at the same time; each [-] in that situation represents a core in a processor or computer in a network.

These are just the first things that come to mind; we’re experimenting with variations.


We figured out how to model the operational semantics of a term calculus directly in a 2-category by requiring a catalyst to carry out a rewrite, which gave us full abstraction without needing a domain based on representing all the possible futures of a term. As a side-effect, it also gave us a new tool for modeling resource consumption in the process of computation. Though I haven’t explained how yet, there’s a nice connection, via Melliès and Zeilberger’s ideas about type refinement, between the “algebra-of-terms” approach that uses V and T as objects and Lambek and Scott’s approach that uses types as objects. Next time, I’ll talk about the pi calculus and types.

Q, Jokers, and Clowns

Posted in Uncategorized by Mike Stay on 2014 August 5

Conor McBride has a beautiful functional pearl, “Clowns to the Left of me, Jokers to the Right”, in which he discusses the idea of suspending the computation of a catamorphism. I just wanted to point out the relation of his work to the q-derivative. Since Star Trek’s character Q is a trickster god like Loki, Anansi, or Coyote, q-derivatives also fit nicely with McBride’s theme.

The usual definition of the derivative is

\displaystyle\frac{df(x)}{dx} = \lim_{h \to 0} \frac{f(x + h) - f(x)}{(x + h) - x}.

Instead of translating x by an infinitesimal amount h, we can scale x by an infinitesimal amount q; these two definitions coincide in the limit:

\displaystyle\frac{df(x)}{dx} = \lim_{q \to 1} \frac{f(qx) - f(x)}{qx - x}.

However, when people talk about the q-derivative, they usually mean the operator we get when we don’t take the limit and q \ne 1. It should probably be called the “q-difference”, but we’ll see that the form of the difference is so special that it deserves the exceptional name.

The q-derivative, as one would hope, behaves linearly:

\begin{array}{rl} \displaystyle \frac{d_qf(x)}{d_qx} & \displaystyle = \frac{f(qx) - f(x)}{qx - x} \\ \\  \displaystyle \frac{d_q(f(x) + g(x))}{d_qx} & \displaystyle = \frac{(f(qx) + g(qx)) - (f(x) + g(x))}{qx - x} \\  & \displaystyle = \frac{(f(qx) - f(x)) + (g(qx)- g(x))}{qx - x} \\  & \displaystyle = \frac{d_qf(x)}{d_qx} + \frac{d_qg(x)}{d_qx}\end{array}

Even better, the q-derivative of a power of x is separable into a coefficient that depends only on q and a single smaller power of x:

\begin{array}{rl}\displaystyle \frac{d_q x^n}{d_qx} & \displaystyle = \frac{(qx)^n - x^n}{qx - x} \\  &\displaystyle = \frac{q^n - 1}{q - 1}\frac{x^n}{x} \\  &\displaystyle = [n]_q x^{n-1},\end{array}


\displaystyle [n]_q = \frac{q^n - 1}{q - 1} = 1 + q + \cdots + q^{n-1}.

Clearly, as q \to 1, [n]_q \to n. Whereas n counts the number of ways to insert a point into an ordered list of n-1 items, [n]_q counts the number of ways to insert a linearly independent ray passing through the origin into an (n-1)-dimensional vector space over a field with q elements with a given ordered basis.
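A quick numeric check in JavaScript (the helper names are mine) that the closed form and the sum agree, and that the q-derivative of a power really separates as claimed:

```javascript
// [n]_q two ways: (q^n - 1)/(q - 1), and 1 + q + ... + q^(n-1).
var qBracketClosed = function (n, q) { return (Math.pow(q, n) - 1) / (q - 1); };
var qBracketSum = function (n, q) {
  var total = 0;
  for (var k = 0; k < n; ++k) { total += Math.pow(q, k); }
  return total;
};

// The q-derivative as a plain difference quotient.
var qDeriv = function (f, x, q) { return (f(q * x) - f(x)) / (q * x - x); };
var cube = function (x) { return x * x * x; };

qBracketClosed(4, 2);         // 15
qBracketSum(4, 2);            // 15 = 1 + 2 + 4 + 8
qBracketSum(4, 1);            // 4, recovering the ordinary n at q = 1
qDeriv(cube, 2, 3);           // 52
qBracketClosed(3, 3) * 2 * 2; // 52, i.e. [3]_q x^{n-1} at x = 2, q = 3
```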

The q-derivative even works when q is an operator rather than a number. Polynomials work in any rig, so if x is, say, a function instead of a number, q could be the Fourier transform.

Let’s lift this to types. The derivative of a datatype is the type of its one-holed contexts, so we expect the q-derivative to have a similar interpretation. When we take Q and X to be types, the Q-derivative of a tuple X^n is

\displaystyle [n]_Q X^{n-1} = X^{n-1} + (QX)X^{n-2} + (QX)^2X^{n-3} + \cdots + (QX)^{n-1}.

Each option factors into two parts: the ‘clowns’ are a power of QX to the left of the hole, followed by the ‘jokers’, a power of X after the hole. This type is the one we expect for the intermediate state when mapping a function f\colon X \to Q\times X over the tuple; any function g\colon X \to Q can be lifted to such a function f(x) = (g(x), x).

Similarly, the Q-derivative of the list datatype 1/(1-X) is 1/((1-QX)(1-X)); that is, the ‘clowns’ form the list of the outputs of type Q \times X for the elements that have already been processed and the ‘jokers’ form the list of the elements yet to process.

When we abstract from thinking of q as a number to thinking of it as an operator on ring elements, the corresponding action on types is to think of Q as a functor. In this case, QX is not a pair type, but rather a parametric type Q[X]. One might, for example, consider mapping the function f(x) = 1/x over a list of real numbers. The resulting outputs will be real except when the input is zero, so we’ll want to adjoin an undefined value. Taking Q = Option, the Q-derivative of List[Int] is List[Option[Int]]\times List[Int], consisting of the list of values that have already been processed, followed by the list of values yet to process.
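As an illustration, here's a hypothetical JavaScript sketch of such a suspended traversal: the intermediate state is exactly a list of clowns (the Option-style outputs already produced) and a list of jokers (the inputs yet to process). Q = Option is modeled crudely as "number or null":

```javascript
// f : X -> Q[X], returning null for the undefined case.
var safeInv = function (x) { return x === 0 ? null : 1 / x; };

// The suspended state is clowns × jokers:
// outputs already produced, inputs yet to process.
var start = function (xs) { return { clowns: [], jokers: xs }; };

var stepOnce = function (state) {
  if (state.jokers.length === 0) { return state; } // traversal finished
  return {
    clowns: state.clowns.concat([safeInv(state.jokers[0])]),
    jokers: state.jokers.slice(1)
  };
};

var s = stepOnce(stepOnce(stepOnce(start([2, 0, 4]))));
// s.clowns is [0.5, null, 0.25] and s.jokers is []
```

At every intermediate moment the computation can be paused, inspected, or resumed, because the state is an ordinary first-class value of the product type above.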

Funny identity matrix

Posted in Math by Mike Stay on 2013 June 11

Given any category C with finite products and coproducts such that the products distribute over the coproducts, you can make a compact closed bicategory Mat(C) of (natural numbers, matrices of elements of Ob(C), matrices of elements of Mor(C)) and define matrix composition by \displaystyle (M \circ N)(i,k) = \sum_j M(i,j) \times N(j,k).

Take the partial order whose objects are nonnegative integers and whose morphisms m \to n mean m | n; the product is gcd and the coproduct is lcm. In this category, the terminal object is 0 and the initial object is 1, so the identity matrix looks like

\left( \begin{array}{cccc}0 & 1 & \ldots & 1 \\ 1 & 0 & \ldots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \ldots & 0 \end{array} \right)

and matrix multiplication is \displaystyle (M \circ N)(i,k) = \mathop{\mbox{lcm}}_j \mbox{gcd}(M(i,j), N(j,k)).
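As a sanity check, here's a JavaScript sketch of this composition (helper names are mine) confirming that the matrix above really does act as an identity:

```javascript
var gcd = function (a, b) { return b === 0 ? a : gcd(b, a % b); };
// In the divisibility order, 0 is terminal, so lcm with 0 is 0.
var lcm = function (a, b) { return (a === 0 || b === 0) ? 0 : a * b / gcd(a, b); };

// (M ∘ N)(i,k) = lcm_j gcd(M(i,j), N(j,k)).
var compose = function (M, N) {
  return M.map(function (row, i) {
    return N[0].map(function (_, k) {
      return row.reduce(function (acc, mij, j) {
        return lcm(acc, gcd(mij, N[j][k]));
      }, 1); // 1 is the initial object, the unit for lcm
    });
  });
};

var I = [[0, 1, 1], [1, 0, 1], [1, 1, 0]];
var M = [[6, 10, 15], [4, 9, 25], [8, 27, 5]];
compose(I, M); // gives back M, since gcd(0, m) = m and lcm(m, 1) = m
```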

Reflective calculi

Posted in Category theory, Programming by Mike Stay on 2013 May 15

This is mostly for my own benefit.

We start with some term-generating functor T(G, X) parametric in a set of ground terms G and a set of names X.

Now specialize to a single ground term:

T_2(X) = T(1, X)

Now mod out by structural equivalence:

T_3(X) = T_2(X) / \equiv

Let N be a set of names and let M be the set of name-equivalence classes N / \equiv_N (where \equiv_N is yet to be defined—it’s part of the theory of names in which we’re still parametric).

Prequoting and predereference are an algebra and coalgebra of T_3, respectively:

PQ: T_3(M) \to M

PD: M \to T_3(M)

such that

PD \circ PQ: T_3(M) \to T_3(M) = 1_{T_3(M)}.

Real quoting and dereference have to use T_4(X), defined below.

Define N = \mu X.T_2(X). Does N / \equiv = \mu X.T_3(X)? I think so; assuming it does, define

M = \mu X.T_3(X)

so name equivalence is structural equivalence; equivalence of prequoted predereferences is automatic by the definition above.

The fixed point gives us an isomorphism

F: T_3(M) \to M.

We can define PQ = F, PD = F^{-1}, because undecidability doesn’t come into play until we add operational semantics. It’s decidable whether two terms are structurally equivalent. Thus

PD \circ PQ:T_3(M) \to T_3(M)

is the identity, satisfying the condition, and

PQ \circ PD: M \to M

is the identity, which we get for free.

When we mod out by operational semantics (following the traditional approach rather than the 2-categorical one needed for pi calculus):

T_4(X) = T_3(X) / \beta,

we have the quotient map

q_X: T_3(X) \to T_4(X)

and a map

r_X: T_4(X) \to T_3(X)

that picks a representative from the equivalence class.

It’s undecidable whether two terms are in the same operational equivalence class, so q_X may not halt. However, it’s true by construction that

q_X \circ r_X: T_4(X) \to T_4(X)

is the identity.

We can extend prequoting and predereference to quoting and dereference on T_4 by

Q: T_4(M) \stackrel{r_M}{\to} T_3(M) \stackrel{PQ}{\to} M

D: M \stackrel{PD}{\to} T_3(M) \stackrel{q_M}{\to} T_4(M)

and then

D \circ Q

= T_4(M) \stackrel{r_M}{\to} T_3(M) \stackrel{PQ}{\to} M \stackrel{PD}{\to} T_3(M) \stackrel{q_M}{\to} T_4(M)

= T_4(M) \stackrel{r_M}{\to} T_3(M) \stackrel{q_M}{\to} T_4(M)

= T_4(M) \stackrel{id}{\to} T_4(M)

which is what we want for quoting and dereference. The other way around involves undecidability.

Monads without category theory, redux

Posted in Category theory, Programming by Mike Stay on 2012 September 26

I intended the last version of this post to be a simple one-off, but people cared too much. A fire-breathing Haskellite said that jQuery must never be used as an example of a monad because it’s (gasp) not even functional! And someone else didn’t understand JavaScript well enough to read my code snippets. So here’s another attempt that acknowledges the cultural divide between functional and imperative programmers and tries to explain everything in English as well as in source code.

Monads are a design pattern that is most useful to functional programmers because their languages prefer to implement features as libraries rather than as syntax. In Haskell there are monads for input / output, side-effects and exceptions, with special syntax for using these to do imperative-style programming. Imperative programmers look at that and laugh: “Why go through all that effort? Just use an imperative language to start with.” Functional programmers also tend to write programs as—surprise!—applying the composite of a bunch of functions to some data. The basic operation for a monad is really just composition in a different guise, so functional programmers find this comforting. Functional programmers are also more likely to use “continuations”, which are something like extra-powerful exceptions that only work well when there is no global state; there’s a monad for making them easier to work with.

There are, however, some uses for monads that even imperative programmers find useful. Collections like sets and lists (with or without duplicates), parsers, promises, and membranes are a few examples, which I’ll explain below.


Many collection types support the following three operations:

  1. Map a function over the elements of the collection.
  2. Flatten a collection of collections into a collection.
  3. Create a collection from a single element.

A monad is a class that provides a generalized version of these three operations. When I say “generalized”, I mean that they satisfy some of the same rules that the collections’ operations satisfy in much the same way that multiplication of real numbers is associative and commutative just like addition of real numbers.

The way monads are usually used is by mapping a function and then flattening. If we have a list l and a function f that takes an element and returns a list, then we can say l.map(f).flatten() and get a new list.
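Plain arrays in modern JavaScript already ship with these three operations (map, flat, and the singleton [x]), so the pattern can be seen directly in the console:

```javascript
// A function from an element to a list.
var f = function (n) { return [n, n * 10]; };

[1, 2, 3].map(f);        // [[1, 10], [2, 20], [3, 30]]
[1, 2, 3].map(f).flat(); // [1, 10, 2, 20, 3, 30]
[1, 2, 3].flatMap(f);    // the same map-then-flatten in one call
```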


A parser is an object with a list of tokens that have already been parsed and the remaining input (usually a string) yet to be parsed.

var Parser = function (obj, tokens) {
  this.obj = obj;
  // If tokens are not provided, use the empty list.
  this.tokens = tokens || [];
};

It has three operations like the collections above.

  1. Mapping a function over a parser applies the function to the contained obj.

    Parser.prototype.map = function (f) {
      return new Parser(f(this.obj), this.tokens);
    };

  2. Flattening a parser of parsers concatenates the two lists of tokens, with the outer (earlier) tokens first.

    Parser.prototype.flatten = function () {
      return new Parser(this.obj.obj, this.tokens.concat(this.obj.tokens));
    };

    The definition above means that

    new Parser(new Parser(x, tokens1), tokens2).flatten()
    is equivalent to
    new Parser(x, tokens2.concat(tokens1)).

  3. We can create a parser from an element x: new Parser(x).

If we have a function f that takes a string, either parses out some tokens or throws an exception, and returns a parser with the tokens and the remainder of the string, then we can say

p.map(f).flatten()

and get a new parser. In what follows, I create a parser with the string “Hi there” and then expect a word, then some whitespace, then another word.

var makeMatcher = function (re) {
  return function (s) {
    var m = s.match(re);
    if (!m) { throw new Error('Expected to match ' + re); }
    return new Parser(m[2], [m[1]]);
  };
};

var alnum = makeMatcher(/^([a-zA-Z0-9]+)(.*)/);
var whitespace = makeMatcher(/^(\s+)(.*)/);

new Parser('Hi there')
  .map(alnum).flatten()
  .map(whitespace).flatten()
  .map(alnum).flatten();
// is equivalent to new Parser('', ['Hi', ' ', 'there']);


A promise is an object that represents the result of a computation that hasn’t finished yet; for example, if you send off a request over the network for a webpage, the promise would represent the text of the page. When the network transaction completes, the promise “resolves” and code that was waiting on the result gets executed.

  1. Mapping a function f over a promise for x results in a promise for f(x).
  2. When a promise represents remote data, a promise for a promise is still just remote data, so the two layers can be combined; see promise pipelining.
  3. We can create a resolved promise for any object that we already have.

If we have a function f that takes a value and returns a promise, then we can say

p.map(f).flatten()

and get a new promise. By stringing together actions like this, we can set up a computation that will execute properly as various network actions complete.
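In JavaScript, Promise.prototype.then plays the map-then-flatten role: when the callback returns a plain value it acts like map, and when it returns another promise the two layers collapse automatically.

```javascript
// A stand-in for a network request; it immediately resolves
// to the length of the "fetched" string.
var fetchLength = function (s) {
  return Promise.resolve(s.length);
};

Promise.resolve('Hi there')
  .then(fetchLength)                       // a promise for 8, not a promise for a promise
  .then(function (n) { return n + 1; })    // a promise for 9
  .then(function (n) { console.log(n); }); // logs 9
```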


An object-capability language is an object-oriented programming language where you can’t get a reference to an object unless you create it or someone calls one of your methods and passes a reference to it. A “membrane” is a design pattern that implements access control.

Say you have a folder object with a bunch of file objects. You want to grant someone temporary access to the folder; if you give them a reference to the folder directly, you can’t force them to forget it, so that won’t work for revocable access. Instead, suppose you create a “proxy” object with a switch that only you control; if the switch is on, the object forwards all of its method calls to the folder and returns the results. If it’s off, it does nothing. Now you can give the person the object and turn it off when their time is up.

The problem with this is that the folder object may return a direct reference to the file objects it contains; the person could lose access to the folder but could retain access to some of the files in it. They would not be able to have access to any new files placed in the folder, but would see updates to the files they retained access to. If that is not your intent, then the proxy object should hide any file references it returns behind similar new proxy objects and wire all the switches together. That way, when you turn off the switch for the folder, all the switches turn off for the files as well.

This design pattern of wrapping object references that come out of a proxy in their own proxies is a membrane.

  1. We can map a function f over a membrane for x and get a membrane for f(x).
  2. A membrane for a membrane for x can be collapsed into a single membrane that checks both switches.
  3. Given any object, we can wrap it in a membrane.

If we have a function f that takes a value and returns a membrane, then we can say

m.map(f).flatten()

and get a new membrane. By stringing together actions like this, we can set up arbitrary reference graphs, while still preserving the membrane creator’s right to turn off access to his objects.
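Here's a deliberately minimal sketch of a revocable membrane using the ES6 Proxy.revocable API; the function names are mine, and a production membrane would also have to wrap function arguments, this-bindings, and prototype chains carefully.

```javascript
var makeMembrane = function (target) {
  var revokes = [];
  var wrap = function (obj) {
    if (obj === null || (typeof obj !== 'object' && typeof obj !== 'function')) {
      return obj; // primitives pass through unwrapped
    }
    var p = Proxy.revocable(obj, {
      get: function (t, name) {
        return wrap(t[name]); // results come back inside the same membrane
      }
    });
    revokes.push(p.revoke);
    return p.proxy;
  };
  return {
    proxy: wrap(target),
    revoke: function () { revokes.forEach(function (r) { r(); }); }
  };
};

var folder = { file: { contents: 'secret' } };
var m = makeMembrane(folder);
m.proxy.file.contents; // 'secret', through two layers of proxy
m.revoke();
// m.proxy.file now throws: every switch in the membrane is off at once
```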


Monads implement the abstract operations map and flatten, and have an operation for creating a new monad out of any object. If you start with an instance m of a monad and you have a function f that takes an object and returns a monad, then you can say

m.map(f).flatten()

and get a new instance of a monad. You’ll often find scenarios where you repeat that process over and over.

Overloading JavaScript’s dot operator with direct proxies

Posted in Category theory, Math, Programming by Mike Stay on 2012 August 28

With the new ECMAScript 6 Proxy object that Firefox has implemented, you can make dot do pretty much anything you want. I made the dot operator in JavaScript behave like Haskell’s bind:

// I'll give a link to the code for lift() later,
// but one thing it does is wrap its input in brackets.
lift(6); // [6]
lift(6)[0]; // 6
lift(6).length; // 1

// lift(6) has no "upto" property
lift(6).upto; // undefined

// But when I define this global function, ...

// Takes an int n, returns an array of ints [0, ..., n-1].
var upto = function (x) {
  var r = [], i;
  for (i = 0; i < x; ++i) {
    r.push(i);
  }
  return r;
};

// ... now the object lift(6) suddenly has this property
lift(6).upto; // [0,1,2,3,4,5]
// and it automagically maps and flattens!
lift(6).upto.upto; // [0,0,1,0,1,2,0,1,2,3,0,1,2,3,4]
lift(6).upto.upto.length; // 15

To be clear, ECMAScript 6 has changed the API for Proxy since Firefox adopted it, but you can implement the new one on top of the old one. Tom van Cutsem has code for that.

I figured this out while working on a contracts library for JavaScript. Using the standard monadic style (e.g. jQuery), I wrote an implementation that doesn’t use proxies; it looked like this:

lift(6)._(upto)._(upto).value; // [0,0,1,0,1,2,0,1,2,3,0,1,2,3,4]

The lift function takes an input, wraps it in brackets, and stores it in the value property of an object. The other property of the object, the underscore method, takes a function as input, maps that over the current value and flattens it, then returns a new object of the same kind with that flattened array as the new value.

The direct proxy API lets us create a “handler” for a target object. The handler contains optional functions to call for all the different things you can do with an object: get or set properties, enumerate keys, freeze the object, and more. If the target is a function, we can also trap when it’s used as a constructor (i.e. new F()) or when it’s invoked.

In the proxy-based implementation, rather than create a wrapper object and set the value property to the target, I created a handler that intercepted only get requests for the target’s properties. If the target has the property already, it returns that; you can see in the example that the length property still works and you can look up individual elements of the array. If the target lacks the property, the handler looks up that property on the window object and does the appropriate map-and-flatten logic.

I’ve explained this in terms of the list monad, but it’s completely general. In the code below, mon is a monad object defined in the category theorist’s style, a monoid object in an endofunctor category, with multiplication and unit. On line 2, it asks for a type to specialize to. On line 9, it maps the named function over the current state, then applies the monad multiplication. On line 15, it applies the monad unit.

var kleisliProxy = function (mon) {
  return function (t) {
    var mont = mon(t);
    function M(mx) {
      return Proxy(mx, {
        get: function (target, name, receiver) {
          if (!(name in mx)) {
            if (!(name in window)) { return undefined; }
            return M(mont['*'](mon(window[name]).t(mx)));
          }
          return mx[name];
        }
      });
    }
    return function (x) { return M(mont[1](x)); };
  };
};

var lift = kleisliProxy(listMon)(int32);
lift(6).upto.upto; // === [0,0,1,0,1,2,0,1,2,3,0,1,2,3,4]