
A nice summary of erights’ *Capability Myths Demolished* paper.

Capability Myths Demolished – Miller et al., 2003

Pretty much everyone is familiar with an ACL-based approach to security. Despite having been around for a very long time, the *capabilities* approach to security is less well-known. Today’s paper choice provides an excellent introduction to the capabilities model and how it compares to ACLs. Along the way we’ll learn about 7 fundamental properties of security systems, and which combinations of those are required to offer certain higher-level guarantees. Capabilities are central to the type system of the Pony language which we’ll be looking at tomorrow.

Let’s start out by looking at one of the fundamental differences between ACLs and capabilities, the *direction of the relationship* between subject and resource. Consider a classic access matrix such as the one below. Each row is a subject, and each column a resource. The entry in a given cell describes the permissions the subject has…



Given StringMap.js and atLeastFreeVarNames.js from SES, one can define the following:

```javascript
var s = function(f) {
  // Find free vars in f. (This depends, of course, on
  // Function.prototype.toString being unchanged.)
  var code = f.toString();
  var free = ses.atLeastFreeVarNames(code);
  // Construct code that evaluates to an environment object.
  var env = ["({"];
  for (var i = 0, len = free.length; i < len; ++i) {
    env.push('"');
    env.push(free[i]);
    env.push('":(function(){try{return eval("(');
    env.push(free[i]);
    env.push(')")}catch(_){return {}}})()');
    env.push(',');
  }
  env.pop();
  env.push("})");
  return "({code:" + JSON.stringify(code) + ",env:" + env.join("") + "})";
};

// See https://gist.github.com/Hoff97/9842228 or
// http://jsfiddle.net/7UYd4/1/
// for versions of stringify that handle cycles.
var t = function(x) {
  return '(' + JSON.stringify(x) + ')';
};
```

Then you can use these definitions to serialize inline definitions that only close over “stringifiable” objects or objects behind cut points (see deserialization below):

```javascript
var baz = {x: 1};
var serializedClosure = t(eval(s(
  function bar(foo) {
    console.log('hi');
    return ++(baz.x)+foo /*fiddle*/;
  }
)));
// serializedClosure ===
//   '({"code":"function bar(foo) { console.log('hi'); return ++(baz.x)+foo /*fiddle*/; }",
//     "env":{"function":{},"bar":{},"foo":{},"console":{},"log":{},"hi":{},"return":{},
//            "baz":{"x":1},"x":{},"fiddle":{}}})'
```

The string serializedClosure can then be stashed somewhere. When it’s time to deserialize, do the following:

```javascript
var d = function(closure, localBindings) {
  localBindings = localBindings || {};
  return function() {
    with (closure.env) {
      with (localBindings) {
        return eval("(" + closure.code + ")").apply(this, arguments);
      }
    }
  };
};

var closure = eval(serializedClosure);
// Hook up local values if you want them.
var deserializedFn = d(closure, {console: console});
deserializedFn("foo"); // Prints "2foo" to the console.
deserializedFn("bar"); // Prints "3bar" to the console.
```

If you want to store the updated state, just re-stringify the closure:

```javascript
var newSerializedClosure = t(closure);
// newSerializedClosure ===
//   '({"code":"function bar(foo) { console.log('hi'); return ++(baz.x)+foo /*fiddle*/; }",
//     "env":{"function":{},"bar":{},"foo":{},"console":{},"log":{},"hi":{},"return":{},
//            "baz":{"x":3},"x":{},"fiddle":{}}})'
// Note that baz.x is now 3.
```

As I said, a very ugly hack, but still might be useful somewhere.


This is a follow-up post, but not the promised sequel, to A 2-Categorical Approach to the Pi Calculus, where I’ll try to correct various errors I made and clarify some things.

First, lambda calculus was invented by Church to solve Hilbert’s decision problem, also known as the *Entscheidungsproblem*. The decision problem was third on a list of problems he presented at a conference in 1928, but was not, as I wrote last time, “Hilbert’s third problem”, which was third on a list of problems he laid out in 1900.

Second, the problem Greg Meredith and I were trying to address was how to prevent reduction under a “receive” prefix in the pi calculus, which is intimately connected with preventing reduction under a lambda in lambda calculus. All programming languages that I know of, whether eager or lazy, do not reduce under a lambda.
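To illustrate the point about not reducing under a lambda, here is a small JavaScript sketch of my own (not from the paper): defining a function never evaluates its body; only applying the function does.

```javascript
// Defining a function does not evaluate its body -- no mainstream
// language "reduces under the lambda":
var log = [];
var f = function (x) {
  log.push('body ran');  // not executed at definition time
  return x + 1;
};
// log is still [] here.
var result = f(41);      // applying f finally runs the body
// log is now ['body ran'] and result is 42
```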

There are two approaches in the literature to the semantics of lambda calculus. The first is *denotational semantics*, which was originally concerned with what function a term computes. This is where computability and type theory live. Denotational semantics treats alpha-beta-eta equivalence classes of lambda terms as *morphisms* in a category. The objects in the category are called “domains”, and are usually “CPOs”, a special kind of poset. Lambek and Scott used this approach to show that alpha-beta-eta equivalence classes of lambda terms with one free variable form a cartesian closed category where composition is given by substitution.

The type of a term describes the structure of its normal form. *Values* are terms that have no beta reduction; they’re either constants or lambda abstractions. The types in lambda calculus are, respectively, either base types or function types. (While Lambek and Scott introduced new term constructors for products, they’re not strictly necessary, because the lambda term $\lambda z.\,z\,a\,b$ can be used for the pair $(a, b)$, with projections $\lambda p.\,p\,(\lambda a.\lambda b.\,a)$ and $\lambda p.\,p\,(\lambda a.\lambda b.\,b)$.)

The second is *operational semantics*, which is more concerned with *how* a function is computed than *what* it computes. All of computational complexity theory and algorithmic analysis lives here, since we have to count the number of steps it takes to complete a computation. Operational semantics treats lambda terms as *objects* in a category and rewrite rules as morphisms. It is a very syntactical approach; the set of terms is an algebra of some endofunctor, usually presented in Backus-Naur Form. This set of terms is then equipped with some equivalence relations and reduction relations. For lambda calculus, we mod out terms by alpha, but not beta. The reduction relations often involve specifying a reduction context, which means that the rewrites can’t occur just anywhere in a term.

In pi calculus, terms don’t have a normal form, so we can’t define an equivalence on pi calculus terms by comparing their normal forms. Instead, we say two terms are equivalent if they behave the same in all contexts, *i.e.* they’re *bisimilar*. Typing becomes rather more complicated; pi calculus types describe the structure of the current term, and rewrites should do something weaker than preserve types on the nose. This suggests using a double category, but that’s for next time.

Seely suggested modeling rewrites with 2-morphisms in a 2-category and showed how beta and eta were lax adjoints. We’re suggesting a different, more operational way of modeling rewrites with 2-morphisms. Define a 2-category with

- one generating object for variables and one for terms,
- the term constructors as generating 1-morphisms,
- alpha equivalence as an identity 2-morphism, and
- beta reduction as a generating 2-morphism.

A *closed* term is one with no free variables, and the hom category consisting of closed lambda terms and beta reductions between them is a typical category you’d get from looking for the operational semantics of lambda calculus.

The 2-category is a fine semantics for the variant of lambda calculus where beta can apply anywhere within a term, but there are too many morphisms if you want to model the lambda calculus without reduction under a prefix. Given closed terms such that beta reduces to we can whisker beta by to get

To cut down the extra 2-morphisms, we need to model reduction contexts. The difference between the precontexts above and the contexts we need is the notion of the “topmost” context, where we can see enough of the surrounding term to determine that reduction is possible. To model a reduction context, we make a new 2-category by introducing a new generating 1-morphism

and say that a 1-morphism with signature that factors as a precontext followed by is an *-hole term context*. We also interpret beta with a 2-morphism between reduction contexts rather than between precontexts. The hom category is the operational semantics of lambda calculus without reduction under lambda.

In the next post, I’ll relate Seely’s 2-categorical approach and our 2-categorical approach by extending to double categories and using Melliès and Zeilberger’s notion of type refinement.


Greg Meredith and I have a paper at Higher-Dimensional Rewriting and Applications (HDRA) 2015 on modeling the asynchronous polyadic pi calculus with 2-categories. We avoid domain theory entirely and model the operational semantics directly; full abstraction is almost trivial. As a nice side-effect, we get a new tool for reasoning about consumption of resources during a computation.

It’s a small piece of a much larger project, which I’d like to describe here in a series of posts. This post will talk about lambda calculus for a few reasons. First, lambda calculus is simpler, but complex enough to illustrate one of our fundamental insights. Lambda calculus is to serial computation what pi calculus is to concurrent computation; lambda calculus talks about a single machine doing a computation, while pi calculus talks about machines communicating over a network with potentially random delays. There is at most one possible outcome for a computation in the lambda calculus, while there are many possible outcomes in a computation in the pi calculus. Both the lazy lambda calculus and the pi calculus, however, have as an integral part of their semantics the notion of *waiting* for a sub-computation to complete before moving on to another one. Second, the denotational semantics of lambda calculus in Set is well understood, as is its generalization to cartesian closed categories; this semantics is far simpler than the denotational semantics of pi calculus and serves as a good introduction. The operational semantics of lambda calculus is also simpler than that of pi calculus and there is previous work on modeling it using higher categories.

Alonzo Church invented the lambda calculus as part of his attack on Hilbert’s decision problem, also known as the *Entscheidungsproblem*, which asked for an algorithm to solve any mathematical problem. Church published his proof that no such algorithm exists in 1936. Turing invented his eponymous machines, also to solve the *Entscheidungsproblem*, and published his independent proof a few months after Church. When he discovered that Church had beaten him to it, Turing proved in 1937 that the two approaches were equivalent in power. Since Turing machines were much more “mechanical” than the lambda calculus, the development of computing machines relied far more on Turing’s approach, and it was only decades later that people started writing compilers for more friendly programming languages. I’ve heard it quipped that “the history of programming languages is the piecemeal rediscovery of the lambda calculus by computer scientists.”

The lambda calculus consists of a set of “terms” together with some relations on the terms that tell how to “run the program”. Terms are built up out of “term constructors”; in the lambda calculus there are three: one for variables, one for defining functions (Church denoted this operation with the Greek letter lambda, hence the name of the calculus), and one for applying those functions to inputs. I’ll talk about these constructors and the relations more below.

Church introduced the notion of “types” to avoid programs that never stop. Modern programming languages also use types to avoid programmer mistakes and encode properties about the program, like proving that secret data is inaccessible outside certain parts of the program. The “simply-typed” lambda calculus starts with a set of base types and takes the closure under the binary arrow operation (forming the function type $A \to B$ from types $A$ and $B$) to get a set of types. Each term is assigned a type; from this one can deduce the types of the variables used in the term. An assignment of types to variables is called a *typing context*.
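As a sketch (my own encoding, not anything from the post), the closure under the arrow can be generated as plain data: start with base types and build arrows from smaller types.

```javascript
// Types are either base types or arrow types built from two smaller types.
var base = function (name) { return { tag: 'base', name: name }; };
var arrow = function (dom, cod) { return { tag: 'arrow', dom: dom, cod: cod }; };

// A pretty-printer to see the structure of a type:
var show = function (t) {
  return t.tag === 'base' ? t.name
                          : '(' + show(t.dom) + ' -> ' + show(t.cod) + ')';
};

var A = base('A');
var printed = show(arrow(arrow(A, A), A)); // → "((A -> A) -> A)"
```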

The search for a semantics for variants of the lambda calculus has typically been concerned with finding sets or “domains” such that the interpretation of each lambda term is a function between domains. Scott worked out a domain $D_\infty$ such that the continuous functions from $D_\infty$ to itself are precisely the computable ones. Lambek and Scott generalized the category where we look for semantics from Set to arbitrary cartesian closed categories (CCCs).

Lambek and Scott constructed a CCC out of lambda terms; we call this category the *syntactical category*. Then a structure-preserving functor from the syntactical category to Set or some other CCC would provide the semantics. The syntactical category has types as objects and equivalence classes of certain terms as morphisms. A morphism in the syntactical category goes from a typing context to the type of the term.

John Baez has a set of lecture notes from Fall 2006 through Spring 2007 describing Lambek and Scott’s approach to the category theory of lambda calculus and generalizing it from cartesian closed categories to symmetric monoidal closed categories so it can apply to quantum computation as well: rather than taking a functor from the syntactical category into Set, we can take a functor into Hilb instead. He and I also have a “Rosetta stone” paper summarizing the ideas and connecting them with the corresponding generalization of the Curry-Howard isomorphism.

The Curry-Howard isomorphism says that types are to propositions as programs are to proofs. In practice, types are used in two different ways: one as propositions about *data* and the other as propositions about *code*. Programming languages like C, Java, Haskell, and even dynamically typed languages like JavaScript and Python use types to talk about propositions that data satisfies: is it a date or a name? In these languages, equivalence classes of programs constitute constructive proofs. Concurrent calculi are far more concerned about propositions that the code satisfies: can it reach a deadlocked state? In these languages, it is the rewrite rules taking one term to another that behave like proofs. Melliès and Zeilberger’s excellent paper “Functors are Type Refinement Systems” relates these two approaches to typing to each other.

Note that Lambek and Scott’s approach does not have the sets of terms or variables as objects! The algebra that defines the set of terms plays only a minor role in the category; there’s no morphism in the CCC, for instance, that takes a term $t$ and a variable $x$ to produce the term $\lambda x.t$. This failure to capture the structure of the term in the morphism wasn’t a big deal for lambda calculus because of “confluence” (see below), but it turns out to matter a lot more in calculi like Milner’s pi calculus that describe communicating over a network, where messages can be delayed and arrival times matter for the end result (consider, for instance, two people trying to buy online the last ticket to a concert).

The last few decades have seen domains becoming more and more complicated in order to try to “unerase” the information about the structure of terms that gets lost in the domain theory approach and recover the operational semantics. Fiore, Moggi, and Sangiorgi, Stark and Cattani, Stark, and Winskel all present domain models of the pi calculus that recursively involve the powerset in order to talk about all the possible futures for a term. Industry has never cared much about denotational semantics: the Java Virtual Machine is an operational semantics for the Java language.

Greg Meredith and I set out to model the operational semantics of the pi calculus directly in a higher category rather than using domain theory. An obvious first question is, “What about types?” I was particularly worried about how to relate this approach to the kind of thing Scott and Lambek did. Though it didn’t make it into the HDRA paper and the details won’t make it into this post, we found that we’re able to use the “type-refinement-as-a-functor” idea of Melliès and Zeilberger to show how the algebraic term-constructor functions relate to the morphisms in the syntactical category.

We’re hoping that this categorical approach to modeling process calculi will help with reasoning about practical situations where we want to compose calculi; for instance, we’d like to put a hundred pi calculus engines around the edges of a chip and some ambient calculus engines, which have nice features for managing the location of data, in the middle to distribute work among them.

The lambda calculus consists of a set of “terms” together with some relations on the terms. The set of terms is defined recursively, parametric in a countably infinite set of variables. The base terms are the variables: if $x$ is a variable, then $x$ is a term. Next, given any two terms $t$ and $t'$, we can apply one to the other to get $t\,t'$. We say that $t$ is in the *head position* of the application and $t'$ in the *tail position*. (When the associativity of application is unclear, we’ll also use parentheses around subterms.) Finally, we can abstract out a variable from a term: given a variable $x$ and a term $t$, we get a term $\lambda x.t$.

The term constructors define an algebra, a functor from Set to Set that takes any set of variables $V$ to the set of terms $T(V)$. The term constructors themselves become functions:

$$V \to T(V), \qquad T(V) \times T(V) \to T(V), \qquad V \times T(V) \to T(V).$$
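Concretely, the three constructors can be sketched as functions producing plain data (my encoding; the post works abstractly):

```javascript
// The three term constructors, as functions into the set of terms:
var Var = function (name) { return { tag: 'var', name: name }; };
var App = function (head, tail) { return { tag: 'app', head: head, tail: tail }; };
var Lam = function (name, body) { return { tag: 'lam', name: name, body: body }; };

// The self-application term λx.(x x):
var omega = Lam('x', App(Var('x'), Var('x')));
```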

Church described three relations on terms. The first relation, alpha, relates any two lambda abstractions that differ only in the variable name. This is exactly the same as when we consider a function $f(x) = x + 1$ to be identical to the function $f(y) = y + 1$. The third relation, eta, says that there’s no difference between a function $f$ and a “middle-man” function that gets an input and applies $f$ to it: $\lambda x.(f\,x) = f$. Both alpha and eta are equivalences.

The really important relation is the second one, “beta reduction”. In order to define beta reduction, we have to define the *free* variables of a term: a variable occurring by itself is free; the set of free variables in an application is the union of the free variables in its subterms; and the free variables in a lambda abstraction are the free variables of the subterm except for the abstracted variable.
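The free-variable computation can be sketched directly on a data encoding of terms (my encoding, with `tag` fields; not anything from the post):

```javascript
// Free variables: a variable is free by itself; application unions the
// subterms' free variables; abstraction removes the bound variable.
var freeVars = function (t) {
  switch (t.tag) {
    case 'var': return [t.name];
    case 'app': return freeVars(t.head).concat(freeVars(t.tail));
    case 'lam': return freeVars(t.body).filter(function (v) {
      return v !== t.name;
    });
  }
};

// λx.(x y) has exactly one free variable, y:
var fv = freeVars({ tag: 'lam', name: 'x',
                    body: { tag: 'app',
                            head: { tag: 'var', name: 'x' },
                            tail: { tag: 'var', name: 'y' } } }); // → ['y']
```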

*Beta reduction* says that when we have a lambda abstraction $\lambda x.t$ applied to a term $t'$, then we replace every free occurrence of $x$ in $t$ by $t'$:

$$(\lambda x.t)\,t' \Rightarrow_\beta t\{t'/x\},$$

where we read the right-hand side as “$t$ with $t'$ replacing $x$.” We see a similar replacement in action when we compose ordinary functions: in $g \circ f$, the input variable of $g$ gets replaced by the output of $f$.

We say a term has a normal form if there’s some sequence of beta reductions that leads to a term where no beta reduction is possible. When the beta rule applies in more than one place in a term, it doesn’t matter which one you choose to do first: any sequence of betas that leads to a normal form will lead to the same normal form. This property of beta reduction is called *confluence*. Confluence means that the order of performing various subcomputations doesn’t matter so long as they all finish: in an expression like $(1+2)\times(3+4)$, it doesn’t matter which addition you do first or whether you distribute the expressions over each other; the answer is the same.

“Running” a program in the lambda calculus is the process of computing the normal form by repeated application of beta reduction, and the normal form itself is the result of the computation. Confluence, however, does not mean that when there is more than one place we could apply beta reduction, we can choose any beta reduction and be guaranteed to reach a normal form. The following lambda term, customarily denoted $\omega$, takes an input and applies it to itself:

$$\omega = \lambda x.(x\,x).$$

If we apply $\omega$ to itself, then beta reduction produces the same term, customarily called $\Omega$:

$$\Omega = \omega\,\omega \Rightarrow_\beta \omega\,\omega = \Omega.$$

It’s an infinite loop! Now consider a lambda term that has $\Omega$ as a subterm: one that says, “Return the first element of the pair (identity function, $\Omega$)”. If it has an answer at all, the answer should be “the identity function”. The question of whether it has an answer becomes, “Do we try to calculate the elements of the pair before applying the projection to it?”

Many programming languages, like Java, C, JavaScript, Perl, Python, and Lisp are “eager”: they calculate the normal form of inputs to a function before calculating the result of the function on the inputs; the expression above, implemented in any of these languages, would be an infinite loop. Other languages, like Miranda, Lispkit, Lazy ML, and Haskell and its predecessor Orwell are “lazy” and only apply beta reduction to inputs when they are needed to complete the computation; in these languages, the result is the identity function. Abramsky wrote a 48-page paper about constructing a domain that captures the operational semantics of lazy lambda calculus.
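Since JavaScript is eager, the pair example above loops forever if written directly; wrapping each element in a thunk simulates the lazy behavior. This is a sketch of my own, not any particular language’s mechanism:

```javascript
var identity = function (x) { return x; };
var omega = function (x) { return x(x); };

// Eagerly building [identity, omega(omega)] would never terminate.
// Thunks delay each element until it's actually demanded:
var pair = [function () { return identity; },
            function () { return omega(omega); }];

// Projecting the first element forces only the first thunk;
// omega(omega) is never run:
var first = pair[0](); // the identity function
```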

The idea of representing operational semantics directly with higher categories originated with R. A. G. Seely, who suggested that beta reduction should be a 2-morphism; Barney Hilken and Tom Hirschowitz have also contributed to looking at lambda calculus from this perspective. In the “Rosetta stone” paper that John Baez and I wrote, we made an analogy between programs and Feynman diagrams. The analogy is precise as far as it goes, but it’s unsatisfactory in the sense that Feynman diagrams describe processes happening over time, while Lambek and Scott mod out by the process of computation that occurs over time. If we use 2-categories that explicitly model rewrites between terms, we get something that could potentially be interpreted with concepts from physics: types would become analogous to strings, terms would become analogous to space, and rewrites would happen over time.

The idea from the “algebra of terms” perspective is that we have one object for variables and one for terms, term constructors as 1-morphisms, and the nontrivial 2-morphisms generated by beta reduction. Seely showed that this approach works fine when you’re unconcerned with the context in which reduction can occur.

This approach, however, doesn’t work for lazy lambda calculus! Horizontal composition in a 2-category is a functor, so if a term $t$ reduces to a term $t'$, then by functoriality, $u\,t$ must reduce to $u\,t'$ for any term $u$—but this is forbidden in the lazy lambda calculus! Functoriality of horizontal composition is a “relativity principle” in the sense that reductions in one context are the same as reductions in any other context. In lazy programming languages, on the other hand, the “head” context is privileged: reductions only happen here. It’s somewhat like believing that measuring differences in temperature is like measuring differences in space, that only the difference is meaningful—and then discovering absolute zero. When beta reduction can happen anywhere in a term, there are *too many 2-morphisms* to model lazy lambda calculus.

In order to model this special context, we reify it: we add a special unary term constructor that marks contexts where reduction is allowed, then redefine beta reduction so that the term constructor behaves like a catalyst that enables the beta reduction to occur. This lets us cut down the set of 2-morphisms to exactly those that are allowed in the lazy lambda calculus; Greg and I did essentially the same thing in the pi calculus.

More concretely, we have two generating rewrite rules. The first propagates the reduction context to the head position in an application; the second is beta reduction *restricted to a reduction context*.

When we surround the example term from the previous section with a reduction context marker, we get the following sequence of reductions:

At the start, none of the subterms were of the right shape for beta reduction to apply. The first two reductions propagated the reduction context down to the projection in head position. At that point, the only reduction that could occur was at the application of the projection to the first element of the pair, and after that to the second element. At no point was $\Omega$ ever in a reduction context.

In order to run a program that does anything practical, you need a processor, time, memory, and perhaps disk space or a network connection or a display. All of these resources have a cost, and it would be nice to keep track of them. One side-effect of reifying the context is that we can use it as a resource.

The rewrite rule increases the number of occurrences of in a term while decreases the number. If we replace by the rule

then the number of occurrences of can never increase. By forming the term , we can bound the number of beta reductions that can occur in the computation of .
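The bookkeeping can be sketched operationally in JavaScript (my analogy, with a hypothetical `evalWithFuel` helper; this is not the post’s 2-categorical construction): thread a finite supply of “catalyst” through evaluation and spend one unit per reduction step.

```javascript
// Step a computation until it finishes or the fuel (catalyst) runs out.
var evalWithFuel = function (step, state, fuel) {
  while (fuel > 0) {
    var next = step(state);
    if (next.done) { return { value: next.value, fuel: fuel }; }
    state = next.state;
    fuel -= 1;
  }
  return { value: undefined, fuel: 0 }; // out of catalyst: evaluation stops
};

// Example: factorial, one "reduction" at a time.
var factStep = function (s) {
  return s.n === 0 ? { done: true, value: s.acc }
                   : { state: { n: s.n - 1, acc: s.acc * s.n } };
};

evalWithFuel(factStep, { n: 5, acc: 1 }, 100); // → { value: 120, fuel: 95 }
evalWithFuel(factStep, { n: 5, acc: 1 }, 3);   // runs out of fuel
```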

If we have a nullary constructor , then we can define and let the program dynamically decide whether to evaluate an expression eagerly or lazily.

In the pi calculus, we have the ability to run multiple processes at the same time; each reduction-context marker in that situation represents a core in a processor or a computer in a network.

These are just the first things that come to mind; we’re experimenting with variations.

We figured out how to model the operational semantics of a term calculus directly in a 2-category by requiring a catalyst to carry out a rewrite, which gave us full abstraction without needing a domain based on representing all the possible futures of a term. As a side-effect, it also gave us a new tool for modeling resource consumption in the process of computation. Though I haven’t explained how yet, there’s a nice connection between the “algebra-of-terms” approach, which uses objects for variables and terms, and Lambek and Scott’s approach, which uses types as objects; the connection uses Melliès and Zeilberger’s ideas about type refinement. Next time, I’ll talk about the pi calculus and types.


The usual definition of the derivative is

$$\frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.$$

Instead of translating $x$ by an infinitesimal amount $h$, we can scale $x$ by an infinitesimal amount $q$; these two definitions coincide in the limit:

$$\frac{df}{dx} = \lim_{q \to 1} \frac{f(qx) - f(x)}{qx - x}.$$

However, when people talk about the $q$-derivative, they usually mean the operator we get when we *don’t* take the limit and $q \neq 1$. It should probably be called the “$q$-difference”, but we’ll see that the form of the difference is so special that it deserves the exceptional name.

The $q$-derivative, as one would hope, behaves linearly:

$$D_q (af + bg) = a\,D_q f + b\,D_q g.$$

Even better, the $q$-derivative of a power of $x$ is separable into a coefficient that depends only on $q$ and a single smaller power of $x$:

$$D_q\, x^n = [n]_q\, x^{n-1},$$

where

$$[n]_q = \frac{q^n - 1}{q - 1} = 1 + q + q^2 + \cdots + q^{n-1}.$$

Clearly, as $q \to 1$, $[n]_q \to n$. Whereas $n$ counts the number of ways to insert a point into an ordered list of $n-1$ items, $[n]_q$ counts the number of ways to insert a linearly independent ray passing through the origin into an $(n-1)$-dimensional vector space over a field with $q$ elements with a given ordered basis.
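A quick numeric sanity check of the $q$-bracket (my snippet, not from the post):

```javascript
// [n]_q = 1 + q + q^2 + ... + q^(n-1), which equals (q^n - 1)/(q - 1) for q ≠ 1.
var qBracket = function (n, q) {
  var sum = 0;
  for (var k = 0; k < n; ++k) { sum += Math.pow(q, k); }
  return sum;
};

qBracket(3, 2); // → 7, also (2^3 - 1)/(2 - 1)
qBracket(4, 1); // → 4, recovering the ordinary n at q = 1
```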

The $q$-derivative even works when $q$ is an operator rather than a number. Polynomials work in any rig, so if $x$ is, say, a function instead of a number, $q$ could be the Fourier transform.

Let’s lift this to types. The derivative of a datatype is the type of its one-holed contexts, so we expect the $q$-derivative to have a similar interpretation. When we take $q$ and $x$ to be types, the $q$-derivative of a tuple is

$$D_q\, x^n = \sum_{k=0}^{n-1} (qx)^k\, x^{n-1-k}.$$

Each option factors into two parts: the ‘clowns’ are a power of $qx$ to the left of the hole, followed by the ‘jokers’, a power of $x$ after the hole. This type is the one we expect for the intermediate state when mapping a function over the tuple; any function $f \colon x \to qx$ can be lifted to such a function on tuples.

Similarly, the $q$-derivative of the list datatype $L(x)$ is $L(qx) \times L(x)$; that is, the ‘clowns’ form the list of the outputs of type $qx$ for the elements that have already been processed and the ‘jokers’ form the list of the elements yet to process.

When we abstract from thinking of $q$ as a number to thinking of it as an operator on ring elements, the corresponding action on types is to think of $q$ as a functor. In this case, $qx$ is not a pair type, but rather a parametric type $Q(X)$. One might, for example, consider mapping the function $x \mapsto 1/x$ over a list of real numbers. The resulting outputs will be real except when the input is zero, so we’ll want to adjoin an undefined value. Taking $Q(X) = X + 1$, the $q$-derivative of $L(\mathbb{R})$ is $L(\mathbb{R} + 1) \times L(\mathbb{R})$, consisting of the list of values that have already been processed, followed by the list of values yet to process.
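The intermediate states of mapping over a list can be enumerated directly, which makes the clowns/jokers reading concrete (a sketch of my own, reusing the post’s clown/joker names):

```javascript
// Every intermediate state of mapping f over a list splits into the
// outputs already produced (clowns) and the inputs not yet processed (jokers).
var mapStates = function (f, xs) {
  var states = [];
  for (var i = 0; i <= xs.length; ++i) {
    states.push({ clowns: xs.slice(0, i).map(f), jokers: xs.slice(i) });
  }
  return states;
};

var states = mapStates(function (x) { return x * x; }, [1, 2, 3]);
// e.g. the state after two elements is { clowns: [1, 4], jokers: [3] }
```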


Take the partial order whose objects are nonnegative integers and whose morphisms $m \to n$ mean “$m$ divides $n$”; the product is gcd and the coproduct is lcm. In this category, the terminal object is 0 and the initial object is 1, so the identity matrix looks like

$$\begin{pmatrix} 0 & 1 & \cdots & 1 \\ 1 & 0 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 0 \end{pmatrix}$$

and matrix multiplication is

$$(AB)_{ik} = \operatorname{lcm}_j\bigl(\gcd(A_{ij}, B_{jk})\bigr).$$
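Here’s a small sketch of this arithmetic in JavaScript (my code): lcm plays the role of addition (identity 1) and gcd the role of multiplication (identity 0, since gcd(0, n) = n).

```javascript
var gcd = function (a, b) { return b === 0 ? a : gcd(b, a % b); };
var lcm = function (a, b) { return (a === 0 || b === 0) ? 0 : a * b / gcd(a, b); };

// Matrix product in the (lcm, gcd) rig: "sum" over j with lcm,
// "multiply" entries with gcd.
var matMul = function (A, B) {
  return A.map(function (row, i) {
    return B[0].map(function (_, k) {
      return row.reduce(function (acc, aij, j) {
        return lcm(acc, gcd(aij, B[j][k]));
      }, 1); // 1 is the identity for lcm
    });
  });
};

// The identity matrix: 0 on the diagonal, 1 elsewhere.
var I = [[0, 1], [1, 0]];
var B = [[6, 10], [15, 4]];
matMul(I, B); // → [[6, 10], [15, 4]]
```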


We start with some term-generating functor parametric in a set of ground terms and a set of names .

Now specialize to a single ground term:

Now mod out by structural equivalence:

Let N be a set of names and let M be the set of name-equivalence classes (where the equivalence relation is yet to be defined—it’s part of the theory of names in which we’re still parametric).

Prequoting and predereference are an algebra and coalgebra of , respectively:

such that

Real quoting and dereference have to use , defined below.

Define . Does ? I think so; assuming it does, define

so name equivalence is structural equivalence; equivalence of prequoted predereferences is automatic by the definition above.

The fixed point gives us an isomorphism

We can define , because undecidability doesn’t come into play until we add operational semantics. It’s decidable whether two terms are structurally equivalent. Thus

is the identity, satisfying the condition, and

is the identity, which we get for free.

When we mod out by operational semantics (following the traditional approach rather than the 2-categorical one needed for pi calculus):

we have the quotient map

and a map

that picks a representative from the equivalence class.

It’s undecidable whether two terms are in the same operational equivalence class, so may not halt. However, it’s true by construction that

is the identity.

We can extend prequoting and predereference to quoting and dereference on by

and then

which is what we want for quoting and dereference. The other way around involves undecidability.


Monads are a design pattern that is most useful to functional programmers because their languages prefer to implement features as libraries rather than as syntax. In Haskell there are monads for input / output, side-effects and exceptions, with special syntax for using these to do imperative-style programming. Imperative programmers look at that and laugh: “Why go through all that effort? Just use an imperative language to start with.” Functional programmers also tend to write programs as—surprise!—applying the composite of a bunch of functions to some data. The basic operation for a monad is really just composition in a different guise, so functional programmers find this comforting. Functional programmers are also more likely to use “continuations”, which are something like extra-powerful exceptions that only work well when there is no global state; there’s a monad for making them easier to work with.

There are, however, some uses for monads that even imperative programmers find useful. Collections like sets and lists (with or without parentheses), parsers, promises, and membranes are a few examples, which I’ll explain below.

Many collection types support the following three operations:

- Map a function over the elements of the collection.
- Flatten a collection of collections into a collection.
- Create a collection from a single element.

A monad is a class that provides a generalized version of these three operations. When I say “generalized”, I mean that they satisfy some of the same rules that the collections’ operations satisfy in much the same way that multiplication of real numbers is associative and commutative just like addition of real numbers.

The way monads are usually used is by mapping a function and then flattening. If we have a function `f` that takes an element and returns a list, then we can say `myList.map(f).flatten()` and get a new list.
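For JavaScript arrays, the three operations look like the following (my function names; modern JavaScript spells flatten `Array.prototype.flat`):

```javascript
// The three list-monad operations, as standalone functions:
var map = function (xs, f) { return xs.map(f); };
var flatten = function (xss) { return [].concat.apply([], xss); };
var unit = function (x) { return [x]; };

// map-then-flatten: each element expands to a list, then the layers merge.
var duplicate = function (x) { return [x, x]; };
flatten(map([1, 2], duplicate)); // → [1, 1, 2, 2]
```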

A **parser** is an object with a list of tokens that have already been parsed and the remainder of the object (usually a string) to be parsed.

```javascript
var Parser = function (obj, tokens) {
  this.obj = obj;
  // If tokens are not provided, use the empty list.
  this.tokens = tokens || [];
};
```

It has three operations like the collections above.

- Mapping a function over a parser applies the function to the contained obj.

```javascript
Parser.prototype.map = function (f) {
  return new Parser(f(this.obj), this.tokens);
};
```

- Flattening a parser of parsers concatenates the lists of tokens, with the outer (earlier) tokens first.

```javascript
Parser.prototype.flatten = function () {
  return new Parser(this.obj.obj, this.tokens.concat(this.obj.tokens));
};
```

The definition above means that `new Parser(new Parser(x, tokens1), tokens2).flatten()` is equivalent to `new Parser(x, tokens2.concat(tokens1))`.

- We can create a parser from an element `x`: `new Parser(x)`.

If we have a function `f` that takes a string, either parses out some tokens or throws an exception, and returns a parser with the tokens and the remainder of the string, then we can say `myParser.map(f).flatten()` and get a new parser. In what follows, I create a parser with the string “Hi there” and then expect a word, then some whitespace, then another word.

```javascript
var makeMatcher = function (re) {
  return function (s) {
    var m = s.match(re);
    if (!m) { throw new Error('Expected to match ' + re); }
    return new Parser(m[2], [m[1]]);
  };
};

var alnum = makeMatcher(/^([a-zA-Z0-9]+)(.*)/);
var whitespace = makeMatcher(/^(\s+)(.*)/);

new Parser('Hi there')
    .map(alnum).flatten()
    .map(whitespace).flatten()
    .map(alnum).flatten();
// is equivalent to
new Parser('', ['Hi', ' ', 'there']);
```

A **promise** is an object that represents the result of a computation that hasn’t finished yet; for example, if you send off a request over the network for a webpage, the promise would represent the text of the page. When the network transaction completes, the promise “resolves” and code that was waiting on the result gets executed.

- Mapping a function `f` over a promise for `x` results in a promise for `f(x)`.
- When a promise represents remote data, a promise for a promise is still just remote data, so the two layers can be combined; see promise pipelining.
- We can create a resolved promise for any object that we already have.

If we have a function `f` that takes a value and returns a promise, then we can say `myPromise.map(f).flatten()` and get a new promise. By stringing together actions like this, we can set up a computation that will execute properly as various network actions complete.
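Native JavaScript promises, which arrived later, fold `map` and `flatten` into the single `then` method: if the callback returns a plain value, `then` acts like map; if it returns another promise, `then` flattens automatically. A sketch, where `fetchLength` is a hypothetical stand-in for a network call:

```javascript
// fetchLength pretends to fetch a string's length over the network.
var fetchLength = function (s) {
  return Promise.resolve(s.length);
};

Promise.resolve('Hi there')                  // create: a resolved promise
  .then(fetchLength)                         // map + flatten in one step
  .then(function (n) { console.log(n); });   // logs 8
```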

An **object-capability language** is an object-oriented programming language where you can’t get a reference to an object unless you create it or someone calls one of your methods and passes a reference to it. A “membrane” is a design pattern that implements access control.

Say you have a folder object with a bunch of file objects. You want to grant someone temporary access to the folder; if you give them a reference to the folder directly, you can’t force them to forget it, so that won’t work for revokable access. Instead, suppose you create a “proxy” object with a switch that only you control; if the switch is on, the object forwards all of its method calls to the folder and returns the results. If it’s off, it does nothing. Now you can give the person the object and turn it off when their time is up.

The problem with this is that the folder object may return a direct reference to the file objects it contains; the person could lose access to the folder but could retain access to some of the files in it. They would not be able to have access to any new files placed in the folder, but would see updates to the files they retained access to. If that is not your intent, then the proxy object should hide any file references it returns behind similar new proxy objects and wire all the switches together. That way, when you turn off the switch for the folder, all the switches turn off for the files as well.

This design pattern of wrapping object references that come out of a proxy in their own proxies is a **membrane**.

- We can map a function `f` over a membrane for `x` and get a membrane for `f(x)`.
- A membrane for a membrane for `x` can be collapsed into a single membrane that checks both switches.
- Given any object, we can wrap it in a membrane.

If we have a function `f` that takes a value and returns a membrane, then we can say `myMembrane.map(f).flatten()` and get a new membrane. By stringing together actions like this, we can set up arbitrary reference graphs, while still preserving the membrane creator’s right to turn off access to his objects.
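Here is a minimal sketch of the revocable-forwarder idea from the folder example, using a closure instead of a real proxy; `makeRevocable` and its shape are my own invention for illustration, and a full membrane would also wrap every object reference the calls return:

```javascript
// A single revocable forwarder: flipping the switch cuts off access.
var makeRevocable = function (target) {
  var enabled = true; // the switch only the creator controls
  return {
    proxy: {
      call: function (method, arg) {
        if (!enabled) { throw new Error('Access revoked'); }
        return target[method](arg);
      }
    },
    revoke: function () { enabled = false; }
  };
};

var folder = { read: function (name) { return 'contents of ' + name; } };
var r = makeRevocable(folder);
r.proxy.call('read', 'notes.txt'); // 'contents of notes.txt'
r.revoke();
// r.proxy.call('read', 'notes.txt') now throws
```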

Monads implement the abstract operations `map` and `flatten`, and have an operation for creating a new monad out of any object. If you start with an instance `m` of a monad and you have a function `f` that takes an object and returns a monad, then you can say `m.map(f).flatten()` and get a new instance of a monad. You’ll often find scenarios where you repeat that process over and over.

]]>

```javascript
// I'll give a link to the code for lift() later,
// but one thing it does is wrap its input in brackets.
lift(6);        // [6]
lift(6)[0];     // 6
lift(6).length; // 1

// lift(6) has no "upto" property
lift(6).upto;   // undefined

// But when I define this global function, ...
// Takes an int n, returns an array of ints [0, ..., n-1].
var upto = function (x) {
  var r = [], i;
  for (i = 0; i < x; ++i) { r.push(i); }
  return r;
};

// ... now the object lift(6) suddenly has this property
lift(6).upto;             // [0,1,2,3,4,5]
// and it automagically maps and flattens!
lift(6).upto.upto;        // [0,0,1,0,1,2,0,1,2,3,0,1,2,3,4]
lift(6).upto.upto.length; // 15
```

To be clear, ECMAScript 6 has changed the API for Proxy since Firefox adopted it, but you can implement the new one on top of the old one. Tom van Cutsem has code for that.

I figured this out while working on a contracts library for JavaScript. Using the standard monadic style (*e.g.* jQuery), I wrote an implementation that doesn’t use proxies; it looked like this:

```javascript
lift(6)._(upto)._(upto).value; // [0,0,1,0,1,2,0,1,2,3,0,1,2,3,4]
```

The `lift` function takes an input, wraps it in brackets, and stores it in the `value` property of an object. The other property of the object, the underscore method, takes a function as input, maps that over the current value and flattens it, then returns a new object of the same kind with that flattened array as the new value.
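Since the original code for that version isn't shown here, this is my reconstruction from the description above; the helper `wrap` is my own name:

```javascript
// lift wraps its input in brackets and stores it in .value;
// the underscore method maps f over the value, flattens, and re-wraps.
var wrap = function (xs) {
  return {
    value: xs,
    _: function (f) {
      var flattened = xs.map(f).reduce(function (a, b) {
        return a.concat(b);
      }, []);
      return wrap(flattened);
    }
  };
};
var lift = function (x) { return wrap([x]); };

// Takes an int n, returns an array of ints [0, ..., n-1].
var upto = function (n) {
  var r = [], i;
  for (i = 0; i < n; ++i) { r.push(i); }
  return r;
};

lift(6)._(upto)._(upto).value; // [0,0,1,0,1,2,0,1,2,3,0,1,2,3,4]
```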

The direct proxy API lets us create a “handler” for a target object. The handler contains optional functions to call for all the different things you can do with an object: get or set properties, enumerate keys, freeze the object, and more. If the target is a function, we can also trap when it’s used as a constructor (*i.e.* `new F()`) or when it’s invoked.

In the proxy-based implementation, rather than create a wrapper object and set the `value` property to the target, I created a handler that intercepted only get requests for the target’s properties. If the target has the property already, it returns that; you can see in the example that the length property still works and you can look up individual elements of the array. If the target lacks the property, the handler looks up that property on the window object and does the appropriate map-and-flatten logic.

I’ve explained this in terms of the list monad, but it’s completely general. In the code below, `mon` is a monad object defined in the category theorist’s style, a monoid object in an endofunctor category, with multiplication and unit. On line 2, it asks for a type to specialize to. On line 9, it maps the named function over the current state, then applies the monad multiplication. On line 15, it applies the monad unit.

```javascript
var kleisliProxy = function (mon) {
  return function (t) {
    var mont = mon(t);
    function M(mx) {
      return Proxy(mx, {
        get: function (target, name, receiver) {
          if (!(name in mx)) {
            if (!(name in window)) { return undefined; }
            return M(mont['*'](mon(window[name]).t(mx)));
          }
          return mx[name];
        }
      });
    }
    return function (x) { return M(mont[1](x)); };
  };
};

var lift = kleisliProxy(listMon)(int32);
lift(6).upto.upto; // === [0,0,1,0,1,2,0,1,2,3,0,1,2,3,4]
```

]]>

In which book was the first book listed? It could have been listed in itself, which is consistent with the claim that it lists all the books that contain their own titles. Or it could have been listed in the second book and not in itself, which would also be consistent.

The riddle is this: in which index was the second index book listed? If we suppose that the second book did not list itself, then it would be incomplete, since it is a book in the library that does not contain its title between the covers. If we suppose that it did list itself, then it would be inconsistent, since it is supposed to list only those books that do not contain their own titles. There is no way for the second index book to be both consistent and complete.

Georg Cantor was a mathematician who used this idea to show that there are different sizes of infinity! But before I go there, consider the Munduruku tribe of the Amazon, which has no words for specific numbers larger than five; how would a man with eight children tell if he has enough fruit to give one to each child? He would probably name each child and set aside a piece of fruit for them—that is, he would set up an isomorphism between the children and the fruit. Even though he cannot count either set, he knows that they are the same size.

We can do the same thing to compare infinite sets: even though we can’t count them, if we can show that there is an isomorphism between two infinite sets, we know they are the same size. Likewise, if we can prove that there is no possible isomorphism between the sets, then they must be different sizes.

So now suppose that we take the set of natural numbers {0, 1, 2, 3, …} and the set of positive even numbers {0, 2, 4, 6, …}. These sets both have infinitely many elements. Are they the same size? We can double every natural number and get an even one, and halve every positive even number and get a natural number. That’s an isomorphism, so they’re the same size.

How about natural numbers and pairs of natural numbers? Given two numbers like 32768 and 137, we can pad them to the same length and then interleave them: 3020716387; they come apart just as easily.
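The interleaving can be sketched in a few lines, treating the numbers as digit strings; `interleave` is a name I'm introducing for illustration:

```javascript
// Pair two natural numbers into one by padding to equal length
// and interleaving their decimal digits.
var interleave = function (a, b) {
  var s = String(a), t = String(b);
  while (s.length < t.length) { s = '0' + s; }
  while (t.length < s.length) { t = '0' + t; }
  var out = '';
  for (var i = 0; i < s.length; ++i) { out += s[i] + t[i]; }
  return Number(out);
};

interleave(32768, 137); // 3020716387
```

Reading off alternate digits undoes the pairing, which is why the two numbers “come apart just as easily.”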

Since we can take the first number to be the denominator of a nonnegative rational number and the second number to be the numerator, we also find that the size of the set of rationals is the same as the size of the set of natural numbers.

Cantor came up with isomorphisms for each of these sets and thought that perhaps all infinite sets were the same size. But then he tried to come up with a way to match up the natural numbers and infinite sequences of bits, like the sequence of all zeros:

0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...

or the sequence that’s all zeros except for the third one:

0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, ...

or the sequence whose nth bit is 1 if n is prime:

0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, ...

or sequences with no pattern at all:

0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, ...

He tried and tried, but couldn’t figure out a way to make them match up. Then he thought of an ingenious proof that it’s impossible! It uses the same insight as the library riddle above.

Suppose we try to match them up. We write in the left column a natural number and in the right column a binary sequence:

1: 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... 2: 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, ... 3: 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, ... 4: 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, ... 5: 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, ... 6: 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, ... ...

Consider the diagonal of the table above: 0, 0, 1, 1, 0, 1, … This sequence is number 6 above, and is like the first index book: the sixth bit in the sequence could be either 0 or 1, and it would be consistent.

Now consider the opposite of the diagonal, its “photographic negative” sequence: 1, 1, 0, 0, 1, 0, … This sequence is like the second index book: it cannot consistently appear anywhere in the table. To see that, imagine if we had already assigned this sequence to number 7. (The number 7 is arbitrary; any other number will work just as well.) Then the table would look like this:

1: 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... 2: 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, ... 3: 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, ... 4: 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, ... 5: 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, ... 6: 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, ... 7: 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, ...

But look at the seventh bit of sequence 6! It’s supposed to be the seventh bit along the diagonal, and it’s wrong. If we correct it, then the 7th bit of sequence 7 is wrong, since it’s supposed to be the opposite of the 7th bit of sequence 6.

This proof shows that there are too many sequences of bits to match up to natural numbers. No matter how we try to match them up, we can always find a contradiction by looking at the sequence defined by the opposite of the diagonal sequence. And as if two different sizes of infinity weren’t enough, Cantor was able to repeat this process and show that there are infinitely many sizes of infinity, each bigger than the last!
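The construction itself is mechanical; for any finite prefix of such a table it can be sketched in a couple of lines (`antiDiagonal` is a name I'm introducing for illustration):

```javascript
// Given rows of bits, flip the nth bit of the nth row:
// the result differs from every row in the table.
var antiDiagonal = function (table) {
  return table.map(function (row, n) { return 1 - row[n]; });
};

// The first six bits of the six sequences in the table above.
var table = [
  [0, 0, 0, 0, 0, 0],
  [0, 0, 1, 0, 0, 0],
  [0, 1, 1, 0, 1, 0],
  [0, 0, 1, 1, 1, 0],
  [1, 1, 1, 0, 0, 1],
  [0, 0, 1, 1, 0, 1]
];
antiDiagonal(table); // [1, 1, 0, 0, 1, 0]
```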

Many influential mathematicians didn’t like this idea, and attacked Cantor personally rather than trying to find fault with his proof. Religious philosophers misinterpreted his results and equated the idea of multiple infinities with pantheism! Confronted with so much antagonism, Cantor often fought with depression.

He persevered, however, and later in life was greatly praised for his work. The Royal Society gave him their highest honor, the Sylvester Medal; and no less a mathematician than David Hilbert exclaimed, “No one shall expel us from the paradise that Cantor has created.”

]]>

]]>

- sylleptic monoidal functors (of bicategories),
- braided monoidal transformations and
- monoidal modifications

is equivalent as a 2-category to the 2-category **SMCC** of

- symmetric monoidal closed categories,
- braided monoidal closed functors, and
- monoidal closed natural isomorphisms.

If we model **Th(SMCC)** in the compact closed bicategory **3Cob₂**, then we get

where I didn’t draw

- the right unitor
- the pentagon equation for the associator
- the triangle equations for the unitors
- the hexagon equations for the braiding
- the yanking for internal hom
- a,b,c,l,r composed with their formal inverses equals the identity

but I think they’re pretty obvious given this other stuff.

]]>

]]>

]]>

The stories are short; they contradict one another and the Book of Mormon account itself—but taken together, they make a satisfying whole. If you don’t want spoilers, stop reading here; I summarize my favorites below.

Some stories don’t seem to have a point. In one, Moroni has barely survived a plague—perhaps diphtheria—and returns to his home late in the spring. He’s told in a dream that he has to get gold to make plates, but knows he needs to plow and plant his fields. Trusting in a miracle, he goes up into the mountains to gather gold from alluvial streambeds using fleece to trap the flakes. He spends several weeks setting up traps; when he fears he can’t wait any longer to plant, he returns to find his fields plowed and planted, but also finds his thatch hut occupied by a Lamanite scouting party. That night, he creeps up to the hut, shouts a Lamanite insult and stabs a man through one of the gaps in the wall. In the confusion, the Lamanites kill each other. After burying the men, he returns to the mountain and hangs out the gold-laden fleece to dry. It’s never made clear who plowed the field; I like to think it was the Three Nephites.

There’s an amusing chapter in the Book of Mormon where Mormon repeats his own name twelve times in a single chapter, and five times in a single verse, all without mentioning himself once. Nephi, son of Helaman, writes about how his father named him after the titan-hero Nephi that came out of Jerusalem. In the Euphraneid, Moroni keeps up the tradition, playing the role of a bard singing war stories about Captain Moroni and his men.

My favorite was the retelling of the king-men plot. In this version, Amalickiah is almost vampiric: a landed lord of old blood, meting out extraordinarily harsh punishments in his jurisdiction. Both amputation and impaling were not uncommon, though any punishment could be avoided by paying a large enough fine; by paying a regular tribute, a wealthy man could be immune to the law. Amalickiah’s oath to drink Moroni’s blood was not unusual; he’d done the same to many other enemies, though where he did not have power he sought to gain it by intrigue rather than by force. Like Rasputin, he miraculously survived multiple assassination attempts. I thought it particularly fitting that Teancum played Van Helsing to Amalickiah’s Dracula, killing him with a stake through his heart.

The last story details how, after sixteen years of wandering, Moroni returns to Cumorah. He describes the lay of the land he spent so many months scouting, the remnants of the great battle he fought. He sees the bowl-shaped depression where Cumenihah’s battalion allowed themselves to be drawn out by the Lamanites using a feinted retreat. He comes to the stones of Cumorah’s fortifications and sights along them to the hidden mouth of the cave where he has stored the records; when he arrives, he finds the door open, the room empty and dark, the treasures gone. Moroni feels as though he is drowning, unable to breathe. Then a calm comes over him as he sees that this is not his Cumorah, but another; he is on a different continent, among the ruins of a different civilization that collapsed in a different great battle, their historian’s treasures plundered. His book yet lies hidden and safe, somewhere among the hills.

I said it was the last story, but here even the pseudepigrapha has dubious appendices. The first is told in third person, with no indication of who the narrator is. As it begins, Moroni is carrying the plates, struggling to reach the cave. Six men are in pursuit; they slip on scree. Moroni throws aside the vegetation masking the entrance and makes his way down a long narrow passage into a room. He strides past implements, vestments, precious metals and stones, drops the plates on a table, grabs Laban’s sword from the wall. He meets the pursuers halfway down the hall; they measure swords, then attack. Moroni smites the first and he falls dead; another advances and contends with him. This one also falls by his sword; a third then steps forth and meets the same fate; a fourth afterwards contends with him, but in the struggle with the fourth, Moroni, being exhausted, is killed. The remaining two tread on Moroni as they enter the cave; they slip on blood bathing the smooth rock. The cave is empty; they curse and rage. When they turn to leave, even Moroni’s sword is gone.

The final appendix is an alternate account of Joseph Smith getting plates from Moroni. It’s all wrong, but very familiar at the same time.

Joseph has recently acquired a seerstone made of polished quartz; to get some time alone, he takes a deer-trail through the woods. He reaches a widening of the trail where the canopy does not block the sunlight and he takes the stone from his pocket; it is late afternoon, and he starts a small fire by focusing the light with the lens. He’s alarmed to find that he’s unable to extinguish it and fears that he’ll start the forest on fire, but it doesn’t spread. He notices that the wood is not consumed, so immediately bends down to remove his shoes.

At this point, he’s seized from behind; despite Joseph’s fame as a grappler, his assailant was better, or at least good enough to keep Joseph from escaping. “Put out the fire!” the man commands. Joseph replies “I have tried; I cannot.” The assailant mutters some words in a language Joseph does not speak but recognizes, and he calls Moroni by name and tells Moroni to release him. Moroni refuses, citing his fear of seers. Joseph tells Moroni that he knows Moroni carries the second sign and that he should hand it over. At that, Moroni releases him. Joseph, knowing Moroni is still skittish, doesn’t move.

“I have the first sign already,” says Joseph, and tells Moroni to check his pocket. Moroni puts his hand in to touch the stone but quickly pulls his hand back as though burned. Moroni says of the plates, “They are so heavy.”

Only now does Joseph turn and look at Moroni. He’s about 5 feet 8 or 9 inches tall and heavy set; his face is large and hawk-like. He is dressed in a suit of brown woolen clothes, his hair and beard white, and has on his back a sort of knapsack with something in it, shaped like a book. He is miserable and worn. Joseph summons himself and commands Moroni to give it to him immediately, or “this very moment either you or I shall die.”

At this, Moroni hands over the plates; he is ecstatic to be rid of them. As Joseph receives them, the fire goes out, and then a woman steps out of the darkness and calls Joseph by name.

It is Sallie Chase, sister to Willard, and a seer. She casts a hex on them with her green glass, then takes the plates. Mocking Joseph, she takes the seerstone from his pocket and places it into silver frames next to her green one.

At this, a column of light appears from heaven and a personage appears; walls of fire spring up on either side of the trail and force her off it. She is forced to leave the plates and stones behind. Joseph is scolded, but allowed to keep the items. Moroni fades from view and the personage ascends to heaven.

I didn’t give the book five stars because there were parts that tended to drag on a bit, but there were plenty of fun tales, too. Overall well worth the read.

]]>

In the MonCat column, is the categorified version of the tensor product of monoids.

]]>

*For any set of jobs that must be done, there exists a fun choice function defined on *

This axiom asserts that one can always find the fun in any job that must be done; a theorem of Poppins deduces from this that all such jobs are games.

]]>

]]>

Better than Lou Kauffman’s knots,

Better than all of the string theory

Witten’s forgot,

Better than Feynman’s graffiti,

Better than Calabi-Yau,

Better than Pauli’s neutrino,

Better than mu and than tau,

Better than Curie, Poincaré,

And Niels and Albert at Solvay

Better than anything except being in love.

]]>

Better than lasers on mars,

Better than alien obelisks

Chock full of stars,

Better than blade runners dreaming,

Better than spice that must flow,

Better than River and reavers,

Better than lightsaber glow,

Better than “Dammit, I’m a doc!”

Or simply “fascinating” Spock,

Better than anything except being in love.

]]>

Can you guess how we made it?

]]>

Single spiral the other way:

Double spiral the other way:

]]>

A picture that contains a scaled-down copy of itself is periodic both in theta and in the logarithm of the radius.

[Edit] See also these better versions.

]]>

This image uses a logarithmic spiral to zoom through a factor of 2048 after one turn. There’s a little bit of misalignment between the largest scale and the smallest one, but I think it still turned out pretty well (click for a larger view):

]]>

We ran out of tape while making the inner cuboctahedron–one of the triangular sides is missing. But we came darn close! Compare:

]]>

]]>

So it is, and so it will be, for so it has been, time out of mind:

Into the darkness they go, the wise and the lovely. Crowned

With lilies and laurel they go: but I am not resigned.

Lovers and thinkers, into the earth with you.

Be one with the dull, the indiscriminate dust.

A fragment of what you felt, of what you knew,

A formula, a phrase remains – but the best is lost.

The answers quick and keen, the honest look, the laughter, the love –

They are gone. They have gone to feed the roses. Elegant and curled

Is the blossom. Fragrant is the blossom. I know. But I do not approve.

More precious was the light in your eyes than all the roses in the world.

Down, down, down into the darkness of the grave

Gently they go, the beautiful, the tender, the kind:

Quietly they go, the intelligent, the witty, the brave.

I know. But I do not approve. And I am not resigned.

– Edna St Vincent Millay

]]>

]]>

But it takes a special kind of nerd to have reset his trip odometer 132.1 miles ahead of time in anticipation of taking the picture.

]]>

]]>

]]>

William chose–who else?–Yoda to pilot the green ship.

]]>

]]>

In a similar way, large enough tetrahedra would tile the surface of a hypersphere. This paper identifies the eleven regular tilings of three-dimensional spaces and whether they’re spherical, Euclidean, or hyperbolic tilings, and then looks at the geometry of spacetime to see how it might be tiled.

The “cubic” tilings (where eight polyhedra meet around a vertex like cubes do in Euclidean space) are amenable to taking cross-sections; this tiling of hyperbolic space with dodecahedra

has a cross section with a tiling of the hyperbolic plane with pentagons:

]]>

]]>

]]>

]]>

With some time I could probably find a crystal pure enough to emit light.

]]>

Transcript:

Say you’re me and you just watched Vi Hart’s video on infinity elephants and you totally missed the joke about Mr. Tusks even though you read Dinosaur Comics all the time but you liked the bit about Apollonian gaskets, which don’t blow out in Battle Mountain like the one in your car did but rather on the way to the L1 point and then you need Richard Feynman to tell you why. You thought she was going to draw the tiniest camels going through the eye of a needle, but you suppose that would ruin the hyperbole in the parable, so the ellipsis was justified. Anyway, you decide to avoid circular reasoning and doodle rectangles instead, filling them up with squares.

Eventually you wonder which ones you can fill up with finitely many squares and which ones you need infinitely many for, and so you start with squares and build up some rectangles. One square can only make one rectangle, itself. Two squares can only make one rectangle, but it can be lying down or standing up so you decide to say they’re different. Three squares make four rectangles and four squares make eight rectangles, and then you start thinking about Vi Hart’s video on binary trees. So you put the numbers into a tree, but it looks kind of stern, so you add some fareys to cheer it up. Then you see that the height of a rectangle is just the sum of its neighbors’ heights, and similarly for the width. You see lots of nice patterns in the dimensions involving flipping things over or running them backwards, kind of like the Blues Brothers’ police car when it was being chased by the Neo Nazis or that V6 racecar that Johann sent to Frederick.

Now instead of breadth, you decide to go for depth. Making the rectangles very long or very tall is too boring, so you add one square each time, alternately making it longer and taller. 1 1 2 3 5 8 13 21… You get the Fibonacci numbers; the limiting ratio is the golden ratio, [1+sqrt(5)]/2 to 1. This rectangle is the worst at being approximated by repeated squares so it shows up in systems where repetition is bad, like the angle at which plant leaves grow so they overlap the least and gather the most sunlight or how sunflowers pack the most seeds into a flowerhead and Roger Penrose thinks the Fibonacci spirals in the microtubules in the neurons in your brain are doing quantum error correction.

You decide to look at other irrational numbers to see if they have any nice patterns. e does. pi doesn’t. square roots of positive integers give repeating palindromes! You wonder whether all palindromes occur and if not which of the lyrics to Weird Al’s song Bob are special that way. And maybe then you make up a palindrome with vi hart’s name in it and turn it into a square root.

sqrt(1770203383334463140868642687939525148769043583402360581400094929361780283347187467842099172837131164923233584044530)

Maybe you decide that you want to doodle some circles after all, so you start with this gasket and figure out where the circles touch the line. The numbers look very familiar. You wonder what the areas of the circles are and how the gasket relates to the modular group and Poincare’s half-plane model of the hyperbolic plane and wish you had time to just sit in math class and doodle…

]]>

]]>

]]>

]]>

]]>

]]>

Strike-slip fault and overthrust and syn and anticline…

We gaze upon creation where erosion makes it known,

And count the countless aeons in the banding of the stone.

Odd, long-vanished creatures and their tracks & shells are found;

Where truth has left its sketches on the slate below the ground.

The patient stone can speak, if we but listen when it talks.

Humans wrote the Bible; God wrote the rocks.

There are those who name the stars, who watch the sky by night,

Seeking out the darkest place, to better see the light.

Long ago, when torture broke the remnant of his will,

Galileo recanted, but the Earth is moving still.

High above the mountaintops, where only distance bars,

The truth has left its footprints in the dust between the stars.

We may watch and study or may shudder and deny,

Humans wrote the Bible; God wrote the sky.

By stem and root and branch we trace, by feather, fang and fur,

How the living things that are descend from things that were.

The moss, the kelp, the zebrafish, the very mice and flies,

These tiny, humble, wordless things–how shall they tell us lies?

We are kin to beasts; no other answer can we bring.

The truth has left its fingerprints on every living thing.

Remember, should you have to choose between them in the strife,

Humans wrote the Bible; God wrote life.

And we who listen to the stars, or walk the dusty grade,

Or break the very atoms down to see how they are made,

Or study cells, or living things, seek truth with open hand.

The profoundest act of worship is to try to understand.

Deep in flower and in flesh, in star and soil and seed,

The truth has left its living word for anyone to read.

So turn and look where best you think the story is unfurled.

Humans wrote the Bible; God wrote the world.

-Catherine Faber, The Word of God

]]>

Once, when the clock read noon, it traveled without hesitation in a straight path to my retina. Once it took another course, only to bend around a molecule of nitrogen and reach the same destination.

Once it traced the signature of a man no one remembers.

Once, at half past three, it decayed into a tiny spark and its abominable opposite image; their mutual horrified fascination drew them together, each a moth in the other’s flame. The last visible flash of light from their fiery consummation was indistinguishable from the one that spawned them.

Once, when the clock ticked in the silence just before dawn, the light decayed into two mirrored worlds, somewhat better than ours due to the fact that I was never born there. Both worlds were consumed by mirrored dragons before collapsing back into the chaos from which they arose; all that remained was an orange flash of light.

Once it traveled to a far distant galaxy, reflected off the waters of a billion worlds and witnessed the death of a thousand stars before returning to the small room in which we sat.

Once it transcribed a short story of Borges in his own cramped scrawl, the six versions he discarded, the corrupt Russian translation by Nabokov, and a version in which the second-to-last word was illegible.

Once it traveled every path, each in its time; once it became and destroyed every possible world. All these summed to form what was: I saw an orange flash, and in that moment, I was enlightened.

]]>

Kublai Khan died in 1294 and his successor Temur Khan was convinced to reinstate the astrologers. Despite this, the young mathematician Zhu Shijie took up the old Khan’s challenge in 1305. Zhu had already completed two enormously influential mathematical texts: *Introduction to Mathematical Studies*, published in 1299, and *True reflections of the four unknowns*, published in 1303. This latter work included a table of “the ancient method of powers”, now known as Pascal’s triangle, and Zhu used it extensively in his analysis of polynomials in up to four unknowns.

In turning to the analysis of divination, Zhu naturally focused his attention on the *I Ching*. The first step in performing an *I Ching* divination is casting coins or yarrow stalks to construct a series of hexagrams. In 1308, Zhu published his treatise on probability theory, *Path of the falling stone*. It included an analysis of the probability for generating each hexagram as well as betting strategies for several popular games of chance. Using his techniques, Zhu became quite wealthy and began to travel; it was during this period that he was exposed to the work of the mathematicians in northern China. In the preface to *True reflections*, Mo Ruo writes that “Zhu Shijie of Yan-shan became famous as a mathematician. He travelled widely for more than twenty years and the number of those who came to be taught by him increased each day.”

Zhu worked for nearly a decade on the subsequent problem, that of interpreting a series of hexagrams. Hexagrams themselves are generated one bit at a time by looking at remainders modulo four of random handfuls of yarrow stalks; the four outcomes either specify the next bit directly or in terms of the previous bit. These latter rules give *I Ching* its subtitle, *The Book of Changes*. For mystical reasons, Zhu asserted that the proper interpretation of a series of hexagrams should also be given by a set of changes, but for years he could find no reason to prefer one set of changes to any other. However, in 1316, Zhu wrote to Yang Hui:
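The casting step can be sketched as code. This is a loose reading of the rule just described; the specific remainder-to-line mapping and the 49-stalk handful are illustrative guesses of mine, not the historical yarrow-stalk procedure:

```python
import random

def cast_hexagram(rng):
    """Generate six lines, one bit at a time, from remainders mod four.

    Two of the four outcomes fix the bit directly; the other two define it
    in terms of the previous bit -- the 'changes' of the Book of Changes.
    (This mapping is an illustrative guess, not the historical procedure.)
    """
    lines, prev = [], 0
    for _ in range(6):
        r = rng.randrange(1, 50) % 4  # remainder of a random handful of 49 stalks
        if r == 0:
            bit = 0            # fixed yin
        elif r == 1:
            bit = 1            # fixed yang
        elif r == 2:
            bit = prev         # same as the previous line
        else:
            bit = 1 - prev     # the previous line, changed
        lines.append(bit)
        prev = bit
    return lines

hexagram = cast_hexagram(random.Random(0))
assert len(hexagram) == 6 and all(b in (0, 1) for b in hexagram)
```

Because two of the four outcomes depend on the previous line, the lines are not independent coin flips, which is what makes the probability analysis nontrivial.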

“I dreamed that I was summoned to the royal palace. As I stepped upon the threshold, the sun burst forth over the gilded tile; I was blinded and, overcome, I fell to my knees. I lifted my hand to shield my eyes from its brilliance, and the Emperor himself took it and raised me up. To my surprise, he changed his form as I watched; he became so much like me that I thought I was looking in a mirror.

“‘How can this be?’ I cried. He laughed and took the form of a phoenix; I fell back from the flames as he ascended to heaven, then sorrowed as he dove toward the Golden Water River, for the water would surely quench the bird. Yet before reaching the water, he took the form of an eel, dove into the river and swam to the bank; he wriggled ashore, then took the form of a seed, which sank into the earth and grew into a mighty tree. Finally he took his own form again and spoke to me: ‘I rule all things; things above the earth and in the earth and under the earth, land and sea and sky. I can rule all these because I rule myself.’

“I woke and wondered at the singularity of the vision; when my mind reeled in amazement and could stand no more, it retreated to the familiar problem of the tables of changes. It suddenly occurred to me that as the Emperor could take any form, there could be a table of changes that could take the form of any other. Once I had conceived the idea, the implementation was straightforward.”

The rest of the letter has been lost, but Yang Hui described the broad form of the changes in a letter to a friend; the *Imperial Changes* were a set of changes that we now recognize as a Turing-complete programming language, nearly seven hundred years before Turing. It was a type of register machine similar to Melzak’s model, where seeds were ‘planted’ in pits; the lists of hexagrams generated by the yarrow stalks were the programs, and the result of the computation was taken as the interpretation of the casting. Zhu recognized that some programs never stopped: some went into infinite loops, some grew without bound, and some behaved so erratically he couldn’t decide whether they would ever give an interpretation.
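A machine of the kind Yang Hui describes can be sketched as a Minsky-style counter machine: pits hold seeds, and each instruction plants or removes a seed and jumps. The instruction set and encoding here are my own illustration, not a reconstruction of the *Imperial Changes*:

```python
def run(program, registers, max_steps=10_000):
    """Run a tiny Minsky-style register machine.

    Instructions:
      ('inc', r, j)     -- add a seed to pit r, go to instruction j
      ('dec', r, j, k)  -- if pit r is nonempty, remove a seed and go to j,
                           otherwise go to k
      ('halt',)
    Returns the registers on halting, or None if max_steps is exhausted
    (the machine may loop forever; halting is undecidable in general).
    """
    pc, steps = 0, 0
    regs = dict(registers)
    while steps < max_steps:
        instr = program[pc]
        if instr[0] == 'halt':
            return regs
        if instr[0] == 'inc':
            _, r, j = instr
            regs[r] = regs.get(r, 0) + 1
            pc = j
        else:  # 'dec'
            _, r, j, k = instr
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                pc = j
            else:
                pc = k
        steps += 1
    return None

# Add pit 'b' into pit 'a': while b is nonempty, move a seed from b to a.
add = [('dec', 'b', 1, 2), ('inc', 'a', 0), ('halt',)]
assert run(add, {'a': 2, 'b': 3}) == {'a': 5, 'b': 0}

# A program that never stops: the step bound gives up and returns None.
loop = [('inc', 'a', 0)]
assert run(loop, {'a': 0}, max_steps=100) is None
```

The second program shows why a step bound is needed at all: whether an arbitrary program of this kind halts is undecidable.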

Given his fascination with probabilities, it was natural that Zhu would consider the probability that a string of hexagrams had an interpretation. We do not have Zhu’s reasoning, only an excerpt from his conclusion: “The probability that a list of hexagrams has an interpretation is a secret beyond the power of fate to reveal.” It may be that Zhu anticipated Chaitin’s proof of the algorithmic randomness of this probability as well.

All of Zhu’s works were lost soon after they were published; *True reflections* survived in a corrupted form through Korean (1433 AD) and Japanese (1658 AD) translations and was reintroduced to China only in the nineteenth century. One wonders what the world might have been like had the *Imperial Changes* been understood and exploited. We suppose it is a secret beyond the power of fate to reveal.


Thermometer (unitless temperature):

| units | quantity |
|---|---|
| [1] | inverse temperature (unitless) |
| [m] | y coordinate |
| [kg/s^2 K] | spring constant * temp unit conversion |
| [m] | how position changes with (inverse) temperature |
| [kg m/s^2 K] | force per Kelvin |
| [kg m^2/s^2 K] | stretching energy per Kelvin |
| [kg m^2/s^2 K] | potential energy per Kelvin |
| [kg m^2/s^2 K] | entropy |

Thermometer:

| units | quantity |
|---|---|
| [1/K] | inverse temperature |
| [m] | y coordinate |
| [kg/s^2 K^2 = bits/m^2 K] | how information density changes with temp |
| [m K] | how position changes with (inverse) temperature |
| [kg m/s^2 K = bits/m] | force per Kelvin |
| [kg m^2/s^2 = bits K] | stretching energy = change in stretching information with invtemp |
| [kg m^2/s^2 = bits K] | potential energy = change in potential information with invtemp |
| [bits] | entropy |
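The unit algebra in these tables can be checked mechanically. A minimal sketch, assuming a unit is just a map from base dimension (kg, m, s, K) to integer exponent; the `dim`/`mul`/`per` helpers are mine, not from the post:

```python
def dim(**exps):
    # A unit is a map {base dimension: integer exponent}; zeros are dropped.
    return {k: v for k, v in exps.items() if v != 0}

def mul(a, b):
    # Multiply two units by adding exponents.
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
        if out[k] == 0:
            del out[k]
    return out

def per(u):
    # Invert a unit (negate all exponents).
    return {k: -v for k, v in u.items()}

kg, m, s, K = dim(kg=1), dim(m=1), dim(s=1), dim(K=1)

spring_per_K = mul(kg, mul(per(mul(s, s)), per(K)))  # [kg/s^2 K]
force_per_K = mul(spring_per_K, m)                   # [kg m/s^2 K]
energy_per_K = mul(force_per_K, m)                   # [kg m^2/s^2 K]

# Entropy in the first table carries the units of energy per temperature:
energy = mul(kg, mul(mul(m, m), per(mul(s, s))))     # [kg m^2/s^2]
assert energy_per_K == mul(energy, per(K))
assert energy_per_K == {'kg': 1, 'm': 2, 's': -2, 'K': -1}
```

Each row of the table is one such product, so the whole column can be verified the same way.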

I assume that the dynamics of such a system would follow an extremal path, but is that a minimum-entropy path or a maximum?


amplitude distribution (wave function) ↔ population (row vector),

where … are the coefficients that, when normalized, maximize the inner product with …, and … is the complex conjugate of ….

The remaining correspondences:

- normalization
- transitions: stochastic ↔ unitary
- harmonic oscillator: many HOs at temperature ↔ one QHO evolving for time
- uniform distribution over states ↔ special distribution
- Gibbs distribution ↔ free evolution
- partition function = inner product with …
- path integrals = maximum entropy? = least action?
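The stochastic/unitary correspondence can be made concrete: a stochastic matrix preserves the total of a probability vector (a population), while a unitary matrix preserves the squared norm of an amplitude distribution. A minimal numeric sketch; the particular matrices are toy examples of mine:

```python
import math

# A 2x2 stochastic matrix: each column is a probability distribution.
S = [[0.9, 0.2],
     [0.1, 0.8]]

# A 2x2 unitary matrix (a Hadamard-like rotation): rows are orthonormal.
h = 1 / math.sqrt(2)
U = [[h,  h],
     [h, -h]]

def apply(M, v):
    # Matrix-vector product M v.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

p = [0.3, 0.7]    # population: a probability distribution, sums to 1
psi = [0.6, 0.8]  # amplitude distribution: 0.6^2 + 0.8^2 = 1

p2 = apply(S, p)
psi2 = apply(U, psi)

# Stochastic evolution preserves total probability (the 1-norm of a
# nonnegative vector); unitary evolution preserves the 2-norm.
assert abs(sum(p2) - 1.0) < 1e-12
assert abs(sum(a * a for a in psi2) - 1.0) < 1e-12
```

The rest of the dictionary follows the same pattern: replacing probabilities by amplitudes swaps 1-norm-preserving maps for 2-norm-preserving ones.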

