CS Jokes, in the Sense of Littlewood

I was reading a classic Math Overflow thread the other day about jokes “in the sense of Littlewood.” The original joke was the Euler product formula $$\sum_{n > 0} n^{-s} = \prod_{p\;\text{prime}} \left(1 - p^{-s}\right)^{-1},$$ but there are several other cute entries. As a CS major who has fallen under Klaus’s influence,1 one that I found particularly funny is the following: given a (not necessarily commutative) ring where $1-ab$ is invertible, $(1-ba)^{-1}$ also exists, and is given by the formula $$(1-ba)^{-1} = 1 + b(1-ab)^{-1}a.$$ The joke is to derive this using a neat trick with a power series, but as a comment points out, the result actually makes a lot of sense if you think of these things as regular expressions over $\Sigma = \{a,b\}$, or more generally as Kleene algebras. In a way, you can think of $(1-u)^{-1}$ as $u^{\star}$ (i.e. Kleene star): $$\frac{1}{1-u} = \sum_{k=0}^{\infty} u^k = u^{\star},$$ to play a little fast and loose with notation. Likewise, one can think of $1$ as the empty string $\varepsilon$, since they form the identities for multiplication and string concatenation respectively.2 Then the result is remarkably obvious: $$(ba)^{\star} = \varepsilon + b(ab)^{\star}a.$$ In my opinion, this is rather funny.
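For completeness, here is the power-series derivation behind the joke, treating $(1-u)^{-1}$ as the formal series $1 + u + u^2 + \cdots$ and regrouping:

```latex
\begin{align*}
(1-ba)^{-1} &= 1 + ba + baba + bababa + \cdots \\
            &= 1 + b\,(1 + ab + abab + \cdots)\,a \\
            &= 1 + b\,(1-ab)^{-1}\,a.
\end{align*}
```

And once the formula is guessed, one can verify it honestly, with no series in sight: $(1-ba)\bigl(1 + b(1-ab)^{-1}a\bigr) = 1 - ba + b\bigl((1-ab)(1-ab)^{-1}\bigr)a = 1 - ba + ba = 1$, and similarly on the other side.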

Types and Algebra

This “joke” exploits the “convergence” of a geometric series in a setting where such convergence has no obvious meaning: abstract algebra and regular expressions.3 Paul Halmos once remarked on the utility of exactly this trick in an expository article,4 calling the geometric series, along with quotient structures and eigenvalues (and more generally fixed points), an “element of mathematics” that keeps cropping up in odd places where one least expects it. In fact, he used the very same example with $(1-ba)^{-1}$.

In a similar spirit, I propose the following joke in the realm of type theory, based on things that I picked up from 98-317 (Hype for Types). We can impose some algebraic structure on types in programming languages; for instance, forming a tuple of types $\tau_1$ and $\tau_2$ is analogous to taking their Cartesian product $\tau_1 \times \tau_2$.5 I would assume that this is what inspires the Standard ML syntax for tuples:

type ('t1, 't2) product = 't1 * 't2

val _ : (int, real) product = (15, 1.50)

Similarly, given types $\tau_1$ and $\tau_2$, we can interpret the type “something from $\tau_1$ or something from $\tau_2$” as their tagged union, or with a more algebraic flavor, as their sum $\tau_1 + \tau_2$.6 In Standard ML:

datatype ('t1, 't2) sum = Left of 't1
                        | Right of 't2

val _ : (string, bool) sum = Left "foo"
val _ : (string, bool) sum = Right false

An intuitive way to think about this is that if $\tau_1$ is inhabited by $|\tau_1|$ values and $\tau_2$ is inhabited by $|\tau_2|$ values, then $\tau_1 \times \tau_2$ is inhabited by $|\tau_1| \times |\tau_2|$ values, and $\tau_1 + \tau_2$ is inhabited by $|\tau_1| + |\tau_2|$ values.
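As a quick sanity check (the value names here are my own), `bool` has 2 inhabitants, so `bool * bool` should have $2 \times 2 = 4$ and the sum of `bool` and `unit` should have $2 + 1 = 3$. Using the `sum` datatype defined above, we can enumerate them:

```sml
(* bool * bool: 2 * 2 = 4 inhabitants *)
val all_pairs : (bool * bool) list =
  [(true, true), (true, false), (false, true), (false, false)]

(* (bool, unit) sum: 2 + 1 = 3 inhabitants *)
val all_sums : (bool, unit) sum list =
  [Left true, Left false, Right ()]
```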

With these preliminaries aside, let’s try to express the type of a list algebraically. An $\alpha\;\mathsf{list}$ (i.e. a list whose elements are of type $\alpha$) is really just either an empty list, or a singleton list, or a pair of elements, or a triple of elements… essentially, a list is a sum of products:

datatype 'a list = Empty     of unit
                 | Single    of 'a
                 | Pair      of 'a * 'a
                 | Triple    of 'a * 'a * 'a
                 | Quadruple of 'a * 'a * 'a * 'a
                 | ...

And now using our representation of product types and sum types, we can do the very curious thing of thinking of a list as a power series: $$\alpha\;\mathsf{list} = 1 + \alpha + \alpha^2 + \alpha^3 + \alpha^4 + \cdots = \sum_{k=0}^{\infty} \alpha^k.$$ Now let’s step back for a moment and apply the usual trick of rewriting the geometric series in closed form: $$\alpha\;\mathsf{list} = \sum_{k=0}^{\infty} \alpha^k = \frac{1}{1-\alpha}.$$ Multiplying both sides by $1 - \alpha$ and rearranging, we see that this is the same as saying $$\alpha\;\mathsf{list} = 1 + \alpha \times (\alpha\;\mathsf{list}).$$ But let’s write this out in code:

datatype 'a list = Nil  of unit
                 | Cons of 'a * 'a list

This is precisely the canonical recursive definition of a list!
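To see the correspondence concretely, a value that the sum-of-products sketch above would tag as `Triple` becomes, under the recursive definition, three `Cons` cells ending in `Nil` (a small example using the datatype just defined; the value name is my own):

```sml
(* Triple (1, 2, 3) in the sum-of-products view becomes: *)
val xs : int list = Cons (1, Cons (2, Cons (3, Nil ())))
```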

As a side note, Scott Aaronson has a blog post containing a few funny proofs, although this sense of humor is slightly different from the one above. Actually, I think I was asked a few of the questions in his blog post in a complexity theory class!

  1. Klaus Sutner teaches computational discrete math at CMU, and he is notably enthusiastic about the role of abstract algebra in computation.↩︎

  2. Or more formally and with less notational abuse, $\mathcal{L}(1) = \{\varepsilon\}$. Notice that this plays well with the Kleene star idea: if $0$ is the regular expression $\varnothing$ (the identities for addition and set union respectively), we see that $(1-0)^{-1} = 1$, and likewise $\varnothing^\star = \varepsilon$.↩︎

  3. Though funnily enough, I recently stumbled across a blog post on the permanence of identities, which uses category theory and model theory to justify such manipulations.↩︎

  4. Halmos, P.R. “Does mathematics have elements?”. The Mathematical Intelligencer 3, 147-153 (1981).↩︎

  5. As an aside, this also explains the name of the unit type, which is inhabited by the single value (): it acts as the multiplicative unit.↩︎

  6. If you’ve studied category theory before, this is just the binary coproduct.↩︎