*posted by Darryl on 29 September 2015*

This post is somewhat tangential to what I've been doing with Language Engine, but since it's about programming language design, and since I'm designing a language for LE, I guess it's kind of related.

The GitHub repo for this blog post can be found here.

The topic of today's post is an answer to a question I posed on Twitter a few days ago about whether or not it would be possible to define evaluation in a way that was "style generic". That is to say, whether you use the standard lazy or eager evaluation functions that just recurse using the host language's mechanism, or a stack machine that builds up evaluation contexts, or whatever, the evaluated term you produce at the end is the same — the only difference is the style of the evaluator, not the meaning of evaluation. So is there a way to define *the* lazy evaluator, or *the* eager evaluator, or *the* lazy stack machine, and so on, and then choose one or the other as you want, for use in your language, independent of the language itself? The answer, I think, is yes, and so I'll present my solution here.

### What is evaluation, anyway?

A typical presentation of eager evaluation for the simply typed λ calculus with pairs and functions goes something like this:

```
data Term = Pair Term Term | Fst Term | Snd Term
          | Lam (Term -> Term) | App Term Term

eval :: Term -> Term
eval (Pair x y)
  = Pair (eval x) (eval y)
eval (Fst p)
  = case eval p of
      Pair x y -> x
      p'       -> Fst p'
eval (Snd p)
  = case eval p of
      Pair x y -> y
      p'       -> Snd p'
eval (Lam b)
  = Lam b
eval (App f x)
  = case eval f of
      Lam b -> eval (b (eval x))
      f'    -> App f' (eval x)
```
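As a quick sanity check, here is a minimal sketch of this evaluator running on a small term. The `Term` type and `eval` are repeated so the example stands alone; the helper names `idTm` and `demo` are mine, not from the original.

```haskell
-- The evaluator from the post, repeated so this compiles standalone.
data Term = Pair Term Term | Fst Term | Snd Term
          | Lam (Term -> Term) | App Term Term

eval :: Term -> Term
eval (Pair x y) = Pair (eval x) (eval y)
eval (Fst p)    = case eval p of
                    Pair x _ -> x
                    p'       -> Fst p'
eval (Snd p)    = case eval p of
                    Pair _ y -> y
                    p'       -> Snd p'
eval (Lam b)    = Lam b
eval (App f x)  = case eval f of
                    Lam b -> eval (b (eval x))
                    f'    -> App f' (eval x)

-- λx.x, encoded with the host language's functions (HOAS)
idTm :: Term
idTm = Lam (\x -> x)

-- fst ((λx.x) <λx.x, λx.x>): one App redex, then one Fst redex
demo :: Term
demo = Fst (App (Lam (\x -> x)) (Pair idTm idTm))
```

Evaluating `demo` first β-reduces the application, then projects the first component, leaving a `Lam`.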

If we look at this, we can observe that there are two things going on. The first is the recursive evaluations, and the second, which shows up only in the cases for `Fst`, `Snd`, and `App`, is the actual process of β reduction.

If we think about how the type theory is set up, our β rules are in fact defined completely in isolation from recursive evaluation. For pairs and functions, it goes something like

```
fst <M,N> ⇝ M
snd <M,N> ⇝ N
(λx.M) N ⇝ [N/x]M
```

These reductions are used to define a relation of normalization or evaluation, often written `M ⇓ M'`, that relates arbitrary terms to an equivalent term that is not β-reducible at its outermost position. Relations defined with `⇓` are called "big step semantics", and there are variants called "small step semantics", which basically just means stack machine evaluation. And there are still other more exotic forms of evaluation as well. But the β reductions never change. They get coded in different ways into these systems, but the reductions are the ultimate key to what it means to evaluate. Regardless of *style*, a function is an evaluation function just in case it reduces the relevant reducible (sub)terms.

To do this requires finding relevant redexes — subterms which β can apply to — and applying β, but it also means finding *possible* redexes — subterms which might become redexes after some reduction takes place. For example, the outermost term in `fst (fst <<1,2>,3>)` is not a redex, but if we reduce `fst <<1,2>,3>` to `<1,2>`, to produce `fst <1,2>`, we now have a redex. Notice here that this means that to do any normalization, we have to find these possible redexes and do at least *some particular* recursive evaluation.
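Written out step by step in the `⇝` notation from above, that example runs like this:

```
fst (fst <<1,2>,3>)    (outermost fst is not yet a redex)
⇝ fst <1,2>            (the inner reduction has created a redex)
⇝ 1
```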

Is there anything that tells you where this particular recursive evaluation occurs? Why yes! If we look at these things through the typed lens, we see that the terms that have to be recursively evaluated are precisely those terms which are the targets of elimination rules. Look at the elim rules for pairs and functions:

```
Γ ⊢ P : A * B
------------- *-Elim-1
Γ ⊢ fst P : A

Γ ⊢ P : A * B
------------- *-Elim-2
Γ ⊢ snd P : B

Γ ⊢ M : A -> B    Γ ⊢ N : A
--------------------------- ->-Elim
Γ ⊢ M N : B
```

For pairs, it's *pair* elimination precisely because the premise has a pair type operator that we target and get rid of in the conclusion. For functions, it's *function* elimination precisely because one of the premises has a function type operator that we target and get rid of in the conclusion. Let's call these targeted subterms — `P : A * B` and `M : A -> B` — the main subterms of the elims, and everything else (in this case just `N : A` from function elim) will be auxiliary subterms. We'll also call the subterms of intros (so pair elements) auxiliary as well, since they're not the targets of reductions, except that all scopal subterms (e.g. bodies of λs or branches of case expressions) will be treated as distinct, simply because evaluation under binders is something we might not want to do.

So then we can say this about evaluation: the main subterms are definitely evaluated, because we need to turn them into intros in order to β reduce. Every evaluator has this in common, whether it's lazy or eager, stack based or not. What differs is how they traverse the program tree, what they do with auxiliary terms, and so on. Lazy evaluators don't also evaluate aux terms, but eager ones do. Stack evaluators traverse the tree by building a doubly-typed zipper of some form, while the `eval`-style evaluator above does it by stealing traversal from the host language.

This provides us with the basic idea for generic evaluation: separate out the *reduction* task from the *recursive* tasks and let each evaluator define how to compose them.

### A simplified version for the simply typed LC

Before going fully generic, it might help to see how this works for the very simple case of the STLC shown above. We'll start by considering the representation for terms. An `Intro s b a` is an intro form (`Pair`, `Lam`), with parameters `s` for the type of scope arguments, `b` for scope bodies, and `a` for auxiliary terms. As noted, pairs have aux terms, but scopes are treated separately.

```
data Intro s b a
  = Pair a a
  | Lam (s -> b)
```

`Elim m a` is similarly the type of elim forms (`Fst`, `Snd`, and `App`), parameterized by the type `m` of main terms, and `a` again for aux terms.

```
data Elim m a
  = Fst m
  | Snd m
  | App m a
```

A `Term` is just either an `Intro` where all subterms can be `Term`s, or it's an `Elim` where all subterms can be `Term`s.

```
data Term
  = I (Intro Term Term Term)
  | E (Elim Term Term)
```
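To make the encoding concrete, here's a small sketch of how a term like `fst <λx.x, λx.x>` gets written with these types. The data declarations are repeated for self-containment; the names `idTm`, `pairTm`, and `fstTm` are mine, purely illustrative.

```haskell
data Intro s b a
  = Pair a a
  | Lam (s -> b)

data Elim m a
  = Fst m
  | Snd m
  | App m a

data Term
  = I (Intro Term Term Term)
  | E (Elim Term Term)

-- λx.x: an intro, wrapped with I
idTm :: Term
idTm = I (Lam (\x -> x))

-- <λx.x, λx.x>: another intro
pairTm :: Term
pairTm = I (Pair idTm idTm)

-- fst <λx.x, λx.x>: an elim, wrapped with E
fstTm :: Term
fstTm = E (Fst pairTm)
```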

We also ought to be able to map over both auxiliary and main positions.

```
class Aux f where
  mapAux :: (a -> a') -> f a -> f a'

instance Aux (Intro s b) where
  mapAux f (Pair x y) = Pair (f x) (f y)
  mapAux _ (Lam g)    = Lam g

instance Aux (Elim m) where
  mapAux _ (Fst p)   = Fst p
  mapAux _ (Snd p)   = Snd p
  mapAux f (App g x) = App g (f x)

mapMain :: (m -> m') -> Elim m a -> Elim m' a
mapMain f (Fst p)   = Fst (f p)
mapMain f (Snd p)   = Snd (f p)
mapMain f (App g x) = App (f g) x
```
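As a quick illustration of how the two maps carve up an elim, here is a sketch at arbitrary concrete types (the choice of `()`, `Int`, `Bool`, and `String` is mine, just for demonstration). `mapAux` touches only the aux position of an `App` and `mapMain` only the main position:

```haskell
-- Elim and its maps, repeated so the example compiles standalone.
data Elim m a
  = Fst m
  | Snd m
  | App m a
  deriving Show

class Aux f where
  mapAux :: (a -> a') -> f a -> f a'

instance Aux (Elim m) where
  mapAux _ (Fst p)   = Fst p
  mapAux _ (Snd p)   = Snd p
  mapAux f (App g x) = App g (f x)

mapMain :: (m -> m') -> Elim m a -> Elim m' a
mapMain f (Fst p)   = Fst (f p)
mapMain f (Snd p)   = Snd (f p)
mapMain f (App g x) = App (f g) x

-- mapAux changes only the argument position:
exAux :: Elim () String
exAux = mapAux show (App () (3 :: Int))   -- App () "3"

-- mapMain changes only the function position:
exMain :: Elim Bool String
exMain = mapMain not (App True "x")       -- App False "x"
```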

We could also map over scope body positions, but since this version won't do evaluation under binders, we can omit it. It should be obvious how to do that, however.

A `Redex a` is just an `Intro` in the main position of an `Elim` and `a`s everywhere else, as mentioned before.

```
type Redex a = Elim (Intro a a a) a
```

And we can reduce by converting a `Redex a` into an `a` in exactly the way we want from β reduction.

```
reduce :: Redex a -> a
reduce (Fst (Pair x _)) = x
reduce (Snd (Pair _ y)) = y
reduce (App (Lam f) x) = f x
```
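Because `reduce` is polymorphic, we can sketch its behavior at any element type at all, with none of the rest of the term language around. Here it is at `Int` (the definitions are repeated for self-containment; `exFst` and `exApp` are my names):

```haskell
data Intro s b a
  = Pair a a
  | Lam (s -> b)

data Elim m a
  = Fst m
  | Snd m
  | App m a

type Redex a = Elim (Intro a a a) a

-- β reduction, exactly as in the post; partial by design,
-- since only genuine redexes are Redex values we ever build
reduce :: Redex a -> a
reduce (Fst (Pair x _)) = x
reduce (Snd (Pair _ y)) = y
reduce (App (Lam f) x)  = f x

-- fst <1,2> ⇝ 1
exFst :: Int
exFst = reduce (Fst (Pair 1 2))

-- (λx. x+1) 41 ⇝ 42
exApp :: Int
exApp = reduce (App (Lam (+ 1)) 41)
```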

The polymorphism of this function serves to help ensure that the reduction really is a reduction. We can only extract an `a` by combining the pre-existing `a`s and scopes. This is exactly the notion of β reduction that type theory demands: transforming a complex term into a simpler one that does the same thing, and which must have already in some sense existed inside the complex term, but for arrangement.

If we now want to define lazy evaluation, it's relatively straightforward. We can turn an arbitrary `Term` into a constructor-headed `Intro Term Term Term`. If the term is already constructor-headed, then we're done. If it's not, we recursively evaluate the main term to produce a redex, reduce that, and continue evaluating. No evaluation of auxiliary terms happens at all.

```
lazyEval :: Term -> Intro Term Term Term
lazyEval (I i) = i
lazyEval (E e) = lazyEval (reduce (mapMain lazyEval e))
```
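One way to see the laziness concretely is to put a diverging term in an aux position and watch `lazyEval` never touch it. This sketch repeats the definitions so it stands alone; `omega` and `safe` are my names, not from the original.

```haskell
data Intro s b a = Pair a a | Lam (s -> b)
data Elim m a    = Fst m | Snd m | App m a
data Term        = I (Intro Term Term Term) | E (Elim Term Term)

mapMain :: (m -> m') -> Elim m a -> Elim m' a
mapMain f (Fst p)   = Fst (f p)
mapMain f (Snd p)   = Snd (f p)
mapMain f (App g x) = App (f g) x

type Redex a = Elim (Intro a a a) a

reduce :: Redex a -> a
reduce (Fst (Pair x _)) = x
reduce (Snd (Pair _ y)) = y
reduce (App (Lam f) x)  = f x

lazyEval :: Term -> Intro Term Term Term
lazyEval (I i) = i
lazyEval (E e) = lazyEval (reduce (mapMain lazyEval e))

-- Ω = (λx. x x) (λx. x x): loops under any evaluator that touches it
omega :: Term
omega = E (App w w) where w = I (Lam (\x -> E (App x x)))

-- fst <λx.x, Ω>: the diverging term sits in an aux position,
-- so lazy evaluation never looks at it
safe :: Term
safe = E (Fst (I (Pair (I (Lam (\x -> x))) omega)))
```

Running `lazyEval safe` terminates with a `Lam`, because only the main subterm (the pair) ever gets evaluated.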

On the other hand, we can similarly evaluate eagerly. For terms that are already constructor headed, we ought to evaluate their aux terms to make them normalized as well. And for other terms, we recursively evaluate the main subterm, then the aux terms, to produce a redex, then we continue evaluating that.

```
eagerEval :: Term -> Intro Term Term Term
eagerEval (I i) = mapAux (I . eagerEval) i
eagerEval (E e) = eagerEval (reduce (mapAux (I . eagerEval)
                                            (mapMain eagerEval e)))
```
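To contrast with the lazy case, here is a sketch where a redex sits in an aux position: `eagerEval` normalizes it inside the resulting pair, whereas `lazyEval` would hand it back untouched. Definitions are repeated so the example stands alone; `inner` and `outer` are my names.

```haskell
data Intro s b a = Pair a a | Lam (s -> b)
data Elim m a    = Fst m | Snd m | App m a
data Term        = I (Intro Term Term Term) | E (Elim Term Term)

class Aux f where
  mapAux :: (a -> a') -> f a -> f a'

instance Aux (Intro s b) where
  mapAux f (Pair x y) = Pair (f x) (f y)
  mapAux _ (Lam g)    = Lam g

instance Aux (Elim m) where
  mapAux _ (Fst p)   = Fst p
  mapAux _ (Snd p)   = Snd p
  mapAux f (App g x) = App g (f x)

mapMain :: (m -> m') -> Elim m a -> Elim m' a
mapMain f (Fst p)   = Fst (f p)
mapMain f (Snd p)   = Snd (f p)
mapMain f (App g x) = App (f g) x

type Redex a = Elim (Intro a a a) a

reduce :: Redex a -> a
reduce (Fst (Pair x _)) = x
reduce (Snd (Pair _ y)) = y
reduce (App (Lam f) x)  = f x

eagerEval :: Term -> Intro Term Term Term
eagerEval (I i) = mapAux (I . eagerEval) i
eagerEval (E e) = eagerEval (reduce (mapAux (I . eagerEval)
                                            (mapMain eagerEval e)))

-- inner = fst <λx.x, λx.x>: a redex
inner :: Term
inner = E (Fst (I (Pair idTm idTm))) where idTm = I (Lam (\x -> x))

-- outer = <λx.x, inner>: the redex sits in an aux position,
-- and eager evaluation reduces it in place
outer :: Term
outer = I (Pair (I (Lam (\x -> x))) inner)
```

Evaluating `outer` eagerly yields a pair whose second component has been reduced all the way to a constructor-headed `Lam`.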

Observe, however, that these two definitions make no mention at all of what the forms are that we have. Neither actually knows about pairs or functions; all of that is kept in `reduce` and the map functions. We can happily add new forms to `Intro` and `Elim`, such as sum types, and we only have to modify these functions. We never need to change the definition of `lazyEval` and `eagerEval`.

As a consequence, these two evaluators are generic over the choice of `Intro` and `Elim`, and we can abstract over these type constructors and their associated functions, and so I've done just that here. This version abstracts a bit farther, treating forms as just names, with the general shape of intros and elims reified into their own data types. It also has another type called a special form, which corresponds to forms other than intros or elims, such as `let`s and fixed point combinators. Both elims and special forms can compute, however, so they're joined together as reducibles. There are also four evaluators (the two shown here, plus lazy and eager stack machines), and the simply typed λ calculus, which you can note is quite short (ignoring the pattern synonyms, it's 10 lines of code!).

If you have comments or questions, get in touch. I'm @psygnisfive on Twitter, augur on freenode (in #languagengine and #haskell). Here's the HN thread if you prefer that mode, and also the Reddit thread.