*posted by Darryl on 24 Mar 2015*

Why do we compute things? What's the point of computing? This seems like a silly question, but the answer to it is interesting and connects a lot of different things.

To bring the question into better focus, consider the factorial function, `fac`, which sends any number to its factorial. So of course `fac(3) = 6`, and `fac(4) = 24`, and `fac(5) = 120`, and so on. But if these equations hold, and they had better, then why do we compute `fac(5)` at all? The equations hold, so they're *equal*, and everything that's true of one is true of the other. So why bother computing?
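
In Haskell, a standard definition of `fac` looks like this (a sketch of mine; the post's `fac(5)` notation is language-neutral and doesn't depend on this particular definition):

```haskell
-- The factorial function, defined by recursion.
fac :: Integer -> Integer
fac 0 = 1
fac n = n * fac (n - 1)
```

With this definition, the programs `fac 5` and `120` are different expressions, even though they denote the same number.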

What it comes down to is the sense/reference distinction and obviousness. When we write programs, whether they're procedural or functional, interpreted or compiled, we're writing syntactic forms that represent things. The *program* `fac(5)` is not the same thing as the *number* `fac(5)`, and it's only at the number level that the equation `fac(5) = 120` holds. At the program level it doesn't hold at all.

A good way to think about it is that programs are, well, codes, or encodings, or "quotations" of a sort. Just as we might say that Clark Kent and Superman are the same, but deny that "Clark Kent" and "Superman" are the same, the numbers `fac(5)` and `120` are the same, but the programs `fac(5)` and `120` are not. This leads us some of the way to understanding why we compute: because computers can only manipulate programs (/data/representations), not actual numbers, people, or whatever else.

Let's look at a slightly more complicated example, though, one that will make it easier to see what computing does. Consider the following Haskell program. If you don't know Haskell, it shouldn't be too confusing.

```
if 1+1 == 2
then putStr "Ok"
else putStr "Uh oh"
```

Obviously, this program prints `Ok` to the terminal. But why is it obvious? Well, we're pretty smart humans; we can look at this and say, well, obviously the equation is true, so it'll take the first branch. But how about this program (which isn't quite Haskell, but go with me here):

```
if (the Riemann zeta function for negative even integers is 0)
then putStr "Ok"
else putStr "Uh oh"
```

You're probably less confident in saying it prints `Ok`, unless you're a mathematician and know that indeed the Riemann zeta function is 0 for negative even integers. Not all things are obvious, and this is especially true for computers, so if we want a computer to perform the actions we specify, we have to make it obvious just which actions the computer should perform.

Returning to the previous program, if the computer is to perform the specified action (and the whole program does indeed specify an action), the computer has to see what the action "obviously" is. But in its current form, it's not obvious which branch of the conditional the computer should execute. If only the condition were of the form `if True then ... else ...` or the form `if False then ... else ...`, it would be obvious! The computer needs to transform it into a more obvious form, using some built-in rules.

Fortunately, in Haskell, the equality function `(==)` has rules that specify how to turn it into either `True` or `False` when applied to numbers: given an expression `x == y`, if the numbers represented by `x` and `y` are the same (note: the *numbers represented*, not the programs that represent them), then turn `x == y` into `True`; otherwise turn it into `False`. So, easy. Are the numbers represented by `1+1` and `2` the same? Well... it's not obvious.

So again, we need a way to make it more obvious. And of course we can do that: we can convert `1+1` into some other form, consisting of just digits, and then it would be obvious what number it represents. So the computer uses the rules that Haskell provides, converting `1+1` into `2`, thus changing `1+1 == 2` into `2 == 2`. Now the equality function can look at this representation and see that, indeed, `2` and `2` represent the same number, so we should turn `2 == 2` into `True`.

So: we've turned `1+1 == 2` into `2 == 2` into `True`, which means we've turned this:

```
if 1+1 == 2
then putStr "Ok"
else putStr "Uh oh"
```

into this:

```
if True
then putStr "Ok"
else putStr "Uh oh"
```

And now it's obvious what the computer should do: to perform this action, the computer performs `putStr "Ok"`, which it can do trivially.
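
We can sketch this "rewrite until obvious" process as a toy interpreter. To be clear, this is a made-up `Expr` type, purely illustrative — it's not how GHC actually evaluates anything — but it captures the idea of rewriting a program into progressively more obvious forms until a fixed point is reached:

```haskell
-- A toy expression language: number literals, addition,
-- equality tests, and boolean literals.
data Expr
  = Num Integer
  | Add Expr Expr
  | Eq Expr Expr
  | BoolLit Bool
  deriving (Show, Eq)

-- One "make it more obvious" step, mirroring the rewrites above.
step :: Expr -> Expr
step (Add (Num x) (Num y)) = Num (x + y)        -- e.g. 1+1 becomes 2
step (Add x y)             = Add (step x) (step y)
step (Eq (Num x) (Num y))  = BoolLit (x == y)   -- e.g. 2 == 2 becomes True
step (Eq x y)              = Eq (step x) (step y)
step e                     = e

-- Keep stepping until nothing changes.
eval :: Expr -> Expr
eval e = let e' = step e in if e' == e then e else eval e'
```

Running `eval (Eq (Add (Num 1) (Num 1)) (Num 2))` walks through exactly the rewrites above: `1+1 == 2`, then `2 == 2`, then `True`.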

That's what computing is for: turning the non-obvious into the obvious, so our rather stupid computers can perform the actions we expect of them. And ultimately that means something roughly like making equality obvious: is this part of the program `True` or `False`, is this part `"foo"` or `"bar"`, is this `Lambda` or `Var`, or whatever else. Even at the lowest level of representation, in the CPU itself, it comes down to being able to ask: is the nth bit of something 0 or 1?

But... why would we write that program in the first place? I mean, it can only ever print `Ok`, barring cosmic rays hitting the CPU, so... why not just write `putStr "Ok"` instead of the entire thing? Of course, we *should* just write `putStr "Ok"`! Any programmer who wrote that larger program for real would be rather silly. So maybe there's no reason to write programs that need to compute?

Well... not quite. We humans like to think about things in various ways; we find it easier to use different ways of thinking for different tasks. Even though Clark Kent and Superman are the same person, it's probably easier to think of what his job is if you're asked "What is Clark Kent's job?" vs. "What is Superman's job?". Or we might find it easier to reason about the numbers 1, 2, 4, 8, ... by thinking of them as the powers of 2. Heck, we can't even think about all of the powers of 2 at once without thinking of them as such (or in some other way): there are infinitely many of them, so we can't just keep them all in mind at once; we need some finite concept.
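
In Haskell, for example, that finite concept can literally be a finite program denoting the infinite list (a small illustration of mine, leaning on Haskell's laziness):

```haskell
-- All powers of 2, as a lazy infinite list: a finite program
-- representing an infinite collection.
powersOfTwo :: [Integer]
powersOfTwo = map (2 ^) [0 ..]
```

We can never hold the whole list in memory, but `take 5 powersOfTwo` computes `[1,2,4,8,16]` just fine: the finite representation is made obvious only as far as we need it.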

But beyond just how we think about things, there's also the matter of incompleteness of thoughts and programs. Not every program is complete and finished, at least if by program we mean something that could run (perhaps in simulation) on an idealized computer like a Turing Machine. Real computers have IO, and it's the I part — the input — that represents a certain incompleteness of the program. A program that reads input is a program waiting to be finished, or at least made more complete, so that some action can happen.

So this is the other reason we compute: because we often want, or need, to write incomplete programs — programs that need user input. We do this because the input isn't known when we write the program (as with an audio player, which doesn't hardcode a single song but takes song files as input), or because a single program needs to run at different times and places with different users, and provide appropriately different answers (as with video games, web servers, etc.).

The only way to do that is to write programs which are incomplete, and don't obviously say which actions to perform, but which become obvious (or more obvious, anyway) when input is provided, by computing.
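
As a minimal sketch (the `respond` helper is just an illustrative name I've picked), here's the earlier conditional rewritten so that it genuinely needs computing, because its input arrives at run time:

```haskell
-- Which branch runs is not obvious until we know n.
respond :: Integer -> String
respond n = if n + 1 == 2 then "Ok" else "Uh oh"

-- The incomplete part: read a number from the user, then
-- the program can be computed down to an obvious action.
main :: IO ()
main = do
  n <- readLn
  putStr (respond n)
```

Until the user supplies `n`, neither we nor the computer can say which branch is taken; the input completes the program, and then computing makes the action obvious.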

This view of things — that computation is all about making things obvious (and especially, obviousness of equalities) — turns out to be related to quite a number of things. My personal favorite, because it's seemingly unrelated, is Deep Learning as applied to problems like speech recognition, object recognition, etc. In these applications, the goal is to classify an input — let's say an image — into some class like `Dog` or `Motorcycle` or whatever. Why? So that the equation `image == Dog` is obvious to compute, of course! Just relying on the raw image is not so easy, but if the image can be computed down to a class symbol via a DNN, then the equation becomes obvious. We don't usually think of these neural networks as being interpreters for images-as-programs, but that's *exactly* what they are!
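
As a very loose sketch of that idea (the `Image`, `Class`, and `classify` names here are placeholders of mine, and the "network" is a dummy threshold rule, not a real DNN):

```haskell
-- Hypothetical class symbols for the classifier's output.
data Class = Dog | Motorcycle | Other deriving (Show, Eq)

-- A stand-in for an image: flattened pixel intensities.
type Image = [Double]

-- A dummy stand-in for a trained network, for illustration only:
-- it computes a raw image down to an obvious class symbol.
classify :: Image -> Class
classify pixels = if sum pixels > 0.5 then Dog else Other
```

Comparing a raw `Image` against `Dog` directly is hopeless, but `classify image == Dog` reduces to something like `Dog == Dog`, which is obvious: the classifier is the interpreter.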

If you have comments or questions, get in touch. I'm @psygnisfive on Twitter, augur on freenode (in #languagengine and #haskell).