# Joachim Breitner's Homepage

## Blog

## Isabelle functions: Always total, sometimes undefined

Often, when I mention how things work in the interactive theorem prover [Isabelle/HOL] (in the following just “Isabelle”^{1}) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of *what is a function*, are functions *total* and what is the business with *undefined*. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians.

Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.

### HOL is a logic of total functions

If I have an Isabelle function `f :: a ⇒ b` between two types `a` and `b` (the function arrow in Isabelle is `⇒`, not `→`), then – by definition of what it means to be a function in HOL – whenever I have a value `x :: a`, the expression `f x` (i.e. `f` applied to `x`) *is* a value of type `b`. Therefore, and without exception, *every Isabelle function is total*.

In particular, it cannot be that `f x` does not exist for some `x :: a`. This is a first difference from Haskell, which does have partial functions like

```
spin :: Maybe Integer -> Bool
spin (Just n) = spin (Just (n+1))
```

Here, neither the expression `spin Nothing` nor the expression `spin (Just 42)` produces a value of type `Bool`: The former raises an exception (“incomplete pattern match”), the latter does not terminate. Confusingly, though, both expressions have type `Bool`.

Because every function is total, this confusion cannot arise in Isabelle: If an expression `e` has type `t`, then it *is* a value of type `t`. This trait is shared with other total systems, including Coq.

Did you notice the emphasis I put on the word “is” here, and how I deliberately did not write “evaluates to” or “returns”? This is because of another big source of confusion:

### Isabelle functions do not compute

We (i.e., functional programmers) stole the word “function” from mathematics and repurposed it^{2}. But the word “function”, in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind.

What is the difference?

- A function `a → b` in functional programming is an algorithm that, given a value of type `a`, calculates (returns, evaluates to) a value of type `b`.
- A function `a ⇒ b` in math (or Isabelle) associates with each value of type `a` a value of type `b`.

For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:

```
definition foo :: "(nat ⇒ real) ⇒ real" where
"foo seq = (if convergent seq then lim seq else 0)"
```

This assigns a real number to every sequence, but it does not *compute* it in any useful sense.

From this it follows that

### Isabelle functions are specified, not defined

Consider this function definition:

```
fun plus :: "nat ⇒ nat ⇒ nat" where
"plus 0 m = m"
| "plus (Suc n) m = Suc (plus n m)"
```

To a functional programmer, this reads

> `plus` is a function that analyses its first argument. If that is `0`, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one.

which is clearly a description of a computation.
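For comparison, this computational reading is (modulo a hypothetical unary `Nat` type standing in for Isabelle's `nat`) exactly a Haskell program:

```haskell
-- A Haskell sketch of the computational reading of plus;
-- Nat is a stand-in for Isabelle's nat.
data Nat = Zero | Suc Nat deriving (Eq, Show)

plus :: Nat -> Nat -> Nat
plus Zero    m = m              -- first equation: base case
plus (Suc n) m = Suc (plus n m) -- second equation: recurse on the predecessor
```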

But to Isabelle, the above reads

> `plus` is a binary function on natural numbers, and it satisfies the following two equations: …

And in fact, it is not so much Isabelle that reads it this way, but rather the `fun` command, which is external to the Isabelle logic. The `fun` command analyses the given equations, constructs a non-recursive definition of `plus` under the hood, passes *that* to Isabelle and then proves that the given equations hold for `plus`.

One interesting consequence of this is that different specifications can lead to the same function. In fact, if we defined `plus'` by recursing on the second argument, we would obtain the same function (i.e. `plus = plus'` is a theorem, and there would be no way of telling the two apart).

### Termination is a property of specifications, not functions

Because a function does not evaluate, it does not make sense to ask if it terminates. The question of termination arises *before* the function is defined: The `fun` command can only construct `plus` in a way that the equations hold if it passes a termination check – very much like `Fixpoint` in Coq.

But while the termination check of `Fixpoint` in Coq is a deep part of the basic logic, in Isabelle it is simply something that this particular command requires for its internal machinery to go through. At no point does a “termination proof of the function” exist as a theorem inside the logic. And other commands may have other means of defining a function that do not even require such a termination argument!

For example, a function specification that is tail-recursive can be turned into a function, even without a termination proof: The following definition describes a higher-order function that iterates its first argument `f` on the second argument `x` until it finds a fixpoint. It is completely polymorphic (the single quote in `'a` indicates that this is a type variable):

```
partial_function (tailrec)
  fixpoint :: "('a ⇒ 'a) ⇒ 'a ⇒ 'a"
where
  "fixpoint f x = (if f x = x then x else fixpoint f (f x))"
```

We can work with this definition just fine. For example, if we instantiate `f` with `(λx. x-1)`, we can prove that it will always return 0:

```
lemma "fixpoint (λ n . n - 1) (n::nat) = 0"
by (induction n) (auto simp add: fixpoint.simps)
```
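Read as a Haskell program (my transcription, with `max 0 (n - 1)` mirroring the truncated subtraction of Isabelle's `nat`), the same specification indeed computes 0 from any starting point:

```haskell
-- Direct transcription of the fixpoint specification into Haskell.
fixpoint :: Eq a => (a -> a) -> a -> a
fixpoint f x = if f x == x then x else fixpoint f (f x)

-- Truncated predecessor: on Isabelle's nat, 0 - 1 = 0,
-- so 0 is a fixed point of (λx. x - 1).
pred' :: Integer -> Integer
pred' n = max 0 (n - 1)
```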

Similarly, if we have a function that works within the option monad (i.e. `Maybe` in Haskell), its specification can always be turned into a function without an explicit termination proof – here one that calculates the Collatz sequence:

```
partial_function (option) collatz :: "nat ⇒ nat list option"
where "collatz n =
(if n = 1 then Some [n]
else if even n
then do { ns <- collatz (n div 2); Some (n # ns) }
else do { ns <- collatz (3 * n + 1); Some (n # ns)})"
```

Note that lists in Isabelle are finite (like in Coq, unlike in Haskell), so this function “returns” a list only if the Collatz sequence eventually reaches 1.
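The contrast to Haskell is instructive. The direct transcription below (mine) is a program that either produces `Just` a list or diverges; it can never observably return `Nothing`, whereas the Isabelle function above is total, and `None` is an ordinary value of its result type:

```haskell
-- Haskell transcription of the collatz specification. As a program,
-- it loops forever if the sequence never reaches 1; it never
-- actually returns Nothing.
collatz :: Integer -> Maybe [Integer]
collatz n
  | n == 1    = Just [n]
  | even n    = do ns <- collatz (n `div` 2); Just (n : ns)
  | otherwise = do ns <- collatz (3 * n + 1); Just (n : ns)
```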

I expect these definitions to make a Coq user very uneasy. How can `fixpoint` be a total function? What is `fixpoint (λn. n+1)`? What if we run `collatz n` for an `n` where the Collatz sequence does *not* reach 1?^{3} We will come back to that question after a little detour…

### HOL is a logic of non-empty types

Another big difference between Isabelle and Coq is that in Isabelle, *every type is inhabited*. Just like the totality of functions, this is a very fundamental fact about what HOL defines to be a type.

Isabelle gets away with that design because in Isabelle, we do not use types for propositions (like we do in Coq), so we do not need empty types to denote false propositions.

This design has an important consequence: It allows the existence of a polymorphic expression that inhabits any type, namely

`undefined :: 'a`

The naming of this term alone has caused a great deal of confusion for Isabelle beginners, or in communication with users of different systems, so I implore you to not read too much into the name. In fact, you will have a better time if you think of it as `arbitrary` or, even better, `unknown`.

Since `undefined` can be instantiated at any type, we can instantiate it for example at `bool`, and we can observe an important fact: `undefined` is not an *extra* value besides the “usual ones”. It is simply *some* value of that type, which is demonstrated in the following lemma:

`lemma "undefined = True ∨ undefined = False" by auto`

In fact, if the type has only one value (such as the unit type), then we know the value of `undefined` for sure:

`lemma "undefined = ()" by auto`

It is very handy to be able to produce an expression of any type, as we will see in the following.

### Partial functions are just underspecified functions

For example, it allows us to translate incomplete function specifications. Consider this definition, Isabelle’s equivalent of Haskell’s partial `fromJust` function:

```
fun fromSome :: "'a option ⇒ 'a" where
"fromSome (Some x) = x"
```

This definition is accepted by `fun` (albeit with a warning), and the generated function `fromSome` behaves exactly as specified: when applied to `Some x`, it is `x`. The term `fromSome None` is also a value of type `'a`, we just do not know which one it is, as the specification does not address that.

So `fromSome None` behaves just like `undefined` above, i.e. we can prove

`lemma "fromSome None = False ∨ fromSome None = True" by auto`

Here is a small exercise for you: Can you come up with an explanation for the following lemma?

```
fun constOrId :: "bool ⇒ bool" where
  "constOrId True = True"

lemma "constOrId = (λ_. True) ∨ constOrId = (λx. x)"
  by (metis (full_types) constOrId.simps)
```

Overall, this behavior makes sense if we remember that function “definitions” in Isabelle are not really definitions, but rather specifications. And a partial function “definition” is simply an underspecification. The resulting function is simply any function that fulfills the specification, and the two lemmas above underline that observation.

### Nonterminating functions are also just underspecified

Let us return to the puzzle posed by `fixpoint` above. Clearly, the function – seen as a functional program – is not total: When passed the argument `(λn. n + 1)` or `(λb. ¬b)` it will loop forever trying to find a fixed point.

But Isabelle functions are not functional programs, and the definitions are just specifications. What does the specification say about the case when `f` has no fixed point? It states that the equation `fixpoint f x = fixpoint f (f x)` holds. And this equation has a solution, for example `fixpoint f _ = undefined`.

Or more concretely: The specification of the `fixpoint` function states that `fixpoint (λb. ¬b) True = fixpoint (λb. ¬b) False` has to hold, but it does not specify which particular value (`True` or `False`) it should denote – any is fine.

### Not all function specifications are ok

At this point you might wonder: Can I just specify any equations for a function `f` and get a function out of that? But rest assured: That is not the case. For example, no Isabelle command allows you to define a function `bogus :: () ⇒ nat` with the equation `bogus () = Suc (bogus ())`, because this equation does not have a solution.

We can actually prove that such a function cannot exist:

`lemma no_bogus: "∄ bogus. bogus () = Suc (bogus ())" by simp`

(Of course, `not_bogus () = not_bogus ()` is just fine…)

### You cannot reason about partiality in Isabelle

We have seen that there are many ways to define functions that one might consider “partial”. Given a function, can we prove that it is not “partial” in that sense?

Unfortunately, but unavoidably, no: Since `undefined` is not a separate, recognizable value, but rather simply an unknown one, there is no way of stating that “a function result is not specified”.

Here is an example that demonstrates this: Two “partial” functions (one with not all cases specified, the other one with a self-referential specification) are indistinguishable from the total variant:

```
fun partial1 :: "bool ⇒ unit" where
  "partial1 True = ()"

partial_function (tailrec) partial2 :: "bool ⇒ unit" where
  "partial2 b = partial2 b"

fun total :: "bool ⇒ unit" where
  "total True = ()"
| "total False = ()"

lemma "partial1 = total ∧ partial2 = total" by auto
```

If you really *do* want to reason about partiality of functional programs in Isabelle, you should consider implementing them not as plain HOL functions, but rather use HOLCF, where you can give equational specifications of functional programs and obtain *continuous functions* between *domains*. In that setting, `⊥ ≠ ()` and `partial2 = ⊥ ≠ total`. We have done that to verify some of HLint’s equations.

### You can still compute with Isabelle functions

I hope by this point, I have not scared away anyone who wants to use Isabelle for functional programming, and in fact, you can use it for that. If the equations that you pass to `fun` are a reasonable definition for a function (in the programming sense), then these equations, used as rewriting rules, will allow you to “compute” that function quite like you would in Coq or Haskell.

Moreover, Isabelle supports code extraction: You can take the equations of your Isabelle functions and have them exported into OCaml, Haskell, Scala or Standard ML. See Concon for a conference management system with confidentiality verified in Isabelle.

While these usually are the equations you defined the function with, they do not have to be: You can declare other proved equations to be used for code extraction, e.g. to refine your elegant definitions into performant ones.

Like with code extraction from Coq to, say, Haskell, the adequacy of the translation rests on a “moral reasoning” foundation. Unlike extraction from Coq, where you have an (unformalized) guarantee that the resulting Haskell code is terminating, you do not get that guarantee from Isabelle. Conversely, this allows you to reason about and extract non-terminating programs, like `fixpoint`, which is not possible in Coq.

There is currently ongoing work on verified code generation, where the code equations are reflected into a deep embedding of HOL in Isabelle that would allow explicit termination proofs.

### Conclusion

We have seen how in Isabelle, *every function is total*. Function declarations have equations, but these do not *define* the function in a computational sense, but rather *specify* it. Because in HOL there are no empty types, many specifications that appear partial (incomplete patterns, non-terminating recursion) have solutions in the space of total functions. Partiality in the specification is no longer visible in the final product.

### PS: Axiom `undefined` in Coq

*This section is speculative, and an invitation for discussion.*

Coq already distinguishes between types used in programs (`Set`) and types used in proofs (`Prop`).

Could Coq ensure that every `t : Set` is non-empty? I imagine this would require additional checks in the `Inductive` command, similar to the checks that the Isabelle command `datatype` has to perform^{4}, and it would disallow `Empty_set`.

If so, then it would be sound to add the following axiom

`Axiom undefined : forall (a : Set), a.`

wouldn’t it? This axiom does not have any computational meaning, but that seems to be ok for optional Coq axioms, like classical reasoning or function extensionality.

With this in place, how much of what I describe above about function definitions in Isabelle could now be done soundly in Coq? Certainly pattern matches would not have to be complete and could sport an implicit case `_ ⇒ undefined`. Would it “help” with non-obviously terminating functions? Would it allow a Coq command `Tailrecursive` that accepts any tail-recursive function without a termination check?

Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.↩

Let me know if you find such an *n*. Besides *n* = 0.↩

Like `fun`, the constructions by `datatype` are not part of the logic, but create a type definition from more primitive notions that is isomorphic to the specified data type.↩

## e.g. in TeX

When I learned TeX, I was told to not write `e.g. something`, because TeX would think the period after the “g” ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write `e.g.\␣`.

Years later, I learned from a convincing, but since forgotten source, that in fact `e.g.\@` is the proper thing to write. I vaguely remember that `e.g.\␣` supposedly affected the inter-word space in some unwanted way. So I did that for many years.

Until I recently was called out for doing it wrong: in fact `e.g.\␣` is the proper way. This was supported by a StackExchange answer written by a LaTeX authority and backed by a reference to documentation. The same question has, however, another answer by another TeX authority, backed by an analysis of the implementation, which concludes that `e.g.\@` is proper.

What now? I guess I just have to find out myself.
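For reference, the three spellings under comparison are roughly these (my reconstruction of the test input; the comments summarise each camp's claim, not settled fact):

```latex
e.g. something   % bare period: TeX may treat it as ending a sentence
e.g.\ something  % control space: an explicit ordinary inter-word space
e.g.\@ something % \@: said to reset the space factor after the period
```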

The above image shows three variants: The obviously broken version with `e.g.`, and the two contesting variants to fix it. Looks like they yield equal results!

So maybe the difference lies in how `\@` and `\␣` react when the line length changes, and the word wrapping requires differences in the inter-word spacing. Will there be differences? Let’s see:

I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?

Again, I see the same behaviour.

**Conclusion**: It does not matter, but `e.g.\␣` is less hassle when using lhs2tex than `e.g.\@` (which has to be escaped as `e.g.\@@`), so the winner is `e.g.\␣`!

(Unless you put it in a macro, then `\@` might be preferable, and it is still needed between a capital letter and a sentence period.)

## Less parentheses

Yesterday, at the Haskell Implementers Workshop 2017 in Oxford, I gave a lightning talk titled ”syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell.

The talk caused quite some heated discussions, and since the Internet likes heated discussions, I will happily share these ideas with you.

### Context aka. Sections

This is probably the most relevant of the three proposals. Consider a bunch of related functions, say `analyseExpr` and `analyseAlt`, like these:

```
analyseExpr :: Expr -> Expr
analyseExpr (Var v) = change v
analyseExpr (App e1 e2) =
  App (analyseExpr e1) (analyseExpr e2)
analyseExpr (Lam v e) = Lam v (analyseExpr e)
analyseExpr (Case scrut alts) =
  Case (analyseExpr scrut) (analyseAlt <$> alts)

analyseAlt :: Alt -> Alt
analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)
```

You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the `Var` case. You thus add a parameter to all these functions, and hence an argument to every call:

```
type Flag = Bool

analyseExpr :: Flag -> Expr -> Expr
analyseExpr flag (Var v) = if flag then change1 v else change2 v
analyseExpr flag (App e1 e2) =
  App (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
analyseExpr flag (Case scrut alts) =
  Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)

analyseAlt :: Flag -> Alt -> Alt
analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)
```

I find this code problematic. The intention was: “`flag` is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, `flag` should be considered constant.”

But this intention is neither easily visible nor enforced. And in fact, in the above code, `flag` does “change”, as `analyseExpr` passes something else in the `Lam` case. The idiom is indistinguishable from the *environment idiom*, where a *locally changing* environment (such as “variables in scope”) is passed around.

So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!

The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:

```
type Flag = Bool

section (flag :: Flag) where
  analyseExpr :: Expr -> Expr
  analyseExpr (Var v) = if flag then change1 v else change2 v
  analyseExpr (App e1 e2) =
    App (analyseExpr e1) (analyseExpr e2)
  analyseExpr (Lam v e) = Lam v (analyseExpr e)
  analyseExpr (Case scrut alts) =
    Case (analyseExpr scrut) (analyseAlt <$> alts)

  analyseAlt :: Alt -> Alt
  analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)
```

Now the intention is clear: Within a clearly marked block, `flag` is fixed and when reasoning about this code I do not have to worry that it might change. Either *all* variables will be passed to `change1`, or *all* to `change2`. An important distinction!

Therefore, inside the `section`, the type of `analyseExpr` does not mention `Flag`, whereas outside its type is `Flag -> Expr -> Expr`. This is a bit unusual, but not completely: You see precisely the same effect in a `class` declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.

Note that idioms like implicit parameters or the `Reader` monad do not give the guarantee that the parameter is (locally) constant.

More details can be found in the GHC proposal that I prepared, and I invite you to raise concern or voice support there.

Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template Haskell based implementation of this idea in the `seal-module` package!

### Less parentheses 1: Bulleted argument lists

The next two proposals are all about removing parentheses. I believe that Haskell’s tendency to express complex code with no or few parentheses is one of its big strengths, as it makes it easier to visually parse programs. A common idiom is to use the `$` operator to separate a function from a complex argument without parentheses, but it does not help when there are multiple complex arguments.

For that case I propose to steal an idea from the surprisingly successful markup language markdown, and use bulleted lists to indicate multiple arguments:

```
foo :: Baz
foo = bracket
• some complicated code
that is evaluated first
• other complicated code for later
• even more complicated code
```

I find this very easy to visually parse and navigate.

It is actually possible to do this now, if one defines `(•) = id` with `infixl 0 •`. A dedicated syntax extension (`-XArgumentBullets`) is preferable:

- It only really adds readability if the bullets are nicely vertically aligned, which the compiler should enforce.
- I would like to use `$` inside these complex arguments, and multiple operators of precedence 0 do not mix (`infixl -1 •` would help).
- It should be possible to nest these, and distinguish different nesting levels based on their indentation.
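The `(•) = id` trick can be tried today; here is a small self-contained sketch (the function `clamp` is mine, just to have something with several arguments):

```haskell
-- (•) is just application, but at precedence 0 and left-associative,
-- so a chain of bullets feeds successive arguments to the function.
infixl 0 •
(•) :: (a -> b) -> a -> b
(•) = id

clamp :: Int -> Int -> Int -> Int
clamp lo hi x = max lo (min hi x)

result :: Int
result = clamp
  • 0
  • 10
  • 42
```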

### Less parentheses 2: Whitespace precedence

The final proposal is the most daring. I am convinced that it improves readability and should be considered when creating a new language. As for Haskell, I am at the moment not proposing this as a language extension (but could be convinced to do so if there is enough positive feedback).

Consider this definition of append:

```
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : (xs++ys)
```

Imagine you were explaining the last line to someone orally. How would you speak it? One common way to do so is to not read the parentheses out aloud, but rather speak parenthesised expression more quickly and add pauses otherwise.

We can do the same in syntax!

```
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
x:xs ++ ys = x : xs++ys
```

The rule is simple: *A sequence of tokens without any space is implicitly parenthesised.*
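To make the rule concrete, here is a toy (and deliberately naive) Haskell sketch that inserts the implied parentheses around space-free chunks mixing operator and non-operator characters; it illustrates the idea, and is not a real lexer:

```haskell
-- Toy illustration of "a sequence of tokens without any space is
-- implicitly parenthesised". Chunks that mix operands and operator
-- characters get wrapped; lone operators and lone names do not.
implicitParens :: String -> String
implicitParens = unwords . map paren . words
  where
    paren t
      | length t > 1 && any isOp t && not (all isOp t) = "(" ++ t ++ ")"
      | otherwise = t
    isOp c = c `elem` "+-*/:<>=&|"
```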

The reaction I got in Oxford was horror and disgust. And that is understandable – we are very used to ignore spacing when parsing expressions (unless it is indentation, of course. Then we are no longer horrified, as our non-Haskell colleagues are when they see our code).

But I am convinced that once you let the rule sink in, you will have no problem parsing such code with ease, and soon even with greater ease than the parenthesised version. It is a very natural thing to look at the general structure, identify “compact chunks of characters”, mentally group them, and then go and separately parse the internals of the chunks and how the chunks relate to each other. More natural than first scanning everything for `(` and `)`, matching them up, building a mental tree, and then digging deeper.

Incidentally, there was a non-programmer present during my presentation, and while she did not openly contradict the dismissive groan of the audience, I later learned that she found this variant quite obvious to understand and easier to read than the parenthesised code.

Some FAQs about this:

- *What about an operator with space on one side but not on the other?* I’d simply forbid that, and hence enforce readable code.
- *Do operator sections still require parentheses?* Yes, I’d say so.
- *Does this overrule operator precedence?* Yes! `a * b+c == a * (b+c)`.
- *What is a token?* Good question, and I am not decided. In particular: Is a parenthesised expression a single token? If so, then `(Succ a)+b * c` parses as `((Succ a)+b) * c`, otherwise it should probably simply be illegal.
- *Can we extend this so that one space binds tighter than two spaces, and so on?* Yes we can, but really, we should not.
- *This is incompatible with Agda’s syntax!* Indeed it is, and I really like Agda’s mixfix syntax. Can’t have everything.
- *Has this been done before?* I have not seen it in any language, but Lewis Wall has blogged this idea before.

Well, let me know what you think!

## Compose Conference talk video online

Three months ago, I gave a talk at the Compose::Conference in New York about how Chris Smith and I added the ability to create networked multi-user programs to the educational Haskell programming environment CodeWorld, and finally the recording of the talk is available on YouTube (and is being discussed on reddit):

It was the talk where I got the most positive feedback afterwards, and I think this is partly due to how I created the presentation: Instead of showing static slides, I programmed the complete visual display from scratch as an “interaction” within the CodeWorld environment, including all transitions, a working embedded game of Pong and a simulated multi-player environment with adjustable message delays. I have put the code for the presentation online.

Chris and I have written about this for ICFP'17, and thanks to open access I can actually share the paper freely with you and under a CC license. If you come to Oxford you can see me perform a shorter version of this talk again.

## Communication Failure

I am still far from being a professor, but I recently got a glimpse of what awaits you in that role…

**From:** Sebastian R. <…@gmail.com>
**To:** joachim@cis.upenn.edu
**Subject:** re: Errors

> I've spotted a basic error in your course on Haskell (https://www.seas.upenn.edu/~cis194/fall16/). Before I proceed, it's cool if you're not receptive to errors being indicated; I've come across a number of professors who would rather take offense than admit we're all human and thus capable of making mistakes... My goal is to find a resource that might be useful well into the future, and a good indicator of that is how responsive the author is to change.
>
> In your introduction note you have written:
>
> > n contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.
>
> Howeverm there is no `input CodeWorld` in the code above. Have you been made aware of this error earlier?
>
> Regards, ...

Nice. I like when people learn from my lectures. The introduction is a bit weird, but ok, maybe this guy had some bad experiences.

Strangely, I don’t see a mistake in the material, so I respond:

**From:** Joachim Breitner
**To:** Sebastian R. <…@gmail.com>
**Subject:** Re: Errors

> Dear Sebastian,
>
> thanks for pointing out errors. But the first piece of code under “Basic Haskell” starts with
>
> `{-# LANGUAGE OverloadedStrings #-} import CodeWorld`
>
> so I am not sure what you are referring to.
>
> Note that these are lecture notes, so you have to imagine a lecturer editing code live on stage along with it. If you only have the notes, you might have to infer a few things.
>
> Regards, Joachim

A while later, I receive this response:

**From:** Sebastian R. <…@gmail.com>
**To:** Joachim Breitner
**Subject:** Re: Errors

> Greetings, Joachim.
>
> Kindly open the lecture slides and search for "input CodeWorld" to find the error; it is not in the code, but in the paragraph that implicitly refers back to the code.
>
> You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from *your* lectures; they're not my words!
>
> > In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the **input** CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.
>
> This time around, I've highlighted the issue. I hope that made it easier for you to spot...
>
> Nonetheless, I got my answer. Don't reply if you're going to fight tooth and nail about such a basic fix; it's simply a waste of both of our time. I'd rather learn from somewhere else...
>
> On Tue, Aug 1, 2017 at 11:19 PM, Joachim Breitner wrote:
>
> > …

I am a bit reminded of Sean Spicer … “they’re not my words!” … but clearly I am missing something. And indeed I am: In the code snippet, I wrote – correctly – `import CodeWorld`, but in the text I had `input CodeWorld`. I probably did write LaTeX before writing the lecture notes. Well, glad to have that sorted out. I fixed the mistake and wrote back:

**From:** Joachim Breitner
**To:** Sebastian R. <…@gmail.com>
**Subject:** Re: Errors

> Dear Sebastian,
>
> nobody is fighting, and I see the mistake now: The problem is not that the line is not in the code, the problem is that there is a typo in the line and I wrote “input” instead of “import”.
>
> Thanks for the report, although you did turn it into quite a riddle… a simple “you wrote import when it should have been import” would have been a better user of both our time.
>
> Regards, Joachim

> On Thursday, 03.08.2017, 13:32 +1000, Sebastian R. wrote:
>
> > …

(And it seems I now made the inverse typo, writing “import“ instead of “input”.) Anyways, I did not think of this any more until a few days later, when I found this nice message in my mailbox:

From:Sebastian R. <…@gmail.com>To:Joachim BreitnerSubject:Re: Errorsa simple “you wrote import when it should have been import” would have been a better user of both our time.

We're both programmers. How about I cut ALL of the unnecessary garbage and just tell you to s/import/input/ on that last quotation (the thing immediately before this paragraph, in case you didn't know).

I blatantly quoted the error, like this:

In your introduction note you have written:

n contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

Howeverm there is no

`input CodeWorld`

in the code above.Since that apparently wasn't clear enough, in my second email to you I had to highlight it like so:

You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from *your* lectures; they're not my words!

In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

This time around, I've highlighted the issue. I hope that made it easier for you to spot...

I'm not sure if you're memeing at me or not now, but it seems either your reading comprehension, or your logical deduction skills might be substandard. Unfortunately, there isn't much either of us can do about that, so I'm happy to accept that some people will be so stupid; after all, it's to be expected and if we don't accept that which is to be expected then we live our lives in denial.

Happy to wrap up this discusson here, Seb...

On Fri, Aug 4, 2017 at 12:22 AM, Joachim Breitner wrote:

…

Well, I chose to be amused by this, and I am sharing my amusement with you.

## How is coinduction the dual of induction?

Earlier today, I demonstrated how to work with coinduction in the theorem provers Isabelle, Coq and Agda, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then colleague Denis Lohner: If coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I’d like to share this.

The following is mostly based on my naive understanding of coinduction based on what I observe in the implementation in Isabelle. I am sure that a different, more categorical presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.

### Inductive lists

I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:

`datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")`

with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):

`P nil → (∀x xs. P xs → P (cons x xs)) → ∀ xs. P xs`
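Read computationally, this induction principle is just the type of the familiar fold, with the proposition `P` degenerated to a result type. A rough (non-dependent) Haskell sketch, with names of my own choosing:

```haskell
-- A minimal sketch: the two premises of list induction are exactly the
-- two arguments of a fold, and the conclusion is the fold itself.
foldList :: b -> (a -> b -> b) -> [a] -> b
foldList base _    []       = base                            -- the "P nil" case
foldList base step (x : xs) = step x (foldList base step xs)  -- the "cons" case

-- For example, "length" arises from choosing the two cases accordingly:
lengthViaInduction :: [a] -> Int
lengthViaInduction = foldList 0 (\_ n -> n + 1)
```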

### Coinductive lists

In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing

`codatatype 'a llist = lnil | lcons (hd : 'a) (tl : "'a llist")`

we get the following coinduction principle:

```
(∀ xs ys.
R xs ys → (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
hd xs = hd ys ∧ R (tl xs) (tl ys)))
→ (∀ xs ys. R xs ys → xs = ys)
```

This is less scary than it looks at first. It tells you: “if you give me a relation `R` between lists which implies that either both lists are empty or both lists are non-empty, and furthermore, if both are non-empty, that they have the same head and tails related by `R`, then any two lists related by `R` are actually equal.”

If you think of the infinite list as a series of states of a computer program, then this is nothing other than a *bisimulation*.
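In a lazy language one can at least play this bisimulation game for finitely many rounds. A hypothetical Haskell sketch (names are mine), using ordinary lazy lists in place of `llist`:

```haskell
-- 'bisimTo d xs ys' unfolds the bisimulation game for d rounds: both lists
-- must be nil together, and if both are non-nil they must have equal heads
-- and tails that survive another round. The depth limit is essential, since
-- full bisimilarity of infinite lists is not decidable.
bisimTo :: Eq a => Int -> [a] -> [a] -> Bool
bisimTo 0 _  _  = True
bisimTo d xs ys = case (xs, ys) of
  ([],       [])       -> True
  (x : xs', y : ys')   -> x == y && bisimTo (d - 1) xs' ys'
  _                    -> False

-- Two syntactically different definitions of the same infinite list:
ones1, ones2 :: [Int]
ones1 = 1 : ones1
ones2 = map (const 1) [0 ..]
```

Here `bisimTo 1000 ones1 ones2` holds, even though neither list can ever be fully evaluated.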

So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate `P`, in the other a relation `R`, to point out just one difference.

### Relation induction

To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle.

The `datatype` declaration automatically creates a *relator*:

`rel_list :: ('a → 'b → bool) → 'a list → 'b list → bool`

The definition of `rel_list R xs ys` is that `xs` and `ys` have the same shape (i.e. length), and that the corresponding elements are pairwise related by `R`. You might have defined this relation yourself at some time, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:

```
Q nil nil →
(∀x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys)) →
(∀xs ys. rel_list R xs ys → Q xs ys)
```

Note how similar this lemma is in shape to the normal induction for lists above! And indeed, if we choose `Q xs ys ↔ (P xs ∧ xs = ys)` and `R x y ↔ (x = y)`, then we obtain exactly that. In that sense, the relation induction is a generalization of the normal induction.
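The executable shadow of this relator is easy to write down; here is a hypothetical Haskell rendering (for finite lists only):

```haskell
-- The relator on lists: both lists have the same shape, and corresponding
-- elements are pairwise related by r.
relList :: (a -> b -> Bool) -> [a] -> [b] -> Bool
relList _ []       []       = True
relList r (x : xs) (y : ys) = r x y && relList r xs ys
relList _ _        _        = False

-- Instantiating the element relation with equality recovers list equality,
-- mirroring the specialization described in the text:
eqList :: Eq a => [a] -> [a] -> Bool
eqList = relList (==)
```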

### Relation coinduction

The same observation can be made in the coinductive world. Here, as well, the `codatatype` declaration introduces a function

`rel_llist :: ('a → 'b → bool) → 'a llist → 'b llist → bool`

which relates lists of the same shape with related elements – only that this one also relates infinite lists, and therefore is a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:

```
(∀xs ys.
R xs ys → (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → rel_llist Q xs ys)
```

It is even more obvious that this is a generalization of the standard coinduction principle shown above: Just instantiate `Q` with equality, which turns `rel_llist Q` into equality on the lists, and you have the theorem above.

### The duality

With our induction and coinduction principle generalized to relations, suddenly a duality emerges: If you turn around the implication in the conclusion of one you get the conclusion of the other one. This is an example of “co*something* is *something* with arrows reversed”.

But what about the premise(s) of the rules? What happens if we turn around the arrow here? Although slightly less immediate, it turns out that they are the same as well. To see that, we start with the premise of the coinduction rule, reverse the implication and then show that it is equivalent to the two premises of the induction rule:

```
(∀xs ys.
R xs ys ← (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { case analysis (the other two cases are vacuously true) }
(∀xs ys.
xs = lnil → ys = lnil →
R xs ys ← (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
∧ (∀xs ys.
xs ≠ lnil ⟶ ys ≠ lnil ⟶
R xs ys ← (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { simplification }
(∀xs ys. xs = lnil → ys = lnil → R xs ys)
∧ (∀x xs y ys. R (cons x xs) (cons y ys) ← (Q x y ∧ R xs ys))
= { more rewriting }
R nil nil
∧ (∀x xs y ys. Q x y → R xs ys → R (cons x xs) (cons y ys))
```

### Conclusion

The coinduction rule is not the direct dual of the induction rule, but both are specializations of more general, relational proof methods, where the duality is clearly present.

More generally, this little excursion shows that it is often beneficial to think of types less as sets, and more as relations – this way of thinking is surprisingly fruitful, and led to proofs of parametricity and free theorems and other nice things.
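One concrete payoff of the relational view is the free theorem: any function `f :: [a] -> [a]` must commute with `map`, simply by virtue of its type. A quick Haskell check of one instance (with `rev` as an arbitrary example inhabitant of that type):

```haskell
-- An arbitrary function of type [a] -> [a]; parametricity guarantees
--     map g . f  =  f . map g
-- for every such f, whatever its definition.
rev :: [a] -> [a]
rev = foldl (flip (:)) []

-- Checking the free theorem on one instance:
freeTheoremInstance :: Bool
freeTheoremInstance =
  map (* 2) (rev [1, 2, 3 :: Int]) == rev (map (* 2) [1, 2, 3])
```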

## Coinduction in Coq and Isabelle

The DeepSpec Summer School is almost over, and I have had a few good discussions. One revolved around coinduction: What is it, how does it differ from induction, and how do you actually prove something? In the course of the discussion, I came up with a very simple coinductive exercise, and solved it both in Coq and Isabelle.

### The task

Define the extended natural numbers coinductively. Define the min function and the ≤ relation. Show that min(*n*, *m*) ≤ *n* holds.
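For intuition, Haskell's pervasive laziness means the corresponding Haskell datatype is already the coinductive one: it contains the infinite value. A sketch of the task's definitions (the proof itself has no Haskell counterpart; all names are mine):

```haskell
-- Extended naturals: since Haskell constructors are lazy, this type
-- contains the infinite value 'inf' and thus matches the codatatype.
data ENat = N | S ENat

inf :: ENat
inf = S inf

minE :: ENat -> ENat -> ENat
minE (S n') (S m') = S (minE n' m')
minE _      _      = N

-- On possibly-infinite values we can only make bounded observations:
-- convert to Int if the value is at most d, else report Nothing.
toIntTo :: Int -> ENat -> Maybe Int
toIntTo d _ | d < 0 = Nothing
toIntTo _ N         = Just 0
toIntTo d (S n)     = succ <$> toIntTo (d - 1) n
```

Note that `toIntTo 10 (minE inf (S (S N)))` observes `Just 2`: `minE` is productive even with an infinite argument.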

### Coq

The definitions are straightforward. Note that in Coq, we use the same command to define a coinductive data type and a coinductively defined relation:

```
CoInductive ENat :=
| N : ENat
| S : ENat -> ENat.

CoFixpoint min (n : ENat) (m : ENat) :=
  match n, m with
  | S n', S m' => S (min n' m')
  | _, _ => N
  end.

CoInductive le : ENat -> ENat -> Prop :=
| leN : forall m, le N m
| leS : forall n m, le n m -> le (S n) (S m).
```

The lemma is specified as

`Lemma min_le: forall n m, le (min n m) n.`

and the proof method of choice to show that some coinductive relation holds is `cofix`. One would wish that the following proof would work:

```
Lemma min_le: forall n m, le (min n m) n.
Proof.
cofix.
destruct n, m.
* apply leN.
* apply leN.
* apply leN.
* apply leS.
apply min_le.
Qed.
```

but we get the error message

```
Error:
In environment
min_le : forall n m : ENat, le (min n m) n
Unable to unify "le N ?M170" with "le (min N N) N".
```

Effectively, as Coq tries to figure out whether our proof is correct, i.e. whether it type-checks, it stumbles over the equation `min N N = N`, and, like a kid scared of coinduction, it does not dare to “run” the `min` function. The reason it does not just “run” a `CoFixpoint` is that doing so too daringly might simply not terminate. So, as Adam explains in a chapter of his book, Coq reduces a cofixpoint *only* when it is the scrutinee of a `match` statement.

So we need to get a `match` statement in place. We can do so with a helper function:

```
Definition evalN (n : ENat) :=
  match n with
  | N => N
  | S n => S n
  end.

Lemma evalN_eq : forall n, evalN n = n.
Proof. intros. destruct n; reflexivity. Qed.
```

This function does not really do anything besides nudging Coq to actually evaluate its argument to a constructor (`N` or `S _`). We can use it in the proof to guide Coq, and the following goes through:

```
Lemma min_le: forall n m, le (min n m) n.
Proof.
cofix.
destruct n, m; rewrite <- evalN_eq with (n := min _ _).
* apply leN.
* apply leN.
* apply leN.
* apply leS.
apply min_le.
Qed.
```

### Isabelle

In Isabelle, definitions and types are very different things, so we use different commands to define `ENat` and `le`:

```
theory ENat imports Main begin

codatatype ENat = N | S ENat

primcorec min where
  "min n m = (case n of
       N ⇒ N
     | S n' ⇒ (case m of
          N ⇒ N
        | S m' ⇒ S (min n' m')))"

coinductive le where
  leN: "le N m"
| leS: "le n m ⟹ le (S n) (S m)"
```

There are actually many ways of defining `min`; I chose the one most similar to the one above. For more details, see the `corec` tutorial.

Now to the proof:

```
lemma min_le: "le (min n m) n"
proof (coinduction arbitrary: n m)
case le
show ?case
proof(cases n)
case N then show ?thesis by simp
next
case (S n') then show ?thesis
proof(cases m)
case N then show ?thesis by simp
next
case (S m') with ‹n = _› show ?thesis
unfolding min.code[where n = n and m = m]
by auto
qed
qed
qed
```

The `coinduction` proof method produces this goal:

```
proof (state)
goal (1 subgoal):
1. ⋀n m. (∃m'. min n m = N ∧ n = m') ∨
(∃n' m'.
min n m = S n' ∧
n = S m' ∧
((∃n m. n' = min n m ∧ m' = n) ∨ le n' m'))
```

I chose to spell the proof out in the Isar proof language, where the outermost proof structure is done relatively explicitly, and I proceed by case analysis mimicking the `min` function definition.

In the cases where one argument of `min` is `N`, Isabelle’s *simplifier* (a term rewriting tactic, so to say) can solve the goal automatically. This is because the `primcorec` command produces a bunch of lemmas, one of which states `n = N ∨ m = N ⟹ min n m = N`.

In the other case, we need to help Isabelle a bit to reduce the call to `min (S n) (S m)`, using the `unfolding` method, where `min.code` contains exactly the equation that we used to specify `min`. Using just `unfolding min.code` would send this method into a loop, so we restrict it to the concrete arguments `n` and `m`. Then `auto` can solve the remaining goal (despite all the existential quantifiers).

### Summary

Both theorem provers are able to prove the desired result. To me it seems slightly more convenient in Isabelle: a lot of Coq infrastructure relies on the type checker being able to effectively evaluate expressions, which is tricky with cofixpoints, whereas *evaluation* plays a much less central role in Isabelle, where *rewriting* is the crucial technique. One still cannot simply throw `min.code` into the simpset, but working with objects that do not evaluate easily or completely is less strange.

### Agda

I was challenged to do it in Agda. Here it is:

```
module ENat where

open import Coinduction

data ENat : Set where
  N : ENat
  S : ∞ ENat → ENat

min : ENat → ENat → ENat
min (S n') (S m') = S (♯ (min (♭ n') (♭ m')))
min _ _ = N

data le : ENat → ENat → Set where
  leN : ∀ {m} → le N m
  leS : ∀ {n m} → ∞ (le (♭ n) (♭ m)) → le (S n) (S m)

min_le : ∀ {n m} → le (min n m) n
min_le {S n'} {S m'} = leS (♯ min_le)
min_le {N}    {S m'} = leN
min_le {S n'} {N}    = leN
min_le {N}    {N}    = leN
```

I will refrain from commenting on it, because I do not really know what I have been doing here, but it typechecks, and I refer you to the official documentation on coinduction in Agda. But let me note that I wrote this using plain inductive types and recursion, and added `∞`, `♯` and `♭` until it worked.

## The Micro Two Body Problem

Inspired by the recent PhD comic “Academic Travel” and the not-so-recent xkcd comic “Movie Narrative Charts”, I created the following graphic, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).

## Bundestag passes marriage for all

After decades of stonewalling and blocking by the conservative parties in the Bundestag, everything suddenly moved very quickly. The trigger was a visit by Chancellor Fräulein Merkel to her counterpart Donald Trump. The real-estate mogul lives – as a prime example, so to speak, of the social progress in his country – in a marriage with a woman (a foreigner, even!). Merkel afterwards said in an interview with Playboy:

I was very surprised and deeply moved to witness how caring, intimate and trusting the husbands – pardon, the spouses – Trump are with each other. And within the USA they are not alone in this!

Therefore I must admit that I have become unsure whether my conviction still commands a majority in Germany. I could imagine putting this question to the Bundestag at some point – as a matter of conscience, of course, and without party discipline.

The coalition partner SPD immediately took up this suggestion and introduced a “marriage for all” bill, which in the relevant section of the Civil Code replaces the clause “Marriage is entered into for life between two *men*.” with “Marriage is entered into for life between two *people*.” The rest is history: the Greens, the Left Party and a number of Union members of parliament (among them quite a few married men!) voted in favour. In just a few months, Adam and Eva Müller will thus be able to tie the knot at the registry office – just as if they were Adam and Stefan Müller.

The Greens, who since their founding in the 80s have worked fanatically towards this moment, celebrated particularly extensively. Unforgotten how Ströbele appeared at the Bundespresseball in the early 90s accompanied by a lady and introduced her as his wife Monika (officially she lived with him as his gardener). Equally unforgotten March 1996, when the gentlemen of the Green parliamentary group invited the members of the other parties to steak and fries and only revealed afterwards that the food had in fact been prepared exclusively by women. Since then, restaurants with unisex menus and a shared kitchen have become ever more common in progressive Berlin.

Frau Gerhard-Ecking (legally, of course, Fräulein Ecking), spokeswoman of the Green parliamentary group, commented on the Bundestag's decision:

Women may vote and may serve as soldiers. Women nowadays run DAX corporations and even major sperm banks. It is incomprehensible why they are nevertheless not trusted to love a husband, to care for him, to support him, to grow old with him and to provide for one another in old age. Not that most husbands care about these lofty values of marriage anyway – they are merely after the tax break! So finally: husband splitting for women, too!

During her speech, as so often, the well-known photo model Herr Gerhard stood behind her and delighted the reporters with a grey-flecked two-piece suit and an amusing floral tie. It is said that the two of them founded a small-animal breeding club, just so they could at least open a joint bank account.

Vehement criticism came, as expected, from the Catholic Church. The two chairmen of the German Bishops' Conference, the married couple Cardinals Jonas and Hannes Meissner-Stolz, announced in a press release:

Today is a black day for the Christians of the Federal Republic. The holy sacrament of marriage, instituted twelve times by Jesus, cannot be extended to women by a vote. Jesus roamed the land with male disciples, not female ones, and that is exactly how God's will is to be understood. Women and men are by nature too different. If even more men in Germany now share house and bed with women in such a godless manner, society will perish from it.

The Church will no longer be able to accept members who enter into such a “Bundestag marriage” with a woman into its ranks.

The stock markets reacted cautiously. Only the shares of the United Sperm Banks posted significant losses. Analysts suspect that the legal situation regarding direct procreation – so far illegal but not punishable – will shortly be adjusted accordingly, and that fewer women will then make use of the banks' services.

Fräulein Merkel, who herself voted against the bill, commented that she will continue to hold on to traditional Christian values, and that for her, therefore, marriage is out of the question.

## The perils of live demonstrations

Yesterday, I gave a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I had given the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target^{1} in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

```
commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith <cdsmith@gmail.com>
Date: Wed Jun 21 23:53:53 2017 +0000
Change dilated to take one parameter.
Function is nearly unused, so I'm not concerned about breakage.
This new version better aligns with standard educational usage,
in which "dilation" means uniform scaling. Taken as a separate
operation, it commutes with rotation, and preserves similarity
of shapes, neither of which is true of scaling in general.
```

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly^{2}. In the end, I could save face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, nor in the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages: in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.
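The underlying failure mode is easy to reproduce in miniature: two copies of the same deterministic state machine that receive the same events in different orders need not end up in the same state, unless the step function happens to be commutative. A small hypothetical Haskell sketch (not the actual CodeWorld code):

```haskell
-- Lock-step simulation keeps replicas in sync only if every replica applies
-- the same events in the same order; a non-commutative step function makes
-- an order mismatch visible immediately.
data Event = Add Int | Double

step :: Int -> Event -> Int
step s (Add n) = s + n
step s Double  = s * 2

run :: Int -> [Event] -> Int
run = foldl step

-- Same multiset of events, delivered in different orders:
playerA, playerB :: [Event]
playerA = [Add 1, Double]
playerB = [Double, Add 1]
```

Here `run 0 playerA` yields 2 while `run 0 playerB` yields 1 – the two “players” have desynchronised.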