Joachim Breitner

Blog

Finding bugs in Haskell code by proving it

Published 2017-12-05 in sections English, Haskell.

Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working?”. I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file”, represented as a sorted list of intervals:

data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]
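
For illustration, the subset {0,…,9} ∪ {20,…,29} of a file would be represented like this (the intervals are half-open, as the semantics defined later makes precise):

example :: Intervals
example = Intervals [I 0 10, I 20 30]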

The code is the kind of Haskell code that I like to write: A small local recursive function, a few guards for case analysis, and I am done:

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2 = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2 = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise = I f' (to i2) : go (i1 { from = to i2} : is1) is2
      where f' = max (from i1) (from i2)

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...
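
For contrast, the property-testing route might have looked like the following sketch, which assumes an Arbitrary instance for Intervals and a hypothetical helper member that tests whether an offset lies in the set:

-- Sketch only: member :: Offset -> Intervals -> Bool is a
-- hypothetical helper, not part of Intervals.hs.
prop_intersect :: Intervals -> Intervals -> Offset -> Bool
prop_intersect is1 is2 x =
  member x (intersect is1 is2) == (member x is1 && member x is2)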

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile. I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes, a bit of help is needed. In this case, I had to specify three so-called edits:

  • The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not in Coq. The line

    rename value Intervals.Intervals = ival

    changes the constructor name to ival.

  • I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

    rename type GHC.Int.Int64 = GHC.Num.Int

    to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.

  • The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

    I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: Add the lengths of xs and ys, multiply by two and add one if the first interval in xs ends before the first interval in ys.

    If the problematic function were a top-level function I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint.

    Unfortunately, go is a local function, so this mechanism is not available to us. If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in this case I did not want to change the Haskell code.

    Another way out offered by hs-to-coq is to translate the recursive function using an axiom unsafeFix : forall a, (a -> a) -> a. This looks scary, but as I explain in the previous blog post, this axiom can be used in a safe way.

    I should point out that considering this a valid verification approach is my dissenting opinion. The official stand of the hs-to-coq author team is that using unsafeFix in the verification can only be a temporary state, and eventually you’d be expected to fix (heh) this, for example by moving the functions to the top-level and using hs-to-coq’s support for Program Fixpoint.

With these edits in place, hs-to-coq spits out a faithful Coq copy of my Haskell code.

Time to prove things

The rest of the work is mostly straight-forward use of Coq. I define the invariant I expect to hold for these lists of intervals, namely that they are sorted, non-empty, disjoint and non-adjacent:

Fixpoint goodLIs (is : list Interval) (lb : Z) : Prop :=
  match is with
    | [] => True
    | (I f t :: is) => (lb <= f)%Z /\ (f < t)%Z /\ goodLIs is t
   end.

Definition good is := match is with
  ival is => exists n, goodLIs is n end.

and I give them meaning using Coq’s type for sets, Ensemble:

Definition range (f t : Z) : Ensemble Z :=
  (fun z => (f <= z)%Z /\ (z < t)%Z).

Definition semI (i : Interval) : Ensemble Z :=
  match i with I f t => range f t end.

Fixpoint semLIs (is : list Interval) : Ensemble Z :=
  match is with
    | [] => Empty_set Z
    | (i :: is) => Union Z (semI i) (semLIs is)
end.

Definition sem is := match is with
  ival is => semLIs is end.

Now I prove for every function that it preserves the invariant and that it corresponds to the, well, corresponding function, e.g.:

Lemma intersect_good : forall (is1 is2 : Intervals),
  good is1 -> good is2 -> good (intersect is1 is2).
Proof. … Qed.

Lemma intersection_spec : forall (is1 is2 : Intervals),
  good is1 -> good is2 ->
  sem (intersect is1 is2) = Intersection Z (sem is1) (sem is2).
Proof. … Qed.

Even though I punted on the question of termination while defining the functions, I do not get around it while verifying them, so I formalize the termination argument above:

Definition needs_reorder (is1 is2 : list Interval) : bool :=
  match is1, is2 with
    | (I f1 t1 :: _), (I f2 t2 :: _) => (t1 <? t2)%Z
    | _, _ => false
  end.

Definition size2 (is1 is2 : list Interval) : nat :=
  (if needs_reorder is1 is2 then 1 else 0) + 2 * length is1 + 2 * length is2.

and use it in my inductive proofs.

As I intend this to be a write-once proof, I happily copy’n’pasted proof scripts and did not do any cleanup. Thus, the resulting proof file is big, ugly and repetitive. I am confident that judicious use of Coq tactics could greatly condense this proof.

Using Program Fixpoint after the fact?

These proofs are also an experiment in how to actually do induction over a locally defined recursive function without overly ugly proof goals (hence the line match goal with [ |- context [unsafeFix ?f _ _] ] => set (u := f) end.). One could improve upon this approach by following these steps:

  1. Define copies (say, intersect_go_witness) of the local go using Program Fixpoint with the above termination measure. The termination argument needs to be made only once, here.

  2. Use this function to prove that the argument f in go = unsafeFix f actually has a fixed point:

    Lemma intersect_go_sound:
      f intersect_go_witness = intersect_go_witness

    (This requires functional extensionality.) This lemma indicates that my use of the axioms unsafeFix and unsafeFix_eq is actually sound, as discussed in the previous blog post.

  3. Still prove the desired properties for the go that uses unsafeFix, as before, but using the functional induction scheme for intersect_go! This way, the actual proofs are free from any noisy termination arguments.

    (The trick to define a recursive function just to throw away the function and only use its induction rule is one I learned in Isabelle, and is very useful to separate the meat from the red tape in complex proofs. Note that the induction rule for a function does not actually mention the function!)

Maybe I will get to this later.

Update: I experimented a bit in that direction, and it does not quite work as expected. In step 2 I am stuck because Program Fixpoint does not create a fixpoint-unrolling lemma, and in step 3 I do not get the induction scheme that I was hoping for. Both problems would not exist if I used the Function command, although that needs some trickery to support a termination measure on multiple arguments. The induction lemma is not quite as polished as I was hoping for, so the resulting proof is still somewhat ugly, and it requires copying code, which does not scale well.

Efforts and gains

I spent exactly 7 hours working on these proofs, according to arbtt. I am sure that writing these functions took me much less time, but I cannot calculate that easily, as they were originally in the Main.hs file of bisect-binary.

I did find and fix three bugs:

  • The intersect function would not always retain the invariant that the intervals are non-empty.
  • The subtract function would prematurely advance through the list of intervals in the second argument, which can lead to a genuinely wrong result. (This occurred twice.)

Conclusion: Verification of Haskell code using Coq is now practically possible!

Final rant: Why is the Coq standard library so incomplete (compared to, say, Isabelle’s), requiring me to prove so many lemmas about basic functions on Ensembles?

Existence and Termination

Published 2017-11-25 in sections English, Digital World.

I recently had some intense discussions that revolved around issues of existence and termination of functions in Coq, about axioms and what certain proofs actually mean. We came across some interesting questions and thoughts that I’ll share with those of my blog readers who have an interest in proofs and interactive theorem proving.

tl;dr

  • It can be meaningful to assume the existence of a function in Coq, and under that assumption prove its termination and other properties.
  • Axioms and assumptions are logically equivalent.
  • Unsound axioms do not necessarily invalidate a theory development, when additional meta-rules govern their use.

Preparation

Our main running example is the infamous Collatz series. Starting at any natural number, the next is calculated as follows:

Require Import Coq.Arith.Arith.

Definition next (n : nat) : nat :=
  if Nat.even n then n / 2 else 3*n + 1.

If you start with some positive number, you are going to end up reaching 1 eventually. Or are you? So far nobody has found a number where that does not happen, but we also do not have a proof that it never happens. It is one of the great mysteries of Mathematics, and if you can solve it, you’ll be famous.
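
In Haskell, where general recursion is no problem, it is easy to play with the series; a quick sketch:

next :: Integer -> Integer
next n = if even n then n `div` 2 else 3 * n + 1

-- Number of steps until we reach 1; diverges if we never do.
collatzLength :: Integer -> Int
collatzLength = length . takeWhile (/= 1) . iterate next

For example, collatzLength 27 evaluates to 111.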

A failed definition

But assume we had an idea on how to prove that we are always going to reach 1, and tried to formalize this in Coq. One attempt might be to write

Fixpoint good (n : nat) : bool :=
  if n <=? 1
    then true
    else good (next n).

Theorem collatz: forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.

Unfortunately, this does not work: Coq rejects this recursive definition of the function good, because it does not see how that is a terminating function, and Coq requires all such recursive function definitions to be obviously terminating – without this check there would be a risk of Coq’s type checking becoming incomplete or its logic being unsound.
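
Haskell, by contrast, happily accepts the same definition, at the price that good n may simply diverge; a sketch, reusing the Haskell next from above:

good :: Integer -> Bool
good n = if n <= 1 then True else good (next n)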

The idiomatic way to avoid this problem is to state good as an inductive predicate... but let me explore another idea here.

Working with assumptions

What happens if we just assume that the function good, described above, exists, and then perform our proof:

Theorem collatz
  (good : nat -> bool)
  (good_eq : forall n,
     good n = if n <=? 1 then true else good (next n))
  : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.

Would we accept this as a proof of Collatz’ conjecture? Or did we just assume what we want to prove, in which case the theorem is vacuously true, but we just performed useless circular reasoning?

Upon close inspection, we find that the assumptions of the theorem (good and good_eq) are certainly satisfiable:

Definition trivial (n: nat) : bool := true.

Lemma trivial_eq: forall n,
  trivial n = if n <=? 1 then true else trivial (next n).
Proof. intro; case (n <=? 1); reflexivity. Qed.

Lemma collatz_trivial: forall n, trivial n = true.
Proof.
  apply (collatz trivial trivial_eq).
Qed.

So clearly there exists a function of type nat -> bool that satisfies the assumed equation. This is good, because it means that the collatz theorem is not simply assuming False!

Some (including me) might already be happy with this theorem and proof, as it clearly states: “Every function that follows the Collatz series eventually reaches 1”.

Others might still not be at ease with such a proof. Above we have seen that we cannot define the real collatz series in Coq. How can the collatz theorem say something that is not definable?

Classical reasoning

One possible way of getting some assurance is to define good as a classical function. The logic of Coq can be extended with the law of the excluded middle without making it inconsistent, and with that axiom, we can define a version of good that is pretty convincing (sorry for the slightly messy proof):

Require Import Coq.Logic.ClassicalDescription.
Require Import Omega.
Definition classical_good (n:nat) : bool :=
  if excluded_middle_informative (exists m, Nat.iter m next n <= 1)
  then true else false.

Lemma iter_shift:
  forall a f x (y:a), Nat.iter x f (f y) = f (Nat.iter x f y).
Proof.
 intros. induction x. reflexivity. simpl. rewrite IHx. reflexivity. Qed.

Lemma classical_good_eq: forall n,
  classical_good n = if n <=? 1 then true else classical_good (next n).
Proof.
  intros.
  unfold classical_good at 1.
  destruct (Nat.leb_spec n 1).
  * destruct (excluded_middle_informative _); try auto.
    contradict n0. exists 0. simpl. assumption.
  * unfold classical_good.
    destruct (Nat.eqb_spec (next n) 0); try auto.
    destruct (excluded_middle_informative _), (excluded_middle_informative _); auto.
    - contradict n0.
      destruct e0.
      destruct x; simpl in *. omega.
      exists x. rewrite iter_shift. assumption.
    - contradict n0.
      destruct e0.
      exists (S x). simpl. rewrite iter_shift in H0. assumption.
Qed.

Lemma collatz_classical: forall n, classical_good n = true.
Proof. apply (collatz classical_good classical_good_eq). Qed.

The point of this is not so much to use this particular definition of good, but merely to convince ourselves that the assumptions of the collatz theorem above encompass “the” Collatz series, and that the theorem thus constitutes a proof of the Collatz conjecture.

The main take-away so far is that existence and termination of a function are two separate issues, and it is possible to assume the former, prove the latter, and not have done a vacuous proof.

The ice gets thinner

Sections

Starting with the above Theorem collatz, there is another train of thought I invite you to follow along.

Probably the “genius idea” proof will be more than a few lines long, and we probably want to be able to declare helper lemmas and other things along the way. Doing all that in the body of the collatz proof is not very convenient, so instead of using assumptions, we might write

Section collatz.
Variable good : nat -> bool.
Variable good_eq : forall n,
  good n = if n <=? 1 then true else good (next n).

Theorem collatz2 : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.
End collatz.

So far so good: Clearly, I just refactored my code a bit, but did not make any significant change. The theorems collatz2 and collatz are equivalent.

Sound axioms

But note that we do not really intend to instantiate collatz2. We know that the assumptions are satisfiable (e.g. since we can define trivial or classical_good). So maybe, we would rather avoid the Section mechanism and simply write

Axiom good : nat -> bool.
Axiom good_eq : forall n,
  good n = if n <=? 1 then true else good (next n).

Theorem collatz3 : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.

I assume this will make a few of my readers’ eyebrows go up: How can I dare to start with such Axioms? Do they not invalidate my whole development?

On the other hand, all that a Coq axiom is doing is saying “the following theorems are under the assumption that the axiom holds”. In that sense, collatz3 and collatz2 are essentially equivalent.

Unsound axioms

Let me take it one step further, and change that to:

Axiom unsafeFix : forall a, (a -> a) -> a.
Axiom unsafeFix_eq : forall f, unsafeFix f = f (unsafeFix f).
Definition good : nat -> bool :=
  unsafeFix (fun good n => if n <=? 1 then true else good (next n)).

Theorem collatz4 : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.

At this point, the majority of my readers will cringe. The axiom unsafeFix is so blatantly unsound (in Coq); how do I even dare to think of using it? But bear with me for a moment: I did not change the proof. So maybe the collatz4 theorem is still worth something?

I want to argue that it is: Both unsafeFix and unsafeFix_eq are unsound in their full generality. But as long as I instantiate them only with functions f that have a fixed point, I cannot prove False this way. So while “Coq + unsafeFix” is unsound, “Coq + unsafeFix + unsafeFix_eq + meta-rule that these axioms are only instantiated with permissible f” is not.

In that light, my collatz4 proof carries the same meaning as the collatz3 proof, it is just less convenient to check: To check the validity of collatz3, I have to look for uses of admit, misleading uses of syntax, or other smells. To check the validity of collatz4, I additionally have to check the meta-rule – tedious, but certainly possible (e.g. by inspecting the proof term).

Beyond Collatz

The questions discussed here did not come up in the context of the Collatz series (for which I unfortunately do not have a proof), but rather the verification of Haskell code in Coq using hs-to-coq. I started with the idiomatic Haskell definition of “Quicksort”:

import Data.List (partition)

quicksort :: Ord a => [a] -> [a]
quicksort [] = []
quicksort (p:xs) = quicksort lesser ++ [p] ++ quicksort greater
    where (lesser, greater) = partition (<p) xs

This function is terminating, but not in a way that is obvious to the Coq type checker. Conveniently, hs-to-coq can optionally create the Coq code using the unsafeFix axiom above, producing (roughly):

Definition quicksort {a} `{Ord a} : list a -> list a :=
  unsafeFix (fun quicksort xs =>
    match xs with
      | nil => nil
      | p :: xs => match partition (fun x => x <? p) xs with
         | (lesser, greater) => quicksort lesser ++ [p] ++ quicksort greater
      end
    end).

I then proved (roughly)

Theorem quicksort_sorted:
  forall a `(Ord a) (xs : list a), StronglySorted (quicksort xs).

and

Theorem quicksort_permutation:
  forall a `(Ord a) (xs : list a), Permutation (quicksort xs) xs.

These proofs proceed by well-founded induction on the length of the argument xs, and hence encompass a termination proof of quicksort. Note that with an only partially correct but non-terminating definition of quicksort (e.g. quicksort := unsafeFix (fun quicksort xs => quicksort xs)) I would not be able to conclude these proofs.

My (not undisputed) claim about the meaning of these theorems is therefore

If the Haskell equations for quicksort actually have a fixed point, then the use of unsafeFix in its definition does not introduce any inconsistency. Under this assumption, we showed that quicksort always terminates and produces a sorted version of the input list.

Do you agree?

Isabelle functions: Always total, sometimes undefined

Published 2017-10-12 in sections English, Digital World.

Often, when I mention how things work in the interactive theorem prover Isabelle/HOL (in the following just “Isabelle”1) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of what a function is, whether functions are total, and what the business with undefined is. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians.

Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.

HOL is a logic of total functions

If I have an Isabelle function f :: a ⇒ b between two types a and b (the function arrow in Isabelle is ⇒, not →), then – by definition of what it means to be a function in HOL – whenever I have a value x :: a, then the expression f x (i.e. f applied to x) is a value of type b. Therefore, and without exception, every Isabelle function is total.

In particular, it cannot be that f x does not exist for some x :: a. This is a first difference from Haskell, which does have partial functions like

spin :: Maybe Integer -> Bool
spin (Just n) = spin (Just (n+1))

Here, neither the expression spin Nothing nor the expression spin (Just 42) produces a value of type Bool: The former raises an exception (“incomplete pattern match”), the latter does not terminate. Confusingly, though, both expressions have type Bool.

Because every function is total, this confusion cannot arise in Isabelle: If an expression e has type t, then it is a value of type t. This trait is shared with other total systems, including Coq.

Did you notice the emphasis I put on the word “is” here, and how I deliberately did not write “evaluates to” or “returns”? This is because of another big source for confusion:

Isabelle functions do not compute

We (i.e., functional programmers) stole the word “function” from mathematics and repurposed it. But the word “function”, in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind.

What is the difference?

  • A function a → b in functional programming is an algorithm that, given a value of type a, calculates (returns, evaluates to) a value of type b.
  • A function a ⇒ b in math (or Isabelle) associates with each value of type a a value of type b.

For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:

definition foo :: "(nat ⇒ real) ⇒ real" where
  "foo seq = (if convergent seq then lim seq else 0)"

This assigns a real number to every sequence, but it does not compute it in any useful sense.

From this it follows that

Isabelle functions are specified, not defined

Consider this function definition:

fun plus :: "nat ⇒ nat ⇒ nat"  where
   "plus 0       m = m"
 | "plus (Suc n) m = Suc (plus n m)"

To a functional programmer, this reads

plus is a function that analyses its first argument. If that is 0, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one.

which is clearly a description of a computation.

But to Isabelle, the above reads

plus is a binary function on natural numbers, and it satisfies the following two equations: …

And in fact, it is not so much Isabelle that reads it this way, but rather the fun command, which is external to the Isabelle logic. The fun command analyses the given equations, constructs a non-recursive definition of plus under the hood, passes that to Isabelle and then proves that the given equations hold for plus.

One interesting consequence of this is that different specifications can lead to the same functions. In fact, if we defined plus' by recursing on the second argument, we’d obtain the same function (i.e. plus = plus' is a theorem, and there would be no way of telling the two apart).

Termination is a property of specifications, not functions

Because a function does not evaluate, it does not make sense to ask if it terminates. The question of termination arises before the function is defined: The fun command can only construct plus in a way that the equations hold if it passes a termination check – very much like Fixpoint in Coq.

But while the termination check of Fixpoint in Coq is a deep part of the basic logic, in Isabelle it is simply something that this particular command requires for its internal machinery to go through. At no point does a “termination proof of the function” exist as a theorem inside the logic. And other commands may have other means of defining a function that do not even require such a termination argument!

For example, a function specification that is tail-recursive can be turned into a function, even without a termination proof: The following definition describes a higher-order function that iterates its first argument f on the second argument x until it finds a fixpoint. It is completely polymorphic (the single quote in 'a indicates that this is a type variable):

partial_function (tailrec)
  fixpoint :: "('a ⇒ 'a) ⇒ 'a ⇒ 'a"
where
  "fixpoint f x = (if f x = x then x else fixpoint f (f x))"

We can work with this definition just fine. For example, if we instantiate f with (λx. x-1), we can prove that it will always return 0:

lemma "fixpoint (λ n . n - 1) (n::nat) = 0"
  by (induction n) (auto simp add: fixpoint.simps)

Similarly, if we have a function that works within the option monad (i.e. Maybe in Haskell), its specification can always be turned into a function without an explicit termination proof – here one that calculates the Collatz sequence:

partial_function (option) collatz :: "nat ⇒ nat list option"
 where "collatz n =
        (if n = 1 then Some [n]
         else if even n
           then do { ns <- collatz (n div 2);    Some (n # ns) }
           else do { ns <- collatz (3 * n + 1);  Some (n # ns)})"

Note that lists in Isabelle are finite (like in Coq, unlike in Haskell), so this function “returns” a list only if the Collatz sequence eventually reaches 1.
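
For comparison, the same specification in Haskell’s Maybe monad; a sketch:

-- Diverges when the sequence never reaches 1, whereas the Isabelle
-- function, defined as a least fixed point, yields None there.
collatz :: Integer -> Maybe [Integer]
collatz n
  | n == 1    = Just [n]
  | even n    = (n :) <$> collatz (n `div` 2)
  | otherwise = (n :) <$> collatz (3 * n + 1)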

I expect these definitions to make a Coq user very uneasy. How can fixpoint be a total function? What is fixpoint (λn. n+1)? What if we run collatz n for an n where the Collatz sequence does not reach 1?3 We will come back to that question after a little detour…

HOL is a logic of non-empty types

Another big difference between Isabelle and Coq is that in Isabelle, every type is inhabited. Just like the totality of functions, this is a very fundamental fact about what HOL defines to be a type.

Isabelle gets away with that design because in Isabelle, we do not use types for propositions (like we do in Coq), so we do not need empty types to denote false propositions.

This design has an important consequence: It allows the existence of a polymorphic expression that inhabits any type, namely

undefined :: 'a

The naming of this term alone has caused a great deal of confusion for Isabelle beginners, or in communication with users of different systems, so I implore you to not read too much into the name. In fact, you will have a better time if you think of it as arbitrary or, even better, unknown.

Since undefined can be instantiated at any type, we can instantiate it for example at bool, and we can observe an important fact: undefined is not an extra value besides the “usual ones”. It is simply some value of that type, which is demonstrated in the following lemma:

lemma "undefined = True ∨ undefined = False" by auto

In fact, if the type has only one value (such as the unit type), then we know the value of undefined for sure:

lemma "undefined = ()" by auto

It is very handy to be able to produce an expression of any type, as we will see in the following.

Partial functions are just underspecified functions

For example, it allows us to translate incomplete function specifications. Consider this definition, Isabelle’s equivalent of Haskell’s partial fromJust function:

fun fromSome :: "'a option ⇒ 'a" where
  "fromSome (Some x) = x"

This definition is accepted by fun (albeit with a warning), and the generated function fromSome behaves exactly as specified: when applied to Some x, it is x. The term fromSome None is also a value of type 'a; we just do not know which one it is, as the specification does not address that.

So fromSome None behaves just like undefined above, i.e. we can prove

lemma "fromSome None = False ∨ fromSome None = True" by auto

Here is a small exercise for you: Can you come up with an explanation for the following lemma?

fun constOrId :: "bool ⇒ bool" where
  "constOrId True = True"

lemma "constOrId = (λ_.True) ∨ constOrId = (λx. x)"
  by (metis (full_types) constOrId.simps)

Overall, this behavior makes sense if we remember that function “definitions” in Isabelle are not really definitions, but rather specifications. And a partial function “definition” is simply an underspecification. The resulting function is simply any function that fulfills the specification, and the two lemmas above underline that observation.

Nonterminating functions are also just underspecified

Let us return to the puzzle posed by fixpoint above. Clearly, the function – seen as a functional program – is not total: When passed the argument (λn. n + 1) or (λb. ¬b) it will loop forever trying to find a fixed point.

But Isabelle functions are not functional programs, and the definitions are just specifications. What does the specification say about the case when f has no fixed point? It states that the equation fixpoint f x = fixpoint f (f x) holds. And this equation has a solution, for example fixpoint f _ = undefined.

Or more concretely: The specification of the fixpoint function states that fixpoint (λb. ¬b) True = fixpoint (λb. ¬b) False has to hold, but it does not specify which particular value (True or False) it should denote – any is fine.

Not all function specifications are ok

At this point you might wonder: Can I just specify any equations for a function f and get a function out of that? But rest assured: That is not the case. For example, no Isabelle command allows you to define a function bogus :: unit ⇒ nat with the equation bogus () = Suc (bogus ()), because this equation does not have a solution.

We can actually prove that such a function cannot exist:

lemma no_bogus: "∄ bogus. bogus () = Suc (bogus ())" by simp

(Of course, not_bogus () = not_bogus () is just fine…)

You cannot reason about partiality in Isabelle

We have seen that there are many ways to define functions that one might consider “partial”. Given a function, can we prove that it is not “partial” in that sense?

Unfortunately, but unavoidably, no: Since undefined is not a separate, recognizable value, but rather simply an unknown one, there is no way of stating that “A function result is not specified”.

Here is an example that demonstrates this: Two “partial” functions (one with not all cases specified, the other one with a self-referential specification) are indistinguishable from the total variant:

fun partial1 :: "bool ⇒ unit" where
  "partial1 True = ()"
partial_function (tailrec) partial2 :: "bool ⇒ unit" where
  "partial2 b = partial2 b"
fun total :: "bool ⇒ unit" where
  "total True = ()"
| "total False = ()"

lemma "partial1 = total ∧ partial2 = total" by auto

If you really do want to reason about partiality of functional programs in Isabelle, you should consider implementing them not as plain HOL functions, but rather use HOLCF, where you can give equational specifications of functional programs and obtain continuous functions between domains. In that setting, ⊥ ≠ () and partial2 = ⊥ ≠ total. We have done that to verify some of HLint’s equations.

You can still compute with Isabelle functions

I hope by this point, I have not scared away anyone who wants to use Isabelle for functional programming, and in fact, you can use it for that. If the equations that you pass to fun are a reasonable definition for a function (in the programming sense), then these equations, used as rewriting rules, will allow you to “compute” that function quite like you would in Coq or Haskell.

Moreover, Isabelle supports code extraction: You can take the equations of your Isabelle functions and have them exported into OCaml, Haskell, Scala or Standard ML. See Concon for a conference management system with confidentiality verified in Isabelle.

While these usually are the equations you defined the function with, they don't have to be: You can declare other proved equations to be used for code extraction, e.g. to refine your elegant definitions to performant ones.

Like with code extraction from Coq to, say, Haskell, the adequacy of the translation rests on a “moral reasoning” foundation. Unlike extraction from Coq, where you have an (unformalized) guarantee that the resulting Haskell code is terminating, you do not get that guarantee from Isabelle. Conversely, this allows you to reason about and extract non-terminating programs, like fixpoint, which is not possible in Coq.

There is currently ongoing work on verified code generation, where the code equations are reflected into a deep embedding of HOL in Isabelle that would allow explicit termination proofs.

Conclusion

We have seen how in Isabelle, every function is total. Function declarations have equations, but these do not define the function in a computational sense, but rather specify it. Because in HOL, there are no empty types, many specifications that appear partial (incomplete patterns, non-terminating recursion) have solutions in the space of total functions. Partiality in the specification is no longer visible in the final product.

PS: Axiom undefined in Coq

This section is speculative, and an invitation for discussion.

Coq already distinguishes between types used in programs (Set) and types used in proofs (Prop).

Could Coq ensure that every t : Set is non-empty? I imagine this would require additional checks in the Inductive command, similar to the checks that the Isabelle command datatype has to perform4, and it would disallow Empty_set.

If so, then it would be sound to add the following axiom

Axiom undefined : forall (a : Set), a.

wouldn't it? This axiom does not have any computational meaning, but that seems to be ok for optional Coq axioms, like classical reasoning or function extensionality.

With this in place, how much of what I describe above about function definitions in Isabelle could now be done soundly in Coq? Certainly pattern matches would not have to be complete and could sport an implicit case _ ⇒ undefined. Would it “help” with non-obviously terminating functions? Would it allow a Coq command Tailrecursive that accepts any tail-recursive function without a termination check?


  1. Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.

  3. Let me know if you find such an n. Besides n = 0.

  4. Like fun, the constructions by datatype are not part of the logic, but create a type definition from more primitive notions that is isomorphic to the specified data type.

e.g. in TeX

Published 2017-10-08 in sections English, Digital World.

When I learned TeX, I was told to not write e.g. something, because TeX would think the period after the “g” ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write e.g.\␣.

Years later, I learned from a convincing, but since forgotten source, that in fact e.g.\@ is the proper thing to write. I vaguely remember that e.g.\␣ supposedly affected the inter-word space in some unwanted way. So I did that for many years.

Until I recently was called out for doing it wrong, and that in fact e.g.\␣ is the proper way. This was supported by a StackExchange answer written by a LaTeX authority and backed by a reference to documentation. The same question has, however, another answer by another TeX authority, backed by an analysis of the implementation, which concludes that e.g.\@ is proper.

What now? I guess I just have to find it out myself.

The problem and two solutions

The above image shows three variants: The obviously broken version with e.g., and the two contesting variants to fix it. Looks like they yield equal results!

So maybe the difference lies in how \@ and \␣ react when the line length changes, and the word wrapping requires differences in the inter-word spacing. Will there be differences? Let’s see:

Expanding whitespace, take 1

Expanding whitespace, take 2

I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?

Expanding whitespace, take 3

Expanding whitespace, take 4

Again, I see the same behaviour.

Conclusion: It does not matter, but e.g.\␣ is less hassle when using lhs2tex than e.g.\@ (which has to be escaped as e.g.\@@), so the winner is e.g.\␣!

(Unless you put it in a macro, then \@ might be preferable, and it is still needed between a capital letter and a sentence period.)

Less parentheses

Published 2017-09-10 in sections English, Haskell.

Yesterday, at the Haskell Implementers Workshop 2017 in Oxford, I gave a lightning talk titled ”syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell.

The talk caused quite some heated discussions, and since the Internet likes heated discussions, I will happily share these ideas with you.

Context aka. Sections

This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these:

analyseExpr :: Expr -> Expr
analyseExpr (Var v) = change v
analyseExpr (App e1 e2) =
  App (analyseExpr e1) (analyseExpr e2)
analyseExpr (Lam v e) = Lam v (analyseExpr e)
analyseExpr (Case scrut alts) =
  Case (analyseExpr scrut) (analyseAlt <$> alts)

analyseAlt :: Alt -> Alt
analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call:

type Flag = Bool

analyseExpr :: Flag -> Expr -> Expr
analyseExpr flag (Var v) = if flag then change1 v else change2 v
analyseExpr flag (App e1 e2) =
  App (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
analyseExpr flag (Case scrut alts) =
  Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)

analyseAlt :: Flag -> Alt -> Alt
analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)

I find this code problematic. The intention was: “flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant.”

But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does “change”, as analyseExpr passes something else in the Lam case. The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as “variables in scope”) is passed around.

So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!

The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:

type Flag = Bool
section (flag :: Flag) where
  analyseExpr :: Expr -> Expr
  analyseExpr (Var v) = if flag then change1 v else change2 v
  analyseExpr (App e1 e2) =
    App (analyseExpr e1) (analyseExpr e2)
  analyseExpr (Lam v e) = Lam v (analyseExpr e)
  analyseExpr (Case scrut alts) =
    Case (analyseExpr scrut) (analyseAlt <$> alts)

  analyseAlt :: Alt -> Alt
  analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

Now the intention is clear: Within a clearly marked block, flag is fixed and when reasoning about this code I do not have to worry that it might change. Either all variables will be passed to change1, or all to change2. An important distinction!

Therefore, inside the section, the type of analyseExpr does not mention Flag, whereas outside its type is Flag -> Expr -> Expr. This is a bit unusual, but not completely: You see precisely the same effect in a class declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.

Note that idioms like implicit parameters or the Reader monad do not give the guarantee that the parameter is (locally) constant.
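
A minimal sketch of why: with Reader, any callee can rebind the environment via local, and nothing in the types reveals it.

import Control.Monad.Reader

type Flag = Bool

-- Looks as if it merely reads the flag, but local flips it on the
-- way down, so the caller's flag is not what the base case sees.
analyse :: Int -> Reader Flag Bool
analyse 0 = ask
analyse n = local not (analyse (n - 1))

Here runReader (analyse 3) True evaluates to False, although the type promises nothing of the sort.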

More details can be found in the GHC proposal that I prepared, and I invite you to raise concern or voice support there.

Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template Haskell based implementation of this idea in the seal-module package!

Less parentheses 1: Bulleted argument lists

The next two proposals are all about removing parentheses. I believe that Haskell’s tendency to express complex code with no or few parentheses is one of its big strengths, as it makes it easier to visually parse programs. A common idiom is to use the $ operator to separate a function from a complex argument without parentheses, but it does not help when there are multiple complex arguments.

For that case I propose to steal an idea from the surprisingly successful markup language markdown, and use bulleted lists to indicate multiple arguments:

foo :: Baz
foo = bracket
        • some complicated code
          that is evaluated first
        • other complicated code for later
        • even more complicated code

I find this very easy to visually parse and navigate.

It is actually possible to do this now, if one defines (•) = id with infixl 0 • (see the sketch after this list). A dedicated syntax extension (-XArgumentBullets) would be preferable:

  • It only really adds readability if the bullets are nicely vertically aligned, which the compiler should enforce.
  • I would like to use $ inside these complex arguments, and multiple operators of precedence 0 do not mix. (infixl -1 • would help).
  • It should be possible to nest these, and distinguish different nesting levels based on their indentation.
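
For reference, here is the library-level version of the trick; a sketch:

infixl 0 •
(•) :: (a -> b) -> a -> b
(•) = id

-- Parses as ((foldr • (+)) • 0) • [1, 2, 3], i.e. foldr (+) 0 [1,2,3] == 6.
total :: Int
total = foldr
          • (+)
          • 0
          • [1, 2, 3]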

Less parentheses 2: Whitespace precedence

The final proposal is the most daring. I am convinced that it improves readability and should be considered when creating a new language. As for Haskell, I am at the moment not proposing this as a language extension (but could be convinced to do so if there is enough positive feedback).

Consider this definition of append:

(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs++ys)

Imagine you were explaining the last line to someone orally. How would you speak it? One common way to do so is to not read the parentheses out loud, but rather speak parenthesised expressions more quickly and add pauses otherwise.

We can do the same in syntax!

(++) :: [a] -> [a] -> [a]
[]   ++ ys = ys
x:xs ++ ys = x : xs++ys

The rule is simple: A sequence of tokens without any space is implicitly parenthesised.

The reaction I got in Oxford was horror and disgust. And that is understandable – we are very used to ignoring spacing when parsing expressions (unless it is indentation, of course. Then we are no longer horrified, as our non-Haskell colleagues are when they see our code).

But I am convinced that once you let the rule sink in, you will have no problem parsing such code with ease, and soon even with greater ease than the parenthesised version. It is a very natural thing to look at the general structure, identify “compact chunks of characters”, mentally group them, and then go and separately parse the internals of the chunks and how the chunks relate to each other. More natural than first scanning everything for ( and ), matching them up, building a mental tree, and then digging deeper.

Incidentally, there was a non-programmer present during my presentation, and while she did not openly contradict the dismissive groan of the audience, I later learned that she found this variant quite obvious to understand and easier to read than the parenthesised code.

Some FAQs about this:

  • What about an operator with space on one side but not on the other? I’d simply forbid that, and hence enforce readable code.
  • Do operator sections still require parenthesis? Yes, I’d say so.
  • Does this overrule operator precedence? Yes! a * b+c == a * (b+c).
  • What is a token? Good question, and I am not decided. In particular: Is a parenthesised expression a single token? If so, then (Succ a)+b * c parses as ((Succ a)+b) * c, otherwise it should probably simply be illegal.
  • Can we extend this so that one space binds tighter than two spaces, and so on? Yes we can, but really, we should not.
  • This is incompatible with Agda’s syntax! Indeed it is, and I really like Agda’s mixfix syntax. Can’t have everything.
  • Has this been done before? I have not seen it in any language, but Lewis Wall has blogged this idea before.

Well, let me know what you think!

Compose Conference talk video online

Published 2017-08-20 in sections English, Haskell.

Three months ago, I gave a talk at the Compose::Conference in New York about how Chris Smith and I added the ability to create networked multi-user programs to the educational Haskell programming environment CodeWorld, and finally the recording of the talk is available on YouTube (and is being discussed on reddit):

It was the talk where I got the most positive feedback afterwards, and I think this is partly due to how I created the presentation: Instead of showing static slides, I programmed the complete visual display from scratch as an “interaction” within the CodeWorld environment, including all transitions, a working embedded game of Pong and a simulated multi-player environment with adjustable message delays. I have put the code for the presentation online.

Chris and I have written about this for ICFP'17, and thanks to open access I can actually share the paper freely with you and under a CC license. If you come to Oxford you can see me perform a shorter version of this talk again.

Communication Failure

Published 2017-08-06 in sections English, Haskell.

I am still far from being a professor, but I recently got a glimpse of what awaits you in that role…

From: Sebastian R. <…@gmail.com>
To: joachim@cis.upenn.edu
Subject: re: Errors

I've spotted a basic error in your course on Haskell (https://www.seas.upenn.edu/~cis194/fall16/). Before I proceed, it's cool if you're not receptive to errors being indicated; I've come across a number of professors who would rather take offense than admit we're all human and thus capable of making mistakes... My goal is to find a resource that might be useful well into the future, and a good indicator of that is how responsive the author is to change.

In your introduction note you have written:

n contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

Howeverm there is no input CodeWorld in the code above. Have you been made aware of this error earlier?

Regards, ...

Nice. I like when people learn from my lectures. The introduction is a bit weird, but ok, maybe this guy had some bad experiences.

Strangely, I don’t see a mistake in the material, so I respond:

From: Joachim Breitner
To: Sebastian R. <…@gmail.com>
Subject: Re: Errors

Dear Sebastian,

thanks for pointing out errors. But the first piece of code under “Basic Haskell” starts with

{-# LANGUAGE OverloadedStrings #-}
import CodeWorld

so I am not sure what you are referring to.

Note that these are lecture notes, so you have to imagine a lecturer editing code live on stage along with it. If you only have the notes, you might have to infer a few things.

Regards, Joachim

A while later, I receive this response:

From: Sebastian R. <…@gmail.com>
To: Joachim Breitner
Subject: Re: Errors

Greetings, Joachim.

Kindly open the lecture slides and search for "input CodeWorld" to find the error; it is not in the code, but in the paragraph that implicitly refers back to the code.

You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from your lectures; they're not my words!

In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

This time around, I've highlighted the issue. I hope that made it easier for you to spot...

Nonetheless, I got my answer. Don't reply if you're going to fight tooth and nail about such a basic fix; it's simply a waste of both of our time. I'd rather learn from somewhere else...

On Tue, Aug 1, 2017 at 11:19 PM, Joachim Breitner wrote:

I am a bit reminded of Sean Spicer … “they’re not my words!” … but clearly I am missing something. And indeed I am: In the code snippet, I wrote – correctly – import CodeWorld, but in the text I had input CodeWorld. I probably did write LaTeX before writing the lecture notes. Well, glad to have that sorted out. I fixed the mistake and wrote back:

From: Joachim Breitner
To: Sebastian R. <…@gmail.com>
Subject: Re: Errors

Dear Sebastian,

nobody is fighting, and I see the mistake now: The problem is not that the line is not in the code, the problem is that there is a typo in the line and I wrote “input” instead of “import”.

Thanks for the report, although you did turn it into quite a riddle… a simple “you wrote import when it should have been import” would have been a better user of both our time.

Regards, Joachim

On Thursday, 03.08.2017, at 13:32 +1000, Sebastian R. wrote:

(And it seems I now made the inverse typo, writing “import“ instead of “input”.) Anyways, I did not think of this any more until a few days later, when I found this nice message in my mailbox:

From: Sebastian R. <…@gmail.com>
To: Joachim Breitner
Subject: Re: Errors

a simple “you wrote import when it should have been import” would have been a better user of both our time.

We're both programmers. How about I cut ALL of the unnecessary garbage and just tell you to s/import/input/ on that last quotation (the thing immediately before this paragraph, in case you didn't know).

I blatantly quoted the error, like this:

In your introduction note you have written:

n contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

Howeverm there is no input CodeWorld in the code above.

Since that apparently wasn't clear enough, in my second email to you I had to highlight it like so:

You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from your lectures; they're not my words!

In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

This time around, I've highlighted the issue. I hope that made it easier for you to spot...

I'm not sure if you're memeing at me or not now, but it seems either your reading comprehension, or your logical deduction skills might be substandard. Unfortunately, there isn't much either of us can do about that, so I'm happy to accept that some people will be so stupid; after all, it's to be expected and if we don't accept that which is to be expected then we live our lives in denial.

Happy to wrap up this discusson here, Seb...

On Fri, Aug 4, 2017 at 12:22 AM, Joachim Breitner wrote:

Well, I chose to be amused by this, and I am sharing my amusement with you.

How is coinduction the dual of induction?

Published 2017-07-27 in sections English, Haskell.

Earlier today, I demonstrated how to work with coinduction in the theorem provers Isabelle, Coq and Agda, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then colleague Denis Lohner: If coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I’d like to share this.

The following is mostly based on my naive understanding of coinduction, as I observe it in the implementation in Isabelle. I am sure that a different, more categorical presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.

Inductive lists

I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:

datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")

with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):

P nil → (∀x xs. P xs → P (cons x xs)) → ∀ xs. P xs

Coinductive lists

In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing

codatatype 'a llist = lnil | lcons (hd : 'a)  (tl : "'a llist")

we get the following coinduction principle:

(∀ xs ys.
    R xs ys → (xs = lnil) = (ys = lnil) ∧
               (xs ≠ lnil ⟶ ys ≠ lnil ⟶
                 hd xs = hd ys ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → xs = ys)

This is less scary than it looks at first. It tells you: “if you give me a relation R between lists which implies that either both lists are empty or both lists are nonempty, and furthermore, if both are non-empty, that they have the same head and tails related by R, then any two lists related by R are actually equal.”

If you think of the infinite list as a series of states of a computer program, then this is nothing other than a bisimulation.

So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate P, in the other a relation R, to point out just one difference.

Relation induction

To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle.

The datatype declaration automatically creates a relator:

rel_list :: ('a → 'b → bool) → 'a list → 'b list → bool

The definition of rel_list R xs ys is that xs and ys have the same shape (i.e. length), and that the corresponding elements are pairwise related by R. You might have defined this relation yourself at some time, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:

Q nil nil →
(∀x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys)) →
(∀xs ys. rel_list R xs ys → Q xs ys)
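
For intuition, here is the finite-list relator as a plain Haskell function; a sketch:

relList :: (a -> b -> Bool) -> [a] -> [b] -> Bool
relList _ []     []     = True
relList r (x:xs) (y:ys) = r x y && relList r xs ys
relList _ _      _      = False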

Note how similar this induction principle is in shape to the normal induction for lists above! And indeed, if we choose Q xs ys ↔ (P xs ∧ xs = ys) and R x y ↔ (x = y), then we obtain exactly that. In that sense, the relation induction is a generalization of the normal induction.

Relation coinduction

The same observation can be made in the coinductive world. Here, as well, the codatatype declaration introduces a function

rel_llist :: ('a → 'b → bool) → 'a llist → 'b llist → bool

which relates lists of the same shape with pairwise related elements – except that this one also relates infinite lists, and is therefore a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:

(∀xs ys.
    R xs ys → (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
                Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → rel_llist Q xs ys)

It is even more obvious that this is a generalization of the standard coinduction principle shown above: Just instantiate Q with equality, which turns rel_llist Q into equality on the lists, and you have the theorem above.
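
Operationally, the difference between the inductive and the coinductive relator shows up nicely in Haskell: a recursive check like the relList sketch above can refute relatedness of infinite lists in finite time, but can never confirm it.

-- False after one step: the heads already differ.
refuted :: Bool
refuted = relList (==) (repeat 1) (repeat 2)

-- relList (==) (repeat 1) (repeat 1) would loop forever; equality of
-- coinductive lists is established by a proof principle
-- (coinduction), not by computation.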

The duality

With our induction and coinduction principle generalized to relations, suddenly a duality emerges: if you turn around the implication in the conclusion of one, you get the conclusion of the other. This is an example of “cosomething is something with arrows reversed”.

But what about the premise(s) of the rules? What happens if we turn around the arrow here? Although slightly less immediate, it turns out that they are the same as well. To see that, we start with the premise of the coinduction rule, reverse the implication and then show it to be equivalent to the two premises of the induction rule:

(∀xs ys.
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
                Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { case analysis (the other two cases are vacuously true) }
  (∀xs ys.
    xs = lnil → ys = lnil →
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
                Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
∧ (∀xs ys.
    xs ≠ lnil ⟶ ys ≠ lnil ⟶
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
                Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { simplification }
  (∀xs ys. xs = lnil → ys = lnil → R xs ys)
∧ (∀x xs y ys.  R (cons x xs) (cons y ys) ← (Q x y ∧ R xs ys))
= { more rewriting }
  R nil nil
∧ (∀x xs y ys. Q x y → R xs ys → R (cons x xs) (cons y ys))

Conclusion

The coinduction rule is not the direct dual of the induction rule, but both are specializations of more general, relational proof methods, where the duality is clearly present.

More generally, this little excursion shows that it is often beneficial to think of types less as sets, and more as relations – this way of thinking is surprisingly fruitful, and led to proofs of parametricity and free theorems and other nice things.

Coinduction in Coq and Isabelle

Published 2017-07-27 in sections English, Digital World.

The DeepSpec Summer School is almost over, and I have had a few good discussions. One revolved around coinduction: what is it, how does it differ from induction, and how do you actually prove something? In the course of the discussion, I came up with a very simple coinductive exercise, and solved it both in Coq and Isabelle.

The task

Define the extended natural numbers coinductively. Define the min function and the ≤ relation. Show that min(n, m) ≤ n holds.

Coq

The definitions are straightforward. Note that in Coq, we use the same command to define a coinductive data type and a coinductively defined relation:

CoInductive ENat :=
  | N : ENat
  | S : ENat -> ENat.

CoFixpoint min (n : ENat) (m : ENat) :=
  match n, m with
  | S n', S m' => S (min n' m')
  | _, _       => N
  end.

CoInductive le : ENat -> ENat -> Prop :=
  | leN : forall m, le N m
  | leS : forall n m, le n m -> le (S n) (S m).

The lemma is specified as

Lemma min_le: forall n m, le (min n m) n.

and the proof method of choice to show that some coinductive relation holds is cofix. One would wish that the following proof would work:

Lemma min_le: forall n m, le (min n m) n.
Proof.
  cofix.
  destruct n, m.
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS.
    apply min_le.
Qed.

but we get the error message

Error:
In environment
min_le : forall n m : ENat, le (min n m) n
Unable to unify "le N ?M170" with "le (min N N) N".

Effectively, as Coq tries to figure out whether our proof is correct, i.e. type-checks, it stumbles over the equation min N N = N, and, like a kid scared of coinduction, it does not dare to “run” the min function. The reason it does not just “run” a CoFixpoint is that doing so too daringly might simply not terminate. So, as Adam Chlipala explains in a chapter of his book Certified Programming with Dependent Types, Coq reduces a cofixpoint only when it is the scrutinee of a match statement.

So we need to get a match statement in place. We can do so with a helper function:

Definition evalN (n : ENat) :=
  match n with | N => N
               | S n => S n end.

Lemma evalN_eq : forall n, evalN n = n.
Proof. intros. destruct n; reflexivity. Qed.

This function does not really do anything besides nudging Coq to actually evaluate its argument to a constructor (N or S _). We can use it in the proof to guide Coq, and the following goes through:

Lemma min_le: forall n m, le (min n m) n.
Proof.
  cofix.
  destruct n, m; rewrite <- evalN_eq with (n := min _ _).
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS.
    apply min_le.
Qed.

Isabelle

In Isabelle, definitions and types are very different things, so we use different commands to define ENat and le:

theory ENat imports Main begin

codatatype ENat = N | S ENat

primcorec min where
   "min n m = (case n of
       N ⇒ N
     | S n' ⇒ (case m of
        N ⇒ N
      | S m' ⇒ S (min n' m')))"

coinductive le where
  leN: "le N m"
| leS: "le n m ⟹ le (S n) (S m)"

There are actually many ways of defining min; I chose the one most similar to the one above. For more details, see the corec tutorial.

Now to the proof:

lemma min_le: "le (min n m) n"
proof (coinduction arbitrary: n m)
  case le
  show ?case
  proof(cases n)
    case N then show ?thesis by simp
  next
    case (S n') then show ?thesis
    proof(cases m)
      case N then show ?thesis by simp
    next
      case (S m')  with ‹n = _› show ?thesis
        unfolding min.code[where n = n and m = m]
        by auto
    qed
  qed
qed

The coinduction proof method produces this goal:

proof (state)
goal (1 subgoal):
 1. ⋀n m. (∃m'. min n m = N ∧ n = m') ∨
          (∃n' m'.
               min n m = S n' ∧
               n = S m' ∧
               ((∃n m. n' = min n m ∧ m' = n) ∨ le n' m'))

I chose to spell the proof out in the Isar proof language, where the outermost proof structure is done relatively explicitly, and I proceed by case analysis mimicking the min function definition.

In the cases where one argument of min is N, Isabelle’s simplifier (a term rewriting tactic, so to speak) can solve the goal automatically. This is because the primcorec command produces a bunch of lemmas, one of which states n = N ∨ m = N ⟹ min n m = N.

In the other case, we need to help Isabelle a bit to reduce the call to min (S n) (S m) using the unfolding method, where min.code contains exactly the equation that we used to specify min. Using just unfolding min.code would send this method into a loop, so we restrict it to the concrete arguments n and m. Then auto can solve the remaining goal (despite all the existential quantifiers).

Summary

Both theorem provers are able to prove the desired result. To me, Isabelle felt slightly more convenient: a lot of Coq infrastructure relies on the type checker being able to effectively evaluate expressions, which is tricky with cofixpoints, whereas in Isabelle evaluation plays a much less central role and rewriting is the crucial technique. One still cannot simply throw min.code into the simpset, but working with objects that do not evaluate easily or completely is less strange there.

Agda

I was challenged to do it in Agda. Here it is:

module ENat where

open import Coinduction

data ENat : Set where
  N : ENat
  S : ∞ ENat → ENat

min : ENat → ENat → ENat
min (S n') (S m') = S (♯ (min (♭ n') (♭ m')))
min _ _ = N

data le : ENat → ENat → Set where
  leN : ∀ {m} → le N m
  leS : ∀ {n m} → ∞ (le (♭ n) (♭ m)) → le (S n) (S m)

min_le : ∀ {n m} → le (min n m) n
min_le {S n'} {S m'} = leS (♯ min_le)
min_le {N}    {S m'} = leN
min_le {S n'} {N} = leN
min_le {N}    {N} = leN

I will refrain from commenting on it, because I do not really know what I have been doing here; it typechecks, and I refer you to the official documentation on coinduction in Agda. But let me note that I wrote this starting from plain inductive types and recursion, and added ∞, ♯ and ♭ until it worked.
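
Just for comparison, here is a Haskell rendering of the same definitions. This is only a sketch (min' avoids the clash with Prelude’s min): laziness makes every Haskell data type behave coinductively, but the lemma itself cannot even be stated, let alone proved, inside Haskell.

-- Extended naturals in Haskell.
data ENat = N | S ENat

-- Pattern matching forces only one layer, so min' is productive.
min' :: ENat -> ENat -> ENat
min' (S n) (S m) = S (min' n m)
min' _     _     = N

-- Thanks to laziness, the infinite extended natural is a legal value.
inf :: ENat
inf = S inf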

The Micro Two Body Problem

Published 2017-07-06 in sections English.

Inspired by the recent PhD comic “Academic Travel” and the not-so-recent xkcd comic “Movie Narrative Charts”, I created the following graphic, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).

Two bodies traveling the world
