Joachim Breitner

Blog

The diameter of German+English

Published 2018-05-23 in sections English, Digital World.

Languages never map directly onto each other. The English word fresh can mean frisch or frech, but frisch can also mean cool. Jumping from one word to another like this yields entertaining sequences that take you to completely different places. Here is one I came up with:

frech – fresh – frisch – cool – abweisend – dismissive – wegwerfend – trashing – verhauend – banging – Geklopfe – knocking – …

And I could go on … but how far? So here is a little experiment I ran:

  1. I obtained a German-English dictionary. Conveniently, after registration, you can get dict.cc’s translation file, which is simply a text file with three columns: German, English, Word form.

  2. I wrote a program that takes these words and first canonicalizes them a bit: removing attributes like [ugs.], [regional], {f}, the to in front of verbs, and other embellishments.

  3. I created the undirected, bipartite graph of all these words, with an edge for each translation. This is a pretty big graph – ~750k words in each language, a million edges. A path in this graph is precisely a sequence like the one above.

  4. In this graph, I tried to find the diameter. The diameter of a graph is the longest shortest path, i.e. the largest distance between any two nodes that are connected at all. (A sketch of one way to search for it follows below.)
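
For the curious, here is a minimal sketch – not the program I actually ran – of how such a search could look in Haskell, assuming the dictionary has already been parsed into an adjacency map. Computing the exact diameter means repeating a BFS from every node; the double sweep below is merely a cheap way to find a long shortest path, i.e. a lower bound on the diameter:

import qualified Data.Map.Strict as M

type Graph = M.Map String [String]

-- Plain breadth-first search: distance from a start node to every reachable node.
bfs :: Graph -> String -> M.Map String Int
bfs g start = go (M.singleton start 0) [start]
  where
    go dist []       = dist
    go dist (v : vs) = go dist' (vs ++ new)
      where
        d            = dist M.! v
        (dist', new) = foldl visit (dist, []) (M.findWithDefault [] v g)
        visit (ds, q) w
          | w `M.member` ds = (ds, q)
          | otherwise       = (M.insert w (d + 1) ds, q ++ [w])

-- The node farthest from the start, together with its distance.
farthest :: M.Map String Int -> (String, Int)
farthest = M.foldrWithKey (\w d best -> if d > snd best then (w, d) else best) ("", -1)

-- Double sweep: BFS to some far node u, then BFS again from u.
-- The distance found is a lower bound on the diameter.
doubleSweep :: Graph -> String -> (String, String, Int)
doubleSweep g s =
  let (u, _) = farthest (bfs g s)
      (v, d) = farthest (bfs g u)
  in (u, v, d)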

Because the graph is big (and my code maybe not fully optimized), it ran for a few hours, but here it is: The English expression be annoyed by sb. and the German noun Icterus are related by 55 translations. Here is the full list:

  • be annoyed by sb.
  • durch jdn. verärgert sein
  • be vexed with sb.
  • auf jdn. böse sein
  • be angry with sb.
  • jdm. böse sein
  • have a grudge against sb.
  • jdm. grollen
  • bear sb. a grudge
  • jdm. etw. nachtragen
  • hold sth. against sb.
  • jdm. etw. anlasten
  • charge sb. with sth.
  • jdn. mit etw. [Dat.] betrauen
  • entrust sb. with sth.
  • jdm. etw. anvertrauen
  • entrust sth. to sb.
  • jdm. etw. befehlen
  • tell sb. to do sth.
  • jdn. etw. heißen
  • call sb. names
  • jdn. beschimpfen
  • abuse sb.
  • jdn. traktieren
  • pester sb.
  • jdn. belästigen
  • accost sb.
  • jdn. ansprechen
  • address oneself to sb.
  • sich an jdn. wenden
  • approach
  • erreichen
  • hit
  • Treffer
  • direct hit
  • Volltreffer
  • bullseye
  • Hahnenfuß-ähnlicher Wassernabel
  • pennywort
  • Mauer-Zimbelkraut
  • Aaron's beard
  • Großkelchiges Johanniskraut
  • Jerusalem star
  • Austernpflanze
  • goatsbeard
  • Geißbart
  • goatee
  • Ziegenbart
  • buckhorn plantain
  • Breitwegerich / Breit-Wegerich
  • birdseed
  • Acker-Senf / Ackersenf
  • yellows
  • Gelbsucht
  • icterus
  • Icterus

Pretty neat!

So what next?

I could try to obtain an even longer chain by forgetting whether a word is English or German (and lower-casing everything), thus allowing wild jumps like hat – hut – hütte – lodge.

Or write a tool where you can enter two arbitrary words and it finds such a path between them, if there exists one. Unfortunately, it seems that the terms of the dict.cc data dump would not allow me to create such a tool as a web site (but maybe I can ask).

Or I could throw in additional languages!

What would you do?

Update (2018-06-17):

I ran the code again, this time lower-casing all words and allowing false-friend jumps, as suggested above. The resulting graph has – surprisingly – precisely the same diameter (55), but with a partly different list:

  • peyote
  • peyote
  • mescal
  • meskal
  • mezcal
  • blaue agave
  • maguey
  • amerikanische agave
  • american agave
  • jahrhundertpflanze
  • century plant
  • fächerlilie
  • tumbleweed
  • weißer fuchsschwanz
  • common tumbleweed
  • acker-fuchsschwanz / ackerfuchsschwanz
  • rough pigweed
  • grünähriger fuchsschwanz
  • love-lies-bleeding
  • stiefmütterchen
  • pansy
  • schwuchtel
  • fruit
  • ertrag
  • gain
  • vorgehen
  • approach
  • sich an jdn. wenden
  • address oneself to sb.
  • jdn. ansprechen
  • accost sb.
  • jdn. belästigen
  • pester sb.
  • jdn. traktieren
  • abuse sb.
  • jdn. beschimpfen
  • call sb. names
  • jdn. etw. heißen
  • tell sb. to do sth.
  • jdm. etw. befehlen
  • entrust sth. to sb.
  • jdm. etw. anvertrauen
  • entrust sb. with sth.
  • jdn. mit etw. [dat.] betrauen
  • charge sb. with sth.
  • jdm. etw. zur last legen
  • hold sth. against sb.
  • jdm. etw. nachtragen
  • bear sb. a grudge
  • jdm. grollen
  • have a grudge against sb.
  • jdm. böse sein
  • be angry with sb.
  • auf jdn. böse sein
  • be mad at sb.
  • auf jdn. einen (dicken) hals haben

Note that there is not actually a false friend in this list – it seems that adding the edges just changed the order of edges in the graph representation, and my code happened to find a different diameter path.

Proof reuse in Coq using existential variables

Published 2018-05-18 in sections English, Coq.

This is another technical post that is of interest only to Coq users.

TL;DR: Using existential variables for hypotheses allows you to easily refactor a complicated proof into an induction scheme and the actual proofs.

Setup

As a running example, I will use a small theory of “bags”, which you can think of as lists represented as trees, to allow an O(1) append operation:

Require Import Coq.Arith.Arith.
Require Import Psatz.
Require FunInd.

(* The data type *)
Inductive Bag a : Type :=
  | Empty : Bag a
  | Unit  : a -> Bag a
  | Two   : Bag a -> Bag a -> Bag a.

Arguments Empty {_}.
Arguments Unit {_}.
Arguments Two {_}.

Fixpoint length {a} (b : Bag a) : nat :=
  match b with
  | Empty     => 0
  | Unit _    => 1
  | Two b1 b2 => length b1 + length b2
  end.

(* A smart constructor that ensures that a [Two] never
   has [Empty] as subtrees. *)
Definition two {a} (b1 b2 : Bag a) : Bag a := match b1 with
  | Empty => b2
  | _ => match b2 with | Empty => b1
                       | _ => Two b1 b2 end end.

Lemma length_two {a} (b1 b2 : Bag a) :
  length (two b1 b2) = length b1 + length b2.
Proof. destruct b1, b2; simpl; lia. Qed.

(* A first non-trivial function *)
Function take {a : Type} (n : nat) (b : Bag a) : Bag a :=
  if n =? 0
  then Empty
  else match b with
       | Empty     => b
       | Unit x    => b
       | Two b1 b2 => two (take n b1) (take (n - length b1) b2)
       end.

The theorem

The theorem that I will be looking at in this proof describes how length and take interact:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).

Before I dive into it, let me point out that this example itself is too simple to warrant the techniques that I will present in this post. I have to rely on your imagination to scale this up to appreciate the effect on significantly bigger proofs.

Naive induction

How would we go about proving this lemma? Surely, induction is the way to go! And indeed, this is provable using induction (on the Bag) just fine:

Proof.
  intros.
  revert n.
  induction b; intros n.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + rewrite Nat.min_0_r. reflexivity.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. lia.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

But there is a problem: A proof by induction on the Bag argument immediately creates three subgoals, one for each constructor. But that is not how take is defined: it first checks the value of n, independent of the constructor. This means that we have to do the case split and the proof for the case n = 0 three times, although they are identical. It’s a one-line proof here, but imagine something bigger...

Proof by fixpoint

Can we refactor the proof to handle the case n = 0 first? Yes, but not with a simple invocation of the induction tactic. We could do well-founded induction on the length of the argument, or we can do the proof using the more primitive fix tactic. The latter is a bit hairy – you won’t know whether your proof is accepted until you do Qed (or check with Guarded) – but when it works it can yield some nice proofs.

Proof.
  intros a.
  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n. rewrite Nat.min_0_l. reflexivity.
  + destruct b.
    * rewrite Nat.min_0_r. reflexivity.
    * simpl. lia.
    * simpl. rewrite length_two, !IH. lia.
Qed.

Nice: we eliminated the duplication of proofs!

A functional induction lemma

Again, imagine that we jumped through more hoops here ... maybe some well-founded recursion with a tricky size measure and complex proofs that the measure decreases ... or maybe you need to carry around an invariant about your arguments and you have to work hard to satisfy the assumption of the induction hypothesis.

As long as you do only one proof about take, that is fine. As soon as you do a second proof, you will notice that you have to repeat all of that, and it can easily make up most of your proof...

Wouldn’t it be nice if you can do the common parts of the proofs only once, obtain a generic proof scheme that you can use for (most) proofs about take, and then just fill in the blanks?

Incidentally, the Function command provides precisely that:

take_ind
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall (n : nat) (b : Bag a), (n =? 0) = true -> P n b Empty) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> b = Empty -> P n Empty b) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> forall x : a, b = Unit x -> P n (Unit x) b) ->
       (forall (n : nat) (b : Bag a),
        (n =? 0) = false ->
        forall b1 b2 : Bag a,
        b = Two b1 b2 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

which is great if you can use Function (although not perfect – we’d rather see n = 0 instead of (n =? 0) = true), but often Function is not powerful enough to define the function you care about.

Extracting the scheme from a proof

We could define our own take_ind' by hand, but that is a lot of work, we may not get it right easily, and when we change our functions, there is now this big proof statement to update.

Instead, let us use existential variables, whose types Coq infers from how we use them, so we don’t have to declare them. Unfortunately, Coq does not support writing just

Lemma take_ind':
  forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
  forall (IH1 : ?) (IH2 : ?) (IH3 : ?) (IH4 : ?),
  forall n b, P n b (take n b).

where we just leave out the types of the assumptions (Isabelle does allow this), but we can fake it using a generic technique.

We begin by stating an auxiliary lemma using a sigma type to say “there exist some assumptions that are sufficient to show the conclusion”:

Lemma take_ind_aux:
  forall a (P : _ -> _ -> _ -> Prop),
  { Hs : Prop |
    Hs -> forall n (b : Bag a), P n b (take n b)
  }.

We use the eexists tactic (“existential exists”, https://coq.inria.fr/refman/proof-engine/tactics.html#coq:tacv.eexists) to construct the sigma type without committing to the type of Hs yet.

Proof.
  intros a P.
  eexists.
  intros Hs.

This gives us an assumption Hs : ?Hs – note the existential type. We need four of those, which we can achieve by writing

  pose proof Hs as H1. eapply proj1 in H1. eapply proj2 in Hs.
  pose proof Hs as H2. eapply proj1 in H2. eapply proj2 in Hs.
  pose proof Hs as H3. eapply proj1 in H3. eapply proj2 in Hs.
  rename Hs into H4.

We now have this goal state:

1 subgoal
a : Type
P : nat -> Bag a -> Bag a -> Prop
H4 : ?Goal2
H1 : ?Goal
H2 : ?Goal0
H3 : ?Goal1
______________________________________(1/1)
forall (n : nat) (b : Bag a), P n b (take n b)

At this point, we start reproducing the proof of length_take: The same approach to induction, the same case splits:

  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n.
    revert b.
    refine H1.
  + rename n0 into Hnot_null.
    destruct b.
    * revert n Hnot_null.
      refine H2.
    * rename a0 into x.
      revert x n Hnot_null.
      refine H3.
    * assert (IHb1 : P n b1 (take n b1)) by apply IH.
      assert (IHb2 : P (n - length b1) b2 (take (n - length b1) b2)) by apply IH.
      revert n b1 b2 Hnot_null IHb1 IHb2.
      refine H4.
Defined. (* Important *)

Inside each case, we move all relevant hypotheses into the goal using revert and refine with the corresponding assumption, thus instantiating it. In the recursive case (Two), we assert that P holds for the subterms, by induction.

It is important to end this proof with Defined, and not Qed, as we will see later.

In a next step, we can remove the sigma type:

Definition take_ind' a P := proj2_sig (take_ind_aux a P).

The type of take_ind' is as follows:

take_ind'
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       proj1_sig (take_ind_aux a P) ->
       forall n b, P n b (take n b)

This looks almost like an induction lemma. The assumptions of this lemma have the not very helpful type proj1_sig (take_ind_aux a P), but we can already use this to prove length_take:

Theorem length_take:
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind' with (P := fun n b r => length r = min n (length b)).
  repeat apply conj; intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

In this case I have to explicitly state P where I invoke take_ind', because Coq cannot figure out this instantiation on its own (it requires higher-order unification, which is undecidable and unpredictable). In other cases I had more luck.

After I apply take_ind', I have this proof goal:

______________________________________(1/1)
proj1_sig (take_ind_aux a (fun n b r => length r = min n (length b)))

which is the type that Coq inferred for Hs above. We know that this is a conjunction of a bunch of assumptions, and we can split it as such, using repeat apply conj. At this point, Coq needs to look inside take_ind_aux; this would fail if we used Qed to conclude the proof of take_ind_aux.

This gives me four goals, one for each case of take, and the remaining proof really only deals with the specifics of length_take – no more worrying about getting the induction right and doing the case split the right way.

Also note that, very conveniently, Coq uses the same name for the induction hypotheses IHb1 and IHb2 that we used in take_ind_aux!

Making it prettier

It may be a bit confusing to have this proj1_sig in the type, especially when working in a team where others will use your induction lemma without knowing its internals. But we can resolve that, and also turn the conjunctions into normal arrows, using a bit of tactic support. This is completely generic, so if you follow this procedure, you can just copy most of that:

Lemma uncurry_and: forall {A B C}, (A /\ B -> C) -> (A -> B -> C).
Proof. intros. intuition. Qed.
Lemma under_imp:   forall {A B C}, (B -> C) -> (A -> B) -> (A -> C).
Proof. intros. intuition. Qed.
Ltac iterate n f x := lazymatch n with
  | 0 => x
  | S ?n => iterate n f uconstr:(f x)
end.
Ltac uncurryN n x :=
  let n' := eval compute in n in
  lazymatch n' with
  | 0 => x
  | S ?n => let uc := iterate n uconstr:(under_imp) uconstr:(uncurry_and) in
            let x' := uncurryN n x in
            uconstr:(uc x')
end.

With this in place, we can define our final proof scheme lemma:

Definition take_ind'' a P
  := ltac:(let x := uncurryN 3 (proj2_sig (take_ind_aux a P)) in exact x).
Opaque take_ind''.

The type of take_ind'' is now exactly what we’d wish for: All assumptions spelled out, and the n =? 0 already taken care of (compare this to the take_ind provided by the Function command above):

take_ind''
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall b : Bag a, P 0 b Empty) ->
       (forall n : nat, n <> 0 -> P n Empty Empty) ->
       (forall (x : a) (n : nat), n <> 0 -> P n (Unit x) (Unit x)) ->
       (forall (n : nat) (b1 b2 : Bag a),
        n <> 0 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

At this point we can mark take_ind'' as Opaque, to hide how we obtained this lemma.

Our proof does not change a lot; we merely no longer have to use repeat apply conj:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind'' with (P := fun n b r => length r = min n (length b)); intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

Is it worth it?

It was in my case: Applying this trick in our ongoing work of verifying parts of the Haskell compiler GHC separated a somewhat involved proof into a re-usable proof scheme (go_ind), making the actual proofs (go_all_WellScopedFloats, go_res_WellScoped) much neater and to the point. It saved “only” 60 lines (if I don’t count the 20 “generic” lines above), but the pay-off will increase as I do even more proofs about this function.

Avoid the dilemma of the trailing comma

Published 2018-04-30 in sections English, Haskell.

The Haskell syntax uses comma-separated lists in various places and, in contrast to some other programming languages, does not allow a trailing comma. If everything goes on one line you write

  (foo, bar, baz)

and everything is nice.

Lining up

But if you want to have one entry on each line, then the obvious plan

  (foo,
   bar,
   baz
  )

is aesthetically unpleasing and moreover, extending the list by one to

  (foo,
   bar,
   baz,
   quux
  )

modifies two lines, which produces less pretty diffs.

Because it is much more common to append to lists rather than to prepend, Haskellers have developed the idiom of leading comma:

  ( foo
  , bar
  , baz
  , quux
  )

which looks strange until you are used to it, but solves the problem of appending to a list. And we see this idiom in many places:

  • In Cabal files:

      build-depends: base >= 4.3 && < 5
                   , array
                   , deepseq >= 1.2 && < 1.5
  • In module headers:

    {-# LANGUAGE DefaultSignatures
               , EmptyCase
               , ExistentialQuantification
               , FlexibleContexts
               , FlexibleInstances
               , GADTs
               , InstanceSigs
               , KindSignatures
               , RankNTypes
               , ScopedTypeVariables
               , TemplateHaskell
               , TypeFamilies
               , TypeInType
               , TypeOperators
               , UndecidableInstances #-}

Think outside the list!

I started to avoid this pattern where possible. And it is possible everywhere: instead of having one declaration with a list, you can just have multiple declarations. I.e.:

  • In Cabal files:

      build-depends: base >= 4.3 && < 5
      build-depends: array
      build-depends: deepseq >= 1.2 && < 1.5
  • In module headers:

    {-# LANGUAGE DefaultSignatures #-}
    {-# LANGUAGE EmptyCase #-}
    {-# LANGUAGE ExistentialQuantification #-}
    {-# LANGUAGE FlexibleContexts #-}
    {-# LANGUAGE FlexibleInstances #-}
    {-# LANGUAGE GADTs #-}
    {-# LANGUAGE InstanceSigs #-}
    {-# LANGUAGE KindSignatures #-}
    {-# LANGUAGE RankNTypes #-}
    {-# LANGUAGE ScopedTypeVariables #-}
    {-# LANGUAGE TemplateHaskell #-}
    {-# LANGUAGE TypeFamilies #-}
    {-# LANGUAGE TypeInType #-}
    {-# LANGUAGE TypeOperators #-}
    {-# LANGUAGE UndecidableInstances #-}

It is a bit heavier, but it has a number of advantages:

  1. Both appending and prepending works without touching other lines.
  2. It is visually more homogeneous, making it – despite the extra words – easier to spot mistakes visually.
  3. You can easily sort the declarations alphabetically with your editor.
  4. Especially in Cabal files: If you have a syntax error in your dependency specification (which I always have, writing << instead of < due to my Debian background), cabal will actually give you a helpful error location – it only ever tells you which build-depends stanza was wrong, so if you have only one, that’s not helpful.

What when it does not work?

Unfortunately, not every list in Haskell can have that treatment, and that’s why the recent GHC proposal on ExtraCommas wants to lift the restriction. In particular, it wants to allow trailing commas in subexport lists:

module Foo
    ( Foo(
        A,
        B,
      ),
      fromFoo,
    )

(Weirdly, export lists already allow trailing commas.) An alternative here might be to write

module Foo
    ( Foo(A),
      Foo(B),
      fromFoo,
    )

and teach the compiler to not warn about the duplicate export of the Foo type.

For plain lists, this idiom can be useful:

list :: [Int]
list = let (>>) = (++) in do
   [ 1 ]
   [ 2 ]
   [ 3 ]
   [ 4 ]

It requires RebindableSyntax, so I do not recommend it for regular code, but it can be useful in a module that is dedicated to hold some generated data or configuration. And of course it works with any binary operator, not just (++).
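
For example, the same trick can assemble a Map – the config table below is a made-up example of mine; note that RebindableSyntax implies NoImplicitPrelude, hence the explicit import:

{-# LANGUAGE RebindableSyntax #-}
import Prelude
import qualified Data.Map as Map

-- Each do-statement contributes one map; (>>) is rebound to the
-- left-biased Map.union, so earlier entries win on duplicate keys.
config :: Map.Map String Int
config = let (>>) = Map.union in do
   Map.fromList [("width",  80)]
   Map.fromList [("height", 24)]
   Map.fromList [("tabstop", 8)]

As with (++), each line stays self-contained, so appending, prepending and reordering entries never touch neighbouring lines.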

Verifying local definitions in Coq

Published 2018-04-22 in sections English, Coq.

TL;DR: We can give top-level names to local definitions, so that we can state and prove stuff about them without having to rewrite the programs.

When a Haskeller writes Coq

Imagine you teach Coq to a Haskell programmer, and give them the task of pairing each element in a list with its index. The Haskell programmer might have

addIndex :: [a] -> [(Integer, a)]
addIndex xs = go 0 xs
  where go n [] = []
        go n (x:xs) = (n,x) : go (n+1) xs

in mind and write this Gallina function (Gallina is the programming language of Coq):

Require Import Coq.Lists.List.
Import ListNotations.

Definition addIndex {a} (xs : list a) : list (nat * a) :=
  let fix go n xs := match xs with
                     | []    => []
                     | x::xs => (n, x) :: go (S n) xs
                     end
  in go 0 xs.

Alternatively, imagine you are using hs-to-coq to mechanically convert the Haskell definition into Coq.

When a Coq user tries to verify that

Now your task is to prove something about this function, for example

Theorem addIndex_spec:
  forall {a} n (xs : list a),
  nth n (map fst (addIndex xs)) n = n.

If you just have learned Coq, you will think “I can do this, this surely holds by induction on xs.” But if you have a bit more experience, you will already see a problem with this (if you do not see the problem yet, I encourage you to stop reading, copy the few lines above, and try to prove it).

The problem is that – as so often – you have to generalize the statement for the induction to go through. The theorem as stated says something about addIndex or, in other words, about go 0. But in the inductive case, you will need some information about go 1. In fact, you need a lemma like this:

Lemma go_spec:
  forall {a} n m k (xs : list a), k = n + m ->
  nth n (map fst (go m xs)) k = k.

But go is not a (top-level) function! How can we fix that?

  • We can try to awkwardly work around not having a name for go in our proofs, and essentially prove go_spec inside the proof of addIndex_spec. This might work in this small case, but does not scale up to larger proofs.
  • We can ask the programmer to avoid using local functions, and first define go as a top-level fixed point. But maybe we don’t want to bother them because of that. (Or, more likely, we are using hs-to-coq and that tool stubbornly tries to make the output as similar to the given Haskell code as possible.)
  • We can copy’n’paste the definition of go and make a separate, after-the-fact top-level definition. But this is not nice from a maintenance point of view: If the code changes, we have to update this copy.
  • Or we apply this one weird trick...

The weird trick

We can define go after-the-fact, but instead of copy’n’pasting the definition, we can use Coq’s tactics to define it. Here it goes:

Definition go {a} := ltac:(
  let e := eval cbv beta delta [addIndex] in (@addIndex a []) in
  (* idtac e; *)
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

Let us take it apart:

  1. We define go, and give the parameters that go depends upon. Note that of the two parameters of addIndex, the definition of go only depends on (“captures”) a, but not xs.
  2. We do not give a type to go. We could, but that would again just be copying information that is already there.
  3. We define go via an ltac expression: Instead of a term we give a tactic that calculates the term.
  4. This tactic first binds e to the body of addIndex. To do so, it needs to pass enough arguments to addIndex. The concrete value of the list argument does not matter, so we pass []. The term @addIndex a [] is now evaluated with the evaluation flags eval cbv beta delta [addIndex], which say “unfold addIndex and do beta reduction, but nothing else”. In particular, we do not do zeta reduction, which would reduce the let go := … definition. (The user manual very briefly describes these flags.)
  5. The idtac e line can be used to peek at e, for example when the next tactic fails. We can use this to check that e really is of the form let fix go := … in ….
  6. The lazymatch line matches e against the pattern let x := ?def in _, and binds the definition of go to the name def.
  7. And the exact def tactic tells Coq to use def as the definition of go.

We now have defined go, of type go : forall {a}, nat -> list a -> list (nat * a), and can state and prove the auxiliary lemma:

Lemma go_spec:
  forall {a} n m k (xs : list a), k = n + m ->
  nth n (map fst (go m xs)) k = k.
Proof.
  intros ?????.
  revert n m k.
  induction xs; intros; destruct n; subst; simpl.
  1-3:reflexivity.
  apply IHxs; lia.
Qed.

When we come to the theorem about addIndex, we can play a little trick with fold to make the proof goal pretty:

Theorem addIndex_spec:
  forall {a} n (xs : list a),
  nth n (map fst (addIndex xs)) n = n.
Proof.
  intros.
  unfold addIndex.
  fold (@go a).
  (* goal here: nth n (map fst (go 0 xs)) n = n *)
  apply go_spec; lia.
Qed.

Multiple local definitions

The trick extends to multiple local definitions, but needs some extra considerations to ensure that terms are closed. A bit contrived, but let us assume that we have this function definition:

Definition addIndex' {a} (xs : list a) : list (nat * a) :=
  let inc := length xs in
  let fix go n xs := match xs with
                     | []    => []
                     | x::xs => (n, x) :: go (inc + n) xs
                     end in
  go 0 xs.

We now want to give names to inc and to go. I like to use a section to collect the common parameters, but that is not essential here. The trick above works flawlessly for inc:

Section addIndex'.
Context {a} (xs : list a).

Definition inc := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

But if we try it for go', like so:

Definition go' := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := _ in let y := ?def in _ =>
    exact def
  end).

we get “Ltac variable def depends on pattern variable name x which is not bound in current context”. To fix this, we use higher-order pattern matching (@?def) to substitute “our” inc for the local inc:

Definition go' := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := _ in let y := @?def x in _ =>
    let def' := eval cbv beta in (def inc) in
    exact def'
  end).

instead. We have now defined both inc and go' and can use them in proofs about addIndex':

Theorem addIndex_spec':
  forall n, nth n (map fst (addIndex' xs)) n = n * length xs.
Proof.
  intros.
  unfold addIndex'.
  fold inc go'. (* order matters! *)
  (* goal here: nth n (map fst (go' 0 xs)) n = n * inc *)

Reaching into a match

This trick also works when the local definition we care about is inside a match statement. Consider:

Definition addIndex_weird {a} (oxs : option (list a))
  := match oxs with
     | None => []
     | Some xs =>
       let fix go n xs := match xs with
                          | []    => []
                          | x::xs => (n, x) :: go (S n) xs
                          end in
       go 0 xs
     end.

Definition go_weird {a} := ltac:(
  let e := eval cbv beta match delta [addIndex_weird]
           in (@addIndex_weird a (Some [])) in
  idtac e;
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

Note the addition of match to the list of evaluation flags passed to cbv.

Conclusion

While local definitions are idiomatic in Haskell (in particular thanks to the where syntax), they are usually avoided in Coq, because they get in the way of verification. If, for some reason, one is stuck with such definitions, then this trick presents a reasonable way out.

Reservoir sampling with few random bits

Published 2018-03-25 in sections English, Math.

Assume you are Nisus, the “one cross each” guy in Monty Python’s “Life of Brian” movie. Condemned prisoners file past you one by one, on their way to crucifixion. Now it happens that on this particular day, you feel both benevolent and very indecisive. Because you feel benevolent, you want to liberate one of the condemned. But because you feel indecisive, you don’t want to pick the lucky one yourself, so you have to leave it to chance. And you don’t want anybody to have grounds to complain, so you want to give each prisoner exactly the same chance.

This would be a relatively simple task if you knew the number of prisoners, n, ahead of time: Pick a random number k ∈ {1, …, n}, count while you hand out the crosses, and tell the kth prisoner to get lost.

But alas, the Roman judiciary system is big and effective, and you have no idea how many prisoners will emerge from the dungeons today. So you might plan to just let them all wait next to you while you count them (to figure out what n is), but that won’t work either: There is not enough space to have them waiting, and some might try to escape in the confusion before you are done counting. So in fact, you can only have one prisoner waiting at a time. As soon as the next one comes, you have to send one of them towards execution.

This is still a solvable problem, and the solution is reservoir sampling: You let the first prisoner wait. When the second one comes, you pick one of the two with probability ½ each, and let him wait. When the third one comes, with probability ⅔ you send him on, and with probability ⅓ you let him wait. The fourth is sent forth with probability ¾ and kept waiting with probability ¼. And so on: the kth prisoner is kept waiting with probability 1/k. When finally the dungeon master tells you that all prisoners have emerged, whoever is waiting next to you is the lucky guy.

Before you do that, you should convince yourself that this is fair: The kth prisoner is the lucky one if you let him wait and, from then on, all subsequent prisoners are sent forth. The probability of this happening is 1/k · k/(k+1) · (k+1)/(k+2) · ⋯ · (n−1)/n which, after a number of cancellations, resolves to 1/n. Because k no longer shows up in this calculation, the probability is the same for everybody, and hence fair.
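
In code, the procedure is short. Here is a minimal Haskell sketch of this classic reservoir-sampling step, with a library RNG standing in for Nisus’s coin (pickOne is my name for it):

import System.Random (randomRIO)

-- Keep the k-th element of the stream with probability 1/k; whoever
-- is still kept when the stream ends is the uniformly chosen one.
pickOne :: [a] -> IO (Maybe a)
pickOne = go Nothing (1 :: Int)
  where
    go kept _ []       = return kept
    go kept k (x : xs) = do
      r <- randomRIO (1, k)   -- uniform in {1, …, k}
      go (if r == 1 then Just x else kept) (k + 1) xs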

How to not make a decision

A fair RNG

So far so good. But the story misses an important point: Where do you get your randomness from? Just picking randomly is hardly random. Luckily, you just have been given your guerdon, and you have a Roman coin in your pocket. Let us assume the coin is perfectly fair: With probability ½ it shows the head of Libertas, and with probability ½ a bunch of horses. It is easy to use this coin to distinguish between the first and second prisoners, but when the third comes out, things get hairy.

You can try to throw the coin many times; say, 10 times. This has 2¹⁰ = 1024 outcomes, which we can interpret as a number between 1 and 1024. If you give the third prisoner his freedom when this number is between 1 and 341, and you give him a cross if the number is between 342 and 1024, then you made a pretty fair call: 341/1024 is almost exactly ⅓. Not yet fully satisfying, but let’s go with it for now.

There is, however, another problem with this: You have to throw the coin a lot of times: For each prisoner, you toss it 10 times! This is too slow, and the plebs will get impatient. And it clearly is wasteful. The great philosopher Shannonus taught you that throwing the coin ten times gives you 10 bits of entropy, but you actually need only −(⅓ · log₂(⅓) + ⅔ · log₂(⅔)) = 0.918… bits to make this ⅓-vs-⅔ decision. We should use those left-over 9.08 bits! How do we do that?

Unreal real randomness

To come up with a better (and maybe even optimal) plan, imagine for a moment that you had a source of randomness that gives you a truly random real number in the range (0, 1), uniformly distributed. That would clearly solve your problem: when prisoner k emerges, you draw a random number r and let this prisoner wait if r < 1/k.

But you don’t actually have to draw a random number more than once: It suffices to pick a single r ∈ (0, 1) at the very beginning. You just have to be a bit cleverer when deciding whether to keep the kth prisoner next to you. Let’s go through it step by step:

  • When Augustus, the first prisoner, comes out, you simply keep him.
  • When Brutus, the second prisoner, comes out, you keep him with probability ½. You do that if r < ½. You also start some bookkeeping, by remembering the range of r that has led to this outcome; in this case, the range is (0, ½). If you picked Augustus, the range is (½,1).
  • Now Claudius comes out, and you want to keep him with probability ⅓. If the range that you remembered is (0, ½), you keep him if r < ⅙. Similarly, if the range is (½, 1), you keep him if r < ½ + ⅙ = ⁴⁄₆.

    The currently remembered range is now (⁴⁄₆, 1) if Augustus is waiting, (⅙, ½) if Brutus is waiting, and either (0, ⅙) or (½, ⁴⁄₆) if Claudius is waiting. Notice that the ranges are not all of the same size, but if you add the lengths of each prisoner’s ranges, you get ⅓ for each of them, which means that every prisoner has the same chance to be waiting so far.
  • Finally, Decimus shows up. You repeat this procedure: Look at the current interval; if r is in the left quarter of it, Decimus stays, otherwise he leaves.

I tried to visualize this, and came up with this. Pick r, locate the right spot on the x axis, and you can read off, by going down, who is waiting next to you.

A table for Nisus

Back to using coins

With this strategy in mind, we can go back to using a coin. Now, a single coin flip does not give us r. 1000 coin flips don’t either. But if we could flip the coin an infinite number of times, then we could get r out of this: The first coin flip decides the first bit after the binary point (head is 0, horses are 1), the second flip decides the second bit, and so on.

But note that you don’t need to know r precisely. To decide whether Augustus or Brutus stays, you only need to know whether r is in the left or the right half – and you know that after the first coin flip. You can also tell whether Claudius stays as soon as you have thrown the coin often enough to determine whether r < ⅙. This might already be the case after a single coin throw (if it shows horses), or it might take a few more. The likelihood that you still cannot decide goes down exponentially with every additional throw, so with probability 1, you will come to a conclusion eventually.

And the good thing is: If you were unlucky and had to throw the coin very often, then you learned a lot about r, and you can decide the next few prisoners without throwing a coin at all! In this sense, we managed to use “left-over entropy”.

And the other good thing is: There are no rounding errors any more. Every prisoner has the same chance to be freed.
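
To make the bookkeeping concrete, here is a small Haskell sketch of this scheme – my own reconstruction, not the code behind the animation below. The coin flips arrive as a lazy stream of Booleans spelling out the binary digits of r, and exact Rational arithmetic avoids any rounding:

-- Consume coin flips to narrow the known interval [lo, hi) of r,
-- until it lies entirely below or entirely above the threshold t.
decide :: Rational -> (Rational, Rational) -> [Bool] -> (Bool, (Rational, Rational), [Bool])
decide t (lo, hi) ds
  | hi <= t   = (True,  (lo, hi), ds)   -- certainly r < t: the prisoner stays
  | t <= lo   = (False, (lo, hi), ds)   -- certainly r >= t: he is sent on
  | otherwise = let mid     = (lo + hi) / 2
                    d : ds' = ds
                in decide t (if d then (mid, hi) else (lo, mid)) ds'

-- Which of n prisoners is freed, given flips spelling out the digits of r.
-- The k-th prisoner stays if r falls into the left 1/k of the current range.
lucky :: Int -> [Bool] -> Int
lucky n = go 2 1 (0, 1) (0, 1)
  where
    go k w (a, b) known ds
      | k > n     = w
      | otherwise =
          let t = a + (b - a) / fromIntegral k
              (stay, known', ds') = decide t known ds
          in if stay
               then go (k + 1) k (a, t) known' ds'
               else go (k + 1) w (t, b) known' ds'

For example, lucky 4 (repeat True) frees the first prisoner, and needs to inspect only two of the flips – the “left-over entropy” at work.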

A visualization

I tried to visualize the whole story, using CodeWorld. In this not very pretty animation, you see the prisoners (so far just simple numbers) coming from the left. Nisus (not shown) either lets them wait in the slot at the top, or sends them to the right. Sometimes he needs to toss a coin. Below you can see, in numbers and in a bar, which ranges of r are interesting: in green, the range which indicates that the current prisoner can stay; in red, the range where he has to go; and in blue, the range that r is known to lie in so far.

Nisus (not shown) at work

Probably, by the time you have read this far, the animation is no longer at the beginning. Move the mouse over it, and you will see controls in the lower left corner; the first button resets the animation. You can also check out the source code.

Open questions

So Nisus is happy, because with few coin tosses, he can fairly pick a random prisoner to free. I am not fully satisfied yet, because of two open questions:

  • Is this algorithm optimal? (And what does it mean to be optimal?)
  • What is the expected number of coin throws for n prisoners? Does this number converge to log(n), which is the entropy of the result?

If you know the answers, let me know!

Brent Yorgey pointed me to a sequence of blog posts by Jeremy Gibbons, on “Arithmetic Coding”, which use a similar structure of dividing the unit intervals.

Interleaving normalizing reduction strategies

Published 2018-02-15 in sections English, Math.

A little, not very significant, observation about lambda calculus and reduction strategies.

A reduction strategy determines, for every lambda term with redexes left, which redex to reduce next. A reduction strategy is normalizing if this procedure terminates for every lambda term that has a normal form.

A fun fact is: If you have two normalizing reduction strategies s1 and s2, consulting them alternately may not yield a normalizing strategy.

Here is an example. Consider the lambda term o = (λx.xxx), and note that oo → ooo → oooo → …. Let Mi = (λx.(λx.x)) (o…o) (with i occurrences of o). Mi has two redexes, and reduces to either (λx.x) or Mi+1. In particular, Mi has a normal form.

The two reduction strategies are:

  • s1, which picks the second redex if given Mi for an even i, and the first (left-most) redex otherwise.
  • s2, which picks the second redex if given Mi for an odd i, and the first (left-most) redex otherwise.

Both strategies are normalizing: If during a reduction we come across Mi, then the reduction terminates in one or two steps; otherwise we are just doing left-most reduction, which is known to be normalizing.

But if we alternatingly consult s1 and s2 while trying to reduce M2, we get the sequence


M2 → M3 → M4 → …

which shows that this strategy is not normalizing.

Afterthought: The interleaved strategy is not actually a reduction strategy in the usual definition, as it is not a pure (stateless) function from lambda term to redex.

The magic “Just do it” type class

Published 2018-02-02 in sections English, Haskell.

One of the great strengths of strongly typed functional programming is that it allows type-driven development. When I have some non-trivial function to write, I first write its type signature, and then writing the implementation is often very obvious.

Once more, I am feeling silly

In fact, it often is completely mechanical. Consider the following function:

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))

This is somewhat like the bind for a combination of the error monad and the reader monad that also remembers the intermediate result, but that doesn’t really matter now. What matters is that once I wrote that type signature, I feel silly having to also write the code, because there isn’t really anything interesting about it.
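
For the record, this is the kind of implementation I would have to write by hand (call it fooByHand); it is essentially dictated by the type:

fooByHand :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a, b))
fooByHand f g r = case f r of
  Left e  -> Left e
  Right a -> case g a r of
    Left e  -> Left e
    Right b -> Right (a, b)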

Instead, I’d like to tell the compiler to just do it for me! I want to be able to write

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run cabal install ghc-justdoit, and then the following code actually works:

{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
import GHC.JustDoIt
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

What is this justDoIt?

*GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
class JustDoIt a
justDoIt :: JustDoIt a => a
(…) :: JustDoIt a => a

Note that there are no instances for the JustDoIt class -- they are created, on the fly, by the GHC plugin GHC.JustDoIt.Plugin. During type-checking, it looks at these JustDoIt t constraints and tries to construct a term of type t. It is based on Dyckhoff’s LJT proof search in intuitionistic propositional calculus, which I have implemented to work directly on GHC’s types and terms (and I find it pretty slick). Those who like Unicode can write (…) instead.

What is supported right now?

Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for

data Result a b = Failure a | Success b
newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }
foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a,b)
foo2 = (…)

It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.

How do I know that it creates a sensible implementation?

You can check the generated Core using -ddump-simpl of course. But it is much more convenient to use inspection-testing to test such things, as I am doing in the Demo file, which you can skim to see a few more examples of justDoIt in action. I very much enjoyed reaping the benefits of the work I put into inspection-testing, as this is so much more convenient than manually checking the output.
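
The pattern looks roughly like this – a sketch along the lines of the Demo file, comparing the synthesized foo against the hand-written fooByHand from above:

{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
{-# OPTIONS_GHC -fplugin=Test.Inspection.Plugin #-}
{-# LANGUAGE TemplateHaskell #-}
module Demo where

import Test.Inspection
import GHC.JustDoIt

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a, b))
foo = justDoIt

-- The hand-written reference implementation.
fooByHand :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a, b))
fooByHand f g r = case f r of
  Left e  -> Left e
  Right a -> case g a r of
    Left e  -> Left e
    Right b -> Right (a, b)

-- Compilation fails unless the synthesized Core agrees with the reference.
inspect $ 'foo === 'fooByHand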

Is this for real? Should I use it?

Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:

  • To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are:

    • Preprocessors
    • Template Haskell
    • Generic programming together with type-level computation (e.g. generic-lens)
    • GHC Core-to-Core plugins

    In order to make this viable, I slightly changed the API for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.

  • To advertise the idea of taking type-driven computation to its logical conclusion and free users from having to implement functions that they have already specified sufficiently precisely by their type.

What needs to happen for this to become real?

A bunch of things:

  • The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.
  • The implementation is very much unoptimized.
  • For this to be practically useful, the user needs to be able to use it with confidence. In particular, the user should be able to predict what code comes out. If there are multiple possible implementations, there needs to be a clear specification of which implementations are more desirable than others, and it should probably fail if there is ambiguity.
  • It ignores any recursive type, so it cannot do anything with lists. It would be much more useful if it could do some best-effort thing here as well.

If someone wants to pick it up from here, that’d be great!

I have seen this before…

Indeed, the idea is not new.

Most famous in the Haskell world is certainly Lennart Augustsson’s Djinn tool that creates Haskell source expressions based on types. Alejandro Serrano has connected that to GHC in the library djinn-ghc, but I couldn’t use this because it was still outputting Haskell source terms (and it is easier to re-implement LJT than to implement type inference).

Lennart Spitzner’s exference is a much more sophisticated tool that also takes library API functions into account.

In the Scala world, Sergei Winitzki very recently presented the pretty neat curryhoward library that uses Scala macros. He seems to have some good ideas about ordering solutions by likely desirability.

And in Idris, Joomy Korkut has created hezarfen.

Finding bugs in Haskell code by proving it

Published 2017-12-05 in sections English, Coq.

Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working?”. I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file”, represented as a sorted list of intervals:

data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]

The code is the kind of Haskell code that I like to write: A small local recursive function, a few guards for case analysis, and I am done:

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2 = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2 = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise = I f' (to i2) : go (i1 { from = to i2} : is1) is2
      where f' = max (from i1) (from i2)

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in proving mood...
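
For contrast, here is roughly what such a property could have looked like – a sketch only: sem (a set-valued semantics) and genIntervals are names I am making up here, Offset is specialized to Int, Show instances are assumed derived, and intersect is the function above:

import qualified Data.Set as Set
import Test.QuickCheck

-- Hypothetical semantics: the set of offsets covered by the intervals.
sem :: Intervals -> Set.Set Int
sem (Intervals is) = Set.unions [Set.fromList [from i .. to i - 1] | i <- is]

-- Generate only valid inputs: sorted, non-empty, disjoint, non-adjacent.
genIntervals :: Gen Intervals
genIntervals = Intervals . go 0 <$> listOf ((,) <$> choose (1, 3) <*> choose (1, 3))
  where
    go _ [] = []
    go lb ((gap, len) : ps) =
      let f = lb + gap; t = f + len in I f t : go (t + 1) ps

-- The semantics of the intersection is the intersection of the semantics.
prop_intersect :: Property
prop_intersect =
  forAll genIntervals $ \a ->
  forAll genIntervals $ \b ->
    sem (intersect a b) == Set.intersection (sem a) (sem b)

Note that a semantic property like this would likely not have caught the first of the bugs listed below: an empty interval has an empty semantics, so a second property checking the invariant itself would also be needed.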

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile. I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes, a bit of help is needed. In this case, I had to specify three so-called edits:

  • The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line

    rename value Intervals.Intervals = ival

    changes the constructor name to ival.

  • I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

    rename type GHC.Int.Int64 = GHC.Num.Int

    to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.

  • The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

    I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: Add the lengths of xs and ys, multiply by two and add one if the first interval in xs ends before the first interval in ys.

    If the problematic function were a top-level function I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint.

    Unfortunately, go is a local function, so this mechanism is not available to us. If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in this case I did not want to change the Haskell code.

    Another way out offered by hs-to-coq is to translate the recursive function using an axiom unsafeFix : forall a, (a -> a) -> a. This looks scary, but as I explain in the previous blog post, this axiom can be used in a safe way.

    I should point out that it is my dissenting opinion to consider this a valid verification approach. The official stand of the hs-to-coq author team is that using unsafeFix in the verification can only be a temporary state, and eventually you’d be expected to fix (heh) this, for example by moving the functions to the top level and using hs-to-coq’s support for Program Fixpoint.

With these edits in place, hs-to-coq spits out a faithful Coq copy of my Haskell code.

Time to prove things

The rest of the work is mostly straight-forward use of Coq. I define the invariant I expect to hold for these lists of intervals, namely that they are sorted, non-empty, disjoint and non-adjacent:

Fixpoint goodLIs (is : list Interval) (lb : Z) : Prop :=
  match is with
    | [] => True
    | (I f t :: is) => (lb <= f)%Z /\ (f < t)%Z /\ goodLIs is t
   end.

Definition good is := match is with
  ival is => exists n, goodLIs is n end.

and I give them meaning using Coq’s type for sets, Ensemble:

Definition range (f t : Z) : Ensemble Z :=
  (fun z => (f <= z)%Z /\ (z < t)%Z).

Definition semI (i : Interval) : Ensemble Z :=
  match i with I f t => range f t end.

Fixpoint semLIs (is : list Interval) : Ensemble Z :=
  match is with
    | [] => Empty_set Z
    | (i :: is) => Union Z (semI i) (semLIs is)
end.

Definition sem is := match is with
  ival is => semLIs is end.

Now I prove for every function that it preserves the invariant and that it corresponds to the, well, corresponding function, e.g.:

Lemma intersect_good : forall (is1 is2 : Intervals),
  good is1 -> good is2 -> good (intersect is1 is2).
Proof. … Qed.

Lemma intersection_spec : forall (is1 is2 : Intervals),
  good is1 -> good is2 ->
  sem (intersect is1 is2) = Intersection Z (sem is1) (sem is2).
Proof. … Qed.

Even though I punted on the question of termination while defining the functions, I do not get around it while verifying them, so I formalize the termination argument from above:

Definition needs_reorder (is1 is2 : list Interval) : bool :=
  match is1, is2 with
    | (I f1 t1 :: _), (I f2 t2 :: _) => (t1 <? t2)%Z
    | _, _ => false
  end.

Definition size2 (is1 is2 : list Interval) : nat :=
  (if needs_reorder is1 is2 then 1 else 0) + 2 * length is1 + 2 * length is2.

and use it in my inductive proofs.

As I intend this to be a write-once proof, I happily copy’n’pasted proof scripts and did not do any cleanup. Thus, the resulting Proof file is big, ugly and repetitive. I am confident that judicious use of Coq tactics could greatly condense this proof.

Using Program Fixpoint after the fact?

These proofs are also an experiment in how I can actually do induction over a locally defined recursive function without too ugly proof goals (hence the line match goal with [ |- context [unsafeFix ?f _ _] ] => set (u := f) end.). One could improve upon this approach by following these steps:

  1. Define copies (say, intersect_go_witness) of the local go using Program Fixpoint with the above termination measure. The termination argument needs to be made only once, here.

  2. Use this function to prove that the argument f in go = unsafeFix f actually has a fixed point:

    Lemma intersect_go_sound:
      f intersect_go_witness = intersect_go_witness

    (This requires functional extensionality). This lemma indicates that my use of the axioms unsafeFix and unsafeFix_eq are actually sound, as discussed in the previous blog post.

  3. Still prove the desired properties for the go that uses unsafeFix, as before, but using the functional induction scheme for intersect_go! This way, the actual proofs are free from any noisy termination arguments.

    (The trick to define a recursive function just to throw away the function and only use its induction rule is one I learned in Isabelle, and is very useful to separate the meat from the red tape in complex proofs. Note that the induction rule for a function does not actually mention the function!)

Maybe I will get to this later.

Update: I experimented a bit in that direction, and it does not quite work as expected. In step 2 I am stuck because Program Fixpoint does not create a fixpoint-unrolling lemma, and in step 3 I do not get the induction scheme that I was hoping for. Both problems would not exist if I used the Function command, although that needs some trickery to support a termination measure on multiple arguments. Also, the induction lemma is not quite as polished as I was hoping for, so the resulting proof is still somewhat ugly, and it requires copying code, which does not scale well.

Efforts and gains

I spent exactly 7 hours working on these proofs, according to arbtt. I am sure that writing these functions took me much less time, but I cannot calculate that easily, as they were originally in the Main.hs file of bisect-binary.

I did find and fix three bugs:

  • The intersect function would not always retain the invariant that the intervals would be non-empty.
  • The subtract function would prematurely advance through the list of intervals in the second argument, which can lead to a genuinely wrong result. (This occurred twice.)

Conclusion: Verification of Haskell code using Coq is now practically possible!

Final rant: Why is the Coq standard library so incomplete (compared to, say, Isabelle’s), requiring me to prove so many lemmas about basic functions on Ensembles?

Existence and Termination

Published 2017-11-25 in sections English, Coq.

I recently had some intense discussions that revolved around issues of existence and termination of functions in Coq, about axioms and what certain proofs actually mean. We came across some interesting questions and thoughts that I’ll share with those of my blog readers with an interest in proofs and interactive theorem proving.

tl;dr

  • It can be meaningful to assume the existence of a function in Coq, and under that assumption prove its termination and other properties.
  • Axioms and assumptions are logically equivalent.
  • Unsound axioms do not necessarily invalidate a theory development, when additional meta-rules govern their use.

Preparation

Our main running example is the infamous Collatz series. Starting at any natural number, the next one is calculated as follows:

Require Import Coq.Arith.Arith.

Definition next (n : nat) :nat :=
  if Nat.even n then n / 2 else 3*n + 1.

If you start with some positive number, you are going to end up reaching 1 eventually. Or are you? So far nobody has found a number where that does not happen, but we also do not have a proof that it never happens. It is one of the great mysteries of Mathematics, and if you can solve it, you’ll be famous.

A failed definition

But assume we had an idea on how to prove that we are always going to reach 1, and tried to formalize this in Coq. One attempt might be to write

Fixpoint good (n : nat) : bool :=
  if n <=? 1
    then true
    else good (next n).

Theorem collatz: forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.

Unfortunately, this does not work: Coq rejects this recursive definition of the function good, because it does not see how that is a terminating function, and Coq requires all such recursive function definitions to be obviously terminating – without this check there would be a risk of Coq’s type checking becoming incomplete or its logic being unsound.

The idiomatic way to avoid this problem is to state good as an inductive predicate... but let me explore another idea here.

Working with assumptions

What happens if we just assume that the function good, described above, exists, and then perform our proof:

Theorem collatz
  (good : nat -> bool)
  (good_eq : forall n,
     good n = if n <=? 1 then true else good (next n))
  : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.

Would we accept this as a proof of Collatz’ conjecture? Or did we just assume what we want to prove, in which case the theorem is vacuously true, but we just performed useless circular reasoning?

Upon close inspection, we find that the assumptions of the theorem (good and good_eq) are certainly satisfiable:

Definition trivial (n: nat) : bool := true.

Lemma trivial_eq: forall n,
  trivial n = if n <=? 1 then true else trivial (next n).
Proof. intro; case (n <=? 1); reflexivity. Qed.

Lemma collatz_trivial: forall n, trivial n = true.
Proof.
  apply (collatz trivial trivial_eq).
Qed.

So clearly there exists a function of type nat -> bool that satisfies the assumed equation. This is good, because it means that the collatz theorem is not simply assuming False!

Some (including me) might already be happy with this theorem and proof, as it clearly states: “Every function that follows the Collatz series eventually reaches 1”.

Others might still not be at ease with such a proof. Above we have seen that we cannot define the real collatz series in Coq. How can the collatz theorem say something that is not definable?

Classical reasoning

One possible way of getting some assurance is to define good as a classical function. The logic of Coq can be extended with the law of the excluded middle without making it inconsistent, and with that axiom, we can define a version of good that is pretty convincing (sorry for the slightly messy proof):

Require Import Coq.Logic.ClassicalDescription.
Require Import Omega.
Definition classical_good (n:nat) : bool :=
  if excluded_middle_informative (exists m, Nat.iter m next n <= 1)
  then true else false.

Lemma iter_shift:
  forall a f x (y:a), Nat.iter x f (f y) = f (Nat.iter x f y).
Proof.
 intros. induction x. reflexivity. simpl. rewrite IHx. reflexivity. Qed.

Lemma classical_good_eq: forall n,
  classical_good n = if n <=? 1 then true else classical_good (next n).
Proof.
  intros.
  unfold classical_good at 1.
  destruct (Nat.leb_spec n 1).
  * destruct (excluded_middle_informative _); try auto.
    contradict n0. exists 0. simpl. assumption.
  * unfold classical_good.
    destruct (Nat.eqb_spec (next n) 0); try auto.
    destruct (excluded_middle_informative _), (excluded_middle_informative _); auto.
    - contradict n0.
      destruct e0.
      destruct x; simpl in *. omega.
      exists x. rewrite iter_shift. assumption.
    - contradict n0.
      destruct e0.
      exists (S x). simpl. rewrite iter_shift in H0. assumption.
Qed.

Lemma collatz_classical: forall n, classical_good n = true.
Proof. apply (collatz classical_good classical_good_eq). Qed.

The point of this is not so much to use this particular definition of good, but merely to convince ourselves that the assumptions of the collatz theorem above encompass “the” Collatz series, and that a proof of that theorem thus constitutes a proof of the Collatz conjecture.

The main take-away so far is that existence and termination of a function are two separate issues, and it is possible to assume the former, prove the latter, and not have done a vacuous proof.

The ice gets thinner

Sections

Starting with the above Theorem collatz, there is another train of thought I invite you to follow along.

Probably the “genius idea” proof will be more than a few lines long, and we probably want to be able to declare helper lemmas and other things along the way. Doing all that in the body of the collatz proof is not very convenient, so instead of using assumptions, we might write

Section collatz.
Variable good : nat -> bool.
Variable good_eq : forall n,
  good n = if n <=? 1 then true else good (next n).

Theorem collatz2 : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.
End collatz.

So far so good: Clearly, I just refactored my code a bit, but did not make any significant change. The theorems collatz2 and collatz are equivalent.
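
Indeed, after End collatz the section variables are abstracted again, so collatz2 ends up with the same statement as collatz (assuming the proof uses good_eq):

Check collatz2.
(* collatz2
     : forall good : nat -> bool,
       (forall n, good n = if n <=? 1 then true else good (next n)) ->
       forall n : nat, good n = true *)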

Sound axioms

But note that we do not really intend to instantiate collatz2. We know that the assumptions are satisfiable (e.g. since we can define trivial or classical_good). So maybe we would rather avoid the Section mechanism and simply write

Axiom good : nat -> bool.
Axiom good_eq : forall n,
  good n = if n <=? 1 then true else good (next n).

Theorem collatz3 : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.

I assume this will make a few of my readers’ eyebrows go up: How can I dare to start with such Axioms? Do they not invalidate my whole development?

On the other hand, all that a Coq axiom is doing is saying “the following theorems are under the assumption that the axiom holds”. In that sense, collatz3 and collatz2 are essentially equivalent.

Unsound axioms

Let me take it one step further, and change that to:

Axiom unsafeFix : forall {a}, (a -> a) -> a.
Axiom unsafeFix_eq : forall {a} (f : a -> a), unsafeFix f = f (unsafeFix f).
Definition good : nat -> bool :=
  unsafeFix (fun good n => if n <=? 1 then true else good (next n)).

Theorem collatz4 : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.

At this point, the majority of my readers will cringe. The axiom unsafeFix is so blatantly unsound (in Coq), how do I even dare to think of using it? But bear with me for a moment: I did not change the proof. So maybe the collatz4 theorem is still worth something?

I want to argue that it is: Both unsafeFix and unsafeFix_eq are unsound in their full generality. But as long as I instantiate them only with functions f that have a fixpoint, I cannot prove False this way. So while “Coq + unsafeFix” is unsound, “Coq + unsafeFix + unsafeFix_eq + meta-rule that these axioms are only instantiated with permissible f” is not.
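
For this particular functional the meta-rule is satisfied (trivial and classical_good are fixpoints of it), and the specification equation for good then follows from unsafeFix_eq by a single unfolding. A small sketch (the lemma name is mine):

Lemma good_unfold : forall n,
  good n = if n <=? 1 then true else good (next n).
Proof.
  (* Unfold the left-hand side once via the axiom; both sides are then
     convertible. *)
  intro n. unfold good at 1. rewrite unsafeFix_eq. reflexivity.
Qed.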

In that light, my collatz4 proof carries the same meaning as the collatz3 proof, it is just less convenient to check: To check the validity of collatz3, I may have to look for uses of admit, misleading uses of syntax, or other smells. To check the validity of collatz4, I additionally have to check the meta-rule, which is tedious, but certainly possible (e.g. by inspecting the proof term).
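
Coq even helps with part of this audit: Print Assumptions lists every axiom a completed proof transitively depends on, so one can mechanically find all the places where the meta-rule needs checking:

(* Would report unsafeFix and unsafeFix_eq (once the proof is completed). *)
Print Assumptions collatz4.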

Beyond Collatz

The questions discussed here did not come up in the context of the Collatz series (for which I unfortunately do not have a proof), but rather the verification of Haskell code in Coq using hs-to-coq. I started with the idiomatic Haskell definition of “Quicksort”:

import Data.List (partition)

quicksort :: Ord a => [a] -> [a]
quicksort [] = []
quicksort (p:xs) = quicksort lesser ++ [p] ++ quicksort greater
    where (lesser, greater) = partition (<p) xs

This function terminates, but not in a way that is obvious to the Coq type checker. Conveniently, hs-to-coq can optionally create the Coq code using the unsafeFix axiom above, producing (roughly):

Definition quicksort {a} `{Ord a} : list a -> list a :=
  unsafeFix (fun quicksort xs =>
    match xs with
      | nil => nil
      | p :: xs => match partition (fun x => x <? p) xs with
         | (lesser, greater) => quicksort lesser ++ [p] ++ quicksort greater
      end
    end).

I then proved (roughly)

Theorem quicksort_sorted:
  forall a `(Ord a) (xs : list a), StronglySorted (quicksort xs).

and

Theorem quicksort_permutation:
  forall a `(Ord a) (xs : list a), Permutation (quicksort xs) xs.

These proofs proceed by well-founded induction on the length of the argument xs, and hence encompass a termination proof of quicksort. Note that with an only partially correct but non-terminating definition of quicksort (e.g. quicksort := unsafeFix (fun quicksort xs => quicksort xs)) I would not be able to conclude these proofs.
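
For reference, the standard library already provides a well-founded relation for induction over a nat-valued measure such as length; this is the general scheme behind these proofs (just a pointer, not the actual proof script):

Require Import Coq.Arith.Wf_nat.

(* ltof A f relates x to y iff f x < f y; it is well-founded for any
   measure f : A -> nat, e.g. the length of a list. *)
Check well_founded_ltof.
(* : forall (A : Type) (f : A -> nat), well_founded (ltof A f) *)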

My (not undisputed) claim about the meaning of these theorems is therefore

If the Haskell equations for quicksort actually have a fixed point, then the use of unsafeFix in its definition does not introduce any inconsistency. Under this assumption, we showed that quicksort always terminates and produces a sorted version of the input list.

Do you agree?

Isabelle functions: Always total, sometimes undefined

Published 2017-10-12 in sections English, Coq.

Often, when I mention how things work in the interactive theorem prover Isabelle/HOL (in the following just “Isabelle”1) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of what a function is, whether functions are total, and what the business with undefined is. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians.

Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.

HOL is a logic of total functions

If I have an Isabelle function f :: a ⇒ b between two types a and b (the function arrow in Isabelle is ⇒, not →), then – by definition of what it means to be a function in HOL – whenever I have a value x :: a, the expression f x (i.e. f applied to x) is a value of type b. Therefore, and without exception, every Isabelle function is total.

In particular, it cannot be that f x does not exist for some x :: a. This is a first difference from Haskell, which does have partial functions like

spin :: Maybe Integer -> Bool
spin (Just n) = spin (Just (n+1))

Here, neither the expression spin Nothing nor the expression spin (Just 42) produces a value of type Bool: The former raises an exception (“incomplete pattern match”), the latter does not terminate. Confusingly, though, both expressions have type Bool.

Because every function is total, this confusion cannot arise in Isabelle: If an expression e has type t, then it is a value of type t. This trait is shared with other total systems, including Coq.

Did you notice the emphasis I put on the word “is” here, and how I deliberately did not write “evaluates to” or “returns”? This is because of another big source for confusion:

Isabelle functions do not compute

We (i.e., functional programmers) stole the word “function” from mathematics and repurposed it. But the word “function”, in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind.

What is the difference?

  • A function a → b in functional programming is an algorithm that, given a value of type a, calculates (returns, evaluates to) a value of type b.
  • A function a ⇒ b in math (or Isabelle) associates with each value of type a a value of type b.

For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:

definition foo :: "(nat ⇒ real) ⇒ real" where
  "foo seq = (if convergent seq then lim seq else 0)"

This assigns a real number to every sequence, but it does not compute it in any useful sense.

From this it follows that

Isabelle functions are specified, not defined

Consider this function definition:

fun plus :: "nat ⇒ nat ⇒ nat"  where
   "plus 0       m = m"
 | "plus (Suc n) m = Suc (plus n m)"

To a functional programmer, this reads

plus is a function that analyses its first argument. If that is 0, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one.

which is clearly a description of a computation.

But to Isabelle, the above reads

plus is a binary function on natural numbers, and it satisfies the following two equations: …

And in fact, it is not so much Isabelle that reads it this way, but rather the fun command, which is external to the Isabelle logic. The fun command analyses the given equations, constructs a non-recursive definition of plus under the hood, passes that to Isabelle and then proves that the given equations hold for plus.

One interesting consequence of this is that different specifications can lead to the same functions. In fact, if we defined plus' by recursing on the second argument, we’d obtain the same function (i.e. plus = plus' is a theorem, and there would be no way of telling the two apart).

Termination is a property of specifications, not functions

Because a function does not evaluate, it does not make sense to ask if it terminates. The question of termination arises before the function is defined: The fun command can only construct plus in a way that the equations hold if it passes a termination check – very much like Fixpoint in Coq.

But while the termination check of Fixpoint in Coq is a deep part of the basic logic, in Isabelle it is simply something that this particular command requires for its internal machinery to go through. At no point does a “termination proof of the function” exist as a theorem inside the logic. And other commands may have other means of defining a function that do not even require such a termination argument!

For example, a function specification that is tail-recursive can be turned into a function, even without a termination proof: The following definition describes a higher-order function that iterates its first argument f on the second argument x until it finds a fixpoint. It is completely polymorphic (the single quote in 'a indicates that this is a type variable):

partial_function (tailrec)
  fixpoint :: "('a ⇒ 'a) ⇒ 'a ⇒ 'a"
where
  "fixpoint f x = (if f x = x then x else fixpoint f (f x))"

We can work with this definition just fine. For example, if we instantiate f with (λx. x-1), we can prove that it will always return 0:

lemma "fixpoint (λ n . n - 1) (n::nat) = 0"
  by (induction n) (auto simp add: fixpoint.simps)

Similarly, if we have a function that works within the option monad (i.e. Maybe in Haskell), its specification can always be turned into a function without an explicit termination proof – here one that calculates the Collatz sequence:

partial_function (option) collatz :: "nat ⇒ nat list option"
 where "collatz n =
        (if n = 1 then Some [n]
         else if even n
           then do { ns <- collatz (n div 2);    Some (n # ns) }
           else do { ns <- collatz (3 * n + 1);  Some (n # ns)})"

Note that lists in Isabelle are finite (like in Coq, unlike in Haskell), so this function “returns” a list only if the Collatz sequence eventually reaches 1.

I expect these definitions to make a Coq user very uneasy. How can fixpoint be a total function? What is fixpoint (λn. n+1)? What if we run collatz n for an n where the Collatz sequence does not reach 1?3 We will come back to that question after a little detour…

HOL is a logic of non-empty types

Another big difference between Isabelle and Coq is that in Isabelle, every type is inhabited. Just like the totality of functions, this is a very fundamental fact about what HOL defines to be a type.

Isabelle gets away with that design because in Isabelle, we do not use types for propositions (like we do in Coq), so we do not need empty types to denote false propositions.

This design has an important consequence: It allows the existence of a polymorphic expression that inhabits any type, namely

undefined :: 'a

The naming of this term alone has caused a great deal of confusion for Isabelle beginners, or in communication with users of different systems, so I implore you to not read too much into the name. In fact, you will have a better time if you think of it as arbitrary or, even better, unknown.

Since undefined can be instantiated at any type, we can instantiate it for example at bool, and we can observe an important fact: undefined is not an extra value besides the “usual ones”. It is simply some value of that type, which is demonstrated in the following lemma:

lemma "undefined = True ∨ undefined = False" by auto

In fact, if the type has only one value (such as the unit type), then we know the value of undefined for sure:

lemma "undefined = ()" by auto

It is very handy to be able to produce an expression of any type, as we will see next.

Partial functions are just underspecified functions

For example, it allows us to translate incomplete function specifications. Consider this definition, Isabelle’s equivalent of Haskell’s partial fromJust function:

fun fromSome :: "'a option ⇒ 'a" where
  "fromSome (Some x) = x"

This definition is accepted by fun (albeit with a warning), and the generated function fromSome behaves exactly as specified: when applied to Some x, it is x. The term fromSome None is also a value of type 'a; we just do not know which one it is, as the specification does not address that.

So fromSome None behaves just like undefined above, i.e. we can prove

lemma "fromSome None = False ∨ fromSome None = True" by auto

Here is a small exercise for you: Can you come up with an explanation for the following lemma:

fun constOrId :: "bool ⇒ bool" where
  "constOrId True = True"

lemma "constOrId = (λ_.True) ∨ constOrId = (λx. x)"
  by (metis (full_types) constOrId.simps)

Overall, this behavior makes sense if we remember that function “definitions” in Isabelle are not really definitions, but rather specifications. And a partial function “definition” is simply an underspecification. The resulting function is simply any function that fulfills the specification, and the two lemmas above underline that observation.

Nonterminating functions are also just underspecified

Let us return to the puzzle posed by fixpoint above. Clearly, the function – seen as a functional program – is not total: When passed the argument (λn. n + 1) or (λb. ¬b) it will loop forever trying to find a fixed point.

But Isabelle functions are not functional programs, and the definitions are just specifications. What does the specification say about the case when f has no fixed-point? It states that the equation fixpoint f x = fixpoint f (f x) holds. And this equation has a solution, for example fixpoint f _ = undefined.

Or more concretely: The specification of the fixpoint function states that fixpoint (λb. ¬b) True = fixpoint (λb. ¬b) False has to hold, but it does not specify which particular value (True or False) it should denote – any is fine.

Not all function specifications are ok

At this point you might wonder: Can I just specify any equations for a function f and get a function out of that? Rest assured: That is not the case. For example, no Isabelle command allows you to define a function bogus :: unit ⇒ nat with the equation bogus () = Suc (bogus ()), because this equation does not have a solution.

We can actually prove that such a function cannot exist:

lemma no_bogus: "∄ bogus. bogus () = Suc (bogus ())" by simp

(Of course, not_bogus () = not_bogus () is just fine…)

You cannot reason about partiality in Isabelle

We have seen that there are many ways to define functions that one might consider “partial”. Given a function, can we prove that it is not “partial” in that sense?

Unfortunately, but unavoidably, no: Since undefined is not a separate, recognizable value, but rather simply an unknown one, there is no way of stating that “A function result is not specified”.

Here is an example that demonstrates this: Two “partial” functions (one with not all cases specified, the other one with a self-referential specification) are indistinguishable from the total variant:

fun partial1 :: "bool ⇒ unit" where
  "partial1 True = ()"
partial_function (tailrec) partial2 :: "bool ⇒ unit" where
  "partial2 b = partial2 b"
fun total :: "bool ⇒ unit" where
  "total True = ()"
| "total False = ()"

lemma "partial1 = total ∧ partial2 = total" by auto

If you really do want to reason about partiality of functional programs in Isabelle, you should consider implementing them not as plain HOL functions, but rather use HOLCF, where you can give equational specifications of functional programs and obtain continuous functions between domains. In that setting, ⊥ ≠ () and partial2 = ⊥ ≠ total. We have done that to verify some of HLint’s equations.

You can still compute with Isabelle functions

I hope that by this point I have not scared away anyone who wants to use Isabelle for functional programming, because in fact you can use it for that. If the equations that you pass to fun are a reasonable definition for a function (in the programming sense), then these equations, used as rewriting rules, will allow you to “compute” that function quite like you would in Coq or Haskell.

Moreover, Isabelle supports code extraction: You can take the equations of your Isabelle functions and have them exported into OCaml, Haskell, Scala or Standard ML. See CoCon for a conference management system with confidentiality verified in Isabelle.

While these usually are the equations you defined the function with, they do not have to be: You can declare other proven equations to be used for code extraction, e.g. to refine your elegant definitions into performant ones.

Like with code extraction from Coq to, say, Haskell, the adequacy of the translation rests on a “moral reasoning” foundation. Unlike extraction from Coq, where you have an (unformalized) guarantee that the resulting Haskell code is terminating, you do not get that guarantee from Isabelle. Conversely, this allows you to reason about and extract non-terminating programs, like fixpoint, which is not possible in Coq.

There is ongoing work on verified code generation, where the code equations are reflected into a deep embedding of HOL in Isabelle, which would allow explicit termination proofs.

Conclusion

We have seen how in Isabelle, every function is total. Function declarations have equations, but these do not define the function in a computational sense, but rather specify it. Because in HOL there are no empty types, many specifications that appear partial (incomplete patterns, non-terminating recursion) have solutions in the space of total functions. Partiality in the specification is no longer visible in the final product.

PS: Axiom undefined in Coq

This section is speculative, and an invitation for discussion.

Coq already distinguishes between types used in programs (Set) and types used in proofs (Prop).

Could Coq ensure that every t : Set is non-empty? I imagine this would require additional checks in the Inductive command, similar to the checks that the Isabelle command datatype has to perform4, and it would disallow Empty_set.

If so, then it would be sound to add the following axiom

Axiom undefined : forall (a : Set), a.

wouldn't it? This axiom does not have any computational meaning, but that seems to be ok for optional Coq axioms, like classical reasoning or function extensionality.

With this in place, how much of what I describe above about function definitions in Isabelle could now be done soundly in Coq? Certainly, pattern matches would not have to be complete and could sport an implicit case _ ⇒ undefined. Would it “help” with non-obviously terminating functions? Would it allow a Coq command Tailrecursive that accepts any tail-recursive function without a termination check?
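
For illustration, a minimal sketch of the first point, assuming the axiom above (this fromSome mirrors the Isabelle example; the name is mine):

(* Coq analogue of the underspecified fromSome, with the missing case
   filled in by the (hypothetical) undefined axiom from above. *)
Definition fromSome {a : Set} (o : option a) : a :=
  match o with
  | Some x => x
  | None => undefined a
  end.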


  1. Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.

  3. Let me know if you find such an n. Besides n = 0.

  4. Like fun, the constructions by datatype are not part of the logic, but create a type definition from more primitive notions that is isomorphic to the specified data type.