Joachim Breitner

Blog

Swing Dancer Profile

Published 2018-08-07 in sections English, Reisen.

During my two years in Philadelphia (I was a post-doc with Stephanie Weirich at the University of Pennsylvania) I danced a lot of Swing, in particular at the weekly social “Jazz Attack”.

They run a blog, that blog features dancers, and this week – just in time for my last dance – they featured me with a short interview.

The merits of a yellow-red phase

Published 2018-07-30 in sections English, Reisen.

In yesterday's post on Continuity Lines in North America I mentioned in passing that I am a big fan of German1 traffic lights, which have a brief yellow-red phase between the red and the green phase (3 to 5 seconds, depending on road speed). A German reader of my blog was surprised by this, said that he considers it pointless, and would like to hear my reasons. So here we go…

Life without Yellow-Red

Lights that switch directly from red to green cause more stress. Picture yourself as the first car waiting at a traffic light, with a bunch of cars behind you. Can you relax, maybe observe the cars passing in front of you, switch the radio station, or simply look somewhere else for a moment? Well, you can, but you risk missing the moment the light switches from red to green. When you look at the traffic light again and see it bright green, you have no idea how long it has been green. Hyper-aware of all the cars behind you waiting to get going, you’ll rush to get started, and if you don’t do that really fast now, surely one of the people waiting behind you will have honked.

So at the next intersection, you’d better not take your eyes off the light – or, alternatively, you develop a screw-you-I-don’t-care attitude towards the other cars, which might allow you to relax in this situation, but is in general not desirable.

Maybe this is less of a problem on the West Coast, where supposedly everybody is nice, relaxed, and patient, and you can take all the time you want to get rolling. But in the stereotypical East Coast traffic full of angry, impatient and antisocial drivers, you really don’t get away with that.

Life with Yellow-Red

The yellow-red phase solves this problem elegantly: As long as the light is red, you don’t have to watch the light constantly and with maximum attention. You can relax: it suffices to check it often enough to catch the yellow-red phase, once every two or three seconds. When you see it, you get ready to start; and when the light actually turns green, you start on time.

I would expect that it is not only less stressful, it is also more efficient: Because every car in the lane has the heads-up warning “green in a moment”, the cars can start rolling in quicker succession. Without the warning, every car has to account much more for the possibly slower reaction of the car in front.

PS: A friend of mine wonders whether the German Yellow-Red is needed because cars with manual transmissions are much more common in Germany than in the US, and you need more time to, well, get into gear with these cars.


  1. Also Great Britain, Croatia, Latvia, Norway, Austria, Poland, Russia, Saudi-Arabia, Sweden, Switzerland, Hungary and others.

Continuity Lines

Published 2018-07-29 in sections English, Reisen.

I am currently on a road trip from Philly going north along the Atlantic Coast into Canada, aiming for Nova Scotia. When I passed from the US into Canada, I had an unexpected emotional response to the highways there: I felt more at home!

And I believe it has to do with the pavement markings on US vs. Canadian freeways.

Consider this image, taken from the Manual on Uniform Traffic Control Devices of the United States, an official document describing (among other things) how freeway pavement ought to be marked:

Parallel Deceleration Lane for Exit Ramp

This is a typical exit ramp from the freeway. On-ramps and cloverleaf ramps look similar. Note that the right-most lane goes somewhere else than the left and middle lanes, yet the lanes look completely identical. In particular, the lines between the lanes are shaped the same!

Now, for comparison, the corresponding image in a Canadian manual, namely the Manual of Standard Traffic Signs & Pavement Markings for British Columbia:

Canadian off-ramps

Here, there are different lines between the different lanes: normal lane lines left, but a so-called continuity line, with a distinctly different pattern, between the normal lanes and the exit lane. It’s like in Germany!

With this in mind I understand one reason1 why driving in the US2 is noticeably more stressful: there is just always the anxiety of whether you are accidentally in an exit lane!

Update (2018-07-30): As Paul Johnson pointed out (see comment below), I was looking at an old version of the MUTCD. The current version, from 2009, has these lines:

Current US off-ramps

They have published a (very long) document describing the changes in the new version of the manual, and Section 306 describes the rationale:

[..] the FHWA adopts language to clarify that dotted lane lines, rather than broken lane lines, are to be used for non-continuing lanes, including acceleration lanes, deceleration lanes, and auxiliary lanes. [..] a number of States and other jurisdictions currently follow this practice, which is also the standard practice in Europe and most other developed countries. The FHWA believes that the existing use of a normal broken lane line for these non-continuing lanes does not adequately inform road users of the lack of lane continuity ahead and that the standardized use of dotted lane lines for non-continuing lanes as adopted in this final rule will better serve this important purpose in enhancing safety and uniformity.

So all is well! But it means that either Pennsylvania was slower than allowed in implementing these changes (the deadline was December 2016), or it was something else altogether that made me feel more at home on the Canadian freeway.


  1. I say “one reason”, not “the reason”, because there are many more – “Rechtsfahrgebot”, no red-and-yellow-phase in the traffic light, Pennsylvanian road quality…

  2. Supposedly, Pennsylvania is particularly bad with roads in general, but also with this particular problem, and California has exit lanes clearly separated. But, of course, because this is the US, not using the same pattern as the others (Canada, Europe), but using spaced dots…

Build tool semantic aware build systems

Published 2018-07-28 in sections English, Digital World.

I just had a lovely visit with Ben Gamari and Laura Dietz, and at the breakfast table we had an idea that I want to record here.

The problem

Laura and Ben talked about the struggles they had using build systems like make or Nix in data science applications. A build system like Nix is designed around the idea that builds are relatively cheap, and that any change in a dependency ought to trigger a rebuild, just to be sure that all build outputs are up-to-date. This is a perfectly valid assumption when building software, where build times are usually minutes, or maybe hours. But when some of the data processing takes days, then you really really want to avoid any unnecessary builds.

One way of avoiding unnecessary rebuilds that is supported by build systems like shake and (I presume, maybe with some extra work) Nix is output hashing: If one build stage has its input changed and needs to be re-run, but its output is actually unchanged, then later build stages do not have to be re-run. Great!

But even with this feature in place, one common problem remains: Build tools! Your multi-gigabyte data is going to be processed by some code you write. What if the code changes? Clearly, if you change the algorithm, or fix a bug in your code, you want the processing to be re-run. But if you just change some strings in the --help flag, or update some libraries, or do something else that does not change the program in a way that is significant for the task at hand, wouldn’t you prefer not to pay for that with a futile multi-day data processing step?

Existing approaches

There are various ways you can tackle this these days:

  • You bite the bullet, add the build tool as a dependency of the processing step. You get the assurance that your data always reflects the output of the latest version of your tool, but you get lots of rebuilds.

  • You don’t track the tool as part of your build system. It is now completely up to you to decide when the tool has changed in significant ways. When you think it has, you throw away all build artifacts and start from scratch. Very crude, and easy to get wrong.

  • You keep track of a “build tool version”, e.g. a text file with a number, that you depend on in lieu of the build tool itself. When you make a change that you think is significant, you bump that version number. This is similar to the previous approach, but more precise: Only the build outputs that used this particular tool will be invalidated, and it also scales better to working in a team. But of course, you can still easily get it wrong.

Build tool semantics

Why are all these approaches so unsatisfying? Because none allow you to express the intent, which is to say “this build step depends on the semantics (i.e. behaviour) of the build tool”. If we could somehow specify that, then all would be great: Build tool changes, but its semantics is unaffected? no rebuild. Build tool changes, semantics change? rebuild.

This ideal is not reachable (blah blah halting problem blah blah) – but I believe we can approximate it. At least if good practices are used and the tool has a test suite!

Assume for now that the tool is a simple batch-processing tool, i.e. takes some input and produces some output. Assume further that there is a test suite with a small but representative set of example inputs, or maybe some randomly generated inputs (using a fixed seed). If the test suite is good, then the hash of the output on the test suite examples is an approximation of the program’s semantics.

And the build system can use this “semantics hash”. If the build tool changes, the build system can re-run the test suite and compare the output with the previous run. If the outputs differ, then the tool seems to have changed in significant ways, and the build system needs to re-run the data processing. But if the test suite outputs are unchanged, then it can assume that the behaviour of the tool has not changed, and re-use the existing data outputs.
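
To make this concrete, here is a minimal Haskell sketch of computing such a “semantics hash” (my own illustration, not a feature of any existing build system; it assumes the tool reads its input from stdin and writes its output to stdout, and it uses the process and hashable packages):

import Data.Hashable (hash)
import System.Process (readProcess)

-- Run the tool on every test input and hash the concatenated outputs.
-- This number changes only when the tool's observable behaviour on the
-- test suite changes, not on every recompilation of the tool.
semanticsHash :: FilePath -> [String] -> IO Int
semanticsHash tool testInputs = do
  outputs <- mapM (readProcess tool []) testInputs
  return (hash (concat outputs))

-- The build system would store this hash in a stamp file and let the
-- expensive processing step depend on the stamp file, not on the tool.
writeSemanticsStamp :: FilePath -> FilePath -> [String] -> IO ()
writeSemanticsStamp stampFile tool testInputs =
  semanticsHash tool testInputs >>= writeFile stampFile . show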

That is the core idea! A few additional thoughts, though:

  • What if the test suite changes? If the above idea is implemented naively, then adding a test case to the test suite will change the semantics hash, and re-build everything. That would be horrible in terms of incentives! So instead, the build system should keep the old version of the build tool around, re-calculate its semantics hash based on the new test suite, and then compare that. This way, extending the test suite does not cause recompilation.

Hash-and-store-based build systems like Nix should have no problem keeping the previous version of the build tool around as long as there is output that depends on it.

  • A consequence of this approach is that if you find and fix a bug in your tool that is not covered by the existing test suite, then you absolutely have to add a test for that to your test suite, otherwise the bug fix will not actually make it to your data output. This might be the biggest downside of the approach, but at least it sets incentives right in that it makes you maintain a good regression test suite.

  • What if things go wrong, the test suite is incomplete and the build system re-uses build output because it wrongly assumes that two versions of the build tool have the same semantics?

If done right, this can be detected and fixed: The build system needs to record which tool version (and not just which “semantics hash”) was used to produce a particular output. So once the test suite uncovers the difference, the build system will no longer consider the two versions equivalent and – after the fact – invalidate the re-use of the previous build output, and re-build what needs to be rebuilt.

I am curious to hear whether anybody has played with these or similar ideas before. How did it go? Would it be possible to implement this in Nix? In Shake? Other systems? Tell me your thoughts!

WebGL, Fragment Shader, GHCJS and reflex-dom

Published 2018-07-22 in sections English, Haskell.

What a potpourri of topics... too long to read? Click here!

On the side and very slowly I am working on a little game that involves breeding spherical patterns… more on that later (maybe). I want to implement it in Haskell, but have it run in the browser, so I reached for GHCJS, the Haskell-to-Javascript compiler.

WebGL for 2D images

A crucial question was: How do I draw a generative pattern onto an HTML canvas element? My first attempt was to calculate the pixel data into a bit array and use putImageData() to push it onto the canvas, but it was prohibitively slow. I might have done something stupid along the way, and some optimization might have helped, but I figured that I should not calculate the colors of each pixel myself, but leave this to those who are best at it: the browser and (ideally) the graphics card.

So I took this as an opportunity to learn about WebGL, in particular fragment shaders. The term shader is misleading, and should mentally be replaced with “program”, because it is no longer (just) about shading. WebGL is intended to do 3D graphics, and one sends a bunch of coordinates for triangles, a vertex shader and a fragment shader to the browser. The vertex shader places the vertices, while the fragment shader colors each pixel on the visible triangles. This is a gross oversimplification, but that is fine: We only really care about the last step, and if our coordinates always just define a rectangle that fills the whole canvas, and the vertex shader does not do anything interesting, then what remains is an HTML canvas that takes a program (written in the GL shader language), which is run for each pixel and calculates the color to be shown at that pixel.

Perfect! Just what I need. Dynamically creating a program that renders the pattern I want to show is squarely within Haskell’s strengths.

A reflex-dom widget

As my game UI grows, I will at some point no longer want to deal with raw DOM access, events etc., and the abstraction that makes creating such user interfaces painless is Functional Reactive Programming (FRP). One of the main mature implementations is Ryan Trinkle's reflex-dom, and I want to use this project to get more hands-on experience with it.

Based on my description above, once I hide all the details of the WebGL canvas setup, what I really have is a widget that takes a text string (representing the fragment shader), and creates a DOM element for it. This would suggest a function with this type signature

fragmentShaderCanvas ::
    MonadWidget t m =>
    Dynamic t Text ->
    m ()

where the input text is dynamic, meaning it can change over time (and the canvas will be updated accordingly). In fact, I also want to specify attributes for the canvas (especially width and height), and if the supplied fragment shader source is invalid and does not compile, I want to get my hands on the error messages, as provided by the browser. So I ended up with this:

fragmentShaderCanvas ::
    MonadWidget t m =>
    Map Text Text ->
    Dynamic t Text ->
    m (Dynamic t (Maybe Text))

which very pleasingly hides all the complexity of setting up the WebGL context from the user. This is abstraction at excellence!

I published this widget in the hackage.haskell.org/package/reflex-dom-fragment-shader-canvas package on Hackage.
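
For illustration, here is roughly how one might use the widget in a small reflex-dom application (a simplified sketch – the module name and the exact field names may differ slightly from the released package): a text area for the shader source, the canvas, and any compilation errors below.

{-# LANGUAGE OverloadedStrings #-}
import Data.Maybe (fromMaybe)
import qualified Data.Map as Map
import Reflex.Dom
import Reflex.Dom.FragmentShaderCanvas (fragmentShaderCanvas)

main :: IO ()
main = mainWidget $ do
  -- a text area to edit the fragment shader source
  ta <- textArea def
  -- the canvas, re-rendered whenever the source changes
  errs <- fragmentShaderCanvas
            (Map.fromList [("width", "300"), ("height", "300")])
            (_textArea_value ta)
  -- show compilation errors, if any
  el "pre" $ dynText (fromMaybe "" <$> errs)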

A Demo

And because reflex-dom makes it so nice, I created a little demo program; it is essentially a fragment shader playground!

On https://nomeata.github.io/reflex-dom-fragment-shader-canvas/ you will find a text area where you can edit the fragment shader code. All your changes are immediately reflected in the canvas on the right, and in the list of warnings and errors below the text area. The code for this demo is pretty short.

A few things could be improved, of course: For example, the canvas element should have its resolution automatically adjusted to the actual size on screen, but it is somewhat tricky to find out when and if a DOM element has changed size. Also, the WebGL setup should be rewritten to be more defensive, and fail more gracefully if things go wrong.

BTW, if you need a proper shader playground, check out Shadertoy.

Development and automatic deployment

The reflex authors all use Nix as their development environment, and if you want to use reflex-dom, then using Nix is certainly the path of least resistance. But I would like to point out that it is not a necessity, and you can stay squarely in cabal land if you want:

  • You don’t actually need ghcjs to develop your web application: reflex-dom builds on jsaddle, which has a mode where you build your program using normal GHC, and it runs a web server that your browser connects to. It works better with Chrome than with Firefox at the moment, but is totally adequate for developing a program.

  • If you do want to install ghcjs, then it is actually relatively easy: The README on the ghc-8.2 branch of GHCJS tells you how to build and install GHCJS with cabal new-build.

  • cabal itself supports ghcjs just like ghc! Just pass --ghcjs -w ghcjs to it.

  • Because few people use ghcjs and reflex with cabal, some important packages (ghcjs-base, reflex, reflex-dom) are not on Hackage, or only in old versions. You can point cabal to local checkouts using a cabal.project file or even directly to the git repositories. But it is simpler to just use a Hackage overlay that I created with these three packages, until they are uploaded to Hackage.

  • If the application you create is a pure client-based program and could therefore be hosted on any static web host, wouldn’t it be nice if you could just have it appear somewhere in the internet whenever you push to your project? Even that is possible, as I describe in an example repository!

It uses Travis CI to build GHCJS and the dependencies, caches them, builds your program and – if successful – uploads the result to GitHub Pages. In fact, the demo linked above is produced using that. Just push, and 5 minutes later the changes are available online!

I know about rumors that Herbert’s excellent multi-GHC PPA repository might provide .deb packages with GHCJS prebuilt soon. Once that happens, and maybe ghcjs-base and reflex get uploaded to Hackage, then the power of reflex-based web development will be conveniently available to all Haskell developers (even those who shunned Nix so far), and I am looking forward to many cool projects coming out of that.

The diameter of German+English

Published 2018-05-23 in sections English, Digital World.

Languages never map directly onto each other. The English word fresh can mean frisch or frech, but frisch can also be cool. Jumping from one word to another like this yields entertaining sequences that take you to completely different things. Here is one I came up with:

frech – fresh – frisch – cool – abweisend – dismissive – wegwerfend – trashing – verhauend – banging – Geklopfe – knocking – …

And I could go on … but how far? So here is a little experiment I ran:

  1. I obtained a German-English dictionary. Conveniently, after registration, you can get dict.cc’s translation file, which is simply a text file with three columns: German, English, Word form.

  2. I wrote a program that takes these words and first canonicalizes them a bit: removing attributes like [ugs.], [regional] and {f}, the “to” in front of verbs, and other embellishments.

  3. I created the undirected, bipartite graph of all these words. This is a pretty big graph – ~750k words in each language, a million edges. A path in this graph is precisely a sequence like the one above.

  4. In this graph, I tried to find a diameter. The diameter of a graph is the longest shortest path, i.e. the greatest distance between two nodes that are still connected by some path (a small code sketch of this step follows below).
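
For illustration, a minimal Haskell sketch of this last step (simplified; it only computes a lower bound for the diameter via the classic BFS double sweep, assuming an adjacency-map representation of the translation graph):

import qualified Data.Map.Strict as M

type Graph = M.Map String [String]

-- Build the undirected translation graph from (German, English) pairs.
fromPairs :: [(String, String)] -> Graph
fromPairs ps = M.fromListWith (++) (concat [ [(g, [e]), (e, [g])] | (g, e) <- ps ])

-- Breadth-first search: distance from the start word to every reachable word.
bfs :: Graph -> String -> M.Map String Int
bfs g start = go (M.singleton start 0) [start]
  where
    go dist []     = dist
    go dist (x:xs) =
      let d     = dist M.! x
          nbrs  = [ y | y <- M.findWithDefault [] x g, y `M.notMember` dist ]
          dist' = foldr (\y -> M.insert y (d + 1)) dist nbrs
      in  go dist' (xs ++ nbrs)

-- Double sweep: BFS from an arbitrary word, then again from the farthest
-- word found; the largest distance of the second sweep is a lower bound
-- (and often the exact value) for the diameter of that component.
diameterFrom :: Graph -> String -> Int
diameterFrom g start = snd (farthest (bfs g (fst (farthest (bfs g start)))))
  where
    farthest = M.foldrWithKey (\k v best -> if v > snd best then (k, v) else best) ("", 0)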

Because the graph is big (and my code maybe not fully optimized), it ran for a few hours, but here it is: The English expression be annoyed by sb. and the German noun Icterus are related by 55 translations. Here is the full list:

  • be annoyed by sb.
  • durch jdn. verärgert sein
  • be vexed with sb.
  • auf jdn. böse sein
  • be angry with sb.
  • jdm. böse sein
  • have a grudge against sb.
  • jdm. grollen
  • bear sb. a grudge
  • jdm. etw. nachtragen
  • hold sth. against sb.
  • jdm. etw. anlasten
  • charge sb. with sth.
  • jdn. mit etw. [Dat.] betrauen
  • entrust sb. with sth.
  • jdm. etw. anvertrauen
  • entrust sth. to sb.
  • jdm. etw. befehlen
  • tell sb. to do sth.
  • jdn. etw. heißen
  • call sb. names
  • jdn. beschimpfen
  • abuse sb.
  • jdn. traktieren
  • pester sb.
  • jdn. belästigen
  • accost sb.
  • jdn. ansprechen
  • address oneself to sb.
  • sich an jdn. wenden
  • approach
  • erreichen
  • hit
  • Treffer
  • direct hit
  • Volltreffer
  • bullseye
  • Hahnenfuß-ähnlicher Wassernabel
  • pennywort
  • Mauer-Zimbelkraut
  • Aaron's beard
  • Großkelchiges Johanniskraut
  • Jerusalem star
  • Austernpflanze
  • goatsbeard
  • Geißbart
  • goatee
  • Ziegenbart
  • buckhorn plantain
  • Breitwegerich / Breit-Wegerich
  • birdseed
  • Acker-Senf / Ackersenf
  • yellows
  • Gelbsucht
  • icterus
  • Icterus

Pretty neat!

So what next?

I could try to obtain an even longer chain by forgetting whether a word is English or German (and lower-casing everything), thus allowing wild jumps like hat – hut – hütte – lodge.

Or write a tool where you can enter two arbitrary words and it finds such a path between them, if there exists one. Unfortunately, it seems that the terms of the dict.cc data dump would not allow me to create such a tool as a web site (but maybe I can ask).

Or I could throw in additional languages!

What would you do?

Update (2018-06-17):

I ran the code again, this time lower-casing all words, and allowing false-friend translations, as suggested above. The resulting graph has – surprisingly – precisely the same diameter (55), but with a partly different list:

  • peyote
  • peyote
  • mescal
  • meskal
  • mezcal
  • blaue agave
  • maguey
  • amerikanische agave
  • american agave
  • jahrhundertpflanze
  • century plant
  • fächerlilie
  • tumbleweed
  • weißer fuchsschwanz
  • common tumbleweed
  • acker-fuchsschwanz / ackerfuchsschwanz
  • rough pigweed
  • grünähriger fuchsschwanz
  • love-lies-bleeding
  • stiefmütterchen
  • pansy
  • schwuchtel
  • fruit
  • ertrag
  • gain
  • vorgehen
  • approach
  • sich an jdn. wenden
  • address oneself to sb.
  • jdn. ansprechen
  • accost sb.
  • jdn. belästigen
  • pester sb.
  • jdn. traktieren
  • abuse sb.
  • jdn. beschimpfen
  • call sb. names
  • jdn. etw. heißen
  • tell sb. to do sth.
  • jdm. etw. befehlen
  • entrust sth. to sb.
  • jdm. etw. anvertrauen
  • entrust sb. with sth.
  • jdn. mit etw. [dat.] betrauen
  • charge sb. with sth.
  • jdm. etw. zur last legen
  • hold sth. against sb.
  • jdm. etw. nachtragen
  • bear sb. a grudge
  • jdm. grollen
  • have a grudge against sb.
  • jdm. böse sein
  • be angry with sb.
  • auf jdn. böse sein
  • be mad at sb.
  • auf jdn. einen (dicken) hals haben

Note that there is not actually a false friend in this list – it seems that adding the edges just changed the order of edges in the graph representation, and my code just happened to find a different path of maximal length.

Proof reuse in Coq using existential variables

Published 2018-05-18 in sections English, Coq.

This is another technical post that is of interest only to Coq users.

TL;DR: Using existential variables for hypotheses allows you to easily refactor a complicated proof into an induction scheme and the actual proofs.

Setup

As a running example, I will use a small theory of “bags”, which you can think of as lists represented as trees, to allow an O(1) append operation:

Require Import Coq.Arith.Arith.
Require Import Psatz.
Require FunInd.

(* The data type *)
Inductive Bag a : Type :=
  | Empty : Bag a
  | Unit  : a -> Bag a
  | Two   : Bag a -> Bag a -> Bag a.

Arguments Empty {_}.
Arguments Unit {_}.
Arguments Two {_}.

Fixpoint length {a} (b : Bag a) : nat :=
  match b with
  | Empty     => 0
  | Unit _    => 1
  | Two b1 b2 => length b1 + length b2
  end.

(* A smart constructor that ensures that a [Two] never
   has [Empty] as subtrees. *)
Definition two {a} (b1 b2 : Bag a) : Bag a := match b1 with
  | Empty => b2
  | _ => match b2 with | Empty => b1
                       | _ => Two b1 b2 end end.

Lemma length_two {a} (b1 b2 : Bag a) :
  length (two b1 b2) = length b1 + length b2.
Proof. destruct b1, b2; simpl; lia. Qed.

(* A first non-trivial function *)
Function take {a : Type} (n : nat) (b : Bag a) : Bag a :=
  if n =? 0
  then Empty
  else match b with
       | Empty     => b
       | Unit x    => b
       | Two b1 b2 => two (take n b1) (take (n - length b1) b2)
       end.

The theorem

The theorem that I will be looking at in this post describes how length and take interact:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).

Before I dive into it, let me point out that this example itself is too simple to warrant the techniques that I will present in this post. I have to rely on your imagination to scale this up to appreciate the effect on significantly bigger proofs.

Naive induction

How would we go about proving this lemma? Surely, induction is the way to go! And indeed, this is provable using induction (on the Bag) just fine:

Proof.
  intros.
  revert n.
  induction b; intros n.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + rewrite Nat.min_0_r. reflexivity.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. lia.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

But there is a problem: A proof by induction on the Bag argument immediately creates three subgoals, one for each constructor. But that is not how take is defined, which first checks the value of n, independent of the constructor. This means that we have to do the case-split and the proof for the case n = 0 three times, although they are identical. It’s a one-line proof here, but imagine something bigger...

Proof by fixpoint

Can we refactor the proof to handle the case n = 0 first? Yes, but not with a simple invocation of the induction tactic. We could do well-founded induction on the length of the argument, or we can do the proof using the more primitive fix tactic. The latter is a bit hairy: you won’t know whether your proof is accepted until you do Qed (or check with Guarded), but when it works it can yield some nice proofs.

Proof.
  intros a.
  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n. rewrite Nat.min_0_l. reflexivity.
  + destruct b.
    * rewrite Nat.min_0_r. reflexivity.
    * simpl. lia.
    * simpl. rewrite length_two, !IH. lia.
Qed.

Nice: we eliminated the duplication of proofs!

A functional induction lemma

Again, imagine that we jumped through more hoops here ... maybe some well-founded recursion with a tricky size measure and complex proofs that the measure decreases ... or maybe you need to carry around an invariant about your arguments and you have to work hard to satisfy the assumption of the induction hypothesis.

As long as you do only one proof about take, that is fine. As soon as you do a second proof, you will notice that you have to repeat all of that, and it can easily make up most of your proof...

Wouldn’t it be nice if you could do the common parts of the proofs only once, obtain a generic proof scheme that you can use for (most) proofs about take, and then just fill in the blanks?

Incidentally, the Function command provides precisely that:

take_ind
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall (n : nat) (b : Bag a), (n =? 0) = true -> P n b Empty) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> b = Empty -> P n Empty b) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> forall x : a, b = Unit x -> P n (Unit x) b) ->
       (forall (n : nat) (b : Bag a),
        (n =? 0) = false ->
        forall b1 b2 : Bag a,
        b = Two b1 b2 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

which is great if you can use Function (although not perfect – we’d rather see n = 0 instead of (n =? 0) = true), but often Function is not powerful enough to define the function you care about.

Extracting the scheme from a proof

We could define our own take_ind' by hand, but that is a lot of work, we may not get it right easily, and when we change our functions, there is now this big proof statement to update.

Instead, let us use existential variables, whose types Coq infers from how we use them, so we don’t have to declare them. Unfortunately, Coq does not support writing just

Lemma take_ind':
  forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
  forall (IH1 : ?) (IH2 : ?) (IH3 : ?) (IH4 : ?),
  forall n b, P n b (take n b).

where we just leave out the types of the assumptions (Isabelle allows this…), but we can fake it using a generic technique.

We begin by stating an auxiliary lemma using a sigma type to say “there exist some assumptions that are sufficient to show the conclusion”:

Lemma take_ind_aux:
  forall a (P : _ -> _ -> _ -> Prop),
  { Hs : Prop |
    Hs -> forall n (b : Bag a), P n b (take n b)
  }.

We use the eexists tactic (“existential exists”, see https://coq.inria.fr/refman/proof-engine/tactics.html#coq:tacv.eexists) to construct the sigma type without committing to the type of Hs yet.

Proof.
  intros a P.
  eexists.
  intros Hs.

This gives us an assumption Hs : ?Hs – note the existential type. We need four of those, which we can achieve by writing

  pose proof Hs as H1. eapply proj1 in H1. eapply proj2 in Hs.
  pose proof Hs as H2. eapply proj1 in H2. eapply proj2 in Hs.
  pose proof Hs as H3. eapply proj1 in H3. eapply proj2 in Hs.
  rename Hs into H4.

we now have this goal state:

1 subgoal
a : Type
P : nat -> Bag a -> Bag a -> Prop
H4 : ?Goal2
H1 : ?Goal
H2 : ?Goal0
H3 : ?Goal1
______________________________________(1/1)
forall (n : nat) (b : Bag a), P n b (take n b)

At this point, we start reproducing the proof of length_take: The same approach to induction, the same case splits:

  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n.
    revert b.
    refine H1.
  + rename n0 into Hnot_null.
    destruct b.
    * revert n Hnot_null.
      refine H2.
    * rename a0 into x.
      revert x n Hnot_null.
      refine H3.
    * assert (IHb1 : P n b1 (take n b1)) by apply IH.
      assert (IHb2 : P (n - length b1) b2 (take (n - length b1) b2)) by apply IH.
      revert n b1 b2 Hnot_null IHb1 IHb2.
      refine H4.
Defined. (* Important *)

Inside each case, we move all relevant hypotheses into the goal using revert and refine with the corresponding assumption, thus instantiating it. In the recursive case (Two), we assert that P holds for the subterms, by induction.

It is important to end this proof with Defined, and not Qed, as we will see later.

In a next step, we can remove the sigma type:

Definition take_ind' a P := proj2_sig (take_ind_aux a P).

The type of take_ind' is as follows:

take_ind'
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       proj1_sig (take_ind_aux a P) ->
       forall n b, P n b (take n b)

This looks almost like an induction lemma. The assumptions of this lemma have the not very helpful type proj1_sig (take_ind_aux a P), but we can already use this to prove length_take:

Theorem length_take:
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind' with (P := fun n b r => length r = min n (length b)).
  repeat apply conj; intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

In this case I have to explicitly state P where I invoke take_ind', because Coq cannot figure out this instantiation on its own (it requires higher-order unification, which is undecidable and unpredictable). In other cases I had more luck.

After I apply take_ind', I have this proof goal:

______________________________________(1/1)
proj1_sig (take_ind_aux a (fun n b r => length r = min n (length b)))

which is the type that Coq inferred for Hs above. We know that this is a conjunction of a bunch of assumptions, and we can split it as such, using repeat apply conj. At this point, Coq needs to look inside take_ind_aux; this would fail if we used Qed to conclude the proof of take_ind_aux.

This gives me four goals, one for each case of take, and the remaining proof really only deals with the specifics of length_take – no more worrying about getting the induction right or doing the case-splitting the right way.

Also note that, very conveniently, Coq uses the same name for the induction hypotheses IHb1 and IHb2 that we used in take_ind_aux!

Making it prettier

It may be a bit confusing to have this proj1_sig in the type, especially when working in a team where others will use your induction lemma without knowing its internals. But we can resolve that, and also turn the conjunctions into normal arrows, using a bit of tactic support. This is completely generic, so if you follow this procedure, you can just copy most of that:

Lemma uncurry_and: forall {A B C}, (A /\ B -> C) -> (A -> B -> C).
Proof. intros. intuition. Qed.
Lemma under_imp:   forall {A B C}, (B -> C) -> (A -> B) -> (A -> C).
Proof. intros. intuition. Qed.
Ltac iterate n f x := lazymatch n with
  | 0 => x
  | S ?n => iterate n f uconstr:(f x)
end.
Ltac uncurryN n x :=
  let n' := eval compute in n in
  lazymatch n' with
  | 0 => x
  | S ?n => let uc := iterate n uconstr:(under_imp) uconstr:(uncurry_and) in
            let x' := uncurryN n x in
            uconstr:(uc x')
end.

With this in place, we can define our final proof scheme lemma:

Definition take_ind'' a P
  := ltac:(let x := uncurryN 3 (proj2_sig (take_ind_aux a P)) in exact x).
Opaque take_ind''.

The type of take_ind'' is now exactly what we’d wish for: All assumptions spelled out, and the n =? 0 already taken care of (compare this to the take_ind provided by the Function command above):

take_ind''
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall b : Bag a, P 0 b Empty) ->
       (forall n : nat, n <> 0 -> P n Empty Empty) ->
       (forall (x : a) (n : nat), n <> 0 -> P n (Unit x) (Unit x)) ->
       (forall (n : nat) (b1 b2 : Bag a),
        n <> 0 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

At this point we can mark take_ind'' as Opaque, to hide how we obtained this lemma.

Our proof does not change a lot; we merely no longer have to use repeat apply conj:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind'' with (P := fun n b r => length r = min n (length b)); intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

Is it worth it?

It was in my case: Applying this trick in our ongoing work of verifying parts of the Haskell compiler GHC separated a somewhat involved proof into a re-usable proof scheme (go_ind), making the actual proofs (go_all_WellScopedFloats, go_res_WellScoped) much neater and to the point. It saved “only” 60 lines (if I don’t count the 20 “generic” lines above), but the pay-off will increase as I do even more proofs about this function.

Avoid the dilemma of the trailing comma

Published 2018-04-30 in sections English, Haskell.

The Haskell syntax uses comma-separated lists in various places and does, in contrast to other programming languages, not allow a trailing comma. If everything goes on one line you write

  (foo, bar, baz)

and everything is nice.

Lining up

But if you want to have one entry on each line, then the obvious plan

  (foo,
   bar,
   baz
  )

is aesthetically unpleasing and moreover, extending the list by one to

  (foo,
   bar,
   baz,
   quux
  )

modifies two lines, which produces less pretty diffs.

Because it is much more common to append to lists rather than to prepend, Haskellers have developed the idiom of leading comma:

  ( foo
  , bar
  , baz
  , quux
  )

which looks strange until you are used to it, but solves the problem of appending to a list. And we see this idiom in many places:

  • In Cabal files:

      build-depends: base >= 4.3 && < 5
                   , array
                   , deepseq >= 1.2 && < 1.5
  • In module headers:

    {-# LANGUAGE DefaultSignatures
               , EmptyCase
               , ExistentialQuantification
               , FlexibleContexts
               , FlexibleInstances
               , GADTs
               , InstanceSigs
               , KindSignatures
               , RankNTypes
               , ScopedTypeVariables
               , TemplateHaskell
               , TypeFamilies
               , TypeInType
               , TypeOperators
               , UndecidableInstances #-}

Think outside the list!

I started to avoid this pattern where possible. And it is possible in many places: instead of having a declaration with a list, you can just have multiple declarations. I.e.:

  • In Cabal files:

      build-depends: base >= 4.3 && < 5
      build-depends: array
      build-depends: deepseq >= 1.2 && < 1.5
  • In module headers:

    {-# LANGUAGE DefaultSignatures #-}
    {-# LANGUAGE EmptyCase #-}
    {-# LANGUAGE ExistentialQuantification #-}
    {-# LANGUAGE FlexibleContexts #-}
    {-# LANGUAGE FlexibleInstances #-}
    {-# LANGUAGE GADTs #-}
    {-# LANGUAGE InstanceSigs #-}
    {-# LANGUAGE KindSignatures #-}
    {-# LANGUAGE RankNTypes #-}
    {-# LANGUAGE ScopedTypeVariables #-}
    {-# LANGUAGE TemplateHaskell #-}
    {-# LANGUAGE TypeFamilies #-}
    {-# LANGUAGE TypeInType #-}
    {-# LANGUAGE TypeOperators #-}
    {-# LANGUAGE UndecidableInstances #-}

It is a bit heavier, but it has a number of advantages:

  1. Both appending and prepending works without touching other lines.
  2. It is visually more homogeneous, making it – despite the extra words – easier to spot mistakes visually.
  3. You can easily sort the declarations alphabetically with your editor.
  4. Especially in Cabal files: If you have syntax error in your dependency specification (which I always have, writing << instead of < due to my Debian background), cabal will actually give you a helpful error location – it always only tells you which build-depends stanza was wrong, so if you have only one, then that’s not helpful.

What when it does not work?

Unfortunately, not every list in Haskell can have that treatment, and that’s why the recent GHC proposal on ExtraCommas wants to lift the restriction. In particular, it wants to allow trailing commas in subexport lists:

module Foo
    ( Foo(
        A,
        B,
      ),
      fromFoo,
    )

(Weirdly, export lists already allow trailing commas). An alternative here might be to write

module Foo
    ( Foo(A),
      Foo(B),
      fromFoo,
    )

and teach the compiler to not warn about the duplicate export of the Foo type.

For plain lists, this idiom can be useful:

list :: [Int]
list = let (>>) = (++) in do
   [ 1 ]
   [ 2 ]
   [ 3 ]
   [ 4 ]

It requires RebindableSyntax, so I do not recommend it for regular code, but it can be useful in a module that is dedicated to holding some generated data or configuration. And of course it works with any binary operator, not just (++).

Verifying local definitions in Coq

Published 2018-04-22 in sections English, Coq.

TL;DR: We can give top-level names to local definitions, so that we can state and prove stuff about them without having to rewrite the programs.

When a Haskeller writes Coq

Imagine you teach Coq to a Haskell programmer, and give them the task of pairing each element in a list with its index. The Haskell programmer might have

addIndex :: [a] -> [(Integer, a)]
addIndex xs = go 0 xs
  where go n [] = []
        go n (x:xs) = (n,x) : go (n+1) xs

in mind and write this Gallina function (Gallina is the programming language of Coq):

Require Import Coq.Lists.List.
Import ListNotations.

Definition addIndex {a} (xs : list a) : list (nat * a) :=
  let fix go n xs := match xs with
                     | []    => []
                     | x::xs => (n, x) :: go (S n) xs
                     end
  in go 0 xs.

Alternatively, imagine you are using hs-to-coq to mechanically convert the Haskell definition into Coq.

When a Coq user tries to verify that

Now your task is to prove something about this function, for example

Theorem addIndex_spec:
  forall {a} n (xs : list a),
  nth n (map fst (addIndex xs)) n = n.

If you have just learned Coq, you will think “I can do this, this surely holds by induction on xs.” But if you have a bit more experience, you will already see a problem with this (if you do not see the problem yet, I encourage you to stop reading, copy the few lines above, and try to prove it).

The problem is that – as so often – you have to generalize the statement for the induction to go through. The theorem as stated says something about addIndex or, in other words, about go 0. But in the inductive case, you will need some information about go 1. In fact, you need a lemma like this:

Lemma go_spec:
  forall {a} n m k (xs : list a), k = n + m ->
  nth n (map fst (go m xs)) k = k.

But go is not a (top-level) function! How can we fix that?

  • We can try to awkwardly work around not having a name for go in our proofs, and essentially prove go_spec inside the proof of addIndex_spec. This might work in this small case, but does not scale up to larger proofs.
  • We can ask the programmer to avoid using local functions, and first define go as a top-level fixed point. But maybe we don’t want to bother them because of that. (Or, more likely, we are using hs-to-coq and that tool stubbornly tries to make the output as similar to the given Haskell code as possible.)
  • We can copy’n’paste the definition of go and make a separate, after-the-fact top-level definition. But this is not nice from a maintenance point of view: If the code changes, we have to update this copy.
  • Or we apply this one weird trick...

The weird trick

We can define go after-the-fact, but instead of copy’n’pasting the definition, we can use Coq’s tactics to define it. Here it goes:

Definition go {a} := ltac:(
  let e := eval cbv beta delta [addIndex] in (@addIndex a []) in
  (* idtac e; *)
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

Let us take it apart:

  1. We define go, and give the parameters that go depends upon. Note that of the two parameters of addIndex, the definition of go only depends on (“captures”) a, but not xs.
  2. We do not give a type to go. We could, but that would again just be copying information that is already there.
  3. We define go via an ltac expression: Instead of a term we give a tactic that calculates the term.
  4. This tactic first binds e to the body of addIndex. To do so, it needs to pass enough arguments to addIndex. The concrete value of the list argument does not matter, so we pass []. The term @addIndex a [] is now evaluated with the evaluation flags eval cbv beta delta [addIndex], which say “unfold addIndex and do beta reduction, but nothing else”. In particular, we do not do zeta reduction, which would reduce the let go := … definition. (The user manual very briefly describes these flags.)
  5. The idtac e line can be used to peek at e, for example when the next tactic fails. We can use this to check that e really is of the form let fix go := … in ….
  6. The lazymatch line matches e against the pattern let x := ?def in _, and binds the definition of go to the name def.
  7. And the exact def tactic tells Coq to use def as the definition of go.

We now have defined go, of type go : forall {a}, nat -> list a -> list (nat * a), and can state and prove the auxiliary lemma:

Lemma go_spec:
  forall {a} n m k (xs : list a), k = n + m ->
  nth n (map fst (go m xs)) k = k.
Proof.
  intros ?????.
  revert n m k.
  induction xs; intros; destruct n; subst; simpl.
  1-3:reflexivity.
  apply IHxs; lia.
Qed.

When we come to the theorem about addIndex, we can play a little trick with fold to make the proof goal pretty:

Theorem addIndex_spec:
  forall {a} n (xs : list a),
  nth n (map fst (addIndex xs)) n = n.
Proof.
  intros.
  unfold addIndex.
  fold (@go a).
  (* goal here: nth n (map fst (go 0 xs)) n = n *)
  apply go_spec; lia.
Qed.

Multiple local definitions

The trick extends to multiple local definitions, but needs some extra considerations to ensure that terms are closed. A bit contrived, but let us assume that we have this function definition:

Definition addIndex' {a} (xs : list a) : list (nat * a) :=
  let inc := length xs in
  let fix go n xs := match xs with
                     | []    => []
                     | x::xs => (n, x) :: go (inc + n) xs
                     end in
  go 0 xs.

We now want to give names to inc and to go. I like to use a section to collect the common parameters, but that is not essential here. The trick above works flawlessly for inc:

Section addIndex'.
Context {a} (xs : list a).

Definition inc := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

But if we try it for go', like so:

Definition go' := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := _ in let y := ?def in _ =>
    exact def
  end).

we get “Ltac variable def depends on pattern variable name x which is not bound in current context”. To fix this, we use higher-order pattern matching (@?def) to substitute “our” inc for the local inc:

Definition go' := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := _ in let y := @?def x in _ =>
    let def' := eval cbv beta in (def inc) in
    exact def'
  end).

instead. We have now defined both inc and go' and can use them in proofs about addIndex':

Theorem addIndex_spec':
  forall n, nth n (map fst (addIndex' xs)) n = n * length xs.
Proof.
  intros.
  unfold addIndex'.
  fold inc go'. (* order matters! *)
  (* goal here: nth n (map fst (go' 0 xs)) n = n * inc *)

Reaching into a match

This trick also works when the local definition we care about is inside a match statement. Consider:

Definition addIndex_weird {a} (oxs : option (list a))
  := match oxs with
     | None => []
     | Some xs =>
       let fix go n xs := match xs with
                          | []    => []
                          | x::xs => (n, x) :: go (S n) xs
                          end in
       go 0 xs
     end.

Definition go_weird {a} := ltac:(
  let e := eval cbv beta match delta [addIndex_weird]
           in (@addIndex_weird a (Some [])) in
  idtac e;
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

Note the addition of match to the list of evaluation flags passed to cbv.

Conclusion

While local definitions are idiomatic in Haskell (in particular thanks to the where syntax), they are usually avoided in Coq, because they get in the way of verification. If, for some reason, one is stuck with such definitions, then this trick presents a reasonable way out.

Reservoir sampling with few random bits

Published 2018-03-25 in sections English, Math.

Assume you are Nisus, the “one cross each” guy in Monty Python’s “Life of Brian” movie. Condemned prisoners file past you one by one, on their way to crucifixion. Now it happens that on this particular day, you feel both benevolent and very indecisive. Because you feel benevolent, you want to liberate one of the condemned. But because you feel indecisive, you don’t want to pick the lucky one yourself, so you have to leave it to chance. And you don’t want anybody to have grounds to complain, so you want to give each prisoner exactly the same chance.

This would be a relatively simple task if you knew the number of prisoners, n, ahead of time: Pick a random number k ∈ {1, …, n}, count while you hand out the crosses, and tell the kth prisoner to get lost.

But alas, the Roman judiciary system is big and effective, and you have no idea how many prisoners will emerge from the dungeons today. So you might plan to just let them all wait next to you while you count them (to figure out what n is), but that won’t work either: There is not enough space to have them waiting, and some might try to escape in the confusion before you are done counting. So in fact, you can only have one prisoner waiting at a time. As soon as the next one comes, you have to send one of them towards execution.

This is still a solvable problem, and the solution is Reservoir sampling: You let the first prisoner wait. When the second one comes, you pick one of them with probability ½, and let him wait. When the third one comes, you send him on with probability ⅔, and let him wait with probability ⅓. The fourth is sent forth with probability ¾ and kept waiting with probability ¼. And so on. When finally the dungeon master tells you that all prisoners have emerged, whoever is waiting next to you is the lucky guy.
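
In code, this selection procedure is short. Here is a minimal Haskell sketch of it (my own illustration, using System.Random; it is not yet economical with random bits – that is what the rest of this post is about):

import System.Random (randomRIO)

-- Keep the k-th element with probability 1/k; whoever is still kept when
-- the stream ends was chosen uniformly among all elements.
pickOne :: [a] -> IO (Maybe a)
pickOne = go Nothing (1 :: Int)
  where
    go kept _ []     = return kept
    go kept k (x:xs) = do
      r <- randomRIO (1, k)   -- uniform in {1, …, k}
      go (if r == 1 then Just x else kept) (k + 1) xs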

Before you do that, you should convince yourself that this is fair: The kth prisoner is the lucky one if you let him wait and, from then on, all other prisoners are sent forth. The probability of this happening is 1/k ⋅ k/(k+1) ⋅ (k+1)/(k+2) ⋅ ⋯ ⋅ (n−1)/n, which, after a number of cancellations, resolves to 1/n. Because k no longer shows up in this calculation, the probability is the same for everybody, and hence fair.

How to not make a decision

A fair RNG

So far so good. But the story misses an important point: Where do you get your randomness from? Just picking randomly is hardly random. Luckily, you have just been given your guerdon, and you have a Roman coin in your pocket. Let us assume the coin is perfectly fair: With probability ½ it shows the head of Libertas, and with probability ½ a bunch of horses. It is easy to use this coin to distinguish between the first and second prisoners, but when the third comes out, things get hairy.

You can try to throw the coin many times; say, 10 times. This has 2¹⁰ = 1024 outcomes, which we can interpret as a number between 1 and 1024. If you give the third prisoner his freedom when this number is between 1 and 341, and you give him a cross if the number is between 342 and 1024, then you have made a pretty fair call. Not yet fully satisfying, but let’s go with it for now.

There is, however, another problem with this: You have to throw the coin a lot of times: For each prisoner, you toss it 10 times! This is too slow, and the plebs will get impatient. And it clearly is wasteful. The great philosopher Shannonus taught you that throwing the coin ten times gives you 10 bits of entropy, but you actually need only −(⅓ ⋅ log₂(⅓) + ⅔ ⋅ log₂(⅔)) = 0.918… bits to make this ⅓-vs-⅔ decision. We should use those left-over 9.08 bits! How do we do that?

Unreal real randomness

To come up with a better (and maybe even optimal) plan, imagine for a moment that you had a source of randomness that gives you a real real number in the range (0, 1), uniformly. Clearly, that would solve your problem, because when prisoner k emerges, you can draw a random number r and let this prisoner wait if r < 1/k.

But you don’t actually have to draw a random number more than once: It suffices to pick a single r ∈ (0, 1) at the very beginning. You just have to be a bit cleverer when deciding whether to keep the kth prisoner next to you. Let’s go through it step by step:

  • When Augustus, the first prisoner, comes out, you simply keep him.
  • When Brutus, the second prisoner, comes out, you keep him with probability ½. You do that if r < ½. You also start some bookkeeping, by remembering the range of r that has led to this outcome; in this case, the range is (0, ½). If you picked Augustus, the range is (½,1).
  • Now Claudius comes out, and you want to keep him with probability ⅓. If the range that you remembered is (0, ½), you keep him if r < ⅙. Similarly, if the range is (½, 1), you keep him if r < ½ + ⅙ = ⁴⁄₆.

    The currently remembered range is now (⁴⁄₆, 1) if Augustus is waiting, (⅙, ½) if Brutus is waiting, and either (0, ⅙) or (½, ⁴⁄₆) if Claudius is waiting. Notice that the ranges are not all of the same size, but if you add up the lengths of each prisoner’s ranges, you get ⅓ for everyone, which means that every prisoner has the same chance to be waiting so far.
  • Finally, Decimus shows up. You repeat this procedure: Look at the current interval; if r is in the left quarter of it, Decimus stays, otherwise he leaves.

I tried to visualize this, and came up with the table below. Pick r, locate the right spot on the x axis, and you can read off, by going down, who is waiting next to you.

A table for Nisus

Back to using coins

With this strategy in mind, we can go back to using a coin. Now, a single coin flip does not give us r. 1000 coin flips don’t either. If we could flip the coin an infinite number of times, yes, then we could get r out of this: The first coin flip decides the first bit after the binary point: Head is 0, horses are 1. The second flip decides the second bit. And so on.

But note that you don’t need to know r precisely. To decide whether Augustus or Brutus stays, you only need to know if r is in the left or the right half – and you know that after the first coin flip. You can also tell whether Claudius stays as soon as you have thrown the coin often enough to determine whether r < ⅙. This might already be the case after a single coin throw (if it shows horses), or after a few more. The likelihood that you cannot yet decide with certainty on which side of such a threshold r lies goes down exponentially, so with probability 1, you will come to a conclusion eventually.

And the good thing is: If you were unlucky and had to throw the coin very often, then you learned a lot about r, and you can decide the next few prisoners without throwing a coin at all! In this sense, we managed to use “left-over entropy”.

And the other good thing is: There are no rounding errors any more. Every prisoner has the same chance to be freed.
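
Here is a minimal Haskell sketch of this refined procedure (my own toy implementation: the intervals are exact rationals, and randomIO stands in for the fair coin):

import System.Random (randomIO)

type Interval = (Rational, Rational)

-- One coin flip halves the interval that r is known to lie in.
flipCoin :: Interval -> IO Interval
flipCoin (lo, hi) = do
  heads <- randomIO :: IO Bool
  let mid = (lo + hi) / 2
  return (if heads then (lo, mid) else (mid, hi))

-- Decide whether r < t, flipping coins only until the answer is certain,
-- and remember what we have learned about r for later decisions.
decide :: Rational -> Interval -> IO (Bool, Interval)
decide t known@(lo, hi)
  | hi <= t   = return (True,  known)
  | lo >= t   = return (False, known)
  | otherwise = flipCoin known >>= decide t

-- The k-th prisoner stays if r falls into the left 1/k of the interval
-- belonging to whoever is currently waiting.
pickOneWithCoin :: [a] -> IO (Maybe a)
pickOneWithCoin []     = return Nothing
pickOneWithCoin (x:xs) = Just <$> go x (2 :: Integer) (0, 1) (0, 1) xs
  where
    go kept _ _        _     []     = return kept
    go kept k (lo, hi) known (y:ys) = do
      let t = lo + (hi - lo) / fromIntegral k
      (stay, known') <- decide t known
      if stay
        then go y    (k + 1) (lo, t) known' ys
        else go kept (k + 1) (t, hi) known' ys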

A visualization

I tried to visualize the whole story, using CodeWorld. In this not very pretty animation, you see the prisoners (so far just simple numbers) coming from the left. Nisus (not shown) either lets them wait in the slot at the top, or sends them to the right. Sometimes he needs to toss a coin. Below you can see, in numbers and in a bar, which ranges of r are interesting: In green, the range which indicates that the current prisoner can stay; in red, the range where he has to go; and in blue, the range that r is known to lie in so far.

Nisus (not shown) at work

By the time you have read this far, the animation is probably not at the beginning any more. Move the mouse over it, and you will see controls in the lower left corner; the first button resets the animation. You can also check out the source code.

Open questions

So Nisus is happy, because with few coin tosses, he can fairly pick a random prisoner to free. I am not fully satisfied yet, because of two open questions:

  • Is this algorithm optimal? (And what does it mean to be optimal?)
  • What is the expected number of coin throws for n prisoners? Does this number converge to log₂(n), which is the entropy of the result?

If you know the answers, let me know!

Brent Yorgey pointed me to a sequence of blog posts by Jeremy Gibbons on “Arithmetic Coding”, which use a similar structure of dividing the unit interval.