nomeata’s mind shares
http://www.joachim-breitner.de//blog
Interleaving normalizing reduction strategies
http://www.joachim-breitner.de/blog/736-Interleaving_normalizing_reduction_strategies
mail@joachim-breitner.de (Joachim Breitner)<p>A little, not very significant, observation about <a href="https://en.wikipedia.org/wiki/Lambda_calculus">lambda calculus</a> and reduction strategies.</p>
<p>A <a href="https://en.wikipedia.org/wiki/Reduction_strategy_(lambda_calculus)">reduction strategy</a> determines, for every lambda term with redexes left, which redex to reduce next. A reduction strategy is normalizing if this procedure terminates for every lambda term that has a normal form.</p>
<p>A fun fact is: If you have two normalizing reduction strategies <span class="math inline"><em>s</em><sub>1</sub></span> and <span class="math inline"><em>s</em><sub>2</sub></span>, consulting them alternately may not yield a normalizing strategy.</p>
<p>Here is an example. Consider the lambda-term <span class="math inline"><em>o</em> = (<em>λ</em><em>x</em>.<em>x</em><em>x</em><em>x</em>)</span>, and note that <span class="math inline"><em>o</em><em>o</em> → <em>o</em><em>o</em><em>o</em> → <em>o</em><em>o</em><em>o</em><em>o</em> → …</span>. Let <span class="math inline"><em>M</em><sub><em>i</em></sub> = (<em>λ</em><em>x</em>.(<em>λ</em><em>x</em>.<em>x</em>))(<em>o</em><em>o</em><em>o</em>…<em>o</em>)</span> (with <span class="math inline"><em>i</em></span> occurrences of <span class="math inline"><em>o</em></span>). <span class="math inline"><em>M</em><sub><em>i</em></sub></span> has two redexes, and reduces to either <span class="math inline">(<em>λ</em><em>x</em>.<em>x</em>)</span> or <span class="math inline"><em>M</em><sub><em>i</em> + 1</sub></span>. In particular, <span class="math inline"><em>M</em><sub><em>i</em></sub></span> has a normal form.</p>
<p>The two reduction strategies are:</p>
<ul>
<li><span class="math inline"><em>s</em><sub>1</sub></span>, which picks the second redex if given <span class="math inline"><em>M</em><sub><em>i</em></sub></span> for an <em>even</em> <span class="math inline"><em>i</em></span>, and the first (left-most) redex otherwise.</li>
<li><span class="math inline"><em>s</em><sub>2</sub></span>, which picks the second redex if given <span class="math inline"><em>M</em><sub><em>i</em></sub></span> for an <em>odd</em> <span class="math inline"><em>i</em></span>, and the first (left-most) redex otherwise.</li>
</ul>
<p>Both strategies are normalizing: If during a reduction we come across <span class="math inline"><em>M</em><sub><em>i</em></sub></span>, then the reduction terminates in one or two steps; otherwise we are just doing left-most reduction, which is known to be normalizing.</p>
<p>But if we alternately consult <span class="math inline"><em>s</em><sub>1</sub></span> and <span class="math inline"><em>s</em><sub>2</sub></span> while trying to reduce <span class="math inline"><em>M</em><sub>2</sub></span>, we get the sequence</p>
<p><br/><span class="math display"><em>M</em><sub>2</sub> → <em>M</em><sub>3</sub> → <em>M</em><sub>4</sub> → …</span><br/></p>
<p>which shows that this strategy is not normalizing.</p>
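<p>The counterexample can be sketched abstractly in Haskell. This is my own simplification for illustration: instead of full lambda terms, we track only whether the current term is some <em>M</em><sub><em>i</em></sub> or already the normal form.</p>

```haskell
-- Abstract model: `M i` stands for the term M_i, `NF` for the normal
-- form (λx.x). This simplification is mine; the real strategies act on
-- full lambda terms.
data Term = M Int | NF deriving (Eq, Show)

-- s1 picks the second redex (yielding M_{i+1}) when i is even,
-- s2 when i is odd; in all other cases both do left-most reduction,
-- which takes M_i straight to the normal form.
s1, s2 :: Term -> Term
s1 (M i) | even i    = M (i + 1)
         | otherwise = NF
s1 NF = NF
s2 (M i) | odd i     = M (i + 1)
         | otherwise = NF
s2 NF = NF

-- Consulting the two strategies alternately, starting from M_2:
interleaved :: [Term]
interleaved = go (M 2) (cycle [s1, s2])
  where go t (s:ss) = t : go (s t) ss
        go t []     = [t]  -- unreachable: cycle is infinite
```

<p>Each strategy alone terminates in at most two steps (<code>s1 (s1 (M 2))</code> and <code>s2 (M 2)</code> both give <code>NF</code>), but <code>take 5 interleaved</code> is <code>[M 2,M 3,M 4,M 5,M 6]</code> and the sequence never reaches <code>NF</code>.</p>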
<p><strong>Afterthought:</strong> The interleaved strategy is not actually a reduction strategy in the usual definition, as it is not a pure (stateless) function from lambda term to redex.</p>Thu, 15 Feb 2018 14:17:58 -0500
The magic “Just do it” type class
http://www.joachim-breitner.de/blog/735-The_magic_%E2%80%9CJust_do_it%E2%80%9D_type_class
<p>One of the great strengths of strongly typed functional programming is that it allows <em>type driven development</em>. When I have some non-trivial function to write, I first write its type signature, and then writing the implementation is often very obvious.</p>
<h3 id="once-more-i-am-feeling-silly">Once more, I am feeling silly</h3>
<p>In fact, it often is completely mechanical. Consider the following function:</p>
<div class="sourceCode"><pre class="sourceCode hs"><code class="sourceCode haskell"><span class="ot">foo ::</span> (r <span class="ot">-></span> <span class="dt">Either</span> e a) <span class="ot">-></span> (a <span class="ot">-></span> (r <span class="ot">-></span> <span class="dt">Either</span> e b)) <span class="ot">-></span> (r <span class="ot">-></span> <span class="dt">Either</span> e (a,b))</code></pre></div>
<p>This is somewhat like the bind for a combination of the error monad and the reader monad, and remembers the intermediate result, but that doesn’t really matter now. What matters is that once I wrote that type signature, I feel silly having to also write the code, because there isn’t really anything interesting about that.</p>
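<p>For the record, the mechanical implementation I would otherwise have to write by hand looks something like this (one of several equivalent ways to spell it):</p>

```haskell
-- The boring, fully determined implementation: thread the environment r
-- through both computations, short-circuit on the first Left, and pair
-- up the results.
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a, b))
foo f g r = case f r of
  Left e  -> Left e
  Right a -> case g a r of
    Left e' -> Left e'
    Right b -> Right (a, b)
```

<p>For example, <code>foo (\r -> Right (r + 1)) (\a r -> Right (a * r)) 2</code> evaluates to <code>Right (3, 6)</code>.</p>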
<p>Instead, I’d like to tell the compiler to just do it for me! I want to be able to write</p>
<div class="sourceCode"><pre class="sourceCode hs"><code class="sourceCode haskell"><span class="ot">foo ::</span> (r <span class="ot">-></span> <span class="dt">Either</span> e a) <span class="ot">-></span> (a <span class="ot">-></span> (r <span class="ot">-></span> <span class="dt">Either</span> e b)) <span class="ot">-></span> (r <span class="ot">-></span> <span class="dt">Either</span> e (a,b))
foo <span class="fu">=</span> justDoIt</code></pre></div>
<p>And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run <a href="https://hackage.haskell.org/package/ghc-justdoit"><code>cabal install ghc-justdoit</code></a>, and then the following code actually works:</p>
<div class="sourceCode"><pre class="sourceCode hs"><code class="sourceCode haskell"><span class="ot">{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}</span>
<span class="kw">import </span><span class="dt">GHC.JustDoIt</span>
<span class="ot">foo ::</span> (r <span class="ot">-></span> <span class="dt">Either</span> e a) <span class="ot">-></span> (a <span class="ot">-></span> (r <span class="ot">-></span> <span class="dt">Either</span> e b)) <span class="ot">-></span> (r <span class="ot">-></span> <span class="dt">Either</span> e (a,b))
foo <span class="fu">=</span> justDoIt</code></pre></div>
<h3 id="what-is-this-justdoit">What is this <code>justDoIt</code>?</h3>
<pre><code>*GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
class JustDoIt a
justDoIt :: JustDoIt a => a
(…) :: JustDoIt a => a</code></pre>
<p>Note that there are no instances for the <code>JustDoIt</code> class -- they are created, on the fly, by the GHC plugin <code>GHC.JustDoIt.Plugin</code>. During type-checking, it looks at these <code>JustDoIt t</code> constraints and tries to construct a term of type <code>t</code>. It is based on <a href="https://rd.host.cs.st-andrews.ac.uk/publications/jsl57.pdf">Dyckhoff’s LJT proof search</a> in intuitionistic propositional calculus, which I have <a href="https://github.com/nomeata/ghc-justdoit/blob/master/GHC/LJT.hs">implemented to work directly on GHC’s types and terms</a> (and I find it pretty slick). Those who like Unicode can write <code>(…)</code> instead.</p>
<h3 id="what-is-supported-right-now">What is supported right now?</h3>
<p>Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for</p>
<div class="sourceCode"><pre class="sourceCode hs"><code class="sourceCode haskell"><span class="kw">data</span> <span class="dt">Result</span> a b <span class="fu">=</span> <span class="dt">Failure</span> a <span class="fu">|</span> <span class="dt">Success</span> b
<span class="kw">newtype</span> <span class="dt">ErrRead</span> r e a <span class="fu">=</span> <span class="dt">ErrRead</span> {<span class="ot"> unErrRead ::</span> r <span class="ot">-></span> <span class="dt">Result</span> e a }
<span class="ot">foo2 ::</span> <span class="dt">ErrRead</span> r e a <span class="ot">-></span> (a <span class="ot">-></span> <span class="dt">ErrRead</span> r e b) <span class="ot">-></span> <span class="dt">ErrRead</span> r e (a,b)
foo2 <span class="fu">=</span> (…)</code></pre></div>
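<p>For comparison, here is a hand-written implementation that the synthesized term should be equivalent to. This is my guess at one such implementation; the plugin’s actual output may differ in shape, but not in behaviour:</p>

```haskell
data Result a b = Failure a | Success b deriving (Eq, Show)
newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }

-- Unwrap the newtype, do the same short-circuiting as for `foo`,
-- and wrap the result up again.
foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a, b)
foo2 m k = ErrRead $ \r -> case unErrRead m r of
  Failure e -> Failure e
  Success a -> case unErrRead (k a) r of
    Failure e -> Failure e
    Success b -> Success (a, b)
```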
<p>It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.</p>
<h3 id="how-do-i-know-that-it-creates-a-sensible-implementation">How do I know that it creates a sensible implementation?</h3>
<p>You can check the generated Core using <code>-ddump-simpl</code> of course. But it is much more convenient to use <a href="https://github.com/nomeata/inspection-testing"><code>inspection-testing</code></a> to test such things, as I am doing in the <a href="https://github.com/nomeata/ghc-justdoit/blob/master/examples/Demo.hs">Demo file</a>, which you can skim to see a few more examples of <code>justDoIt</code> in action. I very much enjoyed reaping the benefits of the work I put into <code>inspection-testing</code>, as this is so much more convenient than manually checking the output.</p>
<h3 id="is-this-for-real-should-i-use-it">Is this for real? Should I use it?</h3>
<p>Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:</p>
<ul>
<li><p>To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are:</p>
<ul>
<li>Preprocessors</li>
<li>Template Haskell</li>
<li>Generic programming together with type-level computation (e.g. <a href="http://hackage.haskell.org/package/generic-lens">generic-lens</a>)</li>
<li>GHC Core-to-Core plugins</li>
</ul>
<p>In order to make this viable, I <a href="http://git.haskell.org/ghc.git/commit/0e022e56b130ab9d277965b794e70d8d3fb29533">slightly changed the API</a> for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.</p></li>
<li><p>To advertise the idea of taking type-driven computation to its logical conclusion and free users from having to implement functions that they have already specified sufficiently precisely by their type.</p></li>
</ul>
<h3 id="what-needs-to-happen-for-this-to-become-real">What needs to happen for this to become real?</h3>
<p>A bunch of things:</p>
<ul>
<li>The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.</li>
<li>The implementation is very much unoptimized.</li>
<li>For this to be practically useful, the user needs to be able to use it with confidence. In particular, the user should be able to predict what code comes out. If there are multiple possible implementations, there needs to be a clear specification of which implementations are more desirable than others, and it should probably fail if there is ambiguity.</li>
<li>It ignores any recursive type, so it cannot do anything with lists. It would be much more useful if it could do some best-effort thing here as well.</li>
</ul>
<p>If someone wants to pick it up from here, that’d be great!</p>
<h3 id="i-have-seen-this-before">I have seen this before…</h3>
<p>Indeed, the idea is not new.</p>
<p>Most famously in the Haskell world is certainly Lennart Augustsson’s <a href="http://hackage.haskell.org/package/djinn">Djinn</a> tool that creates Haskell source expressions based on types. Alejandro Serrano has connected that to GHC in the library <a href="http://hackage.haskell.org/package/djinn-ghc">djinn-ghc</a>, but I couldn’t use this because it was still outputting Haskell source terms (and it is easier to re-implement LJT rather than to implement type inference).</p>
<p>Lennart Spitzner’s <a href="http://hackage.haskell.org/package/exference">exference</a> is a much more sophisticated tool that also takes library API functions into account.</p>
<p>In the Scala world, Sergei Winitzki very recently presented the pretty neat <a href="https://github.com/Chymyst/curryhoward">curryhoward</a> library that uses Scala macros. He seems to have some good ideas about ordering solutions by likely desirability.</p>
<p>And in Idris, Joomy Korkut has created <a href="https://github.com/joom/hezarfen">hezarfen</a>.</p>Fri, 02 Feb 2018 14:01:11 -0500
Finding bugs in Haskell code by proving it
http://www.joachim-breitner.de/blog/734-Finding_bugs_in_Haskell_code_by_proving_it
<p>Last week, I wrote a small nifty tool called <a href="https://github.com/nomeata/bisect-binary"><code>bisect-binary</code></a>, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working”. I wrote it in Haskell, and part of the Haskell code, in the <a href="https://github.com/nomeata/bisect-binary/blob/48f9b9f05509a8b0c15c654f790fefd4e0c22676/src/Intervals.hs">Intervals.hs</a> module, is a data structure for “subsets of a file” represented as a sorted list of intervals:</p>
<pre><code>data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]</code></pre>
<p>The code is the kind of Haskell code that I like to write: A small local recursive function, a few guards for case analysis, and I am done:</p>
<pre><code>intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
where
go _ [] = []
go [] _ = []
go (i1:is1) (i2:is2)
-- reorder for symmetry
| to i1 < to i2 = go (i2:is2) (i1:is1)
-- disjoint
| from i1 >= to i2 = go (i1:is1) is2
-- subset
| to i1 == to i2 = I f' (to i2) : go is1 is2
-- overlapping
| otherwise = I f' (to i2) : go (i1 { from = to i2} : is1) is2
where f' = max (from i1) (from i2)</code></pre>
<p>But clearly, the code is already complicated enough so that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...</p>
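<p>Such a QuickCheck-style property would relate <code>intersect</code> to a pointwise specification via a <code>member</code> function; here it is checked by brute force over small offsets instead of random generation. The surrounding definitions are repeated to make the snippet self-contained, and <code>Offset = Int</code> is a simplification (the original uses <code>Int64</code>):</p>

```haskell
-- Self-contained copy of the data types and intersect from the post.
type Offset = Int
data Interval = I { from :: Offset, to :: Offset } deriving Show
newtype Intervals = Intervals [Interval] deriving Show

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1') (i2:is2')
      -- reorder for symmetry
      | to i1 < to i2 = go (i2:is2') (i1:is1')
      -- disjoint
      | from i1 >= to i2 = go (i1:is1') is2'
      -- subset
      | to i1 == to i2 = I f' (to i2) : go is1' is2'
      -- overlapping
      | otherwise = I f' (to i2) : go (i1 { from = to i2 } : is1') is2'
      where f' = max (from i1) (from i2)

-- Reference semantics: is the offset x in the set?
member :: Offset -> Intervals -> Bool
member x (Intervals is) = any (\i -> from i <= x && x < to i) is

-- The property: intersection agrees with pointwise conjunction,
-- here checked on a small range of offsets.
prop_intersect :: Intervals -> Intervals -> Bool
prop_intersect a b =
  and [ member x (intersect a b) == (member x a && member x b)
      | x <- [0 .. 20] ]
```

<p>A real QuickCheck setup would additionally need a generator that only produces interval lists satisfying the invariant.</p>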
<h3 id="now-available-formal-verification-for-haskell">Now available: Formal Verification for Haskell</h3>
<p>Ten months ago I complained that there was <a href="http://www.joachim-breitner.de/blog/717-Why_prove_programs_equivalent_when_your_compiler_can_do_that_for_you_">no good way to verify Haskell code</a> (and created the nifty hack <a href="https://github.com/nomeata/ghc-proofs"><code>ghc-proofs</code></a>). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created <a href="https://github.com/antalsz/hs-to-coq"><code>hs-to-coq</code></a>: a translator from Haskell to the theorem prover Coq.</p>
<p>We have used <code>hs-to-coq</code> on various examples, as described in our <a href="https://arxiv.org/abs/1711.09286">CPP'18 paper</a>, but it is high time to use it for real. The easiest way to use <code>hs-to-coq</code> at the moment is to clone the repository, copy one of the example directories (e.g. <code>examples/successors</code>), place the Haskell file to be verified there and put the right module name into the <code>Makefile</code>. I also commented out parts of the Haskell file that would drag in non-base dependencies.</p>
<h3 id="massaging-the-translation">Massaging the translation</h3>
<p>Often, <code>hs-to-coq</code> translates Haskell code without a hitch, but sometimes, a bit of help is needed. In this case, I had to specify <a href="https://github.com/antalsz/hs-to-coq/blob/8f84d61093b7be36190142c795d6cd4496ef5aed/examples/intervals/edits">three so-called <em>edits</em></a>:</p>
<ul>
<li><p>The Haskell code uses <code>Intervals</code> both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line</p>
<pre><code>rename value Intervals.Intervals = ival</code></pre>
<p>changes the constructor name to <code>ival</code>.</p></li>
<li><p>I use the <code>Int64</code> type in the Haskell code. The Coq version of Haskell’s base library that comes with <code>hs-to-coq</code> does not support that yet, so I change that via</p>
<pre><code>rename type GHC.Int.Int64 = GHC.Num.Int</code></pre>
<p>to the normal <code>Int</code> type, which itself is mapped to <a href="https://coq.inria.fr/library/Coq.Numbers.BinNums.html">Coq’s <code>Z</code> type</a>. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of <code>Int64</code>. Since none of my code does arithmetic, only comparisons, I am fine with that.</p></li>
<li><p>The biggest hurdle is the recursion of the local <code>go</code> functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the <code>go</code> above is not. For example, in the first case, the arguments to <code>go</code> are simply swapped. It is very much not obvious why this is not an infinite loop.</p>
<p>I can specify a termination measure, i.e. a function that takes the arguments <code>xs</code> and <code>ys</code> and returns a “size” of type <code>nat</code> that decreases in every call: Add the lengths of <code>xs</code> and <code>ys</code>, multiply by two and add one if the first interval in <code>xs</code> ends before the first interval in <code>ys</code>.</p>
<p>If the problematic function were a top-level function I could tell <code>hs-to-coq</code> about this termination measure and it would use this information to define the function using <code>Program Fixpoint</code>.</p>
<p>Unfortunately, <code>go</code> is a local function, so this mechanism is not available to us. If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make <code>go</code> a top-level function, but in this case I did not want to change the Haskell code.</p>
<p>Another way out offered by <code>hs-to-coq</code> is to translate the recursive function using an axiom <code>unsafeFix : forall a, (a -> a) -> a</code>. This looks scary, but as I explain in the previous blog post, <a href="http://www.joachim-breitner.de/blog/733-Existence_and_Termination">this axiom can be used in a safe way</a>.</p>
<p>I should point out that it is my dissenting opinion to consider this a valid verification approach. The official stand of the <code>hs-to-coq</code> author team is that using <code>unsafeFix</code> in the verification can only be a temporary state, and eventually you’d be expected to fix (heh) this, for example by moving the functions to the top-level and using <code>hs-to-coq</code>’s support for <code>Program Fixpoint</code>.</p></li>
</ul>
<p>With these edits in place, <code>hs-to-coq</code> spits out a faithful Coq copy of my Haskell code.</p>
<h3 id="time-to-prove-things">Time to prove things</h3>
<p>The rest of the work is mostly straightforward use of Coq. I define the invariant I expect to hold for these lists of intervals, namely that they are sorted, non-empty, disjoint and non-adjacent:</p>
<pre><code>Fixpoint goodLIs (is : list Interval) (lb : Z) : Prop :=
match is with
| [] => True
| (I f t :: is) => (lb <= f)%Z /\ (f < t)%Z /\ goodLIs is t
end.
Definition good is := match is with
ival is => exists n, goodLIs is n end.</code></pre>
<p>and I give them meaning as Coq type for sets, <a href="https://coq.inria.fr/library/Coq.Sets.Ensembles.html"><code>Ensemble</code></a>:</p>
<pre><code>Definition range (f t : Z) : Ensemble Z :=
(fun z => (f <= z)%Z /\ (z < t)%Z).
Definition semI (i : Interval) : Ensemble Z :=
match i with I f t => range f t end.
Fixpoint semLIs (is : list Interval) : Ensemble Z :=
match is with
| [] => Empty_set Z
| (i :: is) => Union Z (semI i) (semLIs is)
end.
Definition sem is := match is with
ival is => semLIs is end.</code></pre>
<p>Now I prove for every function that it preserves the invariant and that it corresponds to the, well, corresponding function, e.g.:</p>
<pre><code>Lemma intersect_good : forall (is1 is2 : Intervals),
good is1 -> good is2 -> good (intersect is1 is2).
Proof. … Qed.
Lemma intersection_spec : forall (is1 is2 : Intervals),
good is1 -> good is2 ->
sem (intersect is1 is2) = Intersection Z (sem is1) (sem is2).
Proof. … Qed.</code></pre>
<p>Even though I punted on the question of termination while defining the functions, I do not get around that while verifying this, so I formalize the termination argument above</p>
<pre><code>Definition needs_reorder (is1 is2 : list Interval) : bool :=
match is1, is2 with
| (I f1 t1 :: _), (I f2 t2 :: _) => (t1 <? t2)%Z
| _, _ => false
end.
Definition size2 (is1 is2 : list Interval) : nat :=
(if needs_reorder is1 is2 then 1 else 0) + 2 * length is1 + 2 * length is2.</code></pre>
<p>and use it in my inductive proofs.</p>
<p>As I intend this to be a write-once proof, I happily copy’n’pasted proof scripts and did not do any cleanup. Thus, the <a href="https://github.com/antalsz/hs-to-coq/blob/8f84d61093b7be36190142c795d6cd4496ef5aed/examples/intervals/Proofs.v">resulting Proof file</a> is big, ugly and repetitive. I am confident that judicious use of Coq tactics could greatly condense this proof.</p>
<h3 id="using-program-fixpoint-after-the-fact">Using Program Fixpoint after the fact?</h3>
<p>These proofs are also an experiment in how I can actually do induction over a locally defined recursive function without too ugly proof goals (hence the line <code>match goal with [ |- context [unsafeFix ?f _ _] ] => set (u := f) end.</code>). One could improve upon this approach by following these steps:</p>
<ol style="list-style-type: decimal">
<li><p>Define copies (say, <code>intersect_go_witness</code>) of the local <code>go</code> using <code>Program Fixpoint</code> with the above termination measure. The termination argument needs to be made only once, here.</p></li>
<li><p>Use this function to prove that the argument <code>f</code> in <code>go = unsafeFix f</code> actually has a fixed point:</p>
<pre><code>Lemma intersect_go_sound:
  f intersect_go_witness = intersect_go_witness</code></pre>
<p>(This requires functional extensionality). This lemma indicates that my use of the axioms <code>unsafeFix</code> and <code>unsafeFix_eq</code> is actually sound, as discussed in the previous blog post.</p></li>
<li><p>Still prove the desired properties for the <code>go</code> that uses <code>unsafeFix</code>, as before, but using the <a href="https://coq.inria.fr/refman/schemes.html#sec655">functional induction scheme</a> for <code>intersect_go</code>! This way, the actual proofs are free from any noisy termination arguments.</p>
<p>(The trick to define a recursive function just to throw away the function and only use its induction rule is one I learned in Isabelle, and is very useful to separate the meat from the red tape in complex proofs. Note that the induction rule for a function does not actually mention the function!)</p></li>
</ol>
<p>Maybe I will get to this later.</p>
<p><strong>Update:</strong> I experimented a bit in that direction, and it does not quite work as expected. In step 2 I am stuck because <code>Program Fixpoint</code> does not create a fixpoint-unrolling lemma, and in step 3 I do not get the induction scheme that I was hoping for. Both problems <a href="https://stackoverflow.com/a/46995609/946226">would not exist if I use the <code>Function</code> command</a>, although that needs some trickery to support a termination measure on multiple arguments. The induction lemma is not quite as polished as I was hoping for, so <a href="https://github.com/antalsz/hs-to-coq/blob/b7efc7a8dbacca384596fc0caf65e62e87ef2768/examples/intervals/Proofs_Function.v">the resulting proof</a> is still somewhat ugly, and it requires copying code, which does not scale well.</p>
<h3 id="efforts-and-gains">Efforts and gains</h3>
<p>I spent exactly 7 hours working on these proofs, according to <a href="http://arbtt.nomeata.de/"><code>arbtt</code></a>. I am sure that writing these functions took me much less time, but I cannot calculate that easily, as they were originally in the <code>Main.hs</code> file of <code>bisect-binary</code>.</p>
<p>I did <a href="https://github.com/nomeata/bisect-binary/commit/48f9b9f05509a8b0c15c654f790fefd4e0c22676#diff-38999f20f11fe6a93fa194587e8ad507">find and fix three bugs</a>:</p>
<ul>
<li>The <code>intersect</code> function would not always retain the invariant that the intervals would be non-empty.</li>
<li>The <code>subtract</code> function would prematurely advance through the list of intervals in the second argument, which can lead to a genuinely wrong result. (This occurred twice.)</li>
</ul>
<p><strong>Conclusion:</strong> Verification of Haskell code using Coq is now practically possible!</p>
<p><strong>Final rant:</strong> Why is the Coq standard library so incomplete (compared to, say, Isabelle’s) and requires me to prove <a href="https://github.com/antalsz/hs-to-coq/blob/8f84d61093b7be36190142c795d6cd4496ef5aed/examples/intervals/Ensemble_facts.v">so many lemmas about basic functions on <code>Ensembles</code></a>?</p>Tue, 05 Dec 2017 09:17:43 -0500
Existence and Termination
http://www.joachim-breitner.de/blog/733-Existence_and_Termination
<p>I recently had some intense discussions that revolved around issues of existence and termination of functions in Coq, about axioms and what certain proofs actually mean. We came across some interesting questions and thoughts that I’ll share with those of my blog readers with an interest in proofs and interactive theorem proving.</p>
<h3 id="tldr">tl;dr</h3>
<ul>
<li>It can be meaningful to assume the <em>existence</em> of a function in Coq, and under that assumption prove its <em>termination</em> and other properties.</li>
<li>Axioms and assumptions are logically equivalent.</li>
<li>Unsound axioms do not necessarily invalidate a theory development, when additional meta-rules govern their use.</li>
</ul>
<h3 id="preparation">Preparation</h3>
<p>Our main running example is the infamous Collatz series. Starting at any natural number, the next is calculated as follows:</p>
<pre><code>Require Import Coq.Arith.Arith.
Definition next (n : nat) : nat :=
if Nat.even n then n / 2 else 3*n + 1.</code></pre>
<p>If you start with some positive number, you are going to end up reaching 1 eventually. Or are you? So far nobody has found a number where that does not happen, but we also do not have a proof that it never happens. It is one of the <a href="https://en.wikipedia.org/wiki/Collatz_conjecture">great mysteries of Mathematics</a>, and if you can solve it, you’ll be famous.</p>
<h3 id="a-failed-definition">A failed definition</h3>
<p>But assume we had an idea on how to prove that we are always going to reach 1, and tried to formalize this in Coq. One attempt might be to write</p>
<pre><code>Fixpoint good (n : nat) : bool :=
if n <=? 1
then true
else good (next n).
Theorem collatz: forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.</code></pre>
<p>Unfortunately, this does not work: Coq rejects this recursive definition of the function <code>good</code>, because it does not see how that is a terminating function, and Coq requires all such recursive function definitions to be obviously terminating – without this check there would be a risk of Coq’s type checking becoming incomplete or its logic being unsound.</p>
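<p>For contrast, the corresponding definition is unproblematic in Haskell, which happily accepts possibly non-terminating functions (a Haskell rendering I am adding for illustration):</p>

```haskell
next :: Integer -> Integer
next n = if even n then n `div` 2 else 3 * n + 1

-- Haskell does not check termination, so this definition goes through;
-- whether `good n` actually returns True for every positive n is
-- exactly the Collatz conjecture.
good :: Integer -> Bool
good n = n <= 1 || good (next n)
```

<p>This is precisely the gap between the two languages: in Haskell the function exists by fiat, and its termination is a separate, open question; Coq insists on settling that question at definition time.</p>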
<p>The idiomatic way to avoid this problem is to state <code>good</code> as an inductive predicate... but let me explore another idea here.</p>
<h3 id="working-with-assumptions">Working with assumptions</h3>
<p>What happens if we just assume that the function <code>good</code>, described above, exists, and then perform our proof:</p>
<pre><code>Theorem collatz
(good : nat -> bool)
(good_eq : forall n,
good n = if n <=? 1 then true else good (next n))
: forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.</code></pre>
<p>Would we accept this as a proof of Collatz’ conjecture? Or did we just assume what we want to prove, in which case the theorem is vacuously true, but we just performed useless circular reasoning?</p>
<p>Upon close inspection, we find that the assumptions of the theorem (<code>good</code> and <code>good_eq</code>) are certainly satisfiable:</p>
<pre><code>Definition trivial (n: nat) : bool := true.
Lemma trivial_eq: forall n,
trivial n = if n <=? 1 then true else trivial (next n).
Proof. intro; case (n <=? 1); reflexivity. Qed.
Lemma collatz_trivial: forall n, trivial n = true.
Proof.
apply (collatz trivial trivial_eq).
Qed.</code></pre>
<p>So clearly there exists a function of type <code>nat -> bool</code> that satisfies the assumed equation. This is good, because it means that the <code>collatz</code> theorem is not simply assuming <code>False</code>!</p>
<p>Some (including me) might already be happy with this theorem and proof, as it clearly states: “Every function that follows the Collatz series eventually reaches 1”.</p>
<p>Others might still not be at ease with such a proof. Above we have seen that we cannot define the real collatz series in Coq. How can the <code>collatz</code> theorem say something that is not definable?</p>
<h3 id="classical-reasoning">Classical reasoning</h3>
<p>One possible way of getting some assurance is to define <code>good</code> as a classical function. The logic of Coq can be extended with the law of the excluded middle without making it inconsistent, and with that axiom, we can define a version of <code>good</code> that is pretty convincing (sorry for the slightly messy proof):</p>
<pre><code>Require Import Coq.Logic.ClassicalDescription.
Require Import Omega.
Definition classical_good (n:nat) : bool :=
if excluded_middle_informative (exists m, Nat.iter m next n <= 1)
then true else false.
Lemma iter_shift:
forall a f x (y:a), Nat.iter x f (f y) = f (Nat.iter x f y).
Proof.
intros. induction x. reflexivity. simpl. rewrite IHx. reflexivity. Qed.
Lemma classical_good_eq: forall n,
classical_good n = if n <=? 1 then true else classical_good (next n).
Proof.
intros.
unfold classical_good at 1.
destruct (Nat.leb_spec n 1).
* destruct (excluded_middle_informative _); try auto.
contradict n0. exists 0. simpl. assumption.
* unfold classical_good.
destruct (Nat.eqb_spec (next n) 0); try auto.
destruct (excluded_middle_informative _), (excluded_middle_informative _); auto.
- contradict n0.
destruct e0.
destruct x; simpl in *. omega.
exists x. rewrite iter_shift. assumption.
- contradict n0.
destruct e0.
exists (S x). simpl. rewrite iter_shift in H0. assumption.
Qed.
Lemma collatz_classical: forall n, classical_good n = true.
Proof. apply (collatz classical_good classical_good_eq). Qed.</code></pre>
<p>The point of this is not so much to use this particular definition of <code>good</code>, but merely to convince ourselves that the assumptions of the <code>collatz</code> theorem above encompass “the” Collatz series, and thus constitutes a proof of the Collatz conjecture.</p>
<p>The main take-away so far is that <strong>existence and termination of a function</strong> are two separate issues, and it is possible to assume the former, prove the latter, and not have done a vacuous proof.</p>
<h3 id="the-ice-gets-thinner">The ice gets thinner</h3>
<h4 id="sections">Sections</h4>
<p>Starting with the above <code>Theorem collatz</code>, there is another train of thought I invite you to follow along.</p>
<p>Probably the “genius idea” proof will be more than a few lines long, and we will probably want to be able to declare helper lemmas and other things along the way. Doing all that in the body of the <code>collatz</code> proof is not very convenient, so instead of using assumptions, we might write</p>
<pre><code>Section collatz.

  Variable good : nat -> bool.
  Variable good_eq : forall n,
    good n = if n <=? 1 then true else good (next n).

  Theorem collatz2 : forall n, good n = true.
  Proof. (* Insert genius idea here.*) Qed.

End collatz.</code></pre>
<p>So far so good: Clearly, I just refactored my code a bit, but did not make any significant change. The theorems <code>collatz2</code> and <code>collatz</code> are equivalent.</p>
<h4 id="sound-axioms">Sound axioms</h4>
<p>But note that we do not really intend to instantiate <code>collatz2</code>. We know that the assumptions are satisfiable (e.g. since we can define <code>trivial</code> or <code>classical_good</code>). So maybe, we would rather avoid the <code>Section</code> mechanism and simply write</p>
<pre><code>Axiom good : nat -> bool.
Axiom good_eq : forall n,
  good n = if n <=? 1 then true else good (next n).

Theorem collatz3 : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.</code></pre>
<p>I assume this will make a few of my readers’ eyebrows go up: How can I dare to start with such Axioms? Do they not invalidate my whole development?</p>
<p>On the other hand, all that a Coq axiom is doing is saying “the following theorems are under the assumption that the axiom holds”. In that sense, <code>collatz3</code> and <code>collatz2</code> are essentially equivalent.</p>
<h4 id="unsound-axioms">Unsound axioms</h4>
<p>Let me take it one step further, and change that to:</p>
<pre><code>Axiom unsafeFix : forall a, (a -> a) -> a.
Axiom unsafeFix_eq : forall f, unsafeFix f = f (unsafeFix f).

Definition good : nat -> bool :=
  unsafeFix (fun good n => if n <=? 1 then true else good (next n)).

Theorem collatz4 : forall n, good n = true.
Proof. (* Insert genius idea here.*) Qed.</code></pre>
<p>At this point, the majority of my readers <em>will</em> cringe. The axiom <code>unsafeFix</code> is so blatantly unsound (in Coq), how do I even dare to think of using it. But bear with me for a moment: I did not change the proof. So maybe the <code>collatz4</code> theorem is still worth something?</p>
<p>I want to argue that it is: Both <code>unsafeFix</code> and <code>unsafeFix_eq</code> are unsound in their full generality. But as long as I instantiate them only with functions <code>f</code> which have a fixpoint, then I cannot prove <code>False</code> this way. So while “Coq + <code>unsafeFix</code>” is unsound, “Coq + <code>unsafeFix</code> + <code>unsafeFix_eq</code> + metarule that these axioms are only instantiated with permissible <code>f</code>” is not.</p>
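<p>To see why this metarule matters, consider (my own illustration, not part of the original development) what happens with an impermissible <code>f</code>: the successor function <code>S</code> on <code>nat</code> has no fixpoint, and instantiating <code>unsafeFix_eq</code> with it yields an equation of the form <code>n = S n</code>, from which <code>False</code> follows:</p>
<pre><code>(* Sketch; the handling of the type argument of unsafeFix is glossed over. *)
Lemma unsafeFix_unsound : False.
Proof.
  pose proof (unsafeFix_eq S) as H.
  (* H : unsafeFix S = S (unsafeFix S) *)
  exact (n_Sn _ H).
Qed.</code></pre>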
<p>In that light, my <code>collatz4</code> proof carries the same meaning as the <code>collatz3</code> proof; it is just less convenient to check. To check the validity of <code>collatz3</code>, I may have to look for uses of <code>admit</code>, misleading uses of syntax, or other tricks and smells. To check the validity of <code>collatz4</code>, I additionally have to check the meta-rule -- tedious, but certainly possible (e.g. by inspecting the proof term).</p>
<h3 id="beyond-collatz">Beyond Collatz</h3>
<p>The questions discussed here did not come up in the context of the Collatz series (for which I unfortunately do not have a proof), but rather in the verification of Haskell code in Coq using <a href="https://github.com/antalsz/hs-to-coq"><code>hs-to-coq</code></a>. I started with the idiomatic Haskell definition of “Quicksort”:</p>
<div class="sourceCode"><pre class="sourceCode hs"><code class="sourceCode haskell"><span class="ot">quicksort ::</span> <span class="dt">Ord</span> a <span class="ot">=></span> [a] <span class="ot">-></span> [a]
quicksort [] <span class="fu">=</span> []
quicksort (p<span class="fu">:</span>xs) <span class="fu">=</span> quicksort lesser <span class="fu">++</span> [p] <span class="fu">++</span> quicksort greater
<span class="kw">where</span> (lesser, greater) <span class="fu">=</span> partition (<span class="fu"><</span>p) xs</code></pre></div>
<p>The termination of this function is not obvious to the Coq type checker. Conveniently, <code>hs-to-coq</code> can optionally create the Coq code using the <code>unsafeFix</code> axiom above, producing (roughly):</p>
<pre><code>Definition quicksort {a} `{Ord a} : list a -> list a :=
  unsafeFix (fun quicksort xs =>
    match xs with
    | nil => nil
    | p :: xs => match partition (fun x => x <? p) xs with
                 | (lesser, greater) => quicksort lesser ++ [p] ++ quicksort greater
                 end
    end).</code></pre>
<p>I <a href="https://github.com/antalsz/hs-to-coq/tree/a8cfb747cee2dbe7ce77b3a118958af99c090768/examples/ghc-base/quicksort">then proved</a> (roughly)</p>
<pre><code>Theorem quicksort_sorted:
forall a `(Ord a) (xs : list a), StronglySorted (quicksort xs).</code></pre>
<p>and</p>
<pre><code>Theorem quicksort_permutation:
forall a `(Ord a) (xs : list a), Permutation (quicksort xs) xs.</code></pre>
<p>These proofs proceed by well-founded induction on the length of the argument <code>xs</code>, and hence encompass a termination proof of <code>quicksort</code>. Note that with an only <em>partially</em> correct, non-terminating definition of <code>quicksort</code> (e.g. <code>quicksort := unsafeFix (fun quicksort xs => quicksort xs)</code>) I would not be able to conclude these proofs.</p>
<p>My (not undisputed) claim about the meaning of these theorems is therefore</p>
<blockquote>
<p>If the Haskell equations for <code>quicksort</code> actually have a fixed point, then the use of <code>unsafeFix</code> in its definition does not introduce any inconsistency. Under this assumption, we showed that <code>quicksort</code> always terminates and produces a sorted version of the input list.</p>
</blockquote>
<p>Do you agree?</p>Sat, 25 Nov 2017 15:54:57 -0500Isabelle functions: Always total, sometimes undefined
http://www.joachim-breitner.de/blog/732-Isabelle_functions__Always_total%2C_sometimes_undefined
http://www.joachim-breitner.de/blog/732-Isabelle_functions__Always_total%2C_sometimes_undefinedhttp://www.joachim-breitner.de/blog/732-Isabelle_functions__Always_total%2C_sometimes_undefined#commentsmail@joachim-breitner.de (Joachim Breitner)<p>Often, when I mention how things work in the interactive theorem prover Isabelle/HOL (in the following just “Isabelle”<a href="#fn1" class="footnoteRef" id="fnref1"><sup>1</sup></a>) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of <em>what is a function</em>, are functions <em>total</em>, and what is the business with <em>undefined</em>. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians.</p>
<p>Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.</p>
<h3 id="hol-is-a-logic-of-total-functions">HOL is a logic of total functions</h3>
<p>If I have an Isabelle function <code>f :: a ⇒ b</code> between two types <code>a</code> and <code>b</code> (the function arrow in Isabelle is <code>⇒</code>, not <code>→</code>), then – by definition of what it means to be a function in HOL – whenever I have a value <code>x :: a</code>, then the expression <code>f x</code> (i.e. <code>f</code> applied to <code>x</code>) <em>is</em> a value of type <code>b</code>. Therefore, and without exception, <em>every Isabelle function is total</em>.</p>
<p>In particular, it cannot be that <code>f x</code> does not exist for some <code>x :: a</code>. This is a first difference from Haskell, which does have partial functions like</p>
<div class="sourceCode"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="ot">spin ::</span> <span class="dt">Maybe</span> <span class="dt">Integer</span> <span class="ot">-></span> <span class="dt">Bool</span>
spin (<span class="dt">Just</span> n) <span class="fu">=</span> spin (<span class="dt">Just</span> (n<span class="fu">+</span><span class="dv">1</span>))</code></pre></div>
<p>Here, neither the expression <code>spin Nothing</code> nor the expression <code>spin (Just 42)</code> produce a value of type <code>Bool</code>: The former raises an exception (“incomplete pattern match”), the latter does not terminate. Confusingly, though, both expressions have type <code>Bool</code>.</p>
<p>Because every function is total, this confusion cannot arise in Isabelle: If an expression <code>e</code> has type <code>t</code>, then it <em>is</em> a value of type <code>t</code>. This trait is shared with other total systems, including Coq.</p>
<p>Did you notice the emphasis I put on the word “is” here, and how I deliberately did not write “evaluates to” or “returns”? This is because of another big source for confusion:</p>
<h3 id="isabelle-functions-do-not-compute">Isabelle functions do not compute</h3>
<p>We (i.e., functional programmers) stole the word “function” from mathematics and repurposed it<a href="#fn2" class="footnoteRef" id="fnref2"><sup>2</sup></a>. But the word “function”, in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind.</p>
<p>What is the difference?</p>
<ul>
<li>A function <code>a → b</code> in functional programming is an algorithm that, given a value of type <code>a</code>, calculates (returns, evaluates to) a value of type <code>b</code>.</li>
<li>A function <code>a ⇒ b</code> in math (or Isabelle) associates with each value of type <code>a</code> a value of type <code>b</code>.</li>
</ul>
<p>For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:</p>
<pre class="isabelle"><code>definition foo :: "(nat ⇒ real) ⇒ real" where
"foo seq = (if convergent seq then lim seq else 0)"</code></pre>
<p>This assigns a real number to every sequence, but it does not <em>compute</em> it in any useful sense.</p>
<p>From this it follows that</p>
<h3 id="isabelle-functions-are-specified-not-defined">Isabelle functions are specified, not defined</h3>
<p>Consider this function definition:</p>
<pre class="isabelle"><code>fun plus :: "nat ⇒ nat ⇒ nat" where
"plus 0 m = m"
| "plus (Suc n) m = Suc (plus n m)"</code></pre>
<p>To a functional programmer, this reads</p>
<blockquote>
<p><code>plus</code> is a function that analyses its first argument. If that is <code>0</code>, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one.</p>
</blockquote>
<p>which is clearly a description of a computation.</p>
<p>But to Isabelle, the above reads</p>
<blockquote>
<p><code>plus</code> is a binary function on natural numbers, and it satisfies the following two equations: …</p>
</blockquote>
<p>And in fact, it is not so much Isabelle that reads it this way, but rather the <code>fun</code> command, which is external to the Isabelle logic. The <code>fun</code> command analyses the given equations, constructs a non-recursive definition of <code>plus</code> under the hood, passes <em>that</em> to Isabelle and then proves that the given equations hold for <code>plus</code>.</p>
<p>One interesting consequence of this is that different specifications can lead to the same function. In fact, if we were to define <code>plus'</code> by recursing on the second argument, we’d obtain the same function (i.e. <code>plus = plus'</code> is a theorem, and there would be no way of telling the two apart).</p>
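<p>For illustration, such a hypothetical <code>plus'</code> (my own sketch, recursing on the second argument) could look like this:</p>
<pre class="isabelle"><code>fun plus' :: "nat ⇒ nat ⇒ nat" where
"plus' n 0 = n"
| "plus' n (Suc m) = Suc (plus' n m)"

(* plus = plus' should then be provable by induction and extensionality *)</code></pre>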
<h3 id="termination-is-a-property-of-specifications-not-functions">Termination is a property of specifications, not functions</h3>
<p>Because a function does not evaluate, it does not make sense to ask if it terminates. The question of termination arises <em>before</em> the function is defined: The <code>fun</code> command can only construct <code>plus</code> in a way that the equations hold if it passes a termination check – very much like <code>Fixpoint</code> in Coq.</p>
<p>But while the termination check of <code>Fixpoint</code> in Coq is a deep part of the basic logic, in Isabelle it is simply something that this particular command requires for its internal machinery to go through. At no point does a “termination proof of the function” exist as a theorem inside the logic. And other commands may have other means of defining a function that do not even require such a termination argument!</p>
<p>For example, a function specification that is tail-recursive can be turned into a function, even without a termination proof: The following definition describes a higher-order function that iterates its first argument <code>f</code> on the second argument <code>x</code> until it finds a fixpoint. It is completely polymorphic (the single quote in <code>'a</code> indicates that this is a type variable):</p>
<pre class="isabelle"><code>partial_function (tailrec)
fixpoint :: "('a ⇒ 'a) ⇒ 'a ⇒ 'a"
where
"fixpoint f x = (if f x = x then x else fixpoint f (f x))"</code></pre>
<p>We can work with this definition just fine. For example, if we instantiate <code>f</code> with <code>(λx. x-1)</code>, we can prove that it will always return 0:</p>
<pre class="isabelle"><code>lemma "fixpoint (λ n . n - 1) (n::nat) = 0"
by (induction n) (auto simp add: fixpoint.simps)</code></pre>
<p>Similarly, if we have a function that works within the option monad (i.e. <code>Maybe</code> in Haskell), its specification can always be turned into a function without an explicit termination proof – here one that calculates the Collatz sequence:</p>
<pre class="isabelle"><code>partial_function (option) collatz :: "nat ⇒ nat list option"
where "collatz n =
(if n = 1 then Some [n]
else if even n
then do { ns <- collatz (n div 2); Some (n # ns) }
else do { ns <- collatz (3 * n + 1); Some (n # ns)})"</code></pre>
<p>Note that lists in Isabelle are finite (like in Coq, unlike in Haskell), so this function “returns” a list only if the Collatz sequence eventually reaches 1.</p>
<p>I expect these definitions to make a Coq user very uneasy. How can <code>fixpoint</code> be a total function? What is <code>fixpoint (λn. n+1)</code>? What if we run <code>collatz n</code> for an <code>n</code> where the <a href="https://en.wikipedia.org/wiki/Collatz_conjecture">Collatz sequence</a> does <em>not</em> reach 1?<a href="#fn3" class="footnoteRef" id="fnref3"><sup>3</sup></a> We will come back to that question after a little detour…</p>
<h3 id="hol-is-a-logic-of-non-empty-types">HOL is a logic of non-empty types</h3>
<p>Another big difference between Isabelle and Coq is that in Isabelle, <em>every type is inhabited</em>. Just like the totality of functions, this is a very fundamental fact about what HOL defines to be a type.</p>
<p>Isabelle gets away with that design because in Isabelle, we do not use types for propositions (like we do in Coq), so we do not need empty types to denote false propositions.</p>
<p>This design has an important consequence: It allows the existence of a polymorphic expression that inhabits any type, namely</p>
<pre class="isabelle"><code>undefined :: 'a</code></pre>
<p>The naming of this term alone has caused a great deal of confusion for Isabelle beginners, or in communication with users of different systems, so I implore you to not read too much into the name. In fact, you will have a better time if you think of it as <code>arbitrary</code> or, even better, <code>unknown</code>.</p>
<p>Since <code>undefined</code> can be instantiated at any type, we can instantiate it for example at <code>bool</code>, and we can observe an important fact: <code>undefined</code> is not an <em>extra</em> value besides the “usual ones”. It is simply <em>some</em> value of that type, which is demonstrated in the following lemma:</p>
<pre class="isabelle"><code>lemma "undefined = True ∨ undefined = False" by auto</code></pre>
<p>In fact, if the type has only one value (such as the unit type), then we know the value of <code>undefined</code> for sure:</p>
<pre class="isabelle"><code>lemma "undefined = ()" by auto</code></pre>
<p>It is very handy to be able to produce an expression of any type, as we will see next.</p>
<h3 id="partial-functions-are-just-underspecified-functions">Partial functions are just underspecified functions</h3>
<p>For example, it allows us to translate incomplete function specifications. Consider this definition, Isabelle’s equivalent of Haskell’s partial <code>fromJust</code> function:</p>
<pre class="isabelle"><code>fun fromSome :: "'a option ⇒ 'a" where
"fromSome (Some x) = x"</code></pre>
<p>This definition is accepted by <code>fun</code> (albeit with a warning), and the generated function <code>fromSome</code> behaves exactly as specified: when applied to <code>Some x</code>, it is <code>x</code>. The term <code>fromSome None</code> is also a value of type <code>'a</code>, we just do not know which one it is, as the specification does not address that.</p>
<p>So <code>fromSome None</code> behaves just like <code>undefined</code> above, i.e. we can prove</p>
<pre class="isabelle"><code>lemma "fromSome None = False ∨ fromSome None = True" by auto</code></pre>
<p>Here is a small exercise for you: Can you come up with an explanation for the following lemma:</p>
<pre class="isabelle"><code>fun constOrId :: "bool ⇒ bool" where
"constOrId True = True"
lemma "constOrId = (λ_.True) ∨ constOrId = (λx. x)"
by (metis (full_types) constOrId.simps)</code></pre>
<p>Overall, this behavior makes sense if we remember that function “definitions” in Isabelle are not really definitions, but rather specifications. And a partial function “definition” is simply an underspecification. The resulting function is simply any function that fulfills the specification, and the two lemmas above underline that observation.</p>
<h3 id="nonterminating-functions-are-also-just-underspecified">Nonterminating functions are also just underspecified</h3>
<p>Let us return to the puzzle posed by <code>fixpoint</code> above. Clearly, the function – seen as a functional program – is not total: When passed the argument <code>(λn. n + 1)</code> or <code>(λb. ¬b)</code> it will loop forever trying to find a fixed point.</p>
<p>But Isabelle functions are not functional programs, and the definitions are just specifications. What does the specification say about the case when <code>f</code> has no fixed-point? It states that the equation <code>fixpoint f x = fixpoint f (f x)</code> holds. And this equation has a solution, for example <code>fixpoint f _ = undefined</code>.</p>
<p>Or more concretely: The specification of the <code>fixpoint</code> function states that <code>fixpoint (λb. ¬b) True = fixpoint (λb. ¬b) False</code> has to hold, but it does not specify which particular value (<code>True</code> or <code>False</code>) it should denote – any is fine.</p>
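<p>Concretely, this consequence can be proved in Isabelle. The following is my own sketch of such a proof, and the exact proof method may need tweaking:</p>
<pre class="isabelle"><code>lemma "fixpoint (λb. ¬ b) True = fixpoint (λb. ¬ b) False"
  by (subst fixpoint.simps) simp</code></pre>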
<h3 id="not-all-function-specifications-are-ok">Not all function specifications are ok</h3>
<p>At this point you might wonder: Can I just specify any equations for a function <code>f</code> and get a function out of that? But rest assured: That is not the case. For example, no Isabelle command allows you to define a function <code>bogus :: () ⇒ nat</code> with the equation <code>bogus () = Suc (bogus ())</code>, because this equation does not have a solution.</p>
<p>We can actually prove that such a function cannot exist:</p>
<pre class="isabelle"><code>lemma no_bogus: "∄ bogus. bogus () = Suc (bogus ())" by simp</code></pre>
<p>(Of course, a function <code>not_bogus</code> specified by <code>not_bogus () = not_bogus ()</code> is just fine…)</p>
<h3 id="you-cannot-reason-about-partiality-in-isabelle">You cannot reason about partiality in Isabelle</h3>
<p>We have seen that there are many ways to define functions that one might consider “partial”. Given a function, can we prove that it is not “partial” in that sense?</p>
<p>Unfortunately, but unavoidably, no: Since <code>undefined</code> is not a separate, recognizable value, but rather simply an unknown one, there is no way of stating that “A function result is not specified”.</p>
<p>Here is an example that demonstrates this: Two “partial” functions (one with not all cases specified, the other one with a self-referential specification) are indistinguishable from the total variant:</p>
<pre class="isabelle"><code>fun partial1 :: "bool ⇒ unit" where
"partial1 True = ()"
partial_function (tailrec) partial2 :: "bool ⇒ unit" where
"partial2 b = partial2 b"
fun total :: "bool ⇒ unit" where
"total True = ()"
| "total False = ()"
lemma "partial1 = total ∧ partial2 = total" by auto</code></pre>
<p>If you really <em>do</em> want to reason about partiality of functional programs in Isabelle, you should consider implementing them not as plain HOL functions, but rather use <a href="http://isabelle.in.tum.de/dist/library/HOL/HOLCF/README.html">HOLCF</a>, where you can give equational specifications of functional programs and obtain <em>continuous functions</em> between <em>domains</em>. In that setting, <code>⊥ ≠ ()</code> and <code>partial2 = ⊥ ≠ total</code>. We have done that <a href="https://arxiv.org/abs/1306.1340">to verify some of HLint’s equations</a>.</p>
<h3 id="you-can-still-compute-with-isabelle-functions">You can still compute with Isabelle functions</h3>
<p>I hope by this point, I have not scared away anyone who wants to use Isabelle for functional programming, and in fact, you can use it for that. If the equations that you pass to <code>fun</code> are a reasonable definition for a function (in the programming sense), then these equations, used as rewriting rules, will allow you to “compute” that function quite like you would in Coq or Haskell.</p>
<p>Moreover, Isabelle supports code extraction: You can take the equations of your Isabelle functions and have them exported into OCaml, Haskell, Scala or Standard ML. See <a href="http://www21.in.tum.de/~popescua/rs3/CoCon.html">CoCon</a> for a conference management system with confidentiality verified in Isabelle.</p>
<p>While these usually are the equations you defined the function with, they don't have to be: You can declare other proved equations to be used for code extraction, e.g. to refine your elegant definitions into performant ones.</p>
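<p>As a sketch of this mechanism (my own example, not from this post: the classic accumulator-based list reversal), one can prove an alternative equation and mark it with the <code>[code]</code> attribute:</p>
<pre class="isabelle"><code>fun itrev :: "'a list ⇒ 'a list ⇒ 'a list" where
"itrev [] ys = ys"
| "itrev (x # xs) ys = itrev xs (x # ys)"

lemma itrev_rev: "itrev xs ys = rev xs @ ys"
  by (induction xs arbitrary: ys) auto

(* code extraction now implements rev by the tail-recursive itrev *)
lemma rev_code [code]: "rev xs = itrev xs []"
  by (simp add: itrev_rev)</code></pre>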
<p>Like with code extraction from Coq to, say, Haskell, the adequacy of the translations rests on a <a href="http://www.cse.chalmers.se/~nad/publications/danielsson-et-al-popl2006.html">“moral reasoning” foundation</a>. Unlike extraction from Coq, where you have an (unformalized) guarantee that the resulting Haskell code is terminating, you do not get that guarantee from Isabelle. Conversely, this allows you to reason about and extract non-terminating programs, like <code>fixpoint</code>, which is not possible in Coq.</p>
<p>There is <a href="https://www21.in.tum.de/~hupel/pub/isabelle-cakeml.pdf">currently ongoing work</a> on verified code generation, where the code equations are reflected into a deep embedding of HOL in Isabelle that would allow explicit termination proofs.</p>
<h3 id="conclusion">Conclusion</h3>
<p>We have seen how in Isabelle, <em>every function is total</em>. Function declarations have equations, but these do not <em>define</em> the function in a computational sense, but rather <em>specify</em> it. Because in HOL, there are no empty types, many specifications that appear partial (incomplete patterns, non-terminating recursion) have solutions in the space of total functions. Partiality in the specification is no longer visible in the final product.</p>
<h3 id="ps-axiom-undefined-in-coq">PS: Axiom <code>undefined</code> in Coq</h3>
<p><em>This section is speculative, and an invitation for discussion.</em></p>
<p>Coq already distinguishes between types used in programs (<code>Set</code>) and types used in proofs (<code>Prop</code>).</p>
<p>Could Coq ensure that every <code>t : Set</code> is non-empty? I imagine this would require additional checks in the <code>Inductive</code> command, similar to the checks that the Isabelle command <code>datatype</code> has to perform<a href="#fn4" class="footnoteRef" id="fnref4"><sup>4</sup></a>, and it would disallow <a href="https://coq.inria.fr/library/Coq.Init.Datatypes.html"><code>Empty_set</code></a>.</p>
<p>If so, then it would be sound to add the following axiom</p>
<pre class="coq"><code>Axiom undefined : forall (a : Set), a.</code></pre>
<p>wouldn't it? This axiom does not have any computational meaning, but that seems to be ok for optional Coq axioms, like classical reasoning or function extensionality.</p>
<p>With this in place, how much of what I describe above about function definitions in Isabelle could now be done soundly in Coq? Certainly pattern matches would not have to be complete and could sport an implicit case <code>_ ⇒ undefined</code>. Would it “help” with non-obviously terminating functions? Would it allow a Coq command <code>Tailrecursive</code> that accepts any tailrecursive function without a termination check?</p>
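<p>For instance (a hypothetical sketch of mine, assuming the axiom above), the Coq analogue of the partial <code>fromSome</code> function from earlier could then be written with an explicit catch-all:</p>
<pre class="coq"><code>Definition fromSome (a : Set) (o : option a) : a :=
  match o with
  | Some x => x
  | None => undefined a  (* the unspecified case *)
  end.</code></pre>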
<div class="footnotes">
<hr/>
<ol>
<li id="fn1"><p>Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.<a href="#fnref1">↩</a></p></li>
<li id="fn2"><p>Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.<a href="#fnref2">↩</a></p></li>
<li id="fn3"><p>Let me know if you find such an <span class="math inline"><em>n</em></span>. Besides <span class="math inline"><em>n</em> = 0</span>.<a href="#fnref3">↩</a></p></li>
<li id="fn4"><p>Like <code>fun</code>, the constructions by <code>datatype</code> are not part of the logic, but create a type definition from more primitive notions that is isomorphic to the specified data type.<a href="#fnref4">↩</a></p></li>
</ol>
</div>Thu, 12 Oct 2017 13:54:20 -0400e.g. in TeX
http://www.joachim-breitner.de/blog/731-e_g__in_TeX
http://www.joachim-breitner.de/blog/731-e_g__in_TeXhttp://www.joachim-breitner.de/blog/731-e_g__in_TeX#commentsmail@joachim-breitner.de (Joachim Breitner)<p>When I learned TeX, I was told to not write <code>e.g. something</code>, because TeX would think the period after the “g” ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write <code>e.g.\␣</code>.</p>
<p>Years later, I learned from a convincing, but since forgotten source, that in fact <code>e.g.\@</code> is the proper thing to write. I vaguely remember that <code>e.g.\␣</code> supposedly affected the inter-word space in some unwanted way. So I did that for many years.</p>
<p>Until I was recently called out for doing it wrong: in fact, <code>e.g.\␣</code> is supposedly the proper way. This was supported by <a href="https://tex.stackexchange.com/a/22563/15107">a StackExchange answer</a> written by a LaTeX authority and backed by a reference to documentation. The same question has, however, <a href="https://tex.stackexchange.com/a/22564/15107">another answer</a> by another TeX authority, backed by an analysis of the implementation, which concludes that <code>e.g.\@</code> is proper.</p>
<p>What now? I guess I just have to find it out myself.</p>
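<p>A minimal test document (my own reconstruction of the experiment) suffices for this:</p>
<pre><code>\documentclass{article}
\begin{document}
Broken:      compare, e.g. this spacing.  % period treated as sentence end
Variant one: compare, e.g.\ this spacing. % explicit inter-word space
Variant two: compare, e.g.\@ this spacing.% \@ resets the space factor
\end{document}</code></pre>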
<div class="figure">
<img src="//www.joachim-breitner.de/various/tex-eg/tex-eg-at-1.gif" alt="The problem and two solutions"/>
<p class="caption">The problem and two solutions</p>
</div>
<p>The above image shows three variants: The obviously broken version with <code>e.g.</code>, and the two contesting variants to fix it. Looks like they yield equal results!</p>
<p>So maybe the difference lies in how <code>\@</code> and <code>\␣</code> react when the line length changes, and the word wrapping requires differences in the inter-word spacing. Will there be differences? Let’s see:</p>
<div class="figure">
<img src="//www.joachim-breitner.de/various/tex-eg/tex-eg-at-2.gif" alt="Expanding whitespace, take 1"/>
<p class="caption">Expanding whitespace, take 1</p>
</div>
<div class="figure">
<img src="//www.joachim-breitner.de/various/tex-eg/tex-eg-at-3.gif" alt="Expanding whitespace, take 2"/>
<p class="caption">Expanding whitespace, take 2</p>
</div>
<p>I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?</p>
<div class="figure">
<img src="//www.joachim-breitner.de/various/tex-eg/tex-eg-at-4.gif" alt="Expanding whitespace, take 3"/>
<p class="caption">Expanding whitespace, take 3</p>
</div>
<div class="figure">
<img src="//www.joachim-breitner.de/various/tex-eg/tex-eg-at-5.gif" alt="Expanding whitespace, take 4"/>
<p class="caption">Expanding whitespace, take 4</p>
</div>
<p>Again, I see the same behaviour.</p>
<p><strong>Conclusion</strong>: It does not matter, but <code>e.g.\␣</code> is less hassle when using <a href="http://hackage.haskell.org/package/lhs2tex">lhs2tex</a> than <code>e.g.\@</code> (which has to be escaped as <code>e.g.\@@</code>), so the winner is <code>e.g.\␣</code>!</p>
<p>(Unless you put it in a macro, <a href="https://tex.stackexchange.com/a/116530/15107">then <code>\@</code> might be preferable</a>, and it is still <a href="https://tex.stackexchange.com/a/55112/15107">needed between a capital letter and a sentence period</a>.)</p>Sun, 08 Oct 2017 15:08:13 -0400Less parentheses
http://www.joachim-breitner.de/blog/730-Less_parentheses
http://www.joachim-breitner.de/blog/730-Less_parentheseshttp://www.joachim-breitner.de/blog/730-Less_parentheses#commentsmail@joachim-breitner.de (Joachim Breitner)<p>Yesterday, at the <a href="http://conf.researchr.org/track/hiw-2017/hiw-2017">Haskell Implementers Workshop 2017</a> in Oxford, I gave a lightning talk titled ”syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell.</p>
<p>The talk caused quite some heated discussion, and since the Internet likes heated discussion, I will happily share these ideas with you.</p>
<h3 id="context-aka.-sections">Context aka. Sections</h3>
<p>This is probably the most relevant of the three proposals. Consider a bunch of related functions, say <code>analyseExpr</code> and <code>analyseAlt</code>, like these:</p>
<div class="sourceCode"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="ot">analyseExpr ::</span> <span class="dt">Expr</span> <span class="ot">-></span> <span class="dt">Expr</span>
analyseExpr (<span class="dt">Var</span> v) <span class="fu">=</span> change v
analyseExpr (<span class="dt">App</span> e1 e2) <span class="fu">=</span>
<span class="dt">App</span> (analyseExpr e1) (analyseExpr e2)
analyseExpr (<span class="dt">Lam</span> v e) <span class="fu">=</span> <span class="dt">Lam</span> v (analyseExpr e)
analyseExpr (<span class="dt">Case</span> scrut alts) <span class="fu">=</span>
<span class="dt">Case</span> (analyseExpr scrut) (analyseAlt <span class="fu"><$></span> alts)
<span class="ot">analyseAlt ::</span> <span class="dt">Alt</span> <span class="ot">-></span> <span class="dt">Alt</span>
analyseAlt (dc, pats, e) <span class="fu">=</span> (dc, pats, analyseExpr e)</code></pre></div>
<p>You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the <code>Var</code> case. You thus add a parameter to all these functions, and hence an argument to every call:</p>
<div class="sourceCode"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">type</span> <span class="dt">Flag</span> <span class="fu">=</span> <span class="dt">Bool</span>
<span class="ot">analyseExpr ::</span> <span class="dt">Flag</span> <span class="ot">-></span> <span class="dt">Expr</span> <span class="ot">-></span> <span class="dt">Expr</span>
analyseExpr flag (<span class="dt">Var</span> v) <span class="fu">=</span> <span class="kw">if</span> flag <span class="kw">then</span> change1 v <span class="kw">else</span> change2 v
analyseExpr flag (<span class="dt">App</span> e1 e2) <span class="fu">=</span>
  <span class="dt">App</span> (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (<span class="dt">Lam</span> v e) <span class="fu">=</span> <span class="dt">Lam</span> v (analyseExpr (not flag) e)
analyseExpr flag (<span class="dt">Case</span> scrut alts) <span class="fu">=</span>
  <span class="dt">Case</span> (analyseExpr flag scrut) (analyseAlt flag <span class="fu"><$></span> alts)
<span class="ot">analyseAlt ::</span> <span class="dt">Flag</span> <span class="ot">-></span> <span class="dt">Alt</span> <span class="ot">-></span> <span class="dt">Alt</span>
analyseAlt flag (dc, pats, e) <span class="fu">=</span> (dc, pats, analyseExpr flag e)</code></pre></div>
<p>I find this code problematic. The intention was: “<code>flag</code> is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, <code>flag</code> should be considered constant.”</p>
<p>But this intention is neither easily visible nor enforced. And in fact, in the above code, <code>flag</code> does “change”, as <code>analyseExpr</code> passes something else in the <code>Lam</code> case. The idiom is indistinguishable from the <em>environment idiom</em>, where a <em>locally changing</em> environment (such as “variables in scope”) is passed around.</p>
<p>So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!</p>
<p>The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:</p>
<div class="sourceCode"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">type</span> <span class="dt">Flag</span> <span class="fu">=</span> <span class="dt">Bool</span>
section (<span class="ot">flag ::</span> <span class="dt">Flag</span>) <span class="kw">where</span>
<span class="ot">  analyseExpr ::</span> <span class="dt">Expr</span> <span class="ot">-></span> <span class="dt">Expr</span>
  analyseExpr (<span class="dt">Var</span> v) <span class="fu">=</span> <span class="kw">if</span> flag <span class="kw">then</span> change1 v <span class="kw">else</span> change2 v
  analyseExpr (<span class="dt">App</span> e1 e2) <span class="fu">=</span>
    <span class="dt">App</span> (analyseExpr e1) (analyseExpr e2)
  analyseExpr (<span class="dt">Lam</span> v e) <span class="fu">=</span> <span class="dt">Lam</span> v (analyseExpr e)
  analyseExpr (<span class="dt">Case</span> scrut alts) <span class="fu">=</span>
    <span class="dt">Case</span> (analyseExpr scrut) (analyseAlt <span class="fu"><$></span> alts)
<span class="ot">  analyseAlt ::</span> <span class="dt">Alt</span> <span class="ot">-></span> <span class="dt">Alt</span>
  analyseAlt (dc, pats, e) <span class="fu">=</span> (dc, pats, analyseExpr e)</code></pre></div>
<p>Now the intention is clear: Within a clearly marked block, <code>flag</code> is fixed and when reasoning about this code I do not have to worry that it might change. Either <em>all</em> variables will be passed to <code>change1</code>, or <em>all</em> to <code>change2</code>. An important distinction!</p>
<p>Therefore, inside the <code>section</code>, the type of <code>analyseExpr</code> does not mention <code>Flag</code>, whereas outside its type is <code>Flag -> Expr -> Expr</code>. This is a bit unusual, but not completely: You see precisely the same effect in a <code>class</code> declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.</p>
<p>Note that idioms like implicit parameters or the <code>Reader</code> monad do not give the guarantee that the parameter is (locally) constant.</p>
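<p>To make this concrete, here is a hand-rolled sketch of my own (not from the talk; the real <code>Reader</code> lives in the <code>mtl</code> package) of why a reader monad gives no such guarantee: any caller can use <code>local</code> to change the supposedly constant environment for a whole subcomputation.</p>

```haskell
-- A minimal Reader, hand-rolled to keep the sketch self-contained.
newtype Reader r a = Reader { runReader :: r -> a }

ask :: Reader r r
ask = Reader id

-- `local` runs a computation in a *modified* environment ...
local :: (r -> r) -> Reader r a -> Reader r a
local f (Reader g) = Reader (g . f)

-- ... so nothing stops `flag` from "changing" mid-traversal:
useFlag :: Reader Bool String
useFlag = Reader (\flag -> if flag then "change1" else "change2")

sameFlag, flippedFlag :: String
sameFlag    = runReader useFlag True             -- "change1"
flippedFlag = runReader (local not useFlag) True -- "change2"
```

The type of the inner computation is the same either way; only a <code>section</code>-like construct would make the constancy a static guarantee.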
<p>More details can be found in the <a href="https://github.com/ghc-proposals/ghc-proposals/pull/40">GHC proposal</a> that I prepared, and I invite you to raise concern or voice support there.</p>
<p>Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template Haskell based implementation of this idea in the <a href="http://hackage.haskell.org/package/seal-module"><code>seal-module</code></a> package!</p>
<h3 id="less-parentheses-1-bulleted-argument-lists">Less parentheses 1: Bulleted argument lists</h3>
<p>The next two proposals are both about removing parentheses. I believe that Haskell’s tendency to express complex code with no or few parentheses is one of its big strengths, as it makes it easier to visually parse programs. A common idiom is to use the <code>$</code> operator to separate a function from a complex argument without parentheses, but it does not help when there are multiple complex arguments.</p>
<p>For that case I propose to steal an idea from the surprisingly successful markup language markdown, and use bulleted lists to indicate multiple arguments:</p>
<div class="sourceCode"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="ot">foo ::</span> <span class="dt">Baz</span>
foo <span class="fu">=</span> bracket
  • some complicated code
      that is evaluated first
  • other complicated code for later
  • even more complicated code</code></pre></div>
<p>I find this very easy to visually parse and navigate.</p>
<p>It is actually possible to do this now, if one defines <code>(•) = id</code> with <code>infixl 0 •</code>. But a dedicated syntax extension (<code>-XArgumentBullets</code>) would be preferable:</p>
<ul>
<li>It only really adds readability if the bullets are nicely vertically aligned, which the compiler should enforce.</li>
<li>I would like to use <code>$</code> inside these complex arguments, and multiple operators of precedence 0 do not mix. (<code>infixl -1 •</code> would help).</li>
<li>It should be possible to nest these, and distinguish different nesting levels based on their indentation.</li>
</ul>
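<p>For the curious, here is that encoding spelled out as it works today (a sketch; <code>bracket3</code> is a hypothetical stand-in for some three-argument function):</p>

```haskell
-- (•) is just left-associated application at the lowest standard
-- precedence, so each bullet delimits one argument.
infixl 0 •
(•) :: (a -> b) -> a -> b
(•) = id

-- hypothetical stand-in for a function of three arguments
bracket3 :: Int -> Int -> Int -> (Int, Int, Int)
bracket3 x y z = (x, y, z)

foo :: (Int, Int, Int)
foo = bracket3
  • 1 + 2
  • 3 * 4
  • 5
-- foo == (3, 12, 5): each bulleted expression becomes one argument
```

Because <code>•</code> binds more loosely than <code>+</code> and <code>*</code>, each bullet cleanly delimits a whole argument expression.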
<h3 id="less-parentheses-1-whitespace-precedence">Less parentheses 2: Whitespace precedence</h3>
<p>The final proposal is the most daring. I am convinced that it improves readability and should be considered when creating a new language. As for Haskell, I am at the moment not proposing this as a language extension (but could be convinced to do so if there is enough positive feedback).</p>
<p>Consider this definition of append:</p>
<div class="sourceCode"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="ot">(++) ::</span> [a] <span class="ot">-></span> [a] <span class="ot">-></span> [a]
[] <span class="fu">++</span> ys <span class="fu">=</span> ys
(x<span class="fu">:</span>xs) <span class="fu">++</span> ys <span class="fu">=</span> x <span class="fu">:</span> (xs<span class="fu">++</span>ys)</code></pre></div>
<p>Imagine you were explaining the last line to someone aloud. How would you speak it? One common way is to not read the parentheses out, but rather to speak the parenthesised expressions more quickly and add pauses elsewhere.</p>
<p>We can do the same in syntax!</p>
<div class="sourceCode"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="ot">(++) ::</span> [a] <span class="ot">-></span> [a] <span class="ot">-></span> [a]
[] <span class="fu">++</span> ys <span class="fu">=</span> ys
x<span class="fu">:</span>xs <span class="fu">++</span> ys <span class="fu">=</span> x <span class="fu">:</span> xs<span class="fu">++</span>ys</code></pre></div>
<p>The rule is simple: <em>A sequence of tokens without any space is implicitly parenthesised.</em></p>
<p>The reaction I got in Oxford was horror and disgust. And that is understandable – we are very used to ignoring spacing when parsing expressions (unless it is indentation, of course; then we are no longer horrified, unlike our non-Haskell colleagues when they see our code).</p>
<p>But I am convinced that once you let the rule sink in, you will have no problem parsing such code with ease, and soon even with greater ease than the parenthesised version. It is a very natural thing to look at the general structure, identify “compact chunks of characters”, mentally group them, and then go and separately parse the internals of the chunks and how the chunks relate to each other. More natural than first scanning everything for <code>(</code> and <code>)</code>, matching them up, building a mental tree, and then digging deeper.</p>
<p>Incidentally, there was a non-programmer present during my presentation, and while she did not openly contradict the dismissive groan of the audience, I later learned that she found this variant quite obvious to understand and easier to read than the parenthesised code.</p>
<p>Some FAQs about this:</p>
<ul>
<li><em>What about an operator with space on one side but not on the other?</em> I’d simply forbid that, and hence enforce readable code.</li>
<li><em>Do operator sections still require parentheses?</em> Yes, I’d say so.</li>
<li><em>Does this overrule operator precedence?</em> Yes! <code>a * b+c == a * (b+c)</code>.</li>
<li><em>What is a token?</em> Good question, and I am not yet decided. In particular: Is a parenthesised expression a single token? If so, then <code>(Succ a)+b * c</code> parses as <code>((Succ a)+b) * c</code>, otherwise it should probably simply be illegal.</li>
<li><em>Can we extend this so that one space binds tighter than two spaces, and so on?</em> Yes we can, but really, we should not.</li>
<li><em>This is incompatible with Agda’s syntax!</em> Indeed it is, and I really like Agda’s mixfix syntax. Can’t have everything.</li>
<li><em>Has this been done before?</em> I have not seen it in any language, but <a href="http://wall.org/~lewis/2013/10/25/whitespace-precedence.html">Lewis Wall</a> has blogged this idea before.</li>
</ul>
<p>Well, let me know what you think!</p>Sun, 10 Sep 2017 11:10:16 +0100Compose Conference talk video online
http://www.joachim-breitner.de/blog/729-Compose_Conference_talk_video_online
http://www.joachim-breitner.de/blog/729-Compose_Conference_talk_video_onlinehttp://www.joachim-breitner.de/blog/729-Compose_Conference_talk_video_online#commentsmail@joachim-breitner.de (Joachim Breitner)<p>Three months ago, I gave a talk at the <a href="http://www.composeconference.org/2017/program/">Compose::Conference</a> in New York about how Chris Smith and I added the ability to create networked multi-user programs to the educational Haskell programming environment <a href="https://code.world/">CodeWorld</a>, and finally the recording of the talk is available <a href="https://www.youtube.com/watch?v=2kKvVe673MA">on YouTube</a> (and is being discussed <a href="https://www.reddit.com/r/haskell/comments/6uam49/compose_conference_lock_step_simulation_is_childs/">on reddit</a>):</p>
<iframe src="https://www.youtube.com/embed/2kKvVe673MA?rel=0?ecver=2" width="640" height="360" frameborder="0" style="display: block; margin-left: auto; margin-right: auto">
</iframe>
<p>It was the talk where I got the most positive feedback afterwards, and I think this is partly due to how I created the presentation: Instead of showing static slides, I programmed the complete visual display from scratch as an “interaction” within the CodeWorld environment, including all transitions, a working embedded game of Pong and a simulated multi-player environment with adjustable message delays. I have put the <a href="https://github.com/nomeata/codeworld-talk">code for the presentation</a> online.</p>
<p>Chris and I have written about this for ICFP'17, and thanks to open access I can actually share <a href="https://www.joachim-breitner.de/publications/CodeWorld-ICFP17.pdf">the paper</a> freely with you under a CC license. If you come to Oxford you can see me perform a shorter version of this talk again.</p>Sun, 20 Aug 2017 20:50:10 +0200Communication Failure
http://www.joachim-breitner.de/blog/728-Communication_Failure
http://www.joachim-breitner.de/blog/728-Communication_Failurehttp://www.joachim-breitner.de/blog/728-Communication_Failure#commentsmail@joachim-breitner.de (Joachim Breitner)<p>I am still far from being a professor, but I recently got a glimpse of what awaits you in that role…</p>
<blockquote>
<p><strong>From:</strong> Sebastian R. <…<span class="citation">@gmail.com</span>><br/>
<strong>To:</strong> joachim@cis.upenn.edu<br/>
<strong>Subject:</strong> re: Errors</p>
<p>I've spotted a basic error in your course on Haskell (<a href="https://www.seas.upenn.edu/~cis194/fall16/" class="uri">https://www.seas.upenn.edu/~cis194/fall16/</a>). Before I proceed, it's cool if you're not receptive to errors being indicated; I've come across a number of professors who would rather take offense than admit we're all human and thus capable of making mistakes... My goal is to find a resource that might be useful well into the future, and a good indicator of that is how responsive the author is to change.</p>
<p>In your introduction note you have written:</p>
<blockquote>
<p>n contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.</p>
</blockquote>
<p>Howeverm there is no <code>input CodeWorld</code> in the code above. Have you been made aware of this error earlier?</p>
<p>Regards, ...</p>
</blockquote>
<p>Nice. I like it when people learn from my lectures. The introduction is a bit weird, but ok, maybe this guy had some bad experiences.</p>
<p>Strangely, I don’t see a mistake in the material, so I respond:</p>
<blockquote>
<p><strong>From:</strong> Joachim Breitner <a href="mailto:joachim@cis.upenn.edu">joachim@cis.upenn.edu</a><br/>
<strong>To:</strong> Sebastian R. <…<span class="citation">@gmail.com</span>><br/>
<strong>Subject:</strong> Re: Errors</p>
<p>Dear Sebastian,</p>
<p>thanks for pointing out errors. But the first piece of code under “Basic Haskell” starts with</p>
<pre><code>{-# LANGUAGE OverloadedStrings #-}
import CodeWorld</code></pre>
<p>so I am not sure what you are referring to.</p>
<p>Note that these are lecture notes, so you have to imagine a lecturer editing code live on stage along with it. If you only have the notes, you might have to infer a few things.</p>
<p>Regards, Joachim</p>
</blockquote>
<p>A while later, I receive this response:</p>
<blockquote>
<p><strong>From:</strong> Sebastian R. <…<span class="citation">@gmail.com</span>><br/>
<strong>To:</strong> Joachim Breitner <a href="mailto:joachim@cis.upenn.edu">joachim@cis.upenn.edu</a><br/>
<strong>Subject:</strong> Re: Errors</p>
<p>Greetings, Joachim.</p>
<p>Kindly open the lecture slides and search for "input CodeWorld" to find the error; it is not in the code, but in the paragraph that implicitly refers back to the code.</p>
<p>You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from <strong>your</strong> lectures; they're not my words!</p>
<blockquote>
<p>In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the <span style="color:red">input CodeWorld</span>) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.</p>
</blockquote>
<p>This time around, I've highlighted the issue. I hope that made it easier for you to spot...</p>
<p>Nonetheless, I got my answer. Don't reply if you're going to fight tooth and nail about such a basic fix; it's simply a waste of both of our time. I'd rather learn from somewhere else...</p>
<p>On Tue, Aug 1, 2017 at 11:19 PM, Joachim Breitner <a href="mailto:joachim@cis.upenn.edu">joachim@cis.upenn.edu</a> wrote:<br/>
…</p>
</blockquote>
<p>I am a bit reminded of Sean Spicer … “they’re not my words!” … but clearly I am missing something. And indeed I am: In the code snippet, I wrote – correctly – <code>import CodeWorld</code>, but in the text I had <code>input CodeWorld</code>. I probably did write LaTeX before writing the lecture notes. Well, glad to have that sorted out. I fixed the mistake and wrote back:</p>
<blockquote>
<p><strong>From:</strong> Joachim Breitner <a href="mailto:joachim@cis.upenn.edu">joachim@cis.upenn.edu</a><br/>
<strong>To:</strong> Sebastian R. <…<span class="citation">@gmail.com</span>><br/>
<strong>Betreff:</strong> Re: Errors</p>
<p>Dear Sebastian,</p>
<p>nobody is fighting, and I see the mistake now: The problem is not that the line is not in the code, the problem is that there is a typo in the line and I wrote “input” instead of “import”.</p>
<p>Thanks for the report, although you did turn it into quite a riddle… a simple “you wrote import when it should have been import” would have been a better user of both our time.</p>
<p>Regards, Joachim</p>
<p>Am Donnerstag, den 03.08.2017, 13:32 +1000 schrieb Sebastian R.:<br/>
…</p>
</blockquote>
<p>(And it seems I now made the inverse typo, writing “import” instead of “input”.) Anyway, I did not think of this any more until a few days later, when I found this nice message in my mailbox:</p>
<blockquote>
<p><strong>From:</strong> Sebastian R. <…<span class="citation">@gmail.com</span>><br/>
<strong>To:</strong> Joachim Breitner <a href="mailto:joachim@cis.upenn.edu">joachim@cis.upenn.edu</a><br/>
<strong>Subject:</strong> Re: Errors</p>
<blockquote>
<p>a simple “you wrote import when it should have been import” would have been a better user of both our time.</p>
</blockquote>
<p>We're both programmers. How about I cut ALL of the unnecessary garbage and just tell you to s/import/input/ on that last quotation (the thing immediately before this paragraph, in case you didn't know).</p>
<p>I blatantly quoted the error, like this:</p>
<blockquote>
<p>In your introduction note you have written:</p>
<blockquote>
<p>n contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.</p>
</blockquote>
<p>Howeverm there is no <code>input CodeWorld</code> in the code above.</p>
</blockquote>
<p>Since that apparently wasn't clear enough, in my second email to you I had to highlight it like so:</p>
<blockquote>
<p>You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from <strong>your</strong> lectures; they're not my words!</p>
<blockquote>
<p>In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the <span style="color:red">input CodeWorld</span>) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.</p>
</blockquote>
<p>This time around, I've highlighted the issue. I hope that made it easier for you to spot...</p>
</blockquote>
<p>I'm not sure if you're memeing at me or not now, but it seems either your reading comprehension, or your logical deduction skills might be substandard. Unfortunately, there isn't much either of us can do about that, so I'm happy to accept that some people will be so stupid; after all, it's to be expected and if we don't accept that which is to be expected then we live our lives in denial.</p>
<p>Happy to wrap up this discusson here, Seb...</p>
<p>On Fri, Aug 4, 2017 at 12:22 AM, Joachim Breitner <a href="mailto:joachim@cis.upenn.edu">joachim@cis.upenn.edu</a> wrote:<br/>
…</p>
</blockquote>
<p>Well, I chose to be amused by this, and I am sharing my amusement with you.</p>Sun, 06 Aug 2017 11:14:05 -0400How is coinduction the dual of induction?
http://www.joachim-breitner.de/blog/727-How_is_coinduction_the_dual_of_induction_
http://www.joachim-breitner.de/blog/727-How_is_coinduction_the_dual_of_induction_http://www.joachim-breitner.de/blog/727-How_is_coinduction_the_dual_of_induction_#commentsmail@joachim-breitner.de (Joachim Breitner)<p>Earlier today, I demonstrated how to work with <a href="http://www.joachim-breitner.de/blog/726-Coinduction_in_Coq_and_Isabelle">coinduction in the theorem provers Isabelle, Coq and Agda</a>, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then colleague Denis Lohner: If coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I’d like to share this.</p>
<p>The following is mostly based on my naive understanding of coinduction, based on what I observe in <a href="http://isabelle.in.tum.de/dist/Isabelle2016-1/doc/datatypes.pdf">the implementation in Isabelle</a>. I am sure that a different, more categorical presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.</p>
<h3 id="inductive-lists">Inductive lists</h3>
<p>I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:</p>
<pre><code>datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")</code></pre>
<p>with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):</p>
<pre><code>P nil → (∀x xs. P xs → P (cons x xs)) → ∀ xs. P xs</code></pre>
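<p>As an aside (my observation, not part of the Isabelle output): if you forget the dependency of <code>P</code> on the list and replace it by a plain result type, this principle becomes the ordinary recursion scheme for lists, which can be sketched in Haskell:</p>

```haskell
-- The non-dependent shadow of list induction: a foldr variant that
-- also hands the tail to the cons case.
listRec :: b -> (a -> [a] -> b -> b) -> [a] -> b
listRec nilCase _        []       = nilCase
listRec nilCase consCase (x : xs) =
  consCase x xs (listRec nilCase consCase xs)

-- e.g. length, which ignores the extra arguments:
len :: [a] -> Int
len = listRec 0 (\_ _ n -> n + 1)
```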
<h3 id="coinductive-lists">Coinductive lists</h3>
<p>In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing</p>
<pre><code>codatatype 'a llist = lnil | lcons (hd : 'a) (tl : "'a llist")</code></pre>
<p>we get the following coinduction principle:</p>
<pre><code>(∀ xs ys.
    R xs ys → (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
               hd xs = hd ys ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → xs = ys)</code></pre>
<p>This is less scary than it looks at first. It tells you: “if you give me a relation <code>R</code> between lists which implies that either both lists are empty or both lists are non-empty, and furthermore, if both are non-empty, that they have the same head and tails related by <code>R</code>, then any two lists related by <code>R</code> are actually equal.”</p>
<p>If you think of the infinite list as a series of states of a computer program, then this is nothing other than a <em>bisimulation</em>.</p>
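<p>In Haskell, one can at least approximate this bisimulation game on lazy lists (a sketch of my own; full bisimilarity is undecidable, so we only unfold a bounded number of steps):</p>

```haskell
-- Play the bisimulation game for at most k rounds: both lists must
-- agree on emptiness, and if non-empty, have equal heads and tails
-- that remain related.
bisimTo :: Eq a => Int -> [a] -> [a] -> Bool
bisimTo k _ _ | k <= 0  = True      -- out of fuel: accept optimistically
bisimTo _ []       []       = True
bisimTo k (x : xs) (y : ys) = x == y && bisimTo (k - 1) xs ys
bisimTo _ _        _        = False -- one empty, the other not
```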
<p>So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate <code>P</code>, in the other a relation <code>R</code>, to point out just one difference.</p>
<h3 id="relation-induction">Relation induction</h3>
<p>To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle.</p>
<p>The <code>datatype</code> declaration automatically creates a <em>relator</em>:</p>
<pre><code>rel_list :: ('a → 'b → bool) → 'a list → 'b list → bool</code></pre>
<p>The definition of <code>rel_list R xs ys</code> is that <code>xs</code> and <code>ys</code> have the same shape (i.e. length), and that the corresponding elements are pairwise related by <code>R</code>. You might have defined this relation yourself at some time, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:</p>
<pre><code>Q nil nil →
(∀x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys)) →
(∀xs ys. rel_list R xs ys → Q xs ys)</code></pre>
<p>Note how similar this lemma is in shape to the normal induction for lists above! And indeed, if we choose <code>Q xs ys ↔ (P xs ∧ xs = ys)</code> and <code>R x y ↔ (x = y)</code>, then we obtain exactly that. In that sense, the relation induction is a generalization of the normal induction.</p>
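<p>The relator itself is easy to define by hand; a Haskell sketch (<code>relList</code> being my ad-hoc name, mirroring Isabelle’s <code>rel_list</code>):</p>

```haskell
-- relList r xs ys holds iff both lists have the same shape (length)
-- and corresponding elements are pairwise related by r.
relList :: (a -> b -> Bool) -> [a] -> [b] -> Bool
relList _ []       []       = True
relList r (x : xs) (y : ys) = r x y && relList r xs ys
relList _ _        _        = False  -- different shapes
```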
<h3 id="relation-coinduction">Relation coinduction</h3>
<p>The same observation can be made in the coinductive world. Here, as well, the <code>codatatype</code> declaration introduces a function</p>
<pre><code>rel_llist :: ('a → 'b → bool) → 'a llist → 'b llist → bool</code></pre>
<p>which relates lists of the same shape with related elements – only that this one also relates infinite lists, and therefore is a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:</p>
<pre><code>(∀xs ys.
R xs ys → (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → rel_llist Q xs ys)</code></pre>
<p>It is even more obvious that this is a generalization of the standard coinduction principle shown above: Just instantiate <code>Q</code> with equality, which turns <code>rel_llist Q</code> into equality on the lists, and you have the theorem above.</p>
<h3 id="the-duality">The duality</h3>
<p>With our induction and coinduction principle generalized to relations, suddenly a duality emerges: If you turn around the implication in the conclusion of one you get the conclusion of the other one. This is an example of “co<em>something</em> is <em>something</em> with arrows reversed”.</p>
<p>But what about the premise(s) of the rules? What happens if we turn around the arrow here? Although slightly less immediate, it turns out that they are the same as well. To see that, we start with the premise of the coinduction rule, reverse the implication and then show it to be equivalent to the two premises of the induction rule:</p>
<pre><code>(∀xs ys.
R xs ys ← (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { case analysis (the other two cases are vacuously true) }
(∀xs ys.
xs = lnil → ys = lnil →
R xs ys ← (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
∧ (∀xs ys.
xs ≠ lnil ⟶ ys ≠ lnil ⟶
R xs ys ← (xs = lnil) = (ys = lnil) ∧
(xs ≠ lnil ⟶ ys ≠ lnil ⟶
Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { simplification }
(∀xs ys. xs = lnil → ys = lnil → R xs ys)
∧ (∀x xs y ys. R (cons x xs) (cons y ys) ← (Q x y ∧ R xs ys))
= { more rewriting }
R nil nil
∧ (∀x xs y ys. Q x y → R xs ys → R (cons x xs) (cons y ys))</code></pre>
<h3 id="conclusion">Conclusion</h3>
<p>The coinduction rule is not the direct dual of the induction rule, but both are specializations of more general, relational proof methods, where the duality is clearly present.</p>
<p>More generally, this little excursion shows that it is often beneficial to think of types less as sets, and more as relations – this way of thinking is surprisingly fruitful, and led to proofs of <a href="https://en.wikipedia.org/wiki/Parametricity">parametricity</a> and free theorems and other nice things.</p>Thu, 27 Jul 2017 22:05:32 -0400Coinduction in Coq and Isabelle
http://www.joachim-breitner.de/blog/726-Coinduction_in_Coq_and_Isabelle
http://www.joachim-breitner.de/blog/726-Coinduction_in_Coq_and_Isabellehttp://www.joachim-breitner.de/blog/726-Coinduction_in_Coq_and_Isabelle#commentsmail@joachim-breitner.de (Joachim Breitner)<p>The <a href="http://deepspec.org/event/dsss17/">DeepSpec Summer School</a> is almost over, and I have had a few good discussions. One revolved around coinduction: What is it, how does it differ from induction, and how do you actually prove something? In the course of the discussion, I came up with a simple coinductive exercise, and solved it both in Coq and in Isabelle.</p>
<h3 id="the-task">The task</h3>
<p>Define the extended natural numbers coinductively. Define the <span class="math inline">min </span> function and the <span class="math inline">≤</span> relation. Show that <span class="math inline">min(<em>n</em>, <em>m</em>)≤<em>n</em></span> holds.</p>
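<p>Before turning to the proof assistants, the data side of the task can be sketched in Haskell (my addition, not part of the exercise), where data types are lazy and hence coinductive by default:</p>

```haskell
-- Extended naturals: laziness gives us the infinite value for free,
-- and minE is productive on it. (minE, to avoid clashing with Prelude.min.)
data ENat = N | S ENat

infinity :: ENat
infinity = S infinity

minE :: ENat -> ENat -> ENat
minE (S n) (S m) = S (minE n m)
minE _     _     = N

-- Observe a possibly infinite number by cutting it off after k steps.
toInt :: Int -> ENat -> Int
toInt k _ | k <= 0 = 0
toInt _ N          = 0
toInt k (S n)      = 1 + toInt (k - 1) n
```

Haskell can of course only run these definitions, not prove the lemma; that is what the proof assistants are for.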
<h3 id="coq">Coq</h3>
<p>The definitions are straightforward. Note that in Coq, we use the same command to define a coinductive data type and a coinductively defined relation:</p>
<pre class="coq"><code>CoInductive ENat :=
| N : ENat
| S : ENat -> ENat.
CoFixpoint min (n : ENat) (m : ENat) :=
  match n, m with
  | S n', S m' => S (min n' m')
  | _, _ => N
  end.
CoInductive le : ENat -> ENat -> Prop :=
| leN : forall m, le N m
| leS : forall n m, le n m -> le (S n) (S m).</code></pre>
<p>The lemma is specified as</p>
<pre class="coq"><code>Lemma min_le: forall n m, le (min n m) n.</code></pre>
<p>and the proof method of choice to show that some coinductive relation holds is <code>cofix</code>. One would wish that the following proof would work:</p>
<pre class="coq"><code>Lemma min_le: forall n m, le (min n m) n.
Proof.
  cofix.
  destruct n, m.
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS.
    apply min_le.
Qed.</code></pre>
<p>but we get the error message</p>
<pre><code>Error:
In environment
min_le : forall n m : ENat, le (min n m) n
Unable to unify "le N ?M170" with "le (min N N) N".</code></pre>
<p>Effectively, as Coq is trying to figure out whether our proof is correct, i.e. type-checks, it stumbled on the equation <code>min N N = N</code>, and like a kid scared of coinduction, it did not dare to “run” the <code>min</code> function. The reason it does not just “run” a <code>CoFixpoint</code> is that doing so too daringly might simply not terminate. So, as <a href="http://adam.chlipala.net/cpdt/html/Coinductive.html">Adam explains in a chapter of his book</a>, Coq reduces a cofixpoint <em>only</em> when it is the scrutinee of a <code>match</code> statement.</p>
<p>So we need to get a <code>match</code> statement in place. We can do so with a helper function:</p>
<pre class="coq"><code>Definition evalN (n : ENat) :=
  match n with
  | N => N
  | S n => S n
  end.
Lemma evalN_eq : forall n, evalN n = n.
Proof. intros. destruct n; reflexivity. Qed.</code></pre>
<p>This function does not really do anything besides nudging Coq to actually evaluate its argument to a constructor (<code>N</code> or <code>S _</code>). We can use it in the proof to guide Coq, and the following goes through:</p>
<pre class="coq"><code>Lemma min_le: forall n m, le (min n m) n.
Proof.
  cofix.
  destruct n, m; rewrite <- evalN_eq with (n := min _ _).
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS.
    apply min_le.
Qed.</code></pre>
<h3 id="isabelle">Isabelle</h3>
<p>In Isabelle, definitions and types are very different things, so we use different commands to define <code>ENat</code> and <code>le</code>:</p>
<pre class="isabelle"><code>theory ENat imports Main begin
codatatype ENat = N | S ENat
primcorec min where
  "min n m = (case n of
      N ⇒ N
    | S n' ⇒ (case m of
        N ⇒ N
      | S m' ⇒ S (min n' m')))"
coinductive le where
  leN: "le N m"
| leS: "le n m ⟹ le (S n) (S m)"</code></pre>
<p>There are actually many ways of defining <code>min</code>; I chose the one most similar to the one above. For more details, see the <a href="http://isabelle.in.tum.de/dist/Isabelle2016-1/doc/corec.pdf"><code>corec</code> tutorial</a>.</p>
<p>Now to the proof:</p>
<pre class="isabelle"><code>lemma min_le: "le (min n m) n"
proof (coinduction arbitrary: n m)
  case le
  show ?case
  proof(cases n)
    case N then show ?thesis by simp
  next
    case (S n') then show ?thesis
    proof(cases m)
      case N then show ?thesis by simp
    next
      case (S m') with ‹n = _› show ?thesis
        unfolding min.code[where n = n and m = m]
        by auto
    qed
  qed
qed</code></pre>
<p>The <code>coinduction</code> proof method produces this goal:</p>
<pre><code>proof (state)
goal (1 subgoal):
1. ⋀n m. (∃m'. min n m = N ∧ n = m') ∨
(∃n' m'.
min n m = S n' ∧
n = S m' ∧
((∃n m. n' = min n m ∧ m' = n) ∨ le n' m'))</code></pre>
<p>I chose to spell the proof out in the Isar proof language, where the outermost proof structure is relatively explicit, and I proceed by case analysis mimicking the <code>min</code> function definition.</p>
<p>In the cases where one argument of <code>min</code> is <code>N</code>, Isabelle’s <em>simplifier</em> (a term rewriting tactic, so to say), can solve the goal automatically. This is because the <code>primcorec</code> command produces a bunch of lemmas, one of which states <code>n = N ∨ m = N ⟹ min n m = N</code>.</p>
<p>In the other case, we need to help Isabelle a bit to reduce the call to <code>min (S n) (S m)</code> using the <code>unfolding</code> method, where <code>min.code</code> contains exactly the equation that we used to specify <code>min</code>. Using just <code>unfolding min.code</code> would send this method into a loop, so we restrict it to the concrete arguments <code>n</code> and <code>m</code>. Then <code>auto</code> can solve the remaining goal (despite all the existential quantifiers).</p>
<h3 id="summary">Summary</h3>
<p>Both theorem provers are able to prove the desired result. To me it seems slightly more convenient in Isabelle: a lot of Coq infrastructure relies on the type checker being able to effectively evaluate expressions, which is tricky with cofixpoints, whereas in Isabelle <em>evaluation</em> plays a much less central role and <em>rewriting</em> is the crucial technique. One still cannot simply throw <code>min.code</code> into the simpset, but working with objects that do not evaluate easily or completely is less strange there.</p>
<h3 id="agda">Agda</h3>
<p>I was challenged to do it in Agda. Here it is:</p>
<div class="sourceCode"><pre class="sourceCode agda"><code class="sourceCode agda"><span class="kw">module</span> ENat <span class="kw">where</span>
<span class="kw">open</span> <span class="kw">import</span> Coinduction
<span class="kw">data</span> ENat <span class="ot">:</span> <span class="dt">Set</span> <span class="kw">where</span>
N <span class="ot">:</span> ENat
S <span class="ot">:</span> ∞ ENat <span class="ot">→</span> ENat
min <span class="ot">:</span> ENat <span class="ot">→</span> ENat <span class="ot">→</span> ENat
min <span class="ot">(</span>S n'<span class="ot">)</span> <span class="ot">(</span>S m'<span class="ot">)</span> <span class="ot">=</span> S <span class="ot">(</span>♯ <span class="ot">(</span>min <span class="ot">(</span>♭ n'<span class="ot">)</span> <span class="ot">(</span>♭ m'<span class="ot">)))</span>
min <span class="ot">_</span> <span class="ot">_</span> <span class="ot">=</span> N
<span class="kw">data</span> le <span class="ot">:</span> ENat <span class="ot">→</span> ENat <span class="ot">→</span> <span class="dt">Set</span> <span class="kw">where</span>
leN <span class="ot">:</span> <span class="ot">∀</span> <span class="ot">{</span>m<span class="ot">}</span> <span class="ot">→</span> le N m
leS <span class="ot">:</span> <span class="ot">∀</span> <span class="ot">{</span>n m<span class="ot">}</span> <span class="ot">→</span> ∞ <span class="ot">(</span>le <span class="ot">(</span>♭ n<span class="ot">)</span> <span class="ot">(</span>♭ m<span class="ot">))</span> <span class="ot">→</span> le <span class="ot">(</span>S n<span class="ot">)</span> <span class="ot">(</span>S m<span class="ot">)</span>
min<span class="ot">_</span>le <span class="ot">:</span> <span class="ot">∀</span> <span class="ot">{</span>n m<span class="ot">}</span> <span class="ot">→</span> le <span class="ot">(</span>min n m<span class="ot">)</span> n
min<span class="ot">_</span>le <span class="ot">{</span>S n'<span class="ot">}</span> <span class="ot">{</span>S m'<span class="ot">}</span> <span class="ot">=</span> leS <span class="ot">(</span>♯ min<span class="ot">_</span>le<span class="ot">)</span>
min<span class="ot">_</span>le <span class="ot">{</span>N<span class="ot">}</span> <span class="ot">{</span>S m'<span class="ot">}</span> <span class="ot">=</span> leN
min<span class="ot">_</span>le <span class="ot">{</span>S n'<span class="ot">}</span> <span class="ot">{</span>N<span class="ot">}</span> <span class="ot">=</span> leN
min<span class="ot">_</span>le <span class="ot">{</span>N<span class="ot">}</span> <span class="ot">{</span>N<span class="ot">}</span> <span class="ot">=</span> leN</code></pre></div>
<p>I will refrain from commenting on it, because I do not really know what I was doing here, but it typechecks; I refer you to the <a href="http://agda.readthedocs.io/en/latest/language/coinduction.html">official documentation on coinduction in Agda</a>. Let me note, though, that I wrote this using plain inductive types and recursion, and added <code>∞</code>, <code>♯</code> and <code>♭</code> until it worked.</p>Thu, 27 Jul 2017 16:24:03 -0400The Micro Two Body Problem
http://www.joachim-breitner.de/blog/725-The_Micro_Two_Body_Problem
http://www.joachim-breitner.de/blog/725-The_Micro_Two_Body_Problemhttp://www.joachim-breitner.de/blog/725-The_Micro_Two_Body_Problem#commentsmail@joachim-breitner.de (Joachim Breitner)<p>Inspired by the recent <a href="http://phdcomics.com/comics/archive.php?comicid=1961">PhD comic “Academic Travel”</a> and the not-so-recent <a href="https://xkcd.com/657">xkcd comic “Movie Narrative Charts”</a>, I created the following graphic, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).</p>
<div class="figure">
<img src="//www.joachim-breitner.de/various/reiseplot-anonym.svg" alt="Two bodies traveling the world"/>
<p class="caption">Two bodies traveling the world</p>
</div>Thu, 06 Jul 2017 16:27:46 +0100The perils of live demonstrations
http://www.joachim-breitner.de/blog/723-The_perils_of_live_demonstrations
http://www.joachim-breitner.de/blog/723-The_perils_of_live_demonstrationshttp://www.joachim-breitner.de/blog/723-The_perils_of_live_demonstrations#commentsmail@joachim-breitner.de (Joachim Breitner)<p>Yesterday, I was giving a <a href="https://www.meetup.com/de-DE/haskellhackers/events/240759486/">talk at the South SF Bay Haskell User Group</a> about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make <a href="https://code.world/">CodeWorld</a> even more attractive to students. I had given the talk before, at <a href="http://www.composeconference.org/2017/program/">Compose::Conference</a> in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.</p>
<p>So I arrived at the offices of Target<a href="#fn1" class="footnoteRef" id="fnref1"><sup>1</sup></a> in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…</p>
<p>Turns out that the API of CodeWorld was changed <a href="https://github.com/google/codeworld/commit/054c811b494746ec7304c3d495675046727ab114">just the day before</a>:</p>
<pre><code>commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith <cdsmith@gmail.com>
Date: Wed Jun 21 23:53:53 2017 +0000
Change dilated to take one parameter.
Function is nearly unused, so I'm not concerned about breakage.
This new version better aligns with standard educational usage,
in which "dilation" means uniform scaling. Taken as a separate
operation, it commutes with rotation, and preserves similarity
of shapes, neither of which is true of scaling in general.</code></pre>
<p>OK, that was <a href="https://github.com/nomeata/codeworld-talk/commit/9a5830b0a20010a9e54ab23a6f4aeafd31fccc6b">quick to fix</a>, and the CodeWorld server started to compile my code — and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the compiler’s time limit.</p>
<p>Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.</p>
<p>Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it: I had not modified this code since New York, and there it worked flawlessly<a href="#fn2" class="footnoteRef" id="fnref2"><sup>2</sup></a>. In the end, I could save face a bit by running the <a href="https://code.world/run.html?mode=haskell&dhash=DoE66gidx7VusA3qcMRI_hg">real pong game</a> against an attendee over the network, and no desynchronisation could be observed there.</p>
<p>Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, or in the lock-step simulation code discussed in <a href="https://arxiv.org/abs/1705.09704">our paper about it</a>, but in the code in my presentation that simulated the delayed network messages: in some instances it would deliver the UI events to the two simulated players in different orders, and hence cause them to do something different. Phew.</p>
<div class="footnotes">
<hr/>
<ol>
<li id="fn1"><p>Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.<a href="#fnref1">↩</a></p></li>
<li id="fn2"><p>I hope the video is going to be online soon, then you can check for yourself.<a href="#fnref2">↩</a></p></li>
</ol>
</div>Fri, 23 Jun 2017 16:54:36 -0700Farewell green cap
http://www.joachim-breitner.de/blog/722-Farewall_green_cap
http://www.joachim-breitner.de/blog/722-Farewall_green_caphttp://www.joachim-breitner.de/blog/722-Farewall_green_cap#commentsmail@joachim-breitner.de (Joachim Breitner)<p>For the last two years, I was known among swing dancers for my green flat cap:</p>
<div class="figure">
<img src="//www.joachim-breitner.de/bilder/scales/20160526153528_full.jpg" alt="Monti, a better model than me"/>
<p class="caption">Monti, a better model than me</p>
</div>
<p>This cap was very special: It was a gift from a good friend who sewed it by hand from what used to be a table cloth of my deceased granny, and it has traveled with me to many corners of the world.</p>
<p>Just like last week, when I was in Paris, where I attended the Charleston class of Romuald and Laura on Saturday (April 29). The following Tuesday I went to a swing social and wanted to put on the hat, and noticed that it was gone. The next day I bugged the manager and the caretaker of the venue of the class (Salles Sainte-Roche), and it seems that the hat was still there that morning, in Salle Kurtz<a href="#fn1" class="footnoteRef" id="fnref1"><sup>1</sup></a>, but when I went there it was gone.</p>
<p>And that is sad.</p>
<div class="figure">
<img src="//www.joachim-breitner.de/bilder/scales/20170426215737_full.jpg" alt="The last picture with the hat"/>
<p class="caption">The last picture with the hat</p>
</div>
<div class="footnotes">
<hr/>
<ol>
<li id="fn1"><p>How fitting, given that my granny’s maiden name is Kurz.<a href="#fnref1">↩</a></p></li>
</ol>
</div>Fri, 05 May 2017 18:13:40 -0400ghc-proofs rules more now
http://www.joachim-breitner.de/blog/720-ghc-proofs_rules_more_now
http://www.joachim-breitner.de/blog/720-ghc-proofs_rules_more_nowhttp://www.joachim-breitner.de/blog/720-ghc-proofs_rules_more_now#commentsmail@joachim-breitner.de (Joachim Breitner)<p>A few weeks ago I blogged about an experiment of mine, where I <a href="/blog/717-Why_prove_programs_equivalent_when_your_compiler_can_do_that_for_you_.html.en">proved equalities of Haskell programs by (ab)using the GHC simplifier</a>. For more details, please see that post, or the <a href="https://www.youtube.com/watch?v=jcL4bp4FMUw&feature=youtu.be">video of my talk at the Zürich Haskell User Group</a>, but one reason why this approach has any chance of being useful is the compiler’s support for rewrite rules.</p>
<p>Rewrite rules are program equations that the programmer specifies in the source file, and which the compiler then applies, from left to right, whenever some intermediate code matches the left-hand side of the equation. One such rule, for example, is</p>
<pre><code>{-# RULES "foldr/nil" forall k z. foldr k z [] = z #-}</code></pre>
<p>taken right out of the standard library.</p>
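<p>To see the mechanism in isolation, here is a minimal self-contained sketch of a user-defined rule; the rule name and function are made up for illustration and are not from the standard library:</p>
<pre><code>module Main where

-- An illustrative rewrite rule of our own: whenever GHC's intermediate
-- code contains `map id xs`, rewrite it to just `xs`.
{-# RULES "map/id" forall xs. map id xs = xs #-}

-- With optimizations enabled (-O), GHC may apply the rule and compile
-- this function down to the identity, eliminating the traversal.
-- Without -O the rule simply never fires; the semantics are unchanged.
noop :: [Int] -> [Int]
noop xs = map id xs

main :: IO ()
main = print (noop [1, 2, 3])  -- prints [1,2,3] either way
</code></pre>
<p>Note that GHC applies rules purely syntactically and never checks that they are semantically valid — which is exactly what makes them usable as proof hints here.</p>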
<p>In my blog post I went through the algebraic laws that a small library of mine, <a href="http://hackage.haskell.org/package/successors-0.1/docs/Control-Applicative-Successors.html">successors</a>, should fulfill, and sure enough, once I got to more interesting proofs, they would not go through just like that. At that point I had to add additional rules to the file I was editing, which helped the compiler to finish the proofs. Some of these rules were simple like</p>
<pre><code>{-# RULES "mapFB/id" forall c. mapFB c (\x -> x) = c #-}
{-# RULES "foldr/nil" forall k n. GHC.Base.foldr k n [] = n #-}
{-# RULES "foldr/undo" forall xs. GHC.Base.foldr (:) [] xs = xs #-}</code></pre>
<p>and some are more intricate like</p>
<pre><code>{-# RULES "foldr/mapFB" forall c f g n1 n2 xs.
GHC.Base.foldr (mapFB c f) n1 (GHC.Base.foldr (mapFB (:) g) n2 xs)
= GHC.Base.foldr (mapFB c (f.g)) (GHC.Base.foldr (mapFB c f) n1 n2) xs
#-}</code></pre>
<p>But there is something fishy going on here: the <code>foldr/nil</code> rule is identical to a rule in the standard library! I should not have to add that to my file as I am proving things. So I knew that the GHC plugin, which I wrote to do these proofs, was doing something wrong, but I did not investigate for a while.</p>
<p>I returned to this problem recently, and with the efficient and quick <a href="https://ghc.haskell.org/trac/ghc/ticket/13614">help of Simon Peyton Jones</a>, I learned what I was doing wrong.<a href="#fn1" class="footnoteRef" id="fnref1"><sup>1</sup></a> After fixing it, I could remove all the simple rules from the files with my proofs. And to my surprise, I could remove the intricate rule as well!</p>
<p>So with this bug fixed, ghc-proofs is able to prove <em>all</em> the Functor, Applicative and Monad rules of the Succs functor <a href="https://github.com/nomeata/ghc-proofs/blob/ea17e78a98c20995b2de4f64a9eb4e299f6dcde4/examples/Successors.hs">without any additional rewrite rules</a>, as you can see in the example file! (I still have to strategically place <code>seq</code>s in a few places.)</p>
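<p>As for that parenthetical: <code>seq</code> forces its first argument to weak head normal form before returning its second, which can align the strictness — and hence the simplified intermediate code — of two expressions that are otherwise equal. A generic sketch of the difference, not taken from the actual proof files:</p>
<pre><code>-- Two functions that return their second argument, differing only in
-- whether they evaluate the first. A syntactic equivalence check on
-- simplified code can distinguish them, so an explicit seq may be
-- needed to make both sides of a proof equally strict.
lazyConst :: Int -> Int -> Int
lazyConst _ y = y             -- never looks at its first argument

strictConst :: Int -> Int -> Int
strictConst x y = x `seq` y   -- diverges if x does

main :: IO ()
main = print (lazyConst undefined 3)  -- prints 3; strictConst undefined 3 would crash
</code></pre>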
<p>That’s great, isn’t it! Yeah, sure. But having to introduce the rules at that point provided a very good narrative in my talk, so when I give a similar talk next week in Paris (actually, twice: first <a href="https://www.irif.fr/gt/acs/index">at the university</a> and then <a href="https://www.meetup.com/de-DE/haskell-paris/events/239389804/">at the Paris Haskell meetup</a>), I will have to come up with a different example that calls for additional rewrite rules.</p>
<p>In related news: Since the last blog post, ghc-proofs learned to interpret proof specifications like</p>
<pre><code>applicative_law4 :: Succs (a -> b) -> a -> Proof
applicative_law4 u y = u <*> pure y
=== pure ($ y) <*> u</code></pre>
<p>where it previously only understood</p>
<pre><code>applicative_law4 = (\ u (y::a) -> u <*> (pure y :: Succs a))
=== (\ u (y::a) -> pure ($ y) <*> u)
</code></pre>
<p>I am not sure if this should be uploaded to Hackage, but I invite you to play around with the <a href="https://github.com/nomeata/ghc-proofs">GitHub version of ghc-proofs</a>.</p>
<div class="footnotes">
<hr/>
<ol>
<li id="fn1"><p>In short: I did not initialize the simplifier with the right <code>InScopeSet</code>, so RULES about functions defined in the current module were not always applied, and I did not feed the <code>eps_rules</code> to the simplifier, which contains all the rules found in imported packages, such as <code>base</code>.<a href="#fnref1">↩</a></p></li>
</ol>
</div>Thu, 27 Apr 2017 23:11:38 -0400