Joachim Breitner

How to merge a Pull Request

Published 2019-03-27 in sections English, Digital World.

It’s easy!

How to merge a pull request

Spring cleaning: local git branches

Published 2019-03-03 in sections English, Digital World.

This blog post is not for everyone. Look at the top of your screen. Do you see dozens of open tabs? Then this blog post is probably not for you. Do you see a small number of open tabs (and do you use git and GitHub seriously)? Then this might be interesting to you.

In a typical development project using git and GitHub, I have a bunch of local git branches, in various categories:

  1. master, simply tracking upstream’s master branch,
  2. actively developed branches, already pushed to the remote repository,
  3. like 2., but already turned into a pull request,
  4. like 3., but already merged into master using a merge commit,
  5. like 3., but already merged into master using squash or rebase,
  6. like 2., but rejected and closed,
  7. like 6., and with the corresponding remote branch already removed,
  8. branches with local commits that never made it to the remote.

This adds up to a fair number of local branches. Some of them are clearly mostly useless (in particular those accepted into master, i.e. 4. and 5.). So I am motivated to delete as many branches as I can.

But I also do not want to lose any information. So I do not want to delete commits that I could not recover, if I had to.

A simple first thing to do, as recommended on StackExchange, is to delete all branches that have been merged into master:

git branch --merged | egrep -v "(^\*|master)" | xargs git branch -d

but this only catches category 4, and would not even delete branches that were merged using rebase or squash. Can we do better?

The crucial bit to know here is that GitHub treats each pull request like a branch; you just don’t see these branches, as they are hidden (they live in a different namespace, apart from the usual branches and tags). These branches stay around and, more importantly, when you merge a pull request using squash or rebase, they still reflect the “old” history of the PR.

In order to make use of that, we can instruct git to fetch these pull request branches. To do so, edit .git/config in your repository, and add the last line of the following to the [remote "origin"] section:

[remote "origin"]
	url = …
	fetch = +refs/heads/*:refs/remotes/origin/*
	fetch = +refs/pull/*/head:refs/remotes/origin/pr/*

If you now run git fetch, you should see a lot of remotes/origin/pr/123 references being fetched.

And now that we see the state of all the pull requests that GitHub stores for us, we can delete all local branches that are contained in any of the remote branches:

for r in $(git branch -r --format='%(refname)'); do git branch --merged $r; done | sort -u | grep -v '^\*' | xargs -r git branch -D

Done! This deletes all categories mentioned above besides the last one.

A nice consequence of this is that I now know what to do with branches that did not lead anywhere, but which I did not simply want to remove: I push them to the GitHub repository, create a new draft pull request with a helpful comment for the future, immediately close the PR again, and delete the remote and the local branch. If I ever want to come back to it, I find it in the list of closed pull requests, which is a much nicer attic than an ever-growing list of branches of unclear status.
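
In concrete commands, this attic workflow looks roughly like this (a minimal sketch; the branch name attic-idea is made up, and the draft pull request itself is created and closed in the GitHub web interface):

git push origin attic-idea          # push the dead-end branch to GitHub
# …create a draft pull request for it on github.com, leave a note, close it…
git push origin --delete attic-idea # remove the remote branch again
git branch -D attic-idea            # and the local one; refs/pull/*/head keeps the commits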

Teaching to read Haskell

Published 2019-01-11 in sections English, Haskell.

TL;DR: New Haskell tutorial at http://haskell-for-readers.nomeata.de/.

Half a year ago, I left the normal academic career path and joined the DFINITY Foundation, a non-profit start-up that builds a blockchain-based “Internet Computer” which will, if everything goes well, provide a general purpose, publicly owned, trustworthy service hosting platform.

DFINITY heavily bets on Haskell as a programming language to quickly develop robust and correct programs (and it was my Haskell experience that opened this door for me). DFINITY also builds heavily on innovative cryptography and cryptographic protocols to make the Internet Computer work, and has assembled an impressive group of crypto researchers.

Crypto is hard, and so is implementing crypto. How do we know that the Haskell code correctly implements what the cryptography researchers designed? Clearly, our researchers will want to review the code and make sure that everything is as intended.

But surprisingly, not everybody is Haskell-literate. This is where I come in: given that I have taught Haskell classes before, I get to introduce Haskell to those who do not know it well enough yet.

At first I thought I’d just re-use the material I created for the CIS 194 Haskell course at the University of Pennsylvania. But I noticed that I am facing quite a different audience. Instead of young students with fairly little computer science background who can spend quite a bit of time learning to write Haskell, I am now addressing senior and very smart computer scientists with many other important things to do, who want to learn to read Haskell.

Certainly, a number of basics are needed in either case; basic functional programming for example. But soon, the needs diverge:

  • In order to write Haskell, I have to learn how to structure a program, how to read error messages and how to deal with Haskell’s layout rule, but I don’t need to know all idioms and syntax features yet.
  • If I want to read Haskell, I need to navigate possibly big files, recognize existing structure, and deal with a plenitude of syntax, but I don’t need to worry about setting up a compiler or picking the right library.

So I set out to create a new Haskell course, “Haskell for Readers”, that is specifically tailored to this audience. It leaves out a few things that are not necessary for reading Haskell, is relatively concise and densely packed, but it starts with the basics and does not assume prior exposure to functional programming.

As behooves a non-profit foundation, DFINITY is happy for me to develop the lecture material in the open and to release it to the public under a permissive Creative Commons license, so I invite you to read the in-progress document, and maybe learn something. Of course, my hope is to also get constructive feedback in return, and hence profit from this public release. Sources are on GitHub.

Nonce sense paper online

Published 2019-01-10 in sections English, Math.

Nadia Heninger and I have just put the preprint version of our paper “Biased Nonce Sense: Lattice Attacks against Weak ECDSA Signatures in Cryptocurrencies”, to be presented at Financial Crypto 2019, online. In this work, we see how many private keys used on the Bitcoin, Ethereum and Ripple blockchains, as well as in HTTPS and SSH, were used in an unsafe way and hence can be compromised. The resulting numbers are not large – 300 Bitcoin keys, with a balance of around $54 – but it shows (again and again) that it can be tricky to get crypto right, and that if you don’t get it right, you can lose your money.

Brief summary

When you create a cryptographic signature using ECDSA (the elliptic curve digital signature algorithm), you need to come up with a nonce, a 256-bit random number. It is really important to use a different nonce every time; otherwise it is easy for someone else to take your signatures (which might be stored for everyone to read on the Bitcoin blockchain) and calculate your private key using relatively simple math, and with your private key they can spend all your Bitcoins. In fact, there is evidence that people out there continuously monitor the blockchains for signatures with such repeated nonces and immediately extract the money from compromised keys.
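
For the curious, the “relatively simple math” in the repeated-nonce case is the textbook attack (standard ECDSA notation, not specific to our paper): each signature satisfies sᵢ = k⁻¹ ⋅ (zᵢ + r⋅d) mod n, where zᵢ is the message hash, d the private key, and r is shared because the nonce k is. Two such signatures then give

k = (z₁ − z₂) ⋅ (s₁ − s₂)⁻¹  mod n
d = (s₁ ⋅ k − z₁) ⋅ r⁻¹      mod n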

Less well known, but still nothing new to the crypto (as in cryptography) community, is that an attacker can calculate the key from signatures that use different, but similar, nonces: for example if they are close to each other (only the low bits differ), or if they differ by exactly a large power of two (only the high bits differ). This uses a fancy and powerful technique based on lattices. Our main contribution here is to bridge crypto (as in cryptography) and crypto (as in cryptocurrency) and see if such vulnerabilities actually exist out there.

And indeed, there are some. Not many (which is good), but they do exist, and clearly due to more than one source. Unfortunately, it is really hard to find out who made these signatures, and with which code, so we can only guess about the causes of these bugs. A large number of affected signatures are related to multisig transactions, so we believe that maybe hardware tokens could be the cause here.

Observing programming bugs

Even though we could not identify the concrete implementations that caused these problems, we could still observe some interesting details about them. The most curious is certainly this one:

One set of signatures, which incidentally was created by an attacker who emptied out accounts with compromised keys (e.g. keys created from a weak password, or otherwise leaked onto the internet), used nonces that shared the low 128 bits, and hence revealed the (already compromised) private key of the account being emptied. Curiously, these low 128 bits are precisely the upper 128 bits of the private key.

So it very much looks like the attacker hacked up a program that monitors the blockchain and empties out accounts, and probably did so using a memory-unsafe language like C, and got the size of the buffers for nonce and key wrong: maybe they did properly fill the nonce with good random data, but when they wrote the secret key, the memory locations overlapped and they overwrote parts of their randomness with the very non-random secret key. Oh well.

Do I need to worry?

Probably not. The official blockchain clients get their crypto right (at least this part), and use properly random nonces, so as a user you don’t have to worry. In fact, since 2016, the Bitcoin client uses deterministic signatures (RFC6979) which completely removes the need for randomness in the process.

If you are using non-standard libraries, or if you write your own crypto routines (which you should only ever do if you have a really really good reason for it) you should make sure that these use RFC6979. This is even more important on embedded devices or hardware tokens where a good source of randomness might be hard to come by.

Discrete logarithm in secp256k1 with lookup table

In the course of this work I wanted to find out if small nonces (< 2⁶⁴) were used even when the key created only one such signature – the lattice-based attacks need at least two signatures to work. So I created code that calculates the discrete log in the secp256k1 curve up to an exponent of 2⁶⁴. This is made feasible using a lookup table for smaller exponents (< 2³⁹ in our case – just large enough to still fit into 2.2 TB of RAM).
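
For what it is worth, the standard way to use such a table is a baby-step giant-step style split of the exponent (I am sketching the textbook technique here; the actual code may differ in its details): to solve Q = x⋅G with x < 2⁶⁴, write

x = a⋅2³⁹ + b            (0 ≤ b < 2³⁹, 0 ≤ a < 2²⁵)
Q − a⋅(2³⁹⋅G) = b⋅G      (look up b⋅G in the table for each candidate a)

and a hit in the table yields b, and hence x.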

This exercise turned out to be not very useful; we did not find any additional keys, but I had fun writing the program, implemented in C and working very close to the raw data, juggling big binary files mmap’ed into memory, implementing custom lookup indices and such. In the hope that this might be useful to someone, I share the code at https://github.com/nomeata/secp265k1-lookup-table.

Thoughts on bootstrapping GHC

Published 2018-12-13 in sections English, Haskell.

I am returning from the reproducible builds summit 2018 in Paris. The latest hottest thing within the reproducible-builds project seems to be bootstrapping: How can we build a whole operating system from just and only source code, using very little, or even no, binary seeds or auto-generated files? This is actually a concern that is somewhat orthogonal to reproducibility: bootstrappable builds help me trust programs that I built, while reproducible builds help me trust programs that others built.

And while good progress is being made bootstrapping a full system from just a C compiler written in Scheme and a Scheme interpreter written in C that can build each other (Janneke’s mes project), with plans to build that on top of stage0, which starts with just 280 bytes of binary, the situation looks pretty bad when it comes to Haskell.

Unreachable GHC

The problem is that contemporary Haskell has only one viable implementation, GHC. And GHC, written in contemporary Haskell, needs GHC to be built. So essentially everybody out there either just downloads a binary distribution of GHC, or builds GHC from source using a possibly older (but not much older) version of GHC that they already have. Even distributions like Debian do nothing different: when they build the GHC package, the builders use, well, the GHC package.

There are other Haskell implementations out there. But if they are mature and actively developed, then they are implemented in Haskell themselves, often even using advanced features that only GHC provides. And even those are insufficient to build GHC itself, let alone the old and abandoned Haskell implementations.

In all these cases, at some point an untrusted binary is used. This is very unsatisfying. What can we do? I don’t have the answers, but please allow me to outline some avenues of attack.

Retracing history

Obviously, even GHC has not existed since the beginning of time, and the first versions surely were built using something other than GHC. The oldest version of GHC for which we can find a release on the GHC web page is version 0.29 from July 1996. But the installation instructions say:

GHC 0.26 doesn't build with HBC. (It could, but we haven't put in the effort to maintain it.)

GHC 0.26 is best built with itself, GHC 0.26. We heartily recommend it. GHC 0.26 can certainly be built with GHC 0.23 or 0.24, and with some earlier versions, with some effort.

GHC has never been built with compilers other than GHC and HBC.

So it seems that besides GHC, only HBC was ever used to compile GHC. HBC is a Haskell compiler of which we can find the sources of only one arbitrary version, and only thanks to archive.org. Parts of it are written in C, so I looked into this: compile HBC, use it to compile GHC 0.29, and then, step by step, build every (major) version of GHC until today.

The problem is that it is non-trivial to build software from the 90s using today’s compilers. I briefly looked at the HBC code base, and had to change some files from using varargs.h to stdarg.h, and this is surely just one of many similar stumbling blocks in trying to build these tools. Oh, and even the HBC sources state:

# To get everything done: make universe
# It is impossible to make from scratch.
# You must have a running lmlc, to
# recompile it (of course).

So I learned that actually, most of it is written in LML, and the LML compiler is written in LML. So this is a dead end. (Thanks to Lennart for clearing up a misunderstanding on my side here.)

Going back, but doing it differently

Another approach is to go back in time, to some old version of GHC, but maybe not all the way to the beginning, and then try to use another, officially unsupported, Haskell compiler to build GHC. This is what rekado tried to do in 2017: he used the most contemporary implementation of Haskell in C, the Hugs interpreter. Using this, he compiled nhc98 (yet another abandoned Haskell implementation), with the hope of building GHC with nhc98. He made impressive progress back then, but ran into a problem where the runtime crashed. Maybe someone is interested in picking up from there?

Removing, simplifying, extending, in the present.

Both approaches so far focus on building an old version of GHC. This adds complexity: other tools (the shell, make, yacc etc.) may behave differently now, in ways that cause hard-to-debug problems. So maybe it is more fun and more rewarding to focus on today’s GHC? (At this point I am starting to hypothesize.)

I said before that no other existing Haskell implementation can compile today’s GHC code base, because of features like mutually recursive modules, the foreign function interface etc. And also other existing Haskell implementations often come with a different, smaller set of standard libraries, but GHC assumes base, so we would have to build that as well...

But we don’t need to build it all. Surely there is much code in base that is not used by GHC. Also, there is much code in GHC that we do not need in order to build GHC. So by removing that, we reduce the amount of Haskell code that we need to feed to the other implementation.

The remaining code might use some features that are not supported by our bootstrapping implementation. Mutually recursive modules could be manually merged. GADTs that are only used for additional type safety could be replaced by normal ones, which might make some pattern matches incomplete. Syntactic sugar can be desugared. By simplifying the code base in that way, one might be able to produce a fork of GHC that is within reach of the likes of Hugs or nhc98.

And if there are features that are hard to remove, maybe we can extend the bootstrapping compiler or interpreter to support them? For example, it was mostly trivial to extend Hugs with support for the # symbol in names – and we can be pragmatic and just allow it always, since we don’t need a standards conforming implementation, but merely one that works on the GHC code base. But how much would we have to implement? Probably this will be more fun in Haskell than in C, so maybe extending nhc98 would be more viable?

Help from beyond Haskell?

Or maybe it is time to create a new Haskell compiler from scratch, written in something other than Haskell? Maybe in some other language that is reasonably pleasant to write a compiler in (OCaml? Scala?), but that has the bootstrappability story already sorted out somehow.

But in the end, all variants come down to the same problem: writing a Haskell compiler for full, contemporary Haskell as used by GHC is hard and really a lot of work – if it were not, there would at least be alternative implementations in Haskell out there. And as long as nobody comes along and does that work, I fear that we will continue to be unable to build our nice Haskell ecosystem from scratch. Which I find somewhat dissatisfying.

Sliding Right into Information Theory

Published 2018-12-03 in sections English, Math.

It's hardly news any more, but it seems I have not blogged about my involvement last year with an interesting cryptanalysis project, which resulted in the publication Sliding right into disaster: Left-to-right sliding windows leak by Daniel J. Bernstein, me, Daniel Genkin, Leon Groot Bruinderink, Nadia Heninger, Tanja Lange, Christine van Vredendaal and Yuval Yarom, which was published at CHES 2017 and on ePrint (ePrint is the cryptographer’s version of arXiv).

This project nicely touched upon many fields of computer science: first we need systems expertise to mount a side-channel attack that uses cache timing differences to observe which line of a square-and-multiply algorithm the target process is executing. Then we need algorithm analysis to learn from these observations partial information about the bits of the private key. This part includes nice PLy concepts like rewrite rules (see Section 3.2). Once we know enough about the secret keys, we can use fancy cryptography to recover the whole secret key (Section 3.4). And finally, some theoretical questions arise, such as: “How much information do we need for the attack to succeed?” and “Do we obtain this much information?”, and we need some nice math and information theory to answer these.

Initially, I focused on the PL-related concepts. We programming language people are yak-shavers, and in particular “rewrite rules” just demands the creation of a DSL to express them, and an interpreter to execute them, doesn’t it? But it turned out that these rules are actually not necessary, as the key recovery can use the side-channel observation directly, as we found out later (see Section 4 of the paper). But now I was already hooked, and turned towards the theoretical questions mentioned above.

Shannon vs. Rényi

It felt good to shake the dust off some of the probability theory that I learned for my maths degree, and I also learned some new stuff. For example, it was intuitively clear that whether the attack succeeds depends on the amount of information obtained by the side channel attack, and based on prior work, the expectation was that if we know more than half the bits, then the attack would succeed. Note that for this purpose, two known “half bits” are as good as knowing one full bit; for example, knowing that the secret key is either 01 or 11 (one bit known for sure) is just as good as knowing that the key is either 00 or 11.

Clearly, this is related to entropy somehow – but how? Trying to prove that the attack works if the entropy rate of the leak is > 0.5 just did not work, against all intuition. But when we started with a formula that describes when the attack succeeds, and then simplified it, we found a condition that looked suspiciously like what we wanted, namely H > 0.5, only that H was not the conventional entropy (also known as the Shannon entropy, H = −∑ p ⋅ log p), but rather something else: H = −log ∑ p², which turned out to be called the collision entropy or Rényi entropy.
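
For reference (this is textbook material, not something specific to the paper), both are members of the Rényi family

H_α = 1/(1−α) ⋅ log ∑ pᵢ^α

where α → 1 recovers the Shannon entropy −∑ pᵢ ⋅ log pᵢ, and α = 2 gives the collision entropy −log ∑ pᵢ².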

This resulted in Theorem 3 in the paper, and it neatly answers the question of when the Heninger and Shacham key recovery algorithm, extended to partial information, can be expected to succeed, in a much more general setting than just this particular side-channel attack.

Markov chains and an information theoretical spin-off

The other theoretical question is now: why does this particular side channel attack succeed, i.e. why is the entropy rate H > 0.5? As so often, Markov chains are an immensely powerful tool to answer that question. After some transformations, I managed to model the state of the square-and-multiply algorithm, together with the side-channel leak, as a Markov chain with a hidden state. Now I just had to calculate its Rényi entropy rate, right? I wrote some Haskell code to do this transformation, and also came up with an ad-hoc, intuitive way of calculating the rate. So when it was time to write up the paper, I was searching for a reference that describes the algorithm that I was using…

Only I could find none! I contacted researchers who have published on Markov chains and entropies, but they just referred me in circles, until one of them, Maciej Skórski, responded. Our conversation, highly condensed, went like this: “Nice idea, but it can’t be right, it would solve problem X” – “Hmm, but it feels so right. Here is a proof sketch.” – “Oh, indeed, cool. I can even generalize this! Let’s write a paper”. Which we did! Analytic Formulas for Renyi Entropy of Hidden Markov Models (preprint only, it is still under submission).

More details

Because I joined the sliding-right project late, not all my contributions made it into the actual paper, and therefore I published an “inofficial appendix” separately on ePrint. It contains

  1. an alternative way to find the definitively knowable bits of the secret exponent, which is complete and can (in rare corner cases) find more bits than the rewrite rules in Section 3.1
  2. an algorithm to calculate the collision entropy H, including how to model a side-channel attack like this one as a Markov chain, and how to calculate the entropy of such a Markov chain, and
  3. the proof of Theorem 3.

I also published the Haskell code that I wrote for this project, including the Markov chain collision entropy stuff. It is not written with public consumption in mind, but feel free to ask if you have questions about it.

Note that all errors, typos and irrelevancies in that document and the code are purely mine and not of any of the other authors of the sliding-right paper. I’d like to thank my coauthors for the opportunity to join this project.

Swing Dancer Profile

Published 2018-08-07 in sections English, Reisen.

During my two years in Philadelphia (I was a post-doc with Stephanie Weirich at the University of Pennsylvania) I danced a lot of Swing, in particular at the weekly social “Jazz Attack”.

They run a blog, that blog features dancers, and this week – just in time for my last dance – they featured me with a short interview.

The merits of a yellow-red phase

Published 2018-07-30 in sections English, Reisen.

In my post yesterday on Continuity Lines in North America I mentioned in passing that I am a big fan of German¹ traffic lights, which have a brief yellow-red phase between the red and the green phase (3 to 5 seconds, depending on road speed). A German reader of my blog was surprised by this, said that he considers it pointless, and would like to hear my reasons. So here we go…

Life without Yellow-Red

Lights that switch directly from red to green cause more stress. Picture yourself as the first car waiting at a traffic light, with a bunch of cars behind you. Can you relax, maybe observe the cars passing in front of you, switch the radio station, or simply look somewhere else for a moment? Well, you can, but you risk missing the moment when the light switches from red to green. When you look at the traffic light again and see it bright green, you have no idea how long it has been green. Hyper-aware of all the cars behind you waiting to get going, you’ll rush to get started, and if you don’t do that really fast now, surely one of the people waiting behind you will have honked.

So at the next intersection, you’d better not take your eyes off the light – or, alternatively, you develop a screw-you-I-don’t-care attitude towards the other cars, which might allow you to relax in this situation, but is in general not desirable.

Maybe this is less of a problem on the West Coast, where supposedly everybody is nice, relaxed, and patient, and you can take all the time you want to get rolling. But in the stereotypical East Coast traffic full of angry, impatient and antisocial drivers, you really don’t get away with that.

Life with Yellow-Red

The yellow-red phase solves this problem elegantly: as long as the light is red, you don’t have to watch it constantly and with maximum attention. You can relax: it suffices to check it often enough to catch the yellow-red phase, once every two or three seconds. When you see it, you get ready to start; and when the light actually turns green, you start on time.

I would expect that it is not only less stressful, it is also more efficient: because every car in the lane has the heads-up warning “green in a moment”, the cars can start rolling in quicker succession. Without the warning, every car has to account much more for the possibly slower reaction of the car in front.

PS: A friend of mine wonders whether the German Yellow-Red is needed because cars with manual transmissions are much more common in Germany than in the US, and you need more time to, well, get into gear with these cars.


  1. Also Great Britain, Croatia, Latvia, Norway, Austria, Poland, Russia, Saudi-Arabia, Sweden, Switzerland, Hungary and others.

Continuity Lines

Published 2018-07-29 in sections English, Reisen.

I am currently on a road trip from Philly going north along the Atlantic coast into Canada, aiming for Nova Scotia. When I passed from the US into Canada, I had an unexpected emotional response to the highways there: I felt more at home!

And I believe it has to do with the pavement markings on US vs. Canadian freeways.

Consider this image, taken from the Manual on Uniform Traffic Control Devices of the United States, an official document describing (among other things) how the pavement of a freeway ought to be marked:

Parallel Deceleration Lane for Exit Ramp

This is a typical exit ramp from the freeway. On-ramps and cloverleaf ramps look similar. Note that the right-most lane goes somewhere else than the left and the middle lane, yet the lanes look completely identical. In particular, the lines between the lanes are shaped the same!

Now, for comparison, the corresponding image in a Canadian manual, namely the Manual of Standard Traffic Signs & Pavement Markings for British Columbia:

Canadian off-ramps

Here, there are different lines between the different lanes: normal lane lines left, but a so-called continuity line, with a distinctly different pattern, between the normal lanes and the exit lane. It’s like in Germany!

With this in mind I understand one reason¹ why driving in the US² is noticeably more stressful: there is just always the anxiety of whether you are accidentally in an exit lane!

Update (2018-07-30): As Paul Johnson pointed out (see comment below), I was looking at an old version of the MUTCD. The current version, from 2009, has these lines:

Current US off-ramps

They have published a (very long) document describing the changes in the new version of the manual, and Section 306 describes the rationale:

[..] the FHWA adopts language to clarify that dotted lane lines, rather than broken lane lines, are to be used for non-continuing lanes, including acceleration lanes, deceleration lanes, and auxiliary lanes. [..] a number of States and other jurisdictions currently follow this practice, which is also the standard practice in Europe and most other developed countries. The FHWA believes that the existing use of a normal broken lane line for these non-continuing lanes does not adequately inform road users of the lack of lane continuity ahead and that the standardized use of dotted lane lines for non-continuing lanes as adopted in this final rule will better serve this important purpose in enhancing safety and uniformity.

So all is well! But it means that either Pennsylvania was slower than allowed in implementing these changes (the deadline was December 2016), or it was something else altogether that made me feel more at home on the Canadian freeway.


  1. I say “one reason”, not “the reason”, because there are many more – “Rechtsfahrgebot”, no red-and-yellow-phase in the traffic light, Pennsylvanian road quality…

  2. Supposedly, Pennsylvania is particularly bad with roads in general, but also with this particular problem, and California has exit lanes clearly separated. But, of course, because this is the US, it does not use the same pattern as the others (Canada, Europe), but spaced dots…

Build tool semantic aware build systems

Published 2018-07-28 in sections English, Digital World.

I just had a lovely visit with Ben Gamari and Laura Dietz, and at the breakfast table we had an idea that I want to record here.

The problem

Laura and Ben talked about the struggles they had using build systems like make or Nix in data science applications. A build system like Nix is designed around the idea that builds are relatively cheap, and that any change in a dependency ought to trigger a rebuild, just to be sure that all build outputs are up-to-date. This is a perfectly valid assumption when building software, where build times are usually minutes, or maybe hours. But when some of the data processing takes days, then you really, really want to avoid any unnecessary builds.

One way of avoiding unnecessary rebuilds that is supported by build systems like Shake and (I presume, maybe with some extra work) Nix, is to do output hashing: if one build stage has its input changed and needs to be re-run, but its output is actually unchanged, then later build stages do not have to be re-run. Great!

But even with this feature in place, one common problem remains: build tools! Your multi-gigabyte data is going to be processed by some code you write. What if the code changes? Clearly, if you change the algorithm, or fix a bug in your code, you want the processing to be re-run. But if you just change some strings in the --help text, or update some libraries, or do something else that does not change the program in a way that is significant for the task at hand, wouldn’t you prefer not to pay for that with a futile multi-day data processing step?

Existing approaches

There are various ways you can tackle this these days:

  • You bite the bullet and add the build tool as a dependency of the processing step. You get the assurance that your data always reflects the output of the latest version of your tool, but you get lots of rebuilds.

  • You don’t track the tool as part of your build system. It is now completely up to you to decide when the tool has changed in significant ways. When you think it has, you throw away all build artifacts and start from scratch. Very crude, and easy to get wrong.

  • You keep track of a “build tool version”, e.g. a text file with a number, that you depend on in lieu of the build tool itself. When you make a change that you think is significant, you bump that version number. This is similar to the previous approach, but more precise: only the build outputs that used this particular tool will be invalidated, and it also scales better to working in a team. But of course, you can still easily get it wrong.

Build tool semantics

Why are all these approaches so unsatisfying? Because none of them allows you to express the intent, which is to say “this build step depends on the semantics (i.e. behaviour) of the build tool”. If we could somehow specify that, then all would be great: build tool changes, but its semantics is unaffected? No rebuild. Build tool changes, semantics changes? Rebuild.

This ideal is not reachable (blah blah halting problem blah blah) – but I believe we can approximate it. At least if good practices were used and the tool has a test suite!

Assume for now that the tool is a simple batch-processing tool, i.e. it takes some input and produces some output. Assume further that there is a test suite with a small but representative set of example inputs, or maybe some randomly generated inputs (using a fixed seed). If the test suite is good, then the (hash of the) output on the test suite examples is an approximation of the program’s semantics.

And the build system can use this “semantics hash”. If the build tool changes, the build system can re-run the test suite and compare the output with the previous run. If the outputs changed, then the tool seems to have changed in significant ways, and it needs to re-run the data processing. But if the test suite outputs are unchanged, then it can assume that the behaviour of the tool has not changed, and re-use the existing data outputs.
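
A minimal sketch of how such a semantics hash could be computed, assuming a hypothetical tool ./mytool that reads one input file and writes its result to stdout, and test inputs in tests/ (all names made up):

semantics_hash() {
  # run the tool on every test input and hash the concatenated outputs
  for input in tests/*.input; do
    ./mytool "$input"
  done | sha256sum | cut -d' ' -f1
}

The build system would then store this hash alongside each data output, and only re-run the processing when the hash – rather than the tool binary itself – changes.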

That is the core idea! A few additional thoughts, though:

  • What if the test suite changes? If the above idea is implemented naively, then adding a test case to the test suite will change the semantics hash, and re-build everything. That would be horrible in terms of incentives! So instead, the build system should keep the old version of the build tool around, re-calculate its semantics hash based on the new test suite, and then compare that. This way, extending the test suite does not cause recompilation.

Hash-and-store-based build systems like Nix should have no problem keeping the previous version of the build tool around as long as there is output that depends on it.

  • A consequence of this approach is that if you find and fix a bug in your tool that is not covered by the existing test suite, then you absolutely have to add a test for that to your test suite, otherwise the bug fix will not actually make it to your data output. This might be the biggest downside of the approach, but at least it sets incentives right in that it makes you maintain a good regression test suite.

  • What if things go wrong, the test suite is incomplete and the build system re-uses build output because it wrongly assumes that two versions of the build tool have the same semantics?

If done right, this can be detected and fixed: the build system needs to record which tool version (and not just which “semantics hash”) was used to produce a particular output. So once the test suite uncovers the difference, the build system will no longer consider the two versions equivalent, will – after the fact – invalidate the re-use of the previous build output, and will re-build what needs to be rebuilt.

I am curious to hear whether anybody has played with these or similar ideas before. How did it go? Would it be possible to implement this in Nix? In Shake? In other systems? Tell me your thoughts!