Joachim Breitner

Blog

GHC Steering Committee Retrospective

Published 2024-01-25 in sections English, Haskell.

After seven years of service as member and secretary on the GHC Steering Committee, I have resigned from that role. So this is a good time to look back and retrace the formation of the GHC proposal process and committee.

In my memory, I helped define and shape the proposal process, optimizing it for effectiveness and throughput, but memory can be misleading, and judging from the paper trail in my email archives, this was indeed mostly Ben Gamari’s and Richard Eisenberg’s achievement: Already in the summer of 2016, Ben Gamari set up the ghc-proposals Github repository with a sketch of a process and sent out a call for nominations on the GHC users’ mailing list, which I replied to. The Simons picked the first set of members, and in the fall of 2016 we discussed the committee’s by-laws and procedures. As so often, Richard was an influential shaping force here.

Three ingredients

For example, it was he who suggested that for each proposal we have one committee member be the “Shepherd”, overseeing the discussion. I believe this was one ingredient of the process’s effectiveness: There is always one person in charge, and thus we avoid the delays incurred when any one of a non-singleton set of volunteers has to do the next step (and everyone hopes someone else does it).

The next ingredient was that we do not usually require a vote among all members (again, not easy with volunteers with limited bandwidth and occasional phases of absence). Instead, the shepherd makes a recommendation (accept/reject), and if the other committee members do not complain, this silence is taken as consent, and we come to a decision. It seems this idea can also be traced back to Richard, who suggested that “once a decision is requested, the shepherd [generates] consensus. If consensus is elusive, then we vote.”

At the end of the year we agreed and wrote down these rules, created the mailing list for our internal, but publicly archived committee discussions, and began accepting proposals, starting with Adam Gundry’s OverloadedRecordFields.

At that point, there was no “secretary” role yet, so how did I become one? It seems that in February 2017 I started to clean up and refine the process documentation, fixing “bugs in the process” (like requiring authors to set Github labels when they don’t even have permissions to do that). This in particular meant that someone from the committee had to manually handle submissions and so on, and by the aforementioned principle that at every step there ought to be exactly one person in charge, the role of a secretary followed naturally. In the email in which I described that role I wrote:

Simon already shoved me towards picking up the “secretary” hat, to reduce load on Ben.

So when I merged the updated process documentation, I already listed myself as “secretary”.

It wasn’t just Simon’s shoving that put me into the role, though. I dug out my original self-nomination email to Ben, and among other things I wrote:

I also hope that there is going to be clear responsibilities and a clear workflow among the committee. E.g. someone (possibly rotating), maybe called the secretary, who is in charge of having an initial look at proposals and then assigning it to a member who shepherds the proposal.

So it is hardly a surprise that I became secretary, when it was dear to my heart to have a smooth continuous process here.

I am rather content with the result: These three ingredients – single secretary, per-proposal shepherds, silence-is-consent – helped the committee to be effective throughout its existence, even as every once in a while individual members dropped out.

Ulterior motivation

I must admit, however, that there was an ulterior motivation behind me grabbing the secretary role: Yes, I did want the committee to succeed, and I did want authors to receive timely, good and decisive feedback on their proposals – but I did not really want to have to do that part.

I am, in fact, a lousy proposal reviewer. I am too generous when reading proposals, and more likely to mentally fill gaps in a specification than to spot them, always optimistically assuming that the authors surely know what they are doing, rather than critically assessing the impact, the implementation cost and the interaction with other language features.

And, maybe more importantly: why should I know which changes are good and which are not so good in the long run? Clearly, the authors cared enough about a proposal to put it forward, so there is some need… and I do believe that Haskell should stay an evolving and innovating language… but how does this help me decide about this or that particular feature?

I even, during the formation of the committee, explicitly asked that we write down some guidance on “Vision and Guideline”; do we want to foster change or innovation, or be selective gatekeepers? Should we accept features that are proven to be useful, or should we accept features so that they can prove to be useful? This discussion, however, did not lead to a concrete result, and the assessment of proposals relied on the sum of each member’s personal preference, expertise and gut feeling. I am not saying that this was a mistake: It is hard to come up with a general guideline here, and even harder to find one that does justice to each individual proposal.

So the secret motivation for me to grab the secretary post was that I could contribute without having to judge proposals. Being secretary allowed me to assign most proposals to others to shepherd, and only once in a while take care of a proposal myself, when it seemed to be very straight-forward. Sneaky, ain’t it?

7 Years later

For years to come I happily played secretary: When an author finished their proposal and public discussion ebbed down they would ping me on GitHub, I would pick a suitable shepherd among the committee and ask them to judge the proposal. Eventually, the committee would come to a conclusion, usually by implicit consent, sometimes by voting, and I’d merge the pull request and update the metadata thereon. Every few months I’d summarize the current state of affairs to the committee (what happened since the last update, which proposals are currently on our plate), and once per year I gathered the data for Simon Peyton Jones’ annual GHC Status Report. Sometimes some members needed a nudge or two to act. Some would eventually step down, and I’d send around a call for nominations and, when the nominations came in, distribute them off-list among the committee and tally the votes.

Initially, that was exciting. For a long while it was a pleasant and rewarding routine. Eventually, it became a mere chore. I noticed that I didn’t quite care so much anymore about some of the discussion, and there was a decent amount of navel-gazing, meta-discussions and some wrangling about claims of authority that was probably useful and necessary, but wasn’t particularly fun.

I also began to notice weaknesses in the processes that I helped shape: We could really use some more automation for showing proposal statuses, notifying people when they have to act, and nudging them when they don’t. The whole silence-is-assent approach is good for throughput, but not necessarily great for quality, and maybe the committee members need to be pushed more firmly to engage with each proposal. Like GHC itself, the committee processes deserve continuous refinement and refactoring, and since I could not muster the motivation to change my now well-trod secretarial ways, it was time for me to step down.

Luckily, Adam Gundry volunteered to take over, and that makes me feel much less bad for quitting. Thanks for that!

And although for my day job I am now enjoying a language that has many of the things out of the box that for Haskell are still only language extensions or even just future proposals (dependent types, BlockArguments, do notation with (← foo) expressions and 💜 Unicode), I’m still around, hosting the Haskell Interlude Podcast, writing on this blog and hanging out at ZuriHac etc.

The Haskell Interlude Podcast

Published 2023-12-22 in sections English, Haskell.

It was pointed out to me that I have not blogged about this, so better now than never:

Since 2021 I have been – together with four other hosts – producing a regular podcast about Haskell, the Haskell Interlude. Roughly every two weeks, two of us interview someone from the Haskell community, and we chat for approximately an hour about how they came to Haskell, what they are doing with it, why they are doing it and what else is on their mind. Sometimes we talk to very famous people, like Simon Peyton Jones, and sometimes to people who maybe should be famous, but aren’t quite yet.

For most episodes we also have a transcript, so you can read the interviews instead, if you prefer, and you should find the podcast on most podcast apps as well. I do not know how reliable these statistics are, but supposedly we regularly have around 1300 listeners. We don’t get much feedback, however, so if you like the show, or dislike it, or have feedback, let us know (for example on the Haskell Discourse, which has a thread for each episode).

At the time of writing, we have released 40 episodes. For the benefit of my (likely hypothetical) fans, or those who want to train an AI voice model for nefarious purposes, here is the list of episodes co-hosted by me:

Can’t decide where to start? The one with Ryan Trinkle might be my favorite.

Thanks to the Haskell Foundation and its sponsors for supporting this podcast (hosting, editing, transcription).

Joining the Lean FRO

Published 2023-11-01 in sections English, Lean.

Tomorrow is going to be a new first day in a new job for me: I am joining the Lean FRO, and I’m excited.

What is Lean?

Lean is the new kid on the block of theorem provers.

It’s a pure functional programming language (like Haskell, with and on which I have worked a lot), but it’s dependently typed (which Haskell may be evolving to be as well, but rather slowly and carefully). It has a refreshing syntax, built on top of a rather good (I have been told, not an expert here) macro system.

As a dependently typed programming language, it is also a theorem prover, or proof assistant, and there already exists a lively community of mathematicians who have started to formalize mathematics in a coherent library, creatively called mathlib.
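For a small taste of what stating and proving a theorem looks like in Lean, here is a minimal sketch of mine (not taken from mathlib, and mathlib proves this and far more, in much greater generality):

```lean
-- A tiny Lean 4 theorem: addition on natural numbers is commutative.
-- The proof simply appeals to the library lemma Nat.add_comm.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```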

What is a FRO?

A Focused Research Organization has the organizational form of a small start up (small team, little overhead, a few years of runway), but its goals and measure for success are not commercial, as funding is provided by donors (in the case of the Lean FRO, the Simons Foundation International, the Alfred P. Sloan Foundation, and Richard Merkin). This allows us to build something that we believe is a contribution for the greater good, even though it’s not (or not yet) commercially interesting enough and does not fit other forms of funding (such as research grants) well. This is a very comfortable situation to be in.

Why am I excited?

To me, working on Lean seems to be the perfect mix: I have been working on language implementation for about a decade now, and always with a preference for functional languages. Add to that my interest in theorem proving, where I have used Isabelle and Coq so far, and played with Agda and others. So technically, clearly up my alley.

Furthermore, the language isn’t too old, and plenty of interesting things are simply still to do, rather than tried before. The ecosystem is still evolving, so there is a good chance to have some impact.

On the other hand, the language isn’t too young either. It is no longer an open question whether we will have users: we have them already, they hang out on Zulip, and if I improve something, someone is likely going to be happy about it, which is great. And the community seems to be welcoming and full of nice people.

Finally, this library of mathematics that these users are building is itself an amazing artifact: Lots of math in a consistent, machine-readable, maintained, documented, checked form! With a little bit of optimism I can imagine this changing how math research and education will be done in the future. It could be for math what Wikipedia is for encyclopedic knowledge and OpenStreetMap for maps – and the thought of facilitating that excites me.

With this new job I find that when I am telling friends and colleagues about it, I do not hesitate or hedge when asked why I am doing this. This is a good sign.

What will I be doing?

We’ll see what main tasks I’ll get to tackle initially, but knowing myself, I expect I’ll get broadly involved.

To get up to speed I started playing around with a few things already, and for example created Loogle, a Mathlib search engine inspired by Haskell’s Hoogle, including a Zulip bot integration. This seems to be useful and quite well received, so I’ll continue maintaining that.

Expect more about this and other contributions here in the future.

Squash your Github PRs with one click

Published 2023-10-29 in sections Digital World, English.

TL;DR: Squash your PRs with one click at https://squasher.nomeata.de/.

Very recently I got this response from the project maintainer at a pull request I contributed: “Thanks, approved, please squash so that I can merge.”

It’s nice that my contribution can go in, but why did the maintainer not just press the “Squash and merge” button, instead of adding this unnecessary roundtrip to the process? Anyway, maintainers make the rules, so I play by them. But unlike the maintainer, who can squash-and-merge with just one click, squashing the PR’s branch is surprisingly laborious: Github does not allow you to do that via the web UI (and hence on mobile), and it seems you are expected to go to your computer and juggle with git rebase --interactive.
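For comparison, here is roughly the manual dance that Squasher automates, sketched in a throwaway repository (branch and file names here are made up; on a real PR you would fetch your fork and finish with a force-push instead):

```shell
# Demo of manually squashing a branch (in a throwaway repo).
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git config user.email you@example.com && git config user.name You
echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -qb pr-branch
echo one > file.txt && git commit -qam "first try"
echo two > file.txt && git commit -qam "fixup"

# The actual squash: un-commit everything since branching off, then recommit once.
git reset --soft "$(git merge-base HEAD main)"
git commit -qm "PR title" -m "PR body"
git rev-list --count main..HEAD   # prints 1: the branch is now a single commit
# on a real PR, finally: git push --force-with-lease
```

Note the `--soft` reset: it rewinds HEAD to the merge base while keeping the final tree staged, so the single new commit contains exactly the branch’s combined changes.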

I found this rather annoying, so I created Squasher, a simple service that will squash your branch for you. There is no configuration; just paste the PR URL. It will use the PR title and body as the commit message (which is obviously the right way™), and create the commit in your name:

Squasher in action

If you find this useful, or found it to be buggy, let me know. The code is at https://github.com/nomeata/squasher if you are curious about it.

Left recursive parser combinators via sharing

Published 2023-09-10 in sections English, Haskell.

At this year’s ICFP in Seattle I gave a talk about my rec-def Haskell library, which I have blogged about before here. While my functional pearl paper focuses on a concrete use-case and the tricks of the implementation, in my talk I put the emphasis on the high-level idea: it behooves a declarative lazy functional language like Haskell that recursive equations just work whenever they describe a (unique) solution. Like in the paper, I used equations between sets as the running example, and only conjectured that it should also work for other domains, in particular parser combinators.

Naturally, someone called my bluff and asked if I actually tried it. I had not, but I should have, because it works nicely and is actually more straight-forward than with sets. I wrote up a prototype and showed it off a few days later as a lightning talk at Haskell Symposium; here is the write up that goes along with that.

Parser combinators

Parser combinators are libraries that provide little functions (combinators) that you compose to define your parser directly in your programming language, as opposed to using external tools that read some grammar description and generate parser code. They are quite popular in Haskell (e.g. parsec, attoparsec, megaparsec).

Let us define a little parser that recognizes sequences of as:

ghci> let aaa = tok 'a' *> aaa <|> pure ()
ghci> parse aaa "aaaa"
Just ()
ghci> parse aaa "aabaa"
Nothing

Left-recursion

This works nicely, but just because we were lucky: We wrote the parser to recurse on the right (of the *>), and this happens to work. If we put the recursive call first, it doesn’t anymore:

ghci> let aaa = aaa <* tok 'a' <|> pure ()
ghci> parse aaa "aaaa"
^CInterrupted.

This is a well-known problem (see for example Nicolas Wu’s overview paper), all the common parser combinator libraries cannot handle it and the usual advice is to refactor your grammar to avoid left recursion.

But there are some libraries that can handle left recursion, at least with a little help from the programmer. I found two variations:

  • The library provides an explicit fix point combinator, and as long as that is used, left-recursion works. This is for example described by Frost, Hafiz and Callaghan, and (of course) Oleg Kiselyov has an implementation of this too.

  • The library expects explicit labels on recursive productions, so that the library can recognize left-recursion. I found an implementation of this idea in the Agda.Utils.Parser.MemoisedCPS module in the Agda code, the gll library seems to follow this style and Jaro discusses it as well.

I took the module from the Agda source and simplified a bit for the purposes of this demonstration (Parser.hs). Indeed, I can make the left-recursive grammar work:

ghci> let aaa = memoise ":-)" $ aaa <* tok 'a' <|> pure ()
ghci> parse aaa "aaaa"
Just ()
ghci> parse aaa "aabaa"
Nothing

It does not matter what I pass to memoise, as long as I do not pass the same key when memoising a different parser.

For reference, an excerpt of the the API of Parser:

data Parser k tok a -- k is type of keys, tok type of tokens (e.g. Char)
instance Functor (Parser k tok)
instance Applicative (Parser k tok)
instance Alternative (Parser k tok)
instance Monad (Parser k tok)
parse :: Parser k tok a -> [tok] -> Maybe a
sat :: (tok -> Bool) -> Parser k tok tok
tok :: Eq tok => tok -> Parser k tok tok
memoise :: Ord k => k -> Parser k tok a -> Parser k tok a

Left-recursion through sharing

To follow the agenda set out in my talk, I now want to wrap that parser in a way that relieves me from having to insert the calls to memoise. To start, I import that parser qualified, define a newtype around it, and start lifting some of the functions:

import qualified Parser as P

newtype Parser tok a = MkP { unP :: P.Parser Unique tok a }

parse :: Parser tok a -> [tok] -> Maybe a
parse (MkP p) = P.parse p

sat :: Typeable tok => (tok -> Bool) -> Parser tok tok
sat p = MkP (P.sat p)

tok :: Eq tok => tok -> Parser tok tok
tok t = MkP (P.tok t)

So far, nothing interesting had to happen, because so far I cannot build recursive parsers. The first interesting combinator that allows me to do that is <*> from the Applicative class, so I should use memoise there. The question is: Where does the unique key come from?

Proprioception

As with the rec-def library, pure code won’t do, and I have to get my hands dirty: I really want a fresh unique label out of thin air. To that end, I define the following combinator, with naming aided by Richard Eisenberg:

propriocept :: (Unique -> a) -> a
propriocept f = unsafePerformIO $ f <$> newUnique

A thunk defined with propriocept will know about its own identity, and will be able to tell itself apart from other such thunks. This gives us a form of observable sharing, precisely what we need. But before we return to our parser combinators, let us briefly explore this combinator.

Using propriocept I can define an operation cons :: [Int] -> [Int] that records (the hash of) this Unique in the list:

ghci> let cons xs = propriocept (\x -> hashUnique x : xs)
ghci> :t cons
cons :: [Int] -> [Int]

This lets us see the identity of a list cell, that is, of the concrete object in memory.

Naturally, if we construct a finite list, each list cell is different:

ghci> cons (cons (cons []))
[1,2,3]

And if we do that again, we see that fresh list cells are allocated:

ghci> cons (cons (cons []))
[4,5,6]

We can create an infinite list. If we construct it without sharing – using, say, a definition like acyclic n = cons (acyclic (n + 1)), where each recursive call builds a fresh thunk – every cell is separate:

ghci> take 20 (acyclic 0)
[7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]

but if we tie the knot using sharing, all the cells in the list are actually the same:

ghci> let cyclic = cons cyclic
ghci> take 20 cyclic
[27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27]

We can achieve the same using fix from Data.Function:

ghci> import Data.Function
ghci> take 20 (fix cons)
[28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28]

I explore these heap structures more visually in a series of screencasts.

So with propriocept we can distinguish different heap objects, and also recognize when we come across the same heap object again.

Left-recursion through sharing (cont.)

With that we return to our parser. We define a smart constructor for the new Parser that passes the unique from propriocept to the underlying parser’s memoise function:

withMemo :: P.Parser Unique tok a -> Parser tok a
withMemo p = propriocept $ \u -> MkP $ P.memoise u p

If we now use this in the definition of all possibly recursive parsers, then the necessary calls to memoise will be in place:

instance Functor (Parser tok) where
  fmap f p = withMemo (fmap f (unP p))

instance Applicative (Parser tok) where
  pure x = MkP (pure x)
  p1 <*> p2 = withMemo (unP p1 <*> unP p2)

instance Alternative (Parser tok) where
  empty = MkP empty
  p1 <|> p2 = withMemo (unP p1 <|> unP p2)

instance Monad (Parser tok) where
  return = pure
  p1 >>= f = withMemo $ unP p1 >>= unP . f

And indeed, it works (see RParser.hs for the full code):

ghci> let aaa = aaa <* tok 'a' <|> pure ()
ghci> parse aaa "aaaa"
Just ()
ghci> parse aaa "aabaa"
Nothing

A larger example

Let us try this on a larger example, and parse (simple) BNF grammars. Here is a data type describing them

type Ident = String
type RuleRhs = [Seq]
type Seq = [Atom]
data Atom = Lit String | NonTerm Ident deriving Show
type Rule = (Ident, RuleRhs)
type BNF = [Rule]

For the concrete syntax, I’d like to be able to parse something like

numExp :: String
numExp = unlines
    [ "term   := sum;"
    , "pdigit := '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9';"
    , "digit  := '0' | pdigit;"
    , "pnum   := pdigit | pnum digit;"
    , "num    := '0' | pnum;"
    , "prod   := atom | atom '*' prod;"
    , "sum    := prod | prod '+' sum;"
    , "atom   := num | '(' term ')';"
    ]

so here is a possible parser; mostly straight-forward use of parser combinators:

type P = Parser Char

snoc :: [a] -> a -> [a]
snoc xs x = xs ++ [x]

l :: P a -> P a
l p = p <|> l p <* sat isSpace
quote :: P Char
quote = tok '\''
quoted :: P a -> P a
quoted p = quote *> p <* quote
str :: P String
str = some (sat (not . (== '\'')))
ident :: P Ident
ident = some (sat (\c -> isAlphaNum c && isAscii c))
atom :: P Atom
atom = Lit     <$> l (quoted str)
   <|> NonTerm <$> l ident
eps :: P ()
eps = void $ l (tok 'ε')
sep :: P ()
sep = void $ some (sat isSpace)
sq :: P Seq
sq = []   <$ eps
 <|> snoc <$> sq <* sep <*> atom
 <|> pure <$> atom
ruleRhs :: P RuleRhs
ruleRhs = pure <$> sq
      <|> snoc <$> ruleRhs <* l (tok '|') <*> sq
rule :: P Rule
rule = (,) <$> l ident <* l (tok ':' *> tok '=') <*> ruleRhs <* l (tok ';')
bnf :: P BNF
bnf = pure <$> rule
  <|> snoc <$> bnf <*> rule

I somewhat sillily used snoc rather than (:) to build my lists, just so that I can show off all the left-recursion in this grammar.

Sharing is tricky

Let’s try it:

ghci> parse bnf numExp
^CInterrupted.

What a pity, it does not work! What went wrong?

The underlying library can handle left-recursion if it can recognize it by seeing a memoise label passed again. This works fine in all the places where we re-use a parser definition (e.g. in bnf), but it really requires that values are shared!

If we look carefully at our definition of l (which parses a lexeme, i.e. something possibly followed by whitespace), then it recurses via a fresh function call, and the program will keep expanding the definition – just like the acyclic above:

l :: P a -> P a
l p = p <|> l p <* sat isSpace

The fix (sic!) is to make sure that the recursive call is using the parser we are currently defining, which we can easily do with a local definition:

l :: P a -> P a
l p = p'
  where p' = p <|> p' <* sat isSpace

With this little fix, the parser can parse the example grammar:

ghci> parse bnf numExp
Just [("term",[[NonTerm "sum"]]),("pdigit",[[Lit "1"],…

Going meta

The main demonstration is over, but since we now already have a parser for grammar descriptions at hand, let’s go a bit further and dynamically construct a parser from such a description. The parser should only accept strings according to that grammar, and return a parse tree annotated with the non-terminals used:

interp :: BNF -> P String
interp bnf = parsers M.! start
  where
    start :: Ident
    start = fst (head bnf)

    parsers :: M.Map Ident (P String)
    parsers = M.fromList [ (i, parseRule i rhs) | (i, rhs) <- bnf ]

    parseRule :: Ident -> RuleRhs -> P String
    parseRule ident rhs = trace <$> asum (map parseSeq rhs)
      where trace s = ident ++ "(" ++ s ++ ")"

    parseSeq :: Seq -> P String
    parseSeq = fmap concat . traverse parseAtom

    parseAtom :: Atom -> P String
    parseAtom (Lit s) = traverse tok s
    parseAtom (NonTerm i) = parsers M.! i

Let’s see it in action (full code in BNFEx.hs):

ghci> Just bnfp = parse bnf numExp
ghci> :t bnfp
bnfp :: BNF
ghci> parse (interp bnfp) "12+3*4"
Just "term(sum(prod(atom(num(pnum(pnum(pdigit(1))digit(pdigit(2))))))+sum(prod(atom(num(pnum(pdigit(3))))*prod(atom(num(pnum(pdigit(4)))))))))"
ghci> parse (interp bnfp) "12+3*4+"
Nothing

It is worth noting that the numExp grammar is also left-recursive, so implementing interp with a conventional parser combinator library would not have worked. But thanks to our propriocept trick, it does! Again, the sharing is important; in the code above it is the map parsers that is defined in terms of itself, and will ensure that the left-recursive productions will work.

Closing thoughts

I am using unsafePerformIO, so I need to justify its use. Clearly, propriocept is not a pure function, and its type is a lie. In general, using it will break the nice equational properties of Haskell, as we have seen in our experiments with cons.

In the case of our parser library, however, we use it in specific ways, namely to feed a fresh name to memoise. Assuming the underlying parser library’s behavior does not observably depend on where and with which key memoise is used, this results in a properly pure interface, and all is well again. (NB: I did not investigate if this assumption actually holds for the parser library used here, or if it may for example affect the order of parse trees returned.)

I also expect that this implementation, which will memoise every parser involved, will be rather slow. It seems plausible to analyze the graph structure and figure out which memoise calls are actually needed to break left-recursion (at least if we drop the Monad instance or always memoise >>=).

If you liked this post, you might enjoy reading the paper about rec-def, watch one of my talks about it (MuniHac, BOBKonf, ICFP23; the presentation evolved over time), or if you just want to see more about how things are laid out on Haskell’s heap, go to my screen casts exploring the Haskell heap.

Generating bibtex bibliographies from DOIs via DBLP

Published 2023-07-12 in sections English, Digital World.

I sometimes write papers, and part of paper writing is assembling the bibliography. In my case, this is done using BibTeX. So when I need to add another citation, I have to find suitable data in BibTeX format.

Often I copy snippets from .bib files from earlier papers.

Or I search for the paper on DBLP, which in my experience has the highest-quality BibTeX entries and the best coverage of computer science related publications, copy the entry to my .bib file, and change the key to whatever I want to refer to the paper by.

But in these days of pervasive use of DOIs (digital object identifiers) for almost all publications, manually keeping the data in BibTeX files seems outdated. Instead I’d rather record just the two pieces of data I care about: the key that I want to use for citation, and the DOI. The rest I do not want to be bothered with.

So I wrote a small script that takes a .yaml file like

entries:
  unsafePerformIO: 10.1007/10722298_3
  dejafu: 10.1145/2804302.2804306
  runST: 10.1145/3158152
  quickcheck: 10.1145/351240.351266
  optimiser: 10.1016/S0167-6423(97)00029-4
  sabry: 10.1017/s0956796897002943
  concurrent: 10.1145/237721.237794
  launchbury: 10.1145/158511.158618
  datafun: 10.1145/2951913.2951948
  observable-sharing: 10.1007/3-540-46674-6_7
  kildall-73: 10.1145/512927.512945
  kam-ullman-76: 10.1145/321921.321938
  spygame: 10.1145/3371101
  cocaml: 10.3233/FI-2017-1473
  secrets: 10.1017/S0956796802004331
  modular: 10.1017/S0956796817000016
  longley: 10.1145/317636.317775
  nievergelt: 10.1145/800152.804906
  runST2: 10.1145/3527326
  polakow: 10.1145/2804302.2804309
  lvars: 10.1145/2502323.2502326
  typesafe-sharing: 10.1145/1596638.1596653
  pure-functional: 10.1007/978-3-642-14162-1_17
  clairvoyant: 10.1145/3341718
subs:
  - replace: Peyton Jones
    with: '{Peyton Jones}'

and turns it into a nice .bib file:

$ ./doi2bib.py < doibib.yaml > dblp.bib
$ head dblp.bib
@inproceedings{unsafePerformIO,
  author       = {Simon L. {Peyton Jones} and
                  Simon Marlow and
                  Conal Elliott},
  editor       = {Pieter W. M. Koopman and
                  Chris Clack},
  title        = {Stretching the Storage Manager: Weak Pointers and Stable Names in
                  Haskell},
  booktitle    = {Implementation of Functional Languages, 11th International Workshop,
                  IFL'99, Lochem, The Netherlands, September 7-10, 1999, Selected Papers},

The last bit allows me to do some fine-tuning of the file, because unfortunately, not even DBLP BibTeX files are perfect, for example in the presence of two family names.
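Why the braces matter: BibTeX parses each name in an author field as “First von Last”, so it reads “Simon L. Peyton Jones” as the first names “Simon L. Peyton” with the family name “Jones”. Bracing the family name keeps it together; a sketch of the two readings:

```bibtex
author = {Simon L. Peyton Jones}
author = {Simon L. {Peyton Jones}}
```

The first line yields the family name “Jones”, the second the intended “Peyton Jones” – which affects sorting and abbreviated name styles.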

Now I have less moving parts to worry about, and a more consistent bibliography.

The script is rather small, so I’ll just share it here:

#!/usr/bin/env python3

import sys
import yaml
import requests
import requests_cache
import re

requests_cache.install_cache(backend='sqlite')

data = yaml.safe_load(sys.stdin)

for key, doi in data['entries'].items():
    bib = requests.get(f"https://dblp.org/doi/{doi}.bib").text
    bib = re.sub('{DBLP.*,', '{' + key + ',', bib)
    for subs in data['subs']:
        bib = re.sub(subs['replace'], subs['with'], bib)
    print(bib)

There are similar projects out there, e.g. dblpbibtex in C++ and dblpbib in Ruby. These allow direct use of \cite{DBLP:rec/conf/isit/BreitnerS20} in Latex, which is also nice, but for now I like to choose more speaking citation keys myself.

ICFP Pearl preprint on rec-def

Published 2023-06-22 in sections English, Haskell.

I submitted a Functional Pearl to this year’s ICFP and it got accepted!

It is about the idea of using Haskell’s inherent ability to define recursive equations, and use them for more than just functions and lazy data structures. I blogged about this before (introducing the idea, behind the scenes, applications to program analysis, graph algorithms and minesweeper), but hopefully the paper brings out the idea even more clearly. The constructive feedback from a few friendly readers (Claudio, Sebastian, and also the anonymous reviewers) certainly improved the paper.

Abstract

Haskell’s laziness allows the programmer to solve some problems naturally and declaratively via recursive equations. Unfortunately, if the input is “too recursive”, these very elegant idioms can fall into the dreaded black hole, and the programmer has to resort to more pedestrian approaches.

It does not have to be that way: We built variants of common pure data structures (Booleans, sets) where recursive definitions are productive. Internally, the infamous unsafePerformIO is at work, but the user only sees a beautiful and pure API, and their pretty recursive idioms – magically – work again.

If you are curious, please have a look at the current version of the paper. Any feedback is welcome; even more if it comes before July 11, because then I can include it in the camera ready version.


There are still some open questions around this work. What bothers me maybe most is the lack of a denotational semantics that unifies the partial order underlying the Haskell fragment, and the partial order underlying the domain of the embedded equations.

The crux of the problem is maybe best captured by this question:

Imagine an untyped lambda calculus with constructors, lazy evaluation, and an operation rseq that recursively evaluates constructors, but terminates in the presence of cycles. So for example

rseq (let x    = 1 :: x    in x   ) ≡ ()
rseq (let x () = 1 :: x () in x ()) ≡ ⊥

In this language, knot tying is observable. What is the “nicest” denotational semantics?

Update: I made some progress via a discussion on the Haskell Discourse and started some rough notes on a denotational semantics.

The curious case of the half-half Bitcoin ECDSA nonces

Published 2023-06-07 in sections English, Digital World.

This is the week of the Gulaschprogrammiernacht, a yearly Chaos Computer Club event in Karlsruhe, so it was exactly a year ago that I sat in my AirBnB room and went over the slides for my talk “Lattice Attacks on Ethereum, Bitcoin, and HTTPS” that I would give there.

It reports on research done with Nadia Heninger while I was in Philadelphia, and I really liked giving that talk: At some point we look at some rather odd signatures we found on the bitcoin blockchain, and part of the signature (the “nonce”) happens to share some bytes with the secret key. A clear case of some buffer overlap in a memory-unsafe language, which I, as a fan of languages like Haskell, am very happy to sneer at!

A sneery slide

But last year, as I was going over the slides I looked at the raw data again for some reason, and I found that we overlooked something: Not only was the upper half of the nonce equal to the lower half of the secret key, but the lower half of the nonce was also equal to the upper half of the message hash!

This now looks much less like an accident to me, and more like an (overly) simple form of deterministic nonce creation… so much for my nice anecdote. (I still used the anecdote in my talk, followed up with an “actually”.)

When I told Nadia about this, she got curious as well, and quickly saw that from a signature with such a nonce, one can rather easily extract the secret key. So together with her student Dylan Rowe, we implemented this analysis and searched the bitcoin blockchain for more instances of such signatures. We did find a few, and were even able to trace them back to a somewhat infamous bitcoin activist going under the pseudonym Amaclin.

This research and sleuthing turned into another paper, “The curious case of the half-half Bitcoin ECDSA nonces”, to be presented at AfricaCrypt 2023. Enjoy!

Giving back to OPLSS

Published 2023-06-04 in sections English, Haskell.

Nine years ago, when I was a PhD student, I attended the Oregon Programming Language Summer School in Eugene. I had a great time and learned a lot.

The OPLSS’14 full image

The things I learned there, and the people I met there, also led to me graduating, which led to me becoming a PostDoc at UPenn, which led to me later joining DFINITY to implement the Motoko programming language and help design and specify the public interface of their “Internet Computer”, including the response certification (video).

So when the ICDevs non-profit offered a development bounty for a Motoko library implementing the merkle trees involved in certification, it sounded like a fun little coding task, and I completed it; likely with less effort than it would have taken someone who first had to get into these topics.

The bounty was quite generous, at US$ 10k, and I was too vain to “just” have it donated to some large charity, as I recently did with a few coding and consulting gigs, so I looked for something more personal. The ICDevs guys and I agreed to donate the money to this year’s OPLSS, where I heard it can cover the cost of about 8 students, and hopefully helps the PL cause.

(You will not find us listed as sponsors because, for some reason, a “donation” instead of a “sponsorship” comes with fewer strings attached for the organizers.)

More thoughts on a bootstrappable GHC

Published 2023-04-26 in sections English, Haskell.

The bootstrappable builds project tries to find ways of building all our software from source, without relying on binary artifacts. A noble goal, and one that is often thwarted by languages with self-hosting compilers, like GHC: In order to build GHC, you need GHC. A Pull Request against nixpkgs, adding first steps of the bootstrapping pipeline, reminded me of the issue with GHC, about which I have noted down some thoughts before, and I played around a bit more.

The most promising attempt to bootstrap GHC was done by rekado in 2017. He observed that Hugs is maybe the most recently maintained bootstrappable Haskell compiler (bootstrappable since it is written in C), but noticed that “it cannot deal with mutually recursive module dependencies, which is a feature that even the earliest versions of GHC rely on. This means that running a variant of GHC inside of Hugs is not going to work without major changes.” He then tried to bootstrap another very old Haskell compiler (nhc) with Hugs, and made good but incomplete progress.

This made me wonder: What if Hugs supported mutually recursive modules? Would that make a big difference? Anthony Clayden keeps advocating Hugs as a viable Haskell implementation, so if that was the main blocker, then adding support for it to Hugs (at least in a compile-the-strongly-connected-component-as-one-unit mode) is probably not too hard, and worthwhile?

All of GHC in one file?

That reminded me of a situation where I had to combine multiple Haskell modules into one before: For my talk “Lock-step simulation is child’s play” I wrote a multi-player game, a simulation environment for it, and a presentation tool around it, all in the CodeWorld programming environment, which supports only a single module. So I hacked up a small tool, hs-all-in-one, that takes multiple Haskell modules and combines them into one, mangling the names to avoid name clashes.
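The mangling idea itself fits in a few lines; this is a hypothetical illustration of the scheme, not the actual hs-all-in-one code:

```haskell
-- Hypothetical sketch of the mangling scheme: prefix each top-level
-- name with its originating module, so that e.g. `insert` from two
-- different modules no longer clashes in the combined module.
mangle :: String -> String -> String
mangle modName name = map dotToUnderscore modName ++ "_" ++ name
  where
    dotToUnderscore '.' = '_'
    dotToUnderscore c   = c

main :: IO ()
main = putStrLn (mangle "Data.Map.Strict" "insert")  -- prints Data_Map_Strict_insert
```

The real tool of course has to do this via proper name resolution, not textually, which is where the libraries below come in.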

This made me wonder: Can I turn all of GHC into one module, and compile that?

At this point I have probably left the direct path towards bootstrapping, but I kinda got hooked.

  1. Using GHC’s hadrian/ghci tool, I got it to produce the necessary generated files (e.g. from happy grammars) and spit out the list of modules that make up GHC, which I could feed to hs-all-in-one.

  2. It uses haskell-src-exts for parsing, and it was almost able to parse all of that: It has a different opinion about how MultiWayIf should be indented and whether EmptyCase needs {}, and has issues pretty-printing some promoted values, but otherwise the round-tripping worked fine, and I was able to generate a large file (680,000 loc, 41 MB) that passes GHC’s parser.

  3. It also uses haskell-names to resolve names.

    This library is less up-to-date with various Haskell features, so I added support for renaming in some pragmas (ANN, SPECIALIZE), pattern signatures etc.

    For my previous use-case I could just combine all the imports, knowing that I would not introduce conflicts. For GHC, this is far from true: Some modules import Data.Map.Strict, others Data.Map.Lazy, and yet others introduce names that clash with stuff imported from the prelude… so I had to change the tool to fully qualify all imported values. This isn’t so bad: I can do that using haskell-names, if I somehow know what all the modules in base, containers, transformers and array export.

    The haskell-names library itself comes with a hard-coded database of base exports, but it is incomplete and doesn’t help me with, say, containers.

    I then wrote a little parser for the .txt files that haddock produces for the benefit of hoogle, and that are conveniently installed alongside the packages (at least on nix). This would have been great, if these files didn’t simply omit all re-exported entities! I added some manual hacks (Data.Map has the same exports as Data.IntMap; Prelude exports all entities as known by haskell-names, but for those that are also exported from Data.List, use the symbol from there…)

    I played this game of whack-a-mole for a while, solving many of the problems that GHC’s renamer reports, but eventually stopped to write this blog post. I am fairly confident that this could be pulled through, though.
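For flavour, the hoogle-style .txt files are line-oriented, and the core of such a parser can be sketched like this (a much-simplified illustration, ignoring classes, types, instances and, as noted above, re-exports):

```haskell
import Data.List (isPrefixOf)

-- Sketch: group lines under "module" headers, and take every line of
-- the form "name :: type" as an exported value of that module.
parseHoogle :: String -> [(String, [String])]
parseHoogle = go . lines
  where
    go [] = []
    go (l : ls) = case words l of
      ["module", m] ->
        let (body, rest) = break ("module " `isPrefixOf`) ls
        in  (m, [n | b <- body, (n : "::" : _) <- [words b]]) : go rest
      _ -> go ls  -- skip @package headers, comments, etc.

sample :: String
sample = unlines
  [ "@package containers"
  , "module Data.Map"
  , "insert :: Ord k => k -> a -> Map k a -> Map k a"
  , "-- | a comment"
  , "module Data.Set"
  , "member :: Ord a => a -> Set a -> Bool"
  ]

main :: IO ()
main = print (parseHoogle sample)
-- prints [("Data.Map",["insert"]),("Data.Set",["member"])]
```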

Back to bootstrapping

So what if we could pull this through? We’d have a very large code file that GHC may be able to compile to produce a ghc binary without exhausting my RAM. But that doesn’t help with bootstrapping yet.

If lack of support for recursive modules is all that Hugs is missing, we’d be done indeed. But quite the contrary: it is probably the least of our worries, given that contemporary GHC uses many, many other features not supported by Hugs.

Some of them are syntactic and can easily be rewritten to more normal Haskell in a preprocessing step (e.g. MultiWayIf).

Others are deep and hard (GADTs, Pattern synonyms, Type Families), and prohibit attempting to compile a current version of GHC (even if it’s all one module) with Hugs. So one would certainly have to go back in time and find a version of GHC that is not yet using all these features. For example, GADTs were first used in GHC’s code base by Simon Marlow in 2011, so this suggests going back to at least GHC 7.0.4, maybe earlier.

Still, being able to mangle the source code before passing it to Hugs is probably a useful thing. This raises the question of whether Hugs can compile such a tool; in particular, whether it is capable of compiling haskell-src-exts, which I am not too optimistic about either. Has someone checked this already?

So one plan of attack could be

  1. Identify an old version of GHC that

    • One can use to bootstrap subsequent versions until today.
    • Is old enough to use as few features not supported by Hugs as possible.
    • Is still new enough so that one can obtain a compatible toolchain.
  2. Wrangle the build system to tell you which files to compile, with which preprocessor flags etc.

  3. Bootstrap all pre-processing tools used by GHC (cpphs or plain cpp, happy, alex).

  4. For every language feature not supported by Hugs, either

    • Implement it in Hugs,
    • Manually edit the source code to avoid compiling the problematic code, if it is optional (e.g. in an optimization pass)
    • Rewrite the problematic code
    • Write a pre-processing tool (like the one above) that compiles the feature away
  5. Similarly, since Hugs probably ships a base that is different from what GHC, or the libraries used by GHC, expect, either adjust Hugs’ base or modify the GHC code that uses it.

My actual plan, though, for now is to throw these thoughts out, maybe make some noise on Discourse, Mastodon, Twitter and lobste.rs, and then let it sit and hope someone else will pick it up.