
2014-09-08T23:04:54+02:00

The first homemade Tip-Toi books

For the last half year, things were very quiet around our Tip-Toi project. The first goal had been reached back then: I was able to equip existing books with new audio and make my nephew happy. And so, for a long while, nothing happened.

Until a few days ago, when a certain “Pronwan” suddenly got in touch and told me that he had successfully designed complete Tip-Toi books (well, pages) for his children, printed them, and equipped them with his own audio files. He has documented the whole process in a video tutorial. The bottom line: printing it yourself works, if you make the images somewhat paler (black dots on a black background do not work well) and, if necessary, first print the image on a color laser printer and then overlay the dot matrix with a black-and-white printer.

The video also features my tttool, which – although really intended for myself and for people who want to understand the GME file format better – comes off quite well. It is also worth mentioning that “Pronwan” uses Windows, and presumably the exe file that I precompiled under Wine – Haskell’s platform independence pays off.

All in all, this seems quite feasible for an interested hobbyist. I am curious what else is to come. Maybe there will soon be a website where homemade Tip-Toi books are collected and exchanged! The latest news can always be found in the Microcontroller forum.

2014-09-06T22:46:22+02:00

ICFP 2014

Another written-on-the-journey-back blog post; this time from the Frankfurt Airport train station – my flight was delayed (had I known that, I could have watched the remaining Lightning Talks), and so was my train, but despite five minutes of running through the airport, it was just not enough. And now that the free 30 minutes of railway station internet are used up, I have nothing else to do but blog...

Last week I was attending ICFP 2014 in Gothenburg, followed by the Haskell Symposium and the Haskell Implementors Workshop. The justification to attend was the paper on Safe Coercions (joint work with Richard Eisenberg, Simon Peyton Jones and Stephanie Weirich), although Richard got to give the talk, and did so quite well. So I got to leisurely attend the talks, while fighting the jet lag that I brought from Portland.

There were – as expected – quite a few interesting talks. Among them the first keynote, by Kathleen Fisher, on the need for formal methods in cars, toy quadcopters and unmanned battle helicopters, which made me conclude that my Isabelle skills might eventually become relevant in practical applications. And did you know that if someone gains access to your car’s electronics, they can make the seat belt pull you back hard?

Stephanie Weirich’s keynote (and the subsequent related talks by Jan Stolarek and Richard Eisenberg) on what a dependently typed Haskell would look like and what we could use it for was mouth-watering. I am a bit worried that Haskell will become a bit obscure for newcomers and people who simply don’t want to think about types too much; on the other hand, it seems that Haskell as we know it will always remain, just as a subset of the language.

Similarly interesting were refinement types for Haskell (talks by Niki Vazou and by Eric Seidel), in the form of Liquid Types, something that I had not paid attention to yet. It seems to be a good way to get higher assurance in Haskell code.

Finally, the Haskell Implementors Workshop had a truckload of exciting developments in and around Haskell: More on GHCJS, Partial type signatures, interactive type-driven development like we know it from Agda, the new Haskell module system and amazing user-defined error messages – the latter unfortunately only in Helium, at least for now.

But it’s not the case that I only sat and listened. During the Haskell Implementors Workshop I gave a talk, “Contributing to GHC”, with a live demo of me fixing a (tiny) bug in GHC, with the aim of getting more people to hack on GHC (slides, video). The main message here is that it is not that big of a deal. And despite me not actually saying much of interest in the talk, I got good feedback afterwards. So if it now actually motivates someone to contribute to GHC, I’m even happier.

And then there is of course the Hallway Track. I discussed the issues with fusing a left fold (unfortunately, without finding a great solution). In order to tackle this problem more systematically, John Wiegley and I created the beginnings of a “List Fusion Lab”, i.e. a bunch of list benchmarks and the ability to compare various implementations (e.g. with different RULES) and various compilers. With that we can hopefully better assess the effect of a change to the list functions.

PS: The next train is now also delayed, so I’ll likely miss my tram and arrive home even later...

PPS: I really have to update the 10-year-old picture on my homepage (or redesign the page completely). Quite a few people knew my name, but expected someone with shoulder-length hair...

PPPS: Haskell is really becoming mainstream: I just talked to a randomly chosen person (the boy sitting next to me in the train), and he is a Haskell enthusiast, building a structured editor for Haskell together with his brother. And all that as a 12th-grader...

2014-08-30T15:10:58-07:00

DebConf 14

I’m writing this blog post on the plane from Portland towards Europe (which I now can!), using the remaining battery life after having watched one of the DebConf talks that I missed. (It was the systemd talk, which was good and interesting, but maybe I should have watched one of the power management talks, as my battery is running down faster than it should, I believe.)

I mostly enjoyed this year’s DebConf. I must admit that I did not come very prepared: I had neither something urgent to hack on, nor important things to discuss with the other attendees, so in a way I had a slow start. I also felt a bit out of touch with the project, both personally and technically: In previous DebConfs, I had more interest in many different corners of the project, and also came with more naive enthusiasm. After more than 10 years in the project, I see a few things more realistically and am also more relaxed, and don’t react to “wouldn’t it be cool to have ⟨crazy idea⟩” very easily any more. And these days I mostly focus on Haskell packaging (and related tooling, which sometimes is also relevant and useful to others), which is not very interesting to most others.

But in the end I did get to do some useful hacking, heard a few interesting talks and even got a bit excited: I created a new tool to schedule binNMUs for Haskell packages which is quite generic (configured by just a regular expression), so that it can and will be used by the OCaml team as well, and who knows who else will start using hash-based virtual ABI packages in the future... It runs via a cron job on people.debian.org to produce output for Haskell and for OCaml, based on data pulled via HTTP. If you are a Debian developer and want up-to-date results, log into wuiet.debian.org and run ~nomeata/binNMUs --sql; it then uses the projectb and wanna-build databases directly. Thanks to the ftp team for opening up incoming.debian.org, by the way!
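
To illustrate the core of the scheduling logic, here is a minimal sketch with hypothetical names and simplified types – not the actual tool: a binary package needs a binNMU when one of its dependencies, as selected by the team-specific regular expression, is a hash-based virtual ABI package that no current package provides any more.

import Text.Regex.TDFA ((=~))

-- Hypothetical sketch: pick out the hash-based virtual ABI packages
-- among a binary package's dependencies, as described by a
-- team-specific regular expression.
abiDepsOf :: String -> [String] -> [String]
abiDepsOf regex = filter (=~ regex)

-- A package is due for a binNMU if one of its ABI dependencies is no
-- longer provided by any up-to-date package in the archive.
needsBinNMU :: String -> [String] -> [String] -> Bool
needsBinNMU regex provided deps = any (`notElem` provided) (abiDepsOf regex deps)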

Unsurprisingly, I also gave a talk on Haskell and Debian (slides available). I talked a bit too long and we had too little time for discussion, but in any case not all of the discussion would have fit into 45 minutes. The question of which packages from Hackage should be added to Debian and which should not is still undecided (which means we carry on packaging whatever we happen to want in Debian, for whatever reason). I guess the better our tooling gets (see the next section), the more easily we can support more and more packages.

I am quite excited by and supportive of Enrico’s agenda to remove boilerplate data from the debian/ directories and to rely on autodebianization tools. We have such a tool for Haskell packages, cabal-debian, but it is unofficial, i.e. neither created by us nor fully endorsed. I want to change that, so I got in touch with the upstream maintainer and we want to get it into shape to produce perfect Debian packages, provided the upstream-supplied metadata is perfect. I’d like to see the Debian Haskell Group follow Enrico’s plan to its extreme conclusion, and this way drive innovation in Debian in general. We’ll see how that goes.

Besides all the technical program I enjoyed the obligatory games of Mao and Werewolves. I also got to dance! On Saturday night, I found a small but welcoming Swing-In-The-Park event where I could dance a few steps of Lindy Hop. And on Tuesday night, Vagrant Cascadian took us (well, three of us) to a blues dancing night, which I greatly enjoyed: The style was so improvisation-friendly that, despite having missed the introduction and never having danced blues before, I could jump right in. And in contrast to social dances in Germany, where it is often announced that the girls are also invited to ask the boys, but then it is still mostly the boys who have to ask, here it took only half a minute of standing at the side until I got asked to dance. In retrospect I should have skipped the HP reception and gone there directly...

I’m not heading home at the moment, but will travel directly to Göteborg to attend ICFP 2014. I hope the (usually worse) west-to-east jet lag will not prevent me from enjoying that as much as I could.

2014-08-23T15:54:43-07:00

This blog goes static

After a bit more than 9 years, I am replacing Serendipity, which has been hosting my blog, with a self-made static solution. This means that when you are reading this, my server no longer has to execute some rather large body of untyped code to produce the bytes sent to you. Instead, that happens once in a while on my laptop, and the pages are stored as static files on the server.

I hope to get a little performance boost from this, so that my site can more easily hold up to being mentioned on Hacker News. I also do not want to worry about security issues in Serendipity – static files do not get hacked.

Of course there are downsides to having a static blog. The editing is a bit more annoying: I need to use my laptop (previously I could post from anywhere), and I edit text files instead of using a JavaScript-based WYSIWYG editor (though I was slightly annoyed by that as well). But most importantly, readers cannot comment on static pages. There are cloud-based solutions that integrate commenting via JavaScript on static pages, but I decided to go for something even more low-level: You can comment by writing an e-mail to me, and I’ll put your comment on the page. This has the nice benefit of solving the blog comment spam problem.

The actual implementation of the blog is rather masochistic, as my web page runs on one of those weird obfuscated languages (XSLT). Previously, it consisted of XSLT stylesheets producing makefiles calling XSLT stylesheets. Now it is a bit more self-contained, with one XSLT stylesheet writing out all the various HTML and RSS files.

I managed to import all my old posts and comments thanks to this script by Michael Hamann (I had played around with it some months ago and just spent what felt like an hour finding that script again) and a small Haskell script. Old URLs are rewritten (using mod_rewrite) to the new paths, but feed readers might still be confused by this.

This opens the door to a long-overdue redesign of my webpage. But not today...

2014-07-19T17:13:06+00:00

Good bye GNOME

When I was young...

I have been a user of GNOME for a long time. I must have started using it in either 2000 or 2001, when LinuxTag was in Stuttgart. For some reason I wanted to start using one of the two Desktop Environments available (having used fvwm95 and/or IceWM before, I believe). I approached one of the guys at the GNOME booth and asked the question “Why should I use GNOME over KDE?”, knowing that it is quite a silly question, but unable to come up with a better one. He replied something along the lines of “Because it is part of GNU”, and that was good enough for me. Not that it matters a lot whether I use one or the other, but it was a way to decide it.

Back then GNOME was still version 1.2, with detachable menus and lots of very colorful themes – I first had something with thick yellow borders and then a brushed metal look. Back then, sawfish was the window manager of choice.

I used GNOME for many years. People complained when GNOME 2.0 came out, but I liked the approach they were taking: Simplicity and good defaults are a time saver! I did my bit of customization, such as having my panel vertically on the left edge, and even had a tool running that would react to certain events and make the window manager do stuff, such as removing the title bar and the borders from my terminals – naked terminals are very geeky (I forgot the name of the tool, but surely some will recognize and remember it).

Leaving the path of conformance

In 2009 I got more and more involved in Haskell and stumbled over xmonad, a tiling window manager implemented and configured in Haskell. I found it a user interface that I liked a lot, so I started using it. This was no problem: GNOME happily let me replace the default window manager (metacity) with xmonad and continue working. I even implemented the necessary support in xmonad so that it would spare out the gnome-panel, and so that the pager (which displays the workspaces and windows) would work, and even interact with xmonad.
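
Such a setup is available off the shelf these days; the following is a minimal sketch of a GNOME-plus-xmonad configuration, using the XMonad.Config.Gnome module from today’s xmonad-contrib (not my original patches, but the same idea):

import XMonad
import XMonad.Config.Gnome (gnomeConfig)

-- gnomeConfig extends the default configuration so that gnome-panel
-- keeps its screen area (struts) and gnome-session is notified that
-- the window manager is up.
main :: IO ()
main = xmonad gnomeConfig
    { terminal = "gnome-terminal"   -- assumption: any terminal will do
    , modMask  = mod4Mask           -- use the Super key as modifier
    }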

I was happy with this setup for a few more years, until GNOME3 came out. Since then, it has become harder and harder to maintain the setup. The main reason is gnome-shell, which replaces gnome-panel and doesn’t work with any window manager but the new default, mutter. I want to use GNOME’s panel, but not its window manager, so I was stuck with a hardly maintained gnome-panel. I fixed what I could (with some patches applied upstream two years after submission, and some not at all) and lived with the remaining warts.

The end (for now)

But a few days ago, GNOME 3.12 was pushed to Debian and I couldn’t even log out or shut down the computer any more, as gnome-session now tries to talk to gnome-shell to do that. Also, monitor configuration (e.g. remembering what setup to use when which monitors are attached) has been moved to gnome-shell. I tried to work around it a bit, but I quickly realized that it was time to make a decision: Either do it the GNOME way all the way, including using gnome-shell, or ditch GNOME.

Now as I said: I like the design and the philosophy of GNOME, including GNOME3, and I have told people who were complaining about it to first give it a try, and succeeded. I also tried it, but after years of using a tiling window manager, I just couldn’t adjust to not having that any more. If xmonad could be changed to remotely control gnome-shell, this might actually work for me! I think one of the biggest problems I had was adjusting to how gnome-shell handles multiple monitors. In xmonad, my workspaces are independent of the monitors, and I can move any workspace to any monitor.

So I had to ditch GNOME. My session now consists of a shell script that makes some adjustments (blank black background, loading the xmodmap), starts a few tools (taffybar, mail-notification, nagstamon, xscreensaver and dunst) and executes xmonad. So far it works well. It boots faster, it suspends faster.

I still use some GNOME components. I log in using gdm (but with auto-login; I guess I could try something faster), and gnome-keyring-daemon is also started. And I still use evolution (which has its own set of very disappointing problems in the current version).

Compared to my old setup, I’m still missing my beloved link-monitor-applet, but I guess I can implement an approximation to that in taffybar. The same goes for some other statistics like CPU temperature. I don’t have the GNOME menu any more, which I did not use regularly, but which was useful occasionally.

The biggest problem so far is the lack of session management: I have yet to find a good way to log out and shut down that still gives Firefox time to finish, so that it does not think it crashed. Dear lazyweb: What is the best solution to that problem? Can systemd help me here somehow?

All in all I want to thank the GNOME guys for providing me with a great desktop environment for over a decade, and I hope I’ll be able to use it again one day (and hopefully not out of necessity and lack of viable alternatives).

2014-06-19T20:00:29+00:00

Another instance of Haskell Bytes

When I gave my “Haskell Bytes” talk on the runtime representation of Haskell values for the first time, I wrote here: “It is in German, so [..] if you want me to translate it, then (convince your professor or employer to) invite me to hold the talk again.” This has just happened: I got to give the talk as a Tech Talk at Galois in Portland, so now you can fetch the text in English as well. Thanks to Jason for inviting me!

This was on my way to the Oregon Summer School on Programming Languages in Eugene, where I’m right now enjoying the shade of a tree next to the campus. We’ve got a relatively packed program with lectures on dependent types, categorical logic and other stuff, and more student talks in the evening (which unfortunately always collide with the open board game evenings at the local board game store). So at least we started a round of Diplomacy, where I am about to be crushed from four sides at once. (And no, I don’t think that this triggered the “illegal download warning” that the University of Oregon received about our internet use and that threatens our internet connectivity.)

2014-06-08T19:34:14+00:00

ZuriHac 2014

I’m writing this on the train back from the ZuriHac Haskell Hackathon in Zürich, generously sponsored by Better and Google. My goal for this event was to attract new people to work on GHC, the Haskell compiler, so I announced a “GHC bugsquashing project”. I collected a few seemingly simple tickets that have a good effort/reward ratio for beginners and encouraged those who showed up to pick one to work on.

Roughly six people started, and four actually worked on GHC on all three days. The biggest hurdle for them was to get GHC built for the first time, especially for those using a Mac or Windows. They also had to learn how to avoid recompiling the whole compiler, which takes an annoying amount of time (~30 minutes for most people). But once such hurdles were taken, all of them managed to find their way around the source code to the place they needed to touch, and were able to produce a patch, some of which are already merged into GHC master. When I wasn’t giving tips and hints I was working on various small tickets myself, but nothing of great impact. I very much hope that this event will pay off and that one or two of the newcomers end up being regular contributors to GHC.

We took breaks from our respective projects to listen to interesting talks by Edward Kmett and Simon Marlow, and on Saturday evening we all went to the shores of the Zürisee and had a nice barbecue. It was a good opportunity to get into contact with more of the attendees (the hacking itself was spread across multiple smaller office rooms), and I was happy to hear about people having read my recent Call Arity paper, and even having found it valuable.

Thanks to the organizers and sponsors for this nice opportunity!

2014-05-28T21:43:42+00:00

Predicting the lifetime of a Hackage package

The Debian Haskell Group has no proper policy about when to update a certain package to a new version from Hackage. So far, we upgrade when one of us personally needs something, when someone nudges us, if it’s required as a dependency, or just when we feel like it. I was thinking about ways to improve that.

One thing that we should definitely do is to upgrade to new versions that differ only in the fourth version number, e.g. from 1.2.3.4 to 1.2.3.5 – these are most likely bug fix uploads and have little risk of breaking things. (Unless they only change the .cabal file; then they might not be an improvement for our users or us.) I plan to code that soon.
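
A minimal sketch of that check, using Data.Version from base (assuming we only compare the version numbers; the real check should probably also look at what actually changed):

import Data.Version (Version (..))

-- Sketch: an upgrade is a harmless patch-level bump if the first three
-- components of the version number agree and something after them changed.
isPatchLevelBump :: Version -> Version -> Bool
isPatchLevelBump old new =
    versionBranch old /= versionBranch new
    && take 3 (versionBranch old) == take 3 (versionBranch new)

-- For example, 1.2.3.4 -> 1.2.3.5 qualifies:
-- isPatchLevelBump (Version [1,2,3,4] []) (Version [1,2,3,5] []) == True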

But what else can we do? Ideally we’d package versions that will remain the newest version for a long time, and not versions that are going to be replaced the next day. Unfortunately, deciding that requires visionary powers. But maybe there is a correlation between the upload history and the lifetime of a new release? Maybe there are patterns, e.g. the first upload after a long pause tends to be replaced quickly, while the third upload in short succession has a high chance of living long? Can we predict the lifetime of a freshly uploaded package? So after a bit of hacking I got this graphic:

It needs a bit of explanation: Both axes are time differences, and the picture spans one year by one year. For every release whose lifetime we know (i.e. there has been a later upload), we draw its history on a vertical line. The horizontal position of the line corresponds to the lifetime of the release: A release that was superseded immediately (which I sometimes do after spotting annoying typos) appears on the far left, a release that was stable for one year on the far right.

The history itself is a series of dots, representing the previous uploads of that package, with their distance from the lower edge indicating the time that has passed since then. So if a package had two updates half a year ago, and then no update for another half year, it would contribute two dots right above each other in the middle of the picture.
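
In code, the data behind these vertical lines could be computed roughly like this (a sketch with a made-up Time type; the actual, somewhat hackish code is linked below):

import Data.List (inits, sort, tails)

type Time = Double  -- made-up unit: days since some epoch

-- For every release of one package that has a successor, pair its
-- lifetime (time until the next upload) with the ages of all earlier
-- uploads at the moment of the release.
histories :: [Time] -> [(Time, [Time])]
histories uploads =
    [ (next - t, map (t -) earlier)
    | (earlier, t : next : _) <- zip (inits sorted) (tails sorted) ]
  where
    sorted = sort uploads

-- Each pair yields one vertical line: the lifetime gives the horizontal
-- position, and every age in the list becomes one dot on that line.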

Unfortunately, the picture does not yield any insight, apart from the fact that short-lived releases are more frequent than long-lived ones.

So I tried another view:

I grouped uploads by the lifetimes of their two preceding uploads. For each such group, the circle indicates the average lifetime, and the darkness indicates the absolute number in the sample. So we see a correlation between a package’s lifetime and the lifetime of its predecessor, which is not very surprising either. We can state some hypotheses, like: “A release replacing a very old one is likely to be long-lived if its pre-predecessor lived very long or very briefly, but less so if that one lived for a few months.” But I’m not sure of the significance of these.

I could have also taken into account which uploads are major and which are minor, by looking at the version number, but I didn’t.

What’s the conclusion? Nothing much. I got some funny looking graphs. And there needs to be a way to render pictures like the first within the axes of a Chart diagram. I put the (somewhat hackish) code online – feel free to play with it.

2014-05-26T12:10:46+00:00

Does list fusion work?

I’m writing this in the lunch break of TFP in Soesterberg. The invited talk this morning was by Geoffrey Mainland, and he talked about the difficulties of (informally) reasoning about performance in high-level languages like Haskell, especially with fancy compiler machinery like fusion. So I couldn’t help but think about a (very small) way to help here.

Consider the two very similar expressions foldl (+) 0 [0..1000] and foldr (+) 0 [0..1000]. Which of these fuses away the list? Hopefully both do, but it is hard to predict.

So with my list-fusion-probe library, you can write

import Data.List.Fusion.Probe (fuseThis)
main = print $ foldr (+) 0 (fuseThis [0..1000])

and find out. If you compile this (with -O!), it will print

500500

If you change the foldr to foldl, it will print

Test: fuseThis: List did not fuse

So you can see that the function fuseThis :: [a] -> [a] does nothing if the list gets fused away, but causes a run-time error if not. It allows you to annotate your code with your assumptions of list fusion, and get shouted at if your assumptions are wrong.
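
For the curious, here is one plausible way to implement such a probe – a sketch of the idea, not necessarily the code actually used in list-fusion-probe:

-- If a call to fuseThis survives until run time, fusion did not happen.
fuseThis :: [a] -> [a]
fuseThis = error "fuseThis: List did not fuse"
{-# NOINLINE fuseThis #-}

-- Under list fusion, a good consumer is rewritten to a foldr, at which
-- point this rule fires and removes the probe (and with it the error).
{-# RULES "foldr/fuseThis" forall c n xs. foldr c n (fuseThis xs) = foldr c n xs #-}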

It wouldn’t be hard to have the compiler give a warning or error message at compile time; we’d just need to introduce a special function abortCompilation that, when found in the code during compilation, does just that.

Note that you’ll have trouble reproducing the above in GHC HEAD, where foldl does fuse (which is what I’m going to talk about tomorrow here).

2014-05-09T07:59:28+00:00

Going to TFP 2014

Near the end of my internship with Simon Peyton Jones at MSR Cambridge I tackled the problem that foldl is not a good consumer in the sense of list fusion, as the resulting code compiles quite badly. I found an analysis that allows GHC to safely do the required transformation to fix this in the common case; I dubbed this analysis Call Arity. The code is in GHC master (but not in 7.8), and I get to present it at the Trends in Functional Programming conference in Soesterberg in 2½ weeks. If you are curious, you can either look at the code in CallArity.hs or at the submitted paper, which has yet to go through the formal post-conference peer review in order to be officially accepted.
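
For context, the transformation in question is the well-known trick of writing foldl in terms of foldr, sketched here; it only compiles well if GHC is allowed to eta-expand the function that the foldr builds up, and Call Arity is the analysis that determines when that is safe:

-- foldl expressed via foldr, which makes it a good consumer that can
-- take part in foldr/build fusion (a sketch of the standard trick):
foldlViaFoldr :: (b -> a -> b) -> b -> [a] -> b
foldlViaFoldr k z xs = foldr (\x fn acc -> fn (k acc x)) id xs z

-- Without eta-expansion of the inner lambda, every list element
-- allocates a function closure; with it, GHC produces a tight loop.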

If you are going to TFP as well and want to get your GPG key signed, just talk to me there!
