Blog

2014-08-30T15:10:58-07:00

DebConf 14

I’m writing this blog post on the plane from Portland towards Europe (which I now can!), using the remaining battery life after having watched one of the DebConf talks that I missed. (It was the systemd talk, which was good and interesting, but maybe I should have watched one of the power management talks instead, as my battery is running down faster than it should, I believe.)

I mostly enjoyed this year’s DebConf. I must admit that I did not come very prepared: I had neither something urgent to hack on nor important things to discuss with the other attendees, so in a way I had a slow start. I also felt a bit out of touch with the project, both personally and technically: in previous DebConfs, I had more interest in many different corners of the project, and also came with more naive enthusiasm. After more than 10 years in the project, I see a few things more realistically, am also more relaxed, and no longer react to every “wouldn’t it be cool to have [crazy idea]” quite so easily. And these days I mostly focus on Haskell packaging (and related tooling, which sometimes is also relevant and useful to others), which is not very interesting to most other attendees.

But in the end I did get to do some useful hacking, heard a few interesting talks and even got a bit excited: I created a new tool to schedule binNMUs for Haskell packages. It is quite generic (configured by just a regular expression), so it can and will be used by the OCaml team as well, and who knows who else will start using hash-based virtual ABI packages in the future... It runs via a cron job on people.debian.org to produce output for Haskell and for OCaml, based on data pulled via HTTP. If you are a Debian developer and want up-to-date results, log into wuiet.debian.org and run ~nomeata/binNMUs --sql; it then uses the projectb and wanna-build databases directly. Thanks to the ftp team for opening up incoming.debian.org, by the way!
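
To give an idea of what “configured by just a regular expression” means, here is a minimal sketch in Haskell; the regular expression and the package names are made up by me for illustration, the tool’s actual configuration differs:

import Text.Regex.TDFA ((=~))

-- Hypothetical pattern for hash-based virtual ABI package names,
-- roughly of the shape libghc-<package>-dev-<version>-<hash>.
isAbiPackage :: String -> Bool
isAbiPackage name = name =~ "^libghc-[a-z0-9-]+-dev-[0-9.]+-[0-9a-f]{5}$"

main :: IO ()
main = mapM_ (print . isAbiPackage)
    [ "libghc-text-dev-1.1.0.0-8db9a"  -- made-up ABI package: True
    , "libfoo-dev"                     -- ordinary -dev package: False
    ]

The idea being that the team-specific part of the configuration reduces to such an expression.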

Unsurprisingly, I also held a talk on Haskell and Debian (slides available). I talked a bit too long and we had too little time for discussion, but in any case not all of the discussion would have fit into 45 minutes. The question of which packages from Hackage should be added to Debian and which should not remains undecided (which means we carry on packaging whatever we happen to want in Debian, for whatever reason). I guess the better our tooling gets (see the next paragraph), the more easily we can support more and more packages.

I am quite excited by and supportive of Enrico’s agenda of removing boilerplate data from the debian/ directories and relying on autodebianization tools instead. We have such a tool for Haskell packages, cabal-debian, but it is unofficial, i.e. neither created by us nor fully endorsed. I want to change that, so I got in touch with the upstream maintainer and we want to get it into shape to produce perfect Debian packages, provided the upstream-supplied metadata is perfect. I’d like to see the Debian Haskell Group follow Enrico’s plan to its extreme conclusion, and this way drive innovation in Debian in general. We’ll see how that goes.

Besides the technical program I enjoyed the obligatory games of Mao and Werewolves. I also got to dance! On Saturday night, I found a small but welcoming Swing-In-The-Park event where I could dance a few steps of Lindy Hop. And on Tuesday night, Vagrant Cascadian took us (well, three of us) to a blues dancing night, which I greatly enjoyed: the style was so improvisation-friendly that, despite having missed the introduction and never having danced blues before, I could jump right in. And in contrast to social dances in Germany, where it is often announced that the girls are also invited to ask the boys, but where it is still mostly the boys who have to ask, here it took only half a minute of standing at the side until I was asked to dance. In retrospect I should have skipped the HP reception and gone there directly...

I’m not heading home at the moment, but will travel directly to Göteborg to attend ICFP 2014. I hope the (usually worse) west-to-east jet lag will not prevent me from enjoying it as much as I otherwise would.

2014-08-23T15:54:43-07:00

This blog goes static

After a bit more than 9 years, I am replacing Serendipity, which has been hosting my blog, with a self-made static solution. This means that when you are reading this, my server no longer has to execute a rather large body of untyped code to produce the bytes sent to you. Instead, that happens once in a while on my laptop, and the results are stored as static files on the server.

I hope to get a little performance boost from this, so that my site can more easily hold up to being mentioned on Hacker News. I also no longer have to worry about security issues in Serendipity – static files do not get hacked.

Of course there are downsides to having a static blog. Editing is a bit more annoying: I need to use my laptop (previously I could post from anywhere), and I edit text files instead of using a JavaScript-based WYSIWYG editor (though I was slightly annoyed by that as well). But most importantly, readers cannot comment on static pages. There are cloud-based solutions that integrate commenting via JavaScript into static pages, but I decided to go for something even more low-level: you can comment by writing an e-mail to me, and I’ll put your comment on the page. This has the nice benefit of solving the blog comment spam problem.

The actual implementation of the blog is rather masochistic, as my web page runs on one of those weird obfuscated languages (XSLT). Previously, it consisted of XSLT stylesheets producing makefiles calling XSLT stylesheets. Now it is a bit more self-contained, with one XSLT stylesheet writing out all the various HTML and RSS files.

I managed to import all my old posts and comments thanks to this script by Michael Hamann (I had played around with it some months ago and just spent what felt like an hour finding that script again) and a small Haskell script. Old URLs are rewritten (using mod_rewrite) to the new paths, but feed readers might still be confused by this.

This opens the door to a long-overdue redesign of my web page. But not today...

2014-07-19T17:13:06+00:00

Good bye GNOME

When I was young...

I have been a user of GNOME for a long time. I must have started using it in either 2000 or 2001, when LinuxTag was in Stuttgart. For some reason I wanted to start using one of the two desktop environments available (having used fvwm95 and/or IceWM before, I believe). I approached one of the guys at the GNOME booth and asked “Why should I use GNOME over KDE?”, knowing that it was quite a silly question, but unable to come up with a better one. He replied something along the lines of “Because it is part of GNU”, and that was good enough for me. Not that it matters a lot which one I use, but it was a way to decide.

Back then GNOME was still at version 1.2, with detachable menus and lots of very colorful themes – I first had something with thick yellow borders and then a brushed-metal look. Sawfish was the window manager of choice at the time.

I used GNOME for many years. People complained when GNOME 2.0 came out, but I liked the approach it took: simplicity and good defaults are a time saver! I did my bit of customization, such as having my panel vertically on the left edge, and I even had a tool running that would react to certain events and make the window manager do things, such as removing the title bar and the borders from my terminals – naked terminals are very geeky (I forget the name of the tool, but surely some will recognize and remember it).

Leaving the path of conformance

In 2009 I got more and more involved in Haskell and stumbled over xmonad, a tiling window manager implemented and configured in Haskell. I found it a user interface that I liked a lot, so I started using it. This was no problem: GNOME happily let me replace the default window manager (metacity) with xmonad and continue working. I even implemented the necessary support in xmonad so that it would make room for the gnome-panel, and so that the pager (which displays the workspaces and windows) would work and even interact with xmonad.
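
These days this support lives in xmonad-contrib, so a minimal xmonad.hs for such a setup can be as short as this (a sketch; my actual configuration naturally contained a lot more):

import XMonad
import XMonad.Config.Gnome

-- gnomeConfig (from xmonad-contrib) sets up the strut handling that
-- makes room for the gnome-panel and the EWMH hints the pager needs.
main :: IO ()
main = xmonad gnomeConfig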

I was happy with this setup for a few more years, until GNOME 3 came out. Since then, it has become harder and harder to maintain the setup. The main reason is gnome-shell, which replaces gnome-panel and doesn’t work with any window manager but the new default, mutter. I want to use GNOME’s panel but not its window manager, so I was stuck with a hardly maintained gnome-panel. I fixed what I could (with some patches applied upstream two years after submission, and some not at all) and lived with the remaining warts.

The end (for now)

But a few days ago, GNOME 3.12 was pushed to Debian and I couldn’t even log out or shut down the computer any more, as gnome-session now tries to talk to gnome-shell to do that. Also, monitor configuration (e.g. remembering which setup to use when particular monitors are attached) has been moved into gnome-shell. I tried to work around this a bit, but I quickly realized that it was time to make a decision: either do it the GNOME way all the way, including using gnome-shell, or ditch GNOME.

Now, as I said: I like the design and the philosophy of GNOME, including GNOME 3, and I have told people who were complaining about it to first give it a proper try – with success. I tried as well, but after years of using a tiling window manager, I just couldn’t adjust to not having one any more. If xmonad could be changed to remotely control gnome-shell, this might actually work for me! I think one of the biggest problems I had was adjusting to how gnome-shell handles multiple monitors. In xmonad, my workspaces are independent of the monitors, and I can move any workspace to any monitor.

So I had to ditch GNOME. My session now consists of a shell script that makes some adjustments (blank black background, loading the xmodmap), starts a few tools (taffybar, mail-notification, nagstamon, xscreensaver and dunst) and executes xmonad. So far it works well. It boots faster, it suspends faster.

I still use some GNOME components. I log in using gdm (with auto-login, though; I guess I could try something faster), and gnome-keyring-daemon is also started. And I still use Evolution (which has its own set of very disappointing problems in the current version).

Compared to my old setup, I’m still missing my beloved link-monitor-applet, but I guess I can implement an approximation of it in taffybar. The same goes for some other statistics like CPU temperature. I don’t have the GNOME menu any more, which I did not use regularly but which was useful occasionally.

The biggest problem so far is the lack of session management: I have yet to find a good way to log out and shut down while still giving Firefox time to finish, so that it does not believe it crashed. Dear lazyweb: what is the best solution to that problem? Can systemd help me here somehow?

All in all I want to thank the GNOME guys for providing me with a great desktop environment for over a decade, and I hope I’ll be able to use it again one day (and hopefully not out of necessity and lack of viable alternatives).

2014-06-19T20:00:29+00:00

Another instance of Haskell Bytes

When I gave my “Haskell Bytes” talk on the runtime representation of Haskell values for the first time, I wrote here “It is in German, so [..] if you want me to translate it, then (convince your professor or employer to) invite me to hold the talk again”. This has just happened: I got to hold the talk as a Tech Talk at Galois in Portland, so now you can get it in English as well. Thanks to Jason for inviting me!

This was on my way to the Oregon Programming Languages Summer School in Eugene, where I’m right now enjoying the shade of a tree next to the campus. We’ve got a relatively packed program, with lectures on dependent types, categorical logic and other stuff, and more student talks in the evening (which unfortunately always collide with the open board game evenings at the local board game store). At least we managed to start a round of Diplomacy, in which I am about to be crushed from four sides at once. (And no, I don’t think that this is what triggered the “illegal download warning” that the University of Oregon received about our internet use, and which now threatens our internet connectivity.)

2014-06-08T19:34:14+00:00

ZuriHac 2014

I’m writing this on the train back from the ZuriHac Haskell Hackathon in Zürich, generously sponsored by Better and Google. My goal for this event was to attract new people to work on GHC, the Haskell compiler, so I announced a “GHC bug squashing project”. I collected a few seemingly simple tickets with a good effort/reward ratio for beginners and encouraged those who showed up to pick one to work on.

Roughly six people started, and four actually worked on GHC on all three days. The biggest hurdle for them was getting GHC built for the first time, especially for those using a Mac or Windows. They also had to learn how to avoid recompiling the whole compiler, which takes an annoying amount of time (~30 minutes for most people). But once these hurdles were overcome, all of them managed to find their way around the source code to the places they needed to touch, and all were able to produce a patch, some of which have already been merged into GHC master. When I wasn’t giving tips and hints, I was working on various small tickets myself, but nothing of great impact. I very much hope that this event will pay off and that one or two of the newcomers end up becoming regular contributors to GHC.

We took breaks from our respective projects to listen to interesting talks by Edward Kmett and Simon Marlow, and on Saturday evening we all went to the shores of the Zürichsee and had a nice barbecue. It was a good opportunity to get into contact with more of the attendees (the hacking itself was spread across multiple smaller office rooms), and I was happy to hear that people had read my recent Call Arity paper, and even found it valuable.

Thanks to the organizers and sponsors for this nice opportunity!

2014-05-28T21:43:42+00:00

Predicting the lifetime of a Hackage package

The Debian Haskell Group has no proper policy on when to update a given package to a new version from Hackage. So far, we upgrade when one of us personally needs something, when someone nudges us, if it’s required as a dependency, or just when we feel like it. I was thinking about ways to improve that.

One thing that we should definitely do is upgrade to new versions that differ only in the fourth version component, e.g. from 1.2.3.4 to 1.2.3.5 – these are most likely bug-fix uploads and carry little risk of breaking things. (Unless they only change the .cabal file; then they might not be an improvement for our users or us.) I plan to code that soon.
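
For concreteness, here is a minimal sketch of that check, with names of my own choosing rather than the actual tooling:

-- Treat an upload as a low-risk upgrade candidate iff only the fourth
-- (or a later) version component changed, i.e. the first three
-- components are identical.
isBugfixBump :: [Int] -> [Int] -> Bool
isBugfixBump old new = take 3 old == take 3 new && new > old

main :: IO ()
main = do
    print (isBugfixBump [1,2,3,4] [1,2,3,5])  -- True:  1.2.3.4 to 1.2.3.5
    print (isBugfixBump [1,2,3,4] [1,3,0,0])  -- False: minor or major bump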

But what else can we do? Ideally we’d package versions that will remain the newest version for a long time, not versions that are going to be replaced the next day. Unfortunately, deciding that requires visionary powers. But maybe there is a correlation between the upload history and the lifetime of a new version? Maybe there are patterns, e.g. the first upload after a long pause tends to be replaced quickly, while the third upload in short succession has a high chance of living long? Can we predict the lifetime of a freshly uploaded package? After a bit of hacking I got this graphic:

It needs a bit of explanation: both axes are time differences, and the picture spans one year by one year. For every release whose lifetime we know (i.e. there has been a later upload), we draw its history on a vertical line. The horizontal position of the line corresponds to the lifetime of the release: a package that was superseded immediately (which I sometimes do after spotting annoying typos) appears on the far left, a package that was stable for one year on the far right.

The history itself is a series of dots representing the previous uploads of that package, with their distance from the lower edge indicating the time that had passed since then. So if a package had two updates half a year ago, and then no update for another half year, it contributes two dots right above each other in the middle of the picture.
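
The underlying data preparation boils down to something like the following sketch (assumed types; the actual, somewhat hackish code is linked at the end of this post):

import Data.Time

-- A release’s lifetime is the time until the next upload of the same
-- package; the newest upload, whose lifetime is unknown, is dropped.
lifetimes :: [UTCTime] -> [(UTCTime, NominalDiffTime)]
lifetimes uploads =
    zipWith (\t next -> (t, next `diffUTCTime` t)) uploads (tail uploads)

main :: IO ()
main = mapM_ print (lifetimes (map upload [0, 1, 9, 299]))
  where
    upload d = UTCTime (addDays d (fromGregorian 2014 1 1)) 0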

Unfortunately, the picture does not yield any insight, besides the fact that short-lived packages are more frequent than long-lived ones.

So I tried another view:

I grouped uploads by the lifetimes of their two preceding uploads. For each such group, the circle indicates the average lifetime, and the darkness indicates the absolute number in the sample. So we see a correlation between a package’s lifetime and the lifetime of its predecessor, which is also not very surprising. We can state some hypotheses, like: “A release replacing a very old one is likely to be long-lived if its pre-predecessor lived very long or very briefly, but less so if that one lived for a few months.” But I’m not sure of the significance of these.

I could have also taken into account which uploads are major and which are minor, by looking at the version number, but I didn’t.

What’s the conclusion? Nothing much. I got some funny-looking graphs. And there still needs to be a way to render pictures like the first one within the axes of a Chart diagram. I put the (somewhat hackish) code online – feel free to play with it.

2014-05-26T12:10:46+00:00

Does list fusion work?

I’m writing this in the lunch break of TFP in Soesterberg. The invited talk this morning was by Geoffrey Mainland, who talked about the difficulties of (informally) reasoning about performance in high-level languages like Haskell, especially with fancy compiler machinery like fusion. So I couldn’t help but think about a (very small) way to help here.

Consider the two very similar expressions foldl (+) 0 [0..1000] and foldr (+) 0 [0..1000]. Which of these fuses away the list? Hopefully both, but it is hard to predict.

So with my list-fusion-probe library, you can write

import Data.List.Fusion.Probe (fuseThis)
main = print $ foldr (+) 0 (fuseThis [0..1000])

and find out. If you compile this (with -O!), it will print

500500

If you change the foldr to foldl, it will print

Test: fuseThis: List did not fuse

So you can see that the function fuseThis :: [a] -> [a] does nothing if the list gets fused away, but causes a run-time error if not. It allows you to annotate your code with your assumptions about list fusion, and to get shouted at if your assumptions are wrong.

It wouldn’t be hard to have the compiler give a warning or error message at compile time; we’d just need to introduce a special function abortCompilation that, when found in the code during compilation, does just that.

Note that you’ll have trouble reproducing the above in GHC HEAD, where foldl does fuse (which is what I’m going to talk about here tomorrow).

2014-05-09T07:59:28+00:00

Going to TFP 2014

Near the end of my internship with Simon Peyton Jones at MSR Cambridge, I tackled the problem that foldl is not a good consumer in the sense of list fusion, as the resulting code compiles quite badly. I found an analysis that allows GHC to safely do the required transformation to fix this in the common case; I dubbed this analysis Call Arity. The code is in GHC master (but not in 7.8), and I get to present it at the Trends in Functional Programming conference in Soesterberg in 2½ weeks. If you are curious, you can look either at the code in CallArity.hs or at the submitted paper, which has yet to go through the formal post-conference peer review in order to be officially accepted.
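
For those who have not seen the problem: the well-known trick is to express foldl via foldr, which makes it a good consumer but turns the accumulator into a function, and that is exactly what compiles badly unless the compiler eta-expands it. A sketch of the trick (not of the analysis itself):

import Prelude hiding (foldl)

-- foldl written as a foldr: now a good consumer for list fusion, but
-- the accumulator has become a function. Call Arity determines when it
-- is safe to eta-expand such definitions so that this compiles well.
foldl :: (b -> a -> b) -> b -> [a] -> b
foldl k z xs = foldr (\x fn acc -> fn (k acc x)) id xs z

main :: IO ()
main = print (foldl (+) 0 [1..10 :: Integer])  -- 55, as with Prelude.foldl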

If you are going to TFP as well and want to get your GPG key signed, just talk to me there!

2014-04-24T22:07:39+00:00

gtk-vector-screenshot screencast

I recently installed Piwik on my website to get some idea of how it is being used. Piwik is a very nice and shiny tool, although it is slightly scary to watch people walk through your website, click by click. Anyway, I now know that I get around 50 visitors per day.

But I also learn where they are coming from, and this way I noticed that “Shnatsel” did a very nice screencast of my gtk-vector-screenshot tool – if I had not created it myself I would think it was fake.

2014-04-02T21:10:16+00:00

Tax evasion at Subway?

Recently, as usual before the (highly recommended) board game night at the Karlsruher Spielepyramide, I ate at Subway. Today I wanted to know how much I had actually spent there. I had not taken the receipt with me, but since I collect points at Subway, I could simply look up what the meal had cost me.

I do find the entry “Purchase €8.59” on that day – but on the same day there is also an entry “Purchase cancellation −€8.59”, together with a corresponding deduction of points. I ate the sandwich, and I no longer have the money either, so something is fishy.

My suspicion: tax evasion. As recently described in Die Zeit, it is apparently common practice among some restaurant owners to delete a few sales from the till in the evening – less revenue means less tax. Preferably, of course, those sales where the customer did not take the receipt. The Subway till appears to be directly connected to the loyalty point system, so cancelling the purchase also deleted my points.

On the one hand, I am a bit angry: not only is someone shirking their tax obligations at everyone’s expense (a hundred-millionth of a Hoeneß, so to speak), I am also being cheated out of my hard-eaten bonus points!

On the other hand, fast food franchisees have anything but an easy life, and Subway is apparently a particularly harsh case – it is quite possible that the owner needs such tricks to make ends meet. Compared to, say, the Karlsruhe Freshsub, the people at Subway quite obviously have less to smile about...

Moreover, it is not only the state that misses out on a bit of money this way, but also Subway itself, which – I assume – demands a share of the profit or even of the revenue from its franchisees. Fittingly, in a similar case, Subway was very keen on its Facebook page to find out exactly which franchisee had carried out such a cancellation.

What do I take away from this? In future, take the receipt with me, and if my points keep slipping away like this, I will bring it up with the owner.
