or: songlines / code lines

Finally, the Songlines Audio Guide is up and running at the Museum of Modern Art, NYC. The press and VIP opening was on March 3rd; on the 8th, the exhibition opens to the public. If you have the chance, you should totally check it out. The tour takes about 40 minutes (do not rush!) and is definitely worth the wait in the queue. Apart from the audio guide tour, there are many other interesting things on display at MoMA, for instance the Black Lake installation and the original instruments from Biophilia (featuring a Tesla coil).

The app was created by Klangerfinder in cooperation with MoMA, Björk and her team, VW, TwoBigEars and Dysonics. My roles in the project were lead programmer and project assistant. I am very thankful for this opportunity. Developing the app in cooperation with so many experienced people was pretty awesome. I would love to do this stuff more often ;)

Songlines consists of seven rooms, one for each of Björk’s albums, starting with Debut and ending with Biophilia. Note that her latest album – Vulnicura – is featured in another part of the exhibition, the Black Lake installation.


  • the app runs on 350 iPod touch (5th generation) devices, which are handed to visitors at the entrance to the labyrinth
  • hi-fi headphones were provided by Bowers & Wilkins
  • the visitor’s head movement is tracked using RondoMotion, a small Bluetooth device attached to the headphones. RondoMotion outputs its rotation data as a quaternion (a four-component representation of a rotation in 3D space)
  • many audio assets are played through virtual sound sources, rendered using the HRTF (binaural) plugin 3Dception by TwoBigEars. The rotation of the virtual listener inside 3Dception is controlled by the head-tracking data, creating the illusion that certain sounds (like the voice of Margrét Vilhjálmsdóttir) stay in a fixed position near you
  • the signals from many (many!) Gimbal Bluetooth beacons are used by the app to guess the visitor’s position. Most of the beacons are used to determine the current room/album; the remaining 16 beacons trigger sounds at specific positions
  • part of the narrative and music in each room is linear; another part is triggered by the beacons (similar to the hotspot logic in computer games)
  • the audio engine of the app is an in-house product from Klangerfinder. It was originally created for the app Sound Journey and ported from Android to iOS for integration into the Songlines app
  • the audio content is several gigabytes in size, consisting of uncompressed PCM data
  • programming languages: Objective-C, C, C++, Python (ReaScript saved us in the end!), PureData, Perl
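To illustrate how the head tracking ties into the binaural rendering, here is a minimal sketch – my own illustration, not the actual app code, and the renderer call at the end is made up – of extracting the yaw angle from the tracker quaternion and feeding it to the virtual listener:

```python
import math

def quaternion_to_yaw(w, x, y, z):
    """Extract the yaw (rotation around the vertical axis), in radians,
    from a unit quaternion (w, x, y, z), assuming a z-up frame."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))

def on_head_tracker_update(quat):
    """Rotate the virtual listener so that virtual sound sources
    appear to stay fixed in the room while the head turns."""
    yaw = quaternion_to_yaw(*quat)
    # renderer.set_listener_yaw(yaw)  # hypothetical binaural-renderer call
    return yaw
```

Whenever the tracker sends a new quaternion, the listener orientation is updated, so a source placed "in front of the entrance" stays there even when the visitor turns their head.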


(my chaotic desk where most of the audio programming happened. need larger desk.)


(a pd patch that simulates a walk through the exhibition; it was used to debug the audio engine)

Press links for further information:

Feel free to ask me anything in the comments!

Right after Christmas 2014, I started to work on a new project for the Museum of Modern Art in New York City. In March, a retrospective exhibition about the singer Björk will launch. As you might expect, the exhibition will use a lot of new, cutting-edge technology. I am very happy that Klangerfinder was asked to participate in the production, which means that I get to handle most of the audio programming tasks.

Since this is – again – a commercial project, I cannot blog about the details of my work. However, I can assure you that it involves almost everything I have learned so far in computer science, composition and sound design ;)



This month, I am finally able to present the result of my biggest project this year:


The Android app “Sound Journey” was created on behalf of Volkswagen. My work was part of my job at Klangerfinder, the company that developed the audio engine for this app, with me as the lead programmer. Unfortunately, the app does not work in every modern car, only in the newest VW models. However, if you own such a car, you can download the app for free and give it a try. The music changes in real time, based on your interaction with the car. This project certainly pushed the boundary of what I thought was possible with Android phones and sound.

While preparing for a concert next week, I tried out a new pd abstraction with my laptop’s internal microphone. One can create a lot of different sounds by hitting the laptop or the touchpad, or just by typing. The “aaron” abstraction (a reference to the famous “moses” object in pd) loops the input indefinitely for the specified number of milliseconds. The options on the right inlet are: record, overdub, play loop.

The aaron abstraction is pretty small, so I will just paste it here. Copy it into your favorite text editor and save it as a *.pd file.

#N canvas 0 0 1596 817 10;
#X obj 666 319 *~;
#X obj 795 308 *~;
#X obj 628 241 sig~;
#X obj 812 246 sig~;
#X obj 588 487 *~;
#X obj 518 369 *~;
#X obj 485 282 inlet~;
#X obj 439 535 outlet~;
#X obj 685 41 sel 0 1 2;
#X obj 680 205 unpack 0 0;
#X msg 656 86 1 0;
#X msg 684 126 1 1;
#X msg 746 152 0 1;
#X text 839 86 record;
#X text 839 116 overdub;
#X text 836 146 play loop;
#X obj 751 383 delwrite~ d1 \$1;
#X obj 618 403 delread~ d1 \$1;
#X obj 689 7 inlet;
#X connect 0 0 16 0;
#X connect 1 0 16 0;
#X connect 2 0 5 1;
#X connect 2 0 0 1;
#X connect 3 0 1 1;
#X connect 3 0 4 0;
#X connect 4 0 7 0;
#X connect 5 0 7 0;
#X connect 6 0 5 0;
#X connect 6 0 0 0;
#X connect 8 0 10 0;
#X connect 8 1 11 0;
#X connect 8 2 12 0;
#X connect 9 0 2 0;
#X connect 9 1 3 0;
#X connect 10 0 9 0;
#X connect 11 0 9 0;
#X connect 12 0 9 0;
#X connect 17 0 1 0;
#X connect 17 0 4 1;
#X connect 18 0 8 0;
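For readers who do not speak pd: the patch is essentially a delay line ([delwrite~]/[delread~]) whose input and feedback path are gated by two gain signals. A rough Python translation of the per-sample logic – my own sketch, with made-up names and the audio I/O omitted – could look like this:

```python
class Aaron:
    """Sketch of the 'aaron' looper: a delay line gated by two gains."""

    def __init__(self, loop_len):
        self.buf = [0.0] * loop_len  # the delay line, loop_len samples long
        self.pos = 0
        self.in_gain = 0.0    # gates the live input (sig~ A in the patch)
        self.loop_gain = 0.0  # gates the recirculating loop (sig~ B)

    def set_mode(self, mode):
        # 0 = record, 1 = overdub, 2 = play loop (matches [sel 0 1 2])
        self.in_gain, self.loop_gain = [(1, 0), (1, 1), (0, 1)][mode]

    def tick(self, x):
        """Process one input sample, return one output sample."""
        y_loop = self.buf[self.pos] * self.loop_gain
        out = x * self.in_gain + y_loop
        # the same gated mix is written back into the delay line
        self.buf[self.pos] = out
        self.pos = (self.pos + 1) % len(self.buf)
        return out
```

In record mode the live input is written into the loop buffer, in overdub mode the recirculating loop is added on top, and in play mode only the loop keeps circulating – exactly the three messages selected by [sel 0 1 2] in the patch.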
Ohhhh, this post was due a long time ago… Anyways, here comes part two of my thoughts on loudness.
First, a correction: modern classical orchestras usually tune to 440 or 442 Hz; they do not go up to 445 anymore.
I personally think that the reasons for reduced dynamics in story arcs, film music and TV programmes have their roots in the developments of audio and video engineering in the 90s. Digital mixing led to rapidly increasing loudness levels, resulting in a decrease of micro- and macrodynamics. One cannot blame the engineers for that “one louder” behavior, since they would have lost their clients had they not complied. Today, more producers care about dynamics instead of loudness, so the loudness war might come to an end bit by bit. The other “loudness problems” discussed in the previous post, however, cannot be reverted easily – we have to live with them now.
Another reason behind the urge to produce “loud” media (e.g., videos with a high scene-cut rate) might be the loss of control over the context in which the media is presented. For instance, a YouTube video might be watched on many different kinds of devices, half of them having shitty displays, half of them having shitty speakers. Therefore, video producers try to create something which works on all possible devices, just as audio engineers mix music that sounds good even on the cheapest playback system you can imagine. In music, this goal results in louder mixing; in video production, the need to support many devices and viewing contexts results in “loud” (i.e., intense) videos.
Luckily, there are settings where artists still have control of the context, like concerts, exhibitions, special events and so on – basically every occasion where the art piece is connected to a specific context like a concert hall, museum or gallery.

In music, one could move away from conventional formats like MP3 and release material on different platforms and in different ways. Interactive music has a promising future, because it can be tied to interesting contexts, like games or everyday objects, and provides a longer experience before the listener (or player) gets bored. Additionally, interactive music is simply a consequence of the larger capabilities of playback devices. For instance, compare a MiniDisc player from the 90s to today’s smartphones. Why should we still use static MP3s on mini-computers with a quad-core CPU and 1 GB of RAM? Interactive music does not need to scream for attention, because it is either meant to stay in the background anyway (which is fine) or easily captures the listener’s attention because it is meant to be played with. The engineered loudness problem would also be partially solved, since interactive content cannot be completely mixed and mastered in advance. Even mastering the audio in real time is not an option, since it is not (yet?) feasible to implement a whole mastering chain on a consumer device.

If you are a passionate music listener or musician, you are very likely confronted with the problem of “loudness” in contemporary music. I am not talking about high sound pressure levels at rock concerts – that is another story – but about flat (i.e., compressed), loud recordings on CDs and the radio. The problem started in the early 90s when digital brickwall limiters were introduced, and was later dubbed the “loudness war”. Based on the assumption that a record sells better if it is just a little louder than other records, sound engineers pushed the limits of how loud a recording could be. Music lost one of its most important elements: dynamics. These days it looks like the loudness war will come to an end because of new broadcasting regulations and “replay gain”-like countermeasures in online music stores like iTunes.

However, I am sure that the loudness war is not the problem itself but a symptom of a larger trend in (digital) media. First, we must broaden the definition of “loudness” a bit toward “screaming for attention”. Here are a few examples of occasions where the “attention war” takes place:

– commercials and trailers: the louder, the better. Ultra-deep drone sounds, ridiculously deep male voices, superfast cuts and stupid lines of epic blablabla are ok for one trailer. In the average German cinema, I am being bombarded with that shit for about 45 minutes.

– classical music: modern orchestras have a tendency to tune their instruments to a higher pitch (e.g., A4 = 445 Hz). Thus, the whole string section sounds more brilliant and louder.

– instrumentation in film music: in modern blockbusters, every film score sounds similar to me in the sense of instrumentation and dynamics. Every small phrase or melody is duplicated among every orchestra/synth/choir/something-section. Thus the score does not contain any surprising ups and downs in dynamics but is just a big sausage of orchestral bwwam – similar to “sausage waveforms” in over-compressed recordings.

– dramaturgy in modern films: modern blockbusters contain frantic action (and no story!) from beginning to end. Good examples are the Transformers series, The Dark Knight Rises, and Pacific Rim. “Loud” scenes are included instead of more story elements or quiet sections, making the story arc of the film look like a flat line.

All this the-more-the-better media eventually fails, because loudness is relative: it is measured in relative units (decibels), and humans perceive volume relative to other volumes. To make something LOUD, it must be preceded by something quiet. The result of a constant bombardment with loudness is usually numbness, which dulls the interest in things that are really interesting or important. Consumers of media do not have short attention spans in general, but they are treated like idiots anyway. For instance, many radio stations assume an attention span of less than thirty seconds and torture their listeners with ultra-short information snippets, stupid music and “you are listening to YELL-O-RADIO” announcements every 45 seconds.

In part 2, I will try to find some reasons for the overall loudness problem and list some possible remedies.

Related links:

Long time, no post… I am very sorry for the large gap between this new post and the last one. My list of excuses: Christmas, New Year’s Eve, a stomach infection, and lots of work for my side job at Klangerfinder in between.

In November, I uploaded a video showing a theremin-like controller consisting of two cans with light sensors at the bottom. Last week, I added a magnetometer (a sensor which reads the direction of a magnetic field) to the setup. The original idea was to use the actual direction and strength of the magnetic field to control something. It was possible to use the sensor as a digital compass, but the data was too unreliable to control musical parameters with. Therefore, I simply ignored the direction and strength of the magnetic field and computed a single value based on the fluctuations in those readings. This means that whenever some magnet is moving near the sensor, it triggers a control value which roughly correlates with the intensity (i.e., speed) of the motion.
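If you want to reimplement the idea, here is a minimal Python sketch of that fluctuation logic (the real code in the repo is C; the class name and the smoothing constant here are my own guesses, not the original values):

```python
class MagnetMotion:
    """Ignore the absolute magnetic field; react only to how fast
    the magnetometer reading is changing."""

    def __init__(self, smoothing=0.9):
        self.prev = None      # previous (x, y, z) reading
        self.level = 0.0      # current control value
        self.smoothing = smoothing

    def update(self, x, y, z):
        """Feed one magnetometer sample, get back a control value
        that rises with motion and decays when the magnet is still."""
        if self.prev is None:
            self.prev = (x, y, z)
            return 0.0
        # magnitude of the change between two consecutive readings
        dx, dy, dz = x - self.prev[0], y - self.prev[1], z - self.prev[2]
        delta = (dx * dx + dy * dy + dz * dz) ** 0.5
        self.prev = (x, y, z)
        # simple envelope follower: jump up on motion, decay when idle
        self.level = max(delta, self.level * self.smoothing)
        return self.level
```

The decaying envelope is what makes it feel like plucking a string: a quick wave of the magnet kicks the value up, and it fades out on its own afterwards.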

In this fairly simple example, the magnet controls the amount of vibrato in the cello tone. One can/light sensor controls the volume of the cello, the other can/light sensor controls the pitch of the drumloop.

I think the potential of the magnet controller lies in its simplicity. You just have to wave the magnet to “excite” some musical instrument, similar to plucking a string.

Everything you need to build this yourself is already listed in the previous Raspberry Pi post.

The code for this new version is also hosted on GitHub in the lightsensors repo. For this version, I extended the old code a bit and used the file combined.c.

