On Wednesday, June 24, at 7.30 pm, you are welcome to attend a very special concert at the University of Music in Trossingen, Germany. Musikdesign students are cooperating once again with the Sinfonietta ensemble to fill the concert hall with an all-new sound. One of the pieces will be accompanied by a video mashup I created specifically for it.

On Saturday, June 27, at 8.30 pm, a piece I composed and produced for my Musikdesign studies will be played at the Next Generation Festival 6.0 at the ZKM in Karlsruhe, Germany. The piece is called “Core Audio” and is essentially an immersive sonification of the programming game “Core Wars”. Using the “Klangdom” speaker setup with four subwoofers and more than 30 speakers distributed around and above the audience, the piece puts the listener’s head directly into the digital chaos of an ongoing Core Wars tournament.


I will try to publish the visuals and the Core Audio sonification once the concerts are done.

During my second year at the music conservatory in Trossingen, I worked together with two game design students from the National University of Singapore (NUS). There is a cooperation between Trossingen and NUS in which Musikdesign students create soundtracks for animated films made by students in Singapore. The films are final-year projects, as was “The Last Spark”, a video game. Since I am very interested in interactive (= game) music, I immediately picked the only game on the list of projects instead of one of the films. The Last Spark is a 3D video game with horror and stealth elements. Its story is adapted from “The Little Match Girl”, a tale by Hans Christian Andersen.

Apart from the composition and sound design tasks, I also integrated the sounds directly into the Unity project, which turned out to be very effective (once Hieu and I managed to come up with a git-based workflow, which took some time). Unity 5 is great for 3D audio and in-game mixing, but it lacks some important features for creating interactive sounds. For larger projects, Wwise seems to be a good choice. In this case, I hacked a few scripts in C# to create an interactive ambient piece for a hide-and-seek part of the game where the player has to sneak past a monster. The sounds get more intense and scary the closer the player is to the monster. Apart from the horror elements, the main objective of the music was to capture the mood of a teenage girl who has serious trouble with her family and at school. This is why the music sounds sad (sometimes exaggeratedly melodramatic) and in some parts childish.
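The distance-to-intensity mapping itself is conceptually very simple. Here is a minimal sketch of the idea as a Unity C# script – not the actual project code, and all field names (player, monster, scaryLayer) and distance values are made up for illustration:

using UnityEngine;

// Sketch: fade an ambient "scary" layer in and out depending on how close
// the player is to the monster. Names and distance values are illustrative.
public class ProximityAmbience : MonoBehaviour
{
    public Transform player;        // the sneaking player
    public Transform monster;       // the thing to sneak past
    public AudioSource scaryLayer;  // ambient layer that gets louder when close
    public float minDistance = 2f;  // at or below this distance: full intensity
    public float maxDistance = 20f; // at or above this distance: silence

    void Update()
    {
        float distance = Vector3.Distance(player.position, monster.position);

        // 0 when far away from the monster, 1 when right next to it
        float scariness = Mathf.InverseLerp(maxDistance, minDistance, distance);

        // Drive the mix with this value; pitch, filter cutoffs or additional
        // layers could be controlled by the same parameter.
        scaryLayer.volume = scariness;
    }
}

The small slider demo mentioned further down works on the same principle, except that the scariness value comes from a UI slider instead of the player–monster distance.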

The sites of Kim and Hieu can be found here: Kim Van / Trung Hieu Nguyen.

Christian Fischer, a fellow student in Trossingen, was kind enough to play the cello on some of the recordings for the music.

You can listen to the music here (the names of the individual parts can be found in the comments on SoundCloud):

To create the interactive ambient part, I used a small Unity project which consists of just a single slider (left = moderately scary, right = very scary) to mimic the scariness factor in the game. You can download a build for Windows here and try it yourself:


tl;dr: made music for a small video game, listen on SoundCloud

After two years of development, I am proud to release De Motu, an interactive music app, made in cooperation with Iris Fegerl, Paul Brenner and Jan Roth. The project was financed by the MFG Baden-Württemberg.

For me, the app was not just a journey through the blood cycle, but also a long journey through the various aspects of app production. The close collaboration with Jan (music), Paul (graphics) and Iris (concept, testing, everything else) was very challenging and fruitful.

The app’s main menu features four organs – brain, heart, lungs and kidneys – into which the user can zoom in and discover four interactive songs. Both the age and the sex of the user can be adjusted in the app and heavily influence the sound. For instance, a high age results in a muffled sound because of age-induced hearing loss. I also like the idea that the user’s sex is a continuous slider, not just “male”, “female” or “in between”. This reflects the fact that sexes are not sharply divided into two or three groups, but that a person’s sex is rather a tendency towards being male or female. Extreme settings for both sex and age have extreme effects on the music, too – because in reality, being excessively manly or womanly, or being very young or very old, are indeed special conditions.
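Just to illustrate the age-to-timbre idea (this is not the app’s actual Pure Data patch, and the breakpoint values are made up), such a mapping could look like this small C# sketch, where the age setting is turned into a low-pass cutoff frequency:

using System;

// Illustrative sketch: map the age setting (0..100 years) to a low-pass
// cutoff frequency, so high ages result in a muffled, band-limited sound.
// The cutoff values are assumptions, not the app's real parameters.
static class AgeToTimbre
{
    public static double CutoffHz(double age)
    {
        double t = Math.Clamp(age, 0.0, 100.0) / 100.0;
        const double youngCutoff = 16000.0; // assumed full-bandwidth cutoff
        const double oldCutoff = 2000.0;    // assumed "muffled" cutoff
        // Exponential interpolation, so the change is perceptually even.
        return youngCutoff * Math.Pow(oldCutoff / youngCutoff, t);
    }
}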

Inside the organs, users can influence different body functions like breathing, blood cleansing and heart rate, as well as thinking (in a sense). Inside the brain, it is possible to record your own voice (= thoughts) and spin it on a disk – which feels just like the real brain, where thoughts and ideas are constantly re-evaluated and go kind of “back and forth”.

You can check out the game yourself (it’s free!):


Official site: DeMotu-App.com



Idea, concept, project lead + Text: Iris Fegerl
Audio concept + music recordings: Jan Roth
Visual design, animation + interaction design: Paul Brenner
Code, pure data patches + interaction design: Jan Freymann
Mixing: Benjamin Grau
Copy editing: Kemi Fatoba

This game was created using the Sparrow 2D engine for iOS and libpd.

While preparing for a concert next week, I tried out a new pd abstraction with my laptop’s internal microphone. One can create a lot of different sounds by hitting the laptop or the touchpad, or just by typing. The “aaron” abstraction (a reference to the famous “moses” object in pd) loops the input indefinitely; the loop length in milliseconds is given as the creation argument. Sending 0, 1 or 2 to the right inlet selects record, overdub or play loop.

The aaron abstraction is pretty small, so I’ll just paste it here. Copy it into your favorite text editor and save it as a *.pd file.

#N canvas 0 0 1596 817 10;
#X obj 666 319 *~;
#X obj 795 308 *~;
#X obj 628 241 sig~;
#X obj 812 246 sig~;
#X obj 588 487 *~;
#X obj 518 369 *~;
#X obj 485 282 inlet~;
#X obj 439 535 outlet~;
#X obj 685 41 sel 0 1 2;
#X obj 680 205 unpack 0 0;
#X msg 656 86 1 0;
#X msg 684 126 1 1;
#X msg 746 152 0 1;
#X text 839 86 record;
#X text 839 116 overdub;
#X text 836 146 play loop;
#X obj 751 383 delwrite~ d1 \$1;
#X obj 618 403 delread~ d1 \$1;
#X obj 689 7 inlet;
#X connect 0 0 16 0;
#X connect 1 0 16 0;
#X connect 2 0 5 1;
#X connect 2 0 0 1;
#X connect 3 0 1 1;
#X connect 3 0 4 0;
#X connect 4 0 7 0;
#X connect 5 0 7 0;
#X connect 6 0 5 0;
#X connect 6 0 0 0;
#X connect 8 0 10 0;
#X connect 8 1 11 0;
#X connect 8 2 12 0;
#X connect 9 0 2 0;
#X connect 9 1 3 0;
#X connect 10 0 9 0;
#X connect 11 0 9 0;
#X connect 12 0 9 0;
#X connect 17 0 1 0;
#X connect 17 0 4 1;
#X connect 18 0 8 0;
Ohhhh, this post was due a long time ago… Anyway, here comes part two of my thoughts on loudness.

First, a correction: modern classical orchestras usually tune to 440 or 442 Hz; they do not go up to 445 Hz anymore.

I personally think that the reasons for reduced dynamics in story arcs, film music and TV programmes have their roots in the developments of audio and video engineering in the 90s. Digital mixing led to rapidly increasing loudness levels, resulting in a decrease of micro- and macro-dynamics. One cannot blame the engineers for that “one louder” behavior, since they would have lost their clients if they had not complied. Today, more producers care about dynamics instead of loudness, so the loudness war might come to an end bit by bit. The other “loudness problems” discussed in the previous post, however, cannot be reverted easily – we have to live with them now.

Another reason behind the urge to produce “loud” pieces of media (e.g., videos with a high scene cut rate) might be the loss of control over the context in which the media is presented. For instance, a YouTube video might be watched on many different kinds of devices, half of them having shitty displays, half of them having shitty speakers. Therefore, video producers try to create something which works on all possible devices, just as audio engineers mix music so that it even sounds good on the cheapest playback system you can imagine. In music, this goal results in louder mixing; in video production, the need to support many devices and viewing contexts results in “loud” (i.e., intense) videos.

Luckily, there are settings where artists still have control over the context, like concerts, exhibitions, special events and so on – basically every occasion where the art piece is connected to a specific context like a concert hall, museum or gallery.

In music, one could move away from conventional formats like mp3 and release material on different platforms and in different ways. Interactive music has a promising future, because it can be tied to interesting contexts, like games or everyday objects, and provides a longer experience before the listener (or player) gets bored. Additionally, interactive music is simply a consequence of the larger capabilities of playback devices. For instance, compare a minidisc player from the 90s to today’s smartphones. Why should we still use static mp3s on mini-computers with a quad-core CPU and 1 GB of RAM? Interactive music does not need to scream for attention, because it is either meant to stay in the background anyway (which is fine) or easily captures the listener’s attention because it is meant to be played with. The engineered loudness problem would also be partially solved, since the interactive content cannot be completely mixed and mastered in advance. Even mastering the audio in real time is not an option, since it is not (yet?) feasible to implement the whole mastering chain on a consumer device.

If you are a passionate music listener or musician, you have very likely been confronted with the problem of “loudness” in contemporary music. I am not talking about high sound pressure levels at rock concerts – that is another story – but about flat (= compressed), loud recordings on CDs and on the radio. The problem started in the early 90s when digital brickwall limiters were introduced, and it was later dubbed the “loudness war”. Based on the assumption that a record sells better if it is just a little louder than other records, sound engineers pushed the limits of how loud a recording could be. Music lost one of its most important elements: dynamics. These days it looks like the loudness war will come to an end because of new broadcasting regulations and “replay gain”-like countermeasures in online music stores like iTunes.

However, I am sure that the loudness war is not the problem itself but a symptom of a larger trend in (digital) media. First, we must broaden the definition of “loudness” a bit in the direction of “screaming for attention”. Here are a few examples of occasions where the “attention war” takes place:

– commercials and trailers: the louder, the better. Ultra-deep drone sounds, ridiculously deep male voices, superfast cuts and stupid lines of epic blablabla are ok for one trailer. In the average German cinema, I am being bombarded with that shit for about 45 minutes.

– classical music: modern orchestras have the tendency to tune their instruments to a higher pitch (e.g. 445 Hz). Thus, the whole string section sounds more brilliant and louder.

– instrumentation in film music: in modern blockbusters, every film score sounds similar to me in terms of instrumentation and dynamics. Every small phrase or melody is duplicated across every orchestra/synth/choir/whatever section. Thus the score does not contain any surprising ups and downs in dynamics but is just a big sausage of orchestral bwwam – similar to the “sausage waveforms” of over-compressed recordings.

– dramaturgy in modern films: modern blockbusters contain frantic action (and no story!) from beginning to end. Good examples are the Transformers series, The Dark Knight Rises and Pacific Rim. “Loud” scenes are included instead of more story elements or quiet sections, making the film’s story arc appear like a flat line.

All of this the-more-the-better media eventually fails because loudness is measured in relative units (decibels): humans perceive volume relative to other volumes. To make something LOUD, it must be preceded by something quiet. The result of a constant bombardment with loudness is usually numbness, which dulls the interest in things which are really interesting or important. Consumers of media do not have short attention spans in general, but they are treated like idiots anyway. For instance, many radio stations assume an attention span of less than thirty seconds and torture their listeners with ultra-short information snippets, stupid music and “you are listening to YELL-O-RADIO” announcements every 45 seconds.
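For reference, the decibel itself only expresses a ratio relative to some reference value; for sound pressure level, that reference is roughly the threshold of hearing:

$$ L_p = 20 \log_{10}\!\left(\frac{p}{p_{\mathrm{ref}}}\right)\,\mathrm{dB}, \qquad p_{\mathrm{ref}} = 20~\mu\mathrm{Pa} $$

A dB figure is always a comparison to something else, never an absolute amount of loudness.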

In part 2, I will try to find some reasons for the overall loudness problem and list some possible remedies.

Related links:

Long time, no post… I am very sorry for the large gap between this new post and the last one. A list of excuses would be: Christmas, New Year’s Eve, a stomach infection and lots of work for my side job at Klangerfinder in between.

In November, I uploaded a video showing a theremin-like controller consisting of two cans with light sensors at the bottom. Last week, I added a magnetometer (a sensor which measures the direction and strength of a magnetic field) to the setup. The original idea was to use the actual direction and strength of the magnetic field to control something. It was possible to use the sensor as a digital compass, but it was too unreliable to control musical parameters with the data. Therefore, I simply ignored the direction and strength of the magnetic field and just computed a single value based on the fluctuations in those readings. This means that whenever a magnet moves near the sensor, it triggers a control value which roughly correlates with the intensity (i.e., speed) of the motion.
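The actual implementation is the C file combined.c in the lightsensors repo; the following is only a small sketch of the fluctuation idea (written in C# here for readability, with made-up names and constants): ignore the absolute field reading and track how much it changes from sample to sample, with a bit of smoothing so the control value decays gently.

using System;

// Sketch: turn raw magnetometer samples into a single "motion intensity" value.
// Names, the smoothing factor and the units are illustrative assumptions.
class MagnetFluctuation
{
    double lastX, lastY, lastZ; // previous magnetometer sample
    double control;             // smoothed control value (0 = no motion nearby)

    // Call once per new magnetometer sample.
    public double Update(double x, double y, double z)
    {
        // How much did the field vector change since the last sample?
        double dx = x - lastX, dy = y - lastY, dz = z - lastZ;
        double delta = Math.Sqrt(dx * dx + dy * dy + dz * dz);
        lastX = x; lastY = y; lastZ = z;

        // Jump up immediately when the magnet moves, decay slowly afterwards.
        control = Math.Max(delta, control * 0.95);
        return control;
    }
}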

In this fairly simple example, the magnet controls the amount of vibrato in the cello tone. One can/light sensor controls the volume of the cello; the other controls the pitch of the drum loop.

I think the potential of the magnet controller lies in its simplicity. You just have to wave the magnet to “excite” some musical instrument, similar to plucking a string.

Everything you need to build this yourself is already listed in the previous Raspberry Pi post.

The code for this new version is also hosted on GitHub in the lightsensors repo. For this version, I extended the old code a bit; the relevant file is combined.c.

