Between September 2015 and June 2016, I worked on my final year project for the Musikdesign studies at Trossingen. It is essentially a single-player audio game in which the player’s movements are tracked inside a room.

The title of the game is “Morpheus’ Stairs”. Morpheus is a reference to the ancient Greek god of dreams, and the stairs refer to the fact that the player has to descend through several levels (similar to the concept of early dungeon crawlers like Rogue and NetHack).

It is easiest to describe the whole game from the player’s perspective:

“I open the door labeled Morpheus Stairs and enter a room with blacked-out windows, no lights except for the glow of a laptop in the corner. The game developer gives me a special pair of headphones with a small printed circuit board attached. To play the game, I have to wear a sleeping mask and stand in the middle of the room. After a short calibration (presumably of the head and position tracker), the game starts.
A whispering voice explains that I have to find a specific spot in the room and that I have to listen closely and walk carefully.

After the introduction, another voice says that I have to look for the prime, probably referring to the musical interval (a unison).

A piano starts to play two notes. After a while, I notice that the interval between the notes changes as I turn around. Once I have found a prime, I start walking towards it.
I am afraid of hitting the walls (since I cannot see anything). However, the tracking system detects the proximity of a wall very accurately and plays a clear acoustic warning.

Inside each level, I use a different system of acoustic navigation to zero in on the entrance to the next level. The whispering voice tells me how to navigate. In the seventh level, the voice is silent. I get confused and start to walk around, looking for the exit. The sound of a beating heart grows louder and louder, hammering into my ears. Walking around, I seem to be close to a wall more and more often. Are the walls closing in on me? Are the acoustic warnings still reliable? Is the tracking system broken?

Bravely, I walk a few steps, disregarding the blaring sound telling me that there is a wall. Suddenly, the heartbeat stops. Everything I have heard before is played backwards at a very high pitch. Then… silence, except for the calm waves of the sea. Game over.”

To experience the game yourself, you can download a standalone version for Mac OS X. In the standalone, the tracking system is replaced by a playable third-person character. You can see the room, but the exits are hidden.

There is also a video of a complete walkthrough:

The tracking system transfers the positioning data wirelessly to an Arduino, which is connected via USB to a MacBook. The USB data is read via an emulated serial port, translated into OSC packets and forwarded to the Unity application that you can see on screen in the video. Thus the real movements in the room are projected directly into the virtual room of the game.
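To give an idea of what such a bridge can look like, here is a minimal sketch in C# (not the code actually used for the installation). It assumes that the Arduino prints lines like “1.23,4.56” (x and y position) on a serial device called /dev/tty.usbmodem1411 at 115200 baud, and that Unity listens for an OSC message with the address /position on UDP port 9000 – the device name, line format, address and port are made up for the example.

// Minimal serial-to-OSC bridge (illustrative sketch only).
// Assumptions: lines of the form "x,y" arrive on the serial port and
// the Unity side listens for "/position" with two floats on UDP 9000.
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO.Ports;
using System.Net.Sockets;
using System.Text;

class SerialToOsc
{
    static void Main()
    {
        var serial = new SerialPort("/dev/tty.usbmodem1411", 115200);
        serial.Open();
        var udp = new UdpClient("127.0.0.1", 9000);

        while (true)
        {
            string line = serial.ReadLine();                 // e.g. "1.23,4.56"
            string[] parts = line.Trim().Split(',');
            if (parts.Length != 2) continue;
            float x = float.Parse(parts[0], CultureInfo.InvariantCulture);
            float y = float.Parse(parts[1], CultureInfo.InvariantCulture);

            byte[] packet = OscMessage("/position", x, y);
            udp.Send(packet, packet.Length);
        }
    }

    // Builds a minimal OSC message: padded address string, type tags ",ff",
    // then the two float arguments in big-endian byte order.
    static byte[] OscMessage(string address, float a, float b)
    {
        var bytes = new List<byte>();
        bytes.AddRange(OscString(address));
        bytes.AddRange(OscString(",ff"));
        bytes.AddRange(OscFloat(a));
        bytes.AddRange(OscFloat(b));
        return bytes.ToArray();
    }

    // OSC strings are NUL-terminated and padded to a multiple of four bytes.
    static byte[] OscString(string s)
    {
        int padded = (s.Length / 4 + 1) * 4;
        var buf = new byte[padded];
        Encoding.ASCII.GetBytes(s, 0, s.Length, buf, 0);
        return buf;
    }

    // OSC floats are 32-bit big-endian.
    static byte[] OscFloat(float f)
    {
        byte[] b = BitConverter.GetBytes(f);
        if (BitConverter.IsLittleEndian) Array.Reverse(b);
        return b;
    }
}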

This final year project would not have been possible without the support of many people who tested early versions, provided hardware and gave advice on the content and game mechanics. Thank you!

One assignment in this summer’s composition course was to take the theme of the well-known medieval hymn Dies irae, modify it slightly and create different variations based on it.

You can find the score as a PDF here:


The theme’s cinematic feel has been exploited a lot; there are many movies in which you can hear quotations of the theme:

The following track was made as a homework assignment. As a group, we had drawn a “collective painting” on the blackboard – with more or less serious artistic intentions 😉 Afterwards, everybody was to create a piece of music and/or sound design inspired by the picture. I chose a style I had always wanted to try out and integrated the picture’s properties mostly as words describing the different items.


On Wednesday, June 24, 7.30 pm, you are welcome to attend a very special concert at the University of Music, Trossingen, Germany. Musikdesign students cooperate again with the Sinfonietta ensemble to fill the concert hall with an all-new sound. One of the pieces will be accompanied by a video mashup that I created specifically for it.

On Saturday, June 27, 8.30 pm, a piece I composed and produced for the Musikdesign studies will be played at the Next Generation Festival 6.0 at the ZKM, Karlsruhe, Germany. The piece is called “Core Audio” and is essentially an immersive sonification of the programming game “Core Wars”. Using the “Klangdom” speaker setup with four subwoofers and more than 30 speakers distributed around and above the audience, the piece puts the listener’s head directly into the digital chaos of an ongoing Core Wars tournament.


I will try to publish the visuals and the Core Audio sonification once the concerts are done.


Edit: this is a video of “Controversation”, Ron Freyenschlag’s final year project, played by the Sinfonietta Ensemble in Trossingen, conducted by Sven Kiebler. I used Reaper to control the video playback during the performance and a bit of PureData and Processing for the realtime-generated visuals.

During my second year at the music conservatory in Trossingen, I worked together with two game design students from the National University of Singapore. There is a cooperation between Trossingen and NUS in which Musikdesign students create soundtracks for animated films made by students in Singapore. The films are final year projects, as was “The Last Spark”, a video game. Since I am very interested in interactive (i.e. game) music, I immediately picked the only game from the list of projects instead of one of the films. The Last Spark is a 3D video game with horror and stealth elements. Its story is adapted from “The Little Match Girl”, a tale by Hans Christian Andersen.

Apart from the composition and sound design tasks, I also integrated the sounds directly into the Unity project, which turned out to be very effective (once Hieu and I managed to come up with a git-based workflow, which took some time). Unity 5 is great for 3D audio and in-game mixing, but it lacks some important features for creating interactive sounds; for larger projects, Wwise seems to be a good choice. In this case, I hacked a few scripts in C# to create an interactive ambient piece for a hide-and-seek part of the game where the player has to sneak past a monster. The sounds get more intense and scary the closer the player is to the monster. Apart from the horror elements, the main objective of the music was to capture the mood of a teenage girl who has serious trouble with her family and at school. This is why the music sounds sad (sometimes exaggerated, even melodramatic) and in some parts childish.
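My scripts from the game are not included here, but the distance-driven part boils down to very little code. The following is a minimal sketch of the idea; the field names, the two audio layers and the distance range are illustrative, not the actual setup from The Last Spark.

using UnityEngine;

// Sketch: crossfade a calm and a scary ambient layer depending on how close
// the player is to the monster (illustrative only, not the original script).
public class ScaryAmbience : MonoBehaviour
{
    public Transform player;
    public Transform monster;
    public AudioSource calmLayer;     // looping pad, loudest when far away
    public AudioSource scaryLayer;    // looping texture, loudest when close
    public float farDistance = 30f;   // beyond this distance: completely calm
    public float nearDistance = 3f;   // below this distance: maximum intensity

    void Update()
    {
        float d = Vector3.Distance(player.position, monster.position);

        // 0 = far away and calm, 1 = right next to the monster.
        float intensity = Mathf.InverseLerp(farDistance, nearDistance, d);

        calmLayer.volume = 1f - intensity;
        scaryLayer.volume = intensity;

        // A slight pitch rise adds urgency at close range.
        scaryLayer.pitch = Mathf.Lerp(1f, 1.15f, intensity);
    }
}

Collapsing the distance into a single 0–1 intensity value keeps all audio parameters driven from one place, which makes it easy to tweak the mapping later.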

The sites of Kim and Hieu can be found here: Kim Van / Trung Hieu Nguyen.

Christian Fischer, a fellow student in Trossingen, was so kind as to play the cello for some of the recordings.

You can listen to the music here (the names of the parts can be found in the comments on SoundCloud):

To develop the interactive ambient part, I used a small Unity project that consists of just a single slider (left = moderately scary, right = very scary) to mimic the scare factor in the game. You can download a build for Windows here and try it yourself:
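The wiring behind that demo is equally small. As a sketch (again with illustrative names, not the original code), a UI slider can drive the same two-layer crossfade directly:

using UnityEngine;
using UnityEngine.UI;

// Sketch of the slider demo: the slider value (0..1) replaces the
// distance-based intensity from the game (illustrative only).
public class ScarySlider : MonoBehaviour
{
    public Slider slider;            // 0 = moderately scary, 1 = very scary
    public AudioSource calmLayer;
    public AudioSource scaryLayer;

    void Update()
    {
        float intensity = slider.value;
        calmLayer.volume = 1f - intensity;
        scaryLayer.volume = intensity;
    }
}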

tl;dr: made music for a small video game, listen on SoundCloud

After two years of development, I am proud to release De Motu, an interactive music app, made in cooperation with Iris Fegerl, Paul Brenner and Jan Roth. The project was financed by the MFG Baden-Württemberg.

For me, the app was not just a journey through the circulatory system, but also a long journey through the various aspects of app production. The close work with Jan (music), Paul (graphics) and Iris (concept, testing, everything else) was very challenging and fruitful.

The app’s main menu features four organs – brain, heart, lungs and kidneys – into which the user can zoom to discover four interactive songs. Both the age and the sex of the user can be adjusted in the app and heavily influence the sound. For instance, a high age results in a muffled sound because of age-induced hearing loss. I also like the idea that the user’s sex is a continuous slider, not just “male”, “female” or “in between”. This reflects the fact that the sexes are not sharply divided into two or three groups, but that a person’s sex is rather a tendency towards being male or female. Extreme settings for both sex and age have extreme effects on the music, too – because in reality, being excessively manly or womanly, or being very young or very old, are indeed special conditions.
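As a rough illustration of the kind of mapping involved (the app itself does this inside the Pure Data patches driven via libpd, not in C#), age could be turned into a lowpass cutoff like this; the exact cutoff range is an assumption for the example:

using System;

// Illustrative mapping from age to a lowpass cutoff frequency, mimicking
// age-induced hearing loss (not the app's actual code or values).
static class AgeMapping
{
    // Maps an age in years (clamped to 0..100) to a cutoff in Hz.
    public static double AgeToCutoffHz(double age)
    {
        double t = Math.Min(Math.Max(age / 100.0, 0.0), 1.0);
        // Interpolate exponentially from ~16 kHz (young) down to ~2 kHz (old),
        // since perceived brightness scales roughly logarithmically.
        return 16000.0 * Math.Pow(2000.0 / 16000.0, t);
    }
}

With this curve, an age of 20 gives a cutoff of roughly 10.6 kHz, while an age of 80 pulls it down to about 3 kHz.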

Inside the organs, users can influence different body functions such as breathing, blood cleansing and heart rate, as well as thinking (in a sense). Inside the brain, it is possible to record your own voice (i.e. your thoughts) and spin it on a disk – which feels a bit like the real brain, where thoughts and ideas are constantly re-evaluated and go back and forth.

You can check out the game yourself (it’s free!):


Official site:



Idea, concept, project lead + text: Iris Fegerl
Audio concept + music recordings: Jan Roth
Visual design, animation + interaction design: Paul Brenner
Code, Pure Data patches + interaction design: Jan Freymann
Mixing: Benjamin Grau
Copy editing: Kemi Fatoba

This game was created using the Sparrow 2D engine for iOS and libpd.

While preparing for a concert next week, I tried out a new pd abstraction with my laptop’s internal microphone. One can create a lot of different sounds by hitting the laptop or the touchpad, or just by typing. The “aaron” abstraction (a reference to the famous “moses” object in pd) loops the input indefinitely, with the loop length given in milliseconds as the creation argument. The options on the right inlet are: 0 = record, 1 = overdub, 2 = play loop.

The aaron abstraction is pretty small, so I will just paste it here. Copy it into your favorite text editor and save it as a *.pd file.

#N canvas 0 0 1596 817 10;
#X obj 666 319 *~;
#X obj 795 308 *~;
#X obj 628 241 sig~;
#X obj 812 246 sig~;
#X obj 588 487 *~;
#X obj 518 369 *~;
#X obj 485 282 inlet~;
#X obj 439 535 outlet~;
#X obj 685 41 sel 0 1 2;
#X obj 680 205 unpack 0 0;
#X msg 656 86 1 0;
#X msg 684 126 1 1;
#X msg 746 152 0 1;
#X text 839 86 record;
#X text 839 116 overdub;
#X text 836 146 play loop;
#X obj 751 383 delwrite~ d1 \$1;
#X obj 618 403 delread~ d1 \$1;
#X obj 689 7 inlet;
#X connect 0 0 16 0;
#X connect 1 0 16 0;
#X connect 2 0 5 1;
#X connect 2 0 0 1;
#X connect 3 0 1 1;
#X connect 3 0 4 0;
#X connect 4 0 7 0;
#X connect 5 0 7 0;
#X connect 6 0 5 0;
#X connect 6 0 0 0;
#X connect 8 0 10 0;
#X connect 8 1 11 0;
#X connect 8 2 12 0;
#X connect 9 0 2 0;
#X connect 9 1 3 0;
#X connect 10 0 9 0;
#X connect 11 0 9 0;
#X connect 12 0 9 0;
#X connect 17 0 1 0;
#X connect 17 0 4 1;
#X connect 18 0 8 0;
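#X text 40 560 aaron: loops the audio input on a \$1 ms delay line (delwrite~/delread~ d1);
#X text 40 580 right inlet: 0 = record / 1 = overdub / 2 = play loop (via sel 0 1 2);
#X text 40 600 the first sig~ gates the live input into the delay line and the output;
#X text 40 620 the second sig~ gates loop playback and the overdub feedback into delwrite~;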