One assignment in this summer’s composition course was to take the theme of the well-known medieval hymn “Dies irae”, modify it slightly, and write different variations on it.

You can find the score as a PDF here:

dies_irae_2_0

The theme’s cinematic feel has been exploited a lot; you can hear it quoted in many movies:

The following track was made as a homework assignment. As a group, we had drawn a “collective painting” on the blackboard, with more or less serious artistic intentions 😉 Each of us was then asked to create a piece of music and/or sound design inspired by the picture. I chose a style I had always wanted to try out and integrated the picture’s properties mostly as words describing the different items.

 

On Wednesday, June 24, 7:30 pm, you are welcome to attend a very special concert at the University of Music, Trossingen, Germany. Musikdesign students are again cooperating with the Sinfonietta ensemble to fill the concert hall with an all-new sound. One of the pieces will be accompanied by a video mashup I created specifically for it.

On Saturday, June 27, 8:30 pm, one of the pieces I composed and produced during my Musikdesign studies will be played at the Next Generation Festival 6.0 at the ZKM, Karlsruhe, Germany. The piece is called “Core Audio” and is essentially an immersive sonification of the programming game “Core Wars”. Using the “Klangdom” speaker setup, with four subwoofers and more than 30 speakers distributed around and above the audience, the piece puts the listener’s head directly into the digital chaos of an ongoing Core Wars tournament.

Links:

I will try to publish the visuals and the Core Audio sonification once the concerts are done.

 

Edit: this is a video of “Controversation”, Ron Freyenschlag’s final-year project, played by the Sinfonietta ensemble in Trossingen and conducted by Sven Kiebler. I used Reaper to control the video playback during the performance, plus a bit of Pure Data and Processing for the real-time generated visuals.

During my second year at the music conservatory in Trossingen, I worked with two game design students from the National University of Singapore. There is a cooperation between Trossingen and NUS in which Musikdesign students create soundtracks for animated films made by students in Singapore. The Singapore projects are final-year projects, and one of them, “The Last Spark”, was a video game rather than a film. Since I am very interested in interactive (=game) music, I immediately picked the only game on the list instead of one of the films. The Last Spark is a 3D video game with horror and stealth elements. Its story is adapted from “The Little Match Girl”, a tale by Hans Christian Andersen.

Apart from the composition and sound design tasks, I also integrated the sounds directly into the Unity project, which turned out to be very effective (once Hieu and I had come up with a git-based workflow, which took some time). Unity 5 is great for 3D audio and in-game mixing, but it lacks some important features for creating interactive sound; for larger projects, Wwise seems to be a good choice. In this case, I hacked together a few C# scripts to create an interactive ambient piece for a hide-and-seek part of the game where the player has to sneak past a monster: the sounds get more intense and scary the closer the player is to the monster. Apart from the horror elements, the main objective of the music was to capture the mood of a teenage girl who is in serious trouble with her family and at school. This is why the music sounds sad (sometimes exaggerated and melodramatic) and, in some parts, childish.
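To give an idea of how such a distance mapping can look, here is a minimal sketch in the spirit of those scripts. It is not the actual project code; the class name, the fields and the crossfade curve are my own assumptions for illustration.

using UnityEngine;

// Hypothetical sketch: fade between two looping ambient layers
// depending on how close the player is to the monster.
public class ScaryAmbient : MonoBehaviour
{
    public Transform player;
    public Transform monster;
    public AudioSource calmLayer;   // looping base ambience
    public AudioSource scaryLayer;  // looping horror layer, faded in when the monster is near
    public float maxDistance = 30f; // beyond this distance only the calm layer is audible

    void Update()
    {
        float d = Vector3.Distance(player.position, monster.position);
        // fear: 0 = monster far away, 1 = monster right next to the player
        float fear = 1f - Mathf.Clamp01(d / maxDistance);
        // equal-power crossfade keeps the overall level roughly constant
        scaryLayer.volume = Mathf.Sin(fear * Mathf.PI * 0.5f);
        calmLayer.volume  = Mathf.Cos(fear * Mathf.PI * 0.5f);
    }
}

In a real project one would probably drive an AudioMixer snapshot transition instead of raw source volumes, but the mapping idea stays the same.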

The sites of Kim and Hieu can be found here: Kim Van / Trung Hieu Nguyen.

Christian Fischer, a fellow student in Trossingen, was kind enough to play the cello on some of the recordings.

You can listen to the music here (the names of the parts can be found in the comments on SoundCloud):

To create the interactive ambient part, I used a small Unity project which consists of just a single slider (left = moderately scary, right = very scary) that mimics the scary factor from the game. You can download a Windows build here and try it yourself:

https://www.dropbox.com/s/y1d737iisxaqr5c/interactive_ambient_demo.zip?dl=0

tl;dr: made music for a small video game, listen on SoundCloud

After two years of development, I am proud to release De Motu, an interactive music app made in cooperation with Iris Fegerl, Paul Brenner and Jan Roth. The project was funded by the MFG Baden-Württemberg.

For me, the app was not just a journey through the circulatory system, but also a long journey through the various aspects of app production. The close collaboration with Jan (music), Paul (graphics) and Iris (concept, testing, everything else) was very challenging and fruitful.

The app’s main menu features four organs – brain, heart, lungs and kidneys – into which the user can zoom to discover four interactive songs. Both the age and the sex of the user can be adjusted in the app and heavily influence the sound. For instance, a high age results in a muffled sound because of age-induced hearing loss. I also like the idea that the user’s sex is a continuous slider, not just “male”, “female” or “in between”. This reflects the fact that the sexes are not sharply divided into two or three groups; a person’s sex is rather a tendency towards being male or female. Extreme settings for both sex and age have extreme effects on the music, too, because in reality, being excessively manly or womanly, or being very young or very old, are indeed special conditions.
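To make the age mapping concrete, here is a rough sketch. The app itself drives Pure Data patches via libpd, so this function and all of its numbers are made-up illustrations, not the actual app code.

using System;

// Illustrative sketch only: one way to map the age slider to a
// low-pass cutoff that mimics age-induced hearing loss.
static class AgeMapping
{
    // age01: slider position, 0 = very young, 1 = very old
    public static double AgeToCutoffHz(double age01)
    {
        const double youngEar = 18000.0; // assumed upper limit of a young ear
        const double oldEar   = 4000.0;  // deliberately muffled extreme setting
        // Interpolate on a logarithmic scale, since we perceive
        // frequency roughly logarithmically.
        return youngEar * Math.Pow(oldEar / youngEar, age01);
    }
}

An analogous continuous mapping can be applied to the sex slider, so that extreme slider positions produce audibly extreme results.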

Inside the organs, users can influence different body functions such as breathing, blood cleansing and heart rate, as well as thinking (in a sense). Inside the brain, you can record your own voice (=thoughts) and spin it on a disk, which feels a lot like the real brain, where thoughts and ideas are constantly re-evaluated and go back and forth.

You can check out the app yourself in the App Store (it’s free!).


Official site: DeMotu-App.com


Credits:

Idea, concept, project lead + text: Iris Fegerl
Audio concept + music recordings: Jan Roth
Visual design, animation + interaction design: Paul Brenner
Code, Pure Data patches + interaction design: Jan Freymann
Mixing: Benjamin Grau
Copy editing: Kemi Fatoba

The app was created using the Sparrow 2D engine for iOS and libpd.

While preparing for a concert next week, I tried out a new Pd abstraction with my laptop’s internal microphone. You can create a lot of different sounds by hitting the laptop or the touchpad, or just by typing. The “aaron” abstraction (a reference to Pd’s famous “moses” object) loops the input indefinitely, with the loop length given in milliseconds as the creation argument. The right inlet selects the mode: 0 = record, 1 = overdub, 2 = play loop.

The aaron abstraction is pretty small, so I’ll just paste it here. Copy it into your favorite text editor and save it as “aaron.pd”; Pd looks up abstractions by file name, so you can then use it in a patch as, for example, [aaron 1000] for a one-second loop.

#N canvas 0 0 1596 817 10;
#X obj 666 319 *~;
#X obj 795 308 *~;
#X obj 628 241 sig~;
#X obj 812 246 sig~;
#X obj 588 487 *~;
#X obj 518 369 *~;
#X obj 485 282 inlet~;
#X obj 439 535 outlet~;
#X obj 685 41 sel 0 1 2;
#X obj 680 205 unpack 0 0;
#X msg 656 86 1 0;
#X msg 684 126 1 1;
#X msg 746 152 0 1;
#X text 839 86 record;
#X text 839 116 overdub;
#X text 836 146 play loop;
#X obj 751 383 delwrite~ d1 \$1;
#X obj 618 403 delread~ d1 \$1;
#X obj 689 7 inlet;
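#X text 330 282 audio input;
#X text 320 555 looped output;
#X text 770 7 mode: 0 = record \, 1 = overdub \, 2 = play loop;
#X text 900 383 \$1 = loop length in ms;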
#X connect 0 0 16 0;
#X connect 1 0 16 0;
#X connect 2 0 5 1;
#X connect 2 0 0 1;
#X connect 3 0 1 1;
#X connect 3 0 4 0;
#X connect 4 0 7 0;
#X connect 5 0 7 0;
#X connect 6 0 5 0;
#X connect 6 0 0 0;
#X connect 8 0 10 0;
#X connect 8 1 11 0;
#X connect 8 2 12 0;
#X connect 9 0 2 0;
#X connect 9 1 3 0;
#X connect 10 0 9 0;
#X connect 11 0 9 0;
#X connect 12 0 9 0;
#X connect 17 0 1 0;
#X connect 17 0 4 1;
#X connect 18 0 8 0;
Ohhhh, this post was due a long time ago… Anyway, here comes part two of my thoughts on loudness.
First, a correction: modern classical orchestras usually tune to 440 or 442 Hz; they no longer go up to 445.
I personally think that the reasons for the reduced dynamics in story arcs, film music and TV programmes have their roots in the developments of audio and video engineering in the 90s. Digital mixing led to rapidly increasing loudness levels, which in turn decreased both micro- and macrodynamics. One cannot blame the engineers for that “one louder” behavior, since they would have lost their clients had they not complied. Today, more producers care about dynamics instead of loudness, so the loudness war might come to an end bit by bit. The effects on the other “loudness problems” discussed in the previous post, however, cannot easily be reverted; we have to live with them now.
Another reason behind the urge to produce “loud” pieces of media (e.g., videos with a high scene-cut rate) might be the loss of control over the context in which the media is presented. For instance, a YouTube video might be watched on many different kinds of devices, half of them with shitty displays, half of them with shitty speakers. Video producers therefore try to create something that works on all possible devices, just as audio engineers mix music that sounds good even on the cheapest playback system you can imagine. In music, this goal results in louder mixes; in video production, the need to support many devices and viewing contexts results in “loud” (i.e., intense) videos.
Luckily, there are settings where artists still have control over the context: concerts, exhibitions, special events and so on; basically every occasion where the art piece is tied to a specific place like a concert hall, museum or gallery.

In music, one could move away from conventional formats like MP3 and release material on different platforms and in different ways. Interactive music has a promising future, because it can be tied to interesting contexts, like games or everyday objects, and provides a longer experience before the listener (or player) gets bored. Interactive music is also simply a consequence of the growing capabilities of playback devices: compare a minidisc player from the 90s with today’s smartphones. Why should we still play static MP3s on mini-computers with a quad-core CPU and 1 GB of RAM?

Interactive music does not need to scream for attention: it is either meant to stay in the background anyway (which is fine), or it easily captures the listener’s attention because it is meant to be played with. The engineered loudness problem would also be partially solved, since interactive content cannot be completely mixed and mastered in advance. Even mastering the audio in real time is not an option, since it is not (yet?) feasible to run a whole mastering chain on a consumer device.