If you are a passionate music listener or musician, you have very likely been confronted with the problem of “loudness” in contemporary music. I am not talking about high sound pressure levels at rock concerts – that is another story – but about flat (i.e. compressed), loud recordings on CDs and the radio. The problem started in the early 90s, when digital brickwall limiters were introduced, and was later dubbed the “loudness war”. Based on the assumption that a record sells better if it is just a little louder than other records, sound engineers pushed the limits of how loud a recording could be. Music lost one of its most important elements: dynamics. These days it looks like the loudness war will come to an end because of new broadcasting regulations and “replay gain”-like countermeasures in online music stores like iTunes.

However, I am sure that the loudness war is not the problem itself but a symptom of a larger trend in (digital) media. First, we must broaden the definition of “loudness” a bit towards “screaming for attention”. Here are a few examples of occasions where the “attention war” takes place:

- commercials and trailers: the louder, the better. Ultra-deep drone sounds, ridiculously deep male voices, superfast cuts and stupid lines of epic blablabla are ok for one trailer. In the average German cinema, I am being bombarded with that shit for about 45 minutes.

- classical music: modern orchestras tend to tune their instruments to a higher pitch (e.g. A4 = 445 Hz instead of the standard 440 Hz). Thus, the whole string section sounds more brilliant and louder.

- instrumentation in film music: in modern blockbusters, every film score sounds similar to me in terms of instrumentation and dynamics. Every small phrase or melody is duplicated across every orchestra/synth/choir/something section. Thus, the score does not contain any surprising ups and downs in dynamics but is just a big sausage of orchestral bwwam – similar to the “sausage waveforms” of over-compressed recordings.

- dramaturgy in modern films: modern blockbusters contain frantic action (and no story!) from beginning to end. Good examples are the Transformers series, The Dark Knight Rises and Pacific Rim. “Loud” scenes are included instead of more story elements or quiet sections, making the story arc of the film appear like a flat line.

All of this the-more-the-better media eventually fails because loudness is measured in relative units (decibels): humans perceive a volume relative to other volumes. To make something LOUD, it must be preceded by something quiet. The result of constant bombardment with loudness is usually numbness, which dulls the interest in things which are really interesting or important. Consumers of media do not have short attention spans in general, but they are treated like idiots anyway. For instance, many radio stations assume an attention span of less than thirty seconds and torture their listeners with ultra-short information snippets, stupid music and “you are listening to YELL-O-RADIO” announcements every 45 seconds.

In part 2, I will try to find some reasons for the overall loudness problem and list some possible remedies.

Long time, no post… I am very sorry for the large gap between this new post and the last one. A list of excuses would be: Christmas, New Year’s Eve, a stomach infection and lots of work for my side job at Klangerfinder in between.

In November, I uploaded a video showing a theremin-like controller consisting of two cans with light sensors at the bottom. Last week, I added a magnetometer (a sensor which reads the direction and strength of a magnetic field) to the setup. The original idea was to use the actual direction and strength of the magnetic field to control something. It was possible to use the sensor as a digital compass, but the data was too unreliable to control musical parameters with. Therefore, I simply ignored the direction and strength of the magnetic field and just computed a single value based on the fluctuations in those readings. This means that whenever a magnet moves near the sensor, it triggers a control value which roughly correlates with the intensity (i.e. speed) of the motion.
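
Conceptually, the control value is nothing more than a smoothed measure of how much the readings change from one sample to the next. Here is a minimal sketch of that idea in Perl (the real implementation lives in combined.c and is written in C; the three-axis reading and the smoothing factor below are assumptions):

use strict;
use warnings;

# Sketch only, not the actual combined.c logic: derive a single
# "excitation" value from fluctuations in the magnetometer readings.
my @prev   = (0, 0, 0);    # previous (x, y, z) reading
my $excite = 0;            # smoothed control value

sub update_excitation {
    my @now = @_;          # current (x, y, z) reading
    my $diff = 0;
    $diff += abs($now[$_] - $prev[$_]) for 0 .. 2;
    @prev = @now;
    # one-pole smoothing: the value rises while the magnet moves and decays once it stops
    $excite = 0.9 * $excite + 0.1 * $diff;
    return $excite;
}

# e.g. call once per sensor poll: print update_excitation(12, -3, 40), "\n";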

In this fairly simple example, the magnet controls the amount of vibrato in the cello tone. One can/light sensor controls the volume of the cello, the other can/light sensor controls the pitch of the drum loop.

I think the potential of the magnet controller lies in its simplicity. You just have to wave the magnet to “excite” some musical instrument, similar to plucking a string.

Everything you need to know to build this yourself is already listed in the previous Raspberry Pi post.

The code for this new version is also hosted on GitHub in the lightsensors repository. For this version, I extended the old code a bit; the relevant file is combined.c.

In the previous post, I promised to upload a remix of a well-known silent short film this week. The good news is that creating the remixed video and the music was a thrilling experience; the bad news is that I will not upload it on this blog. The images are too grotesque/insane to be presented without an explanation and a proper context (just believe me). After all, this blog is a part of my online self-presentation (like my Facebook profile and SoundCloud) and I do not want people to get a wrong impression of me. If you publish material on the web, you accept the fact that people will watch it in a totally different situation, in a totally different context. That is why I will not simply upload the video to Vimeo. If you want to see it, you can send me an email or post an (empty) comment below (you have to enter an email address to post a comment; however, it is only visible to me).

The video is out there, so there is a small chance that you might stumble upon it anyway. Since I used my identity on the I2P network to spread the video, I will not write about it in detail in this post. Otherwise one would be able to connect my name with that identity on I2P, which would defeat the purpose of anonymity.

The video was remixed using a small C program which I wrote exactly for that purpose. The code analyses some music composed for the movie and rearranges the video frames according to the sound. In fact, the images follow the music, not the other way round. The result was overwhelming in the sense that the machine rearranging the images totally surprised me and that the rearranged images amplified the emotional message of the music. I will probably reuse part of the code to remix other videos which will be more suitable for this blog.

tl;dr the video is too weird for this blog, believe me. Send me a message if you want to see it anyway.

This week, I want to draw your attention to two movies which both deal with aspects of digital technology in the arts, especially music. Both movies are available under Creative Commons licenses on YouTube and Vimeo.

RiP – A remix manifesto

RiP tells the story of the war between copyright and copyleft since its early days at the beginning of the 20th century. It shows how large corporations use copyright law to make money and influence politicians to create even stricter laws. My opinion on copyright is mostly in line with the movie, which states that every piece of art is essentially a remix of earlier works and thus advocates less strict laws for artists. It goes beyond most discussions about copyright which I have read so far because it puts everything into a historical context.

Corporations are completely taking over our culture and telling us that we can only consume it. But we’re saying no. We’re saying we wanna actually create with it, respond to it, take it, mutilate it, cut it up…

Even the movie itself is partially a remix; the current “version” was edited by fans. The rapid, rough cuts are unusual for a documentary, but they make the movie fun to watch, which is why I can totally recommend it, even if you are not that interested in the topic itself.

Press Pause Play

In contrast to RiP, this movie is boring and I do not agree with its overall message. However, it is worth watching because its statements deserve a fierce discussion nonetheless. The movie states that the digital age creates a lot of difficulties for aspiring, talented artists, because they are not recognized among the tons of content which less talented people put online. In the movie, Moby complains about the fact that “today, practically every kid can steal some cracked software and start making electronic music”. Well, so what? I am perfectly capable of digging out small awesome pieces of art among the rubble of cute-little-kitten videos and look-what-I-can-cook posts. That is why I do not see any truth in the claim of Press Pause Play. Apart from that, the scenes and the soundtrack of the movie are beautiful. Also, I do not disagree with all of the opinions stated throughout the movie ;)

Next week, I will upload a ~2 min remix of a short film, which is luckily in the public domain, so it is legal to re-use it.

tl;dr Press play.

The idea

One of the reasons which kept me from producing sound with more than two speakers was the difficulty of multichannel routing in different mixing rooms. Besides the standard 5.1 config, there are many other (even more interesting) ways to position your speakers. In most cases, you mix the multichannel audio in a room A and perform the piece in a room B with a different speaker setup. That is why I decided to develop a tool based on Perl and Csound which is able to “convert” spatial audio from one config to another, at least to some extent.

The rerouting/remixing tool uses DBAP (distance-based amplitude panning) as the spatial panning algorithm to position virtual sound sources on a 2D grid. The sound is played on every available speaker in the setup; the volume depends on the distance between the sound source and the speaker. Thus, DBAP just needs the speaker coordinates, the coordinates of the sound source and a loudness rolloff (depending on the room reverberation) to position the virtual sound source in the room.

Each channel of the original file can be seen as a virtual sound source (a speaker of room A) being rendered by the speakers in room B. This means the original channels from room A are mixed and redistributed over the speakers in room B.
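
For reference, this is roughly what the gain computation looks like in the common DBAP formulation (distances raised to a rolloff-dependent exponent, then normalized so the gains preserve power). The sketch below is only an illustration and may differ in detail from what reroute.pl actually does:

use strict;
use warnings;
use List::Util qw(sum);

# For one virtual source (a speaker position taken from room A),
# compute one gain per speaker in room B.
sub dbap_gains {
    my ($src, $speakers, $rolloff_db) = @_;               # $src = [x, y], $speakers = [[x, y], ...]
    my $a    = $rolloff_db / (20 * log(2) / log(10));     # rolloff given in dB per doubling of distance
    my $blur = 0.001;                                      # small offset, avoids division by zero
    my @dist = map {
        sqrt(($src->[0] - $_->[0])**2 + ($src->[1] - $_->[1])**2 + $blur**2)
    } @$speakers;
    my $k = 1 / sqrt(sum map { 1 / $_**(2 * $a) } @dist);  # normalization: gains preserve power
    return map { $k / $_**$a } @dist;
}

# Example: the first A channel sits at (0 0), room B is the unit square
my @gains = dbap_gains([0, 0], [[0, 0], [1, 0], [0, 1], [1, 1]], 5);
printf "%.3f ", $_ for @gains;
print "\n";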

The script

The easiest way to acquire the script is to download the whole repository as a zip file. Alternatively, you can copy the code from this page, paste it into a text editor and save it as a .pl file, or use git to clone the repository: “git clone https://github.com/janfreymann/patches.git”

Csound is available here. If you do not know anything about Csound, get started at the FLOSS manuals page.

The tool is text-based (yay!), asking you for the name of the audio file, the coordinates of the speakers in rooms A and B, etc. Finally, the tool generates a Csound file, which outputs a multichannel audio file containing the rerouted and remixed channels. To run it, just open a terminal, navigate to the directory where reroute.pl is located and type “perl reroute.pl”. Windows users need to install a free Perl distribution such as ActivePerl. The rest is quite self-explanatory.

Example

This is an example run of reroute.pl (user input follows each prompt; default values are shown in parentheses):

C:\coding\dbap_reroute>perl reroute.pl

----------------------------
Spatial Sound Rerouting
Based on DBAP

----------------------------
Use previous config? (y): n
Routing multichannel input from config A to config B
Config A:
Filename of multichannel file A (a.wav): a.wav
Length of file A (3m0s): 2m30s
Number of speakers in config A (4): 5
Coordinates of speaker/channel A 1 (0 0):
Coordinates of speaker/channel A 2 (0 0): 2 0
Coordinates of speaker/channel A 3 (0 0): 4 0
Coordinates of speaker/channel A 4 (0 0): 4 6
Coordinates of speaker/channel A 5 (0 0): 0 6
Number of speakers in config B (4): 4
Coordinates of speaker/channel B 1 (0 0): 0 0
Coordinates of speaker/channel B 2 (0 0): 1 0
Coordinates of speaker/channel B 3 (0 0): 0 1
Coordinates of speaker/channel B 4 (0 0): 1 1
Amplitude rolloff in location B between 6 db and 1 db: (4): 5
Do you want to scale and center speaker configurations? (y):
Rerouting with the following configuration:
In a.wav: 5 speakers with coordinates
(0 0) (2 0) (4 0) (4 6) (0 6)
Outputting to out.wav: 4 speakers with coordinates
(0 0) (1 0) (0 1) (1 1)
Amplitude rolloff is 5db
Press ENTER to continue. Otherwise, press CTRL+C

Saved config to file lastConf.txt

A0B0--------A1----------A2-----------B1
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
-----------------------------------------
A4B2---------------------A3-----------B3
Saved csound code to reroute.csd
See you!

C:\coding\dbap_reroute>

In the generated csound file, you can see how each output channel is a weighted sum of all input channels:

aA0, aA1, aA2, aA3, aA4 diskin2 "a.wav", 1, 0, 0
aB0 sum (0.999390113057932 * aA0), (0.794764617241656 * aA1),
        (0.447054203879472 * aA2), (0.274052207805497 * aA3),
        (0.021814887035443 * aA4)
aB1 sum (0.021814887035443 * aA0), (0.447054203879471 * aA1),
        (0.794764617241656 * aA2), (0.305593078081537 * aA3),
        (0.0163591970865385 * aA4)
aB2 sum (0.021814887035443 * aA0), (0.305593078081537 * aA1),
        (0.274052207805497 * aA2), (0.447054203879472 * aA3),
        (0.999390113057932 * aA4)
aB3 sum (0.0163591970865385 * aA0), (0.274052207805497 * aA1),
        (0.305593078081537 * aA2), (0.794764617241656 * aA3),
        (0.021814887035443 * aA4)
outs aB0, aB1, aB2, aB3

tl;dr Download this. Go to dbap_reroute using Terminal/CMD. Run “perl reroute.pl”.

Today, I am trying to explain why open software systems are necessary from an artistic point of view. First, let’s look at some…

…bad examples
Consider Avid’s hardware-software policy: they give you some audio interface and their ProTools software. ProTools has its own plugin standard (called RTAS), shitty MIDI mapping, and its own control surfaces like the Control24 which use the proprietary EuCon protocol. In the end, this means you can use your – very expensive – Avid stuff only in combination with ProTools. In the case of the Control24: what a waste of knobs which could control so many other things! I also developed a special hatred for the Digidesign Mbox, which was the only piece of hardware connected to my computer in the last five years that produced a blue screen at regular three-minute intervals.

Another bad example is Apple’s App Store policy for mobile iOS devices. The iPod touch, iPad and iPhone are an awesome platform for media artists to develop their own tools and apps. Sadly, you need to pay just to be allowed to develop your software. If you are cool with that (which I am not), you then have to get permission from Apple to release your app in the App Store, i.e. to share it with other people. The only way to get around this is to root (jailbreak) the device, which often comes with a loss of warranty or is just plain illegal in some countries. The main justification for this closed-store policy is security, which is obviously ridiculous considering the bunch of harmful apps which go public in the App Store every day.

If a system is not open to a substantial extent, it is worthless

I think closed systems like the two examples above obstruct the way media artists work and think: they interconnect different devices and software, think out of the box and work with a small financial budget, which means they need to at least try it before they buy it.
A company which dictates exactly how you should use its product implies that you are stupid. I am not stupid. Computer users are not stupid. Media artists are not stupid.
If a system cannot be tweaked to comply with one’s own (original) ideas, it is worthless. For instance, if Avid dictates how a studio setup should look, their products are worthless for people who have their own ideas about working in a studio. Of course, the effect-chain-bus-based mixing setup is quite flexible, but what about macros, custom keyboard shortcuts and OpenSoundControl?
To state this again: “professional” software which treats professionals as uncreative idiots is junk.

Everything should connect with everything else

Media software should ideally connect with every other piece of media software. I imagine patching a (virtual) audio cable from my favorite software synth’s output into another software synth’s awesome lowpass filter, ring-modulating it with some LFO and connecting it to the main hardware output, which ends up in some nice gritty-sounding guitar amp. You get the idea? Finally, some…

…good examples

The best example of connection-happy software is Max/MSP/Jitter, which manages to work with many other programs using different standards like ReWire, VST, MIDI, OSC, … It is also possible to program your own extensions for Max/MSP (yay!). Cycling ’74 (the company behind Max) needs to keep the code closed-source to survive on the market – I acknowledge that – but they have a vivid community which helps them improve their software (like open-source projects do). In terms of biology, Max/MSP is the happy swinging bisexual hermaphrodite while ProTools is some weirdo who refuses to breed with anyone except his own sisters.
Max also has a little cousin called Pure Data, which is an open-source branch of Max. Pure Data lacks some features of Max, but is otherwise very similar (and it’s free, after all).

This is a list of more “connection-happy” and extensible software:

- Reaper: a DAW which supports Python scripting, loads nearly all kinds of plugins, supports many audio and video codecs and has a very active community working together with the developers.
- OpenSoundControl (OSC): network-based sound control; it beats MIDI in every respect. You can use it to control your patches via the loopback interface (127.0.0.1) on your laptop or to control your buddy’s synthesizer in Australia via the Internet (see the sketch after this list).
- SuperCollider: similar to Max/MSP and PureData
- Arduino and Raspberry Pi: the hardware equivalents of Max/MSP. Small boards which connect to sensors, control systems, LAN… The Arduino is a microcontroller board programmed in a C/C++ dialect using an IDE derived from Processing; the Raspberry Pi is a single-board computer running a complete Debian-based Linux distribution.
- JACK (Jack Audio Connection Kit): a realtime audio server with a patchbay to connect inputs and outputs of different applications. It works on Linux and OS X; my attempts to use it on Windows 7 have failed so far.
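
To give an idea of how lightweight OSC is, here is a minimal Perl sketch that sends a single float message over UDP to a patch listening on the loopback interface (the address pattern and the port are made up for this example):

use strict;
use warnings;
use IO::Socket::INET;

sub osc_pad {                        # OSC strings are null-padded to a multiple of 4 bytes
    my ($s) = @_;
    return $s . "\0" x (4 - length($s) % 4);
}

my $addr  = "/synth/cutoff";         # hypothetical address pattern
my $value = 0.5;
my $msg   = osc_pad($addr) . osc_pad(",f") . pack("f>", $value);   # ",f" = one float argument

my $sock = IO::Socket::INET->new(
    PeerAddr => "127.0.0.1",         # loopback: control a patch on the same machine
    PeerPort => 9000,
    Proto    => "udp",
) or die "socket: $!";
$sock->send($msg);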

tl;dr closed systems using proprietary standards are junk from an artistic perspective. Use open systems like Max/MSP, Reaper, OSC and tweak them to your needs.

PS: I know this article contains a lot of ranting about Avid. One could say that Avid is my scapegoat for many other companies which act the same or even worse. The thing is that Avid’s ProTools is considered a “standard” in professional studios, which totally pisses me off, because I do not see any rational justification for it. Similar things which make me angry: Digital Rights Management, Steinberg’s portaudio.dll blacklisting many ASIO devices, Apple’s adapter jungle, Dolby’s market strategy, …


Are you able to figure out what this patch does?

(screenshot of the generated patch: delayUnit1)

Probably not, since it was not written by a human being, but by a Perl script, which interpreted the following scheme:

250
-----------------------9----
----------------------------
-------------------7--------
-----------------6----------
---------------9------------
----------------------------
-------------------2--------
-------------------1--------
-------------------2--------
-------------------1--------
-------------------2--------
----2-----------------------
----1-----------------------
----2-----------------------
feedback
6 2 0.5
12 8 0.4
15 13 0.4

It is a sketch of a stereo delay unit. The time axis runs from top to bottom, each line representing 250 milliseconds (the number in the first line of the file). The digits in between the dashes are delays with an amplitude between 1 and 9 (translated to 0.1, 0.2, etc.). The panning depends on the horizontal position of the digit.

The feedback lines are defined below the “feedback” keyword. For instance, “6 2 0.5” means feedback from the delay in line 6 to the delay in line 2 with a gain of 0.5.

These delay units are translated to a Pure Data patch using this Perl script. Since it is very difficult to arrange a graph like this patch in a human-readable manner, the Perl script places every object at a random position. At least it looks like art then :P To run the script, open a terminal window (assuming Perl is installed) and run “perl delayGen.pl [filename.txt] > [patchname.pd]” to create a new Pd abstraction.
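
For illustration, this is roughly how such a scheme can be parsed. The sketch below is not the actual delayGen.pl; the row timing and the panning computation are assumptions based on the description above:

use strict;
use warnings;

my @lines = <>;                         # read the scheme from stdin or a file argument
chomp @lines;

my $line_ms = shift @lines;             # first line: duration of one row in milliseconds, e.g. 250
my (@taps, @feedback);
my $row = 0;

for my $line (@lines) {
    if ($line =~ /^feedback/) { $row = -1; next; }      # switch to the feedback section
    if ($row >= 0) {
        if ($line =~ /(\d)/) {                          # a digit on a dashed line marks a delay tap
            push @taps, {
                row     => $row,                        # referenced by the feedback lines
                time_ms => $row * $line_ms,             # onset time of the tap
                amp     => $1 / 10,                     # 1..9 -> 0.1..0.9
                pan     => index($line, $1) / length($line),   # horizontal position -> left..right
            };
        }
        $row++;
    } elsif ($line =~ /^(\d+)\s+(\d+)\s+([\d.]+)/) {    # "from to gain"
        push @feedback, { from => $1, to => $2, gain => $3 };
    }
}

printf "tap at %d ms, amp %.1f, pan %.2f\n", @{$_}{qw(time_ms amp pan)} for @taps;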

You can see two delay units in action in this video:

I like the delay-generator for several reasons:

  • there are no (theoretical) limitations on the number of delays
  • you can have multiple feedback lines
  • it is modular and compatible with a lot of other stuff; you could even chain several of those units together to create delays of delays of delays […]
  • it is free, open source and extensible

Future work on this script/patch might include adjustable delay times, modulations of the signal and decorrelation of multiple delays on one line (which would allow adding multiple numbers on one dashed line in the text file).

The Pure Data file format is pretty self-explanatory, but if you want to generate patches on your own and need a few pointers on how to do it, feel free to drop a comment.
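
As a starting point, here is a hedged sketch of how a Perl script can write a minimal Pd patch (a 440 Hz sine routed to the sound card); the object indices in the connect lines refer to the order in which the objects are declared:

use strict;
use warnings;

open my $fh, ">", "minimal.pd" or die "cannot write: $!";
print $fh "#N canvas 0 0 450 300 10;\n";      # the canvas (window)
print $fh "#X obj 50 50 osc~ 440;\n";         # object 0: sine oscillator
print $fh "#X obj 50 120 dac~;\n";            # object 1: audio output
print $fh "#X connect 0 0 1 0;\n";            # osc~ outlet 0 -> dac~ left inlet
print $fh "#X connect 0 0 1 1;\n";            # osc~ outlet 0 -> dac~ right inlet
close $fh;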

tl;dr Text file -> Perl Script -> Pd abstraction -> wonderful delay units
