Thursday, December 27, 2012

Musical key and mood

Key determines mood? Really?!

It is often argued that with the advent of equal temperament all differences between musical keys disappeared, i.e. C major doesn't sound fundamentally different from Eb major. Yet composers still attribute certain moods to certain musical keys. In part this is probably determined by tradition, and in part it may be influenced by the mechanics of playing an instrument: playing a piece in C major on a piano (using only white keys) is quite a different experience from playing that same piece in C# major (using many black and some white keys) because of the physical layout of the black and white keys. Also, bowed instruments will often play an F# differently from a Gb, even though both are the same note on a piano.

So what are these traditional moods associated with certain keys?

This list is based on the one compiled by Christian Schubart in his book Ideen zu einer Ästhetik der Tonkunst...
  • C major: completely pure key; speaks of innocence, simplicity, naivety, child's talk
  • Db major: key to bring out unusual feelings; can smile but not laugh; can grimace but not cry
  • D major: key of triumph, hallelujah, war-cries and victory-rejoicing
  • Eb major: key of love, devotion, intimate conversation with God
  • E major: noisy shouts of love, laughing pleasure and not-yet-complete full delight
  • F major: key of complaisance and calmness
  • Gb major: key of triumph over difficulty, sigh of relief after difficulties have been overcome
  • G major: key that is rustic, idyllic and lyrical, calm and satisfied passion, any gentle and peaceful emotion of the heart
  • Ab major: key of the grave, death, putrefaction, judgement, eternity
  • A major: declaration of innocent love, satisfaction with one's state of affairs, hope of seeing one's beloved again when departing, youthful cheerfulness and trust in God
  • Bb major: cheerful love, clear conscience, hope, aspiration for a better world
  • B major: strongly colored, announcing wild passions, anger, rage, jealousy, fury, despair and every emotion of the heart
  • C minor: declaration of love with lament of unhappy love, sighing of the lovesick soul
  • C# minor: penitential lamentation, intimate conversation with God, sighs of disappointed friendship and love
  • D minor: melancholy, womanliness
  • Eb minor: feelings of anxiety, the soul's deepest distress, brooding despair, blackest depression, most gloomy condition of the soul
  • E minor: naive declaration of love, lament without grumbling, sighs accompanied by a few tears, desires to resolve into the pure happiness of C major
  • F minor: deep depression, funeral lament, groans of misery and longing for the grave
  • F# minor: a gloomy key, resentment and discontent; it languishes for the calm of A major or the happiness of D major
  • G minor: discontent, uneasiness, worry about a failed scheme, bad-tempered gnashing of teeth, dislike
  • G# minor: grumbling, struggling with difficulty, heart squeezed until it suffocates
  • A minor: pious womanliness, tenderness of character
  • Bb minor: mocking God and the world, discontented with itself and everything, preparation for suicide
  • B minor: key of patience, calm awaiting of one's fate, submission to divine dispensation, mild lament without breaking out into offensive murmuring or whimpering

Sunday, December 9, 2012

Rutger Kopland's poem XIV

Tribute to Rutger Kopland (1934-2012)

For quite a while I have admired the poem "XIV" by Rutger Kopland. A few days after I had finished making this song from his poem, I heard the news that he had died at the age of 77. I dedicate this song to him. One can sing the poem to the song, but I thought of your mental sanity and in the end made it a song with unsung words.

Ga nu maar liggen liefste in de tuin,
de lege plekken in het hoge gras, ik heb
altijd gewild dat ik dat was, een lege
plek voor iemand, om te blijven

And my very literal English translation which cannot be sung to the song
(translations of poetry should not be attempted by amateurs like me ;) )

Now go lie down, my love, into the garden,
the empty spots in the tall grass, I've
always wanted to be just that, an empty
spot for someone, to stay

Years ago, I commented on Fred Tak's blog that I read the poem as a kind of elegy, and not as a kind of love poem (the traditional interpretation). This interpretation now makes even more sense to me. May my song become an empty spot for him to stay.

Sunday, November 25, 2012

Broken Glass

After finishing an online course on modern and contemporary American poetry (yes! I took the already legendary "ModPo 2012"!) I'm starting to get a little more time again to write some music.

The course was splendid and I would recommend that everyone subscribe to the next edition. One of the poems we discussed was William Carlos Williams's "Between Walls", and that poem inspired me to write this "Broken Glass" piece. The title of course refers both to the contents of the WCW poem and to the broken chords and Philip Glass style of the music.

The recording has some mistakes. I'm having trouble with my recording device and by the time I managed to play something and make a recording of it, my right hand was getting too tired. (The piano has a rather heavy touch.)

This musical work is literally composed of pieces of broken chords. I never play it the same way twice. I just select fragments at random while playing. It's so much fun it should be forbidden! If you took ModPo 2012 as well, you will understand why ;-)

Can you find the wedding bells? Feel free to leave comments!

Tuesday, September 25, 2012

Tunestorm 07 revisited...


Since the tunestorm doesn't seem to be happening, I'm making my contribution available now. Hope you like it!

Written in LMMS, mixed in Audacity. This work features some mistakes in recording, pronunciation and mixing. It was my very first digital music project ever. So, despite its shortcomings, I still like it :)

Compose a track II

Gruesome Lullaby

Using headphones, with volume set to moderate, see what you think of this...
  • Music written in LMMS, using fluidR3 sound font.
  • Large number of samples from (credits at the end of the movie).

Sunday, September 9, 2012



Modulation is the art of smoothly moving from one musical key to another, and one of the ways to make your music more interesting. If this sounds intriguing and you know how to read music, stay with me.

Three weeks ago, I set out to learn something about modulation. I took notes while doing so, and I'm making those notes available for anyone interested in the subject. The notes so far encompass 257 (!) pages full of modulations between different keys. Each modulation is written out in all keys, leading to a catalogue of modulations from any key to any key (well... almost. It's a work in progress). The notes also contain brief explanations of the underlying principles that make each modulation work. Using those principles, it should be possible to create new modulations yourself.

Most of the material is presented in the context of classical music theory (though the underlying principles apply equally to jazz/blues/pop/gospel/...), and part III also contains the beginnings of some jazz and gospel cadenzas.
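One of those underlying principles can be sketched in code. A common-chord (pivot) modulation reuses a chord that is diatonic in both the source and the target key; the little Python sketch below (my own illustration, not taken from the book) enumerates such pivot candidates, with pitches reduced to pitch classes 0-11:

```python
# Diatonic triads of a major key, with pitches as pitch classes (0 = C, ..., 11 = B).
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]

def triads(tonic):
    """All seven diatonic triads of the major key on the given tonic."""
    scale = [(tonic + step) % 12 for step in MAJOR_SCALE]
    return {frozenset(scale[i % 7] for i in (d, d + 2, d + 4)) for d in range(7)}

def pivot_chords(tonic_a, tonic_b):
    """Chords diatonic in both keys: candidates to pivot on when modulating."""
    return triads(tonic_a) & triads(tonic_b)

# From C major (tonic 0) to G major (tonic 7) there are four pivot
# candidates: the triads C, Em, G and Am are diatonic in both keys.
print(len(pivot_chords(0, 7)))  # 4
```

The closer two keys are on the circle of fifths, the more pivot chords this turns up, which matches the intuition that nearby keys are easier to modulate between.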


Sure. The document is available under a CC-BY-NC-SA license. The source code is available on SourceForge. It requires LilyPond, LyX and LaTeX to "compile" to .pdf.

You can also download a prebuilt .pdf from the files section.

If you have downloaded and skimmed the book, be sure to comment either here or on the book's forum.

Wednesday, September 5, 2012


University of the Philippines singing ambassadors

Today I had the chance to sing a song (well, two actually) with the University of the Philippines Singing Ambassadors (UPSA) in Leuven, Belgium. I can honestly say that it was a special experience in my life. The rest of the concert was superb, phenomenal - with a convincing display of a great range of musical styles (classical, contemporary classical, Broadway (musical including stage play), folklore) and including some stunning renditions of breath-takingly difficult pieces (who wrote that "De Profundis"?).

The really humbling part is where they tell you they are not music students (at least not all of them; some I spoke to were training to become engineers). They practice their art 4 times a week. I'm sure there's no other way to arrive at the incredible level at which they perform. Having participated in 21 contests around the world, they have won 21 first prizes and several other honorary prizes. The result of their regular hard work is simply stunning. Congratulations!!

If you ever have a chance to see them perform, don't hesitate. Too bad they had to leave in a hurry to be in Germany by 6 o'clock tomorrow morning - you can't keep the president of Germany waiting ;)

Sunday, August 26, 2012

How to write a table canon

Table canon

According to wikipedia:

A Table canon is a retrograde and inverse canon meant to be placed on a table in between two musicians, who both read the same line of music in opposite directions. As both parts are included in each single line, a second line is not needed.

I thought there should be no reason to limit one musician to one line of music (or to limit one side of the table to one musician), so I created a table canon with two lines of music per musician (or two musicians per side of the table).


You will hear three parts: part I is a 2-voice theme, part II is the same 2-voice theme upside down, and part III consists of part I playing simultaneously with part II. The actual piece consists of part III only; the rest is included for demonstration purposes. You will notice that the 2-voice themes sometimes sound a bit awkward. Compared to my previous musical constructions, it was harder to construct a theme that sounds reasonably interesting and works reasonably well against itself playing backwards and upside down in 4 voices.
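The transformation at the heart of a table canon is easy to express in code: rotating the sheet 180 degrees amounts to a retrograde inversion. A minimal Python sketch (purely illustrative, not the tooling used for the piece), with pitches written as semitone offsets from a reference note:

```python
def retrograde_inversion(theme, axis=0):
    """Rotate the page 180 degrees: reverse the theme in time and mirror
    every pitch around the given axis (pitches in semitones)."""
    return [2 * axis - pitch for pitch in reversed(theme)]

theme = [0, 4, 7, 4]                   # a small arpeggio figure
print(retrograde_inversion(theme))     # [-4, -7, -4, 0]
```

Note that applying the transformation twice gives back the original theme, which is exactly why both players can read the same line from opposite sides of the table.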

Or alternatively, listen to it on soundcloud:

I've written up the method used to create this piece. Click here to download the article. You may have trouble downloading the article with some versions of Internet Explorer (in that case, use Chrome or Firefox instead).

Saturday, August 18, 2012

How to write a Crab Canon

What's a crab canon?

According to wikipedia:

A crab canon—also known by the Latin form of the name, canon cancrizans—is an arrangement of two musical lines that are complementary and backward, similar to a palindrome. Originally it is a musical term for a kind of canon in which one line is reversed in time from the other (e.g. FABACEAE <=> EAECABAF).

Ever since I read about this in the book Gödel, Escher, Bach by Douglas Hofstadter I've had a fascination with this form of music. I've tried to write some but I usually got stuck after about three notes. In some previous articles I discussed how I created a 5-part canon and a 6-part invention using a technique I invented (or more likely: rediscovered). Now I have extended this technique to create crab canons and palindrome canons, et voilà! A brand-new no-sweat, no-tears 4-voice palindrome crab canon in the somewhat exotic meter of 11/4 appears... (Why 11/4? Because I can ;) )
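The time reversal itself is trivial to express in code; the hard part (which the article covers) is making the result sound good together. A minimal Python sketch using the Wikipedia example above:

```python
def retrograde(theme):
    """Play the theme backwards in time."""
    return theme[::-1]

def crab_canon(theme):
    """Two voices: the theme sounding against its own retrograde."""
    return list(zip(theme, retrograde(theme)))

print("".join(retrograde(list("FABACEAE"))))  # EAECABAF

# Each tuple is one beat: what voice 1 and voice 2 play simultaneously.
print(crab_canon(["C", "D", "E"]))  # [('C', 'E'), ('D', 'D'), ('E', 'C')]
```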

You can hear it on Youtube:

or SoundCloud:

The article explaining the full construction of this piece is available for download here. You may have trouble downloading it with some versions of Internet Explorer. In that case use Chrome or Firefox instead.

Sunday, August 12, 2012

The Fly

The Fly

The Fly is a poem written by William Blake, a poet not unknown to music lovers. His poem "The Lamb" has inspired many a composer to write music, and the same holds for another of his poems: "The Tyger".

William Blake

My version of The Fly

Of course "The Fly" has been set to music as well. There's something about Blake's poems that invite people to write music...

I've written something for piano solo (one can sing the words to the music, but in order to protect your sanity, I won't sing in the recording ;) ).

Without further ado, head over to my youtube page:


Saturday, July 28, 2012

Explanation about technique used in writing my 6-part invention

Download the article by clicking this link. You may have trouble downloading it with some versions of Internet Explorer. In that case use Chrome or Firefox instead.

You can hear the result of the tutorial in my previous post.

Friday, July 27, 2012

6-part Invention

What? Not another one??

In a previous post I explained what technique I used to write a Canon in 5 parts without much sweating. I've been experimenting a bit more with the technique and here's the result of one of those newer experiments: a 6-part invention.


As it goes with such pieces it's interesting to highlight the structure of the piece. The piece starts with a theme. The theme is repeated at different starting pitches (basic fugal treatment), and counter-subjects are introduced which are really just the theme in canon to itself. The canon is left to play in 3 voices at a few different starting pitches, and then gradually morphs into a different canon with a different theme. The new canon is also given some basic fugal treatment, and finally both canons sound simultaneously forming a 6-part super-canon with two themes playing simultaneously in 3 different time-delayed versions each. The piece rests in peace after a soothing final chord.

How did you make this?

The piece is generated using the techniques explained in my tutorial on writing canons, with some cool twists. I intend to fully explain the creation of this piece including the full score and algorithms in a new tutorial that will appear in a separate post. Stay tuned :)

Enough already... let me hear what this is all about

Ok.. ok... Here it is:

And here it is as well:

Tuesday, July 24, 2012

Tutorial on my technique for writing a canon

I promised to write a tutorial on how to write a canon like the one I posted in my previous post Canon in 5, and I did.

The tutorial is 12 pages long, but most of that space is taken up by white space and examples. You will probably get the most out of it if you already know how to read music, although strictly speaking that is not a requirement for benefiting from the material in the tutorial.

You can get the tutorial in .pdf format by clicking this link. You may have trouble downloading it with some versions of Internet Explorer. In that case use Chrome or Firefox instead.

You can listen to the result of the tutorial here:

Drop me a note if you've been able to use the techniques explained in there.

Monday, July 23, 2012

Canon in 5

Canon in 5

So I've been experimenting today with an idea I had on how to write a canon without needing to know a lot of music theory. I want to explain the method I used in a later post, but here's the result of a test drive with a short theme repeated 5 times.

Conceiving this 5-part canon using the method I invented (or more likely: rediscovered :-) ) was a matter of a few minutes. The creation of the score and the rendering to audio and video took a bit longer.

You can hear the synthesizer version here:

You can download the score by clicking this link. And here's a preview of the score:


Thursday, July 12, 2012

Rain Dance

Another day, another piece!

LMMS madness struck again... I tried to make something different from my usual style and the result - love it or hate it - is here.

I've embedded some soundscapes into the music, reused from:

  • oceanwavescrushing by Luftrum (CC attribution)
  • kinder auf dem spielplatz by fieldmuzick (sampling+)
  • thunderstorm2 by erdie (CC attribution)

Friday, June 29, 2012

Call for contributions: YouCoLeLe Twilight


Maybe you remember my enthusiastic blog post about tunestorm. Unfortunately, the submission deadline has long passed and the organizers do not seem to find the time to broadcast the submissions. That's not exactly the way to make and keep people enthusiastic - perhaps quite the contrary.

I still very much like the concept of a tunestorm: let a group of people create music around a common theme, each in their own style. As a variation on the tunestorm theme, I thought of trying something similar, from now on called YouCoLeLe.

"YouCoLeLe" is an acronym and stands for Youtube Composer's Legendary League. The idea is for people to create music around a common theme, put that music on a site like Youtube or SoundCloud, and tag it with "YouCoLeLe <Theme>", where <Theme> should be replaced with the theme the music was created for. This way, people can contribute without having to obey a strict deadline. Nevertheless, each time a new theme is chosen, a target date is set, after which a compilation of the "contributions so far" will be made. One important difference from tunestorm is this: even if this blog dies or I don't find time to update it anymore, anyone can still find the contributions by searching for videos/sound streams tagged with "YouCoLeLe <Theme>".

So what's the first theme?

The first theme is Twilight. It's up to you to interpret the assignment as you like. The target date is set to September 1, 2012.

What are the constraints?

I don't want to restrict your creative freedom too much: you are free to create anything that in your opinion illustrates the theme. You are free to choose your instruments or musical style. You can create songs or pieces or soundscapes... whatever floats your boat. For your own sake, just make sure that you have the right to use whatever materials you happen to reuse. Legal reuse of material available under a creative commons license is highly recommended, by the way: that way the new contributions also become available under a creative commons license, and who knows: perhaps someone else will reuse your material in their creation. If you create original work of your own, you can of course choose not to release it under a creative commons license (although I would highly recommend it, for The Greater Good).

Are you going to keep babbling or can we hear some music?

As it happens (surprise, surprise), I have made an entry for "YouCoLeLe Twilight" which you can hear here:

If you want to play it yourself, you can download the score by clicking on this link. Or you can browse the score (without the poem, due to copyright restrictions) here:


Feel free to drop me a note with a link to your contribution. Don't be scared to contribute - it's for fun only. It's not a competition, and the whole thing is not meant to judge your work, although it can be a way to find an audience and get feedback on your work if you are looking for that. In that case, best to indicate it explicitly.

Unless big disasters happen I will create a compilation of contributions on this blog after the target date. As I see it now, the contributions would be listed with my personal comments on them (if the author indicated they wanted comments), ordered according to the date on which I became aware of the contribution. Should I forget to do so, feel free to remind me of my promise!

I hope to see (m)any contributions :)

Wednesday, June 27, 2012

Learn to read notes faster!

Introducing SpeedSightRead

Next to making music I also enjoy programming. I've started a small program called SpeedSightRead which tries to help you improve your sight-reading speed. It is free and open source software licensed under the GPLv3 license, meaning that you can use and modify it any way you like (as long as you also make your new code free and open source software). It will display random notes (from a configurable selection of possible notes) and ask you to either type the correct name or click the correct piano key. It has been tested on Windows and Linux systems, but can probably be made to work on other platforms too. Get it here! There's an installer available here for people using a Windows system. If you use another operating system, you will need to read the instructions on how to build and run the program from source code. It's not as difficult as it looks :)

Saturday, May 19, 2012

Concert Announcement: Daydream lullaby.


"Aperitif" concert for mixed choir with instrumental and textual intermezzi. Music from a variety of different style periods and moods.


Gasthuisberg - Auditorium GA2 - Onderwijs & Navorsing 1 - Herestraat 49 - 3000 Leuven - Belgium.


3rd of June, at 11 am



Capella Academica, the personnel choir of the Katholieke Universiteit Leuven, conducted by Dieter Staelens.


Veerle Foulon

Piano Accompaniment

Annelynn Bailleu


I will actively participate in this concert, both as a singer and as a piano player during the instrumental intermezzi. One of the intermezzi is composed by me: "Meditation" for piano and cello (written on Jan 1st, 2012; style: Erik Satie meets Philip Glass (I'm modest like that ;) ). The other two intermezzi were written by Philippe Gaubert (Divertissement Grec, written in 1908; style: early 20th century, with a touch of impressionism) and Emil Kronke (the first of his "Deux Papillons" op. 165, written in 1921; style: early 20th century). The cello will be played by Arthur Spaepen. The flutes will be played by Lotte Goyvaerts and Koen Eneman.

If the music is not enough reason to motivate you, there will be a free reception afterwards...

I want to order tickets!

But of course :-) You can reserve them online using the ticket reservation form. The prices are:
  • € 9: standard price
  • € 7: KU Leuven personnel, -18, students, 65+
  • € 6: holders of a culture card
  • € 10: at the entrance, on the day of the concert

Saturday, March 17, 2012

Mixing and mastering

Disclaimer: I'm by no means a professional mixing/mastering engineer (in fact, quite the opposite ;) ). I do like to research stuff, and will occasionally report on what I find out here. The explanations may contain factual errors, in which case I'd appreciate it if you took the time to report errors or inaccuracies. Given how huge the field of audio mixing really is, this post will necessarily skip a lot of details.

Why did you write this post?

I hear many problems in my own recordings, and have decided I should educate myself a bit on the subject. This is mainly written as a summary of things I read and remembered.

Mixing? Mastering?

First things first: what's the difference between mixing and mastering?

In mixing, one tries to bring together different tracks into one recording. In doing so, one can apply a whole range of effects to each track separately before combining them into a complete song. I intend to discuss some of those effects in this post.

In mastering, one takes a mixed song and applies effects to the whole mix at once. This would be done e.g. to make all songs on an album share a similar sound and feel.

A Photoshop (or GIMP ;) ) analogy would be that in mixing you combine clipart into a picture, and in mastering you apply effects to the picture as a whole (cropping, changing colors to sepia, ...).

My current investigation is mostly about mixing audio.

Mixing objectives

In mixing audio one strives to create interest, mood, balance and definition. These four dimensions can be heavily influenced by cleverly applying effects to each of the tracks.
  • Interest: all about keeping the listener's attention by adding enough variation to the mix. Example: make the chorus sound subtly different from the verse, or make verses with dark lyrics sound darker than verses with lighter lyrics. Variation keeps interest higher. You can also decide where to put the instruments in an (imaginary) 3D audio scene. If you pay attention to recordings, you can start to hear how certain instruments sound as if they are placed in different spots on a sound stage.
  • Mood: how do you make the same music sound darker or lighter? More mellow or more aggressive?
  • Balance: make sure each instrument gets the space it needs, and make sure the instruments don't sound like a bunch of aliens that happen to be playing simultaneously. The different instruments should sound as if they belong together, and none of them should overpower all the others.
  • Definition: make sure enough detail can be heard in each of the instruments and voices, while at the same time getting rid of unwanted details like breathing.

Mixing techniques

In order to achieve those objectives, we can apply effects to the individual tracks that must be mixed together. Some effects operate in the time domain (i.e. they influence how volume changes over time, or how long certain sounds remain audible), whereas other effects mostly influence the frequency domain (changing how a specific sound sounds, e.g. making vocals sound clearer or darker).


Phase

In itself, the phase of a single track is pretty meaningless. Sound is made of waves, and phase says something about the moment in time at which the wave reaches its peaks (low and high) and passes through zero. Phase starts to matter when you combine two or more tracks: a phase difference between two tracks is a small delay between them.

Funny things can happen when you mix sounds with different phase. When you mix two tracks, you basically sum them. If you take one track, make a copy of it, apply phase inversion to the copy and then mix both tracks together, you end up with no sound at all: the phase inversion causes both tracks to cancel each other out. (Indeed, when the original reaches its peak, the phase-inverted copy reaches its valley and vice versa. At each moment in time the waves are each other's complement.) Of course this is an extreme example that you would never encounter in practice. But if you record the same source simultaneously with microphones at different distances, a phase difference can be present, and if it is not compensated before mixing, it may lead to unwanted (partial) cancellations of sound. Applying certain effects will also affect the phase of a track.

Mixing together tracks with phase differences will result in something that sounds a bit different (usually worse) than what you expected, typically more metallic and hollow. Sometimes this effect is applied on purpose: then it's called flanging. There's another related effect called phasing. Both effects suppress certain frequency components (a phenomenon called comb filtering takes place). In flanging the suppressed frequencies are harmonically related, in phasing they are not. Phasing is usually a bit more subtle than flanging.
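The cancellation trick is easy to verify numerically. A toy Python sketch with a generated sine wave, treating tracks as plain lists of samples (illustrative only; real recordings behave the same way):

```python
import math

def sine(freq, n, rate=44100.0):
    """Generate n samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def invert_phase(track):
    """Phase inversion: flip the sign of every sample."""
    return [-s for s in track]

def mix(a, b):
    """Mixing two tracks is simply summing them sample by sample."""
    return [x + y for x, y in zip(a, b)]

track = sine(440.0, 1000)
silence = mix(track, invert_phase(track))
print(max(abs(s) for s in silence))  # 0.0: total cancellation
```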

Sometimes you can play with phase to achieve interesting effects. One such effect is known as the "Haas" effect. Basically you take a track and hard-pan it to the left speaker. Then you take a copy of that track, hard-pan it to the right speaker and let it start playing a few milliseconds after the first track. As a result you get a very spacious, open sound. Try it out in your favourite audio tool. Another trick is the out-of-speakers trick, where you keep the tracks time-aligned but invert the phase of one of them. This results in sound that seems to come from all around you. It works best with low-frequency (i.e. low notes) sounds.
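A sketch of the Haas trick in Python, again with tracks as plain lists of samples (the delay of 88 samples, roughly 2 ms at 44.1 kHz, is my own example value):

```python
def haas_widen(track, delay_samples=88):
    """Haas effect sketch: the dry track hard-left, and a copy delayed
    by a few milliseconds (88 samples ~ 2 ms at 44.1 kHz) hard-right."""
    left = track + [0.0] * delay_samples
    right = [0.0] * delay_samples + track
    return list(zip(left, right))  # stereo sample pairs (L, R)

stereo = haas_widen([0.5, 0.3, 0.1], delay_samples=2)
print(stereo)  # [(0.5, 0.0), (0.3, 0.0), (0.1, 0.5), (0.0, 0.3), (0.0, 0.1)]
```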


Fading

Fading is adapting the volume of your track, in order to give each instrument an equal chance of being heard. To add interest to a recording, many programs allow automating volume levels, so that variations can occur throughout the song. Beware though: humans are only human, and psychoacoustics dictate that "louder" gives the impression of sounding "better". The result is often that volumes tend to be increased, to the point where they don't make sense anymore, or don't leave enough room for other instruments to be added. If the volume is set too high, clipping can also occur, which results in considerable distortion (typically clicking or crackling sounds) in the end result.
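A quick numeric illustration of gain and clipping, assuming samples in the range -1.0..1.0 (the gain value is an arbitrary example):

```python
def apply_gain(track, gain):
    """Changing the volume is just multiplying every sample by a gain factor."""
    return [s * gain for s in track]

def hard_clip(track, limit=1.0):
    """Anything beyond the converter's range gets flattened: distortion."""
    return [max(-limit, min(limit, s)) for s in track]

quiet = [0.2, 0.6, -0.6, -0.2]
loud = apply_gain(quiet, 2.0)   # [0.4, 1.2, -1.2, -0.4]: peaks now exceed the range
print(hard_clip(loud))          # [0.4, 1.0, -1.0, -0.4]: the peaks are squared off
```

The squared-off waveform is what produces the clicking/crackling distortion mentioned above.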


Panning

With panning you can give different instruments different spots on the virtual sound stage you are creating in your mix. Its effect is to make the sound come more from the left or from the right. When speaking about panning, one often refers to the sound as coming from a certain "hour" (think of the position of a pan knob): hard left would be 7:00, hard right would be 17:00, and right in front of you would be 12:00.
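As a sketch of how a pan control can be implemented, here's the common equal-power pan law in Python (one of several possible pan laws; this is my own illustration, not tied to any particular mixer):

```python
import math

def pan(sample, position):
    """Equal-power panning: position -1.0 is hard left, 0.0 is center,
    +1.0 is hard right. Returns the (left, right) sample pair."""
    angle = (position + 1.0) * math.pi / 4.0   # maps -1..+1 to 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = pan(1.0, 0.0)
print(round(left, 3), round(right, 3))  # 0.707 0.707: centered, equal power on both sides
```

The cos/sin pair keeps the total power (left² + right²) constant, so an instrument doesn't get quieter as you sweep it across the stage.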

Compressors and other dynamic range processors

The word compression has two different meanings. On the one hand it is used to denote the process of representing digital recordings with fewer bytes (like .mp3 is a compressed version of .wav). That is not the meaning used in audio mixing. In audio mixing, compression means reducing the dynamic range, i.e. reducing the difference between the loudest and quietest parts of a track. That way the recorded track can blend better with the other tracks. Compression is often used on vocals: without it, the quieter parts of the singing risk drowning in the sounds of the other tracks. In this context: apparently the recording industry is engaged in an ongoing loudness war: by (ab)using compression, recordings are made to sound as loud as possible. The downside, of course, is that a lot of dynamic range is lost that way and the music becomes less interesting as a result. Different kinds of dynamic range processors include:
  • Compressor: makes loud sounds quieter (while keeping a sense of louder and softer); keeps quiet sounds at the same volume
  • Limiter: ensures that the volume never exceeds a given maximum. The volume of any sound louder than some threshold is brought back to the threshold; the volume of sounds quieter than the threshold is kept as-is
  • Expander: makes quiet sounds quieter; keeps louder sounds at their original level
  • Upward compressor: makes quiet sounds louder; keeps loud sounds at their original volume
  • Upward expander: makes loud sounds even louder; keeps quiet sounds at their original volume
  • Gate: makes all signals with a volume below some threshold a lot quieter, by a fixed amount known as the range (often: removes them completely)
  • Ducker: makes all signals with a volume above some threshold a lot quieter, by a fixed amount known as the range
Compressors can have some unwanted side effects, producing varying noise/hiss levels (breathing) or sudden noticeable level variations (pumping). With extreme compression one also loses dynamic range, to the point of making the music less interesting. When used properly, compression can make sounds denser, warmer, louder. Compression can move sounds forward and backward on the virtual sound stage. To a certain extent it can also be used to remove sibilance.
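To make the first item in the list concrete, here's a toy downward compressor in Python (heavily simplified: a real compressor smooths its gain changes with attack and release times; the threshold and ratio values are arbitrary examples):

```python
import math

def compress(track, threshold=0.5, ratio=4.0):
    """Downward compressor sketch: any level above the threshold is
    reduced by the given ratio; quieter samples pass through untouched."""
    out = []
    for s in track:
        level = abs(s)
        if level > threshold:
            # Excess over the threshold is divided by the ratio.
            level = threshold + (level - threshold) / ratio
        out.append(math.copysign(level, s))
    return out

print(compress([0.2, 0.9, -0.9]))  # [0.2, 0.6, -0.6]: peaks pulled down, quiet parts kept
```

With the peaks tamed, the whole track can then be turned up without clipping, which is exactly the mechanism behind the loudness war mentioned above.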


Equalizers

Equalizers can have various effects on your sound, and they are not so easy to use effectively. They can influence separation (hearing details of individual instruments), influence feelings and moods, make instruments sound different, add depth to the sound, and suppress unwanted content (like constant background noise or humming). To a certain extent they can also compensate for bad recordings, or suppress unwanted sibilance. Sibilance is the piercing sound produced when recording fricative sounds ("s", "sh", "ch", "t"), which can be quite disturbing in a song.

Equalizers typically work on a part of the frequency spectrum. Depending on the part of the spectrum you operate on, and the kind of operations you perform on it, you get wildly different effects from the equalization. In short: equalizing is the Swiss army knife of audio mixing, and it requires (a lot) more investigation on my side.

A rule of thumb seems to be that equalization should be done after compressing.
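To get a feeling for what happens inside an equalizer, here's the simplest possible frequency-shaping building block, a one-pole low-pass filter, in Python (purely illustrative; real equalizers chain far more sophisticated filters):

```python
import math

def low_pass(track, alpha=0.1):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Smaller alpha means stronger damping of high frequencies."""
    out, y = [], 0.0
    for x in track:
        y += alpha * (x - y)
        out.append(y)
    return out

def sine(freq, n, rate=44100.0):
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

# A low sine passes almost unchanged; a high sine is strongly attenuated.
low = max(abs(s) for s in low_pass(sine(100.0, 4096)))
high = max(abs(s) for s in low_pass(sine(8000.0, 4096)))
print(low > 5 * high)  # True
```

A treble-cut EQ band behaves roughly like this filter applied to one region of the spectrum; boosting instead of cutting works analogously.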


Reverb

Ever noticed how the same instrument can sound very different in a different room? One of the factors that determines this difference in sound is reverb. As you emit sound, it travels through the room and bounces off walls and furniture. Some frequencies will be absorbed by the materials in the room, others less so (a kind of natural equalization takes place). After a while, delayed and filtered copies of the sound arrive back at the sound source (reflections from the walls).

Different ways exist to add reverb to a signal. One interesting technique uses the impulse response of a room. Think: you clap your hands and record the sound with all the echoes it makes. This is more or less the impulse response of the room. You can then apply that same echo to any other sound using a mathematical operation known as convolution. So if you have the impulse response of a famous concert hall, you can make your piece sound as if it were played in that concert hall. The downside of convolution with an impulse response is its computational cost and its inflexibility: there are no parameters you can tweak. For this reason other algorithms have been developed that allow more flexibility in defining the reverb (e.g. choosing the room size), whereas convolution with an impulse response more or less automatically results in a natural sound. Note that in principle you could also apply convolution between any two samples (say: an oboe and a talking voice). The result is cross-synthesis, i.e. the oboe seems to speak the words of the voice.
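The clap-and-convolve idea can be sketched in a few lines: every input sample triggers a scaled, delayed copy of the impulse response. Real convolution reverbs compute the same result far faster with FFTs; this naive O(n·m) loop only shows the principle:

```python
def convolve(signal, impulse_response):
    """Direct convolution: each input sample adds a scaled, delayed
    copy of the impulse response to the output."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out
```

Feeding a single impulse (the "clap") through it reproduces the impulse response itself, which is exactly the defining property.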

Reverb is used to add depth to a recording. A common reverb applied to all tracks can make them sound more compatible, fit better together in the mix (on the other hand, applying different reverb to different tracks can help to increase the separation between instruments). Reverb can fill up pauses in the sound. It also contributes to the mood.


Delay delays a signal by a given amount of time. Mixing the original signal with the delayed signal creates an echo, but if the delay is short enough, we won't perceive the mix as two distinct copies of the same sound. With very short delays, be careful of phase differences between the original signal and the delayed copy: they can lead to unwanted side effects during mixing. With slightly longer delays you get a doubling effect (the basis for a chorus effect, making a less dry sound). With longer delays still, you create an echo effect. See also the explanation of the "Haas" effect in the section about phase, which is another application of delay.
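A minimal sketch of mixing a signal with a delayed copy of itself (delay measured in samples; names are made up):

```python
def echo(samples, delay, mix=0.5):
    """Mix a signal with a copy of itself delayed by `delay` samples.
    Short delays thicken the sound; long delays give a distinct echo."""
    out = []
    for i, x in enumerate(samples):
        delayed = samples[i - delay] if i >= delay else 0.0
        out.append(x + mix * delayed)
    return out
```

At a 44100 Hz sampling rate, delays beyond a few tens of milliseconds (a couple of thousand samples) start to be heard as distinct echoes rather than as a thicker sound.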

Vibrato and tremolo

Two kinds of vibrato exist: frequency vibrato and amplitude vibrato (sometimes called tremolo). In frequency vibrato one rapidly varies the pitch; in tremolo one rapidly varies the volume.
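Tremolo (amplitude vibrato) is straightforward to sketch: multiply the signal by a slow sine-shaped volume envelope (an LFO). Parameter names and defaults are my own assumptions:

```python
import math

def tremolo(samples, rate_hz=5.0, depth=0.5, sample_rate=44100):
    """Amplitude vibrato: scale each sample by a slowly oscillating
    gain. depth=0 leaves the signal untouched; depth=1 lets the
    volume swing all the way between silence and full scale."""
    out = []
    for n, x in enumerate(samples):
        lfo = math.sin(2 * math.pi * rate_hz * n / sample_rate)
        out.append(x * (1.0 - depth * (0.5 + 0.5 * lfo)))
    return out
```

Frequency vibrato is the analogous trick applied to the pitch instead of the volume, which is harder to do on a recorded sample than on a synthesized one.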


Distortion adds aggressiveness to the end result. It is also a way of adding imperfections to a sound, rendering it less boring (when applied skillfully ;) ). The easiest and least subtle way to add distortion is to clip the sound at a certain limit. Other techniques include amplifier simulation and bit reduction.
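Hard clipping, the easiest technique mentioned above, is essentially a one-liner: everything above the limit is chopped off, and the flattened wave tops add the extra harmonics we hear as distortion:

```python
def hard_clip(samples, limit=0.3):
    """Hard clipping: clamp every sample into [-limit, limit]."""
    return [max(-limit, min(limit, x)) for x in samples]
```

Amplifier simulators do something similar but with a smooth, rounded transfer curve instead of this abrupt corner, which sounds warmer.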

Pitch shifting, time stretching and harmonizing

Pitch shifting and time stretching are very closely related (at least from a mathematical point of view). If you play back a recording faster (e.g. by selecting a different sampling rate or by making a tape run faster), it becomes shorter, but its pitch also increases. Sometimes you want to make a recording faster or slower without affecting the pitch. This is not easy to accomplish, and different instruments typically require different algorithms to get a convincing result. And when the stretching or pitch shifting is very extreme, sound quality will clearly suffer, even with the best algorithms. Pitch shifters are useful to correct instruments and vocals that are out of tune. They can also be used to turn a piece with one voice into a choir piece with multiple voices singing different pitches simultaneously (harmonizing). The algorithms typically offer parameters that let you create special effects as a side product of the pitch shifting or time stretching.
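The coupling between speed and pitch can be illustrated with a naive resampler based on linear interpolation. This is the "tape running faster" trick, not a real time-stretching algorithm:

```python
def resample(samples, factor):
    """Naive playback-speed change via linear interpolation: a factor
    of 2.0 plays twice as fast, halving the duration AND raising the
    pitch by an octave -- the two are coupled."""
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append((1 - frac) * samples[i] + frac * samples[i + 1])
        pos += factor
    return out
```

Decoupling the two effects (changing duration without pitch, or pitch without duration) is exactly what dedicated time stretchers and pitch shifters are for.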


This is more than enough material for this post. As you can see there's more to mixing audio than just throwing different tracks together, and this post only scratched the surface of what mixing is all about. The real difficulty starts when, presented with some recordings, you have to make sense of it all: decide which effects to apply, how to configure their numerous parameters, in what order to apply them, and so on, in order to reach the desired end result.

Ideas for future posts (though these may never get written, or at least not in the coming years, since I still have no experience with all these things):

  • in-depth discussions of single effects, illustrated with ladspa or other plugins in some popular tools (audacity, snd, ardour,...?)
  • topical discussions, like: how to clean up vocals
  • Maybe all these tutorials and discussions already exist somewhere, in which case I'd be happy to see some links in the comments section.

    Wednesday, March 7, 2012

    Swan song

    Something that fascinates me is the concept of near-tonality. I'm not certain whether this is existing terminology, or whether it means what I would like it to mean in the context of this blog post. I use it to denote music that somehow "almost" sounds tonal. A typical example would be a soloist playing something "out-of-key" while the accompaniment remains "in-key". The notes sound wrong in context, but if you keep playing out-of-key systematically something funny happens: once you get used to the new musical language, it all starts to sound right again. You start to hear how different types of dissonances can evoke different moods, and what first sounded all weird and wrong suddenly sounds beautiful, comforting and soothing.

    Music please?

    As you may have guessed, all this was but a long introduction towards my most recent somewhat experimental LMMS composition: Swan song!

    After 2 minutes of tonal introduction, I dip my toes into the rich world of near-tonality and atonality.

    • +1 @ you if you can listen to it once without questioning my mental sanity
    • +2 @ you if you can listen to it three times in a row without it getting stuck inside your head for the next two days ;)

    Monday, January 30, 2012


    Tunestorm deadline is approaching

    Tunestorm is a kind of online experiment (an "uncompetition", as they call it) where you are given an assignment and are expected to carry it out in your own style, using your own ideas, instruments, background,... How cool is that?

    More info? I'm not affiliated with the organization in any way, but I'm ridiculously enthusiastic about contributing something to it.

    The next deadline is February 29th, 2012. Don't wait, get started today :) I don't want to end up being the sole contributor :D

    Current assignment

    The current assignment, for the seventh edition, is: "Write any song of your choosing WITH lyrics in the form of a haiku, i.e. lyrics with 5-7-5 syllables." You are expected to keep your work a secret until the big revelation day, to avoid influencing the other uncontestants.

    Thursday, January 26, 2012

    Yay! My first LMMS project...

    I discovered the free and open source tool LMMS a few days ago via some post in the forum. The software seemed so easy to use and so inviting to experiment with that I just had to try it out.

    Love it or hate it... here's my first LMMS project:

    In case you want to reuse the choral parts I wrote for this piece (one never knows :) ), you can download the entire .mmpz file here: (Creative Commons Attribution-ShareAlike v3.0 license)

    The music was made entirely with LMMS on Debian Linux. It probably sounds best with decent headphones or a decent speaker set with a subwoofer.
    The video was made with Kdenlive on Debian Linux.
    Have fun!
    (Oh and did I mention already that comments and constructive criticisms are welcome? :D )

    Monday, January 2, 2012


    Using modes in composition

    I have some fascination with composing based on modes. It can make music sound so refreshing. For the longest time I've been very confused about modes, so here's an attempt at clarifying some aspects related to them. The explanation assumes you are already familiar with the key signatures of major and minor scales (i.e. the number of sharps and flats required to make a key like D major sound like D major).

    A typical explanation of modes goes something like this. First you start with a C major scale:

    c d e f g a b c
    Now you play the same notes, but you start on the note "d"
    d e f g a b c d
    and you have created a dorian mode. Similarly, starting on "e": "e f g a b c d e" results in a phrygian mode; starting on "f": "f g a b c d e f" results in a lydian mode; starting on "g": "g a b c d e f g" results in a mixolydian mode; starting on "a": "a b c d e f g a" results in an aeolian mode; and starting on "b": "b c d e f g a b" creates a locrian mode.
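The rotation trick above can be written down in a couple of lines (note names as plain strings; a sketch, not a music theory library):

```python
def mode(scale, degree):
    """Rotate a scale so it starts on the given degree:
    degree 1 = ionian (unchanged), 2 = dorian, 3 = phrygian, ..."""
    i = degree - 1
    return scale[i:] + scale[:i]

c_major = ["c", "d", "e", "f", "g", "a", "b"]
```

For example, mode(c_major, 2) yields the d dorian notes "d e f g a b c".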

    If you're like me, a few questions immediately arise (which are never answered by most basic tutorials):

    1. Is "d e f g a b c d" a dorian mode of the C major scale? Or of the D major scale?
    2. Do different modes have a specific sound to them? some mood or character?
    3. How do you quickly construct a dorian mode (or any other mode) of - say - the E major scale without memorizing the notes for each possible combination of (scale, mode)?
    4. What difference does it make whether you play "c d e f g a b c" or "d e f g a b c d"? It's all the same notes anyway.

    Is "d e f g a b c d" a dorian mode of C major or of D major?

    The rule is simple: if it starts on a "d", it's derived from a d-scale.

    What different modes exist? Do they have a specific sound to them? some mood or character?

    Some modes lend themselves more naturally to achieving specific moods in your music, but you can by no means generalize. Some very sad and melancholic music has been written in major keys (I'm thinking e.g. of Scriabin's prelude op. 17, no. 6, or the Plainte by Caix d'Hervelois). Those over-generalizations notwithstanding, the following list seems to work well (these are all modes derived from a major scale; you could derive even more modes by starting from other scales, say a harmonic or melodic minor scale):
    • c ionian "c d e f g a b c": happy music. This is also called the c major key.
    • c dorian "c d es f g a bes c": Irish folk-like
    • c phrygian "c des es f g aes bes c": Spanish-sounding
    • c lydian "c d e fis g a b c": happy, playful, somewhat comical effect, attention raising (think: The Simpsons theme)
    • c mixolydian "c d e f g a bes c": uniting pleasure and sadness; creating an effect of yearning for something or someone
    • c aeolian "c d es f g aes bes c": sad music. This is also called the c natural minor key.
    • c locrian "c des es f ges aes bes c": (this is not often used as it sounds weird to most western ears)
    Some nice effects can be achieved e.g. if a melody is written in one mode, and later on is echoed in a different mode.

    How do you quickly construct the dorian mode of - say - the E major scale ?

    This may not work for you, but it works for me. It requires that you know the key signatures of the major keys by heart, and that you don't have to think twice about the names of the simplest modes built on the white keys (i.e. ionian "c d e f g a b c", dorian "d e f g a b c d", phrygian "e f g a b c d e", lydian "f g a b c d e f", mixolydian "g a b c d e f g", aeolian "a b c d e f g a", locrian "b c d e f g a b").
    • First I remember that the simplest dorian mode is "d e f g a b c d"
    • Then I remember that D major really needs two sharps (fis and cis) to sound like D major.
    • From those two memories I can quickly construct the following rule: "the dorian mode is like the major scale, but with two fewer sharps (or two extra flats, or one fewer sharp and one extra flat)." This rule can be applied to any scale.
    • Example: E major requires four sharps. If I want the dorian mode I just need to drop two of those sharps, so I end up with "e fis g a b cis d e". Second example: now I want the E lydian mode. The simplest lydian mode is "f g a b c d e f". Compared to F major, this has one fewer flat (bes became b), or equivalently: one extra sharp. E lydian therefore requires 5 sharps instead of 4, i.e. fis, cis, gis, dis, ais.
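The rule from the bullets above boils down to simple signature arithmetic: each mode's signature is the major signature of the same tonic plus a fixed offset (dorian: two fewer sharps, lydian: one extra sharp, and so on). A small sketch using English note names, counting flats as negative sharps:

```python
# Sharps in each major key signature (flats count as negative sharps).
MAJOR_SHARPS = {"C": 0, "G": 1, "D": 2, "A": 3, "E": 4, "B": 5,
                "F": -1, "Bb": -2, "Eb": -3, "Ab": -4, "Db": -5, "Gb": -6}

# Offset of each mode's signature relative to the major (ionian)
# signature on the same tonic.
MODE_OFFSET = {"ionian": 0, "dorian": -2, "phrygian": -4, "lydian": 1,
               "mixolydian": -1, "aeolian": -3, "locrian": -5}

def mode_sharps(tonic, mode_name):
    """Number of sharps (negative = flats) in the given mode."""
    return MAJOR_SHARPS[tonic] + MODE_OFFSET[mode_name]
```

mode_sharps("E", "dorian") gives 2 and mode_sharps("E", "lydian") gives 5, matching the two worked examples above.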

    What difference does it make whether you play c ionian "c d e f g a b c" or d dorian "d e f g a b c d"? It's all the same notes anyway.

    • One difference lies in which notes you emphasize on the strong beats (typically the first, third and fifth notes of the mode), and in which notes you choose to return to at the end of a musical fragment (often something like the first note of the mode).
    • If you're writing a melody in a mode, you will want to prominently feature the notes that make it sound different from a major scale, to emphasize that you're not working in a major/minor scale. So a melody in G mixolydian should feature the natural f, and a melody in d dorian should feature the natural f and the natural c. It's exactly those altered notes (compared to the major scale) that lend your melody its extra qualities and mood.


    Yet another blog?

    Yes. I have found that maintaining a blog is a good way to force myself to document some of my musical experiments. It has happened multiple times already that I've created some music and the little evidence of it that existed got lost in computer crashes, or in the unreadable binary proprietary file formats of defunct tools. Some of my scores (of which I had only one copy, of course :) ) I lent to others but never got back.

    What's the purpose of this blog?

    I want a place to ponder about music, to brainstorm about music, to discuss some music related tools, perhaps post the occasional tutorial. I also want a place to occasionally link to a musical score or youtube recording I've created.

    Musical instruments I frequently use

    I own a Yamaha GT2 digital grand piano, which I use because it is easy to record. If I ever win the lottery, I would like to check out the Yamaha AvantGrand N3, which promises a "real piano experience" (whatever that may mean :) )
    I use someone else's (ahem!) Roland RA30 for an occasional synth sound.
    I've also used software synthesizers like ZynAddSubFX or Csound. I have a Fazer acoustic piano but I tend not to use it that often, since most of my musical activities are concentrated in the very late (or very early, as you please) hours of the day. Fazer is/was a Finnish brand of piano with very decent quality for a relatively low price. The factory no longer exists: it was bought by the Warner music corporation around 1990, who then mismanaged it until it didn't mean anything anymore.
    I record most of my audio on a simple but adequate Roland BOSS BR-600 multitrack recorder.

    Software tools I frequently use

    I have high demands of the tools I use: they must be free ("free" as in "free speech") and open source software, and if possible also free ("free" as in "free beer") of cost. Luckily there are many high-quality free and open source music tools. I happen to use the Linux operating system, which has everything (and much more!) a musician could dream of.
    • Typesetting of scores: using the brilliant lilypond program.
    • Postprocessing of audio: using Audacity.

    Musical styles and influences

    I mainly dabble in classical music (in a very broad sense) and I'm not afraid of film, jazz and folk influences. Don't bother talking to me about rock, techno, dance or similar styles: I know next to nothing about them and feel very little motivation to learn more. It's not my thing.