Louigi Verona Workshop


Linux Audio: A setup for live performances

How the kitchen works

30 March, 02011

This article will discuss my live performance setup on Linux. Apart from a quite detailed description of how I approached the problem, I offer links to some of the recordings of improvisational sessions I've made to this day. In fact, if you want to make sure that my setup is worth anything, you might want to listen to those first and see if they are impressive enough; they are at the end of the page.

In short, my setup quite reliably allows one to make very detailed music, and not simply ambient, but music with synced basslines and drums and all that other good stuff, as well as volume control of all the channels involved, midi control of effect parameters and the ability to quickly change the chord being played, the melody, and even transform one tune into another. Obviously, all of this is possible thanks to the continuing work of the Linux Audio community, which is putting together some serious software. My task was to connect this software in such a manner that I could create the music I want, in real time, on the fly.

So how did it all start?

After finding out about din, I made up my mind to build a setup for live performances on Linux, and although I had given up on the idea only some time before, this time I had a feeling it would work out.

I also decided to explore some of the midi sequencers on Linux, thinking that perhaps I could at least add a bassline or some other structured sounds to my din playing. I looked at seq24, but did not know how to make it show up in Patchage, so I discarded it at the time, and then tried the NON Sequencer, which had sync issues.
But then I was introduced to HarmonySEQ, a sequencer that became a game changer for my whole Linux Audio setup.

HarmonySEQ is a project that deserves a separate article, and the only reason I am not writing it is that I am waiting for the new version to come out, which will have many features added as well as a revised user interface. But in short, it is very easy to use and allows you to create "events", bound to your computer keys and/or to a midi controller, that turn patterns on and off and switch chords, notes and active patterns on the fly. This makes it invaluable for live performances: much of the time you would find it difficult to actually change a chord along with all the basslines and supporting themes, but this is not a problem for HarmonySEQ.


Anyway, once I saw that there was a midi sequencer that could be the core of my system, sequencing and syncing everything, I decided to figure out what the best approach should be.
Obviously, with lots and lots of applications and synths available, you could just open them up at random and start sequencing. However, the ease with which I quickly got confused as to what was channeled through what, and the need to constantly route things in Patchage, requiring lots of mouse work, hinted that a different method was preferable.

So I decided to unify things and pick a set of synths and effects I will use. Not only that, but I decided to figure out the best way to route those and keep it fixed for every live performance session, so that I know precisely which synth is routed through what effect and into what mixer channel. At the same time, the setup has to be versatile enough to allow me to make complex music.

And this is what I eventually came up with:

[Screenshot: the full setup in Patchage]

Notice that even in Patchage it does not look too complex. I mean, I've seen setups with so many virtual wires that you might mistake them for a maze. At the same time, each and every connection here is thought through, and the combination of simplicity and efficiency comes from practice. Let me now describe in detail how it works and why I set it up the way I did.


Overview

Okay, so, first of all, let's look at what programs we are using and for what.
We have HarmonySEQ for sequencing and synchronizing everything, QSynth with three engines open to play sounds, Hydrogen to play drums, Kluppe to play longer sounds and loops, four Rakarracks to provide effects, and finally Qtractor, which plays the role of a midi-driven mixer. I use a KORG nanoKontrol as my midi controller.


Audio routing

Before we go into the specifics of how each app is set up, let's look at routing.
Routing has to be easy to remember and logical enough that in the heat of a live performance, when you act largely on instinct, you can instantly recall what goes through what.
Because we have several instances of some apps open, it is a good idea to route apps in accordance with their instance numbering. If you look at the screenshot, you will notice that QSynth1 is routed through rakarrack-01, which goes into Track 1 in Qtractor. The same goes for QSynth2: it goes through rakarrack-02 and into Track 2 of the mixer, and QSynth3 goes into rakarrack-03 and then into Track 3.
There is also a plain "rakarrack". This is actually the first instance of Rakarrack, which gets no number on initialization; but because the QSynth instances do have numbers, it would not be easy to remember that QSynth1 goes into rakarrack and QSynth2 into rakarrack-01, so the plain "rakarrack" is treated as the last one and goes into Track 4.

Note that the secondary purpose of such audio routing is to minimize the amount of mouse work in Patchage during the performance. This routing is universal enough that if I need to add more synths, I don't have to think about where to route them; I know precisely the options I have: either into any of the rakarracks or into the free tracks of Qtractor, which we will discuss in the next section.

Note: on the screenshot you can see that there is really no QSynth1, just "qsynth". For some reason the first engine of QSynth does not respond to option changes and always comes up named "qsynth". In the GUI, however, it is called QSynth1, as you will see on the screenshots below, so routing "qsynth" into "rakarrack-01" makes sense.
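If you want to script this fixed routing instead of drawing the wires in Patchage, a few lines around the jack_connect command-line tool (it ships with JACK) will do it. Below is a minimal sketch in Python; all the port names in it are assumptions based on my description above, so check the actual names in Patchage before relying on it.

```python
#!/usr/bin/env python
# A sketch of scripting the fixed routing with jack_connect (part of JACK).
# All port names below are assumptions -- verify yours in Patchage first.
import subprocess

def connect(src, dst):
    # jack_connect simply fails for a missing port, which is harmless here
    subprocess.call(["jack_connect", src, dst])

# QSynthN -> rakarrack-0N -> Track N. The first QSynth engine registers
# as plain "qsynth"; the unnumbered rakarrack is treated as the last one.
chains = [("qsynth",  "rakarrack-01", "Track 1"),
          ("QSynth2", "rakarrack-02", "Track 2"),
          ("QSynth3", "rakarrack-03", "Track 3")]

for synth, fx, track in chains:
    connect(synth + ":left",  fx + ":in_1")
    connect(synth + ":right", fx + ":in_2")
    connect(fx + ":out_1", "Qtractor:%s/in_1" % track)
    connect(fx + ":out_2", "Qtractor:%s/in_2" % track)

# the plain "rakarrack" (my vocoder) feeds Track 4
connect("rakarrack:out_1", "Qtractor:Track 4/in_1")
connect("rakarrack:out_2", "Qtractor:Track 4/in_2")
```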


Setting up the mixer

Qtractor has very solid midi learn features and can easily create any number of audio inputs, which is why I decided to use it as my mixer. I did not create any individual audio outputs: the whole purpose of having a mixer, apart from controlling the volume of separate channels, is to unite all audio tracks into one for general output and/or recording.

The general concept of simplicity and unification is applied to every chain of the system. In the case of Qtractor, I limited the number of tracks to 8 and assigned each track a fixed role. Let's look at those.

Tracks 1 through 4 host outputs of the four rakarracks and each carries a CALF Vintage Delay.
Tracks 5, 6 and 7 are generally used for clean audio output, for apps that require no delay and no effect processing, such as Kluppe, which I mostly use to play pads and sounds crafted in advance, with all the processing already contained in them. By default I use track 7 for clean audio and connect things to tracks 5 and 6 only if I need separate volume control.
Finally, track 8 is used for unprocessed Hydrogen output. Very often I use Hydrogen as a vocoder carrier or pass it through effects, but if I need drums as they are, I always know they are on track 8.
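In the spirit of the routing sketch above, these fixed roles again boil down to a handful of scriptable connections; as before, the port names are assumptions to be checked in Patchage.

```python
# Clean sources bypass the rakarracks (port names are assumptions)
import subprocess

def connect(src, dst):
    subprocess.call(["jack_connect", src, dst])

connect("kluppe:out_1",   "Qtractor:Track 7/in_1")
connect("kluppe:out_2",   "Qtractor:Track 7/in_2")
connect("hydrogen:out_L", "Qtractor:Track 8/in_1")
connect("hydrogen:out_R", "Qtractor:Track 8/in_2")
```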

Additionally, I have created a midi channel, called "TB303" on the screenshot, which hosts a Nekobee DSSI plugin and is controlled by HarmonySEQ.


Note: the delay plugins are hosted in output buses, but my midi controller controls the volume sliders of input buses. This is important, because it allows you to quickly shoot the volume up and down and let those bursts echo around, and it generally gives a smoother sound when fading in and out. If you host the delays in input buses, pulling a slider down will silence the delay effect as well, which is not as versatile.


KORG nanoKontrol is a great controller for such a setup. It has several "scenes", each scene sending different MIDI CCs from its knobs and sliders.

[Screenshot: the KORG nanoKontrol]

"Scene 1" I use for volume and delay control. Sliders 1-8 control volume sliders of tracks 1-8. Knobs above sliders 1-4 control delay amount of the corresponding tracks, knobs above sliders 5-8 control delay feedback of tracks 1-4 (i.e., knob above slider 5 controls delay feedback of delay on track 1, knob above slider 6 controls delay feedback on track 2 and so on). That way I have volume and delay control right before me without the need to switch scenes of the midi controller around.
At first I set slider 9 to control the Master Out volume, but then decided against it: it is not much needed, and at the same time it poses the risk of a mistake, when you want to kill the volume on track 8 but instead kill the whole mix. At the moment slider 9 does nothing, and I advise against putting any function on it in a live performance setup.

"Scene 2" is used for Nekobee DSSI control. All Nekobee functions I've put on knobs, since it is much more intuitive to work it that way, not to mention it's more traditional ;) As seen on the screenshot, Nekobee midi track also carries a CALF Vintage delay: a TB-303 is unthinkable without some good echoes. Nekobee delay amount and feedback I put on sliders 1 and 2. It is up to you what controls to choose though since there are enough knobs and sliders here, so just find a setup that works best for you. I do miss the ability to assign same parameter to separate knobs, as I wouldn't mind having track 8 volume control duplicated in "Scene 2", for situations when I want to kick in drums along with some cool TB-303 sqeaking, but maybe that feature will be available in the future (or either I'll find a way to program a particular slider on nanoKontrol to the same CC message).

I have several versions of these session templates, each set up for a particular tempo, because there is no reliable way that I know of to pass tempo changes to all the delay plugins. So, short of simply loading a project with the required tempo set in advance, you would have to open every delay plugin and set the tempo manually, which is not something you want to be doing during a performance.
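The reason tempo matters so much to the delays is simple arithmetic: a quarter-note delay is 60000 / BPM milliseconds, so every delay plugin has to be retuned whenever the tempo changes. A quick illustration:

```python
# Tempo-synced delay times: one quarter note lasts 60000 / BPM milliseconds
for bpm in (120, 130, 140):
    quarter = 60000.0 / bpm
    print("%d BPM: quarter %.1f ms, eighth %.1f ms, dotted eighth %.1f ms"
          % (bpm, quarter, quarter / 2, quarter * 0.75))
```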

That is it for Qtractor. The only thing left to mention is that Qtractor thankfully remembers all of its connections, so you don't have to worry about those. Basically, the only connections you do have to create manually are the ones between HarmonySEQ and the QSynth instances, and between your audio apps and the four rakarracks, which is not much and is done just once.

Rakarracks

There are four Rakarracks in use. I do try to differentiate between them, although so far it has been erratic, since during a performance you pretty quickly begin to change effects, load different presets and tweak things. But in general, the one with a really fixed role is the fourth one, the plain "rakarrack", which I use as a vocoder. My use of a vocoder in music is very broad: I am ready to feed all possible sounds into it.

HarmonySEQ

HarmonySEQ is a different story, and one can spend a lot of time figuring out the best way to set it up. Because HarmonySEQ allows you to program it to accept events, be it from a computer keyboard or a midi controller, and those events include switching patterns on and off, playing them once, and changing chords and even specific notes inside a pattern, there is a lot of thinking and trying out to be done.

[Screenshot: HarmonySEQ with my initial set of patterns]

My initial setup is this. As you can see in the picture, the first four patterns are called "bass01,02,03,04". Those are the patterns bound to the computer keys that change chords. I programmed them for the whole set of chords I typically use, both in major and minor scales. The efficiency here is that one key changes the chord in all four patterns at the same time, so I can have several themes going and they all change at once. Patterns "seq5,6,7" do not change chords and can hold themes which have to stay the same. Pattern 8 I use to control the drum soundfonts I have.

"hydro_init" and "hydro_stop" are special patterns which control Hydrogen. Hydrogen allows to define midi input in its options and to bind events to that input. I've set it up to PLAY on receiving a specific note and to STOP on receiving another specific note. To make sure those notes do not trigger a drum, I have used very high notes.
Please note that at the time of this writing, HarmonySEQ does not trigger a pattern very accurately in "play once" mode. So while playing "hydro_stop" once works fine to stop Hydrogen, in order to sync it perfectly with the rest of the patterns you have to trigger "hydro_init" in "play looped" mode, by checking the box in front of it or, better, by programming a key to toggle it on and off. Otherwise, Hydrogen will be a little off the rhythm. Hopefully that bug will be fixed soon and the problem will disappear, but for now just use the play looped mode.
When Hydrogen starts playing, you can turn the hydro_init pattern off. In fact, if you do not, stopping it with "hydro_stop" will result in it almost instantly starting to play again.
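Under the hood, those two patterns send nothing more than a single note each. The sketch below imitates them with the Python mido library; the note numbers are placeholders for "very high notes", so use whatever you actually bound in Hydrogen's midi options.

```python
# Imitating the hydro_init / hydro_stop patterns: one high note each.
# Note numbers are assumptions -- match your Hydrogen midi bindings.
import mido

PLAY_NOTE, STOP_NOTE = 126, 127

with mido.open_output() as port:  # the default MIDI output port
    port.send(mido.Message("note_on", note=PLAY_NOTE, velocity=100))
```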

QSynth

QSynth, the latest version of which deserves much praise, is a good tool for performing with sf2 soundfonts, and the ability to save presets of chosen sf2 instruments is very helpful. On the screenshot below you can see a loaded preset, with HarmonySEQ sending midi notes to QSynth.

[Screenshot: QSynth with a loaded preset]

Window manager

At the moment I am using Openbox for my audio work. I have found it to be very efficient for my needs. Using the "obmenu" program, I edited the standard menus and created a "Linux Audio" section where all of my software is a click away. Additionally, I've set up three desktops, all of which, as is typical for my live setup, have fixed roles. Desktop 1 hosts Patchage, maximized, so that I can instantly switch over to it and make my connections; Desktop 2 is my main working space; and Desktop 3 usually hosts a maximized din. din is a separate topic in itself, but in short, I've bought a graphics tablet specifically to play din with, so I need the din window to be maximized to make it more comfortable to play and to keep me from accidentally touching other applications.
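For reference, the section obmenu creates ends up in ~/.config/openbox/menu.xml and looks roughly like the sketch below; the entries are illustrative, so list whatever apps you use.

```xml
<!-- A sketch of a "Linux Audio" submenu in ~/.config/openbox/menu.xml;
     the entries here are illustrative -->
<menu id="linux-audio" label="Linux Audio">
  <item label="QSynth">
    <action name="Execute"><command>qsynth</command></action>
  </item>
  <item label="Hydrogen">
    <action name="Execute"><command>hydrogen</command></action>
  </item>
  <item label="Patchage">
    <action name="Execute"><command>patchage</command></action>
  </item>
</menu>
```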

I did look at some other window managers, but so far this one is my favourite. You should choose your own, but I do recommend using an alternative window manager that is minimal, uses less memory and lets you launch the software you need quickly.

Conclusion

What I described above is the basic setup, the skeleton. But what if I want to add some more synths? This I also unified as much as possible and decided to add additional synths to Track 1. Obviously, if I want to separate them or channel them through the effects on Tracks 2, 3 or 4, I can do that, but by default I assume that din, AmSynth or Zyn, which I sometimes want to use, go through Track 1. din I typically channel through Track 1, and I often use it as a modulator for the vocoder on Track 4. If I want a clean din sound, I channel it into Track 7, the "clean sounds" track, or into the spare tracks 5 or 6. The screenshot below shows a slightly more advanced setup along these lines, which is in fact the more usual setup I have these days.
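In routing terms, that default is just two more connections, in the spirit of the earlier sketches; din's port names here are an assumption, so check them in Patchage.

```python
# din defaults to the Track 1 chain (port names are assumptions)
import subprocess

subprocess.call(["jack_connect", "din:out_left",  "rakarrack-01:in_1"])
subprocess.call(["jack_connect", "din:out_right", "rakarrack-01:in_2"])
```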

[Screenshot: the more advanced setup in Patchage]

I am also now setting up Specimen, to be used both with HarmonySEQ and with my Oxygene 8 midi keyboard, so I suspect that in the future my setup will look much more complex. But when you build a setup with a certain logic, chances are that no matter how complex you make it, it can still be simple to work with.

In general, I am very happy with what Linux Audio gives me in terms of live performance. For the first time since I became unhappy with what Windows could do as a live performance tool, my laptop is finally beginning to do the things I dreamed of: letting me just sit there and improvise. But it does require a lot of practice to do things fluently and quickly. The setup is very important. But practice is important too!


Audio examples

Finally, here are some of the recordings I made of my improvised sessions using the setup described above. All of these tunes were created in real time, nothing pre-written.

First recorded HarmonySEQ session
"Cycles" session extract
These two are basically the same tune played differently. I called the tune "Cycles", thinking about the cycle of night and day and how our life is a sequence of cycles. These tunes progress very slowly because I was literally figuring out how HarmonySEQ works and connecting things in real time. But I still think they are pretty nice for a first try.

107 Autumn session
Yet another session, technically similar to my first ones, but it was at this point that I started to put together a more advanced setup.

cosmoport
A very oldskool-sounding trance tune, something you could have heard in the early 90s.

water element
This is a tune made with the current setup. It lasts 15 minutes, and I decided to leave everything as it is, so you can even hear me trying out various samples and finally fitting them into the tune. The tune moves from calm into a rhythmic TB-303 sequence, which reminds me of the rave music of the early 90s.



If you want to comment on this post, please use the textboard.