Latest Sonic State blog post; The Deep End back

Happy Saturday all! Just a quick note to let you know that my latest Sonic State blog post is up now. It offers some tips for automating external MIDI gear in Ableton Live.

In other news, my deep/downtempo night The Deep End is back in business as of August 30th. Head on over to the Facebook page and give it a “Like” if you’re in Halifax and interested in this sort of thing.

10 ways to work faster with Ableton Live

Sorry for the silence lately but the past couple months have been crazy! Several summer trips and out of town gigs, a new job, and unfortunately, basement flooding that’s led to me having to disassemble my studio and (temporarily) move it to a much smaller space 🙁

I realized I totally forgot to share my last blog post here. It outlines some of my personal workflow tips for getting things done more quickly in Ableton Live.
 
Anyhow, I’ll continue posting links here as I remember to! I typically blog for Sonic State once a month, and you can expect the next post within the next few days. Be sure to “Like” Sonicstate.com on Facebook, or watch their home page, as they post new blog posts to both.

Disk speed and multi-tracking

A friend recently asked me about disk speed as it pertains to multi-tracking:

“I’m getting a new laptop next week, and have a bit of extra cash to spend as well. It comes with a 1 TB 5400 RPM drive. Would it be worth it, performance-wise, to splash out on a 7200 or 10,000 RPM drive and use the 1 TB as extra storage? In terms of multi-tracking and sequencing, would there be a noticeable difference between using the 5400 as a main drive and using a faster one?”

I figured I would share my response here, as it may help others facing the same decision:

“It really depends how many tracks you’re recording at once. If you’re doing a couple at a time then it probably won’t make much of a difference for recording, but if you’re doing eight or more then you definitely want the fastest drive possible.

The other thing to consider is how many tracks you’ll be playing back. The default behaviour of most DAWs is to stream audio from disk, and of course the more tracks you stream the more likely you are to hit the limits of disk I/O. If your DAW supports loading audio into RAM (Ableton Live does, for example), then you can ignore this bit.

One trick to get better (not necessarily faster, but more consistent) disk performance is to partition the disk and dedicate one partition solely to recording and/or as a scratch disk (if your DAW supports it). This way you don’t need to worry as much about fragmentation, since the partitions will fragment separately.

Finally, if you’re running Windows 7 you could get a large USB thumb drive and use it for ReadyBoost, which basically gives you solid-state caching of frequently accessed files (system files and the like). This way the DAW can get more “exclusivity” of the mechanical drive.”

Hope this helps somebody out there! If you’re wondering about my setup: I have a mechanical drive and a solid-state drive. I use the SSD for ReadyBoost and as a scratch drive (for Ableton and Photoshop). Plus I have a 16GB thumb drive that I use for ReadyBoost as well, when USB bandwidth permits.
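To put some numbers on the streaming question, the raw sequential data rate even a large session needs is surprisingly small, which is why fragmentation and seek time (not rotational speed alone) tend to be the real bottleneck. Here's a rough back-of-the-envelope sketch; the track count and format below are just example figures:

```python
def stream_rate_mb_s(tracks, sample_rate=44100, bit_depth=24, channels=1):
    """Raw data rate required to stream `tracks` audio tracks from disk."""
    bytes_per_second = tracks * channels * sample_rate * (bit_depth // 8)
    return bytes_per_second / 1e6

# 32 mono tracks of 24-bit / 44.1 kHz audio:
rate = stream_rate_mb_s(32)
print(round(rate, 2))  # ~4.23 MB/s sequential -- trivial for any modern drive
```

Even a hundred mono tracks stays under 15 MB/s of sequential throughput, which any of the drives above can manage; the trouble starts when dozens of files are read from scattered locations at once.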

Managing headroom in Ableton Live

Watch your head bro! Generally speaking you want to maintain 3-6 dB of headroom when you’re working on a track. This means the master should peak somewhere between -6 and -3 dB. Why? Well in short: the closer you get to 0 dB the easier it becomes to inadvertently cause clipping. And unlike analog clipping, which can be warm and musical, digital clipping sounds bad. Very bad, bro.

First things first: if we’re going to be watching our levels we’ll need a little more insight. Live’s mixer offers some useful information that is hidden by default (although this has changed in Live 9).

Increase the height of the mixer section to reveal two values: the value in the pill-shaped box is the peak level; the value in the rectangle is the fader level.

Scaling your levels

In the following example I have a single audio loop playing. As you can see, with just this one loop I’m already hitting -3.34 dB.

If I add anything else I’m likely going to cause some clipping:

Whoops! I do like the mix between these two audio clips, so rather than adjust the faders individually I’m going to bring them both down in one go. With the track still playing:

  1. Click the title of any track (in this case, “1 Audio”)
  2. Press CTRL+A to select all tracks
  3. Adjust any fader – the faders of all selected tracks will move by an equal amount
  4. Click on the master “Peak level” reading to reset it
  5. Rinse and repeat until you reach an ideal amount of headroom

As you can see, by scaling my levels every time I add something to my track, it’s easy to maintain a consistent 3-6 dB of headroom even before the track is complete.
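To see why headroom disappears faster than intuition suggests, remember that dB values are logarithmic: two clips each peaking at -3.34 dB can, in the worst case of fully correlated peaks, sum to above 0 dB. A small sketch of the arithmetic, using the peak reading from the example above:

```python
import math

def db_to_linear(db):
    # Convert a dB value to a linear amplitude ratio
    return 10 ** (db / 20)

def linear_to_db(amp):
    # Convert a linear amplitude ratio back to dB
    return 20 * math.log10(amp)

# Worst case: two fully correlated clips, each peaking at -3.34 dB
combined = linear_to_db(2 * db_to_linear(-3.34))    # ~ +2.68 dB: clipping!

# Pulling *both* faders down by the same 6 dB preserves the mix balance
# while restoring the headroom
scaled = linear_to_db(2 * db_to_linear(-3.34 - 6))  # ~ -3.3 dB: safe again
```

In practice uncorrelated material sums closer to +3 dB per doubling than the +6 dB worst case, but the direction of the problem is the same.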

Volume automation

A common problem with volume automations is that, traditionally, you’re forced to wait until you’ve fully mixed your track before adding them. Why? Because they work on an “absolute” basis and not a “relative” basis. That is, an automation from -8 to -5 dB will always do just that: even if you move the fader in an attempt to adjust your levels, it will jump back to the automation levels as soon as you click “Back to Arrangement”.

There’s a very easy way to get around this in Live, though: instead of automating the mixer level, insert Live’s “Utility” device in your chain and automate the “Gain” knob:

Now your volume automation will work relative to the mixer, so in the example above it would be a “3 dB boost” instead of a “sweep from -8 dB to -5 dB”!
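The distinction is easy to state as a toy model (this is just an illustration of absolute vs. relative behaviour, not Live’s internals):

```python
def absolute_automation(fader_db, automation_db):
    # Mixer-level automation: the written value wins, the fader is ignored
    return automation_db

def relative_automation(fader_db, gain_db):
    # Utility-gain automation: the boost/cut rides on top of the fader
    return fader_db + gain_db

# Suppose you later pull the fader down to -12 dB while mixing:
absolute_automation(-12, -5)  # -> -5: snaps back, fighting your mix move
relative_automation(-12, 3)   # -> -9: the +3 dB boost still applies
```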

In Closing…

Some might say it’s best to not worry too much about headroom while writing, and to do a full mix-down at the very end (that is, bring all the faders down and mix from scratch). While I won’t argue with this approach, my philosophy on this differs. Obviously you don’t want to get side-tracked mixing your track before it’s written, but if you’re cognizant of your headroom while you’re writing it will be that much easier to mix in the end.

Music Technology Program

For those looking to get into computer-based music but not sure where to start, I have put together a six-week program in music technology (based in Halifax, starting in October). I’m really excited to share some of the things that I’ve learned over the years and hopefully get other people as interested in this stuff as I am 🙂

More information/registration: http://learning.snugsound.com/

Hardware integration with Ableton Live

MIDI integration is notoriously lacking in Ableton Live. For example, you can’t store SysEx data at the start of a song (i.e. to store a patch dump), you can’t automate CCs in the Arrangement view, etc. Couple this with some of the other caveats of dealing with hardware (latency, MIDI timing errors, drop-outs) and it can make for a very frustrating experience.

But we love Ableton Live and want to get the most out of it, so in this post I will explore some options to tighten up timing, automate your external hardware seamlessly from the Arrangement view and generally have a much more enjoyable experience when working with MIDI devices.

Timing is everything 

The first thing you should do, if you haven’t already, is set your Driver Error Compensation. Contrary to some other articles on the internet, this is not simply a matter of entering a negative value to reduce your Overall Latency to 0 ms!

Wrong way!

Rather, what you are trying to do is tell Live how “truthful” your audio interface is being about latency. Doing so will allow Live to automatically compensate for delay more accurately (more on this later).

Ableton includes a tutorial and sample project that will help you set this value properly. To access it:

  • From the top menu: View -> Help View
  • In the Help section, “Show all built-in lessons”
  • Select “Driver error compensation”
  • Follow the steps

Note that you should repeat the above steps whenever you change your audio interface or Buffer Size.

Take Control

When I first started incorporating hardware into Live I was doing things the “hard way”: creating separate MIDI and audio tracks and then recording the audio signal from my synths before doing a final mixdown/render. There are some advantages to this, such as being able to warp/process the audio, but the downside is that all delay compensation needs to be done manually.

The “right” way to incorporate hardware (as of Live 7, I believe) is to use Live’s dedicated devices: External Instrument and External Audio Effect. These devices will take care of several things for you:

Firstly, they will account for latency. If you’ve properly set your Driver Error Compensation per the above you should have almost no latency relative to your soft-synths and audio tracks. Basically, what Live is doing is delaying everything else to give your synths time to catch up.

You will notice that these devices provide a Hardware Latency value: this accounts for latency in the hardware itself (i.e. the time it takes your synth to respond to a note, plus MIDI I/O).

Secondly, these devices will take care of recording the output from your hardware automatically when you bounce your track:

Real-time rendering

Unfortunately, what these Live devices don’t provide is a way to automate CCs from within the Arrangement view. There are a few possible approaches to this, described below.

Clip envelopes

This is the “default” way of working with CCs in Live. Unfortunately, you can’t see clip envelopes in the Arrangement view, nor can you name the CCs.

Where are you going with this… ?

So let’s say you’re trying to create an epic acid-line rise/fall. All you can really tell from the clip view is that “MIDI CC 74 is climbing towards bar 64”. This doesn’t cut it for me. To me, clip envelopes only really make sense for modulation and pitch bend, and that’s all I use them for. Moving on…
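For context, that envelope is nothing exotic on the wire: a CC sweep is just a stream of three-byte Control Change messages. A minimal sketch (note that CC 74 maps to filter cutoff only by convention, and the 64-step resolution here is arbitrary):

```python
def cc_message(channel, controller, value):
    """Build a raw MIDI Control Change message (status byte 0xB0 | channel)."""
    assert 0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127
    return bytes([0xB0 | channel, controller, value])

# A linear rise on CC 74, from 0 up to 127 over 64 steps:
ramp = [cc_message(0, 74, round(i * 127 / 63)) for i in range(64)]
```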

VSTs

There are several VSTs out there that allow you to control specific hardware devices (both my DSI Tetra and Little Phatty have VSTs, for example). These work by taking control of your MIDI I/O on behalf of your DAW: when Live sends a “note on” to the plugin, the plugin then relays it to the hardware, and vice versa.

Because these VSTs generally provide controls for all of the synth’s parameters (cutoff, resonance, etc.), you can automate them in the same manner as you would any other virtual instrument parameter. In other words, you can automate them from the Arrangement view! As an added bonus, these plugins generally store the “state” of all parameters, so when you reload your project you will get the same patch (even if it was never saved as a patch on the synth).

Little Phatty VST

The main caveat with these plugins is that, because they take control of MIDI I/O, you can no longer use Live’s External Instrument device.

There is a workaround involving loopbacks/virtual MIDI ports, but a far simpler one is to use Live’s External Audio Effect and only choose an input channel. This forces Live to perform real-time rendering; however, it will no longer automatically compensate for latency, so you will need to apply a negative track delay on your MIDI track (see “Tighten up” below).

Note that if a VST doesn’t exist for your hardware, there is an open-ended plugin called CTRLR that’s worth checking out.

Tighten up

As I mentioned earlier, Live’s External devices allow you to enter a Hardware Latency value. Assuming you aren’t using a VST to control your hardware, you can use this to tighten up timing even further. (If you are using a VST you will need to use a negative track delay on your MIDI track instead, but otherwise the below applies.)

The process for identifying your Hardware Latency is essentially the same as determining your Driver Error Compensation. Here are the steps I used:

  • Load a patch with an instant attack on your hardware device (basses or kick drums are good)
  • Sequence a couple of notes in your MIDI track (say, beats 1, 2, 3 & 4)
  • Render the project to WAV
  • Drag the audio track into a new channel in Live and turn off warping

Look at the waveform produced by the synth: does it line up with the 1, 2, 3 & 4 beat markers? In my case it didn’t.

Test loop with audio for comparison

Edit the bounced audio clip, adjusting the right-most digit of its start position until the waveform lines up. This offset is the value for your Hardware Latency, or negative track delay. (Edit, Sept 2013: one thing to keep in mind with track delays is that they affect playback, not recording, so you’ll need to include an extra bar before your MIDI phrase to ensure the full audio gets captured when the track is rendered or frozen.)

Adjusting clip start point

Re-bounce the audio and everything should line up now. Perfect timing!
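If you’d rather measure the offset in samples (by zooming in on the waveform in a sample editor, say), converting it to the millisecond value for Hardware Latency or a negative track delay is simple. The 220-sample offset below is just an example figure:

```python
def offset_to_ms(offset_samples, sample_rate=44100):
    # Convert a measured sample offset to milliseconds
    return offset_samples / sample_rate * 1000

# e.g. the synth's audio lands 220 samples late at 44.1 kHz:
delay_ms = offset_to_ms(220)  # ~4.99 ms
```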

Update September 2013: I’ve written a similar blog post for Sonic State that provides some additional thoughts on using Instrument Racks and Max4Live to automate CCs from the arrangement view. You can check it out here

Beat-matching is soo 1999 (or Fun with Ableton pt. 2)

Don’t get me wrong, I started off with vinyl and I still love it, but I just sold my turntables in favour of a DJ controller to use in conjunction with Ableton Live!

Not spending half of my time focusing on beat-matching opens up a lot of mixing possibilities. For example, if I wanted to mix a dozen tracks at once, I could. I don’t want to, but I could. More realistically, I can mix in bits and pieces of tracks that I may not want to play in their entirety, while doing a conventional two-deck mix and applying a healthy dose of filters and FX to create some additional movement, suspense, etc.

Another advantage is that I can audition a track, in sync, in a split second. If it works I can start bringing it up in the mix right away. Looping a track is just as easy and it’s always in sync with the master tempo.

On a side note, something I’ve become completely addicted to is harmonic mixing. I used to do this instinctively with vinyl, but it took tenfold the effort to find records that mixed in key, since adjusting the speed of the record also adjusted the pitch. Using Live to do harmonic mixes is a dream: not only can I key-lock tracks, I can also transpose them on the fly with high-quality algorithms.
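The underlying math explains why this was so hard with turntables: speed and pitch are locked together, so every pitch-fader setting implies a fixed key shift. A quick sketch (the +8% figure below is just a typical pitch-fader example):

```python
import math

def semitones_from_rate(rate_ratio):
    # Key shift caused by playing a record at rate_ratio times its speed
    return 12 * math.log2(rate_ratio)

def rate_from_semitones(n):
    # Speed ratio needed to transpose by n semitones
    return 2 ** (n / 12)

# Nudging a turntable's pitch fader up 8%:
shift = semitones_from_rate(1.08)  # ~1.33 semitones: enough to leave the key
```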

The main downside I’ve found to letting the computer beat-match is that it doesn’t always get it right. Messing around with warp markers is definitely not as gratifying as nudging a piece of vinyl or adjusting a pitch slider, but when it’s done, it’s done for good – I don’t have to do it every single time I play a gig 😉