I was just watching the Roska Against The Clock video on Fact TV and noticed he (like many others I've seen in recent years) uses audio events in the arrange window to make his beats. I still don't 'get' this approach – it feels so limited to me. Here are some reasons why:
Say I want to adjust the tuning of a drum sound to match the musical elements in the composition: with audio events I'd have to 'process' the source sample. With a sampler, all I'd need to do is tune it. Processing is something that changes the sample itself – it's destructive – whereas tuning via MIDI leaves the sample intact and simply transposes the playback. It just feels more flexible: you can change the pitch at any point without touching the source sample.
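To put that non-destructive tuning in concrete terms: a sampler repitching a hit is just changing the playback rate, where a transposition of n semitones corresponds to a rate of 2^(n/12). The source audio is never rewritten. A minimal Python sketch (the function names are mine, purely for illustration):

```python
# Non-destructive transposition: the source sample stays untouched;
# only the playback rate changes. rate = 2 ** (semitones / 12)

def playback_rate(semitones: float) -> float:
    """Playback-rate multiplier for a transposition in semitones."""
    return 2.0 ** (semitones / 12.0)

def transposed_length(num_frames: int, semitones: float) -> float:
    """Resulting duration (in frames) when a sample is repitched."""
    return num_frames / playback_rate(semitones)

print(playback_rate(12))             # one octave up -> double speed
print(playback_rate(-12))            # one octave down -> half speed
print(transposed_length(44100, 12))  # a 1 s sample at 44.1 kHz, up an octave
```

Change the semitone value at any point in the project and the same source sample just plays back differently – which is exactly why it feels more flexible than bouncing a processed copy.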
Dragging around a large collection of audio regions makes structuring the track harder. Grouping the regions into folders of 4 or 8 bars helps, but it still isn't as flexible as programming with MIDI. There's also a practical cost: the video card has to work harder to draw all of the individual waveforms, so graphics performance degrades.
Per-hit dynamics are another issue: with MIDI you simply vary each note's velocity, but you just can't do this with audio unless you go in changing the gain of each region or use automation to do so. You could of course use sidechaining to simulate some of it – hats being ducked by a kick, for example.
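That ducking workaround is easy to sketch in a few lines. This is a toy model, not any DAW's actual compressor: a crude peak follower (my own hypothetical helper) tracks the kick's envelope, and the hat's gain is pulled down wherever the kick is loud:

```python
# Toy sidechain ducking: the kick's envelope pushes the hat's gain down.
# `release` and `depth` are made-up knobs standing in for a real
# sidechain compressor's release time and ratio.

def envelope(signal, release=0.9):
    """Crude peak follower: jumps up on peaks, decays by `release`."""
    env, out = 0.0, []
    for s in signal:
        env = max(abs(s), env * release)
        out.append(env)
    return out

def duck(target, sidechain, depth=0.8, release=0.9):
    """Attenuate `target` by up to `depth` wherever `sidechain` is loud."""
    env = envelope(sidechain, release)
    return [t * (1.0 - depth * min(e, 1.0)) for t, e in zip(target, env)]

kick = [1.0, 0.6, 0.3, 0.0, 0.0, 0.0]  # one kick hit, then silence
hats = [0.5] * 6                       # steady hats at constant level
print(duck(hats, kick))                # hats squashed on the hit, recovering after
```

The point of the original complaint still stands, though: this shapes every hat the same way relative to the kick, whereas per-note velocity lets you shape each hit individually.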
Placing beats into the arrangement is not the same as 'performing' them. It feels soulless to me, although I have used the approach a few times. I can't discount it as a technique, as it can yield great results – it just feels like 'you' can't manifest yourself in the beat properly.
Now I'm not knocking anyone who uses this technique – Roska, for example, makes great music. I'm just trying to get my head around WHY people use it, so I'm all ears – please let me know the plus points I may be missing. Bear in mind that this is a technique that didn't exist pre-DAW; many classic club tunes were made using the traditional MIDI-plus-samples approach (and I'm pretty old school).