What can I do with these loops?

In my weekends (and the occasional evening) I like to jam with plugins and MIDI, edit up samples, and make little loops. This is a low-pressure outlet for my creativity. In the last few years this has meant I've had a lot more fun, accumulated lots of rendered audio, and almost stopped releasing music. I'd been looking for a way to combine all these loops into some kind of generative audio wallpaper.

Landscapes are nice

A wee while ago I came across @SoftLandscapes on Twitter. I like experimenting with primitive (minimalist?) animation, and I wondered if I could implement something similar in canvas or SVG, so I did some prototyping on CodePen.

Press play to hear it! Leave it going for an hour or so to hear everything, or leave it going for a week! The full site is here, so you can full-screen it in a dark room.

Let's learn about new web tech

I was also looking to get more experience with the Web Audio API, Redux, and React. So I combined all these things into a project that I can focus on and then ship (because real artists ship).

What is it? How does it work?

There are a handful of songs, each split into four layers. The algorithm cycles through all the songs, gradually adding in and then removing layers. Each layer corresponds to one of the scrolling parallax mountain ranges, so they fade in and out too.
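As a very rough sketch, the top-level cycle might look like the following JavaScript - the addLayer/removeLayer helpers and the timings are hypothetical stand-ins, and the real code also overlaps songs rather than finishing one before starting the next:

    // Hypothetical top-level loop: bring each song's layers in one at a
    // time, then peel them off again and move on to the next song.
    const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    async function runForever(songs, holdMs = 60000) {
      for (let i = 0; ; i = (i + 1) % songs.length) {
        const song = songs[i];
        for (const layer of song.layers) {
          addLayer(layer);     // fade a layer (and its mountain range) in
          await wait(holdMs);
        }
        for (const layer of song.layers) {
          removeLayer(layer);  // and back out again
          await wait(holdMs);
        }
      }
    }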

When switching songs, as layers are removed, the code decides how soon to bring in the next song - that is, how much the parts overlap. The code also randomly chooses different ways for parts to enter and exit: simple start/stop, low- and high-pass filter sweeps, or volume sweeps.
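For example, one of those entries could be scheduled with the Web Audio API something like this, assuming each layer already runs through its own GainNode and BiquadFilterNode (the style names and ramp times here are mine, not the site's actual code):

    // Pick a random entry style for a layer and schedule the ramps.
    function enterLayer(ctx, gain, filter, seconds = 8) {
      const style = ['start', 'lowpass', 'volume'][Math.floor(Math.random() * 3)];
      const now = ctx.currentTime;
      if (style === 'start') {
        gain.gain.setValueAtTime(1, now);           // just switch it on
      } else if (style === 'lowpass') {
        gain.gain.setValueAtTime(1, now);
        filter.type = 'lowpass';
        filter.frequency.setValueAtTime(60, now);   // start muffled...
        filter.frequency.exponentialRampToValueAtTime(16000, now + seconds);
      } else {
        gain.gain.setValueAtTime(0.0001, now);      // ...or fade up from near-silence
        gain.gain.exponentialRampToValueAtTime(1, now + seconds);
      }
    }

Exits work the same way with the ramps reversed.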

There's also a selection of texture/field recording samples which are played over each transition – a random section of the sample is selected and then looped, and filtered in and out as the song changes.
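That slice-and-loop trick is straightforward with an AudioBufferSourceNode; here's a minimal sketch, assuming buffer is a decoded AudioBuffer (the slice length and filter sweep are made up):

    // Loop a random slice of a texture sample and filter it in.
    function playRandomSlice(ctx, buffer, sliceSeconds = 8) {
      const src = ctx.createBufferSource();
      src.buffer = buffer;

      // Pick a random start point, leaving room for the slice.
      const start = Math.random() * Math.max(0, buffer.duration - sliceSeconds);
      src.loop = true;
      src.loopStart = start;
      src.loopEnd = start + sliceSeconds;

      const filter = ctx.createBiquadFilter();
      filter.type = 'lowpass';
      filter.frequency.setValueAtTime(100, ctx.currentTime);
      filter.frequency.exponentialRampToValueAtTime(8000, ctx.currentTime + 10);

      src.connect(filter).connect(ctx.destination);
      src.start(ctx.currentTime, start);  // begin playback inside the loop region
      return src;
    }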

Sometimes, to mark changes, there's a little bit of punctuation – a randomly-selected "hit" sample, such as an air horn or cymbal. These are sent to a dub delay using a Web Audio DelayNode and BiquadFilterNode (hat tip Chris Lowis). Gotta have an air horn. No cowbell though (not this time).
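The dub delay itself boils down to a DelayNode feeding back into itself through a GainNode and a BiquadFilterNode, so each repeat comes back quieter and darker. A minimal sketch (the delay time, feedback and cutoff values are illustrative):

    function makeDubDelay(ctx, delayTime = 0.375, feedback = 0.6) {
      const input = ctx.createGain();
      const delay = ctx.createDelay(2.0);
      const tone = ctx.createBiquadFilter();
      const fbGain = ctx.createGain();

      delay.delayTime.value = delayTime; // e.g. a dotted eighth at 120bpm
      tone.type = 'lowpass';
      tone.frequency.value = 2000;       // darken each repeat
      fbGain.gain.value = feedback;      // below 1 decays; near 1 screams

      // input -> delay -> tone -> fbGain -> back into the delay
      input.connect(delay);
      delay.connect(tone);
      tone.connect(fbGain);
      fbGain.connect(delay);

      tone.connect(ctx.destination);     // wet signal out
      return input;                      // connect the air horn here
    }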

The colours of the mountains are randomised – a front and a back colour are chosen, and then these are interpolated for the middle two mountain ranges.
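A minimal sketch of that interpolation, with colours as [r, g, b] triples:

    // Pick a front and a back colour, then lerp the two middle ranges.
    function rangeColours(front, back) {
      const lerp = (a, b, t) => a.map((v, i) => Math.round(v + (b[i] - v) * t));
      return [front, lerp(front, back, 1 / 3), lerp(front, back, 2 / 3), back];
    }

    // e.g. rangeColours([20, 30, 60], [180, 160, 200])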

There's a range of styles in there - things you might call instrumental east-coast hip hop, dubstep, dark ambient electronica, subdued moody piano, breakbeat.

Look out!

If I make nice sounds when noodling or jamming, I'll add new songs periodically, so check back in :)

The Nomad is running a remix contest for his forthcoming album 7.

It seems there's nothing that gets me fired up quite like a remix comp. I heard about this on Saturday afternoon, grabbed the stems, and finished the mix that evening. Pretty happy with that given that I've not completed a track/remix in about a year!

The original is a hip-hop track featuring UK artist Lotek. I've bumped it up a bit (0.3), resulting in a dark bass-house slash vocal-breakbeat thumper. Best thing about this: the vocal. Huge fun layering a nice beat under that flow.

Anyway, have a listen to the track and let me know what you think. Also check out all the other remixes in the soundcloud group.

I finally got Traktor to sync with something else - Reaper - using MIDI Clock (running on Mac OS X, on the same machine). Here's how I did it:

  • Start Traktor.
  • Start Reaper.
  • Open Reaper Preferences, select Devices > MIDI Devices.
  • Double-click "Traktor Virtual Input" and select "use this device" and "send midi clock to this device".
  • In Traktor, open the sync panel thing at the top by clicking the metronome.
  • Click the "EXT" button - this tells Traktor to listen for MIDI Clock.

Now get a track (with an accurate grid!) loaded into a Traktor deck. In Reaper, set up something locked to the Reaper beat grid - for example, an audio or MIDI loop.

Press play in Reaper so your loop plays forever. In Traktor, the sync panel should show a tempo similar to the tempo in Reaper. You'll notice that it wavers about a bit. Press play on your gridded track and click Sync.

The Traktor deck should be roughly in sync with Reaper! (In fact, it is loose enough that it sounds a bit like a real DJ is nudging it.)

Questions:

  • If we send the MIDI over a network or MIDI connection to a different machine, will this sync well enough to bother with?
  • Can we sync two copies of Traktor (on different machines) this way?

If you have problems (or corrections), comment below so we can determine what I really did to make this work.

As mentioned in a previous post, I have long dreamed about being able to play electronic music live. I've finally done it!

Here's the audio (freely downloadable if you want to listen later, and I may put it on the beats reality podcast too).
Haszari's First Ever Truly Live Set by haszari

A Rudimentary Approach

As far as live electronic music goes, this is a somewhat primitive performance. There are four songs, each with roughly three to four parts, and there is a small number of effects/parameters available for me to tweak. I'm also relying heavily on a great-sounding dub delay effect which can feed back; I've used it to fill things out, to give things dynamics and shape, and (screaming feedback like a guitar amp) for short-term sound effects.

Also, of the four songs, two are built from short loops of (my own) previously recorded material - i.e. the notes are not being sequenced live, and there is no synthesiser producing them live.

Another limitation is that there are no stabs or manually-played samples/notes/effects; everything is in some kind of pattern which is triggered quantised. (The main reason for this is that I ran out of time.)

Limitless Possibility

But I'm really excited about what is happening in this live set. Even though I've leaned heavily on samples, I had a huge amount of control available to me live, and more importantly, it was easy and fun to perform with nothing much planned in an underground bunker (through my low-rent sound system, running the whole night off a single power outlet). What did I have control over?
  • Within each song, I had a level fader for each part, meaning the songs were mixed (in a primitive sense) live. These faders could be set up differently depending on the part, for example one synth strings part had the fader pre-reverb.
  • Each part had at least one other parameter on a knob; this could be a filter cutoff for a synth part, or a fader between two drum sounds for a drum part. 
  • All parts had one or more (looped) patterns, which could be triggered/untriggered (quantised to an appropriate interval) with a button. In the case of multiple patterns, a button allowed me to navigate up/down to select the pattern to play next time around.
  • Some parts had a triggerable variation or fill - for example, hold down a button to play a randomised (schizophrenic funk drummer) fill until the button is released.
  • Each song was assignable to a global (DJ-ish) channel - with a level fader, 3-band EQ, and a send to the global dub delay. Assigning a song also made its parts available - i.e. I could only trigger parts etc. when a song was "loaded" into a channel.
  • Although I only had two songs' worth of hardware control, the mapping was live-reassignable, so I could easily manage (parts of) all four songs playing at once if I wanted to.

Of course this was all implemented in handy SuperCollider. I spent a bit over a month of occasional evenings and bits of weekends developing things and jamming it out. Most of my time was spent on infrastructure - things like setting up the code to live-map a song to a hardware channel, implementing a simplistic EQ/band compressor for the channel strip, and factoring out the dub delay effect so all songs can opt in to using it.
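As an aside, the quantised triggering mentioned above boils down to only ever starting or stopping a pattern on the next boundary of some interval. SuperCollider's pattern system handles this for you, but as a rough illustration of the idea (here in JavaScript, with a hypothetical transport, purely for sketch purposes):

    // Given a transport that started at startTime (seconds), find the next
    // bar line; pattern starts/stops are scheduled there, not immediately.
    function nextQuantisedTime(now, startTime, bpm = 120, beatsPerBar = 4) {
      const secondsPerBar = (60 / bpm) * beatsPerBar;
      const barsElapsed = Math.ceil((now - startTime) / secondsPerBar);
      return startTime + barsElapsed * secondsPerBar;
    }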

What made this really exciting and fun for me is that I could treat this like a software project. I could start small, implement a simple beat that I could drop & interact with live, building in more complexity later. It felt like prototyping - sketching out a framework of how things should work, and revisiting different aspects later until I had something much more complex, organic and live up and running.

I may post again with more detail about how SuperCollider supports writing and performing like this - so comment if you want to find out more about something.


I have spent a lot of time in my life playing with different music-making software, getting excited about what can be done, and ultimately getting bogged down or frustrated. I have also spent a reasonable amount of time working on custom tools for making music. So what am I looking for? And why haven't I found it yet?

It seems that in the past ten or so years, music software and technology have become much more powerful, usable, and accessible (i.e. cheap). This, combined with the convenience and ease of digital distribution, is why Cartoon Beats came about - we are mucking around making our own music, and if we are going to be hustling around labels and DJs trying to get them to promote it, we might as well release it ourselves. So that's great, but something's still missing for me.

Producing a track and DJing it out have become completely separated for me - one involves sitting at a desk, endlessly tweaking and fiddling with a timeline UI in software; the other involves standing up, bashing buttons, waggling my arms and/or nodding my head while subtly adjusting various aspects of a fixed and unchanging recording. I crave the ability to combine all this into one activity - the power and flexibility of composition and production with the immediacy of DJing.

Alongside this I feel a huge disconnect from how I used to make music, when I was in a metal/rock band - jamming, noodling on riffs with other players for hours on end. The music was definitely not locked to any timeline, and it often evolved in front of you, even if many of the elements were pre-planned or rehearsed. I miss that! And I thought that by now (ten years later, surrounded by supercomputers and extremely affordable USB MIDI gear) it would be much easier to achieve!

So I have tried out lots of software with these kinds of goals in mind, and always found myself wanting more...

So what is it that I want?

Power & Flexibility or Expandability

I don't want a system to impose arbitrary limitations, or to obscure or prevent cutting-edge algorithms/processing from being used. A simple example: I want completely flexible routing, which most software provides these days (GarageBand doesn't; Reaper does). A more complex example: should I read about some clever DSP technique, I don't want to have to wait for someone to write a plugin, or attempt to write one myself; ideally the technique can be implemented/prototyped directly in the software (e.g. Pd contains many units which can be used as building blocks for arbitrary processing; the same technique in Reaper might require a custom plugin, or a complex mess of routing connecting many plugins). Another way of looking at this: if I happen upon an interesting technique in a tutorial or academic paper, it should be possible to apply or adapt it.

Modularity or Abstraction

This is essentially the flip side of flexibility: a complex graph of processing units, or a sequence of audio samples or notes, should be able to be treated as a single unit, with useful parameters exposed. I have found that this is a limitation of timeline-based software (e.g. Reaper or GarageBand), where a channel is the only level of grouping. You cannot, for example, sequence some audio and then treat the sequence as a unit; you have to paste it, warts and all, around the timeline to reuse it, and of course you can't easily adjust something in all the pasted copies. MIDI clips are one way around this. I also find myself wanting to package up snippets of automation data.

Visibility

By this I mean not obscuring things. VST plugins, for example, are really great units with lots of expressive power packed into them, but more often than not the actual core of what the plugin does is obscured from the user. This is especially frustrating when you have lots of plugins that do subtle variations on the same thing. I don't want these things to be black boxes - I want to be able to find out how they work. Note that this is distinct from abstraction above, which lets the user package things up and (temporarily) hide the details. Also important here is that the techniques, musical information and processing used in a track should be as easy to get out of the system as they are to put in (so you aren't locked in to the system).

Casualness or Immediacy

Traditional musical instruments and gear present a very casual kind of interface - you pick the thing up and start playing it, or connect it to a sound system and turn knobs. I want that kind of expressive power to be available to me. What I don't want is for production to be something I have to lock myself in a room for hours to do - I want it to be more like a toy, something I can involve my kids in (or at least expose them to). You know, like picking up a guitar.

Conciseness or Efficiency (or Scalability)

I have only recently realised how important this is to me. As my projects got more complex, I found I was increasingly constrained by previous decisions. With modular software such as Jive (or Pd), while it might technically be possible to play a complete live set of a few original tracks, doing so would not really be practical. This ties in with the modularity mentioned above, with being paranoid about backing up and losing work, and with the fact that my day job is writing software. A text-based file format (as in Pd), or even a text-based user interface (SuperCollider, or any one of many audio programming languages), is a huge advantage for organising the content in a project. At the other end of the scale, a binary file format that folds in MIDI information, audio samples, etc. is not ideal.

So I'm delighted to announce, I have found a system that appears to do well in all these respects - SuperCollider. In future blogs I will go into why I am so happy with SuperCollider, and what I have been doing with it - but rest assured, I have been jamming with it, playing with it as a musical/audio toy, experimenting with the low-level nuts and bolts of audio processing, and even working towards playing live with it.

Of course this is my own idiosyncratic view, and I don't have much experience with non-free/cheap musical software. In particular, Ableton Live, AudioMulch and energyXT all seem potentially very useful. So feel free to chime in with your experiences...
  • What do you love about the software/system you use to make music?
  • What do you hate?
  • How do you (or how would you like to) perform electronic music live?
  • What do you wish your system could do?

I have been wanting to make some el cheapo near-field monitors for ages. My plan was to get the cheapest drivers I could find, probably with tweeters built in (aha! car speakers!), and put them in decent-sized boxes made out of MDF.

Last weekend when we painted our bedroom I saw that the Warehouse had $20 4-inch car speakers and I was like "I am so gonna finally do this". (Luckily for me, the car speaker aisle is the same as the paint aisle.)

So this weekend I did the build in about a day: a little bit of measuring and marking yesterday, and pretty much all day today sawing, gluing and getting tired.

The boxes are glued together with No More Nails and then screwed as well. I didn't use any tables or data about what cab size would best suit the drivers; I just decided on a size that felt good and practical, while making sure there was a reasonable amount of internal volume.

Here are a couple of pictures; there are some more up on flickr.

And the best news - I am completely astounded by how they sound! Exactly what I was hoping for, enough range to produce on, they sound good without having to be turned up too loud, and much clearer highs than any of my other speakers.


Wahoo!

Radio 1's web stream has for a long time been intermittently lo-fi, even though the actual stream format is nice-enough 128k MP3, i.e. 44.1kHz stereo. More often than not (and pretty much permanently) the audio quality was in fact 22kHz mono, i.e. shithouse, tragically being re-streamed at 128k (oh the waste... oh the bits...).

So, since the Energy Flash podcast relies on this stream, the problem was one dear to my heart. For a lil while I had been emailing, attempting to suss it out, get to the bottom of it, or at least keep hassling until someone figured it out.

That didn't get anywhere, but last week I noticed that the tech guy hangs out on GChat sometimes, so I decided to keep banging on about it in an attempt to really get it fixed. AND WE DID IT!

The culprits:
R1 uses RecALL Pro for logging purposes, recording everything broadcast on the station at low quality. Running on the same machine are WinAMP and a Shoutcast server powering the web stream.

When RecALL Pro starts up recording, it requests a lo-fi audio stream from the antiquated (and I think deprecated) MS ACM (Audio Compression Manager). I believe this results in subsequent requests to the audio input (particularly the one made by WinAMP) being downgraded to low quality. This makes sense, as MS ACM (as I see it) is an older, lower-level technology that was probably designed when only one app would even think about accessing the soundcard.

So the solution was to start up WinAMP first, and maybe (I wasn't there!) tweak the quality settings (still low quality) used in RecALL Pro.

Get on with it then - have a listen to the stream or check out the podcast! The water's fine, come on in...

I've entered a contest to remix Deepcentral's "Is It Real", a progressive/eastern-euro trance track.

I need your help - listen to the remix, let me know what you think, and vote & comment on the site!
Visit the label site to have a listen:
http://contest.e-motionsounds.com/index/userdetail/iduser/315

If you have trouble with their player or site, just head on over to the Cartoon Network DJs myspace - the track is the first one in the player (and why not check out the other snippets while you're there):
http://myspace.com/cartoonnetworkdjs
(and/or just complain bitterly to me about the inadequacy of trying to listen to music in shitty little flash/js custom web audio players).

My remix is a deep progressive chugger with a bassline that is a bit of a nod to early Dirty South. I’m quite pleased with it.

Please post your comments here, at the contest site, on myspace or facebook, or by email.

Not quite all made with free software, this one - more like made with all the software I've tried out recently. This was a three-and-a-half-platform process.

There's a bit of a story so if you have the time I'll tell you all about it...

  • chorded out on guitar

  • prototyped in seq24 & General MIDI on an old Pentium II running Linux/Debian/64Studio

  • "shit, perhaps these vocals I just randomly downloaded for this comp might work with these chords"

  • re-prototyped in GarageBand and edited & combined with Deepcentral parts

  • chugger bassline added in Reaper (demo) using Phadiz VST plus some bleeps with a VST theremin

  • (and finally, the free/open-source part) mixed and, more importantly, low-end sidechain compressed (I have to do this to everything now) using SC3, jack, jack-rack and Ardour, running in 64Studio of course (on my now quite senior Athlon64 desktop - needs RAM, needs RAM)

  • and mastered in good ol' Audacity


Cheers - see you at Pop this Saturday for some cocktails + banter + beats...