Haszari Live at the Library

As part of NZ Music Month I was invited to play in the Dunedin Public Library. Because I love libraries, and this one (a marvellous concrete construction) in particular, I jumped at the chance! And of course I'm generally obsessed with the idea of playing electronic music live.

Cue a month or so of fervent, anxious work getting a set together. Because this was a 20-minute slot, I felt I could pull out all the stops and make it as complicated as I wanted. Over the last wee while I've been making lots of little loops, so I had a bunch of material to play with.

The first time through (I had two slots booked - a Sunday morning and a Thursday after work), I hit a technical problem and had to play a (boring old, not even live, mate) DJ set instead. In the intervening time I dusted off a custom nodejs/websockets OSC UI to replace the bit that didn't work (studiomux, which provides OSC over the Lightning cable). This worked great, turning the iPad into a wireless pattern-trigger controller.

On the laptop I used SuperCollider as a MIDI pattern sequencer and also to play back texture samples. This all fed into Logic Pro X where the synthesis, effects and mix happened (with lots of params assigned to hardware knobs).

Here's the recording - great background music for reading perhaps? Hope it's as much fun to listen to as it was to put together :)

Tracklist:

  1. Like So (live dub)
  2. Maenyb (live dub)
  3. Nevozo (live dub)
  4. Janura Crossing (live dub)

One of the many neat little touches in Figure (a fun music app/toy for iPhone/iPad/iOS) is how it names documents.

Instead of using the typical "Untitled 1" (or similar) approach, it generates a nonsense word. I like this because you're not forced to think up a name, but you still get a "meaningful" name/handle to use to refer to your lil song doodle thing. Examples: Dukadygo, Hudolyka, Tejugy.

I really liked this idea, and wanted to use it in SuperCollider, so I wrote a little String extension to generate pseudowords.

+ String {
  // Generate a random "word" between 2-5 syllables.
  * randomWord { | minSyllables=2, maxSyllables=5 |
    var
    consonants = "bcdfghjklmnpqrstvwxyz",
    vowels = [
      'a', 'e', 'i', 'o', 'u'
    ],
    longvowels = [
      'ee', 'oo'
    ],
    diphthongs = [
      'ae', 'ai', 'ao', 'au',
      'ea', 'ei', 'eo', 'eu',
      'ia', 'ie', 'io', 'iu',
      'oa', 'oe', 'oi', 'ou'
    ],
    word = Array.fill(rrand(minSyllables, maxSyllables).round, {
      var syllableMode = 20.rand;
      case
      // 10% chance of a diphthong vowel
      {syllableMode >= 18} {consonants.choose ++ diphthongs.choose}
      // 10% chance of a long vowel
      {syllableMode >= 16} {consonants.choose ++ longvowels.choose}
      // 5% chance of no vowel!
      {syllableMode == 6} {consonants.choose}
      // otherwise "typical" syllable (75% chance)
      {syllableMode >= 0} {consonants.choose ++ vowels.choose}
      ;
    }).flatten.join;
    ^word
  }
}

I'll be using this in SuperCollider for naming little MIDI patterns, song sections etc., as opposed to beat, bass1, bridgechords etc.

Other potential applications of this concept:

  • blog post 'slug' - the url key for a post (coming soon to drongo)
  • names of characters, technologies, plants or animals in creative writing
  • name for your new company/brand.
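The SuperCollider extension above ports easily to other languages for uses like these. Here's a Python sketch of the same syllable scheme (function and constant names are mine, not from any particular library):

```python
import random

CONSONANTS = "bcdfghjklmnpqrstvwxyz"
VOWELS = ["a", "e", "i", "o", "u"]
LONG_VOWELS = ["ee", "oo"]
DIPHTHONGS = ["ae", "ai", "ao", "au", "ea", "ei", "eo", "eu",
              "ia", "ie", "io", "iu", "oa", "oe", "oi", "ou"]

def random_word(min_syllables=2, max_syllables=5):
    """Generate a pronounceable pseudoword of 2-5 syllables (by default)."""
    syllables = []
    for _ in range(random.randint(min_syllables, max_syllables)):
        mode = random.randrange(20)
        if mode >= 18:    # 10% chance of a diphthong vowel
            syllables.append(random.choice(CONSONANTS) + random.choice(DIPHTHONGS))
        elif mode >= 16:  # 10% chance of a long vowel
            syllables.append(random.choice(CONSONANTS) + random.choice(LONG_VOWELS))
        elif mode == 6:   # 5% chance of no vowel!
            syllables.append(random.choice(CONSONANTS))
        else:             # otherwise a "typical" consonant+vowel syllable (75%)
            syllables.append(random.choice(CONSONANTS) + random.choice(VOWELS))
    return "".join(syllables)
```

For a blog slug you'd just call `random_word()` and check it against existing post keys.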

As mentioned in a previous post, I have long dreamed about being able to play electronic music live. I've finally done it!

Here's the audio (freely downloadable if you want to listen later, and I may put it on the beats reality podcast too).
Haszari's First Ever Truly Live Set by haszari

A Rudimentary Approach

As far as live electronic music goes, this is a somewhat primitive performance. There are four songs, each has approximately 3-4 parts, and there are a small number of effects/parameters available for me to tweak. Also I'm relying heavily on a great-sounding dub delay effect which can feed back; I've used it to fill things out, to give things dynamics and shape, and (with screaming feedback, like a guitar amp) for short-term sound effects.
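For the curious, the core of a feedback dub delay is just a delay line whose output is scaled and fed back into itself; push the feedback amount towards 1.0 and it starts to scream. A bare-bones pure-Python sketch (no filtering in the feedback path, which a real dub delay would have; function and parameter names are mine):

```python
def dub_delay(samples, delay, feedback=0.6, mix=0.5):
    """Feedback delay line: `delay` is in samples, `mix` is the wet level."""
    buf = [0.0] * delay   # circular delay buffer
    out = []
    for i, x in enumerate(samples):
        j = i % delay
        echo = buf[j]                 # what went in `delay` samples ago
        out.append(x + mix * echo)    # dry signal plus the echo
        buf[j] = x + feedback * echo  # feed input + scaled echo back in
    return out
```

Feeding it a single impulse shows the characteristic decaying echo train at multiples of the delay time.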

Also, of the four songs, two are built from short loops of (my own) previously recorded material - i.e. the notes are not being sequenced live, and no synthesiser is producing them live.

Another limitation is that there are no stabs or manually-played samples/notes/effects; everything is in some kind of pattern which is triggered quantised. (The main reason for this is I ran out of time.)
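Quantised triggering of this sort boils down to scheduling every pattern start or stop on the next beat boundary. A tiny Python sketch of the arithmetic (function name is mine):

```python
import math

def next_quantised_beat(now_beats, quant=4):
    """Return the next multiple of `quant` beats at or after `now_beats`.

    Press the trigger button at beat 9.5 with quant=4 and the pattern
    actually starts at beat 12 - everything stays locked to the grid.
    """
    return math.ceil(now_beats / quant) * quant
```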

Limitless Possibility

But I'm really excited about what is happening in this live set. Even though I've used samples heavily, I had a huge amount of control available to me live, and more importantly, it was easy and fun to perform with no plan in an underground bunker (through my low-rent sound system, running the whole night off a single power outlet). I had a lot of flexibility; nothing much was planned. What did I have control over?
  • Within each song, I had a level fader for each part, meaning the songs were mixed (in a primitive sense) live. These faders could be set up differently depending on the part, for example one synth strings part had the fader pre-reverb.
  • Each part had at least one other parameter on a knob; this could be a filter cutoff for a synth part, or a fader between two drum sounds for a drum part. 
  • All parts had one or more (looped) patterns, which could be triggered/untriggered (quantised to an appropriate interval) with a button. In the case of multiple patterns, a button allowed me to navigate up/down to select the pattern to play next time around.
  • Some parts had a triggerable variation or fill - for example, hold down a button to play a randomised (schizophrenic funk drummer) fill until the button is released.
  • Each song was assignable to a global (DJ-ish) channel - with a level fader, 3-band EQ, and a send to the global dubdelay. Assigning a song also made its parts available - i.e. I could only trigger parts etc when a song was "loaded" into a channel.
  • Although I only had two songs' worth of hardware control, this was live-mappable and I could easily manage (parts of) all four songs playing at once if I wanted to.
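To make the channel model above concrete, here's a minimal Python sketch of the "load a song into a channel" idea (class and field names are mine, not from the actual SuperCollider code):

```python
# Hypothetical sketch: a DJ-ish channel that a song can be live-mapped onto.
class Channel:
    def __init__(self, name):
        self.name = name
        self.song = None        # nothing loaded yet
        self.level = 1.0        # channel fader
        self.delay_send = 0.0   # send to the global dub delay

    def load(self, song):
        # Assigning a song to a channel makes its parts available for triggering.
        self.song = song

    def available_parts(self):
        return [] if self.song is None else list(self.song["parts"])

channel = Channel("A")
channel.load({"name": "Like So", "parts": ["beat", "bass", "strings"]})
```

The key constraint from the set is captured here: parts can only be triggered once a song is "loaded" into a channel.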
Of course this was all implemented in handy SuperCollider. I spent a bit over a month of occasional evenings and bits of weekends developing things and jamming them out. Most of my time was spent on infrastructure - things like setting up the code to live-map a song to a hardware channel, implementing a simplistic EQ/band compressor for the channel strip, and factoring out the dub-delay effect so all songs can opt in to using it.

What made this really exciting and fun for me is that I could treat this like a software project. I could start small, implement a simple beat that I could drop & interact with live, building in more complexity later. It felt like prototyping - sketching out a framework of how things should work, and revisiting different aspects later until I had something much more complex, organic and live up and running.

I may post again with more detail about how SuperCollider supports writing and performing like this - so comment if you want to find out more about something.


I have spent a lot of time in my life playing with different music-making software, getting excited about what can be done, and ultimately getting bogged down or frustrated. I have also spent a reasonable amount of time working on custom tools for making music. So what am I looking for? And why haven't I found it yet?

It seems in the past ten or so years, music software and technology has become much more powerful, usable, and accessible (i.e. cheap). This, combined with the convenience and ease of digital distribution, is why Cartoon Beats came about - we are mucking around making our own music, and if we are going to be hustling around labels and DJs trying to get them to promote our music, we might as well release it ourselves. So that's great, but something's still missing for me.

Producing a track and DJing it out have become completely separated for me - one of them involves sitting at a desk, endlessly tweaking and fiddling with a timeline UI in software; the other involves standing up, bashing buttons, waggling my arms and/or nodding my head while subtly adjusting various aspects of a fixed and unchanging recording. I crave the ability to combine all this into one activity - the power and flexibility of composition and production with the immediacy of DJing.

Alongside this I feel a huge disconnect between this and how I used to make music, when I was in a metal/rock band - jamming, noodling on riffs with other players for hours on end. The music was definitely not locked to any timeline, and often evolved in front of you, even if many of the elements were pre-planned or rehearsed. I miss that! And I thought that by now (ten years later, surrounded by supercomputers and extremely affordable USB MIDI gear) it would be much easier to achieve!

So I have tried out lots of software with these kinds of goals in mind. And I always found myself wanting more...

So what is it that I want?

Power & Flexibility or Expandability

I don't want a system to present arbitrary limitations or obscure or prevent cutting edge algorithms/processing from being used. A simple example - I want completely flexible routing, which most software provides these days (GarageBand doesn't; Reaper does). A more complex example - should I read about some clever DSP technique, I don't want to have to wait for someone to write a plugin, or attempt to write one myself; ideally the technique can be implemented/prototyped directly in the software (e.g. Pd contains many units which can be used as building blocks to build up arbitrary processing; the same technique in Reaper might require a custom plugin, or a complex mess of routing connecting many plugins). Another way of looking at this: if I happen upon some interesting technique in a tutorial or academic paper, it should be possible to apply/adapt the technique.

Modularity or Abstraction

This is essentially the flip side of flexibility - a complex graph of processing units, or a sequence of audio samples or notes, should be able to be treated as a single unit, with useful parameters exposed. I have found this to be a limitation of timeline-based software (e.g. Reaper or GarageBand) - a channel gives you only a single level of grouping. You cannot, for example, sequence some audio and then treat the sequence as a unit; you have to paste it, warts and all, around the timeline to reuse it, and of course you can't easily adjust something in all those pasted copies. MIDI clips are one way around this. I also find myself wanting to package up snippets of automation data.
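What I'm wishing for is easy to sketch: a sequence as a single object with exposed parameters, so placements reference it rather than copy it. A hypothetical Python illustration (names are mine):

```python
# Hypothetical sketch: a note sequence as one reusable unit with a parameter.
class Clip:
    """A clip of MIDI note numbers, defined once and placed many times."""
    def __init__(self, notes):
        self.notes = notes

    def render(self, transpose=0):
        # An exposed parameter: each placement can render the shared
        # material differently without duplicating it.
        return [n + transpose for n in self.notes]

riff = Clip([60, 63, 65])
```

Edit `riff.notes` once and every future placement picks up the change - the opposite of pasting copies around a timeline.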

Visibility

By this I mean not obscuring things. VST plugins for example, are really great units with lots of expressive power packed into them; but more often than not the actual core of what the plugin does is obscured from the user. This is especially frustrating when you have lots of plugins that do subtle variations on the same thing. I don't want these things to be a black box - I want to be able to find out or understand. Note that this is distinct from abstraction above, which allows the user to package things up and (temporarily) obscure the details. Also important here is that the techniques, musical information and processing used in a track should be as easy to get out of the system as they are to put in (so you aren't locked in to the system).

Casualness or Immediacy

Traditional musical instruments and musical gear have presented a very casual kind of interface - you pick the thing up and start playing it, or connect it to a sound system and turn knobs. I want this kind of expressive power to be possible or available to me. What I don't want is for production to be something that I have to lock myself in a room for hours to do - I want it to be more like a toy, and something that I can attempt to involve my kids in (or expose them to). You know, like picking up a guitar.

Conciseness or Efficiency (or Scalability)

I have only recently realised how important this is to me. As my projects got more complex, I found I was more and more constrained by previous decisions. With modular software such as Jive (or Pd), while it might technically be possible to play a complete live set of a few original tracks, doing so would not really be feasible. This ties in with the modularity mentioned above, with being paranoid about backing up/losing work, and with the fact that my day job is writing software. A text-based file format (as in Pd), or even a text-based user interface (SuperCollider, or any one of many audio programming languages), is a huge advantage from the point of view of organising the content in a project. At the other end of the scale, a binary file format that folds in MIDI information, audio samples etc. is not ideal.

So I'm delighted to announce, I have found a system that appears to do well in all these respects - SuperCollider. In future blogs I will go into why I am so happy with SuperCollider, and what I have been doing with it - but rest assured, I have been jamming with it, playing with it as a musical/audio toy, experimenting with the low-level nuts and bolts of audio processing, and even working towards playing live with it.

Of course this is my own idiosyncratic view, and I don't have much experience with musical software that isn't free or cheap. In particular, Ableton Live, AudioMulch and energyXT all seem potentially very useful. So feel free to chime in with your experiences...
  • What do you love about the software/system you use to make music?
  • What do you hate?
  • How do you (or how would you like to) perform electronic music live?
  • What do you wish your system could do?