An exploration in waveform-driven sequencing via Max for Live
I began studying the idea of narrative interlace as it relates to medieval poetry during my bachelor’s degree, focusing primarily on Eugene Vinaver’s notions of interlace as they apply to Sir Gawain and the Green Knight. This woven approach to meter, rhyme, narrative, and formal structure has stayed with me through the years, and it sparked an idea for driving pitch and rhythm sequencing in computer music with waveforms of varying phase. I refer to geometric waveforms shifted in cycle onset (phase shifted from 0 to 180 degrees, or sine to cosine to -sine, for example) as phase interlace, treating their phase relationships not as the typically destructive wave interference they exhibit in audio terms, but as a weaving of sounds, capable of immensely complex and varied tonal and rhythmic interactions.
The Phase Interlace sequencing is driven by a Max for Live MIDI Effect device that I pieced together over the course of a week. Max for Live devices are built using Cycling ’74’s Max/MSP/Jitter object-based programming environment, connecting number, signal, and graphic generators and effects together in completely custom patches. These open inside of Live to interact with Live’s own software instruments, audio, and MIDI environment.
My device consists of four geometric waveform oscillators, each offering four waveform types: sine, triangle, sawtooth, and a drawn wave, for which the user can create a custom waveshape in a buffer via the mouse.
The Max patcher works by scaling the bipolar or unipolar signal values from the oscillators (-1.0 to 1.0, or 0.0 to 1.0, as floating-point values) to a usable MIDI note range, from 36 (C2) to 96 (C7). The oscillators can vary in cycle speed from 0.01 Hz (as an LFO) to audible frequencies up to 200 Hz, limits that are arbitrary and that I chose for this application. The phase of each sine oscillator is variable from 0.01 to 1.0. While the best results come from lower frequencies (below 2 Hz, typically), the oscillators can also be synced to Live’s master transport in note subdivisions from 32nd notes to whole notes at tempo.
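As a sketch of the core scaling step described above (the function name and parameters are mine for illustration, not actual Max objects), a bipolar oscillator sample can be folded into the device’s 36–96 note range like this:

```python
def signal_to_midi_note(value, lo=36, hi=96, bipolar=True):
    """Scale one oscillator sample to an integer MIDI note number.
    Bipolar signals (-1.0 to 1.0) are first folded into 0.0 to 1.0;
    unipolar signals (such as the sawtooth) are used as-is."""
    if bipolar:
        value = (value + 1.0) / 2.0
    return round(lo + value * (hi - lo))

# A sine trough maps to 36 (C2), a crest to 96 (C7),
# and the zero-crossings land near the middle of the range:
signal_to_midi_note(-1.0)  # 36
signal_to_midi_note(0.0)   # 66
signal_to_midi_note(1.0)   # 96
```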
The throughput of MIDI notes is rate-limited by a Max object called speedlim, which can create more variation in a pattern by taking fewer snapshots of the waveform, leading to fewer, more erratic MIDI notes. The velocity of each note is subtly varied by the first oscillator, scaled to MIDI velocities from 70 to 100; this softens the ostinato effect and injects a small amount of variation by default. All of the notes are sent via the noteout object from Max to Live, where generation follows the state of Live’s transport, i.e. when Live is playing, the oscillators begin generating MIDI notes into any software instrument assigned to the device’s MIDI path in Live.
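The two behaviors described above might be sketched as follows. This is a simplified model: real-world speedlim queues and outputs the most recent message once the interval elapses, whereas this version simply drops early events; the function names are mine.

```python
def speedlim(events, min_interval_ms):
    """Simplified model of Max's speedlim object: drop any event
    arriving sooner than min_interval_ms after the last event that
    passed through. Each event is a (time_ms, value) pair."""
    passed, last_time = [], None
    for t, v in events:
        if last_time is None or t - last_time >= min_interval_ms:
            passed.append((t, v))
            last_time = t
    return passed

def velocity_from_osc(value):
    """Scale a bipolar sample from the first oscillator into the
    70-100 velocity range the device uses."""
    return round(70 + (value + 1.0) / 2.0 * 30)
```

With a longer minimum interval, fewer snapshots of the waveform survive, which is exactly what produces the sparser, more erratic patterns in the clips below.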
I. Phase Unison with an analog synth as voice
1.0 My first exploration was with waveshape interactions at fixed time intervals and fixed phase (what I will call “unison phase”). The sine wave generates a smooth curve of pitches, chromatic but not entirely symmetrical, so that each crest and trough of the sine waveform carries slightly different notes. The pitch range is from C1 to B5 in practice, as the waveform is slightly clipped in MIDI note generation.
1.1 When speedlim is adjusted for a faster throughput of notes, the crests and troughs add more chromaticism: the octave range stays the same, but with many more notes and greater symmetry.
1.2 When speedlim is adjusted for slower throughput, the sine wave takes on a random and chaotic pitch contour, as the sine wave is in essence under-sampled for pitch information / note creation, and the pitches lose all symmetry, though patterns emerge over longer periods.
1.3 Sawtooth waves provide a readily apparent waveshape, with rapid jumps in value, and a smaller octave range thanks to the unipolar signal values from 0.0 to 1.0.
1.4 Sawtooth with speedlim provides for an interesting parallel motion, more regular than the sine waves at slower throughput, much more like a conventional arpeggiator MIDI effect.
1.5 Triangle waveforms tend to truncate note ranges at higher speeds of throughput, and the waveshape isn’t visible in the MIDI notes; at lower speeds of throughput, the triangle oscillator was wider in octave range, and randomized.
1.6 The hand-drawn waveforms exhibit obviously non-linear pitch contours, and speedlim quickly truncates pitch movement at slower settings. What is interesting in these applied waveforms is how small patterns still emerge, and how the human ear still seeks and recognizes even highly unorganized and complex pitch-rhythm patterns.
II. Phase Interlacing with a Piano as voice
2.0 In this sequence, I moved from the four oscillators in phase unison, to each beginning to break in small phase amounts from the first sine wave, where the subsequent sine waves were at phase relationships of 0.2, 0.4, and 0.7, respectively. The rhythmic and pitch relations between the four oscillators become apparent, and weaving of notes begins.
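Numerically, the interlace above amounts to four sine oscillators at one frequency, offset by 0.0, 0.2, 0.4, and 0.7 of a cycle, each sampled on the same time grid and scaled to the device’s note range. The sampling interval and frequency in this sketch are illustrative, not the clip’s actual settings:

```python
import math

def osc_note(t, freq_hz, phase_cycles, lo=36, hi=96):
    """Sample a phase-shifted sine at time t (seconds) and scale
    the bipolar value to a MIDI note in [lo, hi]."""
    s = math.sin(2 * math.pi * (freq_hz * t + phase_cycles))
    return round(lo + (s + 1.0) / 2.0 * (hi - lo))

phases = [0.0, 0.2, 0.4, 0.7]  # the offsets used in this clip
# Eight snapshots, 250 ms apart, of a 0.5 Hz sine per oscillator;
# each row is one moment, each column one oscillator's note:
steps = [[osc_note(n * 0.25, 0.5, p) for p in phases] for n in range(8)]
```

Because the columns rise and fall out of step with one another, the four note streams cross and re-cross, which is the weaving effect heard in the clip.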
2.1 I switched to a granular synth, created from a chord sample, to apply the same phase interlacing. There are small looped grains of the long chord sample, and the Max for Live device is not only sending various pitches to the sampler hosting the granular sounds, but also varying the location of the looped grains across the whole sample file: scanning the grains and shifting timbres based on pitch. As the phase interlaces and diverges, so the timbres get more varied.
III. Using Drum Samples
3.0 Sine waves in phase unison are driving a drum sampler here, holding phrase samples gated to MIDI notes for short envelopes (they will only “open” on a received MIDI note-on message). By using phrase samples and not just one-shot drum samples, the sine waves create a wide range of subtle timbre shifts, even with very short note duration.
3.1 The sine waves now phase interlace, creating still-subtle variations at such short duration.
3.2 Now the waveforms are mixed, using sine, triangle, sawtooth, and drawn waves all together, with longer note duration, to “reveal” more of the phrase samples, with a remaining internal rhythmic logic, however algorithmic and un-natural.
3.3 Sine waves at a faster tempo (228 BPM, a 32.8 ms note interval), tempo-synced to 32nd notes, force the drum phrases into a widened stereo image; the rhythm is more disjointed, but still internally sound, and musically more meaningful with other musical elements underneath. A quirk in the Max device creates some sustained notes when the duration is changed in sync mode. This allows for fuller phrase-sample playback and a cacophony of rhythm towards the middle of the clip, and is fixed by bringing the note-generating oscillators back down to LFO speeds.
IV. Scaling the notes to a key and mode
4.0 I added a Scale MIDI effect after the note generating Max device, to force the notes being driven by the oscillators into a particular key and mode, for this clip, F Mixolydian. The intervallic relations become clearer, more consonant in this mode, but the wide pitch range and rapid speed of the notes still blur into more of a tonal “impression,” than a clear arpeggio. As the phase diverges, the intervals become more pronounced and the modal feel becomes clearer.
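The effect of a Scale device downstream of the generator can be modeled as snapping each generated note to a pitch class in the mode. This sketch uses the standard F Mixolydian interval set and snaps downward to the nearest in-mode pitch; Live’s actual Scale device remaps pitch classes through a user-set grid, so this is an approximation of the behavior, not its implementation:

```python
F = 5  # pitch class of F (with C = 0)
MIXOLYDIAN_STEPS = [0, 2, 4, 5, 7, 9, 10]  # mode degrees in semitones

# All pitch classes belonging to F Mixolydian: F G A Bb C D Eb
ALLOWED = {(F + step) % 12 for step in MIXOLYDIAN_STEPS}

def quantize_to_mode(note, allowed=frozenset(ALLOWED)):
    """Snap a MIDI note down to the nearest pitch whose pitch
    class belongs to the mode (F Mixolydian by default)."""
    for offset in range(12):
        if (note - offset) % 12 in allowed:
            return note - offset
    return note
```

Notes already in the mode pass through unchanged; out-of-mode notes (E, F#, and so on) are pulled onto their nearest lower scale tone, which is why the oscillators’ chromatic curves come out sounding consonant.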
4.1 By varying the root note, interesting transpositions in the same mode occur, on both the analog oscillator and the drum sampler, with the mode limiting the drum sampler’s notes as well as the pitched notes. Once the phase interlace occurs, drums and harmonies diverge in deeper and more complex ways, while remaining tonally bound to F Mixolydian.
V. All In
5.0 The analog synth, piano, and drum sampler all together, with mixed waveforms, phase interlacing, and a Scale device forcing them into C Persian mode, makes for a compelling micro-composition, and an interesting place to begin fully-formed combinations of sounds, and ever-new and unexpected waveform-driven melodies, harmonies, and rhythms.
I can see several applications in my own music for these kinds of sequences, especially as they generate melodic, harmonic, and rhythmic ideas I would not write, nor could physically play. While this is an oft-cited objection to computer music, i.e. its “non-realism,” I am drawn to the ability to interlace pitches and percussion into ornate sequences by using the very building blocks of synthesized sound. To hear a piano played by a waveform causes a collision within a listener’s sonic world between realism and the hyper-real. This kind of pitch ornamentation, sequencing of notes, and imagination of what could happen within (or without) a modal character leads to new sequences from which to build all styles of music.

Being overwhelmed by notes in rapid succession is a back door into ambient sound epistemology, where the notes blur not via spatial or legato effects, but by extreme ostinato. The initial disorientation then gives way to the listener hearing new patterns emerge over time from the smaller collections of notes, and each listener would hopefully hear different discrete patterns based on their own sonic environment and inner listening habits.

I hope to develop this patch to perform the math necessary for millisecond-based sequences at various tempos, and even to develop a hardware interface that could reflect the waveforms’ phase interactions and resulting sequences (a blank-faced, grid-based controller like the monome is ideal for this). I trust this notion of interlace because it has long been used by poets to sequence words and narrative, and it holds great promise for adapting some of the more de-humanized computer music into woven music for humans to parse in their own unique ways.