Deconstructive Clap-Trap – Generating Rhythmic Fragments

A project for the Creative Coding II Module of my MA at Goldsmiths.


Brief

My aim for this project was to create an interactive sound and screen-based installation piece, centred around the deconstruction of a live rhythm as performed by the viewer. By breaking down the original rhythmic material, in real time, into its constituent parts (rhythm fragments, individual hits and impulses), I hoped to rearrange the material to build an arhythmic, amorphous sonic structure, finding a new aesthetic coherence; if not rhythmically, then in the timbres hidden within the existing material. In conjunction with the aural form, I planned an accompanying visual component: a web camera capturing the performer's input, providing visual material to manipulate and twist in harmony with the sonic sculpture.

A secondary but equally important factor was that I wanted to set myself a challenging brief, one that would give me the opportunity to learn a great deal about methods and functions I had not encountered before. This was also why I chose to work with the openFrameworks library and C++, rather than Max/MSP, which I am familiar with, or Processing, which I am comfortable with.

How it works
As mentioned above, I wanted to learn new methods; consequently, the installation starts by using spectral flux, a measurement of how quickly the power spectrum of a signal is changing, calculated by comparing the current power spectrum against that of the previous frame (a process I had never previously come across). If the spectrum has changed sufficiently, the frame is marked as an 'onset.' Each onset's temporal data (its position relative to a global timeline, the start of its loop, and its duration) is collated in a series of arrays. The first onset is used to initialise the recording, and the last onset is scrubbed off and used as the end point of the recording, allowing the performer to stay hands-free and perform unrestricted.
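As a rough illustration of the idea (not the installation's actual code, and independent of ofxMaxim's API), the sketch below compares successive magnitude spectra and flags an onset when the summed positive change exceeds a threshold; the struct name and threshold value are my own assumptions:

#include <vector>
#include <algorithm>

// Minimal spectral-flux onset detector (illustrative sketch only).
// prevMags holds the magnitude spectrum of the previous FFT frame.
struct OnsetDetector {
    std::vector<float> prevMags;
    float threshold = 0.08f;                 // assumed value; tune by ear

    // Returns true if the new spectrum has changed enough to count as an onset.
    bool process(const std::vector<float>& mags) {
        if (prevMags.size() != mags.size()) {
            prevMags.assign(mags.size(), 0.0f);
        }
        float flux = 0.0f;
        for (size_t i = 0; i < mags.size(); ++i) {
            // Only positive changes (rises in energy) contribute to the flux.
            flux += std::max(mags[i] - prevMags[i], 0.0f);
        }
        prevMags = mags;
        return flux > threshold;
    }
};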

The collection of onsets that comprise the recording is then sent to two separate sequencing mechanisms. The first divides the length of the recording by the total number of onsets and uses ratios of this initial division to create 'chop' or cut points, at which the recording is triggered to repeat. A second sequencing mechanism, akin to a round (each new iteration of playback starts from the next onset position, resulting in the desired dynamic ebb and flow of rhythmic intervals), is superimposed over the top. Meanwhile the corresponding sample videos are slowly layered over each other, blending via their alpha levels. To complement the flowing nature of the round mechanism, I decided it would be appropriate to implement some slit scanning; as the installation reaches its predetermined end, the remaining limbs on screen become more deformed and twisted. Once the piece has performed 72 cycles of the original loop, it finishes and is ready to be played again.
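To make the two mechanisms concrete, here is a simplified sketch (the function and variable names are assumptions, not the installation's code): the chop points are ratios of the recording length divided by the onset count, while the round start position simply advances by one onset on each pass.

#include <vector>

// 1) Chop points: divide the recording length by the number of onsets and
//    use ratios of that base division as the points where playback repeats.
std::vector<float> makeChopPoints(float recordingLength, int numOnsets,
                                  const std::vector<float>& ratios) {
    std::vector<float> chops;
    float base = recordingLength / numOnsets;
    for (float r : ratios) {
        chops.push_back(base * r);           // e.g. ratios = {1, 2, 3, 5, 8}
    }
    return chops;
}

// 2) Round mechanism: each new iteration of playback begins from the next
//    onset position, so the rhythmic intervals drift with every pass.
float nextRoundStart(const std::vector<float>& onsetTimes, int iteration) {
    return onsetTimes[iteration % onsetTimes.size()];
}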

Process
I started the project by figuring out how to implement spectral flux using ofxMaxim. After a couple of weeks of coming very close, but still falling short of something workable, I discovered some great examples that my tutor Mick Grierson (Hi Mick!) had provided for an undergraduate course. After a short while I was able to work out how to progress and moved on to creating a mechanism to store all of the onset data (the by-product being that I learnt about memory allocation and pointers!). This led to a rudimentary mechanism for playing back different parts of the video based on onset location. However, it was here that I once again got stuck. Frame rate differences, rounding differences and a variable frame rate in QuickTime (not to mention lag on my six- or seven-year-old laptop) made it incredibly difficult to figure out why audio and video weren't synchronised, especially as I was able to call positions via 'keyPressed()' as a debugging method. Eventually I figured out how to correctly map the elapsed time to the format the 'ofVideoPlayer' class expects, and to implement it in the correct place.
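In essence, the fix amounted to converting an onset's time into the normalised 0-1 position range that ofVideoPlayer uses. Something along these lines (a simplified sketch under assumed variable names, not the project's exact code):

#include "ofMain.h"

// Simplified sketch: jump the video to the frame corresponding to an onset.
// ofVideoPlayer::setPosition() expects a normalised 0-1 position, so the
// onset's offset within the loop is divided by the clip's duration.
// loopStart and onsetTime are assumed to be in seconds.
void seekToOnset(ofVideoPlayer& player, float loopStart, float onsetTime) {
    float duration = player.getDuration();          // clip length in seconds
    float offset   = onsetTime - loopStart;         // time into the loop
    float pct      = ofClamp(offset / duration, 0.0f, 1.0f);
    player.setPosition(pct);                        // normalised seek
    player.update();
}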

At one point I had lofty dreams of using the model of Husserl's phenomenology of internal time-consciousness for music analysis and composition as a mechanism for generative compositional development. However, after a few hours of essentially being stuck in a feedback loop (literally and figuratively!), I decided it was best to learn to walk before running and settled on the processes I now have in place. Lastly I adjusted the draw() functions, adding the title and instructions for users to the start of the piece, as well as including horizontal image scanning and slit scanning. For these processes I found code that was too good not to borrow in the book Mastering openFrameworks: Creative Coding Demystified. Unfortunately the horizontal image scanning was a step too far for my old laptop to handle, as it was already struggling with processing the video, so it was excluded from the video demonstrations (although it can still be found in the code below).
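For readers unfamiliar with the effect, a generic slit-scan sketch looks something like the following (this is my own illustration, not the book's implementation): a short history of frames is kept, and each output row is sampled from a progressively older frame, so moving limbs smear and twist over time.

#include "ofMain.h"
#include <deque>

// Generic slit-scan sketch. Assumes every stored frame matches the output size.
std::deque<ofPixels> history;                 // most recent frame at the front
const size_t maxHistory = 120;                // assumed history depth

void pushFrame(const ofPixels& frame) {
    history.push_front(frame);
    if (history.size() > maxHistory) history.pop_back();
}

ofImage buildSlitScan(int w, int h) {
    ofImage out;
    out.allocate(w, h, OF_IMAGE_COLOR);
    if (history.empty()) return out;
    for (int y = 0; y < h; ++y) {
        // Lower rows come from older frames, producing the temporal smear.
        size_t idx = static_cast<size_t>(ofMap(y, 0, h - 1, 0, history.size() - 1));
        const ofPixels& src = history[idx];
        for (int x = 0; x < w; ++x) {
            out.setColor(x, y, src.getColor(x, y));
        }
    }
    out.update();
    return out;
}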


The next step
From here the next step is to improve my C++ coding abilities. Despite being satisfied with parts of the project, far too many of the problems I encountered took too many hours to solve and turned out to be small process errors arising from comparatively wonky logic rather than an entirely flawed conception. By the end of the project, however, my rate of mistakes was markedly lower, and my ability to troubleshoot and fix unintended consequences was greater. Although the computational challenges my master's thesis will present will no doubt be harder, having spent further time learning openFrameworks I feel more confident that I am working in the right direction. This project has also reignited my desire to delve deeper into generative rhythmic processes.