The text below started out as a synopsis for some kind of vlog progress report about how the Turns Me On video was made. To be honest, I recorded the talk but didn’t like it; but since it described the process briefly anyway, I converted it into this blog post.
The idea of this post is to give a peek behind the scenes, at the ideas and mechanics that led to the way the Turns Me On video turned out.
At the initial brainstorming, the basic problem seemed to be this question: does a video for a track about sexual excitement really need hypersexualized images of women at all? I really wanted to turn this stereotype on its head somehow. My first idea was to go to clothes shops with a friend, a young male actor, pick out skirts, high heels, sexy tights and other women’s clothes, go to the changing rooms and have another friend film him trying things on. But I was already running late finishing the main track, not to mention I wanted to make remixes and so on. So the video was on hold all this time while I was working on the music, and I felt I needed a more flexible plan. And, you know, the need for flexibility usually translates into quick do-it-yourself guerilla work. At least with me.
So, because of all these time problems, I decided to go to the shopping mall myself with a camera and shoot whatever I could, there and elsewhere: electronics (a plasma TV), the utility store (a diesel chainsaw), the supermarket, the fishmonger’s, the butcher’s, the drugstore, a shoe shop, a bookshop, cars, etc. And then also ads featuring women: images of make-up, clothes, underwear… I wanted to get on camera, in some form or another, whatever is mentioned in the lyrics.
I wrote a synopsis that juxtaposed meanings from the two parts of the lyrics: text from the first part would be underlaid with images from the second part, and vice versa. So, for example, the lyrics “high heels” written on the screen would be underlaid with shots of a chocolate brownie, the lyrics “diesel chainsaw” with shots of red fishnets, and so on… I also wrote a shooting plan that detailed what had to be shot at which location.
But when all the music for the EP was finished and sent off to mastering, I felt so late and so short on time that I wanted to speed things up. Going to the mall and shooting there suddenly seemed like too much of a hassle, so I decided to replace the needed video material with images from the internet, published under Creative Commons licences that allow reuse and derivative works. The idea was to write a program in Processing that would animate the images and lyrics, and to control that animation from my composition program, Renoise. That was the final plan I went ahead with.
Most of the music I make these days is written in a program called Renoise. It’s a very powerful piece of software based on the music tracker paradigm: the timeline is vertical, the composition is organised in so-called “patterns”, and all sonic elements at a particular point in time are laid out on one flat, easily accessible area. Musical events are entered as numbers and letters, which is why it looks kinda geeky and unintuitive, but once you get the gist of it, it’s very easy and fast to work with.
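To give a rough idea, here is a generic, simplified mock-up of a few pattern rows (not Renoise’s exact column layout): each row is a point in time, and each row holds a note, an instrument number and a volume value.

```
row   note  ins  vol
00    C-4   01   40
01    ---   --   --
02    E-4   01   20
03    OFF   --   --
```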
Renoise is able to talk to the outside world via the MIDI protocol, the traditional way analogue synthesizers and sequencers talk to each other. So in Renoise I created a silent instrument that would send out MIDI messages, and via this instrument I would control the video. The messages were picked up by an animation program I wrote in Processing.
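The receiving side takes only a few lines. Here’s a minimal sketch of that kind of listener, assuming the MidiBus library for Processing; the device index is a placeholder you’d pick from the console listing:

```processing
import themidibus.*;

MidiBus midi;

void setup() {
  size(640, 360);
  MidiBus.list();                  // print available MIDI devices to the console
  midi = new MidiBus(this, 0, -1); // placeholder: input device 0, no output device
}

void draw() {
  background(0);
}

// called by MidiBus whenever a controller change message arrives
void controllerChange(int channel, int number, int value) {
  println("CC " + number + " = " + value + " on channel " + channel);
}
```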
Processing is this great programming language designed for designers who want to explore a more algorithmic approach to design or visual art without becoming computer programmers. It is simple but powerful, open source, cross-platform and in active development. At the basic level it lets you write so-called “sketches”, little programs that do something on screen.
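For readers who have never seen one, a complete sketch can be as small as this:

```processing
// a complete Processing sketch: a translucent circle follows the mouse
void setup() {
  size(640, 360);
  noStroke();
}

void draw() {
  background(0);
  fill(255, 128);
  ellipse(mouseX, mouseY, 60, 60);
}
```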
So, for the visual part I started with an idea of two surfaces, each carrying a chosen image, with the ability to slide each image in any direction. These surfaces would be layered so that there appeared to be a straight cut through one surface, revealing the second underneath. This straight cut had to be movable in position and rotation. I then replicated this functionality to get a third and fourth surface of the same kind, except that pair of images would be masked out with typography: words from the lyrics.
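I won’t reproduce the whole sketch, but the core of the straight-cut trick can be sketched roughly like this, with the cut implemented as a rotated half-plane mask (the filenames are placeholders, and my actual code differed in the details):

```processing
PImage a, b;
PGraphics cutMask;
float cutAngle = 0.4;  // rotation of the cut line
float cutOffset = 0.0; // position of the cut across the screen

void setup() {
  size(640, 360);
  a = loadImage("image_a.jpg"); // placeholder filenames
  b = loadImage("image_b.jpg");
  a.resize(width, height);
  b.resize(width, height);
  cutMask = createGraphics(width, height);
}

void draw() {
  // white half-plane in the mask = where the top surface shows
  cutMask.beginDraw();
  cutMask.background(0);
  cutMask.noStroke();
  cutMask.fill(255);
  cutMask.translate(width/2 + cutOffset, height/2);
  cutMask.rotate(cutAngle);
  cutMask.rect(-width * 2, 0, width * 4, height * 4);
  cutMask.endDraw();

  b.mask(cutMask);   // punch the straight cut into the top image
  image(a, 0, 0);    // bottom surface
  image(b, 0, 0);    // top surface, revealed along the cut
  cutAngle += 0.002; // slow drift, just to see the cut move
}
```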
Once the functionality of the Processing sketch was in place and working, I needed to control all its aspects (the movement, the changing of images, the display of different words…) from the composition in Renoise. I connected each parameter in the sketch to a specific MIDI control, so that by changing that MIDI control I could change an aspect of the image displayed by the Processing program in real time.
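Building on the two sketches above, the wiring looked something like this; the CC numbers and value ranges here are made up for illustration, and the real mapping had many more parameters:

```processing
// called by MidiBus; routes incoming CCs to the sketch's parameters
void controllerChange(int channel, int number, int value) {
  switch (number) {
    case 1: // hypothetical CC 1: rotation of the cut
      cutAngle = map(value, 0, 127, 0, TWO_PI);
      break;
    case 2: // hypothetical CC 2: position of the cut
      cutOffset = map(value, 0, 127, -width/2, width/2);
      break;
    // image changes and word selection were mapped the same way
  }
}
```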
All that was left to do was to go through all the bars, beat after beat, and animate the images and lyrics. Now, that took much longer than I expected; it was slow and somewhat tedious. When the tracks for the EP came back from mastering I was still grinding through the video, setting movements and visual compositions to each bass kick, each snare, almost every hi-hat.
But in the end it worked out. I had the real-time version of the video, which ran well on my own screen, but a real-time program can’t itself be uploaded to YouTube, for example. I still needed to capture it as a video recording with a screen-capture program. I then brought the recording into a video editor, added some titles, replaced the music track, exported it, and that was the final version, ready to go public.