Despite the fact that I really want to move away from WordPress, I need to write this down here.
The other day I did a heavy update of my ‘workstation’ machine. I jumped several Ubuntu releases and, surprisingly, most things ended up working the way they did before. SuperCollider was unfortunately not one of them. So I decided to recompile from source, using the latest release from the SuperCollider download page (3.12.1) and my notes on compiling for Ubuntu from 2019.
I have youtube-dl installed on Linux as a “binary”. I’m using quotes because it’s not really a binary but a zipped executable. I needed to patch my youtube-dl with a patch a user provided in a GitHub issue. So, here it goes.
The main installation method for youtube-dl, according to https://github.com/ytdl-org/youtube-dl/blob/master/README.md#installation, is to fetch the latest build into /usr/local/bin/ with curl (or wget) and mark it executable.
You end up with a single executable file in /usr/local/bin/ and that’s totally fine. Updating youtube-dl to the latest version is usually done with
sudo youtube-dl -U
But how do you patch this? Contrary to what you might expect, it’s possible to simply unpack the executable and patch it: that executable is actually a zip file. I first create a folder for hacking and extract the zip into it:
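The reason a plain unzip works on that file is that the zip central directory sits at the end of the file, so zip tools simply ignore the prepended shebang line. A self-contained Python sketch of this (toy file and member names, not the real youtube-dl):

```python
import io, os, tempfile, zipfile

# Toy stand-in for the youtube-dl file: a shebang line followed by zip
# data (names here are illustrative, not the real contents).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("youtube_dl/extractor/francetv.py", "# extractor code\n")

workdir = tempfile.mkdtemp()
exe = os.path.join(workdir, "youtube-dl")
with open(exe, "wb") as f:
    f.write(b"#!/usr/bin/env python\n" + buf.getvalue())

# The zip central directory sits at the END of the file, so zipfile
# (like unzip) tolerates the prepended shebang and extracts normally.
with zipfile.ZipFile(exe) as zf:
    zf.extractall(os.path.join(workdir, "src"))

extracted = os.path.join(workdir, "src", "youtube_dl",
                         "extractor", "francetv.py")
print(os.path.exists(extracted))   # True
```

The same property is why running unzip directly on the installed /usr/local/bin/youtube-dl works.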
I downloaded a patch that enables some franceTV videos to be downloaded again (they changed their API) into a file ~/.local/src/youtube-dl/youtube-dl-patch.txt, so to patch the franceTV extractor, francetv.py, I do
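After patching francetv.py in the extracted tree, the file has to be put back together: zip up the tree and prepend the shebang again. A Python sketch of that repacking (toy paths, and note the real tree also carries a top-level __main__.py and more files):

```python
import os, tempfile, zipfile

# Toy extracted tree standing in for the real youtube-dl sources
# (illustrative names only).
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "youtube_dl", "extractor"))
with open(os.path.join(src, "youtube_dl", "extractor", "francetv.py"), "w") as f:
    f.write("# patched extractor\n")

# Zip the tree, then write shebang + zip bytes as the new executable.
exe = os.path.join(src, "youtube-dl")
zip_path = exe + ".zip"
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(os.path.join(src, "youtube_dl")):
        for name in files:
            path = os.path.join(root, name)
            zf.write(path, os.path.relpath(path, src))

with open(exe, "wb") as out:
    out.write(b"#!/usr/bin/env python\n")
    with open(zip_path, "rb") as z:
        out.write(z.read())
os.chmod(exe, 0o755)

# The result is again a shebang-plus-zip file that zip tools can read.
print(zipfile.ZipFile(exe).namelist())
```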
This is a quick guide to setting up DKIM and DMARC records on your Postfix-based Ubuntu email server. I followed closely a guide by Julian Kunkel and am reproducing many steps from his post here. Despite the fact that he wrote his guide because he found typos in other guides, his guide had some too.
I’ve been working on an already running Ubuntu server (xenial, 16.04) with a working Postfix, which I originally set up following that Ars Technica article.
In these instructions I will use the placeholder <DOMAIN> for a fully qualified domain such as emanat.si. You also have to decide on a selector, which basically provides a means to use multiple keys; I use <SELECTOR> as its placeholder (in my case it is “mail”). <EMAIL> is the address where you want to receive error reports (in the case of DMARC).
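For orientation, the two DNS TXT records you end up publishing look roughly like this (a sketch using the placeholders above; the p= value is the DKIM public key that opendkim-genkey produces, and the DMARC policy and reporting address are your choice):

```
<SELECTOR>._domainkey.<DOMAIN>. IN TXT "v=DKIM1; k=rsa; p=<PUBLIC-KEY>"
_dmarc.<DOMAIN>.                IN TXT "v=DMARC1; p=quarantine; rua=mailto:<EMAIL>"
```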
The code is in progress and already available for the public at my new git space git.tmp.si – specifically: git.tmp.si/luka/Rhizosphere.
I’m still considering doing a rehearsal live-performance stream tomorrow, Tuesday, when the live performance should’ve taken place at Steklenik. Certainly Opus audio, but possibly also a WebM a/v stream via self-hosted Icecast2, YouTube, Picarto, Twitch, or all at once.
Today I did some testing with streaming from home.
First you have a normal PDF with A4 pages, something you printed from Wikipedia using the ‘to PDF’ option. Now you want to print it at half size, A5, but in order to use duplex printing and fold the result into a booklet, the pages have to be rotated and rearranged. On the Linux command line this is easy using Ghostscript and related tools.
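One common pipeline uses pdf2ps, then psbook to reorder the pages into signature order, psnup -2 to put two pages per sheet, and ps2pdf to convert back. The reordering psbook performs can be sketched in a few lines of Python (my own illustration, not code from the post):

```python
def booklet_order(n):
    """Page order for 2-up booklet imposition of an n-page document.

    Pads to a multiple of 4 (0 marks an inserted blank page), then pairs
    pages from the outside in, alternating front and back of each sheet.
    """
    pages = list(range(1, n + 1))
    while len(pages) % 4:
        pages.append(0)                        # blank filler page
    order = []
    lo, hi = 0, len(pages) - 1
    front = True
    while lo < hi:
        if front:
            order += [pages[hi], pages[lo]]    # front: last, first
        else:
            order += [pages[lo], pages[hi]]    # back: second, second-to-last
        lo, hi, front = lo + 1, hi - 1, not front
    return order

print(booklet_order(8))   # [8, 1, 2, 7, 6, 3, 4, 5]
```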
Print the resulting file with the duplex option set to short-edge binding. You will end up with foldable sheets that don’t nest into one another. Fold them, stack them on top of each other, and bind them with a stapler or thread.
Fascination with chaos mathematics: among other things via a video by ‘Veritasium’ with the (clickbaity?) title “This equation will change how you see the world”, but also, and earlier, via chaotic and stochastic SuperCollider UGens, which make me wonder what the ‘logistic map’ is, and the Lorenz attractor, etc.
The video references a book “Chaos: Making a New Science” by James Gleick.
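The logistic map from that video is just x[n+1] = r * x[n] * (1 - x[n]); a few lines of Python (my own quick illustration, not from the post) are enough to watch its sensitivity to initial conditions:

```python
def logistic_orbit(r, x0, n):
    """Iterate the logistic map x -> r * x * (1 - x) and return the orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# In the chaotic regime (r around 3.9), two orbits starting a millionth
# apart soon bear no resemblance to each other.
a = logistic_orbit(3.9, 0.200000, 30)
b = logistic_orbit(3.9, 0.200001, 30)
print(a[-1], b[-1])
```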
Trying to review what has been done and researched since the last log entry.
About 8 days ago I realised I need to understand the underlying concepts of patterns in SuperCollider in order to be able to first imagine, and then know where and how to start laying down, the first prototype/s for Jitakami. I’ve been at this starting point for a very long time and it’s quite frustrating to be constantly conceptualizing while nothing comes out of it.
I worked through the whole Streams-Patterns-Events SuperCollider tutorial. It did open up and solidify some concepts, but I’m painfully aware of how important it is to keep coding in SC on a daily basis. And on a side note: I actually think I should stick with one programming language for a while, master it, write the whole sound engine plus GUI in it first, and only then start expanding it to visuals.
I have been writing, testing, and trying hard to move off the dead point for the last two days. The current question is: what data should an ‘agent’ actually output, send on, and execute as an algorithm? How should ‘agents’ be shaped, what kind of information do they hold, and what do they output? I started to write down what kinds of libraries we (should) work with (an inventory): libraries of instruments, libraries of duration patterns… which other patterns are important, how to classify them, and how to treat them, so that I can create a simple specification.
I guess there could be a small library of basic instruments, like drum machines (hi-hats, snares, kicks, claps, percussion, glitches), bass generators/synths, pads, and leads. This is classification according to the function an instrument plays in a song. Let’s start simple.
The other sort of library should hold duration/trigger patterns: 8 or 16 bars of beat patterns, with about 10 variations that could be applied to the beat machine, for example. And then melodic patterns for basslines, melodic patterns for leads, and the same for pads.
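As plain data, the inventory above could be sketched like this (illustrative names only, my own sketch and not a real spec):

```python
# Instrument libraries keyed by song function, trigger patterns as
# 16-step strings, melodic patterns as lists of scale degrees.
instrument_library = {
    "drum_machine": ["kick", "snare", "hihat", "clap", "percussion", "glitch"],
    "bass":         ["sub_synth", "acid_synth"],
    "pads":         ["warm_pad"],
    "leads":        ["square_lead"],
}

trigger_patterns = {
    # 'x' = trigger, '.' = rest, one character per 16th-note step
    "four_on_floor": "x...x...x...x...",
    "offbeat_hats":  "..x...x...x...x.",
}

melodic_patterns = {
    "bassline_1": [0, 0, 7, 5, 0, 0, 3, 2],   # scale degrees
}

def steps(name):
    """Expand a named trigger pattern into a list of step indices."""
    return [i for i, ch in enumerate(trigger_patterns[name]) if ch == "x"]

print(steps("four_on_floor"))   # [0, 4, 8, 12]
```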
This needs to be laid down in a simple specification and that’s our blueprint version 0.1 (or 2019-9).
Tomorrow I’m leaving for Linz by train, with a bike. The program is packed and the one thing I know I’m looking forward to is Ryoichi Kurokawa, especially his live a/v performance; he made a new one in 2019. He also has an installation, a silent one, at Ars Electronica, and there are two things I’m wondering about: first, what and how did he incorporate A.I. into his work to be featured in the show, and second, why is his a/v language so dear to me. It is extremely powerful to me and something I “understand” (in an affective sense) very well. His artistic language feels very clear and domestic. When I watch his work it is … I don’t know how to describe it.
A four-hour session started as a quick 20-30 minute regrouping (a synonym for checking on the plan so far and readjusting it). I haven’t looked into all the stuff I have laid out in Asana (all the online courses and reading I’m behind on); I wanted to refresh my memory and plans for the Jitakami instrument project. That is why I didn’t fiddle too much with the study line.
Let me first go over what else was done today, and then come back to an evaluation of the past two months.
I worked a bit on the basic general concept that the Jitakami engine prototype should follow. The idea is that there is some kind of top-level conductor process that oversees the composition. The conductor is controlled by the user via a touchscreen GUI, and it is able to intelligently launch and control agents/workers/operators/sequencers: possibly separate algorithms or functions that in turn create actual sequences of events (probably SuperCollider’s Patterns, or Events, or Streams?) and deploy them to the concrete timeline.
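The conductor/agent split can be caricatured in a few lines (my own toy model in Python, not actual Jitakami code; in SuperCollider the agents would presumably be Patterns yielding Events from Streams):

```python
# Each agent is a generator yielding timed events; the conductor
# launches agents and merges their events onto one timeline.
def drum_agent(bars=2, steps_per_bar=8):
    """Yield (step, instrument) events for a simple kick/hihat grid."""
    for step in range(bars * steps_per_bar):
        yield (step, "kick" if step % 4 == 0 else "hihat")

class Conductor:
    """Top-level process that launches agents and merges their output."""
    def __init__(self):
        self.agents = []

    def launch(self, agent_fn, **params):
        self.agents.append(agent_fn(**params))

    def timeline(self):
        # flatten all agents' events onto one timeline, sorted by step
        return sorted(ev for agent in self.agents for ev in agent)

c = Conductor()
c.launch(drum_agent, bars=1)
t = c.timeline()
print(t[:4])   # [(0, 'kick'), (1, 'hihat'), (2, 'hihat'), (3, 'hihat')]
```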
I arrived at the question “can a function generate a pattern somehow?” and off I went into the SuperCollider help system, where I started working on the “Understanding Streams, Patterns and Events” tutorial; it took me a few hours to get through the first part (out of 7). I learned some basics about Streams, but the feeling is that I don’t know enough yet, so I think I should work on this tutorial for a few hours daily.
To go back and reflect on this last month or two, especially with regard to the installation at Kapelica and A.I.:
At the beginning of the month (August) I managed to work on a pretty difficult rephrasing of the project, kind of refocusing and trying to find a narrower path: to produce material and then rework the audio-visual side of the installation. I came up with some interesting material in the area of rare earth elements and did a lot of research, but then felt unsure how to proceed and kind of left it there. So the installation was not upgraded into another version I would be satisfied with, and today, just a few days before the closing of the exhibition, I feel like I failed to make something I liked, something that said something articulate (in art’s own way) about Artificial Intelligence.
Next week I’m leaving for Linz to be at the Ars Electronica festival, and especially its AI x Music part, but I must say I’m very frustrated by the term. It’s just too loaded with hype, and it encompasses too huge a range of disciplines and approaches to actually mean anything but a hyper-bloated phantasm.
Coming back to the studio (after kind of running away from it all to a cafe: so burdening are the timeline, the deadline, and all the delayed work!) I discover that so much more is online (and offline) about “music and A.I.” I’m overwhelmed. I will post some links here as a way to bookmark things.
Good news! I got a small working stipend from the cultural ministry. The main requirement is to follow my plan and (I think) submit a report at the end. The work/study must not be focused on a final product. Inevitably I am creating a project in the same area and direction, so the upcoming piece “INTELLIGENCE IS WHATEVER MACHINES HAVEN’T DONE YET” for the exhibition at Kapelica gallery is more or less connected to this research. I will need to reschedule the working plan from spring to fall.
Luka Prinčič: a musician, sound & media artist, engineer and dj. My sound goes from broken bass to noise, drone and sonic experiments. I'm one half of Wanda & Nova deViator, I run Kamizdat label and work at Emanat institute. I'm passionate about critical art expressions, free software, social awareness, cyberpunk, and peculiarity of contemporary human condition.
Like what you hear, see, read? Making music and art takes many hours of hard work and releasing it to the commons means less income from sales. Consider a per-release patronage at Patreon, a regular anonymous donation via LiberaPay, or paying for some free music at Bandcamp. Every single ¢ counts.