A review of what has been done and researched since the last log entry.

About 8 days ago I realised I need to understand the underlying concepts of patterns in SuperCollider in order to first imagine, and then know where and how to start laying down, the first prototype(s) for Jitakami. I've been at this starting point for a very long time, and it's quite frustrating to be constantly conceptualizing while nothing comes out of it.

I worked through the whole Streams-Patterns-Events SuperCollider tutorial. It did open up and solidify some concepts, but I'm painfully aware how important it is to keep coding stuff in SC on a daily basis. And on a side note – I actually think I should stick with one programming language for a while, master it, write the whole sound_engine+GUI in it first, and only then start expanding it to visuals.

I have been writing, testing, and trying hard to move past the dead point for the last two days. The current question is: what data should an 'agent' actually output, send, and execute as an algorithm? How to shape 'agents', what kind of information do they hold, and what do they output? I started an inventory of the libraries we (should) work with: libraries of instruments, libraries of duration patterns… which other patterns are important, how to classify them, and how to treat them so that I can create a simple specification.

I guess there could be a small library of basic instruments, like drum machines (hihats, snares, kicks, claps, percussion, glitches), bass generators/synths, pads, and leads. This classifies instruments according to their function in a song. Let's start simple.

The other 'sort of' library should be duration/trigger patterns – 8 or 16 bars of beat patterns, with about 10 variations that could be applied to a beat machine, for example. And then melodic patterns for basslines, melodic patterns for leads, and the same for pads.
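A minimal sketch of what such a duration-pattern library could look like in SuperCollider – the names (`~beatLib`, `fourFloor`, `offbeat`) and the duration values are my own placeholder assumptions, not anything from the spec yet:

```supercollider
// Hypothetical duration-pattern library: named rhythmic variations (in beats)
// that an agent could pick from and feed into a Pbind's \dur key.
(
~beatLib = (
    fourFloor: [1, 1, 1, 1],
    offbeat:   [0.5, 1, 0.5, 1, 1],
    broken:    [0.75, 0.75, 0.5, 1, 1]
);

// apply one variation for 4 cycles, using the built-in default synth
Pbind(
    \instrument, \default,
    \dur, Pseq(~beatLib[\offbeat], 4)
).play;
)
```

Melodic libraries for basslines, leads and pads could follow the same shape, just keyed to `\degree` or `\note` instead of `\dur`.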

This needs to be laid down in a simple specification and that’s our blueprint version 0.1 (or 2019-9).

Tomorrow I'm leaving for Linz by train, with a bike. The program is packed, and the one thing I know I'm looking forward to is Ryoichi Kurokawa – especially his live a/v performance. He has made a new one in 2019. He also has an installation – a silent one – at Ars Electronica, and there are two things I'm wondering about: first, what and how did he incorporate A.I. into his work to be featured in the show, and second, why is his a/v language so dear to me. It is extremely powerful to me and something I "understand" (in an affective sense) very well. His artistic language feels very clear and domestic. When I watch his work it is … I don't know how to describe it.

Jit.log #190827

Short report on today’s work:

A four-hour session started as a quick 20-30 minute regrouping – that's my synonym for checking on the plan so far and readjusting it. I haven't looked into all the stuff I have laid out in Asana (all the online courses and reading I'm behind on); I wanted to refresh my memory and plans about the Jitakami instrument project. That is why I didn't fiddle too much with the study line.

Let me first go over what was done today and then come back to an evaluation of the past two months.

I worked a bit on the basic general concept that the Jitakami engine prototype should follow. The idea is that there is some kind of top-level conductor process that oversees the composition. The conductor is controlled by the user via a touchscreen GUI. The conductor is able to intelligently launch and control agents/workers/operators/sequencers – possibly separate algorithms or functions that in turn create actual sequences of events – probably SuperCollider's Patterns (or Events, or Streams?) – and deploy them to the concrete timeline.
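The conductor/agent idea above could be sketched in SuperCollider roughly as follows – this is only an assumption about the eventual architecture, where an "agent" is a function that returns a Pattern and the conductor schedules it on a clock (`makeBeatAgent` and `~conductor` are hypothetical names):

```supercollider
// Sketch: agents are functions returning Patterns; the conductor deploys them.
(
var makeBeatAgent = { |speed = 1|
    // an agent: builds a concrete event sequence as a Pbind
    Pbind(
        \instrument, \default,              // placeholder synth
        \degree, Pseq([0, 2, 4, 7], inf),
        \dur, 0.25 * speed
    )
};

// the conductor launches an agent's pattern onto the shared timeline
~conductor = { |agent|
    agent.value.play(TempoClock.default);   // returns an EventStreamPlayer
};

~player = ~conductor.(makeBeatAgent);       // launch one agent
// ~player.stop;                            // the conductor can also stop it
)
```

A real conductor would of course hold references to many such players and start/stop/mutate them from the GUI, but the shape – function in, playing stream out – would stay the same.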

I arrived at the question "can a function generate a pattern somehow?" and off I went into the SuperCollider Help system, where I started the "Understanding Streams, Patterns and Events" tutorial. It took me a few hours to get through the first part (out of 7). I learned some basics about Streams, but the feeling is that I don't know enough yet, so I think I should work on this tutorial for a few hours daily.
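For the record, the core distinction from that first tutorial part, plus an answer to the "can a function generate a pattern" question, in a minimal sketch: a Pattern is a blueprint, a Stream is the thing you actually pull values from, and `Pfunc` wraps an arbitrary function into a Pattern.

```supercollider
// Pattern vs. Stream: the pattern describes, the stream produces.
(
var pat, str;
pat = Pseq([1, 2, 3], 2);   // pattern: the sequence 1,2,3 played twice
str = pat.asStream;         // stream: a stateful reader of that pattern
6.do { str.next.postln };   // posts 1 2 3 1 2 3
str.next.postln;            // posts nil -- the stream is exhausted
)

// A function CAN generate pattern values: Pfunc calls it on every .next
(
var fnPat = Pfunc { 8.rand };   // fresh random value each time
var s = fnPat.asStream;
4.do { s.next.postln };
)
```

So an agent-as-function fits naturally: the function either returns a Pattern outright, or is itself wrapped in `Pfunc`/`Prout` to become one.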

To go back and reflect on the last month or two, especially with regard to the installation at Kapelica and A.I.:

At the beginning of August I managed to work on a pretty difficult rephrasing of the project – refocusing and trying to find a narrower path to produce material and then rework the audio-visual side of the installation. I came up with some interesting material in the area of rare earth elements and did a lot of research, but then felt unsure how to proceed and left it there. So the installation was not upgraded into another version I would be satisfied with, and today, just a few days before the closing of the exhibition, I feel like I failed to make something I liked, something that said something articulate (in art's own way) about Artificial Intelligence.

Next week I'm leaving for Linz to attend the Ars Electronica festival, especially its AI x Music part, but I must say I'm very frustrated by the term. It's just too loaded with hype, and it encompasses too huge a range of disciplines and approaches to actually mean anything but a hyper-bloated phantasm.


Good news! I got a small working stipend from the cultural ministry. The main requirement is to follow my plan and (I think) submit a report at the end. The work/study must not be focused on a final product. Inevitably I am creating a project in the same area and direction, so the upcoming piece “INTELLIGENCE IS WHATEVER MACHINES HAVEN’T DONE YET” for the exhibition at Kapelica gallery is more or less connected to this research. I will need to reschedule the working plan from spring to fall.



Last two weeks:

  • visited Rob in Maribor, had long discussion about different solutions, ideas… [190522]
  • fruitful and constructive meeting with Kapelica team [190523]
  • ordered hardware (2x computer monitors, 1x computer system with Nvidia graphics)
  • checked the cover/cocoon for the installation at Kapelica. It looks nice, black and acoustically dampened. Made plans with Jure about how to proceed [190531]
  • picked up two screens from Mlacom: one multi-touch screen (1920×1080) and another wide, curved screen (2560×1080), both Dell. Set them up in the studio and connected them to the already working prak system. Testing SuperCollider and Processing functionalities. [190531] That was progress.

Jit.log#190209 Machine Learning course starts

Yesterday I went through some basic introductory parts of the Machine Learning course at Coursera. Learned about unsupervised and supervised learning, classification and regression ML problems, and clustering and non-clustering problems. There's a lot left to do in week 1 and I'm not sure if I'll be able to finish it by tomorrow. These deadlines can be extended as needed, though.


Jitakami Research 2019: Hello World

Today starts the logging of the research project called Jitakami Research 2019. Basically it consists of three big blocks – two of them running serially, and one in parallel to the other two. The research part, for which I applied for a work stipend at the cultural ministry (still pending, and probably will be for some time), consists of study and research from now until the summer. In parallel, the idea is to produce a performance/installation in the BitShift program at Kapelica gallery sometime in April – currently called Raglain – and then work on the second phase (AUTOCOM) that would presumably be shown in Linz.

Continue reading

Jitakami Research 2019: working plan

This is the concept and plan for the working stipend I applied for (translated from Slovenian):

The subject of the stipend is research in the field of an artistic approach to the fusion of music and artificial intelligence.

The author is driven into this research above all by the question of how to establish an artistic situation (an intermedia installation and/or performance) that would radically enter the viewer's/listener's perception of contemporary technologies and of the phobias and fetishes connected with them. To approach possible answers, in-depth research is essential.
