First you have a normal PDF with A4 pages, say something you printed from Wikipedia using the ‘print to PDF’ option. Now you want to print it at half size (A5), but in order to use duplex printing and have everything line up, the pages have to be rotated and rearranged. On the Linux command line this is easy using Ghostscript and related tools.
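One way to do this, a sketch assuming the psutils programs (`psbook`, `psnup`) are installed alongside Ghostscript's `pdf2ps`/`ps2pdf` wrappers, and using `input.pdf` as a stand-in filename:

```shell
# Convert the PDF to PostScript, reorder the pages into a booklet
# signature, place two A5 pages on each A4 sheet, then convert back.
pdf2ps input.pdf input.ps         # PDF -> PostScript (Ghostscript)
psbook input.ps book.ps           # reorder pages for booklet folding
psnup -2 -pa4 book.ps booklet.ps  # 2-up imposition on A4 sheets
ps2pdf booklet.ps booklet.pdf     # back to PDF, ready to print
```

The `psbook` step is what rearranges page order so that, after duplex printing and folding, the pages read in sequence.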
Print the resulting file with the duplex option set to short-edge. You will end up with foldable sheets whose pages don’t nest one inside another. Fold them, stack them on top of each other, and bind them with a stapler or thread.
Trying to make a review of what has been done and researched since last log entry.
About 8 days ago I realised I need to understand the underlying concepts of Patterns in SuperCollider in order to be able to first imagine, and then know where and how to start laying down the first prototype(s) for Jitakami. I’ve been at this starting point for a very long time and it’s quite frustrating to be constantly conceptualizing while nothing comes out of it.
I worked through the whole Streams-Patterns-Events SuperCollider tutorial. It did open up and solidify some concepts, but I’m painfully aware of how important it is to keep coding in SC on a daily basis. And as a side note: I actually think I should stick with one programming language for a while, master it, write the whole sound engine + GUI in it first, and only then start expanding into visuals.
I was writing, testing and trying hard to move off the dead point for the last two days. The current question is: what data should an ‘agent’ actually output, send and execute as an algorithm? How should ‘agents’ be shaped, what kind of information do they hold and what do they output? I started to write down what kinds of libraries we (should) work with, as an inventory: libraries of instruments, libraries of duration patterns… what other patterns are important, how to classify them, and how to treat them so that I can create a simple specification.
I guess there could be a small library of basic instruments: drum machines (hi-hats, snares, kicks, claps, percussion, glitches), bass generators/synths, pads, leads. This is a classification according to the function an instrument has in a song. Let’s start simple.
The other ‘sort of’ library should hold duration/trigger patterns: 8 or 16 bars of beat patterns, about 10 variations that could be applied to a beat machine, for example. Then melodic patterns for basslines, melodic patterns for leads, and the same for pads.
This needs to be laid down in a simple specification and that’s our blueprint version 0.1 (or 2019-9).
Tomorrow I’m leaving for Linz by train, with a bike. The program is packed, and the one thing I know I’m looking forward to is Ryoichi Kurokawa, especially his live a/v performance; he made a new one in 2019. He also has an installation, a silent one, at Ars Electronica, and there are two things I’m wondering about: first, what A.I. did he incorporate into his work, and how, to be featured in the show; and secondly, why his a/v language is so dear to me. It is extremely powerful to me and something I “understand” (in an affective sense) very well, as if his artistic language were completely clear and familiar. When I watch his work it is … I don’t know how to describe it.
A four-hour session started as a quick 20-30 minute regrouping, which is my word for checking on the plan so far and readjusting it. I haven’t looked into all the stuff I have laid out in Asana (all the online courses and reading I’m behind on); I wanted to refresh my memory and plans around the Jitakami instrument project. That is why I didn’t fiddle too much with the study line.
Let me first go over what else was done today and then come back to an evaluation of the past two months.
I worked a bit on the basic general concept that the Jitakami engine prototype should follow. The idea is that there is some kind of top-level conductor process that oversees the composition. The conductor is controlled by the user via a touchscreen GUI and is able to intelligently launch and control agents/workers/operators/sequencers. These are possibly separate algorithms or functions that in turn create actual sequences of events, probably SuperCollider’s Patterns (or Events, or Streams?), and deploy them onto the concrete timeline.
I arrived at the question “can a function somehow generate a pattern?” and off I went into the SuperCollider help system, where I started on the “Understanding Streams, Patterns and Events” tutorial; it took me a few hours to get through the first part (out of 7). I learned some basic things about Streams, but the feeling is that I don’t know enough yet, so I think I should be working on this tutorial for a few hours daily.
To go back and reflect on the last month or two, especially with regard to the installation at Kapelica and A.I.:
At the beginning of the month (August) I managed to work on a pretty difficult rephrasing of the project, refocusing and trying to find a narrower path to produce material and then rework the audio-visual side of the installation. I came up with some interesting material in the area of rare-earth elements and did a lot of research, but then I wasn’t sure how to proceed and kind of left it there. So the installation was not upgraded into another version I would be satisfied with, and today, just a few days before the closing of the exhibition, I feel like I failed to make something I liked, something that said something articulate (in art’s own way) about Artificial Intelligence.
Next week I’m leaving for Linz to attend the Ars Electronica festival, especially its AI x Music part, but I must say I’m very frustrated by the term. It’s just too loaded with hype, and it encompasses too huge a range of disciplines and approaches to actually mean anything but a hyper-bloated phantasm.
Coming back to the studio (after kind of running away from it all to a café, so burdensome are the timeline, the deadline and all the delayed work!) I discover how much more there is online (and offline) about “music and A.I.”. I’m overwhelmed. I will post some links here as a way to bookmark things.
Good news! I got a small working stipend from the cultural ministry. The main requirement is to follow my plan and (I think) submit a report at the end; the work/study must not be focused on a final product. Inevitably I am creating a project in the same area and direction, so the upcoming piece “INTELLIGENCE IS WHATEVER MACHINES HAVEN’T DONE YET” for the exhibition at Kapelica Gallery is more or less connected to this research. I will need to reschedule the working plan from spring to fall.
visited Rob in Maribor, had a long discussion about different solutions, ideas…
fruitful and constructive meeting with Kapelica team 
ordered hardware (2x computer monitors, 1x computer system with Nvidia graphics)
checked the cover/cocoon for the installation at Kapelica. Looks nice: black and acoustically dampened. Made plans with Jure about how to proceed.
picked up two screens from Mlacom: one multi-touch screen (1920×1080) and another wide, curved screen (2560×1080), both Dell. Set them up in the studio, connected to the already-working prak system, and tested SuperCollider and Processing functionality. That was progress.
Today I’m working on creating a Git project, currently on Gitlab.com (I wish I could host it on my own virtual server using YUNOHOST + Gitlab CE), trying to break down the huge octopus with all its tentacles leading in different directions. It also makes sense to create issues that tie into Milestones and a TODO list automatically.
Today I added an additional volume to the mailserver prefect, since it was running out of disk space for the mailboxes. I have increased the space by adding a new ‘virtual’ block-storage volume and mounting it, and some mailboxes have been moved to the new drive. This new drive can be resized as needed at the lowest possible cost: we don’t need more CPU or memory, just more disk space, and this works out at $0.10/GB per month. The only drawback is that these volumes cannot be automatically backed up.
I have transferred the files from /var/mail/vmail to /mnt/vmail1/vmail using rsync, like this:
Came back to Jitakami research today. It was not easy to consolidate two-plus missed months, to throw away a stale plan and try to understand what is realistically possible in the 4 weeks left before the RAGLAIN/AUTOCOM installation that opens on 21 June at Kapelica Gallery.
Yesterday I went through some basic introductory parts of the Machine Learning course on Coursera. I learned about unsupervised and supervised learning, classification and regression problems, and clustering versus non-clustering problems. There’s a lot left to do in week 1 and I’m not sure I’ll be able to finish it by tomorrow. These deadlines can be extended as needed, though.
Today starts the logging of a research project called Jitakami Research 2019. Basically it consists of three big blocks: two of them running serially, and one in parallel to those two. The research part, for which I applied for a work stipend at the cultural ministry (still pending, and probably will be for some time), consists of study and research from now until the summer. In parallel, the idea is to produce a performance/installation for the BitShift program at Kapelica Gallery sometime in April, currently called Raglain, and then work on the second phase (AUTOCOM) that would presumably be shown in Linz.
Luka Prinčič: a musician, sound & media artist, engineer and dj. My sound goes from broken bass to noise, drone and sonic experiments. I'm one half of Wanda & Nova deViator, I run Kamizdat label and work at Emanat institute. I'm passionate about critical art expressions, free software, social awareness, cyberpunk, and peculiarity of contemporary human condition.