SCDAWNREM20

Live stream – 3 May 2020 @ 05:00–05:45 CEST (UTC+2)

A live composition/improvisation using SuperCollider, the live atmospheric sound of the environment, and CC-licensed field recordings. It will be performed from the easternmost part of the municipality of Ljubljana, where we remain in lockdown due to the pandemic. The location is elevated enough to afford a good view of the sunrise. The sound broadcast will run roughly through civil twilight, starting 45 minutes before sunrise.

This live stream is produced by CONA as part of the larger international projects Soundcamp, Acoustic Commons, and the Reveil channel.

SCDAWNREM20 general rehearsal

A week ago I made a similar trip to Volavlje, the easternmost part of the Ljubljana municipality, but that time during sunrise itself, which meant I was able to see the surrounding landscape on the way: the valley that starts at the end of the basin after you cross a steep hill, and then the final ascent to the rise, passing weirdly positioned and maintained houses – some obviously weekend holiday types, some rustic ones without insulation. A peculiar aesthetic experience.

Today I got up at four in the morning, quickly made some non-coffee (rye, chicory), and off I went into the night, onto the motorway ring around Ljubljana, until veering off it perpendicularly, eastward. It was a good thirty-five-minute drive, and when I arrived at my spot it was no longer completely dark. The birds were already loud. There was a light breeze, but nothing substantial – good conditions for my microphone, which I managed to attach to the top edge of the car’s front windshield. The USB cable then came in through just a small opening of the side window and into my laptop.


Rhizosphere log #200316

Here’s the gist at this point (details below):

  • The code is a work in progress and already publicly available at my new git space git.tmp.si – specifically git.tmp.si/luka/Rhizosphere.
  • I’m still thinking of perhaps doing a rehearsal live-performance stream tomorrow, Tuesday, when the live performance should have taken place at Steklenik. Certainly Opus audio, but possibly also a WebM a/v stream via self-hosted Icecast2, or YouTube, or Picarto, or Twitch – or all at once.

Today I did some testing with streaming from home.

A5 booklet/brochure on Linux

First you have a normal PDF with A4 pages – something you printed from Wikipedia using the ‘to PDF’ option. Now you want to print it at half size, i.e. A5, but in order to use duplex printing and everything, the pages have to be rotated and rearranged. On the Linux command line this is easy using Ghostscript and psutils tools.

pdf2ps

convert your PDF to PostScript:

$ pdf2ps input.pdf output.ps

pstops

the main fun is here:

$ pstops -pa4 "4:1R@0.7(0,1h)+2R@0.7(0,0.5h),0R@0.7(0,0.5h)+3R@0.7(0,1h)" output.ps booklet.ps

The page spec takes input pages in groups of four (‘4:’); each page is rotated 90° (‘R’), scaled to 70% (‘@0.7’, roughly A4 to A5) and offset, while ‘+’ places two pages on the same output sheet and ‘,’ starts the next sheet.

ps2pdf

convert the resulting PostScript back to PDF (calling gs directly here, which is what ps2pdf wraps):

$ gs -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -sOutputFile=booklet.pdf -dAutoRotatePages=/None booklet.ps

Print the resulting file duplex, flipping on the short edge. You will end up with foldable sheets that don’t nest one inside another: fold each sheet, stack them on top of each other, and bind them with a stapler or thread.

Jit.log #190904

Trying to review what has been done and researched since the last log entry.

About 8 days ago I realised I need to understand the underlying concepts of patterns in SuperCollider in order to be able to first imagine and then know where and how to start laying down the first prototype/s for Jitakami. I’ve been at this starting point for a very long time, and it’s quite frustrating to be constantly conceptualizing while nothing comes out of it.

I worked through the whole Streams-Patterns-Events SuperCollider tutorial. It did open up and solidify some concepts, but I’m painfully aware of how important it is to keep coding stuff in SC on a daily basis. And as a side note – I actually think I should stick with one programming language for a while, master it, write the whole sound_engine+GUI in it first, and only then start expanding it to visuals.

I have been writing and testing and trying hard to move off the dead point for the last two days. The current question is: what data should an ‘agent’ actually output – send and execute as an algorithm? How should ‘agents’ be shaped, what kind of information do they hold, and what do they output? I started to write down what kind of libraries we (should) work with (an inventory): libraries of instruments, libraries of duration patterns… which other patterns are important, how to classify them, and how to treat them, so that I can create a simple specification.

I guess there could be a small library of basic instruments, like drum machines (hi-hats, snares, kicks, claps, percussion, glitches), bass generators/synths, pads, leads. This is a classification according to the function of an instrument in a song. Let’s start simple.

The other ‘sort of’ library should be duration/trigger patterns – 8 or 16 bars of beat patterns, about 10 variations that could be applied to the beat machine, for example. And then melodic patterns for basslines, melodic patterns for leads, and the same for pads.

This needs to be laid down in a simple specification and that’s our blueprint version 0.1 (or 2019-9).
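To make that a bit more concrete, here is a minimal SuperCollider sketch of what such an inventory could look like – the names (~beatLib, \kick) and the 16-step amplitude-mask approach are placeholder assumptions on my part, not the blueprint itself:

(
// minimal sketch: one placeholder drum-machine instrument plus a tiny
// "duration/trigger pattern library" of 16-step masks (1 = hit, 0 = rest)
s.waitForBoot {
    SynthDef(\kick, { |out = 0, amp = 0.5|
        var env = EnvGen.kr(Env.perc(0.001, 0.25), doneAction: 2);
        Out.ar(out, (SinOsc.ar(XLine.kr(150, 50, 0.1)) * env * amp) ! 2);
    }).add;
    s.sync;

    ~beatLib = (
        fourOnFloor: [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0],
        offbeat:     [0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0]
    );

    // pick one variation and deploy it as an endless stream of 16th notes;
    // zero-amplitude steps stand in for rests
    Pbind(
        \instrument, \kick,
        \dur, 0.25,
        \amp, Pseq(~beatLib.fourOnFloor, inf) * 0.5
    ).play;
};
)

A melodic equivalent – arrays of scale degrees or MIDI notes for basslines, leads and pads – would then sit next to ~beatLib in the same inventory.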


Tomorrow I’m leaving for Linz by train, with a bike. The programme is packed, and the only thing I know I’m looking forward to is Ryoichi Kurokawa – especially his live a/v performance; he has done a new one in 2019. He also has an installation – a silent one – at Ars Electronica, and there are two things I’m wondering about: first, what A.I. he incorporated into his work, and how, for it to be featured in the show; and second, why his a/v language is so dear to me. It is extremely powerful to me and something I “understand” (in an affective sense) very well – as if his artistic language were very clear and close to home. When I watch his work it is … I don’t know how to describe it.

Jit.log #190827

Short report on today’s work:

A four-hour session started with a quick 20-30 minutes of regrouping – that’s a synonym for checking on the plan so far and readjusting it. I haven’t looked into all the stuff I have laid out in Asana (all the online courses and reading I’m behind on); I wanted to refresh my memory and plans about the Jitakami instrument project. That is why I didn’t fiddle too much with the study line.

Let me first go over what was done today and then come back to an evaluation of the past two months.

I worked a bit on the basic general concept that the Jitakami engine prototype should follow. The idea is that there’s some kind of top conductor process that oversees the composition. The conductor is controlled by the user via a touchscreen GUI. The conductor is able to intelligently launch and control agents/workers/operators/sequencers – possibly separate algorithms or functions that in turn create the actual sequences of events – probably SuperCollider’s Patterns (or Events or Streams?) – and deploy them to the concrete timeline. A rough sketch of this split follows below.
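Purely as an illustration (the names ~launchAgent, \agentA and \agentB are placeholders, not design decisions), the conductor could be a Routine that launches and retires pattern-playing agents:

(
// conductor/agent sketch: each "agent" deploys a Pbind to the timeline via Pdef,
// while the "conductor" Routine decides when agents start and stop.
// Assumes the server is already booted (s.boot) and uses the default synth.
~launchAgent = { |name, notes, dur|
    Pdef(name, Pbind(\midinote, Pseq(notes, inf), \dur, dur)).play;
};

~conductor = Routine {
    ~launchAgent.(\agentA, [60, 63, 67], 0.5);       // start a first agent
    8.wait;                                          // let it run for 8 beats
    ~launchAgent.(\agentB, [36, 36, 43, 36], 0.25);  // bring in a second one
    16.wait;
    Pdef(\agentA).stop;                              // the conductor retires agent A
}.play(TempoClock.default);
)

A real GUI would then talk to the conductor instead of these hard-coded waits.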

I arrived at the question “can a function generate a pattern somehow?” and off I went into the SuperCollider help system, where I started working on the “Understanding Streams, Patterns and Events” tutorial; it took me a few hours to get through the first part (out of 7). I learned some basic stuff about Streams, but the feeling is that I don’t know enough yet, so I think I should be working on this tutorial daily, for a few hours.
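For the record, the short answer appears to be yes: an ordinary function can simply build and return a Pattern object (and Pfunc or Prout can wrap functions directly). A tiny illustration with made-up names:

(
// a function that returns a Pattern: build a random bassline as a Pseq
~makeBassline = { |root = 36, steps = 8|
    Pseq(Array.fill(steps, { root + [0, 3, 5, 7, 10].choose }), inf)
};

// the returned pattern drops straight into a Pbind (default synth, server booted)
Pbind(
    \midinote, ~makeBassline.(40),
    \dur, 0.25,
    \sustain, 0.2
).play;
)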

To go back and reflect on the last month or two, especially with regard to the installation at Kapelica and A.I.:

At the beginning of the month (August) I managed to work on a pretty difficult rephrasing of the project – kind of refocusing and trying to find a narrower path to produce material, and then to rework the audio-visual side of the installation. I came up with some interesting material in the area of rare earth elements and did a lot of research, but then felt unsure how to proceed and kind of left it there. So the installation was not upgraded into another version that I would be satisfied with, and today, just a few days before the closing of the exhibition, I feel like I failed to make something I liked, something that said something articulate (in art’s own way) about Artificial Intelligence.

Next week I’m leaving for Linz to be at the Ars Electronica festival, and especially its AI x Music part, but I must say I’m very frustrated by the term. It’s just too loaded with hype, and it encompasses too huge a range of disciplines and approaches to actually mean anything but a hyper-bloated phantasm.