Development log

Code, tricks, quick technical ideas, recipes, progress reports, bash, ffmpeg and more…

making of Requiem for The Future: A.I. video

The biggest part of creating this video was generating satisfactory glitched material. I researched various ways to glitch images using the SoX tool and a bug in the JPEG-LS codec of pre-3.0 FFmpeg. Below is a bash script which works through all .mov videos in the current folder, extracts frames, glitches them, and stitches the videos back together from the frames. Here’s the gist of all the glitches in summary:


Here’s a full bash script:


jpeg-LS glitching with FFmpeg

In older versions of FFmpeg it was possible to glitch images with the JPEG-LS codec. Newer versions of FFmpeg no longer behave this way, so one must download an old version and compile it (keeping it local) – version 2.0.7 is used here.

This script takes a video file as an argument, extracts frames, glitches them, and reassembles the frames into a video file.


glitching images & movies with audio effects using sox

A tutorial will be here soon, but for now, here is a script that glitches your video file.

Requires bash, ffmpeg & sox.


Processing & multitouch

I wasn’t supposed to be doing this today, but:

Multitouch support on Linux seems much simpler with a different library than SMT, which I was unsuccessfully trying last year. “Simple Touch” actually “just works” via the evdev library. The only things needed were to supply the right event device from /dev/input and to make it readable and writable for a normal user.

Find out which event device is your touch screen:

Above you see that the id of the touch screen is 11, which means we have to allow access to /dev/input/event11.
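A quick way to find the device and open up its permissions might be the following (event numbers are machine-specific, and the chmod resets on reboot – a udev rule would make it permanent):

```shell
# Input devices and their handlers; the touch screen shows up with an
# eventNN handler (event11 on my machine, numbers differ per machine):
grep -iA4 touch /proc/bus/input/devices

# Let a normal user read and write the device:
sudo chmod o+rw /dev/input/event11
```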

Alternatively (and perhaps much more reliably) one should check the contents of /dev/input/by-id/:

You can see that the touch screen reports itself as device “../event5”.
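For reference, listing that directory and resolving a symlink looks like this (the device name below is a hypothetical example – yours will differ):

```shell
# Stable, human-readable symlinks for input devices; each points at the
# real event device:
ls -l /dev/input/by-id/

# Resolve one symlink to its target (device name is hypothetical):
readlink /dev/input/by-id/usb-SomeVendor_Touchscreen-event-if00
```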

Simple Touch library is available from the Tools menu in Processing, but also here:
https://github.com/gohai/processing-simpletouch/

In Processing there are a couple of examples for the Simple Touch library, but the essence of a successful start is:

1) list devices

will output something like

The touch screen (event11) has index [3], so to open this device you finally use:

It also seems possible to do just this:

and the following also works:

This opens the door to writing multitouch interfaces in Processing, which however means a lot more work. For now it seems the way to go is writing the control GUI in SuperCollider, but eventually the Processing route looks very interesting and inviting.

So a final complete working example for my machine is this:

timelapse aka speeding up with ffmpeg

These days I’m recording my work in the studio using the timelapse function of my Panasonic TM700 HD camera. It happened that I forgot to turn the function on (it needs to be turned on every time you start recording), which usually records one frame every 10 seconds, so I ended up with a “normal” recording and wanted to convert it to what the camera usually produces. FFmpeg to the rescue! For this kind of frame manipulation the “setpts” video filter is what one needs. I gathered knowledge online with the help of two pages:

http://blog.grio.com/2012/01/fast-and-slow-motion-video-with-ffmpeg.html
https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video

So, supposedly, the right way to change video speed using ffmpeg is by adjusting the “presentation time stamp” (PTS). This adjusts the frames’ metadata that determines how long each one is displayed, which is exactly what you want.

This is the crucial piece of code that needs to be passed to ffmpeg:

-filter:v "setpts=2.0*PTS"

Or a more practical example using a fraction – if the original framerate is 25 frames per second, I only need every 250th frame:

$ ffmpeg -i INPUTFILE.mkv -filter:v "setpts=(1/250)*PTS" OUTPUTFILE.mkv

My final conversion was from full-HD .mts to 720p .webm and to mp4:

$ ffmpeg -i INPUT.mts -filter:v "setpts=(1/250)*PTS" -s 1280x720 -c:v libvpx -crf 5 -b:v 8M -an OUTPUT.webm
$ ffmpeg -i INPUT.mts -filter:v "setpts=(1/250)*PTS" -s 1280x720 -c:v libx264 -preset slow -crf 10 -an OUTPUT.mp4

switching caps-lock key into control key

There are a couple of ways to turn your Caps Lock key into a Control key. For some of us keyboard-shortcut nerds, keyboard usage optimisation is quite an important topic. One useful change is to move (or rather add) a Control (CTRL) key to the place of the (rarely used) CAPS LOCK key. This is especially useful if one works in EMACS a lot. So, obviously, the emacs-wiki is a good source for various ways to achieve that – see https://www.emacswiki.org/emacs/MovingTheCtrlKey for more. But specifically, in my case I used the following recipe for Debian and derivatives:

To make Caps Lock another Ctrl key, edit the file /etc/default/keyboard and change the line which reads

to

and then run:

Changes take effect on the next login and seem to persist across virtual terminals and X sessions.
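For reference, on Debian and derivatives the whole recipe amounts to the following sketch (the sed pattern assumes XKBOPTIONS carries no other options you want to keep – inspect /etc/default/keyboard first):

```shell
# Point XKBOPTIONS at ctrl:nocaps in /etc/default/keyboard
# (assumes no other XKB options are set):
sudo sed -i 's/^XKBOPTIONS=.*/XKBOPTIONS="ctrl:nocaps"/' /etc/default/keyboard

# Apply the change system-wide:
sudo dpkg-reconfigure -phigh console-setup

# Per-session alternative (X only, touches no files):
setxkbmap -option ctrl:nocaps
```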

notes on hypersexualised/pornified programming of male (and female) mind

Warning: this is mainly a short brainstorm about possible future projects. And it’s nothing new, is it?

Perhaps the Crucial Pink & Interface Fractures IV projects can hold hands on the research question: how to really deal with the programming of the male mind that creates an addiction to pornography, or, with lesser effect, at least to sexist imagery of hypersexualised female bodies? How to survive it and re-program that male mind without resorting to denial and repression of desire and/or pleasure?

Does Foucault’s understanding of pleasure give any clues? Do Deleuze & Guattari’s philosophies of desire?

Personally, I see a way of creative/artistic exploration of these issues through the queering of the male body. Accepting the desire/pleasure of female dresses and underwear as part of performing a weird queer subjectivity through live-art sound and video… writing, finding words of pleasure and containment, of the imprisonment of one male’s desire and pleasure in pre-shaped images, clips, fetishes.

This is recurring and ongoing. It has to be faced, explored and expressed somehow.

textures & power of 2

In the early days of OpenGL and DirectX, it was required that textures were powers of two. This meant that interpolation of float values could be done very quickly using shifting and such. Since OpenGL 2.0, and preceding that via an extension, non-power-of-two texture dimensions have been supported. Are there performance advantages to sticking to power-of-two textures on modern integrated and discrete GPUs? What advantages do non-power-of-two textures have, if any? Are there large populations of desktop users who don’t have cards that support non-power-of-two textures?

ANSWER: (2015)

Power-of-2 textures increase performance by about 30% for any type of GPU, not only old GPUs (30% faster is the difference between a high-end GPU and an average one). They take 30% more RAM, but less VRAM is needed. They increase quality by providing the proper texture size for a specific distance – it works like anti-aliasing for textures. The dark-line artifact should be handled by game engines, and AAA engines handle it fine.

Source: opengl – why would you use textures that are not a power of 2? – Game Development Stack Exchange

Line of purples – Wikipedia, the free encyclopedia

In color theory, the line of purples or the purple boundary is the locus on the edge of the chromaticity diagram between extreme spectral red and violet. Except for the endpoints, colors on the line of purples are not spectral. Line-of-purples colors and spectral colors are the only ones which are considered fully saturated in the sense that for any given point on the line of purples there exists no color involving a mixture of red and violet that is more saturated than it. There is no monochromatic light source able to generate a purple color. Instead, every color on the line of purples is produced by mixing a unique ratio of fully saturated red and fully saturated violet, at the extreme points of visibility on the spectrum of pure hues.

Unlike spectral colors (which may be implemented, for example, by nearly monochromatic light of laser, with precision much finer than human chromaticity resolution), colors on the line of purples are more difficult to implement practically. Cones’ sensitivity to both of the spectral colors at the opposite extremes of what the human eye can see is quite low (see luminosity function), so commonly observed purple colors do not achieve a high level of brightness.

The line of purples, a theoretical boundary of chromaticity, should not be confused with “purples“, a more general color term which also refers to less than fully saturated colors (see variations of purple and variations of pink for possible examples) which form an interior of a triangle between white and the line of purples in the CIE chromaticity diagram.

Source: Line of purples – Wikipedia, the free encyclopedia

processing applet on desired monitor

An example of how to control which monitor a Processing applet (the sketch output window) appears on when you’re using a multi-head setup: