IF3

textures & power of 2

In the early days of OpenGL and DirectX, textures were required to have power-of-two dimensions. This meant that interpolation of float values could be done very quickly using bit shifts and the like. Since OpenGL 2.0, and before that via an extension, non-power-of-two texture dimensions have been supported. Are there performance advantages to sticking to power-of-two textures on modern integrated and discrete GPUs? What advantages do non-power-of-two textures have, if any? Are there large populations of desktop users who don’t have cards that support non-power-of-two textures?

ANSWER: (2015)

Power-of-two textures improve performance by roughly 30% for any type of GPU, not only old GPUs (30% faster is the difference between a high-end GPU and an average one). They take about 30% more RAM, but less VRAM is needed. They also increase quality by providing a properly sized texture for a specific distance – it works like anti-aliasing for textures. The dark-line artifact should be handled by the game engine, and AAA engines handle it fine.

Source: opengl – why would you use textures that are not a power of 2? – Game Development Stack Exchange
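As an aside, the power-of-two property the question and answer lean on is exactly what makes the bit-shifting tricks possible; a minimal Python sketch of the classic checks (function names are mine):

```python
def is_pow2(n: int) -> bool:
    # A power of two has exactly one bit set, so n & (n - 1) clears it to zero.
    return n > 0 and (n & (n - 1)) == 0

def next_pow2(n: int) -> int:
    # Round a texture dimension up to the next power of two.
    return 1 if n <= 1 else 1 << (n - 1).bit_length()
```

This is also how an engine would decide whether to pad a non-power-of-two image up to, say, 512 before uploading it.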

Line of purples – Wikipedia, the free encyclopedia

In color theory, the line of purples or the purple boundary is the locus on the edge of the chromaticity diagram between extreme spectral red and violet. Except for the endpoints, colors on the line of purples are not spectral. Line-of-purples colors and spectral colors are the only ones which are considered fully saturated in the sense that for any given point on the line of purples there exists no color involving a mixture of red and violet that is more saturated than it. There is no monochromatic light source able to generate a purple color. Instead, every color on the line of purples is produced by mixing a unique ratio of fully saturated red and fully saturated violet, at the extreme points of visibility on the spectrum of pure hues.

Unlike spectral colors (which may be implemented, for example, by nearly monochromatic light of laser, with precision much finer than human chromaticity resolution), colors on the line of purples are more difficult to implement practically. Cones’ sensitivity to both of the spectral colors at the opposite extremes of what the human eye can see is quite low (see luminosity function), so commonly observed purple colors do not achieve a high level of brightness.

The line of purples, a theoretical boundary of chromaticity, should not be confused with “purples”, a more general color term which also refers to less than fully saturated colors (see variations of purple and variations of pink for possible examples) which form the interior of a triangle between white and the line of purples in the CIE chromaticity diagram.

Source: Line of purples – Wikipedia, the free encyclopedia

The Interface = relation

The interface, Hookway tells us, has always exhibited ‘a tendency toward a seeming transparency and disappearance’ and this ‘illusory disappearance is an essential aspect of the operation of a user interface, in as much as an operator internalizes the user interface in the course of working through it.’

‘Control upon the interface involves a double moment,’ we are told, ‘where power at once confines and enables.’ We are at once augmented and reduced by our interactions, promised limitless powers but only if we may shrink ourselves to fit a machine-readable vision of the human.

‘[T]he surface refers back to a thing,’ Hookway explains, ‘and expresses the properties of that thing, while the interface refers back to a relation between things and expresses an action.’

— Branden Hookway, Interface (MIT Press, 184pp, £17.95, ISBN 9780262525503), via review31.co.uk

Modell 5 – Granular Synthesis

Kurt Hentschlaeger and Ulf Langheinrich have been working together as Granular-Synthesis since 1991.

“from a few expressions on the face of the performer Akemi Takeya to a frenzied exploration of the alter ego, any known context of meaning ends in the dissolved movements, is stalled in denaturalized redundancy, in machine pain. The semantic void is too loud to be amenable to meditative reception. The frontal images, the rhythmic structures generate contradictory emotions and great strain. Entertainment is offered and almost violently denied. At the highest level of energy, enjoyment reaches the limit.” Sample session performed by Akemi Takeya. Edited on various AVID Suites in England and Austria between 1994-96.
Produced by: Mike Stubbs, at HTBA (Hull Time Based Arts) in Hull England.
Co-produced by PYRAMEDIA Vienna.

from https://www.youtube.com/watch?v=ATWljMbvVTg

see also http://www.granularsynthesis.info

processing applet on desired monitor

An example of how to control which monitor a Processing applet (the sketch output window) appears on when you’re using a multi-head setup:
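The example itself did not survive in this export; the simplest approach in a Processing 2.x sketch is to move the window past the first screen in setup(). This is a hypothetical reconstruction – the 1920 px offset assumes the primary display is 1920 px wide, so adjust it to your layout:

```java
// Processing 2.x sketch: place the window on the second monitor.
// Assumes side-by-side heads with a 1920 px wide primary display.
void setup() {
  size(800, 600);
  frame.setLocation(1920, 0);  // x offset past the first screen
}

void draw() {
  background(0);
  ellipse(mouseX, mouseY, 40, 40);
}
```

In Processing 3 the same thing can be done more directly with fullScreen(2) in settings().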

multitouch with dual head/monitor Xorg

Having a multi-touch monitor (DELL P2314T) together with another non-multi-touch output confuses (in my case) the pointer mapping – in other words, the pointer (mouse cursor) is not where you touch the screen.

1) Make sure the touch screen is the leftmost monitor. Offsetting the pointer with xinput seems not to work (something is buggy here), but scaling does. Actually, that is not entirely true: offsetting does work with xinput, but if the multi-touch screen is not leftmost, the pointer is thrown to the rightmost pixel on the X-axis the moment it is supposed to appear on the multi-touch screen (this happens only for MT input, not for the actual mouse). If the touch screen is the leftmost, there is no need for an offset, just proper scaling.

2) Use xinput’s “Coordinate Transformation Matrix” to ‘remap’ it correctly:

see wiki.archlinux.org:Calibrating_Touchscreen
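The arithmetic behind that matrix boils down to scaling and offsetting by the touchscreen’s share of the total virtual desktop. A small Python helper (the function name and the device name in the comment are my examples, not anything from the original setup):

```python
def touch_matrix(touch_w, touch_h, touch_x, touch_y, total_w, total_h):
    """Coordinate Transformation Matrix for a touchscreen covering the
    region (touch_x, touch_y, touch_w, touch_h) of a total_w x total_h
    virtual desktop. Returned row-major, as xinput expects it."""
    return [touch_w / total_w, 0.0, touch_x / total_w,   # x scale, x offset
            0.0, touch_h / total_h, touch_y / total_h,   # y scale, y offset
            0.0, 0.0, 1.0]

# Touchscreen as the leftmost 1920x1080 head of a 3840x1080 desktop:
m = touch_matrix(1920, 1080, 0, 0, 3840, 1080)
# Apply with xinput (device name is an example -- check `xinput list`):
#   xinput set-prop "Advanced Silicon S.A CoolTouch(TM) System" \
#       "Coordinate Transformation Matrix" 0.5 0 0  0 1 0  0 0 1
```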

and here’s a simple /etc/X11/xorg.conf:
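The file itself is not preserved in this export; a minimal hypothetical version that bakes the matrix into an InputClass section so it survives reboots (the MatchProduct string is an example – check `xinput list` for your device, and use your own matrix values):

```
# /etc/X11/xorg.conf (sketch, not the original file)
Section "InputClass"
    Identifier         "DELL P2314T touchscreen"
    MatchIsTouchscreen "on"
    MatchProduct       "Advanced Silicon"   # example match string
    Option             "TransformationMatrix" "0.5 0 0 0 1 0 0 0 1"
EndSection
```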

IF3: theories on audio-visual

Been reading the last part of Deleuze’s Cinema 2: The Time-Image for most of the day. In the last chapter (“The components of the image”) he seems to be focused very much on sound (words, sounds, music) and finally on a ‘birth of the audio-visual‘. It looks like I’m searching for certain views, perspectives, ways of thinking about the audio-visual – about cinematic sound and image, the electronic image in the cinema – in order to find paths towards concrete actions: programming, searching for content, recording, composing… Of course, Deleuze’s writing is philosophical, deep and extremely challenging, while Michel Chion’s is somewhat chaotic and (especially compared to Deleuze) superficial. But it seems to me that what I need is to extract workable concepts that will help me in a practical way. I suppose that Chion’s concepts are still imaginative and interesting enough for that purpose.

It is quite amazing that Deleuze writes in 1985:

“When the frame or the screen functions as instrument panel, printing or computing table, the image is constantly being cut into another image, being printed through a visible mesh, sliding over other images in an ‘incessant stream of messages, and the shot itself is less like an eye than an overloaded brain endlessly absorbing information: it is the brain-information, brain-city couple which replaces that of eye-Nature. […] a brain which has a direct experience of time, anterior to all motivity of bodies […].”

IF3 Progress Report #1

With the summer-time, work on my new audio-visual piece, Interface Fractures III, began. It is now almost confirmed that the premiere showing at the Slovenian Cinematheque (Slovenska Kinoteka) will most probably be on 15 September. Since the plan was to spend some quality sun & salt time on the Croatian coast, I brought some machinery with me on vacation. It’s always fun to work in the summer heat!

Anyway, with this next episode in the series I want to upgrade technically a “little bit”, so I acquired a better graphics card (Nvidia GTX 960) and a multi-touch monitor with full-HD 1080p resolution. Since I also added a 120 GB SSD drive, I needed to reinstall the operating system (Ubuntu Studio 14.04.1); I compiled the Nvidia drivers separately, and the rest worked pretty much out of the box (after some apt-get-ing). Multi-touch is application-dependent, and my idea (for many years now) is to write custom interfaces for live sound/music/noise and visual composition and improvisation.

More technicalities: I compiled Processing and SuperCollider, and tried a multi-touch library for Processing (SMT), but it didn’t work. I filed an issue on their GitHub and went on with a version of SuperCollider that supports multi-touch (I was kindly pushed in the right direction by Scott Cazan, who added MT support to his own branch of SC on GitHub). After some basic testing I wrote a simple granulator with a GUI. I also tested a very basic idea of a TABs-like interface. In Processing I whetted my appetite with an exercise focused on off-screen rendering and blending two images together.
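The granulator itself isn’t shown here; as an illustration of the underlying technique only, here is a minimal NumPy sketch of granular synthesis (all names and parameters are mine, not the SuperCollider version): Hann-windowed grains picked from random positions in a source buffer and overlap-added into the output.

```python
import numpy as np

def granulate(source, grain_len=2048, hop=512, jitter=256, out_len=48000, seed=0):
    """Tiny granular synthesis sketch: overlap-add Hann-windowed grains
    taken from random positions in `source`."""
    rng = np.random.default_rng(seed)
    out = np.zeros(out_len)
    win = np.hanning(grain_len)          # smooth grain envelope, avoids clicks
    pos = 0
    while pos + grain_len <= out_len:
        start = int(rng.integers(0, len(source) - grain_len))
        out[pos:pos + grain_len] += source[start:start + grain_len] * win
        pos += hop + int(rng.integers(-jitter, jitter))  # jittered grain spacing
    return out / max(1e-9, np.abs(out).max())            # normalize to [-1, 1]
```

A real-time version in SC would of course stream grains from a buffer instead of rendering offline, but the grain/window/overlap logic is the same.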

Processing: slice and blend screenshot

 


SuperCollider GUI – tabs proof of concept

These are very newbie baby steps in the construction of something bigger: a powerful, flexible interface for a touch-screen device. Novels are written one word at a time, right?

The following is a snippet of code that I needed to write in order to test a TABs-like behaviour in SuperCollider’s Qt GUI system. Essentially, I was curious whether it’s possible to show and hide whole windows/areas of different widgets using a tabs-like paging system – something we’re all used to by now from browsers, for example.
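The snippet itself isn’t preserved in this export; a minimal reconstruction of the idea (widget names, sizes and layout are my assumptions, not the original code) – each “tab” button simply toggles the visibility of one CompositeView:

```supercollider
(
var win = Window("tabs proof of concept", Rect(100, 100, 400, 300));
// one CompositeView per page, stacked in the same place
var pages = 3.collect { |i|
    var v = CompositeView(win, Rect(0, 40, 400, 260));
    StaticText(v, Rect(10, 10, 200, 20)).string_("Page " ++ (i + 1));
    v.visible = (i == 0);   // only the first page starts visible
    v
};
// a row of buttons acting as tabs
3.do { |i|
    Button(win, Rect(10 + (i * 80), 5, 70, 30))
    .states_([["Tab " ++ (i + 1)]])
    .action_({ pages.do { |p, j| p.visible = (j == i) } });
};
win.front;
)
```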

SC GUI TABs proof of concept anim gif