what I learned about A.I.

For the last 6+ months I have been doing some research on Artificial Intelligence for an art project. The field is vast and artistic engagement with it is far from simple. Here I’m laying down some quick thoughts about it.

multi-disciplinary

The term A.I. encompasses various approaches and disciplines: natural language processing, machine learning, robotics, artificial neural networks, evolutionary algorithms, deep learning, ethics and philosophy, among many more (wikipedia.org/wiki/Artificial_intelligence). Apart from ethical, sociological and philosophical considerations it is mainly hardcore computer science. Its history spans more than half a century – it was founded as an academic discipline in 1956. Today, scientific papers touching on the subject in one way or another are published on a daily basis (arxiv.org/list/cs.AI/recent).

hard science

Learning how to use artificial neural networks, machine learning and other algorithms is fairly complicated and difficult. Learning by doing takes a fair amount of resources: good planning, heaps of time and a lot of processing power – brain and computer alike. For me personally it has been a great challenge to tackle the basics of machine learning math and all the computer science that comes with it.
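To give an idea of the kind of math involved – this is a toy example of my own, not taken from any particular course – here is the simplest case of it in code: fitting y = w·x to a handful of points by gradient descent, i.e. repeatedly nudging the weight w against the gradient of the error. Much of deep learning is, at its core, this loop scaled up to millions of weights.

```ts
// Toy example: fit y = w * x to data with gradient descent.
const xs = [1, 2, 3, 4];
const ys = [2, 4, 6, 8]; // the underlying relationship is y = 2x

let w = 0.0;               // start from a wrong guess
const learningRate = 0.01;

for (let step = 0; step < 200; step++) {
  // Mean squared error E = (1/n) * sum((w*x - y)^2),
  // and its gradient dE/dw = (2/n) * sum((w*x - y) * x).
  let gradient = 0;
  for (let i = 0; i < xs.length; i++) {
    gradient += 2 * (w * xs[i] - ys[i]) * xs[i];
  }
  gradient /= xs.length;
  w -= learningRate * gradient; // step a little "downhill"
}

console.log(w); // converges towards 2
```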

the hype

A.I. seems like a huge buzzword these days. Judging from the media and YouTube, everyone is doing something with it. Some uses are quite superficial, seemingly done just because it’s trendy, while others are actually interesting. But regardless of depth, it is being marketed in a big way, ‘attached’ to many projects to raise their value in attention capital. Take, for example, a song made by putting all the winning Eurovision songs through some form of machine learning, then taking the output and producing the hell out of it. Of course, this new technology is exciting and becoming more and more accessible thanks to the open publishing of code and tools, but it seems it is also often abused simply because it is currently a hot topic.

it’s here to stay

Trying to look beyond the hype, it is clear that technology powered by some form of A.I. is increasingly in use today. The growth is exponential: more and more of the tech we use daily is, in one way or another, made with A.I. It’s not just hype – there are many real-life applications, and processing power in the cloud can even be bought for anyone’s apps and projects. To a certain degree the research is also overflowing the confines of the academic and corporate worlds into the open source and DIY scenes. This is not new in itself, but the degree of ubiquity and spread is growing every day.

environmentally unfriendly

I’ve learned that deep learning has a terrible carbon footprint. According to an article in MIT Technology Review, the computation required for training a single AI model can emit as much carbon as five cars do in their lifetimes (technologyreview.com):

They found that the computational and environmental costs of training grew proportionally to model size and then exploded when additional tuning steps were used to increase the model’s final accuracy. In particular, they found that a tuning process known as neural architecture search, which tries to optimize a model by incrementally tweaking a neural network’s design through exhaustive trial and error, had extraordinarily high associated costs for little performance benefit. Without it, the most costly model, BERT, had a carbon footprint of roughly 1,400 pounds of carbon dioxide equivalent, close to a round-trip trans-America flight for one person

(technologyreview.com)

safe, accessible

Recently I also discovered OpenAI – a not-for-profit organization and newly incorporated company that strives for safe and accessible A.I. Its charter is close to my values and ideas, especially the idea of critical use of technology: technology in itself is not good or bad per se – it holds certain potentials in certain contexts, but at the end of the line what matters is how people use it. It is in the use of technology by people that abuse can happen, along with all the undesired and unethical results. Furthermore, the way we use and exploit technology is governed by what we know about it, how we think about it, and how aware we are of the consequences of the different uses that emerging technologies make possible. It’s education, discussion and action. I have always felt that the right approach is critical thinking that is not technophobic on the one hand, and not evangelical and blindly fascinated by new tech on the other – ethical and good for all mankind.

music

David Cope and his work on EMI (Experiments in Musical Intelligence), way back in the 1980s, was the first machine intelligence project working with music that I engaged with on a more thorough level. It didn’t use machine learning; it was a complex piece of rule-based software that analysed sheet music (MIDI files) and tried to output a new composition in the same style.
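To get a feel for what style imitation from existing material can mean in code, here is a toy sketch of my own – a first-order Markov chain over MIDI pitch numbers. It is not Cope’s actual EMI algorithm, just the simplest possible illustration of deriving a ‘new’ melody from the statistics of an existing one.

```ts
type Pitch = number; // MIDI note number, e.g. 60 = middle C

// Build a transition table: which pitches tend to follow which in the source melody.
function buildTransitions(melody: Pitch[]): Map<Pitch, Pitch[]> {
  const table = new Map<Pitch, Pitch[]>();
  for (let i = 0; i < melody.length - 1; i++) {
    const next = table.get(melody[i]) ?? [];
    next.push(melody[i + 1]);
    table.set(melody[i], next);
  }
  return table;
}

// Walk the table to generate a new melody "in the style of" the source.
function generate(table: Map<Pitch, Pitch[]>, start: Pitch, length: number): Pitch[] {
  const out: Pitch[] = [start];
  let current = start;
  for (let i = 1; i < length; i++) {
    const options = table.get(current);
    if (!options || options.length === 0) break; // dead end: no known continuation
    current = options[Math.floor(Math.random() * options.length)];
    out.push(current);
  }
  return out;
}

// Example: a C-major fragment in, sixteen generated notes out.
const source = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72];
console.log(generate(buildTransitions(source), 60, 16));
```

EMI’s rule base was of course vastly more elaborate – Cope’s system segmented existing scores and recombined the fragments under stylistic constraints – but the basic move of analysing existing music and recombining it plausibly is the same.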

Recent developments in ML and ANNs have created opportunities for machines to learn such styles from data themselves, instead of relying on hand-written rules.

I beta-tested AIVA, a recent commercial product aimed at composers: a compositional aid in which the A.I. system generates a basis – melody, harmony and some structure – which a human can then edit into a more polished composition.

OpenAI has recently (April 2019) published some results of a project called MuseNet – “a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles”. The examples are quite amazing to listen to, but I’m fairly sure they are cherry-picked from the top. There’s an online demo, but no code that one could try to use.

MuseNet points to the Magenta project for anyone who wants to explore AI-generated music using so-called ‘transformers’, the novel neural-network approach to machine learning that MuseNet itself uses. Magenta is a Google-backed, Tensorflow-based “open source research project exploring the role of machine learning as a tool in the creative process”. They have released open source tools in Python and JavaScript, and there’s lots of experimentation in the wild using Magenta.js, plugins and web applications.
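As an idea of how little code it takes to start playing with this, here is a minimal sketch using the @magenta/music JavaScript package and its publicly hosted basic_rnn checkpoint – a melody RNN, so not one of the transformer models mentioned above; the seed melody and parameters are just placeholders of my own.

```ts
// Minimal sketch: continue a two-note seed melody with Magenta's MusicRNN.
// Assumes a browser page that loads the @magenta/music package (TensorFlow.js).
import * as mm from '@magenta/music';

const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn';

async function continueMelody() {
  const model = new mm.MusicRNN(CHECKPOINT);
  await model.initialize();

  // Magenta works with quantized NoteSequences; here a two-note C–E seed.
  const seed = mm.sequences.quantizeNoteSequence(
    {
      notes: [
        { pitch: 60, startTime: 0.0, endTime: 0.5 },
        { pitch: 64, startTime: 0.5, endTime: 1.0 },
      ],
      tempos: [{ time: 0, qpm: 120 }],
      totalTime: 1.0,
    },
    4 // steps per quarter note
  );

  // Ask the model for 32 more steps at temperature 1.0 (higher = more random).
  const continuation = await model.continueSequence(seed, 32, 1.0);
  console.log(continuation.notes);
}

continueMelody();
```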

questions – story

I feel that this specific art project I’m working on, the one dealing with AI, should be telling a story. Not a grand narrative, but a story with important questions at its core.

The questions thus far are:

  • can AI help us understand what human creativity is?
  • what kind of use of AI is ethical towards people, animals, plants and the planet?
  • more generally, can AI help us understand what it is to be human?
  • how (and why) are we afraid of AI?

I think that the project I have developed so far, titled “INTELLIGENCE IS WHATEVER MACHINES HAVEN’T DONE YET”, has reached a (first) basic phase but needs to be upgraded.

More to come.