Coming to the studio (after kinda running away from it all in a cafe – so burdensome are the timeline, the deadline and all the delayed work!) I discover there is so much more online (and offline) about “music and A.I.” I’m overwhelmed. I will post some links here as a way to bookmark stuff.
I presume I will need to write a report at the end of my stipend, and that’s actually a wonderful excuse to go back to these logs and see what was going on daily. I really hope I will come back to it in a couple of months.
links for 190609
MAGENTA >> An open source research project exploring the role of machine learning as a tool in the creative process. — Magenta is distributed as an open source Python library, powered by TensorFlow. This library includes utilities for manipulating source data (primarily music and images), using this data to train machine learning models, and finally generating new content from these models.
Anna Huang @huangcza >> Google AI Resident on Magenta, working on generative models for music. Previously ML, HCI and music composition at MILA, Harvard, MIT Media Lab and USC
Music Transformer: Generating Music with Long-Term Structure >> post @ magenta.tensorflow.org
Music Professor Deirdre Loughridge Accepted into Institute for Advanced Study to Research Human Versus Machine-Thinking in Music + profile at northeastern.edu (Boston) + her WordPress + twitter
Professor Loughridge taught a course last fall that introduced her students to this breadth of machine applications to music, and the evolution of their reception over time. Many of the course concepts exemplify what she will be researching at the IAS.
The class, entitled Sounding Human, explored how people have used music to answer the question of what it means to be human and how boundaries between the human and non-human (animal, machine, alien, etc.) have been defined musically. Students were challenged to engage with the ethical quandaries that inevitably arise when examining the line between human and non-human.
One course assignment that explored these ethical implications was a creative writing assignment based on Space Opera, a science fiction book by Catherynne M. Valente. The book depicts a scenario where humankind from planet Earth must compete in an intergalactic musical contest which serves as a kind of test of sentience, and is used by the alien hosts to determine the fate of the human species. After reading the book, the students in Professor Loughridge’s class engaged through their own fictional scenarios with questions such as: what does it mean to be human, who gets to count as part of our human community, what rights are reserved for various community members, and how can music – for better or worse – sway these sorts of judgments?
Christine M. Payne // @openai MuseNet researcher, pianist
The artificial-intelligence industry is often compared to the oil industry: once mined and refined, data, like oil, can be a highly lucrative commodity. Now it seems the metaphor may extend even further. Like its fossil-fuel counterpart, the process of deep learning has an outsize environmental impact.
“Training a single AI model can emit as much carbon as five cars in their lifetimes: Deep learning has a terrible carbon footprint,” by Karen Hao, MIT Technology Review, Jun 6, 2019
In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).
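A quick back-of-the-envelope check of that comparison. The underlying study (Strubell et al., UMass Amherst) reportedly used roughly 126,000 lbs CO2e as the lifetime footprint of an average American car, fuel and manufacture included; treat that baseline as my assumption here, not a figure from the article excerpt itself.

```python
# Rough sanity check of the "nearly five cars" claim.
# 126,000 lbs CO2e per car lifetime is an assumed baseline (Strubell et al.),
# not a number quoted in the excerpt above.
MODEL_TRAINING_LBS_CO2E = 626_000   # large Transformer training run, per the article
CAR_LIFETIME_LBS_CO2E = 126_000     # avg. American car, incl. manufacture (assumption)

ratio = MODEL_TRAINING_LBS_CO2E / CAR_LIFETIME_LBS_CO2E
print(f"one training run ~= {ratio:.1f} car lifetimes")
```

With these figures the ratio comes out just under five, which matches the “nearly five times” phrasing in the article.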