The current state of AI in music production
One thing’s for sure: Artificial Intelligence, or AI for short, has a significant impact on today’s industries. Take a closer look at healthcare, finance, management, education, transportation, manufacturing, logistics and analytics, and you’ll find plenty of evidence of AI playing an essential role in the future of these industries. And while Google and Amazon brought AI into our homes, the creative industry – and more specifically the music industry – has been experimenting with AI for some time, reaching a fascinating point. What is the impact of AI on music, how can you apply it in your own productions, and what will it bring in the future? Let’s have a closer look.
What do we mean by AI/machine learning?
At its core, artificial intelligence (AI) describes a machine’s ability to make decisions based on logic. A more elaborate definition comes from Wikipedia, which describes it as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. Another term often used in the AI context is Machine Learning (ML), a subdiscipline of computer science and a branch of AI whose goal is to develop techniques that allow computers to learn. AI incorporates both that learning and the related tasks and decision-making. Referring to our friends at Wikipedia again: the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem-solving”. So there you have it: AI involves learning, problem-solving and mimicking the human mind. You could say that makes these machines ‘almost’ human. But not quite.
AI in music and audio production
When you mention AI and audio, the first thing that comes to most people’s minds is speech recognition and language processing. Siri on the iPhone is just one of the systems that can understand you, all based on AI. Every day, more systems are created that can transcribe human language, reaching hundreds of thousands of people through interactive voice-response systems and mobile apps. Companies like Omega, founded by the legendary performer and true AI evangelist will.i.am, are becoming significant players in this field. But AI is developing fast in music production as well.
It may be surprising to the casual listener, but according to several estimates, between 20% and 30% of top 40 singles will be written partially or entirely with machine-learning software within the next decade, meaning we are no longer in the baby-steps phase. The latest developments in AI are already fueling its use in popular music.
“I use it as a basis for improvisation, which might be enough to send me off to writing a song. It’s kind of like a technical dream in its own way, it will give me access to areas that I wouldn’t be thinking about otherwise.” – David Bowie
But it’s not an entirely new idea. More than two decades ago, David Bowie helped create the Verbasizer, a program for Apple’s Mac that randomised portions of the sentences he fed it to create new ones with new meanings and moods. It was basically an advanced version of a ‘cut-up’ technique he had long used: writing out ideas, then physically slicing and rearranging them to see what stuck. In the video below he gives a brief explanation of the tool.
Bowie’s experiment focused on songwriting and lyrics, but having machines write the music itself is not new either. In the 1950s, the composer Lejaren Hiller used a computer to produce the “Illiac” Suite for string quartet, the first musical score for traditional instruments made through computer-assisted composition. According to Hiller, an American researcher and chemist with a keen interest in music, music could be defined as a sensible form governed by laws of organisation that can be encoded quite accurately; approached from that mathematical point of view, the computer is particularly well suited to composing musical works. The results were quite stunning. Since then, artificial intelligence systems in music have become increasingly sophisticated.
One of the most widely publicised AI-composed music projects happened only two years ago. It started with YouTuber Taryn Southern getting slightly frustrated with finding background music for her YouTube videos. As an early YouTube adopter (her videos include popular covers, behind-the-scenes videos and parody songs), Southern was used to experimenting with new technology. The ‘problem’ was that she either had to write the songs herself, meaning writing a song every week, or pay to license other people’s work, something that quickly became “very, very expensive“. This might stir up the emotions of some music purists, but when you need a simple tune or song as a background for a YouTube video, computer-generated music can be a fast solution.
After reading an article in The New York Times about the rise of AI in music and the number of start-ups immersed in it, she realised this might hold an answer to her frustration. Fast forward: after playing with the software and platforms, she soon realised that the music coming out of these AI systems was not just a fast and easy solution for her weekly videos; it was better than expected, helping her explore other parts of her musical endeavours. Soon she became one of the first artists to release songs composed with the help of AI, putting out her first AI album, I AM AI, in September last year. Taken from her website: “I AM AI is the first album by a solo artist composed and produced with artificial intelligence. The songs explore the future of humans and machines.” It’s important to note one word in that sentence: ‘with’ and not ‘by’ artificial intelligence.
When she was asked in an interview if we are anywhere near an AI no.1 track, she responded: ‘Absolutely! However, you have to define an AI no.1 track. Is it sung by AI, written by AI, produced by AI, or is it all of that?’ Those questions are very relevant in the AI discussion. Southern’s way of working incorporates a lot of AI, but her own creativity still plays a significant role in the process, as you can read in her interview with The Verge.
The bottom line is that most AI music systems are really good at composing and producing instrumentation, but they don’t yet understand song structure. That part remains a human element.
Here is a YouTube video of her second release (Feb 2018), Life Support, complete with a virtual reality video for which she received a grant from the YouTube VR Creator Lab:
Taryn is not the only musician trying to make pop music with AI. One of the first serious efforts was made using a system developed by Sony Computer Science Laboratories in Paris, called Flow Machines. The song, Daddy’s Car, released in September 2016, was written in the style of The Beatles. The system is one of the most advanced musical AIs; it is even capable of chopping up vocals and fitting them to its melody, as you can hear in the video below, and the result is surprisingly catchy despite making no sense whatsoever. That’s because the system does not know language… yet.
The team behind the song decided to take it to the next level by working on an album and collaborating with artists. It was released under the name SKYGGE (Danish for shadow) and contains sci-fi and Europop tunes, featuring collaborations with the likes of Stromae and Kiesza. It’s produced by Benoît Carré, who has written songs for many different artists such as Johnny Hallyday and Françoise Hardy. When asked if a computer can really write original Europop songs, he responded: “Flow Machines does indeed write original melodies. It also suggests the chords and sounds to play them with. But a human is always needed to stitch the songs together, give them structure and emotion. Without people, its songs would be a bit rubbish.” He adds: “The philosophy of the project is to develop a creative tool to help artists.” This sounds very much like the way Taryn works with AI in her music: it’s composed with the help of AI, turning a creator’s intention into music.
What about other tools like plugins?
For that question, we reached out to our very own Head of Education, Robin Reumers, who is also a director, plugin developer and business developer at Leapwing Audio, a boutique plugin company.
Robin, what are the challenges in AI when it comes to plugin development? “Difficulties with real-time audio rendering” is his immediate response. “The thing is that AI or Machine Learning is excellent when you give it data and let it find patterns or solutions to problems. So one could say if you throw a whole song at it, it can be very good at analysing it and changing elements of the song. But when you’re listening to elements in real time, there is a restriction in the fact that plugins in a DAW only see the buffers they’re being fed. They can’t ‘know’ that they heard the whole song and then auto-adjust their settings; the plugin format is built on a design philosophy of real-time, human interaction.”
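To make that buffer restriction concrete, here is a minimal, purely illustrative Python sketch (not Leapwing’s code; the class name, buffer size and “gain rider” behaviour are hypothetical) of a plugin-style processor that can only react to the audio it has already been fed, block by block:

```python
# Illustrative sketch: why a real-time plugin can't "see" the whole song.
# The host hands the plugin one small buffer at a time, so any adaptive
# decision can only use what has streamed past so far.
import numpy as np

BUFFER_SIZE = 512  # samples per block, a typical host setting (assumption)

class RealtimeGainRider:
    """Hypothetical adaptive gain plugin: it keeps a running peak estimate,
    because the full-song peak is unknowable while audio is still streaming."""
    def __init__(self, target_peak=0.5):
        self.target_peak = target_peak
        self.running_peak = 1e-9  # best guess so far, updated block by block

    def process(self, buffer: np.ndarray) -> np.ndarray:
        self.running_peak = max(self.running_peak, float(np.abs(buffer).max()))
        gain = self.target_peak / self.running_peak
        return buffer * gain  # early blocks are processed with an incomplete picture

# Simulate a host feeding blocks from a "song" (2 seconds of noise at 44.1 kHz)
song = np.random.uniform(-0.8, 0.8, 88200)
plugin = RealtimeGainRider()
out = np.concatenate([plugin.process(song[i:i + BUFFER_SIZE])
                      for i in range(0, len(song), BUFFER_SIZE)])
```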
Reumers continues: “What we do see happen a lot in plugin development is that machine learning is used in the R&D phase to find the optimal way of solving a problem. That then typically gets converted into an algorithm.”
Can you give some examples? “Innovative plugin companies like iZotope use AI, but generally in their offline processing (noise reduction, reverb, …). They do this by loading the whole song into their GUI, which can then work on the song or material all at once. This has many benefits over real-time processing.”
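For contrast with the real-time sketch above, here is a hedged sketch of the offline approach: load the entire file first, analyse it globally, and only then render. The function name is made up for illustration, and it assumes the common ‘soundfile’ Python package; it is not iZotope’s API.

```python
# Offline processing sketch: global analysis is possible because the
# whole song is available before any processing decision is made.
import numpy as np
import soundfile as sf  # assumes the 'soundfile' package is installed

def normalize_offline(in_path: str, out_path: str, target_peak: float = 0.5):
    audio, sr = sf.read(in_path)        # pass 1: the whole song is in memory
    true_peak = float(np.abs(audio).max())  # global analysis over all samples
    sf.write(out_path, audio * (target_peak / true_peak), sr)  # pass 2: render

# Example usage (hypothetical file names):
# normalize_offline("mix.wav", "mix_normalized.wav")
```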
Where will it go? Will we be out of a job soon?
“IT’S CHEATING.” That has always been the response from self-proclaimed music purists when talking about technological innovation in song creation and music production. Sampling, synthesisers, drum machines, DAWs, Auto-Tune: they have all been criticised as lazy ways to make chart-topping hits by taking away the human element. Even though many of these innovations were disruptive in their early stages, they were eventually incorporated as new tools within the production process, opening up different levels of musical expression. If you see AI as an extension of your creative process, it can produce interesting material that you, as a creative, can build upon.
Quoting Microsoft, another major player in the field of AI: “We believe that, when designed with people at the centre, AI can extend your capabilities, free you up for more creative and strategic endeavours, and help you or your organisation achieve more.”
Reumers’ view on the future: “We do expect to see more AI and machine learning in things like sound design and creation, which is a different story. An AI might suggest a certain timbre of sound based on the type of song, to speed up songwriting. You play one chord, and it gives suggestions on how to finish the song.” “The one thing about AI / machine learning is that it is very good at being predictable, but music often isn’t. So fortunately for all of us, creatives won’t be out of a job any time soon.”
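To give a feel for that “play one chord, get suggestions” idea, here is a deliberately tiny Python sketch. The transition table is hand-written purely for illustration; a real system would learn these probabilities from a large corpus of songs rather than from a hard-coded dictionary.

```python
# Toy chord-suggestion sketch: given one starting chord, extend it into a
# short progression by sampling from hand-written transition weights.
import random

# Hypothetical transition table: chord -> (next chord, weight) pairs
TRANSITIONS = {
    "C":  [("G", 0.4), ("Am", 0.3), ("F", 0.3)],
    "G":  [("C", 0.5), ("Am", 0.3), ("Em", 0.2)],
    "Am": [("F", 0.5), ("C", 0.3), ("G", 0.2)],
    "F":  [("C", 0.5), ("G", 0.5)],
    "Em": [("Am", 0.6), ("F", 0.4)],
}

def suggest_progression(start_chord: str, length: int = 4) -> list:
    """Extend a single starting chord into a short suggested progression."""
    progression = [start_chord]
    for _ in range(length - 1):
        options = TRANSITIONS.get(progression[-1], [("C", 1.0)])
        chords, weights = zip(*options)
        progression.append(random.choices(chords, weights=weights)[0])
    return progression

print(suggest_progression("C"))  # e.g. ['C', 'Am', 'F', 'C']
```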
It’s exciting to keep an eye on the use and development of AI in music production to get a glimpse of what the future might look like, including topics like writing credits, copyright and royalties, because that is a whole new ball game (and article). And with people like François Pachet, creator of the Flow Machines system and now director of Spotify’s Creator Technology Research Lab, we can only guess what AI will bring to the future of music.
Have you tried AI in your music?
So what about you? What is your view on this? Have you ever experimented with AI tools in your music? Why not give open-source tools like Magenta (by Google) or platforms like Amper or Jukedeck a try and see what they bring? We look forward to hearing your fresh new input!
References:
Is music about to have its first AI No.1?
A.I. Songwriting Has Arrived. Don’t Panic