Will AI Make the Music of the Future? – Part 2

March 16, 2020 - 7 minute read

Is artificial intelligence (AI) taking over everything audio-related?

Welcome back to our short series on AI’s role in the future of music! In our first entry, we delved into how AI is already impacting the current sonic landscape. We also explored AI tools that allow anyone to create music scores or provide an adaptable soundscape to your day. In case you missed it, check it out here.

For this second and final post, we’ll take a closer look at how musicians and songwriters are pushing their art forward with AI.

YACHT Is on Board With AI

Many people worry that AI could “flatten” the musical landscape and make every popular song sound generic. The fear that major record labels will use algorithms to churn out simplistic, functional ditties and funnel them into our ears is a real one.

But Claire Evans, lead singer of Los Angeles-based electropop duo YACHT, has a different perspective on the matter: She thinks that sort of heartless optimization already occurs in the music industry. “That algorithm exists and it’s called Dr. Luke,” Evans explains. She’s referring to Lukasz Gottwald, an American record producer who has used specific formulas to create massive pop hits for a variety of artists, including Britney Spears, Rihanna, Nicki Minaj, and Kesha.

Instead of viewing AI with pessimism, forward-thinking musicians actually have the opportunity to fight against this dystopian flattened soundscape; they can use the technology to venture into new creative territory. This is exactly what YACHT did for their newest album, Chain Tripping.

The duo applied machine learning (ML) to their songwriting process. After they trained an ML system on their entire musical catalog, it output hours of new tunes. The band gleaned the most intriguing pieces from these results and stitched them together into coherent songs. Evans admits that learning the new music was challenging and time-consuming — mainly because the chord changes and riffs deviated from their usual instincts.

“AI forced us to come up against patterns that have no relationship to comfort. It gave us the skills to break out of our own habits,” Evans explains. And it seems like the hard work paid off. Chain Tripping garnered the band their first-ever Grammy nomination.
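To make that workflow a bit more concrete: at its core, it’s a loop of training a generative model on existing material, sampling lots of new material, and curating the results. The toy Python sketch below illustrates the idea with a simple first-order Markov chain over note names — an assumption-laden stand-in for demonstration only, not YACHT’s actual tooling, which relied on far more capable ML models and their own catalog.

```python
import random
from collections import defaultdict

# Toy corpus standing in for a band's back catalog: melodies as note-name sequences.
# (Purely illustrative data -- not real songs.)
catalog = [
    ["C4", "E4", "G4", "E4", "C4", "D4", "E4", "G4"],
    ["A3", "C4", "E4", "D4", "C4", "A3", "G3", "A3"],
    ["E4", "G4", "B4", "A4", "G4", "E4", "D4", "E4"],
]

# "Training": count which note tends to follow which.
transitions = defaultdict(list)
for melody in catalog:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start, length=16, seed=None):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    note, melody = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:  # dead end: restart from a random known note
            note = rng.choice(list(transitions))
        else:
            note = rng.choice(choices)
        melody.append(note)
    return melody

if __name__ == "__main__":
    # Generate a batch of candidates; a band would then curate the interesting ones.
    for i in range(3):
        print(" ".join(generate("C4", seed=i)))
```

The curation step is where the humans stay in the loop: the model proposes far more material than anyone would keep, and the artists pick out and rework the fragments worth turning into songs.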

AI & Audio Experimentation Around the World

Across the globe, AI is making a unique impact on many musicians’ work. For instance, Ash Koosha, a British-Iranian composer and tech entrepreneur, actually created an AI pop star. Named Yona, it writes music via generative software. Admittedly, many of Yona’s lyrics are nonsensical. But some of them are also surprisingly insightful and emotional.

Koosha thinks this already pushes past boundaries for most humans: “Being so blunt and so open — so emotionally naked — is not something most humans can do. I wouldn’t be able to be that honest unless something triggers me.”

Another pertinent example of AI’s effect on audio experimentation is Berlin-based duo Dadabots. The pair utilizes neural networks to generate 24/7 live streams of music spanning genres such as death metal, free jazz, and skate punk. They’re currently in the middle of a residency in which they’re creating new AI tools.

Dadabots co-founder CJ Carr sees AI development as a trainer that can help musicians improve their craft. And part of the fun comes from seeing what AI cooks up. Carr says, “I want to see expressions and emotions and sounds that have never existed before.”

Besides creating the future of music, AI can also reinvent its past. Last summer, a mutated version of English singer and songwriter Jai Paul’s popular track “Jasmine” appeared online. Initially, it sounds the same as the original. But it quickly morphs into an infinite, spontaneous jam. This version was generated via AI by London-based development company Bronze.

Behind Bronze are scientist Mick Grierson and musicians Gwilym Gold and Lexx. “We wanted a system for people to listen to music in the same state it existed in our hands — as a constant, evolving form,” Gold explained in a recent interview with TIME. Bronze’s ability to capture the ephemerality of live music intrigued Venezuelan record producer Arca, who has worked on critically acclaimed albums like Kanye West’s Yeezus and Björk’s Vulnicura.

Arca and Bronze’s team used AI to collaborate on an art piece by French artist Philippe Parreno. It currently resides in the lobby of New York’s Museum of Modern Art. The music transforms and the speakers swivel according to factors like crowd density and temperature, meaning no two minutes are the same. Arca already has a few other ideas she’d like to implement with Bronze’s technology. She thinks “it opens up a world of possibilities.”

A New Era of Audio’s Just Beginning

Despite all of these musical developments with AI, many still worry that the technology will eventually replace musicians. Koosha says this sort of fear has accompanied every major technological advancement in recent decades. While some musicians may have experienced displacement, the new developments have always ushered in an era in which the barrier to entry for making impactful art was lowered.

We’re still very much in the early days of AI music experimentation, and there’s no telling what the future holds. But optimistic innovators like Carr can’t wait to see what unfolds: “I want to see 14-year-old bedroom producers inventing music that I can’t even imagine.”

Thanks for tuning in to our series on AI’s effect on music! Where do you think the technology will go from here? Do you think more artists will utilize it in their work? Let us know your thoughts in the comments below!
