Whether it’s potential job loss or the Terminator becoming a reality, there’s no shortage of fears surrounding the development of artificial intelligence (AI). But last November on Sean Carroll’s Mindscape podcast, the musician Grimes took things in a different direction regarding AI’s capabilities: “I feel like we’re in the end of art, human art. Once there’s actually AGI (Artificial General Intelligence), they’re gonna be so much better at making art than us.”
Is Grimes right? Are we on the cusp of an audio revolution where AI is both the main composer and frontman?
A New Era of Creativity?
Not long after Grimes’ comments, numerous artists took to Twitter and other social media platforms to voice their own opinions on the matter. While many lambasted the musician’s comments, some offered a different perspective on the subject: Maybe AI won’t end human art; maybe it will augment it.
There’s a substantial amount of evidence to back up this argument. The past few years have seen several artists (Toro y Moi, Holly Herndon, and Arca, to name just a few) incorporating AI into their work to give it a fresh direction. Across the world right now, researchers and musicians are working on developing AI tools to make the technology more accessible to creatives.
Copyright complexities and other obstacles still need to be worked out. But many of these musicians working with AI hope that one day, it will not only be a democratizing force in the industry but an essential part of artistic endeavors. And for some of these people, this work is proving that the sky is still the limit.
“It’s provided me a sense of relief and excitement that not everything has been done — that there’s a wide-open horizon of possibility,” Arca told TIME in a recent interview. She’s a music producer who has worked with the likes of Björk and Kanye West on some of their most innovative albums.
The Relationship Between AI and Music
But as fresh as all of this feels to some artists, the relationship between AI and music goes back quite a few decades. In 1951, Alan Turing built a machine that could generate three melodies. And in the 90s, David Bowie employed a digital lyric randomizer for inspiration.
Around this time, music theory professor David Cope was also training a computer program to compose new music in the style of Johann Sebastian Bach. When put to the test, an audience couldn’t differentiate between original Bach pieces and the imitations.
Of course, combining AI and music has come a long way since then. University research teams, major tech investments, and machine learning conferences such as NeurIPS have all played a part in this rapid advancement. And it has culminated in some unprecedented possibilities.
AI music innovator Francois Pachet released Hello, World in 2018, the first pop album composed with AI. And in 2019, singer-songwriter Holly Herndon harmonized with an AI version of herself on her critically acclaimed album Proto.

In spite of these achievements, many still believe that we’re far away from a hit song completely crafted by AI. “AI is simply not good enough to create a song that you will listen to and be like, ‘I would rather listen to this than Drake,’” explains Oleg Stavitsky, CEO and co-founder of Endel, a sound environment-generating app.
While AI hasn’t topped the pop charts, it is making significant headway in other areas.
AI Tools to Meet New Audio Demands
The explosion in popularity of streaming and social media platforms has caused the number of content creators to balloon in recent years. As a result, more music is needed than ever before. This problem became readily apparent early last decade.
While working on musical scores for films like The Dark Knight, composers Michael Hobe, Drew Silverstein, and Sam Estes were flooded with requests for background music for an array of content like video games, TV shows, and more. To complicate matters, many of these clients could not afford original music and didn’t have time to make it themselves. And they certainly didn’t want to depend on stock music.
The trio of composers turned to AI to see how it could help. Eventually, they created Amper, an AI composition tool that lets anyone create new music by specifying parameters such as genre and tempo. The NYC- and Los Angeles-based company quickly became a smash hit; its music is now used in commercials, podcasts, and many other types of content.
On the other end of the spectrum, Berlin-based Endel is providing personalized soundscapes, another modern need. The concept of Endel came about when Stavitsky realized, “there’s no playlist or song that can adapt to the context of whatever’s happening around you.”
By accounting for real-time factors such as weather, the listener’s heart rate, and circadian rhythms, Endel generates music that can help you focus, relax, and sleep better. Stavitsky says that listeners have turned to Endel to combat problems like insomnia and ADHD. Last January, the app passed one million downloads.
Tune in to Our Follow-Up
AI may not have perfected a smash-hit pop song yet. But Amper and Endel help fulfill the functional and experimental demands of the modern music industry. And this is just the beginning of a new audio era.
Tune in to our follow-up post, where we’ll take a closer look at how musicians are pushing their art forward with AI!