AI is making its impact felt everywhere, and that includes the music industry.
Among all the discussion of AI and its impact on our personal, business, and artistic pursuits, there is one thread that feels especially familiar: the race for best practices, regulated systems, and monetization to keep up with a technology that’s evolving in real time.
In music, the last time we saw anything with a comparable impact was the rise of rampant MP3 file sharing in the late 1990s. Consumers were tired of paying $18 for a CD containing one or two songs they liked… and new technology enabled what amounted to artistic theft on a massive scale. Years of lawsuits and hand-wringing followed. Legitimate music streaming services emerged and evolved.
Well, here we go again, in many ways. The issue is different, but many of the themes are the same: a populace with access to generative AI tools — some of whom use them to exploit art, others to empower it — and an industry working to craft laws, or at least good-faith agreements, that protect and compensate artists.
Some of the recent music-AI items we’re watching:
Will Universal & Google Take it Full Circle?
Back in April, when TikTok user @ghostwriter977 went viral with a track featuring AI-generated vocals emulating Drake and The Weeknd, Universal — the parent label for both actual artists — issued takedowns and sent a letter to streaming platforms urging them to block developers from using artist IP (their recordings, voices, etc.) to train AI technology.
The argument is that even if a musical composition itself is original and doesn’t break the letter of current copyright law, misappropriating a singer’s distinctive voice should be considered an infringement on an artist’s protected right of publicity. US copyright law does not protect a performer’s voice, tone or unique singing style. However, misappropriation can be argued, and in the case of human-generated so-called “soundalike” recordings, has been argued successfully in the past.
Jump to August 2023, when it was reported that Universal and Google were in early talks to allow users to create “deepfake” songs using generative AI under a model that would compensate the actual artists.
Will this platform develop and become publicly available, and if so, when? What will the monetization model look like for artists? Which artists will opt in? All great questions… without any answers as yet.
What Are Artists & Songwriters Doing?
There is no one default position for creatives when it comes to AI. At least some artists see the potential benefits of generative AI, when employed within certain parameters.
Certain songwriters, for instance, are using AI to add generated artist vocals to demo recordings when pitching their compositions to the artist in question. They posit: rather than sing the lyrics themselves or hire a singer whose voice and tone resemble the target artist’s, why not use AI to demonstrate more or less exactly how the artist would sound on the track?
Shortly after the ghostwriter977 track exploded, Grimes encouraged fans to create songs using an AI-generated version of her voice (she did subsequently clarify that she may issue “copyright takedowns” for any songs with especially “toxic” lyrics).
In June, Paul McCartney said AI had been used to extricate John Lennon’s voice from an old demo so that it could be added to what McCartney referred to as an upcoming “final” Beatles record.
No one can say how Lennon himself would feel about this, of course, but the use of AI to isolate, remove or alter certain components of preexisting tracks has also driven solutions that do indeed empower art while rewarding creatives.
What Will Label Moves and Regulation Bring?
Even before the reported Google talks, Universal Music announced a deal with sound wellness company Endel in May that enables Universal artists to use AI to create music soundscapes for sleep and meditation, and to be paid for them. In June, Sony Music hired its first-ever Executive VP of Artificial Intelligence. Warner Music Group head Robert Kyncl has said that “When it comes to generative AI, it needs to be put in proper context. Framing it only as a threat is inaccurate.” The EU is developing what is set to be the world’s first comprehensive AI regulatory law.
In other words: watch this space. As AI technology and user implementation thereof continue to evolve, so too will regulation, deal structures and, most interestingly, art itself. Some may employ AI as a tool to great artistic effect (consider that sampling was once a new technology, and one that still generates legal debate) and some may forswear it for the sake of wholly human songcraft. What those on the artistic side of the discussion are sure to agree on is that it would be nice to know the difference between the two.
Jake Terrell is VP of Music & Brand Partnerships at BENlabs and offers a unique music industry insider perspective. He leads the BENlabs teams behind powerful, bespoke music artist brand integrations, as well as commercial music clearances, music video placements, and custom content campaigns with recording artists and composers.