AI music companies Suno and Udio have hired elite law firm Latham & Watkins to defend them against lawsuits filed by the three major labels in late June, according to court documents.
Filed by plaintiffs Sony Music, Warner Music Group (WMG) and Universal Music Group (UMG), the lawsuits claim that Suno and Udio have unlawfully copied the labels’ sound recordings to train their AI models to generate music that could “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”
Latham & Watkins has already played a key role in defending top companies in the field of artificial intelligence. This includes the firm’s work defending Anthropic against allegations of infringement levied by UMG, Concord Music Group and ABKCO last October. Latham also represents OpenAI in the lawsuits filed against it by authors and other rights owners, including the case brought by The New York Times and a case brought by comedian Sarah Silverman and other writers.
The Latham team is led by Andrew Gass, Steve Feldman, Sy Damle, Britt Lovejoy and Nate Taylor. Plaintiffs UMG, WMG and Sony Music are represented by Moez Kaba, Mariah Rivera, Alexander Perry and Robert Klieger of Hueston Hennigan as well as Daniel Cloherty of Cloherty & Steinberg.
It is common for AI companies to argue that training is protected by copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law — and it is likely this will become a core part of Latham’s defense of Suno and Udio’s practices. Though fair use has historically allowed for things like news reporting and parody, AI firms say it applies equally to the “intermediate” use of millions of works to build a machine that spits out entirely new creations.
So far, both Suno and Udio have declined to say whether they used unlicensed copyrighted material in their training datasets. However, the music industry began questioning what was in those datasets after a series of articles written by Ed Newton-Rex, founder of AI music safety nonprofit Fairly Trained, were published by Music Business Worldwide. In one of them, Newton-Rex said he was able to generate music from both Suno and Udio that “bears a striking resemblance to copyrighted music.”
The lawsuit cites circumstantial evidence to support the labels’ belief that their copyrighted material has been used by Suno and Udio in AI training. This includes generated songs by Suno and Udio that sound just like the voices of Bruce Springsteen, Lin-Manuel Miranda, Michael Jackson and ABBA; outputs that parrot the producer tags of Cash Money AP and Jason Derulo; and outputs that sound nearly identical to Mariah Carey’s “All I Want For Christmas Is You,” The Beach Boys’ “I Get Around,” ABBA’s “Dancing Queen,” The Temptations’ “My Girl,” Green Day’s “American Idiot” and more.
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Pharrell Williams and Louis Vuitton face a trademark lawsuit over “Pocket Socks”; Diplo is hit with a lawsuit claiming he distributed “revenge porn”; the Village People move forward with a lawsuit against Disney; a longtime attorney repping Britney Spears moves on; and much more.
Top stories this week…
SOCKED WITH A LAWSUIT – Pharrell Williams and Louis Vuitton were hit with a trademark lawsuit over their launch of a high-end line of “Pocket Socks,” a literal sock-with-a-pocket that launched at Paris Fashion Week last year and sells for the whopping price of $530. The case was filed by a California company called Pocket Socks Inc. that says it has been using that same name for more than a decade on a similar product.

AI FIRMS FIRE BACK – Suno and Udio, the two AI music startups sued by the major record labels last week over allegations that they stole copyrighted works on a mass scale to create their models, fired back with statements in their defense. Suno called its tech “transformative” and promised that it would only generate “completely new outputs”; Udio said it was “completely uninterested in reproducing content in our training set.”

REVENGE PORN CLAIMS – Diplo was sued by an unnamed former romantic partner who accused him of violating “revenge porn” laws by sharing sexually explicit videos and images of her without permission. The NYPD confirmed to Billboard that a criminal investigation into the alleged incident was also underway.

DISCO v. DISNEY – A California judge refused to dismiss a lawsuit filed by the Village People that claims the Walt Disney Co. has blackballed the legendary disco band from performing at Disney World. Disney had invoked California’s anti-SLAPP law and argued it had a free speech right to book whatever bands it chooses, but the judge ruled that the company had failed to show the issue was linked to the kind of “public conversation” protected under the statute.

WRIT ME BABY ONE MORE TIME – More than two years after Mathew Rosengart helped Britney Spears escape the longstanding legal conservatorship imposed by her father, the powerhouse litigator is no longer representing the pop star.
In a statement, the Greenberg Traurig attorney said he was shifting his focus to other clients: “It’s been an honor to serve as Britney’s litigator and primarily to work with her to achieve her goals.”

PHONY FEES? – SiriusXM was hit with a class action lawsuit claiming the company has earned billions in revenue by tacking a shady “U.S. Music Royalty Fee” onto consumers’ bills. The fee — allegedly 21.4% of the actual advertised price — represents a “deceptive pricing scheme whereby SiriusXM falsely advertises its music plans at lower prices than it actually charges,” the suit claims.

DIVORCE DRAMA – Amid an increasingly ugly divorce case, Billy Ray Cyrus filed a new response claiming that he had been abused physically, verbally and emotionally by his soon-to-be-ex-wife, Firerose. The filing came in response to allegations that it was Cyrus who had subjected Firerose to “psychological abuse” during their short-lived marriage.

UK ROYALTIES LAWSUIT – A group of British musicians filed a joint lawsuit against U.K. collecting society PRS, accusing the organization of a “lack of transparency” and “unreasonable” terms in how it licenses and administers live performance rights. The case, filed at London’s High Court, was brought by King Crimson’s Robert Fripp, as well as rock band The Jesus and Mary Chain and numerous other artists.
Last January, Olivia King sat at her dining room table and made a beat — in five minutes.
The Rhode Island-based pop/R&B artist doesn’t play instruments or use music-production software. Instead, she created her track with Overtune, a music-making app that allows users to combine beats and samples from a wide range of instruments and other sounds, write and record vocals, and otherwise use a simple smartphone interface to make music meant to soundtrack content on platforms like TikTok, Instagram and YouTube. Overtune was developed in Iceland and launched in 2020.
Now, King’s use of the app is helping expand Overtune’s applications beyond social platforms and into more traditional releases. After using Overtune to add her own vocals to her five-minute beat, she made a video of herself performing the song snippet, then posted it to TikTok as part of a brand deal with the app. The video started racking up views; it now has more than 10 million of them.
Capitalizing on this interest, King created an entire song based on her original minute-long TikTok. A steamy ballad called “Unfinished Business,” the two-minute, 18-second song was made entirely with Overtune beat packs and released last Friday (June 21). It marks the first release through Overtune’s new label service, which is centered on a partnership with SoundOn, the music distribution model launched by TikTok in 2022 in the U.S. and U.K.
Building SoundOn into Overtune “fits directly into the changing music industry,” says Overtune co-founder Jason Daði Guðjónsson. “Social media platforms like TikTok are at the forefront of that kind of transformation, and I think Overtune is perfectly positioned to help artists navigate the changing landscape by providing them with the tools to create and now also share and monetize their music.”
SoundOn is designed to help independent, emerging artists navigate TikTok, upload music, get paid for its use, market and promote themselves on the platform, and distribute their music to outside DSPs. Through the SoundOn integration, paying users can release Overtune-produced songs directly from the Overtune app, which offers a free tier along with a subscription priced at $9.99 a month. (The paid option also offers other features, like exclusive beat packs.)
“I’ve worked with probably every distributor under the sun, but never before with SoundOn,” says King. “I’m excited for it, because TikTok has changed the music industry.”
Overtune’s ability to produce music tailor-made for TikTok has attracted serious interest: the company has received $2 million in seed funding from Whynow media (founded by Mick Jagger’s son, Gabriel Jagger), along with investments from a group that includes Guitar Hero founder Charles Huang. Its advisory board includes former Sony Music UK head Nick Gatfield. And while using the app to make full-length songs is relatively new, in addition to King’s song, Overtune was used in the creation of “Framtíðin er hérna” (“The Future Is Here”), a song made for the National Broadcasting Station of Iceland’s 2023 New Year’s Eve show.
Overtune’s founders want to make music creation ultra-simple by providing thousands of different sounds that are organized by tempo and pitch for easy matching. (Some commenters were suspicious about whether King had actually made her beat in five minutes, so she made another video in which she recreated the process to prove it.) The app currently offers assistive AI that answers user questions and is developing other AI functions that are being trained on Overtune’s proprietary beat packs. Later this year, the company will also launch a function that lets users generate loops using written prompts.
Overtune recently added an AI function that lets users apply vocal filters mimicking the voices of artists from Snoop Dogg to Elvis, along with celebrities like Morgan Freeman and fictional characters like Marge Simpson. (This function will soon be replaced by AI voices developed in-house and designed to modify individual voices rather than replicate those of celebrities.)
“The beautiful thing about it,” Guðjónsson says of the app as it currently stands, “is that you don’t have to know anything about tech or music to be able to create songs.”
Overtune sounds aren’t copyrighted, so users can earn royalties from the music made on the app when it’s uploaded to TikTok and DSPs like Spotify and Apple Music. But Guðjónsson says Overtune users “gravitate toward TikTok” especially, making SoundOn “a natural addition to our offerings.”
The app also allows users to make music at TikTok’s unique pace. Artists can experiment with song snippets, then use SoundOn to put them on TikTok and test them with audiences before completing the song and releasing it on more traditional DSPs.
Making distribution easier is also just an extension of the company’s broader mission. “Becoming a musician is not supposed to be that difficult,” Guðjónsson says. “As it is today, you have to own a lot of expensive equipment and have a big presence to be noticed by the labels, but anyone can go through our services.”
For King, this ease is a major part of the app’s appeal.
“As an independent artist you have to be consistent, and the best way to be consistent is to be efficient,” she says. “With Overtune I can do a full demo on the app, then distribute through SoundOn, which makes life easier as an independent artist.”
On Monday (June 24), the three major music companies filed lawsuits against artificial intelligence (AI) music startups Suno and Udio, alleging the widespread infringement of copyrighted sound recordings “at an almost unimaginable scale.” Spearheaded by the RIAA, the two similar lawsuits arrived four days after Billboard first reported that the labels were seriously considering legal action against the two startups.
Filed by plaintiffs Sony Music, Warner Music Group and Universal Music Group, the lawsuits allege that Suno and Udio have unlawfully copied the labels’ sound recordings to train their AI models to generate music that could “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”
Hours later, Suno CEO Mikey Shulman responded to the lawsuit with a statement sent to Billboard. “Suno’s mission is to make it possible for everyone to make music,” he said. “Our technology is transformative; it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content. That is why we don’t allow user prompts that reference specific artists. We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.”
An RIAA spokesperson fired back at Shulman’s comment, saying: “Suno continues to dodge the basic question: what sound recordings have they illegally copied? In an apparent attempt to deceive working artists, rightsholders, and the media about its technology, Suno refuses to address the fact that its service has literally been caught on tape — as part of the evidence in this case — doing what Mr. Shulman says his company doesn’t do: memorizing and regurgitating the art made by humans. Winners of the streaming era worked cooperatively with artists and rightsholders to properly license music. The losers did exactly what Suno and Udio are doing now.”
Udio responded on Tuesday (June 25) with a lengthy statement posted to the company’s website. You can read it in full below.
In the past two years, AI has become a powerful tool for creative expression across many media – from text to images to film, and now music. At Udio, our mission is to empower artists of all kinds to create extraordinary music. In our young life as a company, we have sat in the studios of some of the world’s greatest musicians, workshopped lyrics with up-and-coming songwriters, and watched as millions of users created extraordinary new music, ranging from the funny to the profound.
We have heard from a talented musician who, after losing the ability to use his hands, is now making music again. Producers have sampled AI-generated tracks to create hit songs, like ‘BBL Drizzy’, and everyday music-lovers have used the technology to express the gamut of human emotions from love to sorrow to joy. Groundbreaking technologies entail change and uncertainty. Let us offer some insight into how our technology works.
Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has “listened” to and learned from a large collection of recorded music.
The goal of model training is to develop an understanding of musical ideas — the basic building blocks of musical expression that are owned by no one. Our system is explicitly designed to create music reflecting new musical ideas. We are completely uninterested in reproducing content in our training set, and in fact, have implemented and continue to refine state-of-the-art filters to ensure our model does not reproduce copyrighted works or artists’ voices.
We stand behind our technology and believe that generative AI will become a mainstay of modern society.
Virtually every new technological development in music has initially been greeted with apprehension, but has ultimately proven to be a boon for artists, record companies, music publishers, technologists, and the public at large. Synthesizers, drum machines, digital recording technology, and the sound recording itself are all examples of once-controversial music creation tools that were feared in their early days. Yet each of these innovations ultimately expanded music as an art and as a business, leading to entirely new genres of music and billions of dollars in the pockets of artists, songwriters and the record labels and music publishers who profit from their creations.
We know that many musicians — especially the next generation — are eager to use AI in their creative workflows. In the near future, artists will compose music alongside their fans, amateur musicians will create entirely new musical genres, and talented creators — regardless of means — will be able to scale the heights of the music industry.
The future of music will see more creative expression than ever before. Let us use this watershed moment in technology to expand the circle of creators, empower artists, and celebrate human creativity.
A few weeks back, a member of the team at my company, Ircam Amplify, signed up for one of the many AI music generators available online and entered a brief prompt for a song. Within minutes, a new track was generated and promptly uploaded to a distribution platform. In just a couple of hours, that song, in which no human creativity played a part, was available on various streaming platforms. We diligently took action to remove the track from all of them, but the experiment highlighted a significant point.
It is now that simple! My aim here is not to pass judgment on whether AI-generated music is a good or a bad thing — from that perspective, we are neutral — but we think it is important to emphasize that, while the process is easy and cost-effective, there are absolutely no safeguards currently in place to ensure that consumers know if the music they are listening to is AI-generated. Consequently, they cannot make an informed choice about whether they want to listen to such music.
With AI-generated songs inundating digital platforms, streaming services require vast technological resources to manage the volume of tracks, diverting attention away from the promotion of music created by “human” artists and diluting the royalty pool.
Like it or not, AI is here to stay, and more and more songs will find their way onto streaming platforms given how quick and easy the process is. We already know that there are AI-generated music “farms” flooding streaming platforms; over 25 million tracks were recently removed by Deezer, and it is reasonable to speculate that a significant proportion of these were AI-generated.
In the interest of transparency, consumers surely deserve to know whether the music they are tuning into is the genuine product of human creativity or derived from computer algorithms. But how can AI-generated tracks be easily distinguished? Solutions already exist. At Ircam Amplify, we offer a series of audio tools, from spatial audio to vocal separation, covering the full audio supply chain. One of the latest technologies we have launched is a detector of AI-generated music, designed to help rights holders as well as platforms identify tracks that are AI-generated. Through a series of benchmarks, we have been able to determine the “fingerprints” of AI models and apply them to their output to identify tracks coming from AI-music factories.
The purpose of any solution should be to support the whole music ecosystem by providing a technical answer to a real problem while contributing to a more fluid and transparent digital music market.
Discussions around transparency and AI are gaining traction all around the world. From Tokyo to Washington, D.C., from Brussels to London, policymakers are considering new legislation that would require platforms to identify AI-generated content. That is the second recommendation in the recent report “Artificial Intelligence and the Music Industry — Master or Servant?” published by the UK Parliament.
Consumers are also demanding it. A recent survey of more than 2,000 people, conducted by Whitestone Insight for UK Music, found that more than four out of five respondents (83%) agree that if AI technology has been used to create a song, it should be clearly labeled as such.
Similarly, a survey conducted by Goldmedia in 2023 on behalf of rights societies GEMA and SACEM found that 89% of the collective management organizations’ members expressed a desire for AI-generated music tracks and other works to be clearly identified.
These overwhelming numbers tell us that concerns about AI are prevalent within creative circles and are also shared by consumers. There are multiple calls for the ethical use of AI, mostly originating from rights holders — artists, record labels, music publishers, collective management organizations, etc. — and transparency is usually at the core of these initiatives.
Simply put, if there’s AI in the recipe, then it should be flagged. If we can collectively find a way to ensure that AI-generated music is identified, then we will have made serious progress towards transparency.
Nathalie Birocheau is CEO of Ircam Amplify, a certified engineer (Centrale-Supélec) and a former strategy consultant who has led several major cultural and media projects, notably within la Maison de la Radio. She became deputy director of France Info in 2016, where she led the creation of the global news service franceinfo.
Futureverse, an AI music company co-founded by music technology veteran Shara Senderoff, has announced the alpha launch of Jen, its text-to-music AI model. Available for anyone to use on the company’s website, Jen was trained on 40 different fully licensed catalogs containing about 150 million works in total, which the company says makes it safe for creators to use.
The company’s co-founders, Senderoff and Aaron McDonald, first teased Jen’s launch by releasing a research paper and conducting an interview with Billboard in August 2023. In the interview, Senderoff explained that “Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool.”
Jen’s capabilities at its alpha launch include generating 10- to 45-second song snippets from text prompts. To lengthen a song to a full 3:30 runtime, users can employ its “continuation” feature to re-prompt and add additional segments. With a focus on “its commitment to transparency, compensation and copyright identification,” as its press release states, Jen has made much of its inner workings available to the public via its research papers, including the fact that the model uses “latent diffusion,” the same process used by Stable Diffusion, DALL-E 2 and Imagen to create high-quality images. (It is unclear which other music AI models use latent diffusion, given that many do not share this information publicly.)
Additionally, when works are created with Jen, users receive a JENUINE indicator, verifying that the song was made with Jen at a specific timestamp. To be more specific, this indicator is a cryptographic hash that is then recorded on The Root Network blockchain.
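Futureverse has not published the code behind the JENUINE indicator, but the general pattern it describes is straightforward: pair a timestamp with a cryptographic digest of the rendered audio, so that anyone holding the same bytes can re-derive the hash and check it against the recorded value. A minimal sketch, with invented function names and no blockchain step:

```python
import hashlib

def provenance_record(audio_bytes: bytes, created_at: str) -> dict:
    """Build a hash-based provenance record for a rendered track.

    The SHA-256 digest covers both the audio and the timestamp, so
    neither can be changed without invalidating the record.
    """
    digest = hashlib.sha256(audio_bytes + created_at.encode()).hexdigest()
    return {"created_at": created_at, "sha256": digest}

def verify(audio_bytes: bytes, record: dict) -> bool:
    """Recompute the digest from the audio and compare to the record."""
    expected = hashlib.sha256(
        audio_bytes + record["created_at"].encode()
    ).hexdigest()
    return expected == record["sha256"]
```

In a system like the one described, the resulting record (not the audio itself) would be what gets written to the ledger; verification then only requires the original bytes and the on-chain entry.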
In an effort to work more closely with the music business, Futureverse brought on APG founder/CEO Mike Caren as a founding partner in fall 2023. While its mid-2024 release date makes it a late entrant in the music AI space, the company attributes this delay to making sure its 40 licenses were secured.
For now, Futureverse has declined to comment on which songs are included in Jen’s overall training catalog, but a representative for the company says the 40 catalogs include a number of production libraries. Futureverse says it is also in talks with all the major music companies and expects to secure more licenses ahead of Jen’s beta launch, expected in September 2024. Some licensing partners could be announced as soon as four to six weeks after the alpha launch.
More capabilities are planned for September, including longer initial song results, inpainting (the process of filling in missing sections or gaps in a musical piece) and a capability the company calls “StyleFilter,” which lets users upload an audio snippet of an instrument or track and then change its genre or timbre at the click of a button.
Also in September, Futureverse plans to launch a beat marketplace called R3CORD to go along with Jen. It will let users upload whatever they produce with Jen to the marketplace and sell those works to others.
So far, the U.S. Copyright Office has advised that fully AI-generated creations are not protected by copyright. Instead, they are considered “public domain” works and are not eligible to earn royalties the way copyrighted works do, though any human additions to an AI-assisted work can be copyright protected. (This guidance has already been applied in the music business in the case of Drake and Sexyy Red’s “U My Everything,” which sampled the fully AI-generated sound recording “BBL Drizzy.”)
“We have reached a defining moment for the future of the music industry. To ensure artistry maintains the value it deserves, we must commit to honor the creativity and copyrights of the past, while embracing the tools that will shape the next generation of music creation,” says Senderoff. “Jen empowers creators with tools that enhance their creative process. Jen is a collaborator; a friend in the studio that works with you to ideate and iterate. As we bring Jen to market, we are partnering with music rights holders and aligning with the industry at large to deliver what we believe is the most ethical approach to generative AI.”
“We’re incredibly proud of the work that’s gone into building Jen, from our research and technology to a strategy that we continue to develop with artists’ rights top of mind,” says Caren. “We welcome an open dialogue for those who’ve yet to meet Jen. There’s a seat at the table for every rightsholder to participate in building this next chapter of the music industry.”
When Snowd4y, a Toronto parody rapper, released the track “Wah Gwan Delilah” featuring Drake via Soundcloud on Monday (June 3), it instantly went viral.
“This has to be AI,” one commenter wrote about the song. It was a sentiment shared by many others, particularly given the track’s ridiculous lyrics and the off-kilter audio quality of Drake’s vocals.
To date, the two rappers have neither confirmed nor denied the AI rumor. Though Drake posted the track on his Instagram story, that is hardly confirmation that the vocals in question are AI-free. (As we learned during Drake’s recent beef with Kendrick Lamar, the rapper is not afraid of deep-faking voices.)
To try to get to the bottom of the “Wah Gwan Delilah” mystery, Billboard contacted two companies that specialize in AI audio detection to review the track. The answer, unfortunately, was not too satisfying.
“Our first analysis reveals SOME traces of [generative] AI, but there seems to be a lot of mix involved,” wrote Romain Simiand, chief product officer of Ircam Amplify, a French company that creates audio tools for rights holders, in an email response.
Larry Mills, senior vp of sales at Pex, which specializes in tracking and monetizing music usage across the web, also found mixed results. He told Billboard the Pex research and development team “ran the song through [their] VoiceID matcher” and that “Drake’s voice on the ‘Wah Gwan Delilah’ verse does not match as closely to Drake’s voice…[as his voice on] official releases [does], but it is close enough to confirm it could be Drake’s own voice or a good AI copy.” Notably, Pex’s VoiceID tool alone is not enough to definitively distinguish between real and AI voices, but its detection of differences between the singer/rapper’s voice on “Wah Gwan Delilah” and his other, officially released songs could indicate some level of AI manipulation.
A representative for Drake did not immediately respond to Billboard’s request for comment.
How to Screen for AI in Songs
There are multiple types of tools that are currently used to distinguish between AI-generated music and human-made music, but these nascent products are still developing and not definitive. As Pex’s Jakub Galka recently wrote in a company blog post about the topic, “Identifying AI-generated music [is] a particularly difficult task.”
Some detectors, like Ircam’s, identify AI music using “artifact detection,” meaning they look for elements of a work that deviate from what human-made material would contain. A clear example comes from AI-generated images: early AI images often contained hands with extra or misshapen fingers, and some detection tools exist specifically to pick up on such inaccuracies.
Other detectors rely on reading watermarks embedded in the AI-generated music. While these watermarks are not perceptible to the human ear, they can be detected by certain tools. Galka writes that “since watermarking is intended to be discoverable by watermark detection algorithms, such algorithms can also be used to show how to remove or modify the watermark embedded in audio so it is no longer discoverable” — something he sees as a major flaw with this system of detection.
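Galka’s point about removability can be illustrated with a toy scheme (a hypothetical sketch, not any vendor’s actual watermark): hide a repeating bit pattern in the least significant bits of 16-bit PCM samples. The same knowledge that lets a detector find the pattern also lets an attacker erase it.

```python
# Hypothetical payload; real watermarks use far more robust signal processing.
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]

def embed(samples, payload=WATERMARK):
    """Overwrite each sample's least significant bit with the repeating payload."""
    out = []
    for i, s in enumerate(samples):
        bit = payload[i % len(payload)]
        out.append((s & ~1) | bit)  # clear the LSB, then set it to the payload bit
    return out

def detect(samples, payload=WATERMARK):
    """Return True if the samples' LSBs match the repeating payload."""
    return all((s & 1) == payload[i % len(payload)]
               for i, s in enumerate(samples))

def strip_mark(samples):
    """Zero every LSB: the 'removal' weakness Galka describes."""
    return [s & ~1 for s in samples]
```

Because `embed` changes each sample by at most 1 out of a 16-bit range, the mark is inaudible; because `detect`'s logic is public, `strip_mark` can defeat it just as easily.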
Pex’s method of using VoiceID, which can determine if a singer matches between multiple recordings, can also be useful in AI detection, though it is not a clear-cut answer. This technology is particularly helpful when users take to the internet and release random tracks with Drake vocals, whether they’re leaked songs or AI deepfakes. With VoiceID, Pex can tell a rights holder that their voice was detected on another track that might not be an official release from them.
When VoiceID is paired with the company’s other product, Automatic Content Recognition (ACR), it can sometimes determine if a song uses AI vocals or not, but the company says there is not enough information on “Wah Gwan Delilah” to complete a full ACR check.
Parody’s Role in AI Music
Though it can’t be determined without a doubt whether “Wah Gwan Delilah” contains AI vocals, parody songs in general have played a major role in popularizing and normalizing AI music. This is especially evident on TikTok, which is replete with so-called “AI Covers,” pairing famous vocalists with unlikely songs. Popular examples of this trend include Kanye West singing “Pocket Full of Sunshine” by Natasha Bedingfield, Juice WRLD singing “Viva La Vida” by Coldplay, Michael Jackson singing “Careless Whisper” by George Michael and more.
Most recently, AI comedy music took center stage with Metro Boomin‘s SoundCloud-released track “BBL Drizzy” — which sampled an AI-generated song of the same name. The track poked fun at Drake and his supposed “Brazilian Butt Lift” during the rapper’s beef with Lamar, and in the process, it became the first major use of an AI-generated sample. Later, Drake and Sexyy Red sampled the original AI-generated “BBL Drizzy” on their own song, “U My Everything,” lifting “BBL Drizzy” to new heights.
On May 24, Sexyy Red and Drake teamed up on the track “U My Everything.” And in a surprise — Drake’s beef with Kendrick Lamar had seemingly ended — the track samples “BBL Drizzy” (originally created using AI by King Willonius, then remixed by Metro Boomin) during the Toronto rapper’s verse.
It’s another unexpected twist for what many are calling the first-ever AI-generated hit, “BBL Drizzy.” Though Metro Boomin’s remix went viral, his version never appeared on streaming services. “U My Everything” does, making it the first time an AI-generated sample has appeared on an official release — and posing new legal questions in the process. Most importantly: Does an artist need to clear a song with an AI-generated sample?
“This sample is very, very novel,” says Donald Woodard, a partner at the Atlanta-based music law firm Carter Woodard. “There’s nothing like it.” Woodard became the legal representative for Willonius, the comedian and AI enthusiast who generated the original “BBL Drizzy,” after the track went viral and has been helping Willonius navigate the complicated, fast-moving business of viral music. Woodard says music publishers have already expressed interest in signing Willonius for his track, but so far, the comedian/creator is still only exploring the possibility.
Willonius told Billboard that it was “very important” to him to hire the right lawyer as his opportunities mounted. “I wanted a lawyer that understood the landscape and understood how historic this moment is,” he says. “I’ve talked to lawyers who didn’t really understand AI, but I mean, all of us are figuring it out right now.”
Working off recent guidance from the U.S. Copyright Office, Woodard says that the master recording of “BBL Drizzy” is considered “public domain,” meaning anyone can use it royalty-free and it is not protected by copyright, since Willonius created the master using AI music generator Udio. But because Willonius did write the lyrics to “BBL Drizzy,” copyright law says he should be credited and paid for the “U My Everything” sample on the publishing side. “We are focused on the human portion that we can control,” says Woodard. “You only need to clear the human side of it, which is the publishing.”
In hip-hop, it is customary to split publishing ownership and royalties 50/50: one half is expected to go to the producers, the other to the lyricists (who are often also the artists). “U My Everything” was produced by Tay Keith, Luh Ron and Jake Fridkis, so those three producers likely split their half of the publishing in some fashion. The other half is what Willonius could be eligible for, along with fellow lyricists Drake and Sexyy Red. Woodard says the splits were solidified “post-release” on Tuesday, May 28, but declined to specify what percentage of the publishing Willonius will take home. “I will say though,” Woodard says, cracking a smile. “He’s happy.”
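As a back-of-the-envelope illustration of the customary 50/50 convention described above, the arithmetic can be sketched as follows. The names are taken from the article, but every percentage here is hypothetical: the actual “U My Everything” splits were not disclosed, and real negotiations rarely divide shares evenly.

```python
# Illustrative sketch only: an even division of publishing under the customary
# hip-hop convention (half to producers, half to lyricists). The real
# "U My Everything" percentages were never made public.

def publishing_splits(producers, writers):
    """Divide 100% of publishing: 50% among producers, 50% among writers, evenly."""
    shares = {}
    for name in producers:
        shares[name] = 50.0 / len(producers)
    for name in writers:
        # A credited producer who also writes would accumulate both shares.
        shares[name] = shares.get(name, 0.0) + 50.0 / len(writers)
    return shares

# Hypothetical even split among the credited producers and lyricists.
splits = publishing_splits(
    producers=["Tay Keith", "Luh Ron", "Jake Fridkis"],
    writers=["Drake", "Sexyy Red", "King Willonius"],
)
```

Under this even-division assumption, each of the six participants would hold one sixth of the publishing, roughly 16.7%; in practice, the shares within each half are negotiated.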
Upon the release of “U My Everything,” Willonius was not listed as a songwriter on Spotify or Genius, both of which list detailed credits but can contain errors. It turns out the reason for the omission was simple: the deal wasn’t done yet. “We hammered out this deal in the 24th hour,” jokes Woodard, who adds that he was unaware that “U My Everything” sampled “BBL Drizzy” until the day of its release. “That’s just how it goes sometimes.”
It is relatively common for sample clearance negotiations to drag on long after the release of songs. Some rare cases, like Travis Scott’s epic “Sicko Mode,” which credits about 30 writers due to a myriad of samples, can take years. Willonius tells Billboard when he got the news about the “U My Everything” release, he was “about to enter a meditation retreat” in Chicago and let his lawyer “handle the business.”
This sample clearance process poses another question: should Metro Boomin be credited, too? According to Metro’s lawyer, Uwonda Carter, who is also a partner at Carter Woodard, the simple answer is no. She adds that Metro is not pursuing any ownership or royalties for “U My Everything.”
“Somehow people attach Metro to the original version of ‘BBL Drizzy,’ but he didn’t create it,” Carter says. “As long as [Drake and Sexyy Red] are only using the original version [of “BBL Drizzy”], that’s the only thing that needs to be cleared,” she continues, adding that Metro is not the type of creative “who encroaches upon work that someone else does.”
When Metro’s remix dropped on May 5, Carter says she spoke with the producer, his manager and his label, Republic Records, to discuss how they could officially release the song and capitalize on its grassroots success, but they ultimately decided against a proper release. “Interestingly, the label’s position was if [Metro’s] going to exploit this song, put it up on DSPs, it’s going to need to be cleared, but nobody knew what that clearance would look like because it was obviously AI.”
She adds, “Metro decided that he wasn’t going to exploit the record because trying to clear it was going to be the Wild, Wild West.” In the end, however, the release of “U My Everything” still threw Carter Woodard into that copyright wilderness, forcing them to find a solution for their other client, Willonius.
In the future, the two lawyers predict that AI could make their producer clients’ jobs a lot easier, now that there is a precedent for getting AI-generated masters royalty-free. “It’ll be cheaper,” says Carter. “Yes, cleaner and cheaper,” says Woodard.
Carter does acknowledge that while AI sampling could help some producers with licensing woes, it could hurt others, particularly the “relatively new” phenomenon of “loop producers.” “I don’t want to minimize what they do,” she says, “but I think they have the most to be concerned about [with AI].” Carter notes that using a producer’s loops can cost 5% to 10% or more of the producer’s side of the publishing. “I think that, at least in the near future, producers will start using AI sampling and AI-generated records so they could potentially bypass the loop producers.”
Songwriter-turned-publishing executive Evan Bogart previously told Billboard he feels AI could never replace “nostalgic” samples (like Jack Harlow’s “First Class” sampling Fergie’s “Glamorous,” or Latto’s “Big Energy” sampling Mariah Carey’s “Fantasy”), where the old song imbues the new one with greater meaning. But he said he could foresee AI serving as a digital alternative to crate digging for obscure samples to chop up and manipulate beyond recognition.
Though the “U My Everything” complications are over — and set a new precedent for the nascent field of AI sampling in the process — the legal complications with “BBL Drizzy” will continue for Woodard and his client. Now, they are trying to get the original song back on Spotify after it was flagged for takedown. “Some guy in Australia went in and said that he made it, not me,” says Willonius. A representative for Spotify confirms to Billboard that the takedown of “BBL Drizzy” was due to a copyright claim. “He said he made that song and put it on SoundCloud 12 years ago, and I’m like, ‘How was that possible? Nobody was even saying [BBL] 12 years ago,’” Willonius says. (Udio has previously confirmed to Billboard that its backend data shows Willonius made the song on its platform).
“I’m in conversations with them to try to resolve the matter,” says Woodard, but “unfortunately, the process to deal with these sorts of issues is not easy. Spotify requires the parties to reach a resolution and inform Spotify once this has happened.”
Though there is precedent for other “public domain” music being disqualified from earning royalties, Spotify so far has no policy that would bar an AI-generated song from earning them, given how new this all is. Such songs are also allowed to stay on the platform as long as they do not conflict with Spotify’s platform rules, says a representative for the company.
Despite the challenges “BBL Drizzy” has posed, Woodard says it’s remarkable, after 25 years in practice as a music attorney, that he is part of setting a precedent for something so new. “The law is still being developed and the guidelines are still being developed,” Woodard says. “It’s exciting that our firm is involved in the conversation, but we are learning as we go.”
This story is included in Billboard’s new music technology newsletter, Machine Learnings. To subscribe to this and other Billboard newsletters, click here.
Artificial Intelligence is one of the buzziest — and most rapidly changing — areas of the music business today. A year after the fake-Drake song signaled the technology’s potential applications (and dangers), industry lobbyists on Capitol Hill, like RIAA’s Tom Clees, are working to create guard rails to protect musicians — and maybe even get them paid.
Meanwhile, entrepreneurs like Soundful’s Diaa El All and BandLab’s Meng Ru Kuok (who oversees the platform as founder and CEO of its parent company, Caldecott Music Group) are showing naysayers that AI can enhance human creativity rather than just replacing it. Technology and policy experts alike have promoted the use of ethical training data and partnered with groups like Fairly Trained and the Human Artistry Coalition to set a positive example for other entrants into the AI realm.
What is your biggest career moment with AI?
Diaa El All: I’m proud of starting our product Soundful Collabs. We found a way to do it with the artists’ participation in an ethical way and that we’re not infringing on any of their actual copyrighted music. With Collabs, we make custom AI models that understand someone’s production techniques and allow fans to create beats inspired by those techniques.
Meng Ru Kuok: Being the first creation platform to support the Human Artistry Coalition was a meaningful one. We put our necks out there as a tech company where people would expect us to actually be against regulation of AI. We don’t think of ourselves as a tech company. We’re a music company that represents and helps creators. Protecting them in the future is so important to us.
Tom Clees: I’ve been extremely proud to see that our ideas are coming through in legislation like the No AI Fraud Act in the House [and] the No Fakes Act in the Senate.
The term “AI” represents all kinds of products and companies. What do you consider the biggest misconception around the technology?
Clees: There are so many people who work on these issues on Capitol Hill who have only ever been told that it’s impossible to train these AI platforms and do it while respecting copyright and doing it fairly, or that it couldn’t ever work at scale. (To El All and Kuok.) A lot of them don’t know enough about what you guys are doing in AI. We need to get [you both] to Washington now.
Kuok: One of the misconceptions that I educate [others about] the most, which is counterintuitive to the AI conversation, is that AI is the only way to empower people. AI is going to have a fundamental impact, but we’re taking for granted that people have access to laptops, to studio equipment, to afford guitars — but most places in the world, that isn’t the case. There are billions of people who still don’t have access to making music.
El All: A lot of companies say, “It can’t be done that way.” But there is a way to make technological advancement while protecting the artists’ rights. Meng has done it, we’ve done it, there’s a bunch of other platforms who have, too. AI is a solution, but not for everything. It’s supposed to be the human plus the technology that equals the outcome. We’re here to augment human creativity and give you another tool for your toolbox.
What predictions do you have for the future of AI and music?
Clees: I see a world where so many more people are becoming creators. They are empowered by the technologies that you guys have created. I see the relationship between the artist and fan becoming so much more collaborative.
Kuok: I’m very optimistic that everything’s going to be OK, despite obviously the need for daily pessimism to [inspire the] push for the right regulation and policy around AI. I do believe that there’s going to be even better music made in the future because you’re empowering people who didn’t necessarily have some functionality or tools. In a world where there’s so much distribution and so much content, it enhances the need for differentiation more, so that people will actually stand up and rise to the top or get even better at what they do. It’s a more competitive environment, which is scary … but I think you’re going to see successful musicians from every corner of the world.
El All: I predict that AI tools will help bring fans closer to the artists and producers they look up to. It will give accessibility to more people to be creative. If we give them access to more tools like Soundful and BandLab and protect them also, we could create a completely new creative generation.
This story will appear in the June 1, 2024, issue of Billboard.
Suno, a generative AI music company, has raised $125 million in its latest funding round, according to a post on the company’s blog. The AI music firm, which is one of the rare start-ups that can generate voice, lyrics and instrumentals together, says it wants to usher in a “future where anyone can make music.”
Suno allows users to create full songs from simple text prompts. While most of its technology is proprietary, the company does lean on OpenAI’s ChatGPT for lyric and title generation. Free users can generate up to 10 songs per month, while the Pro plan ($8 per month) and Premier plan ($24 per month) allow a user to generate up to 500 or 2,000 songs per month, respectively, along with “general commercial terms.”
The company names some of its investors in the announcement, including Lightspeed Venture Partners, Nat Friedman and Daniel Gross, Matrix and Founder Collective. Suno also says it has been working closely with a team of advisors, including 3LAU, Aaron Levie, Alexandr Wang, Amjad Masad, Andrej Karpathy, Aravind Srinivas, Brendan Iribe, Flosstradamus, Fred Ehrsam, Guillermo Rauch and Shane Mac.
Suno is commonly believed to be one of the most advanced AI music models on the market today, but in past interviews, the company has not disclosed what materials are included in its training data. Ed Newton-Rex, founder of Fairly Trained and former vp of audio at Stability AI, warned in a recent piece for Music Business Worldwide that Suno was likely trained on copyrighted material without consent, given that he has been able to use the model to generate music that closely resembles copyrighted works.
In a recent Rolling Stone story about the company, investor Antonio Rodriguez said Suno’s lack of licenses with music companies does not concern him, calling it “the risk we had to underwrite when we invested in the company, because we’re the fat wallet that will get sued right behind these guys.… Honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.”
Suno representatives have previously said, however, that their model will not let anyone create music by using prompts like “ballad in the style of Radiohead” or employ the voices of specific artists.
Many AI companies, including OpenAI, argue that training on copyrighted works without licenses in place is “fair use,” but the legality of this practice is still being determined in the United States. The New York Times has filed a lawsuit against OpenAI for training on its copyrighted archives without consent, credit or compensation, and Universal Music Group, Concord, ABKCO and other music publishers have sued Anthropic for using their lyrics to train the company’s large language model.
In the Suno blog post, CEO Mikey Shulman wrote: “Today, we are excited to announce we’ve raised $125 million to build a future of music where technology amplifies, rather than replaces, our most precious resource: human creativity.”
“We released our first product eight months ago, enabling anyone to make a song with just a simple idea,” he continued. “It’s very early days, but 10 million people have already made music using Suno. While GRAMMY-winning artists use Suno, our core user base consists of everyday people making music — often for the first time.
“We’ve seen producers crate digging, friends exchanging memes and streamers co-creating songs with stadium-sized audiences. We’ve helped an artist who lost his voice bring his lyrics back to life again after decades on the sidelines. We’ve seen teachers ignite their students’ imaginations by transforming lessons into lyrics and stories into songs. Just this past weekend, we received heartwarming stories of mothers moved to tears by songs their loved ones created for them with a little help from Suno.”