
When Drake dismissively told Metro Boomin to go and “make some drums” in one of his recent diss tracks during his beef with Kendrick Lamar, the superproducer went off and did just that — and the result marked a turning point for the use of AI in music production. 
The beat, titled “BBL Drizzy,” lays a vintage-sounding soul vocal over 808 drums. The producer released it to SoundCloud on May 5, encouraging his fans to record their own bars over it for the chance to win a free beat, and it swiftly went viral.

But soon after, it was revealed that the singer on the “BBL Drizzy” beat didn’t exist — the voice was AI-generated, as was the song itself. The vocals, melody and instrumental of the sample were generated by Udio, an AI music startup founded by former Google DeepMind engineers. Though Metro was not aware of the source of the track when he used it, his tongue-in-cheek diss became the first notable use case of AI-generated sampling, demonstrating the potential for AI to reshape music production. (A representative for Metro Boomin did not respond to Billboard’s request for comment.)


As with all AI tracks, however, a human being prompted it. King Willonius, a comedian, musician and content creator, had put together the Udio-generated song on April 14, pulling inspiration from a recent Rick Ross tweet — in which the rapper joked that Drake looks like he got a Brazilian Butt Lift — to write the lyrics. “I think it’s a misconception that people think AI wrote ‘BBL Drizzy,’” Willonius tells Billboard in an interview about the track. “There’s no way AI could write lyrics like ‘I’m thicker than a Snicker and I got the best BBL in history,’” he adds, laughing.

There are a lot of issues — legal, philosophical, cultural and technical — still to be sorted out before this kind of sampling hits the mainstream. But given that sample clearances are notoriously complicated and can drag on for months or years, even for big-name producers like Metro Boomin, it’s not hard to imagine a future in which producers turn to AI to create vintage-sounding samples to chop up and use in beats.

“If people on the other side [of sample clearance negotiations] know they’re probably going to make money on the new song, like with a Metro Boomin-level artist, they will make it a priority to clear a sample quickly, but that’s not how it is for everyone,” says Todd Rubenstein, a music attorney and founder of Todd Rubenstein Law. Grammy-winning writer/producer Oak Felder says clearing a sample for even a high-profile track is still a challenge for him. “I’ll be honest, I’m dealing with a tough clearance right now, and I’ve dealt with it before,” he says. “I had trouble clearing an Annie Lennox sample for a Nicki Minaj record once… It’s hard.”

Many smaller producers avoid sampling established songs because they know it could land them in legal trouble. Others go ahead without permission, causing massive legal headaches, as when bedroom producer Young Kio sampled an undisclosed Nine Inch Nails song in an instrumental he licensed out on BeatStars. The beat was used by then-unknown Lil Nas X and became the Billboard Hot 100 No. 1 “Old Town Road.” When the sample was discovered, Lil Nas X was forced to give up a large portion of his publishing and master royalties to the band.

Udio’s co-founder, David Ding, tells Billboard that he believes AI samples “could simplify a lot of the rights management” issues inherent to sampling and explains that Udio’s model is particularly adept at making realistic songs in the vein of “Motown ‘70s soul,” perhaps the most common style of music sampled in hip-hop today, as well as classical, electronic and more. “It’s a wide-ranging model,” Ding says.

Willonius believes AI samples also offer a solution for musicians in today’s relentless online news cycle. While he has made plenty of songs from scratch before, Willonius says AI offered him the chance to respond in real-time to the breakneck pace of the feud between Drake and Kendrick. “I never could’ve done that without AI tools,” he says. Evan Bogart, a Grammy-winning songwriter and founder of Seeker Music, likens it to a form of digital crate digging. “I think it’s super cool to use AI in this way,” he says. “It’s good for when you dig and can’t find the right fit. Now, you can also try to just generate new ideas that sound like old soul samples.”

Traditional sampling also carries a significant financial cost that AI could help avoid. To use the melody of “My Favorite Things” in her hit song “7 Rings,” for example, Ariana Grande famously had to cede 90% of her publishing income for the song to “My Favorite Things” writers Rodgers and Hammerstein — and that was just an interpolation rather than a full sample, which entails use of both compositional elements, like melody, and a portion of the sound recording.

“It certainly could help you avoid having to pay other people and avoid the hassle,” says Rubenstein, who has often dealt with the complications of clearing songs that use samples and beats from marketplaces like BeatStars. But he adds that any user of these AI models must use caution, saying it won’t always make clearances easier: “You really need to know what the terms of service are whenever you use an AI model, and you should know how they train their AI.”

Often, music-making AI models train on copyrighted material without the consent of, or compensation to, rights holders, a practice that is largely condemned by the music business — even by those who are excited about the future of AI tools. Though these AI companies argue this is “fair use,” the legality of this practice is still being determined in the United States. The New York Times has filed a lawsuit against OpenAI for training on its copyrighted archives without consent, credit or compensation, and UMG, Concord, ABKCO and other music publishers have also filed a lawsuit against Anthropic for using their lyrics to train the company’s large language model. Rep. Adam Schiff (D-CA) has also introduced a new bill called the Generative AI Copyright Disclosure Act to require transparency on this matter.

Udio’s terms of service put the risk of sharing its AI-generated songs on users, saying that users “shall defend, indemnify, and hold the company entities harmless from and against any and all claims, costs, damages, losses, liabilities and expenses” that come from using whatever works are generated on the platform. In an interview with Billboard, Udio co-founder Ding was unable to say which works were used in its training data. “We can’t reveal the exact source of our training data. We train our model on publicly available data that we obtained from the internet. It’s basically, like, we train the model on good music just like how human musicians would listen to music,” says Ding. When pressed about copyrights in particular, he replies, “We can’t really comment on that.”

“I think if it’s done right, AI could make things so much easier in this area. It’s extremely fun and exciting but only with the proper license,” says Diaa El All, CEO/founder of Soundful, another AI music company that generates instrumentals specifically. His company is certified by Fairly Trained, a non-profit that ensures certified companies do not use copyrighted materials in training data without consent. El All says that creating novel forms of AI sampling “is a huge focus” for his company, adding that Soundful is working with an artist right now to develop a fine-tuned model to create AI samples based on pre-existing works. 

“I can’t tell you who it is, but it’s a big rapper,” he says. “His favorite producer passed away. The rapper wants to leverage a specific album from that producer to sample. So we got a clearance from the producer’s team to now build a private generative AI model for the rapper to use to come up with beats that are inspired by that producer’s specific album.”

While this will certainly have an impact on the way producers work in the future, Felder and Bogart say that AI sampling will never totally replace the original practice. “People love nostalgia; that’s what a sample can bring,” says Felder. With the success of sample-driven pop songs at the top of the Hot 100 and the number of movie sequels hitting box office highs, it’s clear that there is an appetite for familiarity, and AI originals cannot feed that same craving.

“BBL Drizzy” might’ve been made as a joke, but Felder believes the beat has serious consequences. “I think this is very important,” he says. “This is one of the first successful uses [of AI sampling] on a commercial level, but in a year’s time, there’s going to be 1,000 of these. Well, I bet there’s already a thousand of these now.”

This story is included in Billboard‘s new music technology newsletter, Machine Learnings. To subscribe to this and other Billboard newsletters, click here.

The U.S. Senate Judiciary Committee convened on Tuesday (April 30) for a hearing on a proposed bill that would effectively create a federal publicity right for artists. The hearing featured testimony from Warner Music Group CEO Robert Kyncl, artist FKA Twigs, Digital Media Association (DiMA) CEO Graham Davies, SAG-AFTRA national executive director/chief negotiator Duncan Crabtree-Ireland, Motion Picture Association senior vp/associate general counsel Ben Sheffner and University of San Diego professor Lisa P. Ramsey.
The draft bill — called the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act) — would create a federal right for artists, actors and others to sue those who create “digital replicas” of their image, voice, or visual likeness without permission. Those individuals have previously only been protected through a patchwork of state “right of publicity” laws. First introduced in October, the NO FAKES Act is supported by a bipartisan group of U.S. senators including Sen. Chris Coons (D-Del.), Sen. Marsha Blackburn (R-Tenn.), Sen. Amy Klobuchar (D-Minn.) and Sen. Thom Tillis (R-N.C.).


Warner Music Group (WMG) supports the NO FAKES Act along with many other music businesses, the RIAA and the Human Artistry Campaign. During Kyncl’s testimony, the executive noted that “we are in a unique moment of time where we can still act and we can get it right before it gets out of hand,” pointing to how the government was not able to properly handle data privacy in the past. He added that it’s imperative to get out ahead of artificial intelligence (AI) to protect artists’ and entertainment companies’ livelihoods.

“When you have these deepfakes out there [on streaming platforms],” said Kyncl, “the artists are actually competing with themselves for revenue on streaming platforms because there’s a fixed amount of revenue within each of the streaming platforms. If somebody is uploading fake songs of FKA Twigs, for example, and those songs are eating into that revenue pool, then there is less left for her authentic songs. That’s the economic impact of it long term, and the volume of content that will then flow into the digital service providers will increase exponentially, [making it] harder for artists to be heard, and to actually reach lots of fans. Creativity over time will be stifled.”
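
Kyncl’s point is, at bottom, arithmetic: streaming royalties are typically split pro rata out of a fixed revenue pool, so any plays captured by deepfake uploads shrink every authentic track’s share. The sketch below is purely illustrative; the pool size and stream counts are invented, and it is not any platform’s actual payout code.

```python
# Illustrative sketch of the pro-rata "fixed revenue pool" dynamic Kyncl describes:
# when deepfake uploads add streams to the pool, every authentic track's share of a
# fixed pot shrinks. All figures below are made-up assumptions, not real platform data.

def pro_rata_payout(revenue_pool: float, artist_streams: int, total_streams: int) -> float:
    """Pay an artist their share of a fixed pool, proportional to stream count."""
    return revenue_pool * artist_streams / total_streams

POOL = 1_000_000.0      # hypothetical fixed monthly revenue pool ($)
AUTHENTIC = 2_000_000   # hypothetical streams of an artist's real catalog
OTHERS = 98_000_000     # hypothetical streams of everything else on the platform

before = pro_rata_payout(POOL, AUTHENTIC, AUTHENTIC + OTHERS)

FAKE = 1_000_000        # hypothetical streams siphoned off by deepfake uploads
after = pro_rata_payout(POOL, AUTHENTIC, AUTHENTIC + OTHERS + FAKE)

print(f"payout before deepfakes: ${before:,.2f}")  # $20,000.00
print(f"payout after deepfakes:  ${after:,.2f}")   # ~$19,801.98
```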

Kyncl, who recently celebrated his first anniversary at the helm of WMG, previously held the role of chief business officer at YouTube. When questioned about whether platforms like YouTube, Spotify and others represented by DiMA should be held responsible for unauthorized AI fakes on their services, Kyncl had a measured take: “There has to be an opportunity for [the services] to cooperate and work together with all of us to [develop a protocol for removal],” he said.

During his testimony, Davies spoke from the perspective of the digital service providers (DSPs) DiMA represents. “There’s been no challenge [from platforms] in taking down the [deepfake] content expeditiously,” he said. “We don’t see our members needing any additional burdens or incentives here. But…if there is to be secondary liability, we would very much seek that to be a safe harbor for effective takedowns.”

Davies added, however, that the Digital Millennium Copyright Act (DMCA), which provides a notice and takedown procedure for copyright infringement, is not a perfect model to follow for right of publicity offenses. “We don’t see [that] as being a good process as [it was] designed for copyright…our members absolutely can work with the committee in terms of what we would think would be an effective [procedure],” said Davies. He added, “It’s really essential that we get specific information on how to identify the offending content so that it can be removed efficiently.”

There is currently no perfect solution for tracking AI deepfakes on the internet, making a takedown procedure tricky to implement. Kyncl said he hopes for a system that builds on the success of YouTube’s Content ID, which tracks sound recordings. “I’m hopeful we can take [a Content ID-like system] further and apply that to AI voice and degrees of similarity by using watermarks to label content and [track] the provenance,” he said.
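
In engineering terms, Kyncl’s “degrees of similarity” idea is a matching problem: represent protected voices or recordings as fingerprints or embeddings, score each new upload against that registry, and flag anything above a threshold for review. The sketch below is a hypothetical, heavily simplified illustration of that pattern; it is not YouTube’s Content ID or any system described at the hearing, and the embeddings, names and threshold are placeholder assumptions.

```python
# Hypothetical sketch of "degrees of similarity" matching: compare an uploaded track's
# voice embedding against a registry of protected artists' reference embeddings and
# flag anything above a similarity threshold for review. The embeddings would come
# from a separate audio model (not shown); the vectors and threshold are placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means identical direction; values near 0 mean unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_similar_voices(upload, registry, threshold=0.85):
    """Return (artist, score) pairs whose reference embedding exceeds the threshold."""
    scores = [(artist, cosine_similarity(upload, ref)) for artist, ref in registry.items()]
    return sorted([s for s in scores if s[1] >= threshold], key=lambda s: -s[1])

# Toy example: random placeholder vectors stand in for real voice embeddings.
rng = np.random.default_rng(0)
registry = {"fka_twigs": rng.normal(size=128), "drake": rng.normal(size=128)}
upload = registry["fka_twigs"] + 0.1 * rng.normal(size=128)  # a near-clone of one voice
print(flag_similar_voices(upload, registry))  # expect a high-score hit on "fka_twigs"
```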

The NO FAKES draft bill as currently written would create a nationwide property right in one’s image, voice, or visual likeness, allowing an individual to sue anyone who produced a “newly-created, computer-generated, electronic representation” of it. It also includes publicity rights that would not expire at death and could be controlled by a person’s heirs for 70 years after their passing. Most state right of publicity laws were written far before the invention of AI and often limit or exclude the protection of an individual’s name, image and voice after death.

The proposed 70 years of post-mortem protection was one of the major points of disagreement between participants at the hearing. Kyncl agreed with the points made by Crabtree-Ireland of SAG-AFTRA — the actors’ union that recently came to a tentative agreement with major labels, including WMG, for “ethical” AI use — whose view was that the right should not be limited to 70 years post-mortem and should instead be “perpetual,” in his words.

“Every single one of us is unique, there is no one else like us, and there never will be,” said Crabtree-Ireland. “This is not the same thing as copyright. It’s not the same thing as ‘We’re going to use this to create more creativity on top of that later [after the copyright enters public domain].’ This is about a person’s legacy. This is about a person’s right to give this to their family.”

Kyncl added simply, “I agree with Mr. Crabtree-Ireland 100%.”

However, Sheffner shared a different perspective on post-mortem protection for publicity rights, saying that while “for living professional performers use of a digital replica without their consent impacts their ability to make a living…that job preservation justification goes away post-mortem. I have yet to hear of any compelling government interest in protecting digital replicas once somebody is deceased. I think there’s going to be serious First Amendment problems with it.”

Elsewhere during the hearing, Crabtree-Ireland expressed a need to limit how long a young artist can license out their publicity rights during their lifetime to ensure they are not exploited by entertainment companies. “If you had, say, a 21-year-old artist who’s granting a transfer of rights in their image, likeness or voice, there should not be a possibility of this for 50 years or 60 years during their life and not have any ability to renegotiate that transfer. I think there should be a shorter perhaps seven-year limitation on this.”

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Tupac’s estate threatens to sue Drake over his use of the late rapper’s voice; Megan Thee Stallion faces a lawsuit over eye-popping allegations from her former cameraman; Britney Spears settles her dispute with her father; and much more.

THE BIG STORY: Drake, Tupac & An AI Showdown

The debate over unauthorized voice cloning burst into the open last week when Tupac Shakur’s estate threatened to sue Drake over a recent diss track against Kendrick Lamar that featured an AI-generated version of the late rapper’s voice.

In a cease-and-desist letter first reported by Billboard, litigator Howard King told Drake that the Shakur estate was “deeply dismayed and disappointed” by the rapper’s use of Tupac’s voice in his “Taylor Made Freestyle.” The letter warned Drake to confirm in less than 24 hours that he would pull the track down or the estate would “pursue all of its legal remedies” against him.

“Not only is the record a flagrant violation of Tupac’s publicity and the estate’s legal rights, it is also a blatant abuse of the legacy of one of the greatest hip-hop artists of all time. The Estate would never have given its approval for this use.”

AI-powered voice cloning has been top of mind for the music industry since last spring, when an unknown artist released a track called “Heart On My Sleeve” that featured — ironically — fake verses from Drake’s voice. As such fake vocals have continued to proliferate on the internet, industry groups, legal experts and lawmakers have wrangled over how best to crack down on them.

With last week’s showdown, that debate jumped from hypothetical to reality. The Tupac estate laid out actual legal arguments for why it believed Drake’s use of the late rapper’s voice violated the law. And those arguments were apparently persuasive: Within 24 hours, Drake began to pull his song from the internet.

For more details on the dispute, go read our full story here.


Other top stories this week…

MEGAN THEE STALLION SUED – The rapper and Roc Nation were hit with a lawsuit from a cameraman named Emilio Garcia, who claims he was forced to watch Megan have sex with a woman inside a moving vehicle while she was on tour in Spain. The lawsuit, which claims he was subjected to a hostile workplace, was filed by the same attorneys who sued Lizzo last year over similar employment law claims.

BRITNEY SETTLES WITH FATHER – Britney Spears settled her long-running legal dispute with her father, Jamie Spears, that arose following the termination of the pop star’s 13-year conservatorship in 2021. Attorneys for Britney had accused Jamie of misconduct during the years he served as his daughter’s conservator, a charge he adamantly denied. The terms of last week’s agreement were not made public.

TRAVIS SCOTT MUST FACE TRIAL – A Houston judge denied a motion from Travis Scott to be dismissed from the sprawling litigation over the 2021 disaster at the Astroworld music festival, leaving him to face a closely-watched jury trial next month. Scott’s attorneys had argued that the star could not be held legally liable since safety and security at live events is “not the job of performing artists.” But the judge overseeing the case denied that motion without written explanation.

ASTROWORLD TRIAL LIVESTREAM? Also in the Astroworld litigation, plaintiffs’ attorneys argued that the upcoming trial — a pivotal first test for hundreds of other lawsuits filed by alleged victims over the disaster — should be broadcast live to the public. “The devastating scale of the events at Astroworld, combined with the involvement of high-profile defendants, has generated significant national attention and a legitimate public demand for transparency and accountability,” the lawyers wrote.

BALLERINI HACKING CASE – Just a week after Kelsea Ballerini sued a former fan named Bo Ewing over accusations that he hacked her and leaked her unreleased album, his attorneys reached a deal with her legal team in which he agreed not to share her songs with anyone else — and to name any people he’s already sent them to. “Defendant shall, within thirty days of entry of this order, provide plaintiffs with the names and contact information for all people to whom defendant disseminated the recordings,” the agreement read.

R. KELLY CONVICTIONS AFFIRMED – A federal appeals court upheld R. Kelly’s 2022 convictions in Chicago on child pornography and enticement charges, rejecting his argument that the case against him was filed too late. The court said that Kelly was convicted by “an even-handed jury” and that “no statute of limitations saves him.” His attorney vowed a trip to the U.S. Supreme Court, though such appeals face long odds.

DIDDY RESPONDS TO SUIT – Lawyers for Sean “Diddy” Combs pushed back against a sexual assault lawsuit filed by a woman named Joi Dickerson-Neal, arguing that he should not face claims under statutes that did not exist when the alleged incidents occurred in 1991. His attorneys want the claims — such as revenge porn and human trafficking — to be dismissed from the broader case, which claims that Combs drugged, assaulted and surreptitiously filmed Dickerson-Neal when she was 19 years old.

Cutting Edge Group (“CEG” or the “Group”), an investor in and manager of niche media music rights encompassing more than 2,000 titles across soundtrack albums, completed a $500 million debt refinancing with four banks led by Fifth Third Bank and Northleaf Capital Partners. The new credit facility will be used for corporate purposes as well as the acquisition of music rights from the roughly $1.5 billion pipeline of possible investments already identified by Cutting Edge.
“Cutting Edge has become a world leading music partner to the film and tv industries,” said Philip Moross, CEO of Cutting Edge Group, in a statement. “During that time, the structural trends driving our industries have accelerated exponentially, delivering a proliferation of digital platforms and content, matched by an increase in demand for media music usage. Prior to the pandemic, we identified a similar opportunity in the global wellness market, which is now projected to grow at 10% per annum to a US$7 trillion market by 2025. This refinancing will enable us to execute our growth strategy to take full advantage of these trends in our usual disciplined way.”

TikTok partnered with global ticketing platform AXS, enabling certified artists on the social platform to use its in-app ticketing feature to promote their AXS live dates while allowing fans to buy tickets for events through AXS within TikTok.


Universal Music Greater China (UMGC) struck a new strategic agreement with TF Entertainment, the company behind Chinese idols TFBOYS and Teens In Times. Under the deal, UMGC will handle global distribution of TF’s roster, targeting markets outside Mainland China.

ASM Global Acts, the corporate social responsibility platform of ASM Global, partnered with reuse platform r.World to introduce reusable service ware — including reusable cups and food containers — in venues throughout the company’s North American portfolio, beginning with the Long Beach Convention and Entertainment Center and select hospitality locations at the forthcoming Acura Grand Prix in Long Beach, Calif.

Audacy and Super Hi-Fi, which provides AI-powered radio services for broadcast and digital media companies, announced an expanded partnership that will streamline Audacy’s digital content programming, production and broadcasting processes “while creating stickier listener environments and more opportunities for advertisers to engage with them,” according to a press release. Audacy additionally announced that five of its highest-rated HD radio stations are transitioning to use Super Hi-Fi’s program director radio operating system, allowing programmers and staff to spend less time on production. The stations include WWBX-HD2 in Boston, WLKK-HD2 in Buffalo, KILT-HD2 in Houston, KROQ-HD2 in Los Angeles and KNRK-HD2 in Portland.

Leading classical music artist agencies IMG Artists and TACT Artists Management formed a strategic alliance through which their vocal departments will work together to pool expertise and resources. With the partnership, the companies hope to ensure a broader international network for their rosters, among other benefits.

Sony Music‘s global podcast division acquired podcast production company Neon Hum, whose founder/CEO Jonathan Hirsch joins Sony Music as vp of global podcasts/head of U.S. creative. Sony will utilize Neon Hum’s production expertise to continue developing podcasts for its subscription channel, The Binge, and across its entertainment slate. Sony will also expand its work-for-hire business to provide more services beyond audio production for its client and branded podcasts. Before the acquisition, Sony made a strategic investment in Neon Hum in 2019, with the two jointly launching podcasts including Dinners on Me with Jesse Tyler Ferguson, Smoke Screen and My Fugitive Dad.

Merlin announced a licensing deal with streaming platform Audiomack, giving Merlin members access to Audiomack’s listenership. Merlin members will now also be able to claim their artists’ Audiomack accounts, enabling them to send messages to their fans and more. The deal encompasses Audiomod, a new Audiomack tool that allows fans to customize their listening experience via pre-set filters including “sped up,” “slowed down,” “nightcore” and “daycore” and/or custom listening filters they create themselves.

Secretly announced new distribution deals with Jazz Is Dead and its sister label, Linear Labs. Founded by Adrian Younge, Ali Shaheed Muhammad (A Tribe Called Quest), Andrew Lojero and Adam Block, Jazz Is Dead is dedicated to “honoring the legacies of musical heroes and luminaries,” according to a press release, having released albums by Roy Ayers, Lonnie Liston Smith and more. Linear Labs focuses on “new progressive music” and has worked with artists including Ghostface Killah, The Delfonics and Angela Muñoz while releasing scores and soundtracks for CBS, Hulu and Netflix. Past and future releases on both labels will now be distributed by Secretly.

Image-Line — the developer of popular digital audio workstation (DAW) FL Studio and FL Cloud — acquired MSXII Sound Design, a maker of sample packs and sonic tools. MSXII’s sample library of more than 200GB is now available to FL Studio users through FL Cloud.

Udio, a new platform developed by former Google DeepMind researchers that allows users to create music with AI from text prompts and then share their creations with the app’s community of users for feedback and collaboration, has raised a seed funding round from investors including a16z, Instagram co-founder/chief technology officer Mike Krieger, will.i.am, Common, Kevin Wall, Tay Keith, UnitedMasters and Oriol Vinyals, head of Gemini at Google.

Entertainment, hospitality and investment holding company Palm Tree Crew — founded by Kygo and Myles Shear — closed a strategic investment in Medium Rare, valuing the company at $50 million. Medium Rare partners with artists, celebrities and athletes to create live entertainment properties; examples include Travis Kelce’s Kelce Jam and Guy Fieri’s Flavortown Tailgate. As part of the deal, Medium Rare will partner on select events and festivals within the Palm Tree Crew holding company. The two companies will also work together on developing new festivals and live experiences.

Oak View Group signed a partnership with the University of Kansas to be the stadium operator for the Gateway District — the future home of Kansas Football and convention center events and conferences — and the reimagined David Booth Kansas Memorial Stadium. Oak View Group will additionally manage food and beverage services and suite catering for all Kansas Athletics venues. It will oversee the day-to-day operations of both the football stadium and conference center when the first phase of the Gateway District opens in August 2025, leading bookings of conference events, concerts and more. Oak View Group will additionally play a key role in the current Allen Fieldhouse upgrades, managing all food and beverage and hospitality in the arena.

AI-driven funding platform beatBread partnered with Kobalt Music Group for publishing administration in the United States and amra for digital licensing and collections internationally. The agreements allow beatBread to extend its existing funding of publishing rights to artists who are currently unpublished and under-collecting their performance and mechanical revenues.

Entertainment company NTERTAIN and The Official Latino Film Festival merged to form the NVISION Film & Music Festival. The festival will feature a mix of film screenings, music performances, art exhibitions, technology showcases and conference-style panels and presentations. It’s set to take place Oct. 10-12 at the Palm Springs Art Museum in Palm Springs, Calif.

The city council of McKinney, Tex., approved the development of the Sunset Amphitheater being built in the town by Notes Live. The agreement includes a public-private partnership between Notes Live, the city, the McKinney Economic Development Corporation and the McKinney Community Development Corporation. The project is estimated to bring more than 1,300 direct and indirect jobs to the community, with an economic impact of around $3 billion to the area over the first 10 years.

Musically Fed, which redistributes surplus backstage and VIP meals to veterans and those facing homelessness and food insecurity in the United States, will once again partner with Live Nation-Hewitt Silva (LNHS) to handle surplus catering at forthcoming LNHS events at the Hollywood Bowl. The organization will also re-team with TaDa! Events to repurpose unused catering for communities in need via several L.A. nonprofits.

Feed.fm partnered with AI company Cyanite for AI-based tagging and music search to enhance discoverability on its platform. Under the deal, Feed.fm will use Cyanite’s technology to enhance the metadata Feed.fm’s compliance and recommendation engine uses to stream curated music for each listener.

Amazon Music has announced a new AI-powered playlist feature that allows users to turn text prompts into entire playlists. Called Maestro, the offering is still in beta and available only to a small number of Amazon Music users on all tiers in the United States on iOS and Android. It can be found on the […]

Representative Adam Schiff (D-Calif.) introduced new legislation in the U.S. House of Representatives on Tuesday (April 9) which, if passed, would require AI companies to disclose which copyrighted works were used to train their models, or face a financial penalty. Called the Generative AI Copyright Disclosure Act, the new bill would apply to both new models and retroactively to previously released and used generative AI systems.
The bill requires that a full list of copyrighted works in an AI model’s training data set be filed with the Copyright Office no later than 30 days before the model becomes available to consumers. This would also be required when the training data set for an existing model is altered in a significant manner. Financial penalties for non-compliance would be determined on a case-by-case basis by the Copyright Office, based on factors like the company’s history of noncompliance and the company’s size.

Generative AI models are trained on up to trillions of existing works. In some cases, data sets, which can include anything from film scripts to news articles to music, are licensed from copyright owners, but often these models will scrape the internet for large swaths of content, some of which is copyrighted, without the consent or knowledge of the author. Many of the world’s largest AI companies have publicly defended this practice, calling it “fair use,” but many of those working in creative industries take the position that this is a form of widespread copyright infringement.


The debate has sparked a number of lawsuits between copyright owners and AI companies. In October, Universal Music Group, ABKCO, Concord Music Group, and other music publishers filed a lawsuit against AI giant Anthropic for “unlawfully” exploiting their copyrighted song lyrics to train AI models.

“In the process of building and operating AI models, Anthropic unlawfully copies and disseminates vast amounts of copyrighted works,” wrote lawyers for the music companies at the time. “Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.”

While many in the music business are also calling for compensation and the ability to opt in or out of being used in a data set, this bill focuses only on requiring transparency with copyrighted training data. Still, it has garnered support from many music industry groups, including the Recording Industry Association of America (RIAA), National Music Publishers’ Association (NMPA), ASCAP, Black Music Action Coalition (BMAC), and Human Artistry Campaign.

It is also supported by other creative industry groups, including the Professional Photographers of America, SAG-AFTRA, Writers Guild of America, International Alliance of Theatrical Stage Employees (IATSE) and more.

“AI has the disruptive potential of changing our economy, our political system, and our day-to-day lives,” said Rep. Schiff in a statement. “We must balance the immense potential of AI with the crucial need for ethical guidelines and protections. My Generative AI Copyright Disclosure Act is a pivotal step in this direction. It champions innovation while safeguarding the rights and contributions of creators, ensuring they are aware when their work contributes to AI training datasets. This is about respecting creativity in the age of AI and marrying technological progress with fairness.”

A number of rights groups also weighed in on the introduction of the bill.

“Any effective regulatory regime for AI must start with one of the most fundamental building blocks of effective enforcement of creators’ rights — comprehensive and transparent record keeping,” adds RIAA chief legal officer Ken Doroshow. “RIAA applauds Congressman Schiff for leading on this urgent and foundational issue.”

“We commend Congressman Schiff for his leadership on the Generative AI Copyright Disclosure Act,” NMPA president/CEO David Israelite said. “AI only works because it mines the work of millions of creators every day and it is essential that AI companies reveal exactly what works are training their data. This is a critical first step towards ensuring that AI companies fully license and that songwriters are fully compensated for the work being used to fuel these platforms.”

“Without transparency around the use of copyrighted works in training artificial intelligence, creators will never be fairly compensated and AI tech companies will continue stealing from songwriters,” ASCAP CEO Elizabeth Matthews said. “This bill is an important step toward ensuring that the law puts humans first, and we thank Congressman Schiff for his leadership.”

“Protecting the work of music creators is essential, and this all begins with transparency and tracking the use of copyrighted materials in generative AI,” Black Music Action Coalition (BMAC) co-chair Willie “Prophet” Stiggers said. “BMAC hopes Rep. Schiff’s Generative AI Copyright Disclosure Act helps garner support for this mission and that author and creator rights continue to be protected and preserved.”

“Congressman Schiff’s proposal is a big step forward towards responsible AI that partners with artists and creators instead of exploiting them,” Human Artistry Campaign senior advisor Dr. Moiya McTier said. “AI companies should stop hiding the ball when they copy creative works into AI systems and embrace clear rules of the road for recordkeeping that create a level and transparent playing field for the development and licensing of genuinely innovative applications and tools.”

Spotify has launched a new AI playlist feature for premium users in the United Kingdom and Australia, the company revealed in a blog post on Sunday (April 7). The new feature, which is still in beta, allows Spotify users in those markets to turn any concept into a playlist by using prompts like “an indie […]

Stability AI has launched Stable Audio 2.0, adding key new functions to the company’s text-to-music generator. Now, users can generate tracks that are up to three minutes long at 44.1 kHz stereo from a natural language prompt like, “A beautiful piano arpeggio grows to a full beautiful orchestral piece” or “Lo-fi funk.” Stable Audio 2.0 […]

Recording Industry Association of America (RIAA) COO Michele Ballantyne has been promoted to president, the organization has announced. She will continue to serve as COO, running daily operations and managing RIAA’s 56-person team.
A 2024 Billboard Women in Music honoree, Ballantyne serves on the RIAA executive leadership team alongside chairman/CEO Mitch Glazier while spearheading daily operations and helping lead advocacy efforts across the industry. During her tenure, she’s played a key role in the passage of the landmark Music Modernization Act; the PRO-IP Act, which established the first U.S. intellectual property enforcement coordinator in the executive office; and the Higher Education Opportunity Act, which provided colleges and universities with tools to reduce the illegal downloading of copyrighted works on campuses.

“I love my job, and I feel really lucky to have it,” Ballantyne tells Billboard in an exclusive interview (full Q&A below). “Music is something that is so important to everyone, and there are obviously lots of challenges…AI, TikTok, COVID. But one thing I’m really proud about is that at RIAA we’re nimble and we punch above our weight and I think that speaks a lot to the team we have in place. I really feel grateful to be at the helm with Mitch and see where we can take things.”


More recently, Ballantyne has focused particular attention on the growing use of artificial intelligence in music and its ethical implications for creators. Under her leadership, the RIAA became a founding member of the Human Artistry Campaign, a coalition of music and entertainment organizations supporting ethical standards around AI that launched in August. The organization also supported the ELVIS Act, the landmark law designed to protect creators from AI deep fakes that was signed into law in March. On the federal level, the RIAA is supporting bills including the No AI FRAUD Act in the House and the NO FAKES Act in the Senate.

“Michele and I have had the privilege of guiding RIAA and supporting our member companies through amazing celebrations and challenges in the industry,” said Glazier in a statement. “I am grateful for her remarkable leadership and genuine care for people. Our playlists may not always be in sync, but our determination for a thriving and equitable community for music creators is.”

Ballantyne earned her law degree from the Georgetown University Law Center and started her career in government, serving in roles including general counsel for Sen. Tom Daschle, special assistant to President Bill Clinton and special counsel for former White House chief of staff John Podesta. She joined RIAA in 2004 as senior vp of federal government and industry relations. A Black female executive, Ballantyne has also focused her work at the organization on social justice advocacy, including mobilizing RIAA members to support police reform bills, guiding the implementation of members’ social change commitments and managing the most diverse RIAA board of directors in its history.

On the occasion of her promotion, Billboard spoke with Ballantyne about her new role, the importance of combatting AI deep fakes, Universal Music Group’s dispute with TikTok and the possible implications of the upcoming presidential election.

You’ve been COO for several years now, but you’ve now added “president” to your title. How will your purview at the RIAA change?

It will change a little bit, but maybe not that much. It does catch up to the way we’ve been working, especially Mitch and I, who sort of approach things as a partnership. But the COO part, which is the sort of the nuts and bolts of running the organization and dealing with the internal stuff, it’s not all that I do. I do a lot of industry relations and coordinating with outside groups and coordinating with our member companies and making sure everything runs smoothly, that people are communicating. And so I think it reflects that piece of it too. I’m grateful for the recognition because I enjoy the work, and the title makes it clear to everybody.

You’ve been with the RIAA for around two decades now, and you’ve helped tackle some of the biggest issues in recorded music over that time. What do you see as some of the biggest issues facing the RIAA and its members currently?

No question it’s AI. AI has sort of supercharged everyone’s work. I am not the lead on it, but it’s everybody’s issue. We’re out talking about it and thinking about it and trying to figure out, “How are we gonna meet the challenges that it brings for artists and for labels and everyone in music?” It’s such a challenging time and everything is moving so fast. We’re just trying to figure out how we’re going to navigate all of it. And it’s an exciting time. It brings a lot of innovations to the table.

I think that the music industry in general is usually, in the time that I’ve been at RIAA, in the front. We’re the — I’m not fond of the saying — but the canary in the coal mine. All of these issues are ones that we confront first, the same as with file-sharing or any of those other issues that happened way back when. And the policymakers are grappling with how to handle these changes confronting society with AI, so it’s so multifaceted and very challenging.

We’ve been working on the deep fakes issue. That is one thing that pretty much everyone can come together around. We had that bill pass in Tennessee last week [the ELVIS Act] and we’re working on some federal bills as well. So, this is, I think, where all the focus is going to be. But in general, I think things are good, the industry is moving in a positive direction. You probably saw our revenue numbers came out earlier this week. One of the things that we’re so excited about, and I think that music companies have really embraced, is offering so much choice to fans. And I think that’s really positive.

I was curious about the year-end report. One interesting takeaway is that the record labels may have become almost too reliant on paid subscriptions for revenue and revenue growth. Do you think that revenue mix needs to be more dynamic? And if so, how do you feel labels can get there?

That’s a very tricky question. I’m not sure I can really answer that one. There are a lot of different components that go into it. And a lot of the pieces that are business issues, we aren’t at RIAA going to be able to see into those. It is a concern, for sure, and something that our folks are paying attention to.

I will say that one of the things that I have noticed that has changed most over the time that I’ve been at RIAA is this willingness to innovate and pivot. When I first came to RIAA in 2004, the focus was on how do we address file sharing? It was the Grokster case, and I think that within the companies, the old guard has sort of shifted out and the folks who are there now and have come in have very successfully navigated those challenges to the place we are today.

Today, everyone streams and anybody can get the music they want, whenever they want it. And it is not something that even occurs to young folks. I have a 16-year-old. He doesn’t even think about like, “I can just go on Spotify and listen.” To me, watching that change has been really impactful. And I’m just trying to think about it, like, something exciting will happen next. I’m not sure what it is. But I think it will happen.

One of the other big stories in the last few months was UMG pulling its catalog from TikTok and the ripple effect that that’s had on the industry. What do you think needs to happen to resolve that dispute?

I don’t know. TikTok has grown so fast, and even among our companies and among policymakers, there’s differing opinions on how to handle that. Universal certainly put their marker down, and we haven’t commented because our companies aren’t all in the same place about it. So I don’t know how that’s going to resolve and I also don’t know what’s going to happen with the federal bill that policymakers are pursuing to say that they’re going to ban TikTok. I mean, it passed the House. It’s very tricky.

We have a big election coming up. What should RIAA members be on the lookout for when either candidate wins, whether it’s Trump or Biden?

We used to go, Mitch and myself, to our companies and board meetings and we would talk to them about what’s happening in D.C. and how it’s all gonna shake out and what we think will happen based on what we know and our experiences working both in the House and the Senate. It’s really hard to tell now. We gave up some years ago on doing our own punditry. The polling doesn’t seem to be as reliable and, as a D.C. person, even some of my colleagues from prior administrations or from the Hill, they’re like, “It’s really hard to tell.”

The good news for everyone in the music industry, not just RIAA, is that largely music issues are bipartisan, and on the committees that handle intellectual property, policy and copyright issues, the Judiciary Committee, they are dealing with many more complex issues such as guns and immigration and reproductive rights and so on. So a lot of times they are more willing to come to the table to talk about music issues, for a variety of reasons. One is that they can get to an agreement, there can be some bipartisan action, and, you know, music touches everyone. And policymakers are no different.

I think that hopefully we can get some action on making sure that we continue to protect the rights of artists and labels and songwriters and others in the music community, and not roll back any rights. We’ll be paying particular attention to AI and deep fakes and making sure that their rights are protected there. But it’s not clear how things will go, either from the standpoint of the election, but also getting bills passed is really hard nowadays. But can we get some engagement? Yes, we’ll get engagement. A lot of times what we try to do too, is if members feel like bills won’t pass, there are other ways to get them to engage, to help bring parties, stakeholders to the table to talk through issues and see if we can get some resolutions and things like that. I expect that to continue. But, you know, D.C. is…it’s tricky.

This interview has been edited and condensed.

In November, I quit my job in generative AI to campaign for creators’ right not to have their work used for AI training without permission. I started Fairly Trained, a non-profit that certifies generative AI companies that obtain a license before training models on copyrighted works.
Mostly, I’ve felt good about this decision — but there have been a few times when I’ve questioned it. Like when a big media company, though keen to defend its own rights, told me it couldn’t find a way to stop using unfairly-trained generative AI in other domains. Or whenever demos from the latest models receive unquestioning praise despite how they’re trained. Or, last week, with the publication of a series of articles about AI music company Suno that I think downplay serious questions about the training data it uses.

Suno is an AI music generation company with impressive text-to-song capabilities. I have nothing against Suno, with one exception: Piecing together various clues, it seems likely that its model is trained on copyrighted work without rights holders’ consent.


What are these clues? Suno refuses to reveal its training data sources. In an interview with Rolling Stone, one of its investors disclosed that Suno didn’t have deals with the labels “when the company got started” (there is no indication this has changed), that they invested in the company “with the full knowledge that music labels and publishers could sue,” and that the founders’ lack of open hostility to the music industry “doesn’t mean we’re not going to get sued.” And, though I’ve approached the company through two channels about getting certified as Fairly Trained, they’ve so far not taken me up on the offer, in contrast to the 12 other AI music companies we’ve certified for training their platforms fairly. 

There is, of course, a chance that Suno licenses its training data, and I genuinely hope I’m wrong. If they correct the record, I’ll be the first to loudly and regularly trumpet the company’s fair training credentials.

But I’d like to see media coverage of companies like Suno give more weight to the question of what training data is being used. This is an existential issue for creators. 

Editor’s note: Suno’s founders did not respond to requests for comment from Billboard about their training practices. Sources confirm that the company does not have licensing agreements in place with some of the most prominent music rightsholders, including the three major label groups and the National Music Publishers’ Association. 

Limiting discussion of Suno’s training data to the fact that it “decline[s] to reveal details” and not explicitly stating the possibility that Suno uses copyrighted music without permission means that readers may not be aware of the potential for unfair exploitation of musicians’ work by AI music companies. This should factor into our thoughts about which AI music companies to support.

If Suno is training on copyrighted music without permission, this is likely the technological factor that sets it apart from other AI music products. The Rolling Stone article mentions some of the tough technical problems that Suno is solving  — having to do with tokens, the sampling rate of audio and more — but these are problems that other companies have solved. In fact, several competitors have models as capable as Suno’s. The reason you don’t see more models like Suno’s being released to the public is that most AI music companies want to ensure training data is licensed before they release their products.

The context here is important. Some of the biggest generative AI companies in the world are using untold numbers of creators’ work without permission in order to train AI models that compete with those creators. There is, understandably, a big public outcry at this large-scale scraping of copyrighted work from the creative community. This has led to a number of lawsuits, which Rolling Stone mentions.

The fact that generative AI competes with human creators is something AI companies prefer not to talk about. But it’s undeniable. People are already listening to music from companies like Suno in place of Spotify, and generative AI listening will inevitably eat into music industry revenues — and therefore human musicians’ income — if training data isn’t licensed.

Generative AI is a powerful technology that will likely bring a number of benefits. But if we support the exploitation of people’s work for training without permission, we implicitly support the unfair destruction of the creative industries. We must instead support companies that take a fairer approach to training data.

And those companies do exist. There are a number — generally startups — taking a fairer approach, refusing to use copyrighted work without consent. They are licensing, or using public domain data, or commissioning data, or all of the above. In short, they are working hard not to train unethically. At Fairly Trained, we have certified 12 of these companies in AI music. If you want to use AI music and you care about creators’ rights, you have options.

There is a chance Suno has licensed its data. I encourage the company to disclose what it’s training its AI model on. Until we know more, I hope anyone looking to use AI music will opt instead to work with companies that we know take a fair approach to using creators’ work.

To put it simply — and to use some details pulled from Suno’s Rolling Stone interview — it doesn’t matter whether you’re a team of musicians, what you profess to think about IP, or how many pictures of famous composers you have on the walls. If you train on copyrighted work without a license, you’re not on the side of musicians. You’re unfairly exploiting their work to build something that competes with them. You’re taking from them to your gain — and their cost.

Ed Newton-Rex is the CEO of Fairly Trained and a composer. He previously founded Jukedeck, one of the first AI music companies, ran product in Europe for TikTok, and was vp of audio at Stability AI.