Stockholm-born music library giant Epidemic Sound is launching a new remix series, called Extra Version, on Wednesday (May 28) with help from DJ/producer Honey Dijon. As part of Extra Version, Epidemic pays participating DJs and producers “five to six figure sums” to pick from the songs, stems, samples and loops in its catalog of over 250,000 pieces of IP — and remix them into something new.
Epidemic then adds the results to its ever-growing catalog available for use by clients — like content creators, advertisers and brands looking for easy-to-clear songs to soundtrack videos — and distributes them to streaming services.
To kick it off, Honey Dijon flipped the Epidemic-owned song “Umbélé” by electronic artist Ooyy and Swedish Grammy Award-winning performer Ebo Krdum. “Teaming up with Epidemic Sound was a vibe,” she said in a press release. “They’re shaking things up in the best way… It’s all about freedom, fun and keeping the groove 100%.” The company plans to also collaborate with Major Lazer co-founder Switch and rising Korean talent Jeonghyeon on future Extra Version editions.
With this series, Epidemic Sound CEO/co-founder Oscar Höglund tells Billboard he wants to show off the “high quality” of Epidemic’s catalog, which he believes rivals the quality of traditional major label releases. “The art of consuming music is changing,” he says. “It’s going from being a spectator sport to being a participative one. People want to remix their favorite music, they want to collaborate, and they want to create.” Epidemic, he continues, “is creating the opportunity for incredibly talented producers and remixers and DJs to collaborate [and remix] our catalog. And then, we will help them distribute their remixes around the globe [both to streaming services and to their platform which provides pre-cleared music to content creators] – and they’ll get paid well while doing it.”
In this interview, Höglund talks the ins and outs of Extra Version, the ways he is integrating AI remix features to “create more use cases for the same songs,” and why he feels the allegations that Epidemic Sound has filled Spotify mood playlists with “ghost artists” are “deeply offensive to the artist in question.”
Why are you launching Extra Version?
We’ve seen that even though culture is moving towards [more participation and remixing], creatives have held back from doing it because from a legal perspective, it’s very hard to get rights to music because of fractional ownership. Our catalog has been built up, for almost a decade and a half, around the premise of having all the rights in one place. There’s nothing fractional about it. We own all of our music, and we are more than happy to offer the catalog up to producers. We just want to do it in a way that works for the artists who originally made the music [that Epidemic now owns], the remixers, the creators and the platforms where this music will go live and proliferate.
What is the payment model for Extra Version participants, and how does that differ from how producers are typically paid for remixes?
What often ends up happening is that producers are asked to create a remix for an artist, and they don’t get paid much to do it. Rather, the logic has been, the more culturally relevant the artist in question is, the lower your compensation, because your payment is in the cultural value you receive as a remixer from being associated with the artist.
We took a contrarian view here. We’ve always prided ourselves on putting our money where our mouth is, so instead we’re paying much more handsomely up front [to Extra Version participants]. We can’t disclose exactly how much, but I would say between five and six figures [for each remix]. We’re paying a lot up front, and this is not recoupable. It’s not a loan. It’s something the producer gets to keep.
The next step is that we allow the remixer to choose whatever track they feel creatively inclined to use from our catalog. Allowing for choice is a huge part of this. Remixers don’t have to license anything or worry about different samples being unlicensed. We own everything in perpetuity, and it’s all made available to you to pick and choose from. Then we will distribute the song. Most remixers don’t get a commission, just a flat fee. With Extra Version, we want to cut the remixers and producers in.
With Extra Version, Epidemic is opening up its catalog for remixes, and also making stems available so producers can mix and match the building blocks of the catalog. Does Epidemic have the goal of taking on sample or beat marketplaces like Splice, or is this just for Extra Version?
When we started commissioning songs, we always got stems for everything. It is important from a soundtracking perspective. To the second part, the way we think about Extra Version is that this is not the end. We’re definitely not stopping here — rather, we’re saying this is the first step in our endeavor to help more music creators sustain themselves and democratize access to music. The bigger picture here is we want to help soundtrack the entire creator economy, and as such, we need to unlock our music.
So it sounds like the future of Epidemic Sound is offering samples, beats, and individual elements of the songs in the catalog to everyone, not just fully formed songs?
Correct.
Do you see Extra Version as an ongoing series, or is it a limited run?
This is not a limited run series. It’s the starting point of ushering in a completely new paradigm, one which is much more centered around the remixing and collaborative nature of culture. This is something that we’re deeply committed to and we’re going to spend a lot of time experimenting and seeing how this space is going to evolve.
It feels like you’re moving in the opposite direction of major labels. Nowadays, competitive deals at majors regularly involve the artist getting their masters back eventually, but the advance the artist gets up front is recoupable. Meanwhile, Epidemic is asking for full ownership of an artist’s tracks, but you provide non-recoupable money up front for the song. What is your goal with this approach?
At our core, we hold one fundamentally contrarian belief: if you’re an artist, common wisdom says you should hold on to all of your [intellectual property]. That’s the traditional music industry right now. We think, in order to provide wide distribution and to provide superior monetization, we need to own 100% of the copyright. If we can build a platform, like we have, where there’s one point of contact when you want to license the song, then we can indemnify our customers and allow them to use the songs across all platforms, in all jurisdictions, and in all different scenarios. This allows us to create predictability with Epidemic, so we can also pay our artists more predictable fees, too.
How is Epidemic thinking about AI?
We think that AI is an incredible tool to help augment human creativity, but never replace it. We’ve so far found that there’s tremendous amounts of value in using AI to help both music creators and video creators. We can use AI during the recommendation phase. If you’re a video creator we can use advanced AI search tools to help recommend tracks.
The old paradigm was, if you found a track that you like, suddenly you had to spend hours trying to re-edit that track such that it perfectly fit the video story you’re trying to tell, often with huge challenges from a legal perspective. “Am I even allowed to change the composition of this track?” The answer is often no. We’ve now been able to use AI [so] that, if you are a content creator, and you find this one track that you want to use to soundtrack your video, you can now speed it up, slow it down, change it. You can cut it. You can edit it — not replace it. This helps create more use cases for the same songs. Where there might have previously been 10 content creators who can use your track, now with adaptation maybe 20 or 30 or 100 creators will use it. That means the track is going to get played more and it’s going to earn more royalties [on streaming services]. And so ultimately, the human who made that track is going to make much more money, because AI has augmented the use cases.
Point three is purely generative — we’ve launched a product called AI Voice. We’ve gone to human voice actors, and we’ve struck agreements with them such that we pay them up front, we train and we use their voice, and then we allow our customers to use their voices. Every time they do, there’s an additional royalty so that the voice artists make additional money. We also put their personal emails out there in case content creators want to work directly with them. So suddenly, even when we go into the generative world of voice, we’re seeing that voice actors get used more, get more work, and get paid.
There have been allegations dating back to 2016 that Epidemic Sound has a deal of some kind with Spotify to fill some Spotify playlists with royalty-free music. It has been highly criticized. Can you explain what that arrangement is, if there is one?
I’d be happy to. Epidemic parallel publishes all of its music to all of the major DSPs around the entire world. We do that for a couple of different reasons, but the primary reason is it’s in our artists’ best interest, because we realized early on there was a Stranger Things, Kate Bush effect, meaning when Kate Bush’s track was used in Stranger Things, there was a massive surge in that song on streaming platforms around the world. We realized early on that that happens [when content creators use our songs].
There was also an adjacent trend, which we also tapped into very early, on streaming platforms in general — Spotify being one of them — that there was much more lean-back listening going on. The role of the record [or album] as the [driver] for music consumption started to diminish. More and more, [people were listening to] standalone tracks, but then ultimately playlists started to proliferate and come into their own.
Many of [the playlists] are hits-oriented, but there’s a huge proportion of playlists which are more functionally oriented. We do incredibly well across all the different playlists where people are looking to get to a specific theme or in a specific emotion. There’s music for sleeping, for concentrating, for studying, for getting ready, for meditating or for walking your dogs. Because what we do at our core is soundtracking, it turns out we were really, really good at [those playlists]. And while other people were trying to get into the bigger playlists to create the hits of tomorrow, we just kept doing the thing that we do really well. There was huge demand across all different DSPs, so we started to grow. And we became very, very significant and very, very successful.
Do you feel that it harms non-Epidemic artists in these genres to be competing for spots on mood-based playlists with your music?
No. If anything, the contrary: I think the old articles about Epidemic artists are deeply unfair. There was speculation: “Who are these artists? Do they even exist?” They are super talented artists in their own right. I took issue very much with that.
Various writers have referred to the acts behind your music on Spotify playlists as “ghost artists” or “fake artists.” How do you feel about those labels?
I think it’s deeply offensive for the artist in question. If you are an actor, you can play multiple different roles because you portray many different characters. That’s second nature. Artists, like actors, have the right to express their creativity in a multitude of different ways. It’s always them who determines if they want to publish their music under one name or not. Odds are that their fans might think they are all over the place, so quite often what we see happen is that artists have one brand for a certain genre of music, and a different brand for another kind. If you look at Elton John and Madonna, they [use] aliases. I seriously doubt that Madonna and Elton John would like to be called fake artists or ghost artists. That’s them creatively expressing through a different persona.
Country music star Martina McBride headed to Capitol Hill on Wednesday (May 21) to speak out in support of the NO FAKES Act, arguing the legislation is necessary to protect artists in the AI age.
If passed, the bill (officially titled the Nurture Originals, Foster Art and Keep Entertainment Safe Act), which was recently reintroduced to the U.S. House of Representatives and the U.S. Senate, would create the first federal protection against unauthorized deepfakes of one’s name, image, likeness or voice. It is widely supported by the music industry, the film industry and other groups.
Just prior to McBride’s testimony, the Human Artistry Campaign sent out a press release stating that 393 artists have signed on in support of the NO FAKES Act, including Cardi B, Randy Travis, Mary J. Blige and the Dave Matthews Band.
In her testimony to the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, McBride called unauthorized deepfakes “just terrifying” and added, “I’m pleading with you to give me the tools to stop that kind of betrayal.” She continued that passing the NO FAKES Act could “set America on the right course to develop the world’s best AI while preserving the sacred qualities that make our country so special: authenticity, integrity, humanity and our endlessly inspiring spirit…I urge you to pass this bill now.”
McBride went on to express the challenges that musicians face as unauthorized AI deepfakes proliferate online. “I worked so hard to establish trust with my fans,” she said. “They know when I say something, they can believe it… I don’t know how I can stress enough how [much unauthorized deepfakes] can impact the careers [of] artists.”
During her testimony, the singer-songwriter pointed to more specific concerns, like what can happen to individuals after they pass away. “Far into the future after I’m gone,” she said, there is the threat now that someone could “creat[e] a piece of music or [a video of] me saying something that I never did.” She added that this issue is especially challenging for emerging musicians: “I think for younger artists, to be new, and to have to set up what you stand for and who you are as a person as an artist and what you endorse what you believe in…on top of having to navigate this… is devastating.”
Suzana Carlos, head of music policy for YouTube, also expressed her company’s support for the NO FAKES Act during the hearing. “As technology evolves, we must collectively ensure that it is used responsibly, including when it comes to protecting our creators and viewers,” she said. “Platforms have a responsibility to address the challenges posed by AI-generated content, and Google and YouTube staff [is] ready to apply our expertise to help tackle them on our services and across the digital ecosystem. We know that a practical regulatory framework addressing digital replicas is critical.”
Carlos noted that one’s name, image, likeness and voice (also known as “publicity rights” or “rights of publicity”) are currently protected only on a state-by-state basis, creating a “patchwork of inconsistent legal frameworks.” She added that YouTube would like to “streamline global operations for platforms.”
Mitch Glazier, CEO/president of the Recording Industry Association of America (RIAA), which has served a strong role in pushing the NO FAKES Act forward, added during his testimony that time is of the essence to pass this bill. “I think there’s a very small window, and an unusual window, for Congress to get ahead of what is happening before it becomes irreparable,” he said.
Also during the hearing, Senator Amy Klobuchar (D-MN) brought up concerns about a “10-year moratorium” that would ban states and localities from implementing AI regulation — a clause the Republican-led House of Representatives baked into its so-called “big beautiful” tax bill last week. “I’m very concerned, having spent years trying to pass some of these things,” Klobuchar said. “If you just put a moratorium [on]…the ELVIS law [a new Tennessee state law that updated protections for deepfakes in the AI age] coming out of Tennessee…and some of the other things, this would stop all of that.”
The NO FAKES Act was introduced by Senators Marsha Blackburn (R-TN), Chris Coons (D-DE), Thom Tillis (R-NC) and Klobuchar along with Representatives María Elvira Salazar (R-FL-27), Madeleine Dean (D-PA-4), Nathaniel Moran (R-TX-1) and Becca Balint (D-VT-At Large). It was first introduced as a draft bill in 2023 and formally introduced in the Senate in summer 2024.
Unlike some of the state publicity rights laws, the NO FAKES Act would create a federal right of publicity that would not expire after death and could be controlled by a person’s heirs for 70 years after their passing. It also includes specific carve-outs for replicas used in news, parody, historical works and criticism to ensure the First Amendment right to free speech remains protected.
LONDON — When the European Parliament passed sweeping new laws governing the use of artificial intelligence (AI) last March, the “world first” legislation was hailed as an important victory by music executives and rights holders. Just over one year later — and with less than three months until the European Union’s Artificial Intelligence Act is due to come fully into force — those same execs say they now have “serious concerns” about how the laws are being implemented amid a fierce lobbying battle between creator groups and big tech.
“[Tech companies] are really aggressively lobbying the [European] Commission and the [European] Council to try and water down these provisions wherever they can,” John Phelan, director general of international music publishing trade association ICMP, tells Billboard. “The EU is at a junction and what we’re trying to do is try to push as many people [as possible] in the direction of: ‘The law is the law’. The copyright standards in there are high. Do not be afraid to robustly defend what you’ve got in the AI Act.”
One current source of tension between creator groups, tech lobbyists and policy makers is the generative AI “Code of Practice” being developed by the EU’s newly formed AI Office in consultation with almost 1,000 stakeholders, including music trade groups, tech companies, academics, and independent experts. The code, which is currently on its third draft, is intended to set clear, but not legally binding, guidelines for generative AI models such as OpenAI’s ChatGPT to follow to ensure they are complying with the terms of the AI Act.
Those obligations include the requirement for generative AI developers to provide a “sufficiently detailed summary” of all copyright-protected works, including music, that they have used to train their systems. Under the AI Act, tech companies are also required to watermark training data sets used in generative AI music or audio-visual works, so there is a traceable path for rights holders to track the use of their catalog. Significantly, the laws apply to any generative AI company operating within the 27-member EU, regardless of where they are based, acquired data from, or trained their systems.
“The obligations of the AI Act are clear: you need to respect copyright, and you need to be transparent about the data you have trained on,” says Matthieu Philibert, public affairs director at European independent labels trade body IMPALA.
Putting those provisions into practice is proving less straightforward, however, with the latest version of the code, published in March, provoking a strong backlash from music execs who say that the draft text risks undermining the very same laws it is designed to support.
“Rather than providing a robust framework for compliance, [the code] sets the bar so low as to provide no meaningful assistance for authors, performers, and other right holders to exercise or enforce their rights,” said a coalition of creators and music associations, including ICMP, IMPALA, international labels trade body IFPI and Paris-based collecting societies trade organization CISAC, in a joint statement published March 28.
Causing the biggest worry for rights holders is the text’s instruction that generative AI providers need only make “reasonable efforts” to comply with European copyright law, including the weakened requirement that signatories undertake “reasonable efforts to not crawl from piracy domains.”
There’s also strong opposition over the lack of meaningful guidance on what AI companies must do to comply with a label, artist or publisher’s right to reserve (block) their rights, including the code’s insistence that robots.txt is the “only” method generative AI models must use to identify rights holders’ opt-out reservations. Creator groups say that robots.txt – a root directory file that tells search engine crawlers which URLs they can access on a website — works for only a fraction of rights holders and is unfit for purpose, as it takes effect at the point of web crawling, not scraping, training or other downstream uses of their work.
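For illustration only (this is not part of the draft code), a site-level robots.txt opt-out of the kind creator groups describe might look like the following; `GPTBot` and `Google-Extended` are real crawler user-agent tokens published by OpenAI and Google, used here as examples:

```text
# robots.txt served from a site's root, e.g. https://example.com/robots.txt
# Block known AI-training crawlers while still allowing general search indexing.

User-agent: GPTBot          # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended # Google's AI-training control token
Disallow: /

User-agent: *               # all other crawlers, including search engines
Allow: /
```

As the trade groups point out, a directive like this only stops compliant crawlers at the point of collection; it cannot reserve rights over copies already scraped, or over training and other downstream uses of a work.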
“Every draft we see coming out is basically worse than the previous one,” Philibert tells Billboard. “As it stands, the code of practice leaves a lot to be desired.”
Caught Between Creators, Big Tech and U.S. Pressure
The general view within the music business is that the concessions introduced in the third draft are in response to pressure from tech lobbyists and outside pressure from the Trump administration, which is pursuing a wider deregulation agenda both at home and abroad. In April, the U.S. government’s Mission to the EU (USEU) sent a letter to the European Commission pushing back against the code, which it said contained “flaws.” The Trump administration is also demanding changes to the EU’s Digital Services Act, which governs digital services such as X and Facebook, and the EU’s Digital Markets Act, which looks to curb the power of large digital platforms.
The perception that the draft code favors Big Tech is not shared by their lobby group representatives, however.
“The code of practice for general-purpose AI is a vital step in implementing the EU’s AI Act, offering much-needed guidance [to tech providers] … However, the drafting process has been troubled from the very outset,” says Boniface de Champris, senior policy manager at the European arm of the Computer and Communications Industry Association (CCIA), which counts Google, Amazon, Meta and Apple among its members.
De Champris says that generative AI developers accounted for around 50 of the nearly 1,000 stakeholders that the EU consulted with on the drafting of the code, allowing the process “to veer off course, with months lost to debates that went beyond the AI Act’s agreed scope, including proposals explicitly rejected by EU legislators.” He calls a successful implementation of the code “a make-or-break moment for AI innovation in Europe.”
In response to the backlash from creator groups and the tech sector, the EU’s AI Office recently postponed publishing the final code of practice from May 2 to an unspecified date later this summer to allow for changes to be made.
The AI Act’s key provisions for generative AI models come into force Aug. 2, after which all of its regulations will be legally enforceable with fines of up to 35 million euros ($38 million, per current exchange rate), or up to 7% of global annual turnover, for large companies that breach the rules. Start-up businesses or smaller tech operations will receive proportionate financial punishments.
Creators Demand Stronger Rules
Meanwhile, work continues behind the scenes on what many music executives consider to be the key part of the legislation: the so-called “training template” that is being developed by the AI Office in parallel with the code of practice. The template, which is also overdue and causing equal concern among rights holders, will set the minimum requirements of training data that AI developers have to publicly disclose, including copyright-protected songs that they have used in the form of a “sufficiently detailed summary.”
According to preliminary proposals published in January, the training summary will not require tech companies to specify each work or song they have used to train AI systems, or be “technically detailed,” but will instead be a “generally comprehensive” list of the data sets used and sources.
“For us, the [transparency] template is the most important thing and what we have seen so far, which had been presented in the context of the code, is absolutely not meeting the required threshold,” says Lodovico Benvenuti, managing director of IFPI’s European office. “The act’s obligations on transparency are not only possible but they are needed in order to build a fair and competitive licensing market.”
“Unless we get detailed transparency, we won’t know what works have been used and if that happens most of this obligation will become an empty promise,” agrees IMPALA’s Philibert. “We hear claims [from the European Commission] that the training data is protected as a trade secret. But it’s not a trade secret to say: ‘This is what I trained on.’ The trade secret is how they put together their models, not the ingredients.”
“The big tech companies do not want to disclose [training data] because if they disclose, you will be able to understand if copyrighted material [has been used]. This is why they are trying to dilute this [requirement],” Brando Benifei, a Member of the European Parliament (MEP) and co-rapporteur of the AI Act, tells Billboard. Benifei is co-chair of a working group focused on the implementation of the AI Act and says that he and colleagues are urging policymakers to make sure that the final legislation achieves its overarching aim of defending creators’ rights.
“We think it is very important in this moment to protect human creativity, including the music sector,” warns Benifei, who this week co-hosted a forum in Brussels that brought together voices from music and other media to warn that current AI policies could erode copyright protections and compromise cultural integrity. Speakers, including ABBA member and CISAC president Björn Ulvaeus and Universal Music France CEO Olivier Nusse, stressed that AI must support — and not replace — human creativity, and criticized the lack of strong transparency requirements in AI development. They emphasized that AI-generated content should not be granted the same legal standing as human-created works. The event aligned with the “Stay True to the Act, Stay True to Culture” campaign, which advocates for equitable treatment and fair compensation for creators.
“A lot is happening, almost around the clock, in front of and behind the scenes,” ICMP’s Phelan tells Billboard. He says he and other creator groups are “contesting hard” with the EU’s executive branch, the European Commission, to achieve the transparency standards required by the music business.
“The implementation process doesn’t redefine the law or reduce what was achieved within the AI Act,” says Phelan. “But it does help to create the enforcement tools and it’s those tools which we are concerned about.”
SoundCloud CEO Eliah Seton has today (May 14) published an open letter clarifying the company’s position on AI.
This letter follows backlash last week after AI music expert and founder of Fairly Trained, Ed Newton-Rex, posted about SoundCloud’s terms of service quietly changing in February 2024 to allow the platform the ability to “inform, train, develop or serve as input” to AI models.
In his letter, Seton repeats what SoundCloud shared in a statement last week, noting that the platform “has never used artist content to train AI models. Not for music creation. Not for large language models. Not for anything that tries to mimic or replace your work. Period. We don’t build generative AI tools, and we don’t allow third parties to scrape or use artist content from SoundCloud to train them either.”
The letter then goes on to directly address the 2024 Terms of Service changes, which were made, Seton writes, “to clarify how we may use AI internally to improve the platform for both artists and fans. This includes powering smarter recommendations, search, playlisting, content tagging and tools that help prevent fraud.”
But he acknowledges that “the language in the Terms of Use was too broad and wasn’t clear enough. It created confusion, and that’s on us. That’s why we’re fixing it.” He writes that the company is revising the Terms of Use to make it “absolutely clear” that “SoundCloud will not use your content to train generative AI models that aim to replicate or synthesize your voice, music, or likeness.” He notes that the Terms of Service updates will be reflected online in the coming weeks.
Seton adds that given the rapidly changing landscape, “If there is an opportunity to use generative AI for the benefit of our human artists, we may make this opportunity available to our human artists with their explicit consent, via an opt-in mechanism. We don’t know what we don’t know, and we have a responsibility to give our human artists the opportunities, choices and control to advance their creative journeys.”
Finally, he notes that the platform is “making a formal commitment that any use of AI on SoundCloud will be based on consent, transparency and artist control.”
Read his complete letter here.
The U.K. government’s plans to allow artificial intelligence firms to use copyrighted work, including music, have been dealt another setback by the House of Lords.
An amendment to the data bill which required AI companies to disclose the copyrighted works their models are trained on was backed by peers in the upper chamber of U.K. Parliament, despite government opposition.
The U.K. government has proposed an “opt out” approach for copyrighted material, meaning that the creator or owner must explicitly choose for their work not to be eligible for training AI models. The amendment was tabled by crossbench peer Beeban Kidron and passed by 272 votes to 125 on Monday (May 12).
The data bill will now return to the House of Commons, though the government could remove Kidron’s amendment and send the bill back to the House of Lords next week.
Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.
“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn ($158bn) to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”
The “opt out” move has proved unpopular with many in the creative fields, particularly in the music space. Prior to the vote, over 400 British musicians including Elton John, Paul McCartney, Dua Lipa, Coldplay, Kate Bush and more signed an open letter calling on U.K. prime minister Sir Keir Starmer to update copyright laws to protect their work from AI companies.
The letter said that such an approach would threaten “the UK’s position as a creative powerhouse,” and signatories included major players such as Sir Lucian Grainge (Universal Music Group CEO), Jason Iley MBE (Sony Music UK CEO), Tony Harlow (Warner Music UK CEO) and Dickon Stainer (Universal Music UK CEO).
A spokesperson for the government responded to the letter, saying: “We want our creative industries and AI companies to flourish, which is why we’re consulting on a package of measures that we hope will work for both sectors.”
They added: “We’re clear that no changes will be considered unless we are completely satisfied they work for creators.”
Sophie Jones, chief strategy officer at the BPI, said: “The House of Lords has once again taken the right decision by voting to establish vital transparency obligations for AI companies. Transparency is crucial in ensuring that the creative industries can retain control over how their works are used, enabling both the licensing and enforcement of rights. If the Government chooses to remove this clause in the House of Commons, it would be preventing progress on a fundamental cornerstone which can help build trust and greater collaboration between the creative and tech sectors, and it would be at odds with its own ambition to build a licensing market in the UK.”
On Friday (May 9), SoundCloud encountered user backlash after Ed Newton-Rex, an AI music expert and the founder of Fairly Trained, posted on X that SoundCloud’s terms of service had quietly changed in February 2024 to allow user content to “inform, train, develop or serve as input” to AI models. Over the weekend, SoundCloud clarified via a statement, originally sent to The Verge and also obtained by Billboard, that reads in part: “SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes.”
The streaming service adds that this change was made last year “to clarify how content may interact with AI technologies within SoundCloud’s own platform,” including AI-powered personalized recommendation tools, streaming fraud detection, and more, and it apparently did not mean that SoundCloud was allowing external AI companies to train on its users’ songs.
“SoundCloud seems to claim the right to train on people’s uploaded music in their terms. I think they have major questions to answer over this. I checked the wayback machine – it seems to have been added to their terms on 12th Feb 2024. I’m a SoundCloud user and I can’t see any… pic.twitter.com/NIk7TP7K3C” — Ed Newton-Rex (@ednewtonrex) May 9, 2025
Over the years, SoundCloud has announced various partnerships with AI companies, including its acquisition of Singapore-based AI music curation company Musiio in 2022. SoundCloud’s statement added, “Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.” SoundCloud also has integrations in place with AI firms like Tuney, Voice-Swap, Fadr, Soundful, Tuttii, AIBeatz, TwoShot, Starmony and ACE Studio, and it has teamed up with content identification companies Pex and Audible Magic to ensure these integrations provide rights holders with proper credit and compensation.
The company doesn’t totally rule out the possibility that users’ works will be used for AI training in the future, but says “no such use has taken place to date,” adding that “SoundCloud will introduce robust internal permissioning controls to govern any potential future use. Should we ever consider using user content to train generative AI models, we would introduce clear opt-out mechanisms in advance—at a minimum—and remain committed to transparency with our creator community.”
Read the full statement from SoundCloud below.
“SoundCloud has always been and will remain artist-first. Our focus is on empowering artists with control, clarity, and meaningful opportunities to grow. We believe AI, when developed responsibly, can expand creative potential—especially when guided by principles of consent, attribution, and fair compensation.
SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes. In fact, we implemented technical safeguards, including a “no AI” tag on our site to explicitly prohibit unauthorized use.
The February 2024 update to our Terms of Service was intended to clarify how content may interact with AI technologies within SoundCloud’s own platform. Use cases include personalized recommendations, content organization, fraud detection, and improvements to content identification with the help of AI Technologies.
Any future application of AI at SoundCloud will be designed to support human artists, enhancing the tools, capabilities, reach and opportunities available to them on our platform. Examples include improving music recommendations, generating playlists, organizing content, and detecting fraudulent activity. These efforts are aligned with existing licensing agreements and ethical standards. Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.
We understand the concerns raised and remain committed to open dialogue. Artists will continue to have control over their work, and we’ll keep our community informed every step of the way as we explore innovation and apply AI technologies responsibly, especially as legal and commercial frameworks continue to evolve.”
On Friday afternoon, the U.S. Copyright Office released a report examining copyrights and generative AI training, which supported the idea of licensing copyrights when they are used in commercial AI training.
On Saturday (May 10), the nation’s top copyright official – Register of Copyrights Shira Perlmutter – was terminated by President Donald Trump. Her dismissal came shortly after the firing of the Librarian of Congress, Carla Hayden, who appointed and supervised Perlmutter. In response, Rep. Joe Morelle (D-NY) of the House Administration Committee, which oversees the Copyright Office and the Library of Congress, said that he feels it is “no coincidence [Trump] acted less than a day after [Perlmutter] refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.”
The Copyright Office’s report was largely seen as a win among copyright owners in the music industry, and it noted three key stances: the Office’s support for licensing copyrighted material when a “commercial” AI model uses it for training, its dismissal of compulsory licensing as the correct framework for a future licensing model, and its rejection of “the idea of any opt-out approach.”
The Office affirms that in “commercial” cases, licensing copyrights for training could be a “practical solution” and that using copyrights without a license “[goes] beyond established fair use boundaries.” It also notes that some commercial AI models “compete with [copyright owners] in existing markets.” However, if an AI model has been created for “purposes such as analysis or research – the types of uses that are critical to international competitiveness,” the Office says “the outputs are unlikely to substitute” for the works on which they were trained.
“In our view, American leadership in the AI space would best be furthered by supporting both of these world-class industries that contribute so much to our economic and cultural advancement. Effective licensing options can ensure that innovation continues to advance without undermining intellectual property rights,” the report reads.
While it is supportive of licensing efforts between copyright owners and AI firms, the report recognizes that most stakeholders do not hold support “for any statutory change” or “government intervention” in this area. “The Office believes…[that] would be premature at this time,” the report reads. Later, it adds “we agree with commenters that a compulsory licensing regime for AI training would have significant disadvantages. A compulsory license establishes fixed royalty rates and terms and can set practices in stone; they can become inextricably embedded in an industry and become difficult to undo. Premature adoption also risks stifling the development of flexible and creative market-based solutions. Moreover, compulsory licenses can take years to develop, often requiring painstaking negotiation of numerous operational details.”
The Office notes the perspectives of music-related organizations, like the National Music Publishers’ Association (NMPA), American Association of Independent Music (A2IM), and Recording Industry Association of America (RIAA), which all hold a shared distaste for the idea of a future compulsory or government-controlled license for AI training. Already, the music industry deals with a compulsory license for mechanical royalties, allowing the government to control rates for one of the types of royalties earned from streaming and sales.
“Most commenters who addressed this issue opposed or raised concerns about the prospect of compulsory licensing,” the report says. “Those representing copyright owners and creators argued that the compulsory licensing of works for use in AI training would be detrimental to their ability to control uses of their works, and asserted that there is no market failure that would justify it. A2IM and RIAA described compulsory licensing as entailing ‘below-market royalty rates, additional administrative costs, and… restrictions on innovation’… and NMPA saw it as ‘an extreme remedy that deprives copyright owners of their right to contract freely in the market, and takes away their ability to choose whom they do business with, how their works are used, and how much they are paid.’”
The Office leaves it up to copyright owners and AI companies to figure out the right way to license and compensate for training data, but it does explore a few options, including “compensation structures based on a percentage of revenue or profits.” If the free market fails to find the right licensing solution, the report suggests “targeted intervention such as [Extended Collective Licensing] ECL should be considered.”
ECL, which is employed in some European countries, would allow a collective management organization (CMO) to issue and administer blanket licenses for “all copyrighted works within a particular class,” much like the music industry is already accustomed to with organizations like The MLC (The Mechanical Licensing Collective) and performing rights organizations (PROs) like ASCAP and BMI. The difference between an ECL and a traditional CMO, however, is that under an ECL system, the CMO can license for those who have not affirmatively joined it yet. Though these ECL licenses are still negotiated in a “free market,” the government would “regulat[e] the overall system and exercis[e] some degree of oversight.”
While some AI firms expressed concerns that blanket licensing by copyright holders would lead to antitrust issues, the Copyright Office sided with copyright holders, saying “[the] courts have found that there is nothing intrinsically anticompetitive about the collective, or even blanket, licensing of copyrighted works, as long as certain safeguards are incorporated — such as ensuring that licensees can still obtain direct licenses from copyright owners as an alternative.”
This is a “pre-publication” version of a forthcoming final report, which will be published in the “near future without any substantive changes expected,” according to the Copyright Office. The Office noted this “pre-publication” was pushed out early in an attempt to address inquiries from Congress and key stakeholders.
It marks the Office’s third report about generative AI and its impact on copyrights since it launched an initiative on the matter in 2023. The first report, released July 31, 2024, focused on the topic of digital replicas. The second, from Jan. 29, 2025, addressed the copyrightability of outputs created with generative AI.
Udio, a generative AI music company backed by will.i.am, Common and a16z, has partnered with Audible Magic to fingerprint all tracks made using the platform at the moment they are created and to check the generated works, using Audible Magic’s “content control pipeline,” for any infringing copyrighted material.
By doing this, Udio and Audible Magic have created a way for streaming services and distributors to trace which songs submitted to their platforms were made with Udio’s AI. Udio also aims to proactively detect and block the use of copyrighted material that users don’t own or control.
“Working with Audible Magic allows us to create a transparent signal in the music supply chain. By fingerprinting at the point of generation, we’re helping establish a new benchmark for accountability and clarity in the age of generative music,” says Andrew Sanchez, co-founder of Udio. “We believe that this partnership will open the door for new licensing structures and monetization pathways that will benefit stakeholders across the industry from artists to rights holders to technology platforms.”
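Audible Magic’s actual technology is proprietary, but the general idea behind audio fingerprinting can be sketched in a few lines: derive a compact, searchable signature from the audio content itself, so the same recording can be recognized later regardless of filename or metadata. The toy Python below is entirely illustrative (it is not Audible Magic’s method); it hashes pairs of dominant spectral peaks:

```python
import hashlib

import numpy as np


def fingerprint(samples: np.ndarray, window: int = 4096) -> list[str]:
    """Toy spectral-peak fingerprint (illustrative only).

    Splits the audio into fixed windows, takes the dominant frequency
    bin of each window, and hashes consecutive peak pairs. Real
    fingerprinting systems are far more robust; this only shows the
    idea of turning audio into a compact, matchable signature.
    """
    peaks = []
    for start in range(0, len(samples) - window, window):
        spectrum = np.abs(np.fft.rfft(samples[start:start + window]))
        peaks.append(int(np.argmax(spectrum)))
    # Hash pairs of consecutive peaks so the signature captures the
    # sequence of spectral content, not just a single frame.
    return [hashlib.sha1(f"{a}:{b}".encode()).hexdigest()[:16]
            for a, b in zip(peaks, peaks[1:])]


# One second of a 440 Hz tone at 44.1 kHz; in a scheme like the one
# described above, hashes would be registered at generation time so
# downstream platforms can match later uploads against them.
tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
print(len(fingerprint(tone)))  # number of hashed peak pairs
```

The key property is determinism: the same audio always yields the same hashes, which is what makes registration at the point of generation a usable provenance signal downstream.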
Last summer, Udio and its top competitor Suno were both sued by the three major record companies for training their AI music models on the companies’ copyrighted master recordings. In the lawsuits, the majors argued this constituted copyright infringement “at an almost unimaginable scale.” Additionally, the lawsuits pointed out that the resulting AI-generated songs from Udio and Suno could “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”
Udio’s new partnership with Audible Magic stops short of promising to eliminate copyrighted material from its training process, as the majors want, but it shows that Udio is trying out alternative solutions to appease the music establishment. Suno also has a partnership with Audible Magic, announced in October 2024, though the two deals hold key differences. Suno’s integration focuses more specifically on its “audio inputs” and “covers” features, which allow users to generate songs based on an audio file they upload. With Audible Magic’s technology, Suno prevents users from making unauthorized uploads of copyrighted material.
“This partnership demonstrates Udio’s substantial commitment to rights holder transparency and content provenance,” says Kuni Takahashi, CEO of Audible Magic. “Registering files directly from the first-party source is a clean and robust way to identify the use of AI-generated music in the supply chain.”
BeatStars has partnered with Sureel, an AI music detection and attribution company, to let its creators “opt out” of having their works used in AI training.
To date, AI music companies in the United States are not required to honor opt-outs. But through this partnership, Sureel and BeatStars, the world’s largest music marketplace, hope to create clarity for AI music companies that wish to avoid legal and reputational risks, and to create a digital ledger that keeps track of beatmakers’ wishes regarding AI training.
Here’s how it works: BeatStars will send formal opt-out notices for every music asset and artist on its platform, and all of the creators’ choices will be documented in a portal that any AI company can access. By default, all tracks will be marked as shielded from AI training unless permission is granted. Companies can also look up creators’ wishes using Sureel’s API. The system will also automatically communicate those wishes via a robots.txt file, a standard way to tell AI companies crawling the web for new training data to stay away.
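As a rough illustration of the robots.txt mechanism mentioned above, a site can list the user-agent tokens of known AI training crawlers and disallow them while leaving other crawlers untouched. The sketch below is generic and hypothetical; it is not BeatStars’ or Sureel’s actual file, and GPTBot and CCBot are simply well-known examples of published AI crawler tokens:

```text
# Hypothetical robots.txt sketch: block known AI training crawlers
# while leaving ordinary search indexing alone.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everything else may crawl normally
User-agent: *
Allow: /
```

Note that robots.txt is purely advisory; compliance is voluntary, which is why the article stresses that U.S. AI companies are not currently required to honor such opt-outs.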
As the U.S. — and countries around the world — continue to debate how to properly regulate issues related to AI, start-ups in the private sector, like Sureel, are trying to find faster solutions, including tools for opting in and out of AI training, detection technology to flag and verify AI generated works, and more.
“This partnership is an extension of our longstanding commitment to put creators first,” said Abe Batshon, CEO of BeatStars, in a statement. “We recognize that some AI companies might not respect intellectual property, so we are taking definitive action to ensure our community’s work remains protected and valued. Ethical AI is the future, and we’re leading the charge in making sure creators are not left behind.”
“BeatStars isn’t just a marketplace — it’s one of the most important creator communities in the world,” added Dr. Tamay Aykut, founder/CEO of Sureel. “They’ve built their platform around trust, transparency, and putting artists in control. That’s exactly the type of environment where our technology belongs. This partnership proves you can scale innovation and ethics together — and shows the rest of the industry what responsible AI collaboration looks like.”
French streaming service Deezer reported in a company blog post on Wednesday (April 16) that it is now receiving over 20,000 fully AI-generated tracks on a daily basis, amounting to 18% of its daily uploaded content — nearly double what it reported in January 2025.
Back in January, Deezer launched a new AI detection tool to try to balance the interests of human creators with the rapidly growing number of AI-generated tracks uploaded to the service. At the time of the tool’s launch, Deezer said it had discovered that roughly 10,000 fully AI-generated tracks were being uploaded to the platform every day. Rather than banning these fully AI-generated tracks, Deezer uses its AI detection tool to remove them from its recommendation algorithm and editorial playlisting — meaning users can still find AI-generated music if they choose to, though it won’t be promoted to them.
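The policy described above amounts to filtering at recommendation time rather than at upload time: flagged tracks stay in the catalog and remain searchable, but are excluded from algorithmic promotion. A minimal Python sketch of that distinction (the `Track` type and the detector flag are hypothetical illustrations, not Deezer’s actual schema):

```python
from dataclasses import dataclass


@dataclass
class Track:
    title: str
    fully_ai_generated: bool  # assumed to be set upstream by a detector


catalog = [Track("Human Song", False), Track("Bot Banger", True)]


def recommendable(tracks):
    """Policy sketch: AI-flagged tracks are excluded from algorithmic
    recommendations and editorial playlists."""
    return [t for t in tracks if not t.fully_ai_generated]


def search(tracks, query):
    """But the same tracks remain findable via direct search."""
    return [t for t in tracks if query.lower() in t.title.lower()]


print([t.title for t in recommendable(catalog)])  # ['Human Song']
print([t.title for t in search(catalog, "bot")])  # ['Bot Banger']
```

The design choice is deliberate: demotion rather than deletion avoids policing what users may listen to while removing the incentive to flood the platform for algorithmic reach.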
This tool might even be underestimating the number of AI tracks on Deezer. At the time of its launch, Deezer noted that the tool can detect fully AI-generated music from certain models, including Suno and Udio, two of the most popular AI music models on the market today, “with the possibility to add on detection capabilities for practically any other similar tool as long as there’s access to relevant data examples,” as the company put it. Still, it’s possible there’s more AI-generated music out there than the tool can currently catch.
Deezer’s tool also does not detect or penalize partially AI-generated works, which likely make up a significant portion of AI-inflected songs today. According to guidance from the U.S. Copyright Office, as long as “a human author has determined sufficient expressive elements,” an AI-assisted work can be eligible for copyright protection.
Deezer is one of the first streaming services to create a policy against fully AI-generated songs, and the first to report how often such tracks are being uploaded to the service. As Billboard reported in February, most DSPs do not have AI-specific policies, with SoundCloud the only other streamer that has publicly stated that it penalizes AI music. Its policy is to “prohibit the monetization of songs and content that are exclusively generated through AI, encouraging creators to use AI as a tool rather than a replacement of human creation.”
Still, some other streaming services have taken steps to police some of the negative impacts of AI, even though their policies aren’t specific to AI. For example, Spotify, YouTube Music and others have created procedures for users to report impersonations of likenesses and voices, a major risk posed by (but not unique to) AI. Spotify also screens for users who spam the platform with too many uploads at once, a tactic used by bad actors trying to earn extra streaming royalties — often, though not always, by deploying quickly made AI-generated tracks.
“AI generated content continues to flood streaming platforms like Deezer, and we see no sign of it slowing down,” said Aurelien Herault, chief innovation officer at Deezer, in a statement. “Generative AI has the potential to positively impact music creation and consumption, but we need to approach the development with responsibility and care in order to safeguard the rights and revenues of artists and songwriters, while maintaining transparency for the fans. Thanks to our cutting-edge tool we are already removing fully AI generated content from the algorithmic recommendations.”