AI

Sony Music CEO/chairman Rob Stringer spoke to investors on Friday (June 13) about his vision for how generative AI can be integrated into his business, stating that the company is “going to do deals for new music AI products this year with those that want to construct the future with us the right way.”
To date, Stringer says the major music company has “actively engaged with more than 800 companies on ethical product creation, content protection and detection, enhancing metadata and audio tuning and translation amongst many other shared strategies.” He went on to say that he believes “AI will be a powerful tool in creating exciting new music that will be innovative and futuristic. There is no doubt about this,” but later added: “So far, there is too little collaboration, with the exception of a handful of more ethically minded players.”

Stringer’s statements about the emerging tech, which he made at Sony Group’s 2025 Business Segment Presentation, arrived just a week after news broke that Sony — and its competitors Universal Music Group and Warner Music Group — were engaged in talks with generative AI music companies Suno and Udio about creating a music license for their models. Suno and Udio are currently using copyrighted material, including music from the three majors, to train their models without a license. This spurred the trio to file blockbuster lawsuits against Suno and Udio in June 2024, in which they alleged copyright infringement on an “almost unimaginable scale.”

In his remarks on Friday, Stringer likened the current AI revolution to “the shift from ownership to streaming” just over a decade ago. “We will share all revenues with our artists and songwriters, whether from training or related to outputs, so they are appropriately compensated from day one of this new frontier,” he said.

“I do think that what AI is based on, which is learning models and training models based on existing content, means that those people who have paved the way for this technology do have to be fairly treated in terms of how they get recompense for that usage in the training model,” Stringer continued. “We have been pretty clear on this since day one that there is absolutely no backwards view as to what this technology will do. There will be artists, probably there will be young people sitting in bedrooms today, who will end up making the music of tomorrow through AI. But if they use existing content to blend something into something magical, then those original creators have to be fairly compensated. And I think that’s where we are at the moment.”

There are challenges ahead to figure out proper remuneration for musical artists from generative AI, as Billboard recently described in an analysis of the Suno and Udio licensing talks. While the AI license could borrow the streaming licensing model by having AI firms obtain blanket licenses for a company’s full musical catalog in exchange for payment, it remains to be seen how the payments would be divided up from there. On streaming services, it’s simple to determine how often any given song is consumed and to route money to songs based on their popularity. But for generative AI, the calculation would be far more complicated. To date, Suno and Udio do not offer guidance as to which tracks were used in the making of an output, and experts are divided on whether or not the technology needed to figure that out is ready yet.
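To make the division problem concrete, here is a minimal illustrative sketch (the fee, track names and attribution shares below are invented for illustration, not drawn from any reported deal) of how a flat blanket-license fee could be split pro rata if per-output attribution data existed:

```python
# Hypothetical sketch only: splitting a flat blanket-license fee pro rata by
# attribution share, the way streaming royalties follow consumption.
# The fee and the shares below are invented numbers, not reported figures.
blanket_fee = 1_000_000  # total license fee (made-up)

# Fraction of the model's outputs each track is credited with influencing.
# This is exactly the data that, per the reporting above, no one can reliably produce yet.
attribution_shares = {
    "Track A": 0.40,
    "Track B": 0.35,
    "Track C": 0.25,
}

payouts = {track: blanket_fee * share for track, share in attribution_shares.items()}
for track, amount in payouts.items():
    print(f"{track}: ${amount:,.0f}")
# Track A: $400,000 / Track B: $350,000 / Track C: $250,000
```

Without reliable attribution, the shares in that dictionary are unknowable, which is the gap described above.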

Also on Friday, Stringer expressed a desire to come to agreements with AI companies in a free market, stating: “With deals being carried out, it will be clear to governments that a functioning marketplace does exist, so there is no need for them to listen to the lobbying from the tech companies so heavily.”

Today, many AI companies don’t believe they need to license music or other copyrights at all, citing a “fair use” defense. But in his statements, Stringer was optimistic that this would change, citing the recent position of the U.S. Copyright Office, which said that “making commercial use of vast troves of copyrighted works, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.” One day after publishing this position about the value of copyrights in the AI age, however, the Register of Copyrights, Shira Perlmutter, was fired by President Donald Trump. (Perlmutter sued soon after, calling Trump’s move “unlawful and ineffective.”)

“We are between us and the AI tech platforms trying to find common ground,” Stringer continued. “And that common ground is not going to take a minute. It’s going to take a moment, and then it’s going to take the trial and error process, and we are in that era right now.”

The RIAA is throwing its support behind a blockbuster copyright lawsuit filed by Disney and Universal against artificial intelligence firm Midjourney, calling the case “a critical stand for human creativity.”
The lawsuit, filed earlier on Wednesday (June 11), claims Midjourney has stolen “countless” copyrighted works to train its AI image generator — and it marks the first foray of major Hollywood studios into a growing legal battle between AI firms and human artists.

Disney and Universal’s new case, which comes as major music companies litigate their own infringement suits against AI firms, “represents a critical stand for human creativity and responsible innovation,” RIAA chairman/CEO Mitch Glazier wrote in a statement.

“There is a clear path forward through partnerships that both further AI innovation and foster human artistry,” Glazier says. “Unfortunately, some bad actors — like Midjourney — see only a zero-sum, winner-take-all game. These short-sighted AI companies are stealing human-created works to generate machine-created, virtually identical products for their own commercial gain. That is not only a violation of black letter copyright law but also manifestly unfair.”

AI models like Midjourney are “trained” by ingesting millions of earlier works, teaching the machine to spit out new ones. Amid the meteoric rise of the new technology, dozens of lawsuits have been filed in federal court over that process, arguing that AI companies are violating copyrights on a massive scale.

AI firms argue such training is legal “fair use,” transforming all those old “inputs” into entirely new “outputs.” Whether that argument succeeds in court is a potentially trillion-dollar question — and one that has yet to be definitively answered by federal judges.

Disney and Universal’s new lawsuit against Midjourney is the latest such case — and immediately one of the most high-profile. The 110-page lawsuit claims the startup “helped itself” to vast amounts of copyrighted content, allowing its users to create images that “blatantly incorporate and copy Disney’s and Universal’s famous characters.”

“Midjourney is the quintessential copyright free-rider and a bottomless pit of plagiarism,” the companies wrote in their complaint, lodged in Los Angeles federal court on Wednesday morning.

The case echoes arguments made by Universal Music, Warner Music and Sony Music, which filed their own massive lawsuit against the AI music firms Udio and Suno last summer. In that case, the music giants say the tech startups have stolen music on an “unimaginable scale” to build models that are “trampling the rights of copyright owners.”

Music publishers have filed their own case, accusing Anthropic of infringing copyrighted song lyrics with its Claude model. Numerous other artists and creative industries — from newspapers to photographers to visual artists to software coders — have launched similar cases.

Disney and Universal’s complaint makes the same basic argument — that using copyrighted works to train AI is illegal — but does so by citing some of the most iconic movie and TV characters in history. Disney cites Darth Vader from Star Wars, Buzz Lightyear from Toy Story and Homer Simpson from The Simpsons; Universal mentions Shrek, the Minions, Kung Fu Panda and others.

“Piracy is piracy, and whether an infringing image or video is made with AI or another technology does not make it any less infringing,” lawyers for the studios write. “Midjourney’s conduct misappropriates Disney’s and Universal’s intellectual property and threatens to upend the bedrock incentives of U.S. copyright law that drive American leadership in movies, television, and other creative arts.”

Mindset Ventures, an international venture capital firm that focuses on early-stage tech investments, has launched Mindset MusicTech, an early-stage fund aimed at the music tech sector. In announcing the fund’s debut, Mindset Music revealed its first six investments: Audoo, un:hurd, Music AI, Aiode, ALLOY and OwlDuet.
Mindset Music is looking for startups that “enhance human creativity or improve efficiency” in the music business, partner Lucas Cantor Santigo said in a statement. “We’re looking to support companies with both capital and expertise, and give holistic support to those who are reimagining the music industry for the next generation.”

“The music tech space is extremely undervalued and has an enormous potential for disruption with AI and other new technologies,” added Daniel Ibri, managing partner of both Mindset Music and Mindset Ventures. “We plan to take advantage of this space and make a meaningful difference in the sector for the founders.”

Mindset Music’s roster of advisors includes Drew Thurlow, former senior vp of A&R at Sony Music; music attorney Cliff Fluet; entrepreneur Tomer Elbaz; and music and tech attorney Heather Rafter. 

The companies in Mindset Music’s portfolio provide tools for businesses and creators to operate more efficiently, and many incorporate AI technology. Music AI is an audio intelligence platform that provides what it calls “ethical AI solutions” for audio and music applications. The Salt Lake City-based startup’s products include stem separation, mixing and mastering.

Based in Tel Aviv, Aiode allows musicians to collaborate with virtual musicians using ethically trained AI. Those musicians’ virtual counterparts are compensated through a revenue-sharing model.

U.S.-based OwlDuet calls itself an “AI-powered co-pilot for music creators.” Its production tool purports to give users “Grammy-level production expertise without requiring advanced technical skills.”

Audoo seeks to improve public performance royalty reporting with music recognition technology. The London-based company works with performance rights organizations and collective management organizations. 

London-based ALLOY provides information that facilitates the sync licensing process. The platform gives artists, songwriters, labels and publishers a means to set sync deal parameters and distribute sync metadata to digital platforms. 

un:hurd has developed a music marketing and promotion platform that guides artists through the release cycle and connects artists with a network of playlist curators.

Timbaland has launched an AI entertainment company called Stage Zero, co-founded with Rocky Mudaliar and Zayd Portillo. Its first signee is an AI pop artist called TaTa, driven by Suno AI. The pop artist, along with a bevy of AI-driven creative tools, will all be […]

The three major music companies — Sony Music, Universal Music Group and Warner Music Group — are in talks with AI music companies Suno and Udio to license their works as training data, despite suing the two startups for infringement “on an almost unimaginable scale” last summer. Now, executives in the “ethical” or “responsible” AI music space are voicing displeasure that the alleged infringers could potentially benefit from their actions.
Several of those ethical AI companies said they were led to believe they would be rewarded by the record labels for going through the tough process of licensing music from the beginning, in what one AI music company founder previously told Billboard would be “a carrot and stick approach to AI,” penalizing those who raced ahead and trained models without permission or compensation.

“That’s all out the window,” that founder says now. “I was talking to another founder that does ethical AI voice models, and he told me, ‘F–k it. I don’t care anymore. Why does it matter to be ethical if we just get left behind?’”

Ed Newton-Rex, founder of non-profit Fairly Trained, which certifies ethically-trained AI models, adds: “If I were running a startup that had tried to do the right thing — respecting creators’ rights — and investors had rejected me because I wasn’t exploiting copyrighted work like so many others, and then this happened? I’d definitely be pissed off.”

Tracy Chan, CEO of AI music company Splash, told Billboard via email that she stands by her decision to license music from the start. “At Splash, being ethically trained wasn’t a debate — it was obvious,” she says. “We’re musicians and technologists. We believe AI should amplify creativity, not exploit it. We don’t need to scrape the world’s music to make that happen.”

It remains unclear how far along these licensing talks are between the major music companies and Suno and Udio, and whether deals will even come to fruition that would avert the blockbuster lawsuits. It’s common in costly and lengthy litigation like this for the two sides to discuss what it would look like to settle the dispute outside of court. Plus, licensing is what the majors have wanted from AI companies all along — does it matter how they come to it?

Multiple executives expressed fear that if the majors ditch the lawsuit and go for deals, they will set a bad precedent for the entire business. “Basically, if they do this deal, I think it would send a message to big tech that if you want to disrupt the music industry, you can do whatever you want and then ask for forgiveness later,” says Anthony Demekhin, CEO/co-founder of Tuney.

This, however, is not the first time the music business has considered a partnership with tech companies that were once its enemy. YouTube, for example, launched without first properly licensing all of the music on its platform. In his 2024 New Year’s address to staff, Lucian Grainge, CEO/chairman of UMG, alluded to this, and to how he would do it differently this time with his so-called “responsible AI” initiative. “In the past, new and often disruptive technology was simply released into the world, leaving the music community to develop the model by which artists would be fairly compensated and their rights protected,” he wrote, adding that “in a sharp break with the past,” UMG had formed a partnership with YouTube to “give artists a seat at the table” to shape the company’s AI products, and that the company would also collaborate “with several [other] platforms on numerous opportunities and approaches” in the AI space.

Another part of Grainge’s “responsible AI” initiative was “to lobby for ‘guardrails,’ that is public policies setting basic rules for AI.” Mike Pelczynski, co-founder of ethical AI voice company Voice-Swap, also worries that if these deals go through, they could weaken the music industry’s messaging to Capitol Hill, where bills like the NO FAKES Act are still in flux. “All the messaging we had before, all the hard-lining about responsible AI from the beginning, it’s gone,” he says. “Now, if policy makers look at [the music business] they might say, ‘Wait, what side should we take? Where do you stand?’”

If talks about licenses for Suno and Udio move forward, determining exactly how that license works, and how artists will be paid, will be complex. To date, almost all “ethical” AI companies are licensing their musical training data from production libraries, which offer simple, one-stop licenses for songs. Alex Bestall, CEO of music production house and AI company Rightsify, says those deals are typically structured as “flat-fee blanket licenses for a fixed term, often one to three years or in some cases perpetuity… all data licensing [music or otherwise] is pretty standardized at this point.”

It’s unclear if the deals the majors have discussed with Suno and Udio would follow this framework, but if they did, the question then becomes: how would the majors divide up those fees among their artists and writers? The Wall Street Journal reported that “the [music] companies want the startups to develop fingerprinting and attribution technology — similar to YouTube’s content ID — to track when and how a song is used.” In that scenario, the money received would be distributed to signees based on usage.

While there are a few startups working on music attribution technology right now, multiple experts tell Billboard they don’t think the tech is ready yet. “Attribution is nowhere,” says Newton-Rex, who also previously worked as vp of audio at Stability AI. “It’s not even close. There’s no system that I have seen that would do a decent job of accurately assigning attribution to what has inspired a given song.”

Even the possibility of deals between the parties has sparked a larger conversation about how to handle tech companies who ask for forgiveness — and not for permission — from the music business.

“If the two biggest offenders actually become the legal standard, it’s effectively like making Pirate Bay into Spotify,” says Demekhin. “I understand it from a business perspective because it’s the path of least resistance [to settle and get a license now]. But this could send a message to tech that could bite the industry on the next wave.”

Over the weekend, Bloomberg broke the news that Sony Music, Universal Music Group and Warner Music Group are in talks with Suno and Udio to license their music to the artificial intelligence startups. If the deals go through, they could help settle the major music companies’ massive copyright infringement lawsuits against Suno and Udio, filed last summer.
Billboard confirmed that the deals in discussion would include fees and possible equity stakes in Suno and Udio in exchange for licensing the music — which the two AI firms have already been using without a license since they launched over a year ago.

That sounds like a potentially peaceful resolution to this clash over the value of copyrighted music in the AI age. But between artist buy-in, questions over how payments would work and sensitivities on all sides, the deals could be harder to pull off than they seem. Here’s why.

You need everyone on board

Ask anyone who’s tried to license music before: it’s a tedious process. This is especially true when a song has multiple songwriters, all signed to different companies — which is to say, almost all of pop music today. Since any music used as training data for an AI model implicates both its master recording copyright and its underlying musical work copyright, Suno and Udio cannot stop at just licensing the majors’ shares of the music. They will also need agreements from independent labels and publishers to assemble a comprehensive catalog.

And what about the artists and songwriters signed to these companies? Generative AI music is still controversial today, and it is foreseeable that many creatives will not take kindly to their labels and publishers licensing their works for AI training without their permission. One can imagine that the music companies, to avoid a revolt from signees, would allow talent to either opt out of or opt in to this license — but as soon as they do that, they will be left with a patchwork catalog to license to Suno and Udio. Even if a song has one recording artist and five songwriters attached to it, it takes only one of those people to say no to this deal to eliminate the track from the training pool.
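As a rough illustration of how quickly that patchwork forms, here is a small hypothetical sketch (track names, rights holders and their decisions are invented) in which a track stays in the training pool only if every attached artist and songwriter opts in:

```python
# Hypothetical sketch: a track is licensable for AI training only if every
# rights holder attached to it has opted in. Names and decisions are invented.
catalog = [
    {"title": "Song X", "opt_ins": {"artist": True, "writer_1": True, "writer_2": False}},
    {"title": "Song Y", "opt_ins": {"artist": True, "writer_1": True}},
]

licensable = [t["title"] for t in catalog if all(t["opt_ins"].values())]
print(licensable)  # ['Song Y'] -- one "no" on Song X removes it from the pool
```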

Is the expiration date really the expiration date?

Licensing music to train AI models typically takes the form of a blanket license, granted by music companies, that lasts between one and three years, according to Alex Bestall, CEO of Rightsify, a production music library and AI company. Other times it will be done in perpetuity. Ed Newton-Rex, former vp of audio for Stability AI and founder of non-profit Fairly Trained, previously warned Billboard that companies licensing for a fixed term should watch what happens when the deal ends: “There’s no current way to just untrain a model, but you can add clauses to control what happens after the license is over,” he said.

Attribution technology seems great — but is still very new

Many experts feel that the best way to remunerate music companies and their artists and songwriters is to base any payouts on how often their work is used in producing the outputs of the AI model. This is known as “attribution” — and while there are companies, like Sureel AI and Musical AI, out there that specialize in this area, it’s still incredibly new. Multiple music industry sources tell Billboard they are not sure the current attribution models are quite ready yet, meaning any payment model based on that system may not be viable, at least in the near term.

Flat-fee licenses are most common, but leave a lot to be desired

Today, Bestall says that flat-fee blanket licenses are the most common form of AI licensing. Given the complexities of fractional licensing (i.e., needing all writers to agree) with mainstream music, the AI music companies that are currently licensing their training data typically go to production libraries, since those tend to own or control their music 100%. It’s hard to know whether this model will hold up with fractional licensing at the mainstream music companies — and how they’ll choose to divide up these fees among their artists.

Plus, Mike Pelczynski, founder of music tech advisory firm Forms and Shapes and former head of strategy for SoundCloud, wrote in a blog post that “flat-fee deals offer upfront payments but limit long-term remuneration. As AI scales beyond the revenue potential of these agreements, rights holders risk being locked into subpar compensation. Unlike past models, such as Facebook’s multi-year deals, AI platforms will evolve in months, not years, leaving IP holders behind. Flat fees, no matter how high, can’t match the exponential growth potential of generative AI.”

There’s still bad blood

The major music companies will likely have a hard time burying the hatchet with Suno and Udio, given how publicly the two companies have challenged them. Today, Suno and Udio are using major label music without any licenses, and that defiance must sting. Suno has also spoken out against the majors, saying in a court filing that “what the major record labels really don’t want is competition. Where Suno sees musicians, teachers and everyday people using a new tool to create original music, the labels see a threat to their market share.”

Given that context, there is real reputational risk here for the labels, who also represent many stakeholders with many different opinions on the topic — not all of them positive. For this licensing maneuver to work, the majors need to feel (or at least be able to position themselves as though) they came out on top in any negotiation, particularly in the eyes of their artists and songwriters, and show that the deals are in everyone’s best interests. It’s a lot to pull off.

Universal Music, Warner Music and Sony Music are in talks with Udio and Suno to license their music to the artificial intelligence startups, Billboard has confirmed, in deals that could help settle blockbuster lawsuits over AI music.
A year after the labels filed billion-dollar copyright cases against Udio and Suno, all three majors are discussing deals in which they would collect fees and receive equity in return for allowing the startups to use music to train their AI models, according to sources with knowledge of the talks. Bloomberg first reported the news on Sunday (June 1).

If reached, such deals would help settle the litigation and establish an influential precedent for how AI companies pay artists and music companies going forward, according to the sources, who requested anonymity to discuss the talks freely.

Such an agreement would mark an abrupt end to a dispute that each side has framed as an existential clash over the future of music. The labels say the startups have stolen music on an “unimaginable scale” to build their models and are “trampling the rights of copyright owners”; Suno and Udio argue back that the music giants are abusing intellectual property to crush upstart competition from firms they see as a “threat to their market share.”

Settlement talks are a common and continuous feature of almost any litigation and do not necessarily indicate that any kind of deal is imminent. It’s unclear how advanced such negotiations are, or what exactly each side would be getting. And striking an actual deal will require sorting out many complex and novel issues relating to brand-new technologies and business models.

Reps for all three majors declined to comment. Suno and Udio did not immediately return requests for comment. A rep for the RIAA, which helped coordinate the lawsuits, declined to comment.

If Suno and Udio do grant equity to the majors in an eventual settlement, it will call to mind the deals struck by Spotify in the late 2000s, in which the upstart technology company gave the music industry a partial ownership stake in return for business-critical content. Those deals turned out to be massively lucrative for the labels and helped Spotify grow into a streaming behemoth.

The cases against Udio and Suno are two of many lawsuits filed against AI firms by book authors, visual artists, newspaper publishers and other creative industries, who have argued AI companies are violating copyrights on a massive scale by using copyrighted works to train their models. AI firms argue that it’s legal fair use, transforming all those old works into “outputs” that are entirely new.

That trillion-dollar question remains unanswered in the courts, where many of the lawsuits, including those against Suno and Udio, are still in the earliest stages. But last month, the U.S. Copyright Office came out against the AI firms, releasing a report that said training was likely not fair use.

“Making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries,” the office wrote in the report.

Even with the legal landscape unsettled, some content companies have struck deals with AI firms. Just last week, the New York Times — which is actively litigating one of the copyright cases — struck a deal to license its editorial content to Amazon for AI training. Last fall, Microsoft signed a deal with HarperCollins to use the book publisher’s nonfiction works for AI model training.

Music companies have not struck any such sweeping deals, and instead have preferred more limited partnerships with tech companies for “ethical” AI tools. UMG signed a deal last summer with SoundLabs for an AI-powered voice tool for artists and another one in November with an AI music company called KLAY. Sony made an early-stage investment in March in a licensed AI platform called Vermillio.

The U.K. government’s plans to allow artificial intelligence firms to use copyrighted work, including music, have been dealt another setback by the House of Lords.
An amendment to the data bill which required AI companies to disclose the copyrighted works their models are trained on was backed by peers in the upper chamber of U.K. Parliament, despite government opposition.

The U.K.’s government has proposed an “opt out” approach for copyrighted material, meaning that the creator or owner must explicitly choose for their work not to be eligible for training AI models. The amendment was tabled by crossbench peer Beeban Kidron and was passed by 272 votes to 125 on Monday (May 12).

The data bill will now return to the House of Commons, though the government could remove Kidron’s amendment and send the bill back to the House of Lords next week.

Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.

“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn ($158bn) to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”

The “opt out” move has proved unpopular with many in the creative fields, particularly in the music space. Prior to the vote, more than 400 British musicians, including Elton John, Paul McCartney, Dua Lipa, Coldplay and Kate Bush, signed an open letter calling on U.K. prime minister Sir Keir Starmer to update copyright laws to protect their work from AI companies.

The letter said that such an approach would threaten “the UK’s position as a creative powerhouse,” and signatories included major players such as Sir Lucian Grainge (Universal Music Group CEO), Jason Iley MBE (Sony Music UK CEO), Tony Harlow (Warner Music UK CEO) and Dickon Stainer (Universal Music UK CEO).

A spokesperson for the government responded to the letter, saying: “We want our creative industries and AI companies to flourish, which is why we’re consulting on a package of measures that we hope will work for both sectors.”

They added: “We’re clear that no changes will be considered unless we are completely satisfied they work for creators.”

Sophie Jones, chief strategy officer for the BPI, said: “The House of Lords has once again taken the right decision by voting to establish vital transparency obligations for AI companies. Transparency is crucial in ensuring that the creative industries can retain control over how their works are used, enabling both the licensing and enforcement of rights. If the Government chooses to remove this clause in the House of Commons, it would be preventing progress on a fundamental cornerstone which can help build trust and greater collaboration between the creative and tech sectors, and it would be at odds with its own ambition to build a licensing market in the UK.”

On Friday (May 9), SoundCloud faced user backlash after AI music expert and Fairly Trained founder Ed Newton-Rex posted on X that SoundCloud’s terms of service had quietly changed in February 2024 to allow user content to “inform, train, develop or serve as input” to AI models. Over the weekend, SoundCloud clarified via a statement, originally sent to The Verge and also obtained by Billboard, that reads in part: “SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes.”
The streaming service adds that this change was made last year “to clarify how content may interact with AI technologies within SoundCloud’s own platform,” including AI-powered personalized recommendation tools, streaming fraud detection and more, and that it did not mean SoundCloud was allowing external AI companies to train on its users’ songs.

“SoundCloud seems to claim the right to train on people’s uploaded music in their terms. I think they have major questions to answer over this. I checked the wayback machine – it seems to have been added to their terms on 12th Feb 2024. I’m a SoundCloud user and I can’t see any… pic.twitter.com/NIk7TP7K3C”

— Ed Newton-Rex (@ednewtonrex) May 9, 2025

Over the years, SoundCloud has announced various partnerships with AI companies, including its acquisition of Singapore-based AI music curation company Musiio in 2022. SoundCloud’s statement added, “Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.” SoundCloud also has integrations in place with AI firms like Tuney, Voice-Swap, Fadr, Soundful, Tuttii, AIBeatz, TwoShot, Starmony and ACE Studio, and it has teamed up with content identification companies Pex and Audible Magic to ensure these integrations provide rights holders with proper credit and compensation.

The company doesn’t totally rule out the possibility that users’ works will be used for AI training in the future, but says “no such use has taken place to date,” adding that “SoundCloud will introduce robust internal permissioning controls to govern any potential future use. Should we ever consider using user content to train generative AI models, we would introduce clear opt-out mechanisms in advance—at a minimum—and remain committed to transparency with our creator community.”

Read the full statement from SoundCloud below.

“SoundCloud has always been and will remain artist-first. Our focus is on empowering artists with control, clarity, and meaningful opportunities to grow. We believe AI, when developed responsibly, can expand creative potential—especially when guided by principles of consent, attribution, and fair compensation.

SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes. In fact, we implemented technical safeguards, including a “no AI” tag on our site to explicitly prohibit unauthorized use.

The February 2024 update to our Terms of Service was intended to clarify how content may interact with AI technologies within SoundCloud’s own platform. Use cases include personalized recommendations, content organization, fraud detection, and improvements to content identification with the help of AI Technologies.

Any future application of AI at SoundCloud will be designed to support human artists, enhancing the tools, capabilities, reach and opportunities available to them on our platform. Examples include improving music recommendations, generating playlists, organizing content, and detecting fraudulent activity. These efforts are aligned with existing licensing agreements and ethical standards. Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.

We understand the concerns raised and remain committed to open dialogue. Artists will continue to have control over their work, and we’ll keep our community informed every step of the way as we explore innovation and apply AI technologies responsibly, especially as legal and commercial frameworks continue to evolve.”

BeatStars has partnered with Sureel, an AI music detection and attribution company, to give its creators the ability to “opt out” of their works being used in AI training.
AI music companies in the United States are not currently required to honor opt-outs, but through this partnership, Sureel and BeatStars, the world’s largest music marketplace, hope to create clarity for AI music companies that wish to avoid legal and reputational risks, and to build a digital ledger that keeps track of beatmakers’ wishes regarding AI training.

Here’s how it works: BeatStars will send formal opt-out notices for every music asset and artist on its platform, and all of the creators’ choices will be documented on a portal that any AI company can access. By default, all tracks will be marked as shielded from AI training unless permission is granted. Companies can also access creators’ wishes using Sureel’s API. The system will also automatically communicate creators’ wishes via a robots.txt file, a standard file that asks web crawlers, including those gathering AI training data, not to scrape the site.
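For context, a robots.txt file sits at the root of a website and lists which crawlers may visit which paths; compliance is voluntary on the crawler’s side. A minimal sketch of the kind of directives involved might look like the following (the user-agent names shown, such as OpenAI’s GPTBot and Common Crawl’s CCBot, are examples of real AI-related crawlers; the exact directives BeatStars and Sureel will emit are not specified in the announcement):

```
# Hypothetical robots.txt sketch: ask known AI training crawlers to stay out site-wide
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers (e.g., ordinary search indexing) may proceed as normal
User-agent: *
Disallow:
```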

As the U.S. — and countries around the world — continue to debate how to properly regulate issues related to AI, startups in the private sector, like Sureel, are trying to find faster solutions, including tools for opting in and out of AI training, detection technology to flag and verify AI-generated works, and more.

“This partnership is an extension of our longstanding commitment to put creators first,” said Abe Batshon, CEO of BeatStars, in a statement. “We recognize that some AI companies might not respect intellectual property, so we are taking definitive action to ensure our community’s work remains protected and valued. Ethical AI is the future, and we’re leading the charge in making sure creators are not left behind.”

“BeatStars isn’t just a marketplace — it’s one of the most important creator communities in the world,” added Dr. Tamay Aykut, founder/CEO of Sureel. “They’ve built their platform around trust, transparency, and putting artists in control. That’s exactly the type of environment where our technology belongs. This partnership proves you can scale innovation and ethics together — and shows the rest of the industry what responsible AI collaboration looks like.”