
Mindset Ventures, an international venture capital firm that focuses on early-stage tech investments, has launched Mindset MusicTech, an early-stage fund aimed at the music tech sector. In announcing the fund’s debut, Mindset Music revealed its first six investments: Audoo, un:hurd, Music AI, Aiode, ALLOY and OwlDuet.
Mindset Music is looking for startups that “enhance human creativity or improve efficiency” in the music business, partner Lucas Cantor Santigo said in a statement. “We’re looking to support companies with both capital and expertise, and give holistic support to those who are reimagining the music industry for the next generation.”

“The music tech space is extremely undervalued and has an enormous potential for disruption with AI and other new technologies,” added Daniel Ibri, managing partner of both Mindset Music and Mindset Ventures. “We plan to take advantage of this space and make a meaningful difference in the sector for the founders.”

Mindset Music’s roster of advisors includes Drew Thurlow, former senior vp of A&R at Sony Music; music attorney Cliff Fluet; entrepreneur Tomer Elbaz; and music and tech attorney Heather Rafter. 

The companies in Mindset Music’s portfolio provide tools for businesses and creators to operate more efficiently, and many incorporate AI technology. Music AI is an audio intelligence platform that provides what it calls “ethical AI solutions” for audio and music applications. The Salt Lake City-based startup’s products include stem separation, mixing and mastering.

Based in Tel Aviv, Aiode allows musicians to collaborate with virtual musicians using ethically trained AI. Those musicians’ virtual counterparts are compensated through a revenue-sharing model.

U.S.-based OwlDuet calls itself an “AI-powered co-pilot for music creators.” Its production tool purports to allow users to create “Grammy-level production expertise without requiring advanced technical skills.” 

Audoo seeks to improve public performance royalty reporting with music recognition technology. The London-based company works with performance rights organizations and collective management organizations. 

London-based ALLOY provides information that facilitates the sync licensing process. The platform gives artists, songwriters, labels and publishers a means to set sync deal parameters and distribute sync metadata to digital platforms. 

un:hurd has developed a music marketing and promotion platform that guides artists through the release cycle and connects artists with a network of playlist curators.

Timbaland has launched an AI entertainment company called Stage Zero, co-founded with Rocky Mudaliar and Zayd Portillo. Its first signee is an AI pop artist called TaTa, driven by Suno AI. The pop artist, along with a bevy of AI-driven creative tools, will all be […]

The three major music companies — Sony Music, Universal Music Group and Warner Music Group — are in talks with AI music companies Suno and Udio to license their works as training data, despite suing the two startups for infringement “on an almost unimaginable scale” last summer. Now, executives in the “ethical” or “responsible” AI music space are voicing displeasure that the alleged infringers could potentially benefit from their actions.
Several of those ethical AI companies said they were led to believe they would be rewarded by the record labels for going through the tough process of licensing music from the beginning, in what one AI music company founder previously told Billboard would be “a carrot and stick approach to AI,” penalizing those who raced ahead and trained models without permission or compensation.

“That’s all out the window,” that founder says now. “I was talking to another founder that does ethical AI voice models, and he told me, ‘F–k it. I don’t care anymore. Why does it matter to be ethical if we just get left behind?’”

Ed Newton-Rex, founder of non-profit Fairly Trained, which certifies ethically-trained AI models, adds: “If I were running a startup that had tried to do the right thing — respecting creators’ rights — and investors had rejected me because I wasn’t exploiting copyrighted work like so many others, and then this happened? I’d definitely be pissed off.”

Tracy Chan, CEO of AI music company Splash, told Billboard via email that she stands by her decision to license music from the start. “At Splash, being ethically trained wasn’t a debate — it was obvious,” she says. “We’re musicians and technologists. We believe AI should amplify creativity, not exploit it. We don’t need to scrape the world’s music to make that happen.”

It remains unclear how far along these licensing talks are between the major music companies and Suno and Udio, and if deals will even come to fruition to avert the blockbuster lawsuits. It’s common in costly and lengthy litigation like this for the two sides to discuss what it would look like to settle the dispute outside of court. Plus, licensing is what the majors have wanted from AI companies all along — does it matter how they come to it?

Multiple executives expressed fear that if the majors ditch the lawsuit and go for deals, they will set a bad precedent for the entire business. “Basically, if they do this deal, I think it would send a message to big tech that if you want to disrupt the music industry, you can do whatever you want and then ask for forgiveness later,” says Anthony Demekhin, CEO/co-founder of Tuney.

This, however, is not the first time the music business has considered a partnership with tech companies that were once their enemy. YouTube, for example, initially launched without properly licensing all of the music on its platform. In his 2024 New Year’s address to staff, Lucian Grainge, CEO/chairman of UMG, alluded to this, and how he would do it differently this time with his so-called “responsible AI” initiative. “In the past, new and often disruptive technology was simply released into the world, leaving the music community to develop the model by which artists would be fairly compensated and their rights protected,” he wrote, adding that “in a sharp break with the past,” UMG had formed a partnership with YouTube to “give artists a seat at the table” to shape the company’s AI products, and that the company would also collaborate “with several [other] platforms on numerous opportunities and approaches” in the AI space.

Another part of Grainge’s “responsible AI” initiative was “to lobby for ‘guardrails,’ that is public policies setting basic rules for AI.” Mike Pelczynski, co-founder of ethical AI voice company Voice-Swap, also worries that if these deals go through, they could weaken the music industry’s messaging to Capitol Hill, where bills like the NO FAKES Act are still in flux. “All the messaging we had before, all the hard-lining about responsible AI from the beginning, it’s gone,” he says. “Now, if policy makers look at [the music business] they might say, ‘Wait, what side should we take? Where do you stand?’”

If talks about licenses for Suno and Udio move forward, determining exactly how that license works, and how artists will be paid, will be complex. To date, almost all “ethical” AI companies are licensing their musical training data from production libraries, which offer simple, one-stop licenses for songs. Alex Bestall, CEO of music production house and AI company Rightsify, says that the structure of those deals is typically “flat-fee blanket licenses for a fixed term, often one to three years or in some cases perpetuity… all data licensing [music or otherwise] is pretty standardized at this point.”

It’s unclear if the deals the majors have discussed with Suno and Udio will follow this framework, but if they did, the question becomes how the majors would divide up those fees for their artists and writers. The Wall Street Journal reported that “the [music] companies want the startups to develop fingerprinting and attribution technology — similar to YouTube’s content ID — to track when and how a song is used.” In that scenario, the money received would be given to signees based on usage.
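In that usage-based scenario, the arithmetic would amount to a pro-rata split of the license fee. A minimal sketch, assuming a flat fee pool and per-song usage counts from some attribution system; no actual payout formula has been disclosed, and the figures below are invented:

```python
# Hypothetical pro-rata payout: split a flat license fee among songs
# in proportion to how often an attribution system detected each song
# being used. Purely illustrative -- no real formula has been disclosed.

def pro_rata_payouts(fee_pool: float, usage_counts: dict) -> dict:
    """Map each song to its share of fee_pool, weighted by detected usage."""
    total = sum(usage_counts.values())
    if total == 0:
        return {song: 0.0 for song in usage_counts}
    return {song: fee_pool * count / total
            for song, count in usage_counts.items()}

payouts = pro_rata_payouts(
    1_000_000,  # hypothetical flat blanket-license fee
    {"song_a": 600, "song_b": 300, "song_c": 100},
)
# song_a receives 600,000; song_b 300,000; song_c 100,000
```

Everything beyond the raw division (recoupment, label-artist splits, publisher shares) would sit on top of a calculation like this, which is part of why the payment question is so contested.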

While there are a few startups working on music attribution technology right now, multiple experts tell Billboard they don’t think the tech is ready yet. “Attribution is nowhere,” says Newton-Rex, who also previously worked as vp of audio at Stability AI. “It’s not even close. There’s no system that I have seen that would do a decent job of accurately assigning attribution to what has inspired a given song.”

Even the possibility of deals between the parties has sparked a larger conversation about how to handle tech companies who ask for forgiveness — and not for permission — from the music business.

“If the two biggest offenders actually become the legal standard, it’s effectively like making Pirate Bay into Spotify,” says Demekhin. “I understand it from a business perspective because it’s the path of least resistance [to settle and get a license now]. But this could send a message to tech that could bite the industry on the next wave.”

Over the weekend, Bloomberg broke the news that Sony Music, Universal Music Group and Warner Music Group are in talks with Suno and Udio to license their music to the artificial intelligence startups. If the deals go through, they could help settle the major music companies’ massive copyright infringement lawsuits against Suno and Udio, filed last summer.
Billboard confirmed that the deals in discussion would include fees and possible equity stakes in Suno and Udio in exchange for licensing the music — which the two AI firms have already been using without a license since they launched over a year ago.

That sounds like a potentially peaceful resolution to this clash over the value of copyrighted music in the AI age. But between artist buy-in, questions over how payments would work and sensitivities on all sides, the deals could be harder to pull off than they seem. Here’s why.

You need everyone on board

Ask anyone who’s tried to license music before: it’s a tedious process. This is especially true when a song has multiple songwriters, all signed to different companies — which is to say, almost all of pop music today. Since any music used as training data for an AI model implicates both its master recording copyright and its underlying musical work copyright, Suno and Udio cannot stop at licensing just the majors’ shares of the music. They will need agreements from independent labels and publishers, too, to use a comprehensive catalog.

And what about the artists and songwriters signed to these companies? Generative AI music is still controversial today, and it is foreseeable that a large number of creatives will not take kindly to their labels and publishers licensing their works for AI training without their permission. One can imagine that the music companies, to avoid a revolt from signees, would allow talent to either opt out of or opt in to this license — but as soon as they do that, they will be left with a patchwork catalog to license to Suno and Udio. If a song has one recording artist and five songwriters attached to it, it takes only one of those six people saying no to eliminate the track from the training pool.
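The all-or-nothing nature of that consent problem is easy to model: a track enters the training pool only when every attached rights holder has opted in. A hypothetical sketch, with invented track and holder names:

```python
# Hypothetical opt-in filter: a track is licensable for AI training
# only if every rights holder attached to it has consented.

def licensable_tracks(catalog: dict, consents: set) -> list:
    """catalog maps track -> list of rights holders; consents holds opted-in holders."""
    return [track for track, holders in catalog.items()
            if all(holder in consents for holder in holders)]

catalog = {
    "track_1": ["artist_a", "writer_1", "writer_2"],
    "track_2": ["artist_b", "writer_2"],  # artist_b has not opted in
}
pool = licensable_tracks(catalog, {"artist_a", "writer_1", "writer_2"})
# Only track_1 survives; one missing consent removes track_2 entirely
```

The more splits a song has, the more likely a single holdout knocks it out, which is why an opt-in catalog ends up being a patchwork.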

Is the expiration date really the expiration date?

Licensing music to train AI models typically takes the form of a blanket license, granted by music companies, that lasts between one and three years, according to Alex Bestall, CEO of Rightsify, a production music library and AI company. Other times it will be done in perpetuity. Ed Newton-Rex, former vp of audio for Stability AI and founder of non-profit Fairly Trained, previously warned Billboard that companies that license for a fixed term should look out for what happens when a deal term ends: “There’s no current way to just untrain a model, but you can add clauses to control what happens after the license is over,” he said.

Attribution technology seems great — but is still very new

Many experts feel that the best way to remunerate music companies and their artists and songwriters is to base any payouts on how often their work is used in producing the outputs of the AI model. This is known as “attribution” — and while there are companies, like Sureel AI and Musical AI, out there that specialize in this area, it’s still incredibly new. Multiple music industry sources tell Billboard they are not sure the current attribution models are quite ready yet, meaning any payment model based on that system may not be viable, at least in the near term.

Flat-fee licenses are most common, but leave a lot to be desired

Today, Bestall says that flat-fee blanket licenses are the most common form of AI licensing. Given the complexities of fractional licensing (i.e., needing all writers to agree) with mainstream music, the AI music companies that are currently licensing their training data are typically going to production libraries, since those tend to own or control their music 100%. It’s hard to know if this model will hold up with fractional licensing at the mainstream music companies — and how they’ll choose to divide up these fees to their artists.

Plus, Mike Pelczynski, founder of music tech advisory firm Forms and Shapes and former head of strategy for SoundCloud, wrote in a blog post that “flat-fee deals offer upfront payments but limit long-term remuneration. As AI scales beyond the revenue potential of these agreements, rights holders risk being locked into subpar compensation. Unlike past models, such as Facebook’s multi-year deals, AI platforms will evolve in months, not years, leaving IP holders behind. Flat fees, no matter how high, can’t match the exponential growth potential of generative AI.”

There’s still bad blood

The major music companies will likely have a hard time burying the hatchet with Suno and Udio, given how publicly the two companies have challenged them. Today, Suno and Udio are using major label music without any licenses, and that defiance must sting. Suno has also spoken out against the majors, saying in a court filing that “what the major record labels really don’t want is competition. Where Suno sees musicians, teachers and everyday people using a new tool to create original music, the labels see a threat to their market share.”

Given that context, there is a real reputational risk here for the labels, who also represent many stakeholders with many different opinions on the topic — not all of them positive. For this licensing maneuver to work, the majors need to be able to feel (or at least position themselves to look like) they came out on top in any negotiation, particularly to their artists and songwriters, and show that the deals are in everyone’s best interests. It’s a lot to pull off.

Universal Music, Warner Music and Sony Music are in talks with Udio and Suno to license their music to the artificial intelligence startups, Billboard has confirmed, in deals that could help settle blockbuster lawsuits over AI music.
A year after the labels filed billion-dollar copyright cases against Udio and Suno, all three majors are discussing deals in which they would collect fees and receive equity in return for allowing the startups to use music to train their AI models, according to sources with knowledge of the talks. Bloomberg first reported the news on Sunday (June 1).

If reached, such deals would help settle the litigation and establish an influential precedent for how AI companies pay artists and music companies going forward, according to the sources, who requested anonymity to discuss the talks freely.

Such an agreement would mark an abrupt end to a dispute that each side has framed as an existential clash over the future of music. The labels say the startups have stolen music on an “unimaginable scale” to build their models and are “trampling the rights of copyright owners”; Suno and Udio argue back that the music giants are abusing intellectual property to crush upstart competition from firms they see as a “threat to their market share.”

Settlement talks are a common and continuous feature of almost any litigation and do not necessarily indicate that any kind of deal is imminent. It’s unclear how advanced such negotiations are, or what exactly each side would be getting. And striking an actual deal will require sorting out many complex and novel issues relating to brand-new technologies and business models.

Reps for all three majors declined to comment. Suno and Udio did not immediately return requests for comment. A rep for the RIAA, which helped coordinate the lawsuits, declined to comment.

If Suno and Udio do grant equity to the majors in an eventual settlement, it will call to mind the deals struck by Spotify in the late 2000s, in which the upstart technology company gave the music industry a partial ownership stake in return for business-critical content. Those deals turned out to be massively lucrative for the labels and helped Spotify grow into a streaming behemoth.

The cases against Udio and Suno are two of many lawsuits filed against AI firms by book authors, visual artists, newspaper publishers and other creative industries, who have argued AI companies are violating copyrights on a massive scale by using copyrighted works to train their models. AI firms argue that it’s legal fair use, transforming all those old works into “outputs” that are entirely new.

That trillion-dollar question remains unanswered in the courts, where many of the lawsuits, including those against Suno and Udio, are still in the earliest stages. But last month, the U.S. Copyright Office came out against the AI firms, releasing a report that said training was likely not fair use.

“Making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries,” the office wrote in the report.

Even with the legal landscape unsettled, some content companies have struck deals with AI firms. Just last week, the New York Times — which is actively litigating one of the copyright cases — struck a deal to license its editorial content to Amazon for AI training. Last fall, Microsoft signed a deal with HarperCollins to use the book publisher’s nonfiction works for AI model training.

Music companies have not struck any such sweeping deals, and instead have preferred more limited partnerships with tech companies for “ethical” AI tools. UMG signed a deal last summer with SoundLabs for an AI-powered voice tool for artists and another one in November with an AI music company called KLAY. Sony made an early-stage investment in March in a licensed AI platform called Vermillio.

The U.K. government’s plans to allow artificial intelligence firms to use copyrighted work, including music, have been dealt another setback by the House of Lords.
An amendment to the data bill which required AI companies to disclose the copyrighted works their models are trained on was backed by peers in the upper chamber of U.K. Parliament, despite government opposition.

The U.K. government has proposed an “opt out” approach for copyrighted material, meaning that the creator or owner must explicitly choose for their work not to be eligible for training AI models. The amendment was tabled by crossbench peer Beeban Kidron and was passed by 272 votes to 125 on Monday (May 12).

The data bill will now return to the House of Commons, though the government could remove Kidron’s amendment and send the bill back to the House of Lords next week.

Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.

“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn ($158bn) to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”

The “opt out” move has proved unpopular with many in the creative fields, particularly in the music space. Prior to the vote, over 400 British musicians including Elton John, Paul McCartney, Dua Lipa, Coldplay, Kate Bush and more signed an open letter calling on U.K. prime minister Sir Keir Starmer to update copyright laws to protect their work from AI companies. 

The letter said that such an approach would threaten “the UK’s position as a creative powerhouse,” and signatories included major players such as Sir Lucian Grainge (Universal Music Group CEO), Jason Iley MBE (Sony Music UK CEO), Tony Harlow (Warner Music UK CEO) and Dickon Stainer (Universal Music UK CEO).

A spokesperson for the government responded to the letter, saying: “We want our creative industries and AI companies to flourish, which is why we’re consulting on a package of measures that we hope will work for both sectors.”

They added: “We’re clear that no changes will be considered unless we are completely satisfied they work for creators.”

Sophie Jones, chief strategy officer at the BPI, said: “The House of Lords has once again taken the right decision by voting to establish vital transparency obligations for AI companies. Transparency is crucial in ensuring that the creative industries can retain control over how their works are used, enabling both the licensing and enforcement of rights. If the Government chooses to remove this clause in the House of Commons, it would be preventing progress on a fundamental cornerstone which can help build trust and greater collaboration between the creative and tech sectors, and it would be at odds with its own ambition to build a licensing market in the UK.”

On Friday (May 9), SoundCloud encountered user backlash after AI music expert and founder of Fairly Trained, Ed Newton-Rex, posted on X that SoundCloud’s terms of service had quietly changed in February 2024 to allow user content to “inform, train, develop or serve as input” to AI models. Over the weekend, SoundCloud clarified via a statement, originally sent to The Verge and also obtained by Billboard, that reads in part: “SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes.”
The streaming service adds that this change was made last year “to clarify how content may interact with AI technologies within SoundCloud’s own platform,” including AI-powered personalized recommendation tools, streaming fraud detection, and more, and it apparently did not mean that SoundCloud was allowing external AI companies to train on its users’ songs.

“SoundCloud seems to claim the right to train on people’s uploaded music in their terms. I think they have major questions to answer over this. I checked the wayback machine – it seems to have been added to their terms on 12th Feb 2024. I’m a SoundCloud user and I can’t see any… pic.twitter.com/NIk7TP7K3C” — Ed Newton-Rex (@ednewtonrex), May 9, 2025

Over the years, SoundCloud has announced various partnerships with AI companies, including its acquisition of Singapore-based AI music curation company Musiio in 2022. SoundCloud’s statement added, “Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.” SoundCloud also has integrations in place with AI firms like Tuney, Voice-Swap, Fadr, Soundful, Tuttii, AIBeatz, TwoShot, Starmony and ACE Studio, and it has teamed up with content identification companies Pex and Audible Magic to ensure these integrations provide rights holders with proper credit and compensation.

The company doesn’t totally rule out the possibility that users’ works will be used for AI training in the future, but says “no such use has taken place to date,” adding that “SoundCloud will introduce robust internal permissioning controls to govern any potential future use. Should we ever consider using user content to train generative AI models, we would introduce clear opt-out mechanisms in advance—at a minimum—and remain committed to transparency with our creator community.”

Read the full statement from SoundCloud below.

“SoundCloud has always been and will remain artist-first. Our focus is on empowering artists with control, clarity, and meaningful opportunities to grow. We believe AI, when developed responsibly, can expand creative potential—especially when guided by principles of consent, attribution, and fair compensation.

SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes. In fact, we implemented technical safeguards, including a “no AI” tag on our site to explicitly prohibit unauthorized use.

The February 2024 update to our Terms of Service was intended to clarify how content may interact with AI technologies within SoundCloud’s own platform. Use cases include personalized recommendations, content organization, fraud detection, and improvements to content identification with the help of AI Technologies.

Any future application of AI at SoundCloud will be designed to support human artists, enhancing the tools, capabilities, reach and opportunities available to them on our platform. Examples include improving music recommendations, generating playlists, organizing content, and detecting fraudulent activity. These efforts are aligned with existing licensing agreements and ethical standards. Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.

We understand the concerns raised and remain committed to open dialogue. Artists will continue to have control over their work, and we’ll keep our community informed every step of the way as we explore innovation and apply AI technologies responsibly, especially as legal and commercial frameworks continue to evolve.”

BeatStars has partnered with Sureel, an AI music detection and attribution company, to provide its creators with the ability to express their desire to “opt out” of their works being used in AI training.
To date, AI music companies in the United States are not required to honor opt-outs. Through this partnership, Sureel and BeatStars, the world’s largest music marketplace, hope to offer clarity to AI music companies wishing to avoid legal and reputational risks, and to create a digital ledger that keeps track of beatmakers’ wishes regarding AI training.

Here’s how it works: BeatStars will send formal opt-out notices for every music asset and artist on its platform, and all of the creators’ choices will be documented on a portal that any AI company can access. By default, all tracks will be marked as shielded from AI training unless permission is granted. Companies can also access creators’ wishes using Sureel’s API. The system will also automatically communicate creators’ wishes via a robots.txt file, a standard way to signal AI companies that are crawling the web for new training data.
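As a rough illustration of the robots.txt piece, a site names crawler user-agents and denies them access. The agents below are well-known AI training crawlers, listed as examples only; which strings BeatStars and Sureel actually emit has not been disclosed:

```text
# Illustrative robots.txt entries disallowing common AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Compliance with robots.txt is voluntary on the crawler’s side, which is why the partnership pairs it with a consent ledger and an API rather than relying on the file alone.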

As the U.S. — and countries around the world — continue to debate how to properly regulate issues related to AI, start-ups in the private sector, like Sureel, are trying to find faster solutions, including tools for opting in and out of AI training, detection technology to flag and verify AI generated works, and more.

“This partnership is an extension of our longstanding commitment to put creators first,” said Abe Batshon, CEO of BeatStars, in a statement. “We recognize that some AI companies might not respect intellectual property, so we are taking definitive action to ensure our community’s work remains protected and valued. Ethical AI is the future, and we’re leading the charge in making sure creators are not left behind.”

“BeatStars isn’t just a marketplace — it’s one of the most important creator communities in the world,” added Dr. Tamay Aykut, founder/CEO of Sureel. “They’ve built their platform around trust, transparency, and putting artists in control. That’s exactly the type of environment where our technology belongs. This partnership proves you can scale innovation and ethics together — and shows the rest of the industry what responsible AI collaboration looks like.”

Generative AI — the creation of compositions from nothing in seconds — isn’t disrupting music licensing; it’s accelerating the economic impact of a system that was never built to last. Here’s the sick reality: If a generative AI company wanted to ethically license Travis Scott’s “Sicko Mode” to train their models, they’d need approvals from more than 30 rights holders, with that number doubling based on rights resold or reassigned after the track’s release. Finding and engaging with all of those parties? Good luck. No unified music database exists to identify rights holders, and even if it did, outdated information, unanswered emails, and, in some cases, deceased rights holders or a group potentially involved in a rap beef make the process a nonstarter.
The music licensing system, or lack thereof, is so fragmented that most AI companies don’t even try. They steal first and deal with lawsuits later. Clearing “Sicko Mode” isn’t just difficult; it’s impossible — a cold example of the complexity of licensing commercial music for AI training that seems left out of most debates surrounding ethical approaches.

For those outside the music business, it’s easy to assume that licensing a song is as simple as getting permission from the artist. But in reality, every track is a tangled web of rights, split across songwriters, producers, publishers and administrators, each with their own deals, disputes and gatekeepers. Now multiply the chaos of clearing one track by the millions of tracks needed for an AI training set, and you’ll quickly see why licensing commercial music for AI at scale is a fool’s errand today.

Generative AI is exposing and accelerating the weaknesses in the traditional music revenue model by flooding the market with more music, driving down licensing fees and further complicating ownership rights. As brands and content creators turn to AI-generated compositions, demand for traditional catalogs will decline, impacting synch and licensing revenues once projected to grow over the next decade. 

Hard truths don’t wait for permission. The entrance of generative AI has exposed the broken system of copyright management and its outdated black-box monetization methods. 

The latest RIAA report shows that while U.S. paid subscriptions have crossed 100 million and revenue hit a record $17.7 billion in 2024, streaming growth has nearly halved — from 8.1% in 2023 to just 3.6% in 2024. The market is plateauing, and the question isn’t if the industry needs a new revenue driver — it’s where that growth will come from. Generative AI is that next wave. If architected ethically, it won’t just create new technological innovation in music; it will create revenue. 

Ironically, the very thing being painted as an existential threat to the industry may be the thing capable of saving it. AI is reshaping music faster than anyone expected, yet its ethical foundation remains unwritten. So we need to move fast.

A Change is Gonna Come: Why Music Needs Ethical AI as a Catalyst for Monetization 

Let’s start by stopping. Generative AI isn’t our villain. It’s not here to replace artistry. It’s a creative partner, a collaborator, a tool that lets musicians work faster, dream bigger and push boundaries in ways we’ve never seen before. While some still doubt AI’s potential because today’s ethically trained outputs may sound like they’re in their infancy, let me be clear: It’s evolving fast. What feels novel now will be industry-standard tomorrow.

Our problem is, and always has been, a lack of transparency. Many AI platforms have trained on commercial catalogs without permission (first they denied it, then they came clean), extracting value without compensation. "Sicko Mode" very likely among them. That behavior isn't just unethical; it's economically destructive, devaluing catalogs as imitation tracks saturate the market while the underlying copyrights earn nothing.

If we’re crying about market flooding right now, we’re missing the point. Because what if rights holders and artists participated in those tracks? Energy needs to go into rethinking how music is valued and monetized across licensing, ad tech and digital distribution. Ethical AI frameworks can ensure proper attribution, dynamic pricing and serious revenue generation for rights holders. 

Jen, the ethically trained generative AI music platform I co-founded, has already set a precedent by training exclusively on 100% licensed music, proving that responsible AI isn't an abstract concept; it's a choice. We simply avoided Travis Scott's catalog because of its licensing complexities, because time is of the essence. We are entering an era of co-creation, where technology can enhance artistry and create new revenue opportunities rather than replace them. Music isn't just an asset; it's a cultural force. And it must be treated as such.

Come Together: Why Opt-In is the Only Path Forward and Opt-Out Doesn’t Work

There’s a growing push for AI platforms to adopt opt-out mechanisms, where rights holders must proactively remove their work from AI training datasets. At first glance, this might seem like a fair compromise. In reality, it’s a logistical nightmare destined to fail.

A recent incident in the U.K. highlights these challenges: over 1,000 musicians, including Kate Bush and Damon Albarn, released a silent album titled “Is This What We Want?” to protest proposed changes to copyright laws that would allow AI companies to use artists’ work without explicit permission. This collective action underscores the creative community’s concerns about the impracticality and potential exploitation inherent in opt-out systems.

For opt-out to work, platforms would need to maintain up-to-date global databases tracking every artist, writer, and producer’s opt-out status or rely on a third party to do so. Neither approach is scalable, enforceable, or backed by a viable business model. No third party is incentivized to take on this responsibility. Full stop.

Music, up until now, has been created predominantly by humans, and human dynamics are inherently complex. Consider a band that breaks up — one member might refuse to opt out purely to spite another, preventing consensus on the use of a shared track. Even if opt-out were technically feasible, interpersonal conflicts would create chaos. This is an often overlooked but critical flaw in the system.

Beyond that, opt-out shifts the burden onto artists, forcing them to police AI models instead of making music. This approach doesn’t close a loophole — it widens it. AI companies will scrape music first and deal with removals later, all while benefiting from the data they’ve already extracted. By the time an artist realizes their work was used, it’s too late. The damage is done.

This is why opt-in is the only viable future for ethical AI. The burden should be on AI companies to prove they have permission before using music — not on artists to chase down every violation. Right now, the system has creators in a headlock.

Speaking of solutions, I want to point out another entrepreneur who is fighting for and building them. Perhaps she’s fighting because she’s an artist herself and knows firsthand how the wrong choices affect her livelihood. Grammy-winning and Billboard Hot 100-charting artist, producer and music-tech pioneer Imogen Heap has spent over a decade tackling the industry’s toughest challenges. Her non-profit platform, Auracles, is a much-needed missing data layer for music: it enables music makers to create a digital ID that holds their rights information and can grant permissions for approved uses of their works, including for generative AI training or product innovation. We need to support these kinds of solutions. And stop condoning the camps that treat stealing music as fair game.

Opt-in isn’t just possible, it’s absolutely necessary. By building systems rooted in transparency, fairness and collaboration, we can forge a future where AI and music thrive together, driven by creativity and respect. 

The challenge here isn’t in building better AI models — it’s designing the right licensing frameworks from the start. Ethical training isn’t a checkbox; it’s a foundational choice. Crafting these frameworks is an art in itself, just like the music we’re protecting. 

Transparent licensing frameworks and artist-first models aren’t just solutions; they’re the guardrails preventing another industry freefall. We’ve seen it before — Napster, TikTok (yes, I know you’re tired of hearing these examples) — where innovation outpaced infrastructure, exposing the cracks in old systems. This time, we have a shot at doing it right. Get it right, and our revenue rises. Get it wrong and… [enter your prompt here].

Shara Senderoff is a well-respected serial entrepreneur and venture capitalist pioneering the future of music creation and monetization through ethically trained generative AI as Co-Founder & CEO of Jen. Senderoff is an industry thought leader with an unwavering commitment to artists and their rights.

JD Vance is certainly getting things done. In the middle of February, the vice president went to Munich to tell Europeans to stop isolating far-right parties, just after speaking at the Paris AI Action Summit, where he warned against strict government regulation. Talk about not knowing an audience: It would be hard to offend more Europeans in less time without kvetching about their vacation time.

This week, my colleague Kristin Robinson wrote a very smart column about what Vance’s — and presumably the Trump administration’s — reluctance to regulate AI might mean for copyright law in the U.S. Both copyright and AI are global issues, of course, so it’s worth noting that efforts by Silicon Valley to keep the Internet unregulated — not only in terms of copyright, but also in terms of privacy and competition law — often run aground in Europe. Vance, like Elon Musk, may simply resent that U.S. technology companies have to follow European laws when they do business there. If he wants to change that dynamic, though, he needs to start by assuring Europeans that the U.S. can regulate its own businesses — not tell them outright that it doesn’t want to do so.

Silicon Valley sees technology as an irresistible force, but lawmakers in Brussels, who see privacy and authors’ rights as fundamental to society, have proven to be an immovable object. (Like Nate Dogg and Warren G, they have to regulate.) When they collide, as they have every few years for the past quarter-century, they release massive amounts of energy, in the form of absurd overstatements, and then each give a little ground. (Remember all the claims about how the European data-protection regulation would complicate the Web, or how the 2019 copyright directive would “break the internet”? Turns out it works fine.) In the end, these EU laws often become default global regulations, because it’s easier to run platforms the same way everywhere. And while all of them are pretty complicated, they tend to work reasonably well.

Like many politicians, Vance seems to see the development of AI as a race that one side can somehow win, to its sole benefit. Maybe. On a consumer level, though, online technology tends to emerge gradually and spread globally, and the winners are often companies that use their other products to become default standards. (The losers are often companies that employ more people and pay more taxes, which in politics isn’t so great.) Let’s face it: The best search engine is often the one on your phone; the best map system is whatever’s best integrated into the device you’re using. To the extent that policymakers see this as a race, does winning mean simply developing the best AI, even if it ends up turning into AM or Skynet? Or does winning mean developing AI technology that can create jobs as well as destroy them?

Much of this debate goes far beyond the scope of copyright — let alone the music business — and it’s humbling to consider the prospect of creating rules for something that’s smarter than humans. But there’s an important distinction: While developing AI technology before other countries may be a national security issue that justifies moon-shot urgency, that has nothing to do with allowing software to ingest Blue Öyster Cult songs without a license. Software algorithms are already creating works of art, and they will inevitably continue to do so. But let’s not relax copyright law out of a fear of falling behind the Chinese.

Vance didn’t specifically mention copyright — the closest he got to the subject of content was saying “we feel strongly that AI must remain free from ideological bias.” But he did criticize European privacy regulations, which he said require “paying endless legal compliance costs or otherwise risking massive fines.” If there’s another way to protect individual privacy online, though, he didn’t mention it. For that matter, it’s hard to imagine a way to ensure AI remains free from bias without some kind of regulatory regime. Can Congress write and pass a fair and reasonable law to do that? Or will this depend on the same Europeans that Vance just made fun of?

That brings us back to copyright. In the Anglo-American world, including the U.S., copyright is essentially a commercial right, akin to a property right protected by statute. That right, like most, has some exceptions, the most relevant of which is fair use. The equivalent under the French civil-law tradition is authors’ rights — droit d’auteur — which is more of a fundamental right. (I’m vastly oversimplifying this.) So what seems in the U.S. to be a debate about property rights is in most of the EU more of an issue of human rights. Governments have no choice but to protect them.

There’s going to be a similar debate about privacy. AI algorithms may soon be able to identify and find or deduce information about individuals that they would not choose to share. In some cases, such as security, this might be a good thing. In most, however, it has the potential to be awful: It’s one thing to use AI and databases to identify criminals, quite another to find people who might practice a certain religion or want to buy jeans. The U.S. may not have a problem with that, if people are out in public, but European countries will. As with Napster so many years ago, the relatively small music business could offer an advance look at what will become very important issues.

Inevitably, with the Trump administration, everything comes down to winning — more specifically, to getting the better end of the deal. At some point, AI will become just another commercial issue, and U.S. companies will only have access to foreign markets if they comply with the laws there. Vance wants to loosen those laws, which is fair enough. But this won’t help the U.S. — just one particular business in it. And Europeans will push back — as they should.