State Champ Radio

by DJ Frosty

Generative (Ethical) AI Could Be the Catalyst to Fix Music Licensing — But Only If the Industry Opts In (Guest Column)

Written by Shara Senderoff on March 24, 2025


Generative AI — the creation of compositions from nothing in seconds — isn’t disrupting music licensing; it’s accelerating the economic impact of a system that was never built to last. Here’s the sick reality: If a generative AI company wanted to ethically license Travis Scott’s “Sicko Mode” to train its models, it would need approvals from more than 30 rights holders, with that number potentially doubling as rights are resold or reassigned after the track’s release. Finding and engaging with all of those parties? Good luck. No unified music database exists to identify rights holders, and even if one did, outdated information, unanswered emails and, in some cases, deceased rights holders or a group potentially involved in a rap beef make the process a nonstarter.

The music licensing system, or lack thereof, is so fragmented that most AI companies don’t even try. They steal first and deal with lawsuits later. Clearing “Sicko Mode” isn’t just difficult; it’s impossible — a cold example of the complexity of licensing commercial music for AI training that is left out of most debates over ethical approaches.


For those outside the music business, it’s easy to assume that licensing a song is as simple as getting permission from the artist. But in reality, every track is a tangled web of rights, split across songwriters, producers, publishers and administrators, each with their own deals, disputes and gatekeepers. Now multiply the chaos of clearing one track by the millions of tracks needed for an AI training set, and you’ll quickly see why licensing commercial music for AI at scale is a fool’s errand today.

Generative AI is exposing and accelerating the weaknesses in the traditional music revenue model by flooding the market with more music, driving down licensing fees and further complicating ownership rights. As brands and content creators turn to AI-generated compositions, demand for traditional catalogs will decline, impacting synch and licensing revenues once projected to grow over the next decade. 

Hard truths don’t wait for permission. The entrance of generative AI has exposed the broken system of copyright management and its outdated black-box monetization methods. 

The latest RIAA report shows that while U.S. paid subscriptions have crossed 100 million and revenue hit a record $17.7 billion in 2024, streaming growth has more than halved — from 8.1% in 2023 to just 3.6% in 2024. The market is plateauing, and the question isn’t if the industry needs a new revenue driver — it’s where that growth will come from. Generative AI is that next wave. If architected ethically, it won’t just drive technological innovation in music; it will create revenue.

Ironically, the very thing being painted as an existential threat to the industry may be the thing capable of saving it. AI is reshaping music faster than anyone expected, yet its ethical foundation remains unwritten. So we need to move fast.

A Change is Gonna Come: Why Music Needs Ethical AI as a Catalyst for Monetization 

Let’s start by stopping. Generative AI isn’t our villain. It’s not here to replace artistry. It’s a creative partner, a collaborator, a tool that lets musicians work faster, dream bigger and push boundaries in ways we’ve never seen before. While some still doubt AI’s potential because today’s ethically trained outputs may sound like they’re in their infancy, let me be clear: It’s evolving fast. What feels novel now will be industry-standard tomorrow.

Our problem is, and always has been, a lack of transparency. Many AI platforms have trained on commercial catalogs without permission (first they lied about it, then they came clean), extracting value without compensation — “Sicko Mode” very likely included. That behavior isn’t just unethical; it’s economically destructive, devaluing catalogs as imitation tracks saturate the market while the underlying copyrights earn nothing.

If we’re crying about market flooding right now, we’re missing the point. Because what if rights holders and artists participated in those tracks? Energy needs to go into rethinking how music is valued and monetized across licensing, ad tech and digital distribution. Ethical AI frameworks can ensure proper attribution, dynamic pricing and serious revenue generation for rights holders. 

Jen, the ethically trained generative AI music platform I co-founded, has already set a precedent by training exclusively on 100% licensed music, proving that responsible AI isn’t an abstract concept; it’s a choice. We simply avoided Travis Scott’s catalog because of its licensing complexities — time is of the essence. We are entering an era of co-creation, where technology can enhance artistry and create new revenue opportunities rather than replace them. Music isn’t just an asset; it’s a cultural force. And it must be treated as such.

Come Together: Why Opt-In is the Only Path Forward and Opt-Out Doesn’t Work

There’s a growing push for AI platforms to adopt opt-out mechanisms, where rights holders must proactively remove their work from AI training datasets. At first glance, this might seem like a fair compromise. In reality, it’s a logistical nightmare destined to fail.

A recent incident in the U.K. highlights these challenges: over 1,000 musicians, including Kate Bush and Damon Albarn, released a silent album titled “Is This What We Want?” to protest proposed changes to copyright laws that would allow AI companies to use artists’ work without explicit permission. This collective action underscores the creative community’s concerns about the impracticality and potential exploitation inherent in opt-out systems.

For opt-out to work, platforms would need to maintain up-to-date global databases tracking every artist, writer, and producer’s opt-out status or rely on a third party to do so. Neither approach is scalable, enforceable, or backed by a viable business model. No third party is incentivized to take on this responsibility. Full stop.

Music, up until now, has been created predominantly by humans, and human dynamics are inherently complex. Consider a band that breaks up — one member might refuse to opt out purely to spite another, preventing consensus on the use of a shared track. Even if opt-out were technically feasible, interpersonal conflicts would create chaos. This is an often overlooked but critical flaw in the system.

Beyond that, opt-out shifts the burden onto artists, forcing them to police AI models instead of making music. This approach doesn’t close a loophole — it widens it. AI companies will scrape music first and deal with removals later, all while benefiting from the data they’ve already extracted. By the time an artist realizes their work was used, it’s too late. The damage is done.

This is why opt-in is the only viable future for ethical AI. The burden should be on AI companies to prove they have permission before using music — not on artists to chase down every violation. Right now, the system has creators in a headlock.
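The opt-in model described above can be made concrete with a small sketch. This is a hypothetical illustration, not Jen’s or Auracles’ actual implementation: the `Track` structure, IDs and rights-holder names are invented, and a real registry would track far more (splits, territories, term limits). The key design choice is the default: a track is excluded from training unless every rights holder has explicitly granted permission.

```python
# Minimal sketch of an opt-in licensing check, assuming a hypothetical
# rights registry. Track IDs, holder names and the Track structure are
# illustrative only; real rights data is far more complex.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: str
    rights_holders: set[str]                        # everyone who must consent
    grants: set[str] = field(default_factory=set)   # holders who have opted in

    def cleared_for_training(self) -> bool:
        # Opt-in: usable only when EVERY rights holder has explicitly
        # granted permission. The default answer is "no".
        return bool(self.rights_holders) and self.rights_holders <= self.grants

def training_set(catalog: list[Track]) -> list[str]:
    # The burden of proof sits with the AI company: anything not fully
    # cleared is excluded, rather than included until someone opts out.
    return [t.track_id for t in catalog if t.cleared_for_training()]

song = Track("T-001", rights_holders={"writer_a", "producer_b", "publisher_c"})
song.grants.update({"writer_a", "producer_b"})   # one holder still missing
print(training_set([song]))                      # -> []

song.grants.add("publisher_c")                   # unanimous opt-in
print(training_set([song]))                      # -> ['T-001']
```

Note how this inverts the opt-out burden: an unreachable or disputing rights holder (the feuding-band scenario above) simply leaves the track out of the training set, rather than silently leaving it in.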

Speaking of solutions, I want to point to another entrepreneur fighting for and building them — perhaps because she’s an artist herself and knows deeply how the wrong choices affect her livelihood. Grammy-winning and Billboard Hot 100-charting artist, producer and music-tech pioneer Imogen Heap has spent over a decade tackling the industry’s toughest challenges. Her non-profit platform, Auracles, provides a much-needed missing data layer for music: it lets music makers create a digital ID that holds their rights information and can grant permissions for approved uses of their works, including generative AI training and product innovation. We need to support these kinds of solutions and stop condoning the camps that treat stealing music as fair game.

Opt-in isn’t just possible, it’s absolutely necessary. By building systems rooted in transparency, fairness and collaboration, we can forge a future where AI and music thrive together, driven by creativity and respect. 

The challenge here isn’t in building better AI models — it’s designing the right licensing frameworks from the start. Ethical training isn’t a checkbox; it’s a foundational choice. Crafting these frameworks is an art in itself, just like the music we’re protecting. 

Transparent licensing frameworks and artist-first models aren’t just solutions; they’re the guardrails preventing another industry freefall. We’ve seen it before — Napster, TikTok (yes, I know you’re tired of hearing these examples) — where innovation outpaced infrastructure, exposing the cracks in old systems. This time, we have a shot at doing it right. Get it right, and our revenue rises. Get it wrong and… [enter your prompt here].

Shara Senderoff is a well-respected serial entrepreneur and venture capitalist pioneering the future of music creation and monetization through ethically trained generative AI as Co-Founder & CEO of Jen. Senderoff is an industry thought leader with an unwavering commitment to artists and their rights.
