
By the mid-2010s, the power of the playlist — the Spotify playlist to be exact — loomed large in the music business: Everyone knew a spot on Rap Caviar could mint a rap hit overnight; a placement on Fresh Finds could induce a label bidding war; and a lower-than-expected ranking on New Music Friday could ruin a label project manager’s Thursday night.
But in the 2020s, challengers — namely TikTok, with its potent and mysterious algorithm that serves social media users with addictive snippets of songs as they scroll — have threatened Spotify’s reign as music industry kingmaker. Still, Spotify’s editorial playlists remain one of the most important vehicles for music promotion, and its 100-plus member global team, led by its global head of editorial Sulinna Ong, has evolved to meet the changing times.

“Our editorial expertise is both an art and a science,” says Ong, who has led the company through its recent efforts to use technology to offer more personalized playlist options, like its AI DJ, Daylist and daily mixes. “We’re always thinking about how we can introduce you to your next favorite song, to your next favorite artist. How do we provide context to get you to engage? Today, the challenge is cutting through the noise to get your attention.”


In conversation with Billboard, Ong talks about training the AI DJ with the editors’ human expertise, using playlists to differentiate Spotify from its competition and looking ahead to Generation Alpha (ages 0-14). 

I’ve seen such a shift in the editorial strategy at Spotify in the last couple of years: Daylist, personalized editorial playlists (marked by the “made for you” tag), daily mixes, AI DJ and more. What drove that shift?

To start off, it’s useful to zoom out and think about how people listen to music. The way people listen to music is fluid, and curation and editorial have to be fluid as well. We have to understand the changes.

Curators have always been at the core of Spotify’s identity, right from the early days of the company. Back in 2012, Spotify’s music team started with three editors, and it quickly grew to more than 100 around the world today. These curators started by curating what became known as our flagship editorial playlists — Today’s Top Hits, Rap Caviar, Viva Latino. Over time that expanded to playlists like Altar, Lorem, Pollen, etc. Those are all still important.

But around 2018, editors made their first attempts to bridge human curation from our flagship editorial playlists with personalization engines. 2018 was the year personalization and machine learning technology matured enough to open up these possibilities. At that time, we started making more personalized playlists where the tracks fit with an overall mood or moment curated by editors but varied for each listener — like My Life Is A Movie, Beastmode, Classic Roadtrip Songs. Editors will select a number of songs that they feel fit that playlist. Let’s say, for example, we have 200 songs selected; you might see the 100 of those that are most aligned with your taste.
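As a rough sketch of the selection step Ong describes (a fixed editorial pool narrowed per listener by a taste score), with all track names and scores invented for illustration, since Spotify's actual ranking system is not public:

```python
# Hypothetical sketch of editorial-pool personalization: editors pick a
# 200-song pool; each listener sees the 100 tracks that best match a
# per-listener taste score. Scores here are synthetic stand-ins for a model.
def personalize(editorial_pool, taste_score, k=100):
    """Return the k pool tracks ranked highest by this listener's taste score."""
    return sorted(editorial_pool, key=taste_score, reverse=True)[:k]

pool = [f"track_{i:03d}" for i in range(200)]               # the curated pool
affinity = {t: (i * 37) % 199 for i, t in enumerate(pool)}  # fake model scores
playlist = personalize(pool, lambda t: affinity[t])
print(len(playlist))  # 100
```

Swapping in a different listener's scores yields a different 100-track slice of the same curated pool, which is the behavior the answer describes.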

Discover Weekly and Release Radar are tailored to listener activity and have been around much longer. Did those inspire your team to push into these personalized editorial playlists around 2018?

Yes, exactly. With algorithmic playlists like Release Radar [and] Discover Weekly, we found that users liked them, [and] that inspired us to then work with the product teams and ask, “What is the next step of this?” Spotify has more than 500 million users. We knew that it would keep growing, and as a human curator, you can’t manually curate to that entire pool. Technology can fill in that gap and increase our possibilities. A lot of times, I see narratives where people call this a dichotomy — either playlists are human-made or machine-made. We don’t see it that way.

In 2024, personalization and machine learning are even more important technologies for streaming music and watching content. We’ve kept investing in cutting-edge personalization and it’s making a real impact — 81% of our listeners cite personalization as their favorite thing about Spotify. Our static editorial playlists are still very powerful, but we also have made these other listening experiences to round out the picture.

How someone listens is never one thing. Do you only want to watch movies? No, you want to watch a movie sometimes; other times you want to watch a 20-minute TV show. We have to understand the various ways that you might like to [listen].

Daylist, for example, is very ephemeral. It only exists for a certain amount of time. The appeal is in the title — it also really resonates for a younger audience.

Did your team always intend that Daylist, which often gives users crazy titles like “Whimsical Downtown Vibes Tuesday Evening,” could be shareable — even memeable — on social media?

Absolutely. It’s very shareable. It’s a bite-sized chunk of daily joy that you get that you can post about online.

It reminds me of the innately shareable nature of Spotify Wrapped.

There is a lineage there. It is similar because it’s a reminder of what you’re listening to. But it’s repackaged in a humorous way — light and fun and it updates so it keeps people coming back.

How do you think Spotify’s editorial team differentiates itself from competitors like Apple and Amazon?

Early on, we understood that editorial expertise around the world is really valuable, and it was needed to set us apart. So we have editors all around the world. They are really the music experts of the company. They are focused on understanding the music and the cultural scenes where they are.

We have what we call “editorial philosophy.” One of the tenets of that is our Global Curation Groups, or “GCGs” for short. Once a week, editors from around the world meet and identify tracks that are doing well and should flow from one market to another. We talk about music trends, artists we are excited about. We talk about new music mainly but also music that is resurfacing from social media trends.

This is how we got ahead on spreading genres like K-pop seven years ago. We were playlisting it and advocating for its spread around the world. Musica Mexicana and Amapiano — we were early [with those] too. We predicted that streaming would reduce the barriers to entry in terms of language, so we see genres and artists coming from non-Western, non-English speaking countries really making an impact on the global music scene.

How was the AI DJ trained to give the commentary and context it gives?

We’ve essentially spun up a writers’ room. We have our editors work with our product team and script writers to add in some context about the artists and tracks that the DJ can share with listeners. The info they feed in can be musical facts or culturally relevant insights. We want listeners to feel connected to the artists they hear on a human level. At the end of the day, this approach to programming also really helps us broaden out the pool of exposure, particularly for undiscovered artists and tracks. We’ve seen that people who hear the commentary from the DJ are more likely to listen to a song they would have otherwise skipped.

When Spotify editorial playlists started, the cool, young, influential audience was millennials. Now it’s Gen Z. What challenges did that generational shift pose?

We think about this every day in our work. Now, we’re even thinking about the next generation after Gen Z, Gen Alpha [children age 14 and younger]. I think the key difference is our move away from genre lines. Where we once had a strictly rock playlist, we are now building playlists like POV or My Life Is A Movie. It’s a lifestyle or an experience playlist. We also see that younger listeners like to experiment with lots of different listening experiences. We try to be very playful about our curation and offer those more ephemeral daily playlists.

What are you seeing with Gen Alpha so far? I’m sure many of them are still on their parents’ accounts, but do you have any insight into how they might see music differently than other generations as they mature?

Gaming. Gaming is really an important space for them. Music is part of the fabric of how we play games now — actually, that’s how these kids often discover and experience music, especially on Discord and big MMOs [massively multiplayer online games]. We think about this culture a lot because it is mainstream culture for someone of that age.

Gaming is so interesting because it is such a dynamic, controllable medium. Recorded music, however, is totally static. There have been a few startups, though, that are experimenting with music that can morph as you play the game.

Yeah, we’re working on making things playful. There’s a gamification in using Daylist, right? It’s a habit. You come back because you want to see what’s new. We see the AI DJ as another way to make music listening more interactive, less static.

Spotify has been known as a destination for music discovery for a long time. Now, listeners are increasingly turning to TikTok and social media for this. How do you make sure music discovery still continues within Spotify for its users?

That comes down to, again, the editorial expertise and the GCGs I mentioned before. We have 100-plus people whose job it is to be the most tapped-in people in terms of what’s happening around the world in their genre. That’s our biggest strength in terms of discovery because we have a large team of people focused on it. Technology just adds on to that human expertise.

Back when Spotify playlists first got popular, a lot of people compared the editors to the new generation of radio DJs. How do you feel about that comparison?

It’s not a one-to-one comparison. I can understand the logic of how some people might get there. But, if I’m very frank, the editorial job that we do is not about us. Radio DJs, it’s all about them, their personality. For us, it’s not about being a DJ or the front face of a show. Not to be disparaging to radio DJs — their role is important — it’s just not the same thing. I don’t think we are gatekeepers. I say that because it is never about me or us as editors. It’s about the music, the artist and the audience’s experience. It’s very simple: I want to introduce you to your next favorite song. Yes, we have influence in the industry; I recognize that, and I take it very seriously. That’s a privilege and a responsibility, but it is not about us at the end of the day.

This story was published as part of Billboard’s new music technology newsletter ‘Machine Learnings.’ Sign up for ‘Machine Learnings,’ and Billboard’s other newsletters, here.

A North Carolina musician has been indicted by federal prosecutors over allegations that he used AI to help create “hundreds of thousands” of songs and then used the AI tracks to earn more than $10 million in fraudulent streaming royalty payments since 2017.
In a newly unsealed indictment, Manhattan federal prosecutors charged the musician, Michael Smith, 52, with three counts of wire fraud, wire fraud conspiracy and money laundering conspiracy. According to the indictment, Smith was aided by the CEO of an unnamed AI music company as well as other co-conspirators in the U.S. and around the world, and some of the millions he was paid were funneled back to the AI music company.

According to the indictment, the hundreds of thousands of AI songs Smith allegedly helped create were available on music streaming platforms like Spotify, Amazon Music, Apple Music and YouTube Music. It also claims Smith has made “false and misleading” statements to the streaming platforms, as well as collection societies including the Mechanical Licensing Collective (the MLC) and distributors, to “promote and conceal” his alleged fraud.


Through his alleged activities, Smith diverted over $1 million in streaming payments per year that “ultimately should have been paid to the songwriters and artists whose works were streamed legitimately by real consumers,” says the indictment.

The indictment also details exactly how Smith allegedly pulled off the scheme he’s accused of. First, it says he gathered thousands of email accounts, often in the names of fictitious identities, to create thousands of so-called “bot accounts” on the streaming platforms. At its peak, Smith’s operation allegedly had “as many as 10,000 active bot accounts” running; he also allegedly hired a number of co-conspirators in the U.S. and abroad to do the data entry work of signing up those accounts. “Make up names and addresses,” reads an email from Smith to an alleged co-conspirator dated May 11, 2017, that was included in the indictment.

To maximize income, the indictment states that Smith often paid for “family plans” on streaming platforms “typically using proceeds generated by his fraudulent scheme” because they are the “most economical way to purchase multiple accounts on streaming services.”

Smith then used cloud computing services and other means to cause the accounts to “continuously stream songs that he owned” and make it look legitimate. The indictment alleges that Smith knew he was in the wrong and used a number of methods to “conceal his fraudulent scheme,” ranging from fictitious email names and VPNs to instructing his co-conspirators to be “undetectable” in their efforts.

In emails sent in late 2018 and obtained by the government, Smith told co-conspirators not to arouse suspicion by running up tons of streams on the same song. “We need to get a TON of songs fast to make this work around the anti fraud policies these guys are all using now,” Smith wrote in the emails.

Indeed, the music business has taken a number of measures in recent years to try to curb this kind of fraudulent streaming activity. Anti-streaming-fraud startup Beatdapp, for example, has become an industry leader, hired by a number of top distributors, streaming services and labels to identify and prevent fraud. Additionally, several independent DIY distributors, including TuneCore, DistroKid and CD Baby, have recently banded together to form “Music Fights Fraud,” a coalition that shares a database and other resources to prevent fraudsters from hopping from service to service to avoid detection.

Last year, Spotify and Deezer came out with revamped royalty systems that proposed new penalties for fraudulent activity. Still, it seems fraudsters study these new efforts and continue to evolve their efforts to evade detection.

The rise of quickly generated AI songs has been a major point of concern for streaming fraud experts because it allows bad actors to spread their false streaming activity over a larger number of songs and create more competition for streaming dollars. To date, AI songs are not paid out any differently from human-made songs on streaming platforms. A lawsuit filed by Sony Music, Warner Music Group and Universal Music Group against AI companies Suno and Udio in June summed up the industry’s fears well, warning that AI songs from these companies “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”

Though Smith is said to be a musician himself with a small catalog of his own, the indictment states that he leaned on AI music to quickly amass a much larger catalog.

The indictment alleges that around 2018, “Smith began working with the Chief Executive Officer of an unnamed AI music company and a music promoter to create hundreds of thousands of songs that Smith could then fraudulently stream.” Within months, the CEO of the AI company was allegedly providing Smith with “thousands of songs each week.” Eventually, Smith entered a “Master Services Agreement” with the AI company that supplied Smith with 1,000-10,000 songs per month, agreeing that Smith would have “full ownership of the intellectual property rights in the songs.” In turn, Smith would provide the AI company with metadata and the “greater of $2,000 or 15% of the streaming revenue” he generated from the AI songs.
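The fee terms quoted from the Master Services Agreement (the greater of $2,000 or 15% of streaming revenue) reduce to a simple floor-plus-percentage calculation; a minimal illustration, with the revenue figures invented:

```python
# Fee owed to the AI company under the indictment's quoted terms:
# the greater of a $2,000 floor or 15% of the streaming revenue generated.
def ai_company_fee(streaming_revenue):
    return max(2000.0, 0.15 * streaming_revenue)

print(ai_company_fee(10_000))   # 2000.0: the floor applies, since 15% is only ~$1,500
print(ai_company_fee(100_000))  # 15% (~$15,000) exceeds the floor
```

The floor stops mattering once revenue passes about $13,333, at which point the AI company's cut scales with the fraudulent streams.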

“Keep in mind what we’re doing musically here… this is not ‘music,’ it’s ‘instant music’ ;)”, reads an email from the AI company’s CEO to Smith that was included in the indictment.

Over time, various players in the music business questioned Smith’s activities, including a streaming platform, a music distributor and the MLC. By March and April 2023, the MLC halted royalty payments to Smith and confronted him about his possible fraud. In response, Smith and his representatives “repeatedly lied” about the supposed fraud and AI-generated creations, says the indictment.

Christie M. Curtis, FBI acting assistant director, said of the indictment, “The defendant’s alleged scheme played upon the integrity of the music industry by a concerted attempt to circumvent the streaming platforms’ policies. The FBI remains dedicated to plucking out those who manipulate advanced technology to receive illicit profits and infringe on the genuine artistic talent of others.”

Kris Ahrend, CEO of the MLC, added, “Today’s DOJ indictment shines a light on the serious problem of streaming fraud for the music industry. As the DOJ recognized, The MLC identified and challenged the alleged misconduct, and withheld payment of the associated mechanical royalties, which further validates the importance of The MLC’s ongoing efforts to combat fraud and protect songwriters.”

Downtown Music has struck a deal with Hook, an AI social music app, which will pave the way for fans to create authorized remixes of the millions of licensed recordings in Downtown’s catalog.
At a time when many of music’s biggest stars are releasing sped-up or slowed-down remixes of songs, and fans are taking to TikTok to post all kinds of musical mashups and edits, it’s clear that listeners want to do more than just play songs; they want to play with songs. Often, though, these remixes are made without proper licenses or authorization in place.

According to a recent study by Pex, nearly 40% of all the music used on TikTok is modified in some way, whether it’s pitch-altered, sped up, slowed down or spliced together with another song. Hook hopes to create a legal, licensed environment for users to participate in this rapidly growing part of online music fandom.


With Hook’s license in place, Downtown Music will receive financial compensation when their works are used in these user-generated content (UGC) remixes. Hook’s platform also gives Downtown’s artists and labels access to valuable data insights, showing them how and where their augmented music, created on Hook, is being used.

Hook sees its AI-powered remix app as a viable new revenue source for artists and labels, allowing them to better capitalize on the fact that much of music culture and fandom has shifted from traditional streaming services over to short-form apps like TikTok. Hook’s founder/CEO Gaurav Sharma says, “We are challenging the idea that music on social media and UGC only provides promotional value. We believe fan remixing and UGC is a new form of active music consumption and rights holders should be paid for it. This deal represents a new model for music, social, and AI. The team at Downtown understands our mission and we’re humbled by their support.”

Before founding Hook, Sharma served as chief operating officer for JioSaavn, India’s largest music streaming platform and one of the first platforms to secure global streaming licenses with record labels. During his time at the company, Sharma and his team grew JioSaavn to more than 100 million monthly active users.

Harmen Hemminga, vp of product & services strategy at Downtown Music, says of the deal, “Whilst music consumption continues to increase, broaden and localize, the trend of music ‘prosumption’ on social platforms is ever-growing. Users of these platforms are including music in the experiences they share with others across a variety of contextual, inventive ways. Hook offers rights holders the ability to monetize these new and creative forms of use.”

These days, many in the music business are trying to harness the power of the “superfan” — the highly engaged segment of an artist’s audience that regularly shows up to concerts, buys t-shirts, orders physical albums and obsesses over the artist online. In the digital marketing space, that has meant agencies are increasingly turning their attention to fan pages, hoping to capture the attention of that top tier of listeners online. 
“The TikTok influencer campaign has been front and center for marketing songs for a while,” says Ethan Curtis, founder of PushPlay, a digital marketing agency that has promoted songs like “Bad Habit” by Steve Lacy, “Golden Hour” by JVKE and “Glimpse of Us” by Joji. “But as it’s gotten more saturated and more expensive, we found there was interest in creating your own fan pages where you can have total control of the narrative.” 

“Fan pages” made sneakily by artists’ teams may have become the digital campaign du jour in the last year or so, but the idea isn’t new. Even before TikTok took over music discovery, management and digital teams quietly used anonymous accounts to pose as fans on sites like Tumblr, Instagram and Twitter, sharing interviews, videos and other content around the artists because, as Curtis puts it, “It is a space you can own.”


Curtis is now taking that concept a step further with his innovative, albeit controversial, new company WtrCoolr, a spinoff of his digital firm that’s dedicated to creating “fan fiction” pages for artists. To put it simply, WtrCoolr is hired to create viral-worthy fake stories about their clients, which include Shaboozey and Young Nudy, among others. While Curtis says he is open to creating videos with all kinds of “imaginative” new narratives, he says he draws the line at any fan fiction that could be “negative” or “cause backlash” for the people featured in the videos.

The results speak for themselves. One popular WtrCoolr-made TikTok video that falsely claimed Dolly Parton is Shaboozey’s godmother has 1.1 million views and 121,500 likes to date. The video, posted to the digital agency’s fan account @ShaboozeysVault, was made by splicing together old interview clips of the artists along with some AI voiceovers, Curtis says.

“We are huge fans of pop culture, fan fiction and satire,” says Curtis. “We see it as creating our own version of a Marvel Universe but with pop stars.”

All of the TikTok accounts made by WtrCoolr note in their bios that their content is “fan fiction.” The videos on these pages also include “Easter eggs,” which Curtis says point to the fact that the videos are fabrications. But plenty of fans are still falling for it. Many viewers of the Parton video, for example, took it as gospel truth, posting comments like “how many god children does Dolly have and where can I sign up?” and “Dolly is an angel on Earth.”

In the future, Curtis thinks this novel form of “fan fiction” will be useful beyond just trying to engage fan bases online. He sees potential for the pages to serve as “a testing ground” for real-life decisions — like an artist choosing to collaborate with another — to see how the fan base would react. “Traditionally, you don’t get to look before you jump,” he says. “Maybe in the future we will.”

What was the first “fan fiction” post that took off for WtrCoolr?

It was the video of Shaq being a superfan of the rapper Young Nudy [10.4 million views, 1.7 million likes on TikTok]. We had been working on [promoting] the Young Nudy song, “Peaches & Eggplants,” mostly on the influencer side. We had dances and all sorts of different trends going. It was becoming a top rap song by that point, and then we sold the client [Young Nudy’s team] on doing one of these fan pages where we just tested out a bunch of stuff. The first narrative video we tried was this video where we found some footage of Shaq — I think it was at Lollapalooza — where he was in the front of the crowd [for a different artist], vibing and head banging. It was a really funny visual. We just got clever with the editing and created the story that Shaq was showing up at every Young Nudy show, and then it went crazy viral.

It was really exciting to see. It brought fans to Nudy and also made existing Nudy fans super excited that Shaq was engaging. Then there was tons of goodwill for Shaq that came from it too. Lots of comments like “protect Shaq at all costs” or “Shaq’s a damn near perfect human being.” It was all around a positive experience. We put on our pages that this is a fan page and fan fiction. We don’t really push that it’s the truth. We’re just having fun and we let that be known. 

There was some pickup after that video went viral. Weren’t there some rap blogs posting about the video and taking it as truth?

I don’t know if they were taking it as true necessarily. We didn’t really have any conversations with anyone, but it was definitely getting shared all around — whether it was because of that or just because it was such a funny video. Even Nudy reacted and thought it was funny. I think the label may have reached out to Shaq and invited him to a show, and he thought it was funny but was on the other side of the country that day and couldn’t make it. 

I’m sure there’s some people who thought it was true, but a lot of the videos we’ll put Easter eggs at the end that make it obvious that it’s not true. Then in our bios we write that it is fan fiction. 

Do you think that there’s anything bad that could come from fans and blogs believing these videos are real — only to realize later that it was fake?

I don’t know if anything is really bad. We don’t claim for it to be true, and we’re just having fun, weaving stories and basically saying, “Wouldn’t it be funny if?” or, “Wouldn’t it be heartwarming if?” I don’t think we’re really ever touching on stuff that’s of any importance, that could lead to any negative energy or backlash. We’re just trying to make fun stuff that fans enjoy. Just fun little moments. It’s no different from taking a video out of context and slapping meme headings on it.

Do you see this as the future of memes?

I do. I also think there’s a future where what we’re doing becomes sort of like a testing ground for real-life collabs or TV show concepts. I could see a label coming to us and asking us to test how a new post-beef collab between Drake and Kendrick would be received, for example. They could say, “Can you create a post about this and we can see if people turn on Kendrick for backtracking, or if fans will lose their shit over them coming together?” We could see if it’s a disaster or potentially the biggest release of their careers. Traditionally, you don’t get to look before you jump. Maybe in the future we will. But even now with the Shaq video, it basically proved that if Shaq went to an unexpected show and was raging in the front row people would love it. I mean, if it’s been so successful on socials, why wouldn’t it be so successful in real life?

It seemed like the Shaboozey and Dolly Parton video inserted Shaboozey’s name and other new phrases using an AI voice filter. Do you rely on AI in these videos a lot or is it primarily about careful editing? 

The majority of it is just clever editing. Every now and then we may change a word up or something [using AI], but the majority of it is just collaging clips together. 

How time intensive is it to create these videos? 

The process has been changing. It used to be much more time intensive back before we realized that clever editing was more efficient. In the beginning, we would write scripts for the videos, run them through AI and then try to find clips to match the scripts and stuff like that. You have to match the edit up with the artist’s lips so it looks like lip synching. That’s just super time intensive. Then we started realizing that it’s easier to just define a basic objective, go out on the internet and see what we can find. We develop a story from there so that we only have to do a few fake [AI-assisted] words here and there, and then we’ll cut away from the video, show some footage from a music video or something like that. It makes it more efficient. 

As far as you know, is WtrCoolr the first team in digital marketing that is trying to do these false-narrative, storytelling videos, or is this something that is seen all over the internet? 

We were definitely the first to do it. There’s definitely people that are imitating it now. We see it generally in the content that exists online, especially on meme pages. It’s becoming part of the culture. 

Do you run your ideas for fan fiction narratives by the artist before you do them? 

We’re working with them, and we’re talking through ideas. There’s as much communication as they want. Some artists want to know what’s going on, but some artists just don’t care to be involved. 

It seems like, so far, no one has had any issues with being used in the videos — they even see this positively — but are you concerned about the legal implications of using someone’s likeness to endorse an artist or idea that they haven’t really endorsed?

We’re not claiming it to be true. We include disclaimers that it’s just fan fiction. So, I think if we were claiming for it to be true then that’s a different story, but that’s not what we are doing. 

That’s listed on all the page bios, but it isn’t listed on the actual video captions, right? 

It’s listed on the profiles, and then a lot of videos we just do Easter eggs at the end that make it sort of apparent that it’s a joke. 

I found the idea that you mentioned earlier to be interesting — the idea that you could test out collaborations or things without having to get the artist involved initially, whether it’s Drake and Kendrick collaborating or something else. It reminds me of when people tease a song before they slate it for official release. Do you feel that is a fair comparison? 

Totally. What TikTok did for song teasing, this has done for situation teasing. 


Symphonic Distribution has forged a partnership with AI attribution and license management company Musical AI that will allow its users to become part of a licensed dataset used in AI training. Joining the dataset is a choice that Symphonic users must opt in to, and participating artists can earn additional income for their contribution.
Musical AI’s goal is to clean up what it calls the “Wild West of AI” by providing a way to track every time an AI model uses a given song in the dataset in hopes that this will help compensate the proper copyright owner for each time their work is employed by the AI model. Symphonic is the first major rights holder to partner with Musical AI, and Musical AI’s co-founder and COO Matt Adell says his team is currently “build[ing] a new layer based on attribution and security for training AI to the benefit of all involved.”

The AI training process is one of the most contentious areas of the burgeoning tech field. To learn how to generate realistic results, generative AI models must train on millions, if not billions, of works. Often, this includes copyrighted material that the AI company has not licensed or otherwise paid for. Today, many of the world’s biggest AI companies, including ChatGPT creator OpenAI and music AI generators Suno and Udio, take the stance that ingesting this copyrighted material is a form of “fair use” and that compensation is not required. Many copyright owners, however, believe that AI companies must obtain their consent prior to using their works and that they should receive some form of compensation.

Already, this issue has sparked major legal battles in the music business. The three major music companies — Universal Music Group, Warner Music Group and Sony Music — filed a lawsuit against Suno and Udio in June, arguing that training on their copyrights without permission or compensation was a form of widespread copyright infringement. A similar argument was made in a 2023 lawsuit filed by UMG, Concord, and ABKCO against Anthropic for allegedly using their copyrighted lyrics in training without proper licenses.

According to a spokesperson for the companies, one AI firm, which wishes to remain anonymous, has already signed up to use the Symphonic-affiliated dataset, and more are likely to follow. Artists can opt in only if they fully control both their publishing and their recordings, ensuring there are no rights issues.

Licenses between AI companies, Musical AI and Symphonic will vary, but each will stipulate that a certain percentage of the revenue generated belongs to the rights holders represented in the dataset. Musical AI will create an attribution report detailing how each song in the dataset was used by the AI company, and AI companies will then pay rights holders either directly or through Musical AI, depending on the deal.
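The arrangement described here amounts to pro-rata revenue sharing driven by an attribution report. Neither Musical AI nor Symphonic has published its actual formulas, so the following is only a minimal sketch of how such a split could work, with all names, numbers and field layouts invented for illustration:

```python
# Hypothetical sketch of an attribution-based revenue split.
# The revenue-share fraction, usage counts and rights-holder names
# are all invented; the real Musical AI reports are not public.

def split_royalties(ai_revenue, revenue_share, attribution):
    """Split a revenue-share pool across rights holders in proportion
    to how often their works were used, per an attribution report.

    ai_revenue    -- total revenue the AI company earned (e.g. USD)
    revenue_share -- fraction of that revenue owed to the dataset (0-1)
    attribution   -- dict mapping rights holder -> usage count
    """
    pool = ai_revenue * revenue_share
    total_uses = sum(attribution.values())
    if total_uses == 0:
        return {holder: 0.0 for holder in attribution}
    return {
        holder: pool * uses / total_uses
        for holder, uses in attribution.items()
    }

# Example: $100,000 in AI revenue, 10% owed to the dataset,
# usage counts taken from a fictional attribution report.
payouts = split_royalties(
    100_000, 0.10,
    {"Artist A": 600, "Artist B": 300, "Artist C": 100},
)
# Artist A receives 60% of the $10,000 pool, and so on.
```

Whether payouts flow directly from the AI company or through Musical AI, the underlying arithmetic is the same: the attribution report determines each holder's share of the pool.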

“Symphonic’s catalog has clear value to AI companies who need both excellent music by passionate artists and a broad representation of genres and sounds,” says Adell. “We’re thrilled to make them our first major rights holder partner.”

“We strive to make our services the most advanced in the business to support our artists. But any new technology needs to work for our artists and clients — not against them,” adds Jorge Brea, founder and CEO of Symphonic. “By partnering with Musical AI, we’re unlocking a truly sustainable approach to generative AI that honors our community.”

We are in a transformative era of music technology.  
The music industry will continue to experience growth throughout the decade, with total music revenue reaching approximately $131 billion by 2030, according to Goldman Sachs. This lucrative business is built on streaming but is also witnessing an unprecedented surge in innovation and entrepreneurship in artificial intelligence. With over 300,000 tech and media professionals laid off since the beginning of 2023, according to TechCrunch, a new wave of talent has been funneled into music tech startups. This influx, coupled with the dramatic decrease in cloud storage costs and the global rise of developer talent, has catalyzed the emergence of many startups (over 400 that we have tracked) dedicated to redefining the music business through AI.

These music tech startups are not just changing the way music is made and released; they are reshaping the very fabric of the industry. They’re well-funded, too. After raising over $4.8 billion in 2022, music tech startups and companies raised almost $10 billion in funding in 2023, according to Digital Music News, indicating that venture capitalists and investors are highly optimistic about the future growth of music technology.  

As Matt Cartmell, CEO of Music Technology UK, said, “Our members want us to present the music tech sector as a highly investible proposition, educating investors about the opportunities that lie within. Music tech firms are also looking for innovative models of engagement with labels, DSPs and artists, as well as looking for our help to bring diverse talent into the industry, removing the barriers that continue to restrict individuals with passion and enthusiasm from a career in music technology.” 

Riding this wave of investment, several startups have already made a splash in the music AI space. Below is an overview of a few of those companies and how they’re contributing to the industry’s rapid evolution. 

Generative AI: The New Frontier  

At the heart of this revolution is generative AI, a technology rapidly becoming indispensable for creators across the spectrum. From novices to professional artists and producers, AI-powered platforms offer unprecedented opportunities for musical expression and innovation. It’s now possible for users without any formal musical training to craft songs in many genres, effectively democratizing music production. Music fans and content creators can utilize products that score their social content, while seasoned musicians can use these tools to enhance their creative workflows.

“I like to think of generative AI as a new wave of musical instruments,” says Dr. Maya Ackerman, founder of Wave AI, a company that has introduced tools to aid human songwriters. “The most useful AI tools for artists are those that the musicians can ‘play,’ meaning that the musician is in full control, using the AI to aid in their self-expression rather than be hindered.” These tools focus on generating vocal melodies, chords and lyrics, emphasizing collaboration with musicians rather than replacing them. 

For non-professionals, one ambitious company, Beatoven.ai, is building a product to generate music in a host of different ways for many different use cases. “(Users) can get a piece of music generated and customize it as per their content without much knowledge of music,” says Siddharth Bhardwaj, co-founder/CTO of Beatoven. “Going forward, we are working on capturing their intent in the form of multimodal input (text, video, images and audio) and getting them as close to their requirements as possible.” 

The concept of the “artist as an avatar,” which draws inspiration from the gaming community, has become increasingly popular. Companies like CreateSafe, the startup powering Grimes’ elf.tech, have built generative audio models that enable anyone to either license the voice of a well-known artist or replicate their own voice. This innovative approach also reflects the adaptive and forward-thinking nature of artists. Established artists like deadmau5, Richie Hawtin and Ólafur Arnalds have also delved into AI initiatives and investments. Furthermore, a few innovators are crafting AI music tools tailored for the gaming community, potentially paving the way for the fusion of music and gaming through real-time personalization and adaptive soundtracks during gameplay.

The Community and Collaboration Ecosystem 

The journey of music creation is often fraught with challenges, including tedious workflows and a sense of isolation. Recognizing this, several startups are focusing on building communities around music creation and feedback. The Singapore-based music tech giant BandLab recently announced that it has reached a user base of 100 million, making it one of the biggest success stories in this arena. “Our strength lies in our comprehensive approach to our audience’s needs. From the moment of inspiration to distribution, our platform is designed to be a complete toolkit for music creators and their journey,” says founder Meng Ru Kuok. Several startups are pioneering spaces where creators can collaborate, share insights and support each other, heralding a new era of collective creativity.

A Toolkit for Every Aspect of Music Production 

This landscape of music tech startups offers a comprehensive toolkit that caters to every facet of the music creation process:

Track and Stem Organization. Platforms like Audioshake simplify the management of tracks and stems, streamlining the production process.  

Vocal & Instrument Addition. These technologies allow for the addition of any human voice or instrument sound to a recording environment, expanding the possibilities for frictionless creativity. 

Sound Libraries. Services provide or generate extensive libraries of samples, beats and sounds, offering artists a rich palette.  

Mix and Master. The process of mixing and mastering audio has historically relied heavily on human involvement. However, several startups are utilizing AI technology to automate these services for a more comprehensive audio production experience. Others also offer the ability to convert stereo songs to spatial audio. 

Remixing and Freelance Musicianship. Many platforms now offer creative and innovative solutions for remixing music. Additionally, some platforms allow users to easily source and connect with talented artists, session musicians and other music professionals. Need an orchestra? There are tech platforms that can arrange and source one for you remotely.  

The Future of Music Tech: A Vision of Inclusivity and Innovation 

The barriers that once kept people from participating in music creation are falling away. Now, anyone with a passion for sound can create content, engage with fans, find a community and even monetize their work. This more accessible and collaborative music ecosystem offers an exciting glimpse into a future where anyone can participate in the art of creation. The explosion of creators, facilitated by these technologies, also suggests a new economic opportunity for the industry to service this growing creator class.  

Drew Thurlow is the founder of Opening Ceremony Media, where he advises music and music tech companies. Previously, he was senior vp of A&R at Sony Music and director of artist partnerships & industry relations at Pandora. His first book, about music and AI, will be released by Routledge in early 2026.

Rufy Anam Ghazi is a seasoned music business professional with over eight years of experience in product development, data analysis, research, business strategy, and partnerships. Known for her data-driven decision-making and innovative approach, she has successfully led product development, market analysis, and strategic growth initiatives, fostering strong industry relationships.

In March of 2023, as artificial intelligence barnstormed through the headlines, Goldman Sachs published a report on “the enormous economic potential of generative AI.” The writers explored the possibility of a “productivity boom,” comparable to those that followed seismic technological shifts like the mass adoption of personal computers.
Roughly 15 months later, Goldman Sachs published another paper on AI, this time with a sharply different tone. This one sported a blunt title — “Gen AI: Too Much Spend, Too Little Benefit?” — and it included harsh assessments from executives like Jim Covello, Goldman’s head of global equity research. “AI bulls seem to just trust that use cases will proliferate as the technology evolves,” Covello said. “But 18 months after the introduction of generative AI to the world, not one truly transformative — let alone cost-effective — application has been found.”

This skepticism has been echoed elsewhere. Daron Acemoglu, a prominent M.I.T. scholar, published a paper in May arguing that AI would lead to “much more modest productivity effects than most commentators and economists have claimed.” David Cahn, a partner at Sequoia Capital, warned in June that “we need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we’re all going to get rich quick.” 

“I’m worried that we’re getting this hype cycle going by measuring aspiration and calling it adoption,” says Kristina McElheran, an assistant professor of strategic management at the University of Toronto who recently published a paper examining businesses’ attempts to implement AI technology. “Use is harder than aspiration.” 

The music industry is no exception. A recent survey of music producers conducted by Tracklib, a company that supplies artists with pre-cleared samples, found that 75% of producers said they’re not using AI to make music. Among the 25% who were playing around with the technology, the most common use cases were to help with highly technical and definitely unsexy processes: stem separation (73.9%) and mastering (45.5%). (“Currently, AI has shown the most promise in making existing processes — like coding — more efficient,” Covello noted in Goldman’s report.) Another multi-country survey published in May by the Reuters Institute found that just 3% of people have used AI for making audio.  

At the moment, people use AI products “to do their homework or write their emails,” says Hanna Kahlert, a cultural trends analyst at MIDiA Research, which recently conducted its own survey about AI technology adoption. “But they aren’t interested in it as a creative solution.”

When it comes to assessing AI’s impact — and the speed with which it would remake every facet of society — some recalibration was probably inevitable. “Around the launch of ChatGPT, there was so much excitement and promise, especially because this is a technology that we talk about in pop culture and see in our movies and our TV shows,” says Manav Raj, an assistant professor of management at the University of Pennsylvania’s Wharton School, who studies firms’ responses to technological change. “It was really easy to start thinking about how it could be really transformative.”

“Some of that excitement might have been a little frothy,” he continues. “Even if this is a really important and big technology, it takes time for us to see the effects of these kinds of technological changes in markets.” This was famously true with the development of computers — in 1987, the economist Robert Solow joked, “You can see the computer age everywhere but in the productivity statistics,” a phenomenon later dubbed “the productivity paradox.”

It also takes time to settle the legal and regulatory framework governing AI technologies, which will presumably influence the magnitude of their effects as well. Earlier this year, the major labels sued two genAI music platforms, Suno and Udio, accusing them of copyright infringement on a mass scale; in recently filed court documents, the companies said their activities were lawful under the doctrine of fair use, and that the major labels were just trying to eliminate “a threat to their market share.” Similar suits against AI companies have also been filed in other creative industries. 

When McElheran surveyed manufacturing firms, however, few cited regulatory uncertainty as a barrier to AI use. She points out that “they may have had bigger fish to fry, like no use case.” A U.S. Census Bureau survey of businesses published in March found that 84.2% of respondents hadn’t used AI in the previous two weeks, and 80.9% of the firms that weren’t planning to implement AI in the next six months believe it “is not applicable to this business.” 

Tracklib’s survey found something similar to McElheran’s. Only around 10% of respondents said concern about copyright was a reason they wouldn’t use AI tools. Instead, Tracklib’s results indicated that producers’ most common objections to using AI were moral, not legal — explanations like, “I want my art to be my own.” 

“Generative AI comes up against this wall where it’s so easy, it’s just a push of a button,” Kahlert says. “It’s a fun gimmick, but there’s no real investment on the part of the user, so there’s not much value that they actually place in the outcome.” 

In contrast, MIDiA’s survey found that respondents were interested in AI tech that can help them modify tracks by adjusting tempo — a popular TikTok alteration that can be done without AI — and customizing song lyrics. This interest was especially pronounced among younger music fans: Over a third of 20-to-24-year-olds were intrigued by AI tools that could help them play with tempo, and around 20% of that age group liked the idea of being able to personalize song lyrics.

Antony Demekhin, co-founder of the AI music company Tuney, sees a market for “creative tools” that enable “making, editing, or remixing beats and songs without using a complicated DAW, while still giving users a feeling of ownership over the output.”

“Up until recently,” he adds, “the addressable market for those kinds of tools has been small because the number of producers that use professional production software has been limited, so early-stage tech investors don’t frequently back stuff like that.” 

Demekhin launched Tuney in 2020, well before the general public was thinking about products like ChatGPT. In the wake of that platform’s explosion, “Investors started throwing money around,” he recalls. At the same time, “nobody knew what questions to ask. What is this trained on? Are you exposed to legal risk? How easy would it be for Meta to replicate this and then make it available on Instagram?” 

Today, investors are far better informed, and conversations with them sound very different, Demekhin says. “Cooler heads are prevailing,” he continues. “Now there’s going to be a whole wave of companies that make more sense because people have figured out where these technologies can be useful — and where they can’t.”

AI music firms Suno and Udio are firing back with their first responses to sweeping lawsuits filed by the major record labels, arguing that they were free to use copyrighted songs to train their models and claiming the music industry is abusing intellectual property to crush competition.
In legal filings on Thursday, the two firms admitted to using proprietary materials to create their artificial intelligence, with Suno saying it was “no secret” that the company had ingested “essentially all music files of reasonable quality that are accessible on the open Internet.”

But both companies said that such use was clearly lawful under copyright’s fair use doctrine, which allows for the reuse of existing materials to create new works.

“What Udio has done — use existing sound recordings as data to mine and analyze for the purpose of identifying patterns in the sounds of various musical styles, all to enable people to make their own new creations — is a quintessential ‘fair use,’” Udio wrote in its filing. “Plaintiffs’ contrary vision is fundamentally inconsistent with the law and its underlying values.”

The filings, lodged by the same law firm (Latham & Watkins) that reps both companies, go beyond the normal “answer” to a lawsuit — typically a sparse document that simply denies each claim. Instead, Suno and Udio went on offense, with extended introductions that attempt to frame the narrative of a looming legal battle that could take years to resolve.

In doing so, they took square aim at the major labels (Universal Music Group, Warner Music Group and Sony Music Entertainment) that filed the case in June — a group that they said “dominates the music industry” and is now abusing copyright law to maintain that power.

“What the major record labels really don’t want is competition,” Suno wrote in its filing. “Where Suno sees musicians, teachers and everyday people using a new tool to create original music, the labels see a threat to their market share.”

Suno and Udio have quickly become two of the most important players in the emerging field of AI-generated music. Udio has already produced what could be considered an AI-generated hit with “BBL Drizzy,” a parody track popularized with a remix by super-producer Metro Boomin and later sampled by Drake himself. And as of May, Suno had raised a total of $125 million in funding to create what Rolling Stone called a “ChatGPT for music.”

In June, the major labels sued both companies, claiming they had infringed copyrighted music on an “unimaginable scale” to train their models. The lawsuits accused the two firms of “trampling the rights of copyright owners” as part of a “mad dash to become the dominant AI music generation service.”

The case followed similar lawsuits filed by book authors, visual artists, newspaper publishers and other creative industries, which collectively pose what could be a trillion-dollar legal question: Is it infringement to use vast troves of proprietary works to build an AI model that spits out new creations? Or is it just a form of legal fair use, transforming all those old works into something entirely new?

In Thursday’s response, Suno and Udio argued unequivocally that it was the latter. They likened their machines to a “human musician” who had played earlier songs to learn the “building blocks of music” — and then used what they had learned to create entirely new works in existing styles.

“Those genres and styles — the recognizable sounds of opera, or jazz, or rap music — are not something that anyone owns,” Suno wrote in its filing. “Our intellectual property laws have always been carefully calibrated to avoid allowing anyone to monopolize a form of artistic expression, whether a sonnet or a pop song.”

The lawsuits from the labels, Suno and Udio say, are thus an abuse of copyright law, aimed at claiming improper ownership over “entire genres of music.” They called the litigation an “attempt to misuse IP rights to shield incumbents from competition and reduce the universe of people who are equipped to create new expression.”

Both filings hint at how Suno and Udio will make their fair use arguments. The two companies say the cases will not really turn on the “inputs” — the millions of songs used to train the models — but rather on the “outputs,” or the new songs that are created. While the labels are claiming that the inputs were illegally copied, the AI firms say the music companies “explicitly disavow” that any output was a copycat.

“That concession will ultimately prove fatal to plaintiffs’ claims,” Suno wrote in its filing. “It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product.”

A spokeswoman and an attorney for the labels did not immediately return a request for comment.

A bipartisan group of U.S. senators introduced the highly anticipated NO FAKES Act on Wednesday (July 31), which aims to protect artists and others from AI deepfakes and other nonconsensual replicas of their voices, images and likenesses.
If passed, the legislation would create federal intellectual property protections for the so-called right of publicity for the first time, which restricts how someone’s name, image, likeness and voice can be used without consent. Currently, such rights are only protected at the state level, leading to a patchwork of different rules across the country.

Unlike many existing state-law systems, the federal right that the NO FAKES Act would create would not expire at death and could be controlled by a person’s heirs for 70 years after their passing. To balance personal publicity rights and the First Amendment right to free speech, the NO FAKES Act also includes specific carveouts for replicas used in news coverage, parody, historical works or criticism. 

Non-consensual AI deepfakes are of great concern to the music business, given that so many of its top-billing talents have already been exploited in this way. Taylor Swift, for example, was the subject of a number of sexually explicit AI deepfakes; the late Tupac Shakur‘s voice was recently deepfaked by fellow rapper Drake in his Kendrick Lamar diss track “Taylor Made Freestyle,” which was posted, and then deleted, on social media; and Drake and The Weeknd had their own voices cloned by AI without their permission in the TikTok viral track “Heart On My Sleeve.”

The NO FAKES Act was first released as a draft bill by the same group of lawmakers — Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) — last October, and its formal introduction to the U.S. Senate builds on the same principles laid out in the No AI FRAUD Act, a similar bill introduced to the U.S. House of Representatives earlier this year.

While the music industry is overwhelmingly supportive of the creation of a federal right of publicity, there are some detractors in other creative fields, including film/TV, who pose a threat to the passage of bills like the NO FAKES Act. In a speech during Grammy week earlier this year, National Music Publishers Association (NMPA) president/CEO David Israelite explained that “[a federal right of publicity] does not have a good chance… Within the copyright community we don’t agree. … Guess who is bigger than music? Film and TV.” Still, the introduction of the NO FAKES Act and the No AI FRAUD Act proves there is bicameral and bipartisan support for the idea.

Earlier this year, proponents for strengthened publicity rights laws celebrated a win on the state level in their fight to regulate AI deepfakes with the passage of the ELVIS Act in Tennessee. The landmark law greatly expanded protections for artists and others in the state, and explicitly protected voices for the first time.

Though it was celebrated by a who’s who of the music business — from the Recording Academy, Recording Industry Association of America (RIAA), Human Artistry Campaign, NMPA and more — the act also drew a few skeptics, like Professor Jennifer Rothman of the University of Pennsylvania law school, who raised concerns that the law could be an “overreaction” that might open up tribute bands, interpolations or the sharing of unauthorized celebrity photos to lawsuits.

“The Human Artistry Campaign applauds Senators Coons, Blackburn, Klobuchar and Tillis for crafting strong legislation establishing a fundamental right putting every American in control of their own voices and faces against a new onslaught of highly realistic voice clones and deepfakes,” Dr. Moiya McTier, senior advisor of the Human Artistry Campaign — a global initiative for responsible AI use, supported by 185 organizations in the music business and beyond — says of the bill. “The NO FAKES Act will help protect people, culture and art — with clear protections and exceptions for the public interest and free speech. We urge the full Senate to prioritize and pass this vital, bipartisan legislation. The abusive deepfake ecosystem online destroys more lives and generates more victims every day — Americans need these protections now.”

The introduction of the bill is also celebrated by American Federation of Musicians (AFM), ASCAP, Artist Rights Alliance (ARA), American Association of Independent Music (A2IM), Association of American Publishers, Black Music Action Coalition (BMAC), BMI, Fan Alliance, The Azoff Company co-president Susan Genco, Nashville Songwriters Association International (NSAI), National Association of Voice Actors (NAVA), National Independent Talent Organization, National Music Publishers’ Association (NMPA), Organización de Voces Unidas (OVU), Production Music Association, Recording Academy, Recording Industry Association of America (RIAA), SAG-AFTRA, SESAC Music Group, Songwriters of North America (SoNA), SoundExchange, United Talent Agency (UTA) and WME.

The lawsuits filed by the major labels against the AI companies Suno and Udio could be the most important cases for the music business since the Supreme Court’s Grokster decision, as I explained in last week’s Follow the Money column. The outcomes are hard to predict, however, because the central issue will be “fair use,” a U.S. legal doctrine shaped by judicial decisions that involves famously — sometimes notoriously — nuanced determinations about art and appropriation. And although most creators focus more on issues around generative AI “outputs” — music they’ll have to compete with or songs that might sound similar to theirs — these cases involve the legality of copying music for the purposes of training AI.
Neither Suno nor Udio has said how it trained its AI programs, but both have essentially said that copying music in order to do so would qualify as fair use. Determining that could touch on the development of Google Books, the compatibility of the Android operating system, and even a Supreme Court case involving Prince, Andy Warhol and Vanity Fair. It’s the kind of fair use case that once inspired a judge to call copyright “the metaphysics of the law.” So let’s get metaphysical! 

Fair use essentially provides exceptions to copyright, usually for the purpose of free expression, allowing for quotation (as in book or film reviews) and parody (to comment on art), among other things. (The iconic example in music is the Supreme Court case over 2 Live Crew’s parody of Roy Orbison’s “Oh, Pretty Woman.”) These determinations involve a four-factor test that weighs “the purpose and character of the use”; “the nature of the copyrighted work”; how much and how important a part of the work is used; and the effect of the use upon the potential market value of the copyrighted work. Over the last decade or so, though, the concept of “transformative use,” derived from the first factor, expanded in a way that allowed the development of Google Books (the copying of books to create a database and excerpts) and the use of some Oracle API code in Google’s Android system — which could arguably be said to go beyond the origins of the concept.

Could copying music for the purposes of machine learning qualify as well?  

In a paper on the topic, “Fair Use in the U.S. Redux: Reformed or Still Deformed,” the influential Columbia Law School professor Jane Ginsburg suggests that the influence of the transformative use argument might have reached its peak. (I am oversimplifying a very smart paper, and if you are interested in this topic, you should read it.)  

The Supreme Court decision on the Google-Oracle case involved part of a computer program, far from the creative “core” of copyright, and music recordings would presumably be judged differently. The Supreme Court also made a very different decision last year in a case that pitted the Andy Warhol Foundation for the Visual Arts against prominent rock photographer Lynn Goldsmith. The case involved an Andy Warhol silkscreen of Prince, based on a Goldsmith photograph that the magazine Vanity Fair had licensed for Warhol to use. Warhol used the photo for an entire series — which Goldsmith only found out about when the magazine used the silkscreen image again for a commemorative issue after Prince died.

On the surface, this seemed to cast the Supreme Court Justices as modern art critics, in a position to judge all appropriation art as infringing. But the case wasn’t about whether Warhol’s silkscreen inherently infringed Goldsmith’s copyright but about whether it infringed it for licensed use by a magazine, in a way where it could compete with the original photo. There was a limit to transformative use, after all. “The same copying,” the court decided, “may be fair when used for one purpose but not another.”  

So it might constitute fair use for Google to copy entire books for the purpose of creating a searchable database about those books with excerpts from them, as it did for Google Books — but not necessarily for Suno or Udio to copy terabytes of recordings to spur the creation of new works to compete with them, especially if it results in similar works. In the first case, it’s hard to find real economic harm — there will never be much of a market for licensing book databases — but there’s already a nascent market for licensing music to train AI programs. And, unlike Google Books, the AI programs are designed to make music to compete with the recordings used to train them. Obviously, licensing music to train an AI program is what we might call a secondary use — but so is turning a book into a film, and no one doubts they need permission for that.  

All of this might seem like I think the major labels will win their cases, but that’s a tough call — the truth is that I just don’t think they’ll lose. And there’s a lot of space between victory and defeat here. If one of these cases ends up going to the Supreme Court — and if one of these doesn’t, another case about AI training surely will within the next few years — the decision might be more limited than either side is looking for, since the court has tended to step lightly around technology issues.  

It’s also possible that the decision could depend on whether the outputs that result from all of this training are similar enough to copyrighted works to qualify, or plausibly qualify, as infringing. Both label lawsuits are full of such examples, presumably because that could make a difference. These cases are about the legality of AI inputs, but a fair use determination on that issue could easily involve whether those inputs lead to infringing output.  

In the end, Ginsburg suggests, “system designers may need to disable features that would allow users to create recognizable copies.” Except that — let’s face it — isn’t that really part of the fun? Sure, AI music creation might eventually grow to maturity as some kind of art form — it already has enormous practical value for songwriters — but for ordinary consumers it’s still hard to beat Frank Sinatra singing Lil Jon’s “Get Low.” Of course, that could put a significant burden on AI companies — with severe consequences for crossing a line that won’t always be obvious. It might be easier to just license the content they need. The next questions, which will be the subject of future columns, involve exactly what they need to license and how they might do that, since it won’t be easy to get all the rights they need — or in some cases even agree on who controls them.