
On Monday (June 24), the three major music companies filed lawsuits against artificial intelligence (AI) music startups Suno and Udio, alleging the widespread infringement of copyrighted sound recordings “at an almost unimaginable scale.” Spearheaded by the RIAA, the two similar lawsuits arrived four days after Billboard first reported that the labels were seriously considering legal action against the two startups.
Filed by plaintiffs Sony Music, Warner Music Group and Universal Music Group, the lawsuits allege that Suno and Udio have unlawfully copied the labels’ sound recordings to train their AI models to generate music that could “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”

Hours later, Suno CEO Mikey Shulman responded to the lawsuit with a statement sent to Billboard. “Suno’s mission is to make it possible for everyone to make music,” he said. “Our technology is transformative; it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content. That is why we don’t allow user prompts that reference specific artists. We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.”

An RIAA spokesperson fired back at Shulman’s comment, saying: “Suno continues to dodge the basic question: what sound recordings have they illegally copied? In an apparent attempt to deceive working artists, rightsholders, and the media about its technology, Suno refuses to address the fact that its service has literally been caught on tape — as part of the evidence in this case — doing what Mr. Shulman says his company doesn’t do: memorizing and regurgitating the art made by humans. Winners of the streaming era worked cooperatively with artists and rightsholders to properly license music. The losers did exactly what Suno and Udio are doing now.”

Udio responded on Tuesday (June 25) with a lengthy statement posted to the company’s website. You can read it in full below.

In the past two years, AI has become a powerful tool for creative expression across many media – from text to images to film, and now music. At Udio, our mission is to empower artists of all kinds to create extraordinary music. In our young life as a company, we have sat in the studios of some of the world’s greatest musicians, workshopped lyrics with up-and-coming songwriters, and watched as millions of users created extraordinary new music, ranging from the funny to the profound.

We have heard from a talented musician who, after losing the ability to use his hands, is now making music again. Producers have sampled AI-generated tracks to create hit songs, like ‘BBL Drizzy’, and everyday music-lovers have used the technology to express the gamut of human emotions from love to sorrow to joy. Groundbreaking technologies entail change and uncertainty. Let us offer some insight into how our technology works.

Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has “listened” to and learned from a large collection of recorded music.

The goal of model training is to develop an understanding of musical ideas — the basic building blocks of musical expression that are owned by no one. Our system is explicitly designed to create music reflecting new musical ideas. We are completely uninterested in reproducing content in our training set, and in fact, have implemented and continue to refine state-of-the-art filters to ensure our model does not reproduce copyrighted works or artists’ voices.

We stand behind our technology and believe that generative AI will become a mainstay of modern society.

Virtually every new technological development in music has initially been greeted with apprehension, but has ultimately proven to be a boon for artists, record companies, music publishers, technologists, and the public at large. Synthesizers, drum machines, digital recording technology, and the sound recording itself are all examples of once-controversial music creation tools that were feared in their early days. Yet each of these innovations ultimately expanded music as an art and as a business, leading to entirely new genres of music and billions of dollars in the pockets of artists, songwriters and the record labels and music publishers who profit from their creations.

We know that many musicians — especially the next generation — are eager to use AI in their creative workflows. In the near future, artists will compose music alongside their fans, amateur musicians will create entirely new musical genres, and talented creators — regardless of means — will be able to scale the heights of the music industry.

The future of music will see more creative expression than ever before. Let us use this watershed moment in technology to expand the circle of creators, empower artists, and celebrate human creativity.

A few weeks back, a member of the team at my company, Ircam Amplify, signed up for one of the many AI music generators available online and entered a brief prompt for a song. Within minutes, a new track was generated and promptly uploaded to a distribution platform. In just a couple of hours, that song, in which no human creativity played a part, was available on various streaming platforms. We diligently took action to remove the track from all of them, but the experiment highlighted a significant point.
It is now that simple! My aim here is not to pass judgment on whether AI-generated music is a good or a bad thing — from that perspective, we are neutral — but we think it is important to emphasize that, while the process is easy and cost-effective, there are absolutely no safeguards currently in place to ensure that consumers know if the music they are listening to is AI-generated. Consequently, they cannot make an informed choice about whether they want to listen to such music. 

With AI-generated songs inundating digital platforms, streaming services require vast technological resources to manage the volume of tracks, diverting attention away from the promotion of music created by “human” artists and diluting the royalty pool. 

Like it or not, AI is here to stay, and more and more songs will find their way onto streaming platforms given how quick and easy the process is. We already know that there are AI-generated music “farms” flooding streaming platforms; over 25 million tracks were recently removed by Deezer, and it is reasonable to speculate that a significant proportion of these were AI-generated. 

In the interest of transparency, consumers surely deserve to know whether the music they are tuning into is the genuine product of human creativity or derived from computer algorithms. But how can AI-generated tracks be easily distinguished? Solutions already exist. At Ircam Amplify, we offer a series of audio tools, from spatial sound to vocal separation, that cover the full audio supply chain. One of the latest technologies we have launched is a detector for AI-generated music, designed to help rights holders, as well as platforms, identify such tracks. Through a series of benchmarks, we have been able to determine the “fingerprints” of AI models and apply them to their output to identify tracks coming from AI-music factories.

The purpose of any solution should be to support the whole music ecosystem by providing a technical answer to a real problem while contributing to a more fluid and transparent digital music market. 

Discussions around transparency and AI are gaining traction all around the world. From Tokyo to Washington, D.C., from Brussels to London, policymakers are considering new legislation that would require platforms to identify AI-generated content. That is the second recommendation in the recent report “Artificial Intelligence and the Music Industry — Master or Servant?” published by the UK Parliament. 

Consumers are also demanding it. A recent survey of more than 2,000 people, conducted by Whitestone Insight for UK Music, emphatically revealed that more than four out of five respondents (83%) agree that if AI technology has been used to create a song, it should be clearly labeled as such.

Similarly, a survey conducted by Goldmedia in 2023 on behalf of rights societies GEMA and SACEM found that 89% of the collective management organizations’ members expressed a desire for AI-generated music tracks and other works to be clearly identified. 

These overwhelming numbers tell us that concerns about AI are prevalent within creative circles and are also shared by consumers. There are multiple calls for the ethical use of AI, mostly originating from rights holders — artists, record labels, music publishers, collective management organizations, etc. — and transparency is usually at the core of these initiatives. 

Simply put, if there’s AI in the recipe, then it should be flagged. If we can collectively find a way to ensure that AI-generated music is identified, then we will have made serious progress towards transparency. 

Nathalie Birocheau is CEO of Ircam Amplify. A graduate engineer (Centrale-Supélec) and former strategy consultant, she has led several major cultural and media projects, notably within la Maison de la Radio. She became Deputy Director of France Info in 2016, where she led the creation of the multiplatform news service franceinfo.

Futureverse, an AI music company co-founded by music technology veteran Shara Senderoff, has announced the alpha launch of Jen, its text-to-music AI model. Available for anyone to use on the company’s website, Jen is an AI model that can be safely used by creators, given that it was trained on 40 different fully licensed catalogs containing about 150 million works in total.
The company’s co-founders, Senderoff and Aaron McDonald, first teased Jen’s launch by releasing a research paper and conducting an interview with Billboard in August 2023. In the interview, Senderoff explained that “Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool.”

Some of Jen’s capabilities, available at its alpha launch, include the ability to generate 10- to 45-second song snippets using text prompts. To lengthen a song to a full 3:30 duration, users can employ its “continuation” feature to re-prompt and add additional segments. With a focus on “its commitment to transparency, compensation and copyright identification,” as its press release states, Jen has made much of its inner workings available to the public via its research papers, including that the model uses “latent diffusion,” the same process used by Stable Diffusion, DALL-E 2 and Imagen to create high-quality images. (It is unclear which other music AI models also use latent diffusion, given that many do not share this information publicly.)

Additionally, when works are created with Jen, users receive a JENUINE indicator, verifying that the song was made with Jen at a specific timestamp. To be more specific, this indicator is a cryptographic hash that is then recorded on The Root Network blockchain.
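
To make that mechanism concrete, here is a minimal sketch of what a hash-based provenance record can look like, assuming the general approach of hashing the rendered audio together with a creation timestamp. The function and field names are hypothetical; this is not Futureverse’s published JENUINE format or The Root Network’s actual API.

import hashlib
import json
import time

def make_provenance_record(audio_bytes: bytes, generator: str = "jen-alpha") -> dict:
    # Hypothetical sketch: hash the rendered audio together with a creation
    # timestamp so the result could later be anchored on a public ledger.
    # This is NOT Futureverse's published JENUINE format.
    created_at = int(time.time())  # Unix timestamp of generation
    digest = hashlib.sha256(audio_bytes + str(created_at).encode()).hexdigest()
    return {
        "generator": generator,    # which model produced the audio (assumed field)
        "created_at": created_at,  # when it was produced
        "sha256": digest,          # fingerprint of this exact audio render
    }

# Demo with placeholder bytes standing in for a rendered WAV file.
record = make_provenance_record(b"\x00" * 1024)
print(json.dumps(record, indent=2))

Only the short digest would need to be written to a public ledger: anyone who later holds the same audio file and the record can recompute the hash and confirm the file is unchanged since the recorded timestamp.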

In an effort to work more closely with the music business, Futureverse brought on APG founder/CEO Mike Caren as a founding partner in fall 2023. While its mid-2024 release date makes it a late entrant in the music AI space, the company attributes this delay to making sure its 40 licenses were secured.

For now, Futureverse has declined to comment on which songs are included in Jen’s overall training catalog, but a representative for the company says the 40 catalogs include a number of production libraries. Futureverse says it is also in talks with all major music companies and will have more licenses secured soon for Jen’s beta launch, expected in September 2024. Some licensing partners could be announced as soon as four to six weeks after the alpha launch.

In September, Futureverse has more capabilities planned, including longer initial song results, inpainting (the process of filling in missing sections or gaps in a musical piece) and a capability the company calls its “StyleFilter,” allowing users to upload an audio snippet of an instrument or track and then change the genre or timbre of it at the click of a button.

Also in September, Futureverse plans to launch a beat marketplace called R3CORD to go along with JEN. This will let JEN users upload whatever they produce with JEN to the marketplace and sell the works to others.

So far, the U.S. Copyright Office has advised that fully AI-generated creations are not protected by copyright. Instead, they are considered “public domain” works and are not eligible to earn royalties the way copyrighted works do, though any human additions made to an AI-assisted work can be protected by copyright. (Already, this guidance has been applied in the music business in the case of Drake and Sexyy Red’s “U My Everything,” which sampled the fully AI-generated sound recording “BBL Drizzy.”)

“We have reached a defining moment for the future of the music industry. To ensure artistry maintains the value it deserves, we must commit to honor the creativity and copyrights of the past, while embracing the tools that will shape the next generation of music creation,” says Senderoff. “Jen empowers creators with tools that enhance their creative process. Jen is a collaborator; a friend in the studio that works with you to ideate and iterate. As we bring Jen to market, we are partnering with music rights holders and aligning with the industry at large to deliver what we believe is the most ethical approach to generative AI.” 

“We’re incredibly proud of the work that’s gone into building Jen, from our research and technology to a strategy that we continue to develop with artists’ rights top of mind,” says Caren. “We welcome an open dialogue for those who’ve yet to meet Jen. There’s a seat at the table for every rightsholder to participate in building this next chapter of the music industry.”

When Snowd4y, a Toronto parody rapper, released the track “Wah Gwan Delilah” featuring Drake via SoundCloud on Monday (June 3), it instantly went viral.
“This has to be AI,” one commenter wrote about the song. It was a sentiment shared by many others, particularly given the track’s ridiculous lyrics and the off-kilter audio quality of Drake’s vocals.

To date, the two rappers have not confirmed or denied the AI rumor. Though Drake posted the track on his Instagram story, it is hardly a confirmation that the vocals in question are AI-free. (As we learned during Drake’s recent beef with Kendrick Lamar, the rapper is not afraid of deep-faking voices).

To try to get to the bottom of the “Wah Gwan Delilah” mystery, Billboard contacted two companies that specialize in AI audio detection to review the track. The answer, unfortunately, was not too satisfying.

“Our first analysis reveals SOME traces of [generative] AI, but there seems to be a lot of mix involved,” wrote Romain Simiand, chief product officer of Ircam Amplify, a French company that creates audio tools for rights holders, in an email response.

Larry Mills, senior vp of sales at Pex, which specializes in tracking and monetizing music usage across the web, also found mixed results. He told Billboard the Pex research and development team “ran the song through [their] VoiceID matcher” and that “Drake’s voice on the ‘Wah Gwan Delilah’ verse does not match as closely to Drake’s voice…[as his voice on] official releases [does], but it is close enough to confirm it could be Drake’s own voice or a good AI copy.” Notably, Pex’s VoiceID tool alone is not enough to definitively distinguish between real and AI voices, but its detection of differences between the singer/rapper’s voice on “Wah Gwan Delilah” and his other, officially released songs could indicate some level of AI manipulation.

A representative for Drake did not immediately respond to Billboard’s request for comment.

How to Screen for AI in Songs

There are multiple types of tools that are currently used to distinguish between AI-generated music and human-made music, but these nascent products are still developing and not definitive. As Pex’s Jakub Galka recently wrote in a company blog post about the topic, “Identifying AI-generated music [is] a particularly difficult task.”

Some detectors, like Ircam’s, identify AI music using “artifact detection,” meaning they look for elements of a work that diverge from what a real recording or performance would contain. A clear example of this can be seen with AI-generated images: early AI images often contained hands with extra or misshapen fingers, and some detection tools exist to pick up on exactly these inaccuracies.

Other detectors rely on reading watermarks embedded in the AI-generated music. While these watermarks are not perceptible to the human ear, they can be detected by certain tools. Galka writes that “since watermarking is intended to be discoverable by watermark detection algorithms, such algorithms can also be used to show how to remove or modify the watermark embedded in audio so it is no longer discoverable” — something he sees as a major flaw with this system of detection.
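
For intuition, here is a toy spread-spectrum sketch of how an inaudible watermark can be embedded and then found again by correlation. The seed, amplitude and threshold are arbitrary illustration values, and no vendor’s real scheme is this simple.

import numpy as np

def embed_watermark(audio: np.ndarray, seed: int, strength: float = 0.002) -> np.ndarray:
    # Toy spread-spectrum watermark: add a low-amplitude pseudorandom +/-1
    # sequence derived from a secret seed. Real systems are far more robust
    # (psychoacoustic shaping, resistance to re-encoding, and so on).
    rng = np.random.default_rng(seed)
    mark = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * mark

def detect_watermark(audio: np.ndarray, seed: int, threshold: float = 0.001) -> bool:
    # Correlate the audio against the same secret sequence; a marked signal
    # scores near the embedding strength, an unmarked one scores near zero.
    rng = np.random.default_rng(seed)
    mark = rng.choice([-1.0, 1.0], size=audio.shape)
    return float(np.mean(audio * mark)) > threshold

# Demo on five seconds of synthetic "audio" (white noise at 44.1 kHz).
clean = np.random.default_rng(0).normal(scale=0.1, size=5 * 44100)
marked = embed_watermark(clean, seed=1234)
print(detect_watermark(marked, seed=1234))  # True: watermark detected
print(detect_watermark(clean, seed=1234))   # False: no watermark present

The weakness Galka describes shows up even in this toy version: anyone who can run the detector with the right key can also regenerate the same sequence and subtract it back out of the audio.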

Pex’s method of using VoiceID, which can determine if a singer matches between multiple recordings, can also be useful in AI detection, though it is not a clear-cut answer. This technology is particularly helpful when users take to the internet and release random tracks with Drake vocals, whether they’re leaked songs or AI deepfakes. With VoiceID, Pex can tell a rights holder that their voice was detected on another track that might not be an official release from them.
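
Conceptually, a voice-match check of this kind comes down to comparing speaker embeddings (vectors that summarize a voice’s characteristics) with a similarity score and a threshold. The sketch below uses made-up numbers purely to show the decision logic; it is not Pex’s VoiceID, and the threshold is arbitrary.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two voice embeddings (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def voice_match_verdict(reference: np.ndarray, candidate: np.ndarray,
                        threshold: float = 0.85) -> str:
    # Rough decision rule: high similarity suggests the same voice or a close
    # AI copy; the threshold and wording here are illustrative only.
    score = cosine_similarity(reference, candidate)
    if score >= threshold:
        return f"possible match ({score:.2f}): same voice or a close AI copy"
    return f"no match ({score:.2f}): likely a different voice"

# Made-up four-dimensional embeddings; real speaker embeddings are produced by
# a neural model from audio and have hundreds of dimensions.
official_release = np.array([0.9, 0.1, 0.4, 0.2])  # built from verified releases
unverified_track = np.array([0.8, 0.2, 0.5, 0.1])  # voice isolated from a new upload
print(voice_match_verdict(official_release, unverified_track))

As the reporting above suggests, a high score alone cannot say whether a near-match is the real artist or a convincing clone, which is why this kind of check is typically paired with other signals, such as content recognition against official releases.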

When VoiceID is paired with the company’s other product, Automatic Content Recognition (ACR), it can sometimes determine if a song uses AI vocals or not, but the company says there is not enough information on “Wah Gwan Delilah” to complete a full ACR check.

Parody’s Role in AI Music

Though it can’t be determined without a doubt whether “Wah Gwan Delilah” contains AI vocals, parody songs in general have played a major role in popularizing and normalizing AI music. This is especially evident on TikTok, which is replete with so-called “AI Covers,” pairing famous vocalists with unlikely songs. Popular examples of this trend include Kanye West singing “Pocket Full of Sunshine” by Natasha Bedingfield, Juice WRLD singing “Viva La Vida” by Coldplay, Michael Jackson singing “Careless Whisper” by George Michael and more.

Most recently, AI comedy music took center stage with Metro Boomin‘s SoundCloud-released track “BBL Drizzy” — which sampled an AI-generated song of the same name. The track poked fun at Drake and his supposed “Brazilian Butt Lift” during the rapper’s beef with Lamar, and in the process, it became the first major use of an AI-generated sample. Later, Drake and Sexyy Red sampled the original AI-generated “BBL Drizzy” on their own song, “U My Everything,” lifting “BBL Drizzy” to new heights.

On May 24, Sexyy Red and Drake teamed up on the track “U My Everything.” And in a surprise — Drake’s beef with Kendrick Lamar had seemingly ended — the track samples “BBL Drizzy” (originally created using AI by King Willonius, then remixed by Metro Boomin) during the Toronto rapper’s verse. 
It’s another unexpected twist for what many are calling the first-ever AI-generated hit, “BBL Drizzy.” Though Metro Boomin’s remix went viral, his version never appeared on streaming services. “U My Everything” does, making it the first time an AI-generated sample has appeared on an official release — and posing new legal questions in the process. Most importantly: Does an artist need to clear a song with an AI-generated sample?

“This sample is very, very novel,” says Donald Woodard, a partner at the Atlanta-based music law firm Carter Woodard. “There’s nothing like it.” Woodard became the legal representative for Willonius, the comedian and AI enthusiast who generated the original “BBL Drizzy,” after the track went viral and has been helping Willonius navigate the complicated, fast-moving business of viral music. Woodard says music publishers have already expressed interest in signing Willonius for his track, but so far, the comedian/creator is still only exploring the possibility.

Willonius told Billboard that it was “very important” to him to hire the right lawyer as his opportunities mounted. “I wanted a lawyer that understood the landscape and understood how historic this moment is,” he says. “I’ve talked to lawyers who didn’t really understand AI, but I mean, all of us are figuring it out right now.”

Working off recent guidance from the U.S. Copyright Office, Woodard says that the master recording of “BBL Drizzy” is considered “public domain,” meaning anyone can use it royalty-free and it is not protected by copyright, since Willonius created the master using AI music generator Udio. But because Willonius did write the lyrics to “BBL Drizzy,” copyright law says he should be credited and paid for the “U My Everything” sample on the publishing side. “We are focused on the human portion that we can control,” says Woodard. “You only need to clear the human side of it, which is the publishing.”

In hip-hop, it is customary to split the publishing ownership and royalties 50/50: one half is expected to go to the producers, the other to the lyricists (who are often also the artists). “U My Everything” was produced by Tay Keith, Luh Ron and Jake Fridkis, so it is likely that those three producers split that half of the publishing in some fashion. The other half is what Willonius could be eligible for, along with fellow lyricists Drake and Sexyy Red. Woodard says the splits were solidified “post-release” on Tuesday, May 28, but declined to specify what percentage of the publishing Willonius will take home. “I will say though,” Woodard says, cracking a smile. “He’s happy.”

Upon the release of “U My Everything,” Willonius was not listed as a songwriter on Spotify or Genius, both of which list detailed credits but can contain errors. It turns out the reason for the omission was simple: the deal wasn’t done yet. “We hammered out this deal in the 24th hour,” jokes Woodard, who adds that he was unaware that “U My Everything” sampled “BBL Drizzy” until the day of its release. “That’s just how it goes sometimes.”

It is relatively common for sample clearance negotiations to drag on long after the release of songs. Some rare cases, like Travis Scott’s epic “Sicko Mode,” which credits about 30 writers due to a myriad of samples, can take years. Willonius tells Billboard when he got the news about the “U My Everything” release, he was “about to enter a meditation retreat” in Chicago and let his lawyer “handle the business.”

This sample clearance process poses another question: should Metro Boomin be credited, too? According to Metro’s lawyer, Uwonda Carter, who is also a partner at Carter Woodard, the simple answer is no. She adds that Metro is not pursuing any ownership or royalties for “U My Everything.”

“Somehow people attach Metro to the original version of ‘BBL Drizzy,’ but he didn’t create it,” Carter says. “As long as [Drake and Sexyy Red] are only using the original version [of “BBL Drizzy”], that’s the only thing that needs to be cleared,” she continues, adding that Metro is not the type of creative “who encroaches upon work that someone else does.”

When Metro’s remix dropped on May 5, Carter says she spoke with the producer, his manager and his label, Republic Records, to discuss how they could officially release the song and capitalize on its grassroots success, but then they ultimately decided against doing a proper release. “Interestingly, the label’s position was if [Metro’s] going to exploit this song, put it up on DSPs, it’s going to need to be cleared, but nobody knew what that clearance would look like because it was obviously AI.”

She adds, “Metro decided that he wasn’t going to exploit the record because trying to clear it was going to be the Wild, Wild West.” In the end, however, the release of “U My Everything” still threw Carter Woodard into that copyright wilderness, forcing them to find a solution for their other client, Willonius.

In the future, the two lawyers predict that AI could make their producer clients’ jobs a lot easier, now that there is a precedent for getting AI-generated masters royalty-free. “It’ll be cheaper,” says Carter. “Yes, cleaner and cheaper,” says Woodard.

Carter does acknowledge that while AI sampling could help some producers with licensing woes, it could hurt others, particularly the “relatively new” phenomenon of “loop producers.” “I don’t want to minimize what they do,” she says, “but I think they have the most to be concerned about [with AI].” Carter notes that using a loop producer’s loops can cost 5% to 10% or more of the producer’s side of the publishing. “I think that, at least in the near future, producers will start using AI sampling and AI-generated records so they could potentially bypass the loop producers.”

Songwriter-turned-publishing executive Evan Bogart previously told Billboard he feels AI could never replace “nostalgic” samples (like Jack Harlow’s “First Class” sampling Fergie’s “Glamorous” or Latto’s “Big Energy” sampling Mariah Carey’s “Fantasy”), where the old song imbues the new one with greater meaning. But he said he could foresee it becoming a digital alternative to crate digging for obscure samples to chop up and manipulate beyond recognition.

Though the “U My Everything” complications are over — and set a new precedent for the nascent field of AI sampling in the process — the legal complications with “BBL Drizzy” will continue for Woodard and his client. Now, they are trying to get the original song back on Spotify after it was flagged for takedown. “Some guy in Australia went in and said that he made it, not me,” says Willonius. A representative for Spotify confirms to Billboard that the takedown of “BBL Drizzy” was due to a copyright claim. “He said he made that song and put it on SoundCloud 12 years ago, and I’m like, ‘How was that possible? Nobody was even saying [BBL] 12 years ago,’” Willonius says. (Udio has previously confirmed to Billboard that its backend data shows Willonius made the song on its platform).   

“I’m in conversations with them to try to resolve the matter,” says Woodard, but “unfortunately, the process to deal with these sorts of issues is not easy. Spotify requires the parties to reach a resolution and inform Spotify once this has happened.” 

Though there is precedent for other “public domain” music being disqualified from earning royalties, there is so far no Spotify policy that would bar an AI-generated song from earning royalties, given how new this all is. Such songs are also allowed to stay up on the platform as long as they do not conflict with Spotify’s platform rules, says a representative from Spotify.

Despite the challenges “BBL Drizzy” has posed, Woodard says it’s remarkable, after 25 years in practice as a music attorney, that he is part of setting a precedent for something so new. “The law is still being developed and the guidelines are still being developed,” Woodard says. “It’s exciting that our firm is involved in the conversation, but we are learning as we go.”

Artificial Intelligence is one of the buzziest — and most rapidly changing — areas of the music business today. A year after the fake-Drake song signaled the technology’s potential applications (and dangers), industry lobbyists on Capitol Hill, like RIAA’s Tom Clees, are working to create guard rails to protect musicians — and maybe even get them paid.
Meanwhile, entrepreneurs like Soundful’s Diaa El All and BandLab’s Meng Ru Kuok (who oversees the platform as founder and CEO of its parent company, Caldecott Music Group) are showing naysayers that AI can enhance human creativity rather than just replacing it. Technology and policy experts alike have promoted the use of ethical training data and partnered with groups like Fairly Trained and the Human Artistry Coalition to set a positive example for other entrants into the AI realm.

What is your biggest career moment with AI?

Diaa El All: I’m proud of starting our product Soundful Collabs. We found a way to do it with the artists’ participation in an ethical way and that we’re not infringing on any of their actual copyrighted music. With Collabs, we make custom AI models that understand someone’s production techniques and allow fans to create beats inspired by those techniques.

Meng Ru Kuok: Being the first creation platform to support the Human Artistry Coalition was a meaningful one. We put our necks out there as a tech company where people would expect us to actually be against regulation of AI. We don’t think of ourselves as a tech company. We’re a music company that represents and helps creators. Protecting them in the future is so important to us.

Tom Clees: I’ve been extremely proud to see that our ideas are coming through in legislation like the No AI Fraud Act in the House [and] the No Fakes Act in the Senate.

The term “AI” represents all kinds of products and companies. What do you consider the biggest misconception around the technology?

Clees: There are so many people who work on these issues on Capitol Hill who have only ever been told that it’s impossible to train these AI platforms and do it while respecting copyright and doing it fairly, or that it couldn’t ever work at scale. (To El All and Kuok.) A lot of them don’t know enough about what you guys are doing in AI. We need to get [you both] to Washington now.

Kuok: One of the misconceptions that I educate [others about] the most, which is counterintuitive to the AI conversation, is that AI is the only way to empower people. AI is going to have a fundamental impact, but we’re taking for granted that people have access to laptops, to studio equipment, to afford guitars — but most places in the world, that isn’t the case. There are billions of people who still don’t have access to making music.

El All: A lot of companies say, “It can’t be done that way.” But there is a way to make technological advancement while protecting the artists’ rights. Meng has done it, we’ve done it, there’s a bunch of other platforms who have, too. AI is a solution, but not for everything. It’s supposed to be the human plus the technology that equals the outcome. We’re here to augment human creativity and give you another tool for your toolbox.

What predictions do you have for the future of AI and music?

Clees: I see a world where so many more people are becoming creators. They are empowered by the technologies that you guys have created. I see the relationship between the artist and fan becoming so much more collaborative.

Kuok: I’m very optimistic that everything’s going to be OK, despite obviously the need for daily pessimism to [inspire the] push for the right regulation and policy around AI. I do believe that there’s going to be even better music made in the future because you’re empowering people who didn’t necessarily have some functionality or tools. In a world where there’s so much distribution and so much content, it enhances the need for differentiation more, so that people will actually stand up and rise to the top or get even better at what they do. It’s a more competitive environment, which is scary … but I think you’re going to see successful musicians from every corner of the world.

El All: I predict that AI tools will help bring fans closer to the artists and producers they look up to. It will give accessibility to more people to be creative. If we give them access to more tools like Soundful and BandLab and protect them also, we could create a completely new creative generation.

This story will appear in the June 1, 2024, issue of Billboard.

Suno, a generative AI music company, has raised $125 million in its latest funding round, according to a post on the company’s blog. The AI music firm, which is one of the rare start-ups that can generate voice, lyrics and instrumentals together, says it wants to usher in a “future where anyone can make music.”
Suno allows users to create full songs from simple text prompts. While most of its technology is proprietary, the company does lean on OpenAI’s ChatGPT for lyric and title generation. Free users can generate up to 10 songs per month, but with its Pro plan ($8 per month) and Premier plan ($24 per month), a user can generate up to 500 or 2,000 songs per month, respectively, and is given “general commercial terms.”

The company names some of its investors in the announcement, including Lightspeed Venture Partners, Nat Friedman and Daniel Gross, Matrix and Founder Collective. Suno also says it has been working closely with a team of advisors, including 3LAU, Aaron Levie, Alexandr Wang, Amjad Masad, Andrej Karpathy, Aravind Srinivas, Brendan Iribe, Flosstradamus, Fred Ehrsam, Guillermo Rauch and Shane Mac.

Suno is commonly believed to be one of the most advanced AI music models on the market today, but in past interviews, the company has not disclosed what materials are included in its training data. Ed Newton-Rex, founder of Fairly Trained and former vp of audio at Stability AI, warned in a recent piece for Music Business Worldwide that Suno was likely trained on copyrighted material without consent, given that he has been able to use the model to generate music that closely resembles copyrighted works.

In a recent Rolling Stone story about the company, investor Antonio Rodriguez mentioned that Suno’s lack of licenses with music companies is not a concern to him, saying that this lack of such licenses is “the risk we had to underwrite when we invested in the company, because we’re the fat wallet that will get sued right behind these guys.… Honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.”

Suno representatives have previously said, however, that their model will not let anyone create music by using prompts like “ballad in the style of Radiohead” or employ the voices of specific artists.

Many AI companies, including OpenAI, argue that training on copyrights without licenses in place is “fair use,” but the legality of this practice is still being determined in the United States. The New York Times has launched a lawsuit against OpenAI for training on its copyrighted archives without consent, credit or compensation, and Universal Music Group, Concord, ABKCO and other music publishers have filed a lawsuit against Anthropic for using its lyrics to train the company’s large language model.

In the Suno blog post, CEO Mikey Shulman wrote: “Today, we are excited to announce we’ve raised $125 million to build a future of music where technology amplifies, rather than replaces, our most precious resource: human creativity.”

“We released our first product eight months ago, enabling anyone to make a song with just a simple idea,” he continued. “It’s very early days, but 10 million people have already made music using Suno. While GRAMMY-winning artists use Suno, our core user base consists of everyday people making music — often for the first time.

“We’ve seen producers crate digging, friends exchanging memes and streamers co-creating songs with stadium-sized audiences. We’ve helped an artist who lost his voice bring his lyrics back to life again after decades on the sidelines. We’ve seen teachers ignite their students’ imaginations by transforming lessons into lyrics and stories into songs. Just this past weekend, we received heartwarming stories of mothers moved to tears by songs their loved ones created for them with a little help from Suno.”

When Drake dismissively told Metro Boomin to go and “make some drums” in one of his recent diss tracks during his beef with Kendrick Lamar, the superproducer went off and did just that — and the result marked a turning point for the use of AI in music production. 
The beat, titled “BBL Drizzy,” layers a vintage-sounding soul vocal over 808 drums. The producer released it to SoundCloud on May 5, encouraging his fans to record their own bars over it for the chance to win a free beat, and it swiftly went viral.

But soon after, it was revealed that the singer from the “BBL Drizzy” beat didn’t exist — the voice was AI-generated, as was the song itself. The vocals, melody and instrumental of the sample were generated by Udio, an AI music startup founded by former Google DeepMind engineers. Though Metro was not aware of the source of the track when he used it, his tongue-in-cheek diss became the first notable use case of AI-generated sampling, proving the potential for AI to impact music production. (A representative for Metro Boomin did not respond to Billboard’s request for comment.)

As with all AI tracks, however, a human being prompted it. King Willonius, a comedian, musician and content creator, had put together the Udio-generated song on April 14, pulling inspiration from a recent Rick Ross tweet — in which the rapper joked that Drake looks like he got a Brazilian Butt Lift — to write the lyrics. “I think it’s a misconception that people think AI wrote ‘BBL Drizzy,’” Willonius told Billboard in an interview about the track. “There’s no way AI could write lyrics like ‘I’m thicker than a Snicker and I got the best BBL in history,’” he adds, laughing. 

There are a lot of issues — legal, philosophical, cultural and technical — that are still to be sorted out before this kind of sampling hits the mainstream, but it’s not hard to imagine a future where producers turn to AI to create vintage-sounding samples to chop up and use in beats, given that sample clearances are notoriously complicated and can drag on for months or years, even for big-name producers like Metro Boomin.

“If people on the other side [of sample clearance negotiations] know they’re probably going to make money on the new song, like with a Metro Boomin-level artist, they will make it a priority to clear a sample quickly, but that’s not how it is for everyone,” says Todd Rubenstein, a music attorney and founder of Todd Rubenstein Law. Grammy-winning writer/producer Oak Felder says clearing a sample for even a high-profile track is still a challenge for him. “I’ll be honest, I’m dealing with a tough clearance right now, and I’ve dealt with it before,” he says. “I had trouble clearing an Annie Lennox sample for a Nicki Minaj record once… It’s hard.”

Many smaller producers are not able to sample established songs because they know that it could get them into legal trouble. Others go ahead without permission, causing massive legal headaches, like when bedroom producer Young Kio sampled an undisclosed Nine Inch Nails song in an instrumental he licensed out on BeatStars. The beat was used by then-unknown Lil Nas X and resulted in the Billboard Hot 100 No. 1 “Old Town Road.” When the sample was discovered, Lil Nas X was forced to give up a large portion of his publishing and master royalties to the band.

Udio’s co-founder, David Ding, tells Billboard that he believes AI samples “could simplify a lot of the rights management” issues inherent to sampling and explains that Udio’s model is particularly adept at making realistic songs in the vein of “Motown ‘70s soul,” perhaps the most common style of music sampled in hip-hop today, as well as classical, electronic and more. “It’s a wide-ranging model,” Ding says.

Willonius believes AI samples also offer a solution for musicians in today’s relentless online news cycle. While he has made plenty of songs from scratch before, Willonius says AI offered him the chance to respond in real-time to the breakneck pace of the feud between Drake and Kendrick. “I never could’ve done that without AI tools,” he says. Evan Bogart, a Grammy-winning songwriter and founder of Seeker Music, likens it to a form of digital crate digging. “I think it’s super cool to use AI in this way,” he says. “It’s good for when you dig and can’t find the right fit. Now, you can also try to just generate new ideas that sound like old soul samples.”

There’s a significant financial impact incurred from traditional sampling that could also be avoided with AI. To use the melody of “My Favorite Things” in her hit song “7 Rings,” for example, Ariana Grande famously had to cede 90% of her publishing income for the song to “My Favorite Things” writers Rodgers and Hammerstein — and that was just an interpolation rather than a full sample, which entails both the use of compositional elements, like melody, and a portion of the sound recording.

“It certainly could help you avoid having to pay other people and avoid the hassle,” says Rubenstein, who has often dealt with the complications of clearing songs that use samples and beats from marketplaces like BeatStars. But he adds that any user of these AI models must use caution, saying it won’t always make clearances easier: “You really need to know what the terms of service are whenever you use an AI model, and you should know how they train their AI.”

Often, music-making AI models train on copyrighted material without the consent or compensation of its rights holders, a practice that is largely condemned by the music business — even those who are excited about the future of AI tools. Though these AI companies argue this is “fair use,” the legality of this practice is still being determined in the United States. The New York Times has launched a lawsuit against OpenAI for training on its copyrighted archives without consent, credit or compensation, and UMG, Concord, ABKCO and other music publishers have also filed a lawsuit against Anthropic for using their lyrics to train the company’s large language model. Rep. Adam Schiff (D-CA) has also introduced a new bill called the Generative AI Copyright Disclosure Act to require transparency on this matter. 

Udio’s terms of service place the risk of sharing its AI songs on users, saying that users “shall defend, indemnify, and hold the company entities harmless from and against any and all claims, costs, damages, losses, liabilities and expenses” that come from using whatever works are generated on the platform. In an interview with Billboard, Udio co-founder Ding was unable to say what works were specifically used in its training data. “We can’t reveal the exact source of our training data. We train our model on publicly available data that we obtained from the internet. It’s basically, like, we train the model on good music just like how human musicians would listen to music,” says Ding. When pressed about copyrights in particular, he replies, “We can’t really comment on that.”

“I think if it’s done right, AI could make things so much easier in this area. It’s extremely fun and exciting but only with the proper license,” says Diaa El All, CEO/founder of Soundful, another AI music company that generates instrumentals specifically. His company is certified by Fairly Trained, a non-profit that ensures certified companies do not use copyrighted materials in training data without consent. El All says that creating novel forms of AI sampling “is a huge focus” for his company, adding that Soundful is working with an artist right now to develop a fine-tuned model to create AI samples based on pre-existing works. 

“I can’t tell you who it is, but it’s a big rapper,” he says. “His favorite producer passed away. The rapper wants to leverage a specific album from that producer to sample. So we got a clearance from the producer’s team to now build a private generative AI model for the rapper to use to come up with beats that are inspired by that producer’s specific album.”

While this will certainly have an impact on the way producers work in the future, Felder and Bogart say that AI sampling will never totally replace the original practice. “People love nostalgia; that’s what a sample can bring,” says Felder. With the success of sample-driven pop songs at the top of the Hot 100 and the number of movie sequels hitting box office highs, it’s clear that there is an appetite for familiarity, and AI originals cannot feed that same craving.

“BBL Drizzy” might’ve been made as a joke, but Felder believes the beat has serious consequences. “I think this is very important,” he says. “This is one of the first successful uses [of AI sampling] on a commercial level, but in a year’s time, there’s going to be 1,000 of these. Well, I bet there’s already a thousand of these now.”

The U.S. Senate Judiciary Committee convened on Tuesday (April 30) to discuss a proposed bill that would effectively create a federal publicity right for artists in a hearing that featured testimony from Warner Music Group CEO Robert Kyncl, artist FKA Twigs, Digital Media Association (DiMA) CEO Graham Davies, SAG-AFTRA national executive director/chief negotiator Duncan Crabtree-Ireland, Motion Picture Association senior vp/associate general counsel Ben Sheffner and the University of San Diego professor Lisa P. Ramsey.
The draft bill — called the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act) — would create a federal right for artists, actors and others to sue those who create “digital replicas” of their image, voice, or visual likeness without permission. Those individuals have previously only been protected through a patchwork of state “right of publicity” laws. First introduced in October, the NO FAKES Act is supported by a bipartisan group of U.S. senators including Sen. Chris Coons (D-Del.), Sen. Marsha Blackburn (R-Tenn.), Sen. Amy Klobuchar (D-Minn.) and Sen. Thom Tillis (R-N.C.).

Warner Music Group (WMG) supports the NO FAKES Act along with many other music businesses, the RIAA and the Human Artistry Campaign. During Kyncl’s testimony, the executive noted that “we are in a unique moment of time where we can still act and we can get it right before it gets out of hand,” pointing to how the government was not able to properly handle data privacy in the past. He added that it’s imperative to get out ahead of artificial intelligence (AI) to protect artists’ and entertainment companies’ livelihoods.

“When you have these deepfakes out there [on streaming platforms],” said Kyncl, “the artists are actually competing with themselves for revenue on streaming platforms because there’s a fixed amount of revenue within each of the streaming platforms. If somebody is uploading fake songs of FKA Twigs, for example, and those songs are eating into that revenue pool, then there is less left for her authentic songs. That’s the economic impact of it long term, and the volume of content that will then flow into the digital service providers will increase exponentially, [making it] harder for artists to be heard, and to actually reach lots of fans. Creativity over time will be stifled.”

Kyncl, who recently celebrated his first anniversary at the helm of WMG, previously held the role of chief business officer at YouTube. When questioned about whether platforms, like YouTube, Spotify and others who are represented by DiMA should be held responsible for unauthorized AI fakes on their platforms, Kyncl had a measured take: “There has to be an opportunity for [the services] to cooperate and work together with all of us to [develop a protocol for removal],” he said.

During his testimony, Davies spoke from the perspective of the digital service providers (DSPs) DiMA represents. “There’s been no challenge [from platforms] in taking down the [deepfake] content expeditiously,” he said. “We don’t see our members needing any additional burdens or incentives here. But…if there is to be secondary liability, we would very much seek that to be a safe harbor for effective takedowns.”

Davies added, however, that the Digital Millennium Copyright Act (DMCA), which provides a notice and takedown procedure for copyright infringement, is not a perfect model to follow for right of publicity offenses. “We don’t see [that] as being a good process as [it was] designed for copyright…our members absolutely can work with the committee in terms of what we would think would be an effective [procedure],” said Davies. He added, “It’s really essential that we get specific information on how to identify the offending content so that it can be removed efficiently.”

There is currently no perfect solution for tracking AI deepfakes on the internet, making a takedown procedure tricky to implement. Kyncl said he hopes for a system that builds on the success of YouTube’s Content ID, which tracks sound recordings. “I’m hopeful we can take [a Content ID-like system] further and apply that to AI voice and degrees of similarity by using watermarks to label content and track the provenance,” he said.

The NO FAKES draft bill as currently written would create a nationwide property right in one’s image, voice, or visual likeness, allowing an individual to sue anyone who produced a “newly-created, computer-generated, electronic representation” of it. It also includes publicity rights that would not expire at death and could be controlled by a person’s heirs for 70 years after their passing. Most state right of publicity laws were written far before the invention of AI and often limit or exclude the protection of an individual’s name, image and voice after death.

The proposed 70 years of post-mortem protection was one of the major points of disagreement between participants at the hearing. Kyncl agreed with the points made by Crabtree-Ireland of SAG-AFTRA — the actors’ union that recently came to a tentative agreement with major labels, including WMG, for “ethical” AI use — whose view was that the right should not be limited to 70 years post-mortem and should instead be “perpetual,” in his words.

“Every single one of us is unique, there is no one else like us, and there never will be,” said Crabtree-Ireland. “This is not the same thing as copyright. It’s not the same thing as ‘We’re going to use this to create more creativity on top of that later [after the copyright enters public domain].’ This is about a person’s legacy. This is about a person’s right to give this to their family.”

Kyncl added simply, “I agree with Mr. Crabtree-Ireland 100%.”

However, Sheffner shared a different perspective on post-mortem protection for publicity rights, saying that while “for living professional performers use of a digital replica without their consent impacts their ability to make a living…that job preservation justification goes away post-mortem. I have yet to hear of any compelling government interest in protecting digital replicas once somebody is deceased. I think there’s going to be serious First Amendment problems with it.”

Elsewhere during the hearing, Crabtree-Ireland expressed a need to limit how long a young artist can license out their publicity rights during their lifetime to ensure they are not exploited by entertainment companies. “If you had, say, a 21-year-old artist who’s granting a transfer of rights in their image, likeness or voice, there should not be a possibility of this for 50 years or 60 years during their life and not have any ability to renegotiate that transfer. I think there should be a shorter perhaps seven-year limitation on this.”

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Tupac’s estate threatens to sue Drake over his use of the late rapper’s voice; Megan Thee Stallion faces a lawsuit over eye-popping allegations from her former cameraman; Britney Spears settles her dispute with her father; and much more.

THE BIG STORY: Drake, Tupac & An AI Showdown

The debate over unauthorized voice cloning burst into the open last week when Tupac Shakur’s estate threatened to sue Drake over a recent diss track against Kendrick Lamar that featured an AI-generated version of the late rapper’s voice.

In a cease-and-desist letter first reported by Billboard, litigator Howard King told Drake that the Shakur estate was “deeply dismayed and disappointed” by the rapper’s use of Tupac’s voice in his “Taylor Made Freestyle.” The letter warned Drake to confirm in less than 24 hours that he would pull the track down or the estate would “pursue all of its legal remedies” against him. “Not only is the record a flagrant violation of Tupac’s publicity and the estate’s legal rights, it is also a blatant abuse of the legacy of one of the greatest hip-hop artists of all time. The Estate would never have given its approval for this use.”

AI-powered voice cloning has been top of mind for the music industry since last spring, when an unknown artist released a track called “Heart On My Sleeve” that featured — ironically — fake verses from Drake’s voice. As such fake vocals have continued to proliferate on the internet, industry groups, legal experts and lawmakers have wrangled over how best to crack down on them.

With last week’s showdown, that debate jumped from hypothetical to reality. The Tupac estate laid out actual legal arguments for why it believed Drake’s use of the late rapper’s voice violated the law. And those arguments were apparently persuasive: Within 24 hours, Drake began to pull his song from the internet.

For more details on the dispute, go read our full story here.

Other top stories this week…

MEGAN THEE STALLION SUED – The rapper and Roc Nation were hit with a lawsuit from a cameraman named Emilio Garcia who claims he was forced to watch Megan have sex with a woman inside a moving vehicle while she was on tour in Spain. The lawsuit, which claims he was subjected to a hostile workplace, was filed by the same attorneys who sued Lizzo last year over similar employment law claims.

BRITNEY SETTLES WITH FATHER – Britney Spears settled her long-running legal dispute with her father, Jamie Spears, that arose following the termination of the pop star’s 13-year conservatorship in 2021. Attorneys for Britney had accused Jamie of misconduct during the years he served as his daughter’s conservator, a charge he adamantly denied. The terms of last week’s agreement were not made public.

TRAVIS SCOTT MUST FACE TRIAL – A Houston judge denied a motion from Travis Scott to be dismissed from the sprawling litigation over the 2021 disaster at the Astroworld music festival, leaving him to face a closely watched jury trial next month. Scott’s attorneys had argued that the star could not be held legally liable since safety and security at live events is “not the job of performing artists.” But the judge overseeing the case denied that motion without written explanation.

ASTROWORLD TRIAL LIVESTREAM? – Also in the Astroworld litigation, plaintiffs’ attorneys argued that the upcoming trial — a pivotal first test for hundreds of other lawsuits filed by alleged victims over the disaster — should be broadcast live to the public. “The devastating scale of the events at Astroworld, combined with the involvement of high-profile defendants, has generated significant national attention and a legitimate public demand for transparency and accountability,” the lawyers wrote.

BALLERINI HACKING CASE – Just a week after Kelsea Ballerini sued a former fan named Bo Ewing over accusations that he hacked her and leaked her unreleased album, his attorneys reached a deal with her legal team in which he agreed not to share her songs with anyone else — and to name any people he’s already sent them to. “Defendant shall, within thirty days of entry of this order, provide plaintiffs with the names and contact information for all people to whom defendant disseminated the recordings,” the agreement read.

R. KELLY CONVICTIONS AFFIRMED – A federal appeals court upheld R. Kelly’s 2022 convictions in Chicago on child pornography and enticement charges, rejecting his argument that the case against him was filed too late. The court said that Kelly was convicted by “an even-handed jury” and that “no statute of limitations saves him.” His attorney vowed a trip to the U.S. Supreme Court, though such appeals face long odds.

DIDDY RESPONDS TO SUIT – Lawyers for Sean “Diddy” Combs pushed back against a sexual assault lawsuit filed by a woman named Joi Dickerson-Neal, arguing that he should not face claims under statutes that did not exist when the alleged incidents occurred in 1991. His attorneys want the claims — such as revenge porn and human trafficking — to be dismissed from the broader case, which claims that Combs drugged, assaulted and surreptitiously filmed Dickerson-Neal when she was 19 years old.