artificial intelligence


SAG-AFTRA members have voted to ratify the 2024 Sound Recordings Code, which requires the record labels — Warner Music Group, Sony Music Entertainment, Universal Music Group and Disney Music Group — to abide by its AI safety rules. Notably, these are the first-ever explicitly defined compensation requirements for the release of sound recordings containing AI voices.
With a vote of 97.69% to 2.31%, SAG-AFTRA members, who include actors as well as singers and recording artists, will now receive these protections, effective immediately, for the 2021-2026 term. Under the agreement, the terms “artist,” “singer” and “royalty artist” can refer only to human talent. “Clear and conspicuous” consent is required prior to the release of a sound recording that uses a digital replication of an artist’s voice.

Artists who are replicated are also entitled to receive specific details about the replica’s intended use and to minimum compensation. Compensation for artists must align with the royalty share the artist would earn on other sound recordings under their contract, and session singers must receive a minimum of three sides per project. A minimum of 28 hours’ notice of any recording session held for the purpose of creating a digital replica is expected, and that session time should be paid as work time.

Additionally, blanket consent for digital or AI replication is prohibited. Instead, record labels must obtain consent on a per-project basis — a provision which will prevent labels from asking artists to sign away their digital likeness for lengthy terms as part of their deals.

In the year since Ghostwriter’s fake-Drake song “Heart On My Sleeve” brought conversations about AI voices to the forefront, little has been done to actually enforce the protection of artists’ identities. While the federal government is considering a few bills (like the draft NO FAKES Act and the NO AI FRAUD Act) to create a nationwide right of publicity that would provide uniform protection for artists’ names, images and voices, these protections, for now, remain a patchwork of varying state laws that were largely written before artificial intelligence presented these new challenges.

“Singers and recording artists have a profound impact on our culture, and I’m thrilled that they’ve achieved a contract that not only recognizes their value with significant wage increases, but also provides them essential protections around artificial intelligence,” said SAG-AFTRA President Fran Drescher. “We celebrate our human performers! I applaud the negotiating committee and staff, the record labels, and SAG-AFTRA members for getting this contract across the finish line!”

SAG-AFTRA National Executive Director & Chief Negotiator Duncan Crabtree-Ireland said, “This contract secures groundbreaking A.I. guardrails while also achieving crucial and substantial wage increases, and other key wins for singers and recording artists. Protecting human artistry will always be SAG-AFTRA’s priority, and I’m heartened that our members have a contract that provides immediate gains and recognizes the importance of human contributions to the industry. I also want to acknowledge Negotiating Committee Chair Dan Navarro and the entire committee and staff for their outstanding and dedicated work in achieving this agreement.”

Sound Recordings Code Negotiating Committee Chair Dan Navarro said, “Members’ feedback played a key role in the formation of this contract and the negotiating committee prioritized the concerns that were most crucial to the singers and recording artists impacted by these terms. We’re proud to have achieved these essential wins in A.I. protections along with substantial wage increases and gains in health and retirement funding.”

Other wins included wage increases and gains in health and retirement funding. To read the full list of provisions, see here.

The U.S. Senate Judiciary Committee convened on Tuesday (April 30) to discuss a proposed bill that would effectively create a federal publicity right for artists in a hearing that featured testimony from Warner Music Group CEO Robert Kyncl, artist FKA Twigs, Digital Media Association (DiMA) CEO Graham Davies, SAG-AFTRA national executive director/chief negotiator Duncan Crabtree-Ireland, Motion Picture Association senior vp/associate general counsel Ben Sheffner and University of San Diego professor Lisa P. Ramsey.
The draft bill — called the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act) — would create a federal right for artists, actors and others to sue those who create “digital replicas” of their image, voice, or visual likeness without permission. Those individuals have previously only been protected through a patchwork of state “right of publicity” laws. First introduced in October, the NO FAKES Act is supported by a bipartisan group of U.S. senators including Sen. Chris Coons (D-Del.), Sen. Marsha Blackburn (R-Tenn.), Sen. Amy Klobuchar (D-Minn.) and Sen. Thom Tillis (R-N.C.).

Warner Music Group (WMG) supports the NO FAKES Act along with many other music businesses, the RIAA and the Human Artistry Campaign. During Kyncl’s testimony, the executive noted that “we are in a unique moment of time where we can still act and we can get it right before it gets out of hand,” pointing to how the government was not able to properly handle data privacy in the past. He added that it’s imperative to get out ahead of artificial intelligence (AI) to protect artists’ and entertainment companies’ livelihoods.

“When you have these deepfakes out there [on streaming platforms],” said Kyncl, “the artists are actually competing with themselves for revenue on streaming platforms because there’s a fixed amount of revenue within each of the streaming platforms. If somebody is uploading fake songs of FKA Twigs, for example, and those songs are eating into that revenue pool, then there is less left for her authentic songs. That’s the economic impact of it long term, and the volume of content that will then flow into the digital service providers will increase exponentially, [making it] harder for artists to be heard, and to actually reach lots of fans. Creativity over time will be stifled.”
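A rough way to see the arithmetic Kyncl describes: under a pro-rata model, each service's revenue pool is split across all streams, so every stream a deepfake track attracts shrinks the per-stream value left for authentic recordings. The short sketch below is a simplified illustration of that dilution, not any platform's actual payout formula; the figures and the payout helper are hypothetical.

```python
# Simplified illustration of pro-rata streaming payouts (hypothetical numbers).
# A platform splits a fixed revenue pool across all streams, so deepfake
# tracks that attract streams reduce the per-stream value of authentic ones.

def payout(artist_streams: int, total_streams: int, revenue_pool: float) -> float:
    """Artist's share of a fixed pool under a simple pro-rata split."""
    return revenue_pool * artist_streams / total_streams

POOL = 1_000_000.00          # fixed monthly revenue pool (hypothetical)
AUTHENTIC = 2_000_000        # streams of the artist's real catalog
OTHER = 48_000_000           # streams of everything else on the platform

before = payout(AUTHENTIC, AUTHENTIC + OTHER, POOL)

# Suppose deepfake uploads mimicking the artist pull in 3M streams of their own.
FAKE = 3_000_000
after = payout(AUTHENTIC, AUTHENTIC + OTHER + FAKE, POOL)

print(f"authentic-catalog payout before fakes: ${before:,.2f}")
print(f"authentic-catalog payout after fakes:  ${after:,.2f}")
# The pool is fixed, so every fake stream dilutes what the real recordings earn.
```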

Kyncl, who recently celebrated his first anniversary at the helm of WMG, previously held the role of chief business officer at YouTube. When questioned about whether platforms like YouTube, Spotify and others represented by DiMA should be held responsible for unauthorized AI fakes on their platforms, Kyncl had a measured take: “There has to be an opportunity for [the services] to cooperate and work together with all of us to [develop a protocol for removal],” he said.

During his testimony, Davies spoke from the perspective of the digital service providers (DSPs) DiMA represents. “There’s been no challenge [from platforms] in taking down the [deepfake] content expeditiously,” he said. “We don’t see our members needing any additional burdens or incentives here. But…if there is to be secondary liability, we would very much seek that to be a safe harbor for effective takedowns.”

Davies added, however, that the Digital Millennium Copyright Act (DMCA), which provides a notice and takedown procedure for copyright infringement, is not a perfect model to follow for right of publicity offenses. “We don’t see [that] as being a good process as [it was] designed for copyright…our members absolutely can work with the committee in terms of what we would think would be an effective [procedure],” said Davies. He added, “It’s really essential that we get specific information on how to identify the offending content so that it can be removed efficiently.”

There is currently no perfect solution for tracking AI deepfakes on the internet, making a takedown procedure tricky to implement. Kyncl said he hopes for a system that builds on the success of YouTube’s Content ID, which tracks sound recordings. “I’m hopeful we can take [a Content ID-like system] further and apply that to AI voice and degrees of similarity by using watermarks to label content and [track] the provenance,” he said.
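Kyncl did not spell out how such a system would work, but the basic idea he gestures at (attach a machine-readable provenance label to content when it is created, then check it at upload) can be sketched in a few lines. The snippet below is a toy illustration using an HMAC-signed metadata tag; the tag fields, key and function names are hypothetical, and a real deployment, whether Content ID-style matching or cryptographic content credentials, would be far more involved.

```python
# Toy sketch of a provenance "watermark": a signed metadata tag attached to a
# file at creation time and verified at upload. Purely illustrative: real
# systems embed marks in the audio itself and pair them with similarity matching.
import hmac, hashlib, json

SIGNING_KEY = b"hypothetical-secret-key"  # held by the tool that generated the audio

def make_provenance_tag(audio_bytes: bytes, generator: str, voice_consent: bool) -> dict:
    payload = {
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "generator": generator,          # e.g. which AI tool produced the audio
        "voice_consent": voice_consent,  # whether the imitated artist consented
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_provenance_tag(audio_bytes: bytes, tag: dict) -> bool:
    claimed = {k: v for k, v in tag.items() if k != "signature"}
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, tag.get("signature", ""))
            and claimed["sha256"] == hashlib.sha256(audio_bytes).hexdigest())

audio = b"...synthetic vocal stem..."  # stand-in for real audio data
tag = make_provenance_tag(audio, generator="example-voice-model", voice_consent=False)
print(verify_provenance_tag(audio, tag))  # True: tag is intact and matches the file
```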

The NO FAKES draft bill as currently written would create a nationwide property right in one’s image, voice, or visual likeness, allowing an individual to sue anyone who produced a “newly-created, computer-generated, electronic representation” of it. It also includes publicity rights that would not expire at death and could be controlled by a person’s heirs for 70 years after their passing. Most state right of publicity laws were written far before the invention of AI and often limit or exclude the protection of an individual’s name, image and voice after death.

The proposed 70 years of post-mortem protection was one of the major points of disagreement between participants at the hearing. Kyncl agreed with the points made by Crabtree-Ireland of SAG-AFTRA — the actors’ union that recently came to a tentative agreement with major labels, including WMG, for “ethical” AI use — whose view was that the right should not be limited to 70 years post-mortem and should instead be “perpetual,” in his words.

“Every single one of us is unique, there is no one else like us, and there never will be,” said Crabtree-Ireland. “This is not the same thing as copyright. It’s not the same thing as ‘We’re going to use this to create more creativity on top of that later [after the copyright enters public domain].’ This is about a person’s legacy. This is about a person’s right to give this to their family.”

Kyncl added simply, “I agree with Mr. Crabtree-Ireland 100%.”

However, Sheffner shared a different perspective on post-mortem protection for publicity rights, saying that while “for living professional performers use of a digital replica without their consent impacts their ability to make a living…that job preservation justification goes away post-mortem. I have yet to hear of any compelling government interest in protecting digital replicas once somebody is deceased. I think there’s going to be serious First Amendment problems with it.”

Elsewhere during the hearing, Crabtree-Ireland expressed a need to limit how long a young artist can license out their publicity rights during their lifetime to ensure they are not exploited by entertainment companies. “If you had, say, a 21-year-old artist who’s granting a transfer of rights in their image, likeness or voice, there should not be a possibility of this for 50 years or 60 years during their life and not have any ability to renegotiate that transfer. I think there should be a shorter perhaps seven-year limitation on this.”

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Tupac’s estate threatens to sue Drake over his use of the late rapper’s voice; Megan Thee Stallion faces a lawsuit over eye-popping allegations from her former cameraman; Britney Spears settles her dispute with her father; and much more.

THE BIG STORY: Drake, Tupac & An AI Showdown

The debate over unauthorized voice cloning burst into the open last week when Tupac Shakur’s estate threatened to sue Drake over a recent diss track against Kendrick Lamar that featured an AI-generated version of the late rapper’s voice.

In a cease-and-desist letter first reported by Billboard, litigator Howard King told Drake that the Shakur estate was “deeply dismayed and disappointed” by the rapper’s use of Tupac’s voice in his “Taylor Made Freestyle.” The letter warned Drake to confirm in less than 24 hours that he would pull the track down or the estate would “pursue all of its legal remedies” against him.

“Not only is the record a flagrant violation of Tupac’s publicity and the estate’s legal rights, it is also a blatant abuse of the legacy of one of the greatest hip-hop artists of all time. The Estate would never have given its approval for this use.”

AI-powered voice cloning has been top of mind for the music industry since last spring, when an unknown artist released a track called “Heart On My Sleeve” that featured — ironically — fake verses from Drake’s voice. As such fake vocals have continued to proliferate on the internet, industry groups, legal experts and lawmakers have wrangled over how best to crack down on them.

With last week’s showdown, that debate jumped from hypothetical to reality. The Tupac estate laid out actual legal arguments for why it believed Drake’s use of the late rapper’s voice violated the law. And those arguments were apparently persuasive: within 24 hours, Drake began to pull his song from the internet.

For more details on the dispute, go read our full story here.

Other top stories this week…

MEGAN THEE STALLION SUED – The rapper and Roc Nation were hit with a lawsuit from a cameraman named Emilio Garcia who claims he was forced to watch Megan have sex with a woman inside a moving vehicle while she was on tour in Spain. The lawsuit, which claims he was subjected to a hostile workplace, was filed by the same attorneys who sued Lizzo last year over similar employment law claims.

BRITNEY SETTLES WITH FATHER – Britney Spears settled her long-running legal dispute with her father, Jamie Spears, that arose following the termination of the pop star’s 13-year conservatorship in 2021. Attorneys for Britney had accused Jamie of misconduct during the years he served as his daughter’s conservator, a charge he adamantly denied. The terms of last week’s agreement were not made public.

TRAVIS SCOTT MUST FACE TRIAL – A Houston judge denied a motion from Travis Scott to be dismissed from the sprawling litigation over the 2021 disaster at the Astroworld music festival, leaving him to face a closely-watched jury trial next month. Scott’s attorneys had argued that the star could not be held legally liable since safety and security at live events is “not the job of performing artists.” But the judge overseeing the case denied that motion without written explanation.

ASTROWORLD TRIAL LIVESTREAM? – Also in the Astroworld litigation, plaintiffs’ attorneys argued that the upcoming trial — a pivotal first test for hundreds of other lawsuits filed by alleged victims over the disaster — should be broadcast live to the public. “The devastating scale of the events at Astroworld, combined with the involvement of high-profile defendants, has generated significant national attention and a legitimate public demand for transparency and accountability,” the lawyers wrote.

BALLERINI HACKING CASE – Just a week after Kelsea Ballerini sued a former fan named Bo Ewing over accusations that he hacked her and leaked her unreleased album, his attorneys reached a deal with her legal team in which he agreed not to share her songs with anyone else — and to name any people he’s already sent them to. “Defendant shall, within thirty days of entry of this order, provide plaintiffs with the names and contact information for all people to whom defendant disseminated the recordings,” the agreement read.

R. KELLY CONVICTIONS AFFIRMED – A federal appeals court upheld R. Kelly’s 2022 convictions in Chicago on child pornography and enticement charges, rejecting his argument that the case against him was filed too late. The court said that Kelly was convicted by “an even-handed jury” and that “no statute of limitations saves him.” His attorney vowed a trip to the U.S. Supreme Court, though such appeals face long odds.

DIDDY RESPONDS TO SUIT – Lawyers for Sean “Diddy” Combs pushed back against a sexual assault lawsuit filed by a woman named Joi Dickerson-Neal, arguing that he should not face claims under statutes that did not exist when the alleged incidents occurred in 1991. His attorneys want the claims — such as revenge porn and human trafficking — to be dismissed from the broader case, which claims that Combs drugged, assaulted and surreptitiously filmed Dickerson-Neal when she was 19 years old.

Over the past week, the feud between Kendrick Lamar and Drake has entered into a new, more modern realm than any rap beef before it: AI.
As the back and forth has escalated, and fans wait to see what each of the hip-hop heavyweights will say next, a number of fan-fabricated diss tracks began circulating on social media using AI voices to mimic the emcees. And while some were obviously not real — and, thankfully, were voluntarily labeled AI by their authors — others were more convincing, leading to widespread confusion.

People questioned if Drake’s “Push Ups” was real (it was), and if Lamar’s supposed reply, “1 Shot 1 Kill,” was real, too (it wasn’t). YouTube is rife with more AI replications, and some are drawing big audiences, including one called “To Kill A Butterfly,” which has amassed 508,000 views to date. To make matters even more convoluted, Drake himself took part in the trend, employing AI to replicate the voices of West Coast legends Tupac Shakur and Snoop Dogg without their permission on his diss “Taylor Made,” released on X and Instagram on Friday, prompting Shakur’s estate to send Drake a cease-and-desist letter.

The phenomenon has illustrated the sizable impact that AI has already had on modern fandom, as impatient fans use generative AI tools to fill in gaps in the conversation and imagine further storylines with a type of uncanny accuracy that was never before possible. And for better or for worse, it has become the most prominent use-case of generative AI in the music industry to date.

This trend in AI use has its origins with Ghostwriter, the controversial TikTok user who deepfaked Drake and The Weeknd’s voices on his song “Heart On My Sleeve” one year ago, in April 2023. In a cover story for Billboard, Ghostwriter and his manager first compared AI voice filters to a form of “fan fiction — a fan-generated genre of music,” as the manager put it.

Traditional, written fan fiction has been a way for fans to engage with their favorite media for decades — whether that’s franchises like Star Wars, Marvel or Twilight, or the music of stars like Drake and Lamar. In it, fans can expand on details that were never fully fleshed out in the original work and write their own storylines and endings. AI fan creations inspired by Drake and Lamar’s beef are doing something similar, letting music fans imagine the artist’s next move and picture collaborations that haven’t happened yet.

Historically, fan fiction is great for the original artist from a marketing point of view. It is one of many forms of user-generated content (UGC) on the internet today that can engage superfans further with the original project without its author having to lift a finger.

But with traditional fan fiction, fans could easily tell where the official canon started and ended, and the writing was often relegated to superfan hot spots like Wattpad, Discord, Reddit or fan zines. This new form of ‘AI fan fiction’ makes this distinction a lot less obvious and spreads it much wider. For now, trained ears can still tell when AI voices are used like this, given the slight glitchiness still found in the audio quality, but soon these models will be so good that discerning AI from reality will be virtually impossible.

There is still no good way to confidently figure out which songs use AI and which do not, and to make matters worse, these fan-made songs are more commonly posted to general social media platforms than written fan fiction. When searching this rap beef on X or YouTube, listeners are likely to run into a few AI fan tracks along the way, and many lack the expertise of a superfan to differentiate what’s real and what’s fake.

In a time when fans demand nonstop connection to and content from their favorite talents, it is especially common for fans of elusive artists to take matters into their own hands with AI tools — including voices as well as other generative works like images, videos and text. In the absence of a Kendrick response to Drake last week, for example, “1 Shot 1 Kill” was produced by a 23-year-old fan who goes by Sy The Rapper. In an interview with Complex, Sy said he used the tool Voicify to imagine Lamar on the track. (Notably, the RIAA recently reported Voicify to the U.S. government’s piracy watch list).

Followers of famously elusive artist Frank Ocean also had fun with generative AI in the last year, with one fan, @tannerchauct, showing others on X how to create their own alternative forms of Ocean’s album artwork using DALL-E 2, an image generator. A Cardi B fan, @iYagamiLight, even dreamed up the creative direction for an entirely fictional Cardi B project with AI, earning them thousands of retweets in October. The user’s cover art rendered Cardi B in a bedazzled corset and posing in a clawfoot bathtub, peacock feathers fanning out around her. They also created a fake tracklist and release date.

The downside of fan-made works has always been the same: they have the tendency to infringe on the artist’s copyrights, to use an artist’s name, image, voice or likeness without permission, or to generally profit from the artist’s work without sharing the spoils. This new age of AI fan fiction and UGC makes all of these pre-existing problems exponentially harder to police.

The Cardi B fan, for example, did not disclose that their work was AI-generated or fictional, and instead paired their creative direction with the misleading caption “Cardi B just announced her long awaited sophomore album ‘Mayura’ coming out Friday 12th January 2024!”

In a recent music law conference at Vanderbilt University, Colin Rushing, general counsel of the Digital Media Association (DiMA) downplayed the commercial impact of AI in music so far, saying that, since Ghostwriter, “one of the things we really haven’t seen in the [last] year is an epidemic of ‘fake-Drakes’ climbing the charts. We’re not seeing popular examples of this in the commercial marketplace.”

Rushing is right — that hasn’t happened yet. Even Drake’s own AI-assisted song is not on streaming services, and thus is not eligible for the charts. (And if the lawyer for Tupac’s estate has his way, it will soon be removed from the internet entirely.) But this rap feud has revealed that while it hasn’t impacted the charts or the “commercial marketplace” all that much, it has impacted something possibly even more important to an artist today: fandoms.

Only one word really describes Drake’s shift from objecting to an AI impersonation of him to using similar technology to add imitations of 2Pac and Snoop Dogg to his Kendrick Lamar diss track “Taylor Made Freestyle”: Chutzpah. (Drake had a bar-mitzvah-themed 31st birthday party, so he probably knows the term.) Last year around this time, the infamous “Heart on My Sleeve,” which featured AI vocal imitations of Drake and The Weeknd, shifted the debate about music and AI into high gear. Ever since, industry lobbyists and artists rights groups have been pushing legislation to regulate generative AI technology based on concepts of rights and permissions. Now Drake goes and blatantly breaks the main principle involved. It’s like something out of a political attack ad: He was against this before he was for it!
To me, using artists’ voices without their permission is wrong and it’s even more wrong — creepier — if the artist in question died relatively recently. The legal situation around this, and AI in general, is in flux, though. Tennessee’s ELVIS Act just passed, and a few federal bills have significant support. But the main point of the ELVIS Act and most of the recently proposed legislation is to impose penalties for exactly the kind of thing Drake did.

And Drake, who must know these laws are necessary because a year ago they would have helped him, just made it harder to pass them. Imagine you’re a music business lobbyist who spent the last year explaining to members of Congress how important it is to protect the unique sounds of particular performers and then suddenly one of the biggest artists in the world goes ahead and violates every principle you’ve been discussing. Forget about Lamar — where’s the diss track from the RIAA?

It’s hard to say for sure whether what Drake did was illegal because laws vary by state — that’s why we need federal legislation in the first place. But Drake seems to have released the recording without the involvement of his label, Republic Records, a subsidiary of Universal Music Group, which could indicate some concerns. (A representative for Drake declined to comment and Universal did not respond to requests for comment.) And Tupac Shakur’s estate has threatened to sue if Drake doesn’t take the track offline. (Snoop Dogg’s reaction — “They did what? When? How? Are you sure?,” followed by a weary sigh — is a work of art in itself. 10/10, no notes.) Litigation could be complicated, though. The Shakur estate threatened to sue for a violation of Shakur’s right of publicity, as well as for copyright infringement, which may be harder to prove but comes with high statutory damages.

Howard King, the lawyer for Shakur’s estate, lays out the issue in his cease-and-desist letter to Drake. “Not only is the record a flagrant violation of Tupac’s publicity and the estate’s legal rights,” King writes, “it is also a blatant abuse of the legacy of one of the greatest hip-hop artists of all time. The Estate would never have given its approval for this use.” The use of 2Pac’s voice was especially inappropriate, King suggests, since Lamar is “a good friend to the Estate who has given nothing but respect to Tupac.”

In music critic terms, Drake is using simulacra of 2Pac and Snoop to call out Lamar by implying that he’s unworthy of their legacy. In legal terms, this might violate Shakur and Snoop Dogg’s rights of publicity or likeness rights, and there are precedents that would suggest it does — Tom Waits and Bette Midler each won a case about (human) vocal imitation. In moral terms, this feels so wrong because it forces Shakur and Snoop to say something they would never have said in real life. In hip-hop, reputation is everything — you own your words in both senses of the term — and Snoop and Shakur have every right to guard theirs.

This might seem like an awfully pretentious way to talk about what will almost certainly be remembered as a minor track from a major artist. Are reputations really at stake? Doesn’t anyone with even the slightest interest in pop music know that Drake used AI?

That’s a very current way of thinking about a technology that’s evolving really fast. What happens when millions of hobbyist producers release thousands of songs with imitations of hundreds of artists? (There are fan-made AI tracks out there already.) Who’s to know who dissed whom, let alone who favors what politician or endorses which product? For that matter, what happens when this comes for politicians? You can’t regulate digital technology with the legal equivalent of an umbrella — you need to prepare for a flood.

The ELVIS Act and the EU AI legislation represent a good start for that preparation, and most of the federal legislation under discussion seems solid. Hopefully, by the time the flood hits, we’ll remember “Heart on My Sleeve” as the beginning of an important debate and “Taylor Made Freestyle” as an amusing aside.

Tupac Shakur’s estate is threatening to sue Drake over a recent diss track against Kendrick Lamar that featured an AI-generated version of the late rapper’s voice, calling it “a flagrant violation” of the law and a “blatant abuse” of his legacy.
In a Wednesday cease-and-desist letter obtained exclusively by Billboard, litigator Howard King told Drake (Aubrey Drake Graham) that he must confirm that he will pull down his “Taylor Made Freestyle” in less than 24 hours or the estate would “pursue all of its legal remedies” against him.

“The Estate is deeply dismayed and disappointed by your unauthorized use of Tupac’s voice and personality,” King wrote in the letter. “Not only is the record a flagrant violation of Tupac’s publicity and the estate’s legal rights, it is also a blatant abuse of the legacy of one of the greatest hip-hop artists of all time. The Estate would never have given its approval for this use.”

Drake released “Taylor Made” on Friday, marking the latest chapter in a back-and-forth war of words between the Canadian rapper and Lamar. Beyond taking shots at both Kendrick and Taylor Swift, the track made headlines because of its prominent use of artificial intelligence technology to create fake verses from Tupac and Snoop Dogg – two West Coast legends idolized by the LA-based Lamar.

“Kendrick, we need ya, the West Coast savior/ Engraving your name in some hip-hop history,” the AI-generated Tupac raps in Drake’s song. “If you deal with this viciously/ You seem a little nervous about all the publicity.”

In Wednesday’s letter, Tupac’s estate warned Drake that the use of his voice clearly violated Tupac’s so-called publicity rights – the legal power to control how your image or likeness is used by others. And they took particular exception to the use of his voice to take shots at Lamar.

“The unauthorized, equally dismaying use of Tupac’s voice against Kendrick Lamar, a good friend to the Estate who has given nothing but respect to Tupac and his legacy publicly and privately, compounds the insult,” King wrote.

A rep for Drake declined to comment on the demands of the Shakur estate.

It’s unclear if Snoop Dogg, whose voice was also featured on “Taylor Made,” is planning to raise similar legal objections to Drake’s track. On Saturday, he posted a video to social media in which he seemed to be learning of the song for the first time: “They did what? When? How? Are you sure?” A rep for Snoop Dogg did not return a request for comment.

The unauthorized use of voice cloning technology has become one of the music industry’s thorniest legal subjects, as AI-powered tools have made it easier than ever to convincingly mimic real artists.

The issue exploded onto the scene last year, when an unknown artist named Ghostwriter released a track called “Heart On My Sleeve” that featured – ironically – fake verses from Drake’s voice. Since then, as voice-cloning has proliferated on the internet, industry groups, legal experts and lawmakers have wrangled over how best to crack down on it.

It’s not as simple as it might seem. Federal copyrights are difficult to directly apply, since cloned vocals usually feature new words and music that are distinct from existing copyrighted songs. The publicity rights cited by the estate are a better fit because they protect someone’s likeness itself, but they have historically been used to sue over advertisements, rather than over creative works like songs.

Faced with that legal uncertainty, the recording industry and top artists have pushed for new legislation to address the problem. Last month, Tennessee passed a statute called the ELVIS Act that aims to crack down on voice cloning by expanding the state’s publicity right laws beyond just advertisements. Lawmakers in Washington DC are also considering similar bills that would create new, broader publicity rights at a federal level.

In Wednesday’s letter, however, the estate said that California’s existing publicity right laws clearly outlaw something as blatant as Drake’s use of Tupac’s voice in “Taylor Made.” King argued that the song had caused “substantial economic and reputational harm” by creating the “false impression that the estate and Tupac promote or endorse the lyrics for the sound-alike.”

The estate also argued that the song was likely created using an AI model that violated the estate’s copyrights by “training” on existing recordings of Tupac’s music. The legality of using copyrighted “inputs” is another difficult legal issue that’s currently being tested in several closely-watched lawsuits against AI developers, including one filed by major music publishers.

“It is hard to believe that [Tupac’s record label]’s intellectual property was not scraped to create the fake Tupac AI on the Record,” King wrote, before demanding that Drake also provide “a detailed explanation for how the sound-alike was created and the persons or company that created it, including all recordings and other data ‘scraped’ or used.”

Wednesday’s letter also pointedly highlighted that Drake himself has made previous objections to the use of his own likeness by others. In addition to last year’s incident surrounding “Heart on My Sleeve” — which was quickly pulled down from the internet — King pointed to a lesser-known federal lawsuit in which Drake’s attorneys accused a website of using his image without authorization.

“The [“Taylor Made Freestyle”] has generated well more than one million streams at this point and has been widely reported in the general national press and popular entertainment websites and publications,” the estate wrote. “Without question, it is exponentially more serious and damaging than a picture of you with some other people on a low volume website.”

In its closing paragraphs, the letter demanded written confirmation by noon Pacific on Thursday that Drake’s representatives were “expeditiously taking all steps necessary to have it removed.”

“If you comply, the estate will consider whether an informal negotiation to resolve this matter makes sense,” King wrote. “If you do not comply, our client has authorized this firm to pursue all of its legal remedies including, but not limited to, an action for violation of … the estate’s copyright, publicity and personality rights and the resulting damages, injunctive relief, and punitive damages and attorneys’ fees.”

Amazon Music has announced a new AI-powered playlist feature that allows users to turn text prompts into entire playlists. Called Maestro, the offering is still in beta and available only to a small number of Amazon Music users on all tiers in the United States on iOS and Android. It can be found on the […]

Artificial intelligence and user-generated content music tool company Mayk has announced the launch of its latest product, popstarz.ai. With the promise of letting anyone playfully assume the identity of a pop star and sing their favorite song, the company hopes to revolutionize karaoke and strengthen the artist-fan relationship. […]

SAG-AFTRA, the union representing roughly 160,000 actors, dancers, singers, recording artists and other media professionals, and all three major music companies reached a tentative multiyear agreement last week that includes guardrails for the use of artificial intelligence technology across the industry.
A successor to the SAG-AFTRA National Code of Fair Practice for Sound Recordings, the new deal received unanimous approval from the guild’s executive committee and, if ratified by member vote, will cover the period beginning Jan. 1, 2021 through Dec. 31, 2026. Participating labels include Sony Music Entertainment, Universal Music Group and Warner Music Group, as well as Disney Music Group.

The AI guidelines require that terms such as “artist,” “singer” and “royalty artist” refer only to actual humans, and the deal calls for clear consent, minimum compensation and other stipulations prior to the release of a sound recording that uses a digital replication of a real artist’s voice.

The tentative contract also includes increased minimums, health and retirement improvements, and an increase in the percentage of streaming revenue to be covered by contributions.

“This agreement ensures that our members are protected,” said Duncan Crabtree-Ireland, SAG-AFTRA national executive director. “SAG-AFTRA stands firm in the belief that while technology can enhance the creative process, the essence of music must always be rooted in genuine human expression and experience. We look forward to working alongside our industry partners to foster an environment where innovation serves to elevate, not diminish, the unique value of each artist’s contribution to our rich cultural tapestry.”

The Record Label Negotiating Committee said, “Together, we’ll chart a successful course forward, embracing new opportunities and facing our common challenges, strengthened by our shared values and commitment to human artistry.”

A new law in Tennessee aimed at protecting artists from AI-powered voice mimicry has won widespread acclaim from the music industry, but some legal experts are worried such laws might be an “overreaction” that could have unintended consequences.  
Less than a year after a fake Drake song created using new artificial intelligence tools took the music world by storm, Tennessee lawmakers enacted first-in-the-nation legislation last month aimed at preventing exactly that scenario — the use of a person’s voice without their permission. The ELVIS Act (Ensuring Likeness Voice and Image Security) does that by expanding the state’s protections against the unauthorized use of a person’s likeness, known as publicity rights.  

The passage of the new law was hailed across the music business. Mitch Glazier of the Recording Industry Association of America called it an “incredible result.” Harvey Mason Jr. of the Recording Academy described it as a “groundbreaking achievement.” David Israelite of the National Music Publishers’ Association called it “an important step forward.” Any musical artist who has had their voice used without permission likely shares those sentiments.  

But legal experts are more divided. Jennifer Rothman, a law professor at the University of Pennsylvania and one of the country’s top experts on publicity rights, rang alarm bells last week at a panel discussion in Nashville, warning that Tennessee’s new statute had not been necessary and had been “rushed” into law.  

“We don’t want a momentary overreaction to lead to the passage of laws that would make things worse, which is currently what is happening,” Rothman told her fellow panel members and the audience. “The ELVIS Act has a number of significant concerns that are raised, particularly with the broad sweep of liability and restrictions on speech.”  

In an effort to combat AI voice cloning, the ELVIS Act makes a number of key changes to the law. Most directly, it expands the state’s existing publicity rights protections to explicitly include someone’s voice as part of their likeness. But the statute also broadens the law in ways that have received less attention, including adding a broader definition of who can be sued and for what.

According to Joseph Fishman, a law professor at Vanderbilt University who has been closely tracking the legislation, that broader wording “sweeps in innocuous behavior that no one seriously thinks is a problem that needs solving” — potentially including tribute bands, interpolations, or even just sharing a photo that a celebrity didn’t authorize. 

“The range of acts that trigger liability is vast,” Fishman tells Billboard. “All the press around this law is focused on deepfakes and digital replicas — and those would indeed be covered — but the law as written goes so much further.”  

Here’s why: Historically, publicity rights in the U.S. have been mostly limited to commercial contexts — like advertisements that use a celebrity’s likeness to make it appear they’re endorsing a product. The singer Bette Midler once famously sued the Ford Motor Co. over a series of commercials featuring vocals by a Midler impersonator.

The new law effectively gets rid of that commercial limitation; under the ELVIS Act, anyone who knowingly “makes available” someone’s likeness without authorization can face a lawsuit. It also broadly defines protected voices as any sound that’s “readily identifiable and attributable to a particular individual.”

Those are great changes if you’re a musical artist trying to sue over a song that’s using a fake version of your voice, since the old conception of publicity rights likely wouldn’t apply to that scenario. But Fishman says they have serious potential for collateral damage beyond their intended target.  

“There’s nothing that would limit it to AI outputs, nothing that would limit it to deceptive uses,” Fishman said. “The lead singer in an Elvis tribute band who sings convincingly like The King certainly seems to fall under the definition. So do Elvis impersonators.”  

In an “even more extreme” hypothetical, Fishman imagined an “unflattering” photo of Elvis that he knew the Presley estate didn’t like. “The law seems to say I’d be liable if I sent that photo to a friend. After all, I’m transmitting his likeness, knowing that the rightsholder hasn’t authorized the use. Stop and think about that for a moment.”

The ELVIS Act does contain exemptions aimed at protecting free speech, including those that allow for the legal use of someone’s likeness in news coverage, criticism, scholarship, parody and other “fair use” contexts. It also expressly allows for “audiovisual works” that contain “a representation of the individual as the individual’s self” — a provision likely aimed at allowing Hollywood to keep making biopics and other films about real people without getting sued in Tennessee.

But confusingly, the law says those exemptions only apply “to the extent such use is protected by the First Amendment.” That wording, according to Rothman, means those exemptions essentially “don’t exist” unless and until a court rules that a specific alleged activity is a form of protected free speech, a costly extra step that will mostly benefit those who want to be in court. “This specific law creates great work for lawyers,” Rothman said. “So much work for lawyers.”  

Those lawyers are going to be filing real lawsuits against real people — some of whom are the scary, voice-cloning bad actors that the music industry wants to crack down on, but also some of whom are likely just regular people doing things that used to be legal.

“The law could absolutely lead to lots of lawsuits,” Fishman says. “There’s plenty of room here for people to test how far the statute can go, whether because they object to how they’re being depicted or because they see an opportunity for an extra licensing stream.”  

Though it only applies to Tennessee, the importance of the ELVIS Act is magnified because it is the first of likely many such legislative efforts aimed at addressing AI mimicry. At least five other states are currently considering amending their publicity rights laws to address the growing problem, and lawmakers on Capitol Hill are also weighing federal legislation that would create a national likeness statute for the first time.  

At last week’s roundtable, Rothman said those efforts were misguided. She said that laws already on the books — including federal trademark law, existing publicity rights laws, and numerous other statutes and torts — already provide avenues to stop voice cloning and deepfakes. And she warned that the proposed federal bills posed even more serious problems, like allowing someone to sign away their likeness rights in perpetuity.

For other legal experts critical of the ELVIS Act, including Harvard University law professor Rebecca Tushnet, the hope is that any subsequent legislation, whether at the state or federal level, can be more directly tailored to the actual AI-fueled deceptions they’re supposed to address. 

“Any new laws need to be far more targeted at specific harms,” says Tushnet, who has written extensively about the intersection of intellectual property and free speech. “Right now, this statute and other proposals are dramatically overbroad, and threaten legitimate creative conduct.”