AI
In November, I quit my job in generative AI to campaign for creators’ right not to have their work used for AI training without permission. I started Fairly Trained, a non-profit that certifies generative AI companies that obtain a license before training models on copyrighted works.
Mostly, I’ve felt good about this decision — but there have been a few times when I’ve questioned it. Like when a big media company, though keen to defend its own rights, told me it couldn’t find a way to stop using unfairly trained generative AI in other domains. Or whenever demos from the latest models receive unquestioning praise despite how they’re trained. Or, last week, with the publication of a series of articles about AI music company Suno that I think downplay serious questions about the training data it uses.
Suno is an AI music generation company with impressive text-to-song capabilities. I have nothing against Suno, with one exception: Piecing together various clues, it seems likely that its model is trained on copyrighted work without rights holders’ consent.
What are these clues? Suno refuses to reveal its training data sources. In an interview with Rolling Stone, one of its investors disclosed that Suno didn’t have deals with the labels “when the company got started” (there is no indication this has changed), that they invested in the company “with the full knowledge that music labels and publishers could sue,” and that the founders’ lack of open hostility to the music industry “doesn’t mean we’re not going to get sued.” And, though I’ve approached the company through two channels about getting certified as Fairly Trained, they’ve so far not taken me up on the offer, in contrast to the 12 other AI music companies we’ve certified for training their platforms fairly.
There is, of course, a chance that Suno licenses its training data, and I genuinely hope I’m wrong. If they correct the record, I’ll be the first to loudly and regularly trumpet the company’s fair training credentials.
But I’d like to see media coverage of companies like Suno give more weight to the question of what training data is being used. This is an existential issue for creators.
Editor’s note: Suno’s founders did not respond to requests for comment from Billboard about their training practices. Sources confirm that the company does not have licensing agreements in place with some of the most prominent music rightsholders, including the three major label groups and the National Music Publishers’ Association.
Limiting discussion of Suno’s training data to the fact that it “decline[s] to reveal details” and not explicitly stating the possibility that Suno uses copyrighted music without permission means that readers may not be aware of the potential for unfair exploitation of musicians’ work by AI music companies. This should factor into our thoughts about which AI music companies to support.
If Suno is training on copyrighted music without permission, this is likely the technological factor that sets it apart from other AI music products. The Rolling Stone article mentions some of the tough technical problems that Suno is solving — having to do with tokens, the sampling rate of audio and more — but these are problems that other companies have solved. In fact, several competitors have models as capable as Suno’s. The reason you don’t see more models like Suno’s being released to the public is that most AI music companies want to ensure training data is licensed before they release their products.
The context here is important. Some of the biggest generative AI companies in the world are using untold numbers of creators’ work without permission in order to train AI models that compete with those creators. There is, understandably, a big public outcry at this large-scale scraping of copyrighted work from the creative community. This has led to a number of lawsuits, which Rolling Stone mentions.
The fact that generative AI competes with human creators is something AI companies prefer not to talk about. But it’s undeniable. People are already listening to music from companies like Suno in place of Spotify, and generative AI listening will inevitably eat into music industry revenues — and therefore human musicians’ income — if training data isn’t licensed.
Generative AI is a powerful technology that will likely bring a number of benefits. But if we support the exploitation of people’s work for training without permission, we implicitly support the unfair destruction of the creative industries. We must instead support companies that take a fairer approach to training data.
And those companies do exist. There are a number — generally startups — taking a fairer approach, refusing to use copyrighted work without consent. They are licensing, or using public domain data, or commissioning data, or all of the above. In short, they are working hard not to train unethically. At Fairly Trained, we have certified 12 of these companies in AI music. If you want to use AI music and you care about creators’ rights, you have options.
There is a chance Suno has licensed its data. I encourage the company to disclose what it’s training its AI model on. Until we know more, I hope anyone looking to use AI music will opt instead to work with companies that we know take a fair approach to using creators’ work.
To put it simply — and to use some details pulled from Suno’s Rolling Stone interview — it doesn’t matter whether you’re a team of musicians, what you profess to think about IP, or how many pictures of famous composers you have on the walls. If you train on copyrighted work without a license, you’re not on the side of musicians. You’re unfairly exploiting their work to build something that competes with them. You’re taking from them to your gain — and their cost.
Ed Newton-Rex is the CEO of Fairly Trained and a composer. He previously founded Jukedeck, one of the first AI music companies, ran product in Europe for TikTok, and was vp of audio at Stability AI.
Tennessee governor Bill Lee signed the ELVIS Act into law Thursday (Mar. 21), legislation designed to further protect the state’s artists from artificial intelligence deep fakes. The bill, more formally named the Ensuring Likeness Voice and Image Security Act of 2024, replaces the state’s old right of publicity law, which only included explicit protections for one’s “name, photograph, or likeness,” expanding protections to include voice- and AI-specific concerns for the first time.
Gov. Lee signed the bill into law from a local honky tonk, surrounded by superstar supporters like Luke Bryan and Chris Janson. Lee joked that it was “the coolest bill signing ever.”
The ELVIS Act was introduced by Gov. Lee in January along with State Senate Majority Leader Jack Johnson (R-27) and House Majority Leader William Lambert (R-44), and it has since garnered strong support from the state’s artistic class. Talents like Lindsay Ell, Michael W. Smith, Natalie Grant, Matt Maher and Evanescence‘s David Hodges have been vocal in their support for the bill.
It also gained support from the recorded music industry and the Human Artistry Campaign, a global initiative of entertainment organizations that pushes for a responsible approach to AI. The initiative has buy-in from more than 180 organizations worldwide, including the RIAA, NMPA, BMI, ASCAP, Recording Academy and American Association of Independent Music (A2IM).
Right of publicity protections vary state-to-state in the United States, leading to a patchwork of laws that make enforcing one’s ownership over one’s name, likeness and voice more complicated. There is an even greater variation among right of publicity laws postmortem. As AI impersonation concerns have grown more prevalent over the last year, there has been a greater push by the music business to gain a federal right of publicity.
The ELVIS Act replaces the Personal Rights Protection Act of 1984, which was passed, in part, to extend Elvis Presley‘s publicity rights after he passed away. (At the time, Tennessee did not recognize a postmortem right of publicity). Along with explicitly including a person’s voice as a protected right for the first time, the ELVIS Act also broadens which uses of one’s name, image, photograph and voice are barred.
Previously, the Personal Rights Protection Act only banned uses of a person’s name, photograph and likeness “for purpose of advertising,” which would not include the unauthorized use of AI voices for performance purposes. The ELVIS Act does not limit liability based on context, so it would likely bar any unauthorized use, including in a documentary, song or book, among other mediums.
The federal government is also working on solutions to address publicity rights concerns. Within hours of Gov. Lee’s introduction of the ELVIS Act in Tennessee back in January, a bipartisan group of U.S. House lawmakers revealed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act), which aims to establish a framework for protecting one’s voice and likeness on a federal level and lays out First Amendment protections. It is said to complement the Senate’s Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), a draft bill that was introduced last October.
While most of the music business is aligned on creating a federal right of publicity, David Israelite, president/CEO of the National Music Publishers’ Association (NMPA), warned in a speech delivered at an Association of Independent Music Publishers (AIMP) meeting in February that “while we are 100% supportive of the record labels’ priority to get a federal right of publicity…it does not have a good chance. Within the copyright community, we don’t even agree on [it]. Guess who doesn’t want a federal right of publicity? Film and TV. Guess who’s bigger than the music industry? Film and TV.”
The subject of AI voice cloning has been a controversial topic in the music business since Ghostwriter released the so-called “Fake-Drake” song “Heart On My Sleeve,” which used the AI technology without permission. In some cases, this form of AI can present novel creative opportunities — including its use for pitch records, lyric translations, estate marketing and fan engagement — but it also poses serious threats. If an artist’s voice is cloned by AI without their permission or knowledge, it can confuse, offend, mislead or even scam fans.
There is no shortage of AI voice synthesis companies on the market today, but Voice-Swap, founded and led by Dan “DJ Fresh” Stein, is trying to reimagine what these companies can be.
The music producer and technologist intends Voice-Swap to act as not just a simple conversion tool but an “agency” for artists’ AI likenesses. He’s also looking to solve the ongoing question of how to monetize these voice models in a way that gets the most money back to the artists — a hotly contested topic since anonymous TikTok user Ghostwriter employed AI renderings of Drake and The Weeknd‘s voices without their permission on the viral song “Heart On My Sleeve.”
In an exclusive interview with Billboard, Stein and Michael Pelczynski, a member of the company’s advisory board and former vp at SoundCloud, explain their business goals as well as their new monetization plan, which includes providing a dividend for participating artists and payment to artists every time a user employs their AI voice — not just when the resulting song is released commercially and streamed on DSPs. The company also reveals that it’s working on a new partnership with Imogen Heap to create her voice model, which will arrive this summer.
Voice-Swap sees the voice as the “new real estate of IP,” as Pelczynski puts it — just another form of ownership that can allow a participating artist to make passive income. (The voice, along with one’s name and likeness, is considered a “right of publicity” which is currently regulated differently state-to-state.)
In addition to seeing AI voice technology as a useful tool to engage fans of notable artists like Heap and make translations of songs, the Voice-Swap team also believes AI voices represent a major opportunity for session vocalists with distinct timbres but lower public profiles to earn additional income. On its platform now, the company has a number of session vocalists of varying vocal styles available for use; Voice-Swap sees session vocalists’ AI voice models as potentially valuable to songwriters and producers who may want to shape-shift those voices during writing and recording sessions. (As Billboard reported in August, using AI voice models to better tailor pitch records to artists has become a common use-case for the emerging technology.)
“We like to think that, much like a record label, we have a brand that we want to build with the style of artists and the quality we represent at Voice-Swap,” says Stein. “It doesn’t have to be a specific genre, but it’s about hosting unique and incredible voices as opposed to [just popular artists].”
Last year, we saw a lot of fear and excitement surrounding this technology as Ghostwriter appeared on social media and Grimes introduced her own voice model soon after. How does your approach compare to these examples?
Pelczynski: This technology did stoke a lot of fear at first. This is because people see it as a magic trick. When you don’t know what’s behind it and you just see the end result and wonder how it just did that, there is wonder and fear that comes. [There is now the risk] that if you don’t work with someone you trust on your vocal rights, someone is going to pick up that magic trick and do it without you. That’s what happened with Ghostwriter and many others.
The one real main thing to emphasize is the magic trick of swapping a voice isn’t where the story ends, it’s where it begins. And I think Grimes in particular is approaching it with an intent to empower artists. We are, too. But I think where we differentiate is the revenue stream part. With the Grimes model, you create what you want to create and then the song goes into the traditional ecosystem of streaming and other ways of consuming music. That’s where the royalties are made off of that.
We are focused on the inference. Our voice artists get paid on the actual conversion of the voice. Not all of these uses of AI voices end up on streaming, so this is important to us. Of course, if the song is released, additional money for the voice can be made then, too. As far as we know, we are the first platform to pay royalties on the inference, the first conversion.
Stein: We also allow artists the right to release their results through any distributor they want. [Grimes’ model is partnered exclusively with TuneCore.] We see ourselves a bit like an agency for artists’ voices.
What do you mean by an “agency” for artists’ voices?
Stein: When we work with an artist at Voice-Swap we intend to represent them and license their voice models created with us to other platforms to increase their opportunities to earn income. It’s like working with an agent to manage your live bookings. We want to be the agent for the artists’ AI presence and help them monetize it on multiple platforms but always with their personal preferences and concerns in mind.
What kinds of platforms would be interested in licensing an AI voice model from Voice-Swap?
Stein: It is early days for all of the possible use cases, but we think the most obvious example at the moment is music production platforms [or DAWs, short for digital audio workstation] that want to use voice models in their products.
There are two approaches you can take [as an AI voice company.] We could say we are a SaaS platform, and the artist can do deals with other platforms themselves. But the way we approach this is we put a lot of focus into the quality of our models and working with artists directly to keep improving it. We want to be the one-stop solution for creating a model the artist is proud of.
I think the whole thing with AI and where this technology is going is that none of us know what it’s going to be doing 10 years from now. So for us, this was also about getting into a place where we can build that credibility in those relationships and not just with the artists. We want to work with labels, too.
Do you have any partnerships with DAWs or other music-making platforms in place already?
Pelczynski: We are in discussions and under NDA pending an announcement. Every creator’s workflow is different — we want our users to have access to our roster of voices wherever they feel most comfortable, be that via the website, in a DAW or elsewhere. That’s why we’re exploring these partnerships, and why we’ve designed our upcoming VST [virtual studio technology] to make that experience even more seamless. We also recently announced a partnership with SoundCloud, with deeper integrations aimed at creators forthcoming.
Ultimately, the more places our voices are available, the more opportunities there are for new revenue for the artists, and that’s our priority.
Can some music editing take place on the Voice-Swap website, or do these converted voices need to be exported?
Pelczynski: Yes, Dan has always wanted to architect a VST so that it can act like a plug-in in someone’s DAW, but we also have the capability of letting users edit and do the voice conversion and some music editing on our website using our product Stem-Swap. That’s an amazing playground for people that are just coming up. It is similar to how BandLab and others are a good quick way to experiment with music creation.
How many users does Voice-Swap have?
Pelczynski: We have 140,000 verified unique users, and counting.
Can you break down the specifics of how much your site costs for users?
Pelczynski: We run a subscription and top-up pricing system. Users pay a monthly or one-off fee and receive audio credits. Credits are then used for voice conversion and stem separation, with more creator tools on the way.
How did your team get connected with Imogen Heap, and given all the competitors in the AI voice space today, why do you think she picked Voice-Swap?
Pelczynski: We’re very excited to be working with her. She’s one of many established artists that we’re working on currently in the pipeline, and I think our partnership comes down to our ethos of trust and consent. I know it sounds trite, but I think it’s absolutely one of the cornerstones to our success.
Nearly 300 artists, songwriters, actors and other creators are voicing support for a new bipartisan Congressional bill that would regulate the use of artificial intelligence for cloning voices and likenesses via a new print ad running in USA Today on Friday (Feb. 2).
The bill — dubbed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (“No AI FRAUD” Act) and introduced in the U.S. House on Jan. 10 — would establish a federal framework for protecting voices and likenesses in the age of AI.
Placed by the Human Artistry Campaign, the ad features such bold-faced names as 21 Savage, Bette Midler, Cardi B & Offset, Chuck D, Common, Gloria Estefan, Jason Isbell, the estate of Johnny Cash, Kelsea Ballerini, Lainey Wilson, Lauren Daigle, Lamb of God, Mary J. Blige, Missy Elliott, Nicki Minaj, Questlove, Reba McEntire, Sheryl Crow, Smokey Robinson, the estate of Tom Petty, Trisha Yearwood and Vince Gill.
“The No AI FRAUD Act would defend your fundamental human right to your voice & likeness, protecting everyone from nonconsensual deepfakes,” the ad reads. “Protect your individuality. Support HR 6943.”
The Human Artistry Campaign is a coalition of music industry organizations that in March 2023 released a series of seven core principles regarding artificial intelligence. They include ensuring that AI developers acquire licenses for artistic works used in developing and training AI models, as well as that governments refrain from creating “new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”
In addition to musical artists, the USA Today ad also bears the names of actors such as Bradley Cooper, Clark Gregg, Debra Messing, F. Murray Abraham, Fran Drescher, Laura Dern, Kevin Bacon, Kyra Sedgwick, Kristen Bell, Kiefer Sutherland, Julianna Margulies and Rosario Dawson.
The No AI FRAUD Act was introduced by Rep. María Elvira Salazar (R-FL) alongside Reps. Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY) and Rob Wittman (R-VA). The bill is said to be based upon the Senate discussion draft Nurture Originals, Foster Art, and Keep Entertainment Safe Act (“NO FAKES” Act), which was unveiled in October.
“It’s time for bad actors using AI to face the music,” said Rep. Salazar in a statement at the time the legislation was announced. “This bill plugs a hole in the law and gives artists and U.S. citizens the power to protect their rights, their creative work, and their fundamental individuality online.”
Spurred in part by recent incidents including the viral “fake Drake” track “Heart On My Sleeve,” the No AI FRAUD Act would establish a federal standard barring the use of AI to copy the voices and likenesses of public figures without consent. As it stands, an artist’s voice, image or likeness is typically covered by “right of publicity” laws that protect them from commercial exploitation without authorization, but those laws vary state by state.
The bill was introduced on the same day a similar piece of legislation — the Ensuring Likeness Voice and Image Security (ELVIS) Act — was unveiled in Tennessee by Governor Bill Lee. That bill would update the state’s Protection of Personal Rights law “to include protections for songwriters, performers, and music industry professionals’ voice from the misuse of artificial intelligence (AI),” according to a press release.
Since its unveiling, the No AI FRAUD Act has received support from a range of music companies and organizations including the Recording Industry Association of America (RIAA), Universal Music Group, the National Music Publishers’ Association (NMPA), the Recording Academy, SoundExchange, the American Association of Independent Music (A2IM) and the Latin Recording Academy.
You can view the full ad below.
To judge from the results of a report commissioned by GEMA and SACEM, the specter of artificial intelligence (AI) is haunting Europe.
A full 35% of members of the respective German and French collective management societies surveyed said they had used some kind of AI technology in their work with music, according to a Goldmedia report shared in a Tuesday (Jan. 30) press conference — but 71% were afraid that the technology would make it hard for them to earn a living. That means that some creators who are using the technology fear it, too.
The report, which involved expert interviews as well as an online survey, valued the market for generative AI music applications at $300 million last year – 8% of the total market for generative AI. By 2028, though, that market could be worth $3.1 billion. That same year, 27% of creator revenues – or $950 million – would be at risk, in large part due to AI-created music replacing that made by humans.
Although many of us think of the music business as being one where fans make deliberate choices of what to listen to – either by streaming or purchasing music – collecting societies take in a fair amount of revenue from music used in films and TV shows, in advertising, and in restaurants and stores. So even if generative AI technology isn’t developed enough to write a pop song, it could still cost the music business money – and creators part or even all of their livelihood.
“So far,” as the report points out, “there is no remuneration system that closes the AI-generated financial gap for creators.” Although some superstars are looking to license the rights to their voices, there is a lack of legal clarity in many jurisdictions about the conditions under which a generative AI can use copyrighted material for training purposes. (In the United States, this is a question of fair use, a legal doctrine that doesn’t exist in the same form in France or Germany.) Assuming that music used to train an AI would need to be licensed, however, raises other questions, such as how often rights holders would be paid and how much.
Unsurprisingly, the vast majority of songwriters want credit and transparency: 95% want AI companies to disclose which copyrighted works they used for training purposes and 89% want companies to disclose which works are generated by AI. Additionally, 90% believe they should be asked for permission before their work is used for training purposes and the same share want to benefit financially. A full 90% want policymakers to pay more attention to issues around AI and copyright.
The report further breaks down how the creators interviewed feel about using AI. In addition to the 35% who use the technology, 13% are potential users, 26% would rather not use it and 19% would refuse. Of those who use the technology already, 54% work on electronic music, 53% work on “urban/rap,” 52% on advertising music, 47% on “music library” and 46% on “audiovisual industry.”
These statistics underscore that AI isn’t a technology that’s coming to music – it’s one that’s here now. That means that policymakers looking to regulate this technology need to act soon.
The report also shows that smart regulation could resolve the debate between the benefits and drawbacks of AI. Creators are clearly using it productively, but more still fear it: 64% think the risks outweigh the opportunities, while just 11% thought the opposite. This is a familiar pattern in the music business, where new technologies tend to be both dangerous and promising. Perhaps AI could end up being both.
Following the spread of AI-generated, sexually explicit photos of Taylor Swift, the White House is speaking out and calling for legislation to protect victims of online harassment. White House press secretary Karine Jean-Pierre called the incident “alarming,” and said that the negatives […]
Flavor Flav teamed up with his Public Enemy bandmate Chuck D for “Every Where Man,” a new single that uses AI to translate the track to dozens of different languages. To celebrate, he joined Billboard’s Rania Aniftos to discuss the inspiration behind the song. […]
The leader of the American Federation of Musicians proclaimed that Hollywood labor is “in a new era” as dozens of members of various entertainment unions came to the doorstep of studio labor negotiators in support of the start of his union’s contract negotiations on Monday.
As an early drizzle that morning turned into driving rain, members of the Writers Guild of America, SAG-AFTRA, IATSE and Teamsters Local 399 rallied in front of the Sherman Oaks offices of the Alliance of Motion Picture and Television Producers with picket signs, and a few umbrellas, in hand. To AFM‘s chief negotiator and international president Tino Gagliardi, this kind of unity for musicians was unlike anything he’d seen in his time in union leadership. “We’re in a new era, especially in the American labor movement, with regard to everyone coalescing and coming together and collaborating in order to get what we all need in this industry,” Gagliardi told The Hollywood Reporter. “Together we are the product, we are the ones that bring the audiences in, that controls the emotion, if you will.”
The program — which featured music performed by AFM brass musicians and speeches from labor leaders including Teamsters Local 399 secretary-treasurer Lindsay Dougherty, Writers Guild of America West vice president Michele Mulroney and L.A. County Federation of Labor president Yvonne Wheeler — took place hours before the AFM was scheduled to begin negotiations over new Basic Theatrical Motion Picture and Basic Television Motion Picture contracts with the AMPTP in an office just steps away.
The message that speakers drove home was sticking together in the wake of the actors’ and writers’ strikes that shut down much of entertainment for half a year the previous summer and fall. The 2023 WGA and SAG-AFTRA strikes saw an unusual amount of teamwork occur between entertainment unions, which the AFM is clearly hoping to repeat in their contract talks. “We learned a hard, long lesson last year that we had to be together since day one. That’s going to be the difference going into this fight for the musicians, is that we’re all together in this industry,” Dougherty said in her speech.
The WGA West’s Mulroney addressed the musicians present, saying that her members “never took your support for granted” during the writers’ work stoppage. She added, “The WGA has your back just as you had our backs this past summer.” Though he wasn’t present at Monday’s event, SAG-AFTRA national executive director Duncan Crabtree-Ireland sent a message, delivered by his chief communications officer, that “the heat of the hot labor summer is as strong as ever.”
The AMPTP said in a statement on Monday that it looks forward to “productive” negotiations with the AFM, “with the goal of concluding an agreement that will ensure an active year ahead for the industry and recognize the value that musicians add to motion pictures and television.”
Though the AFM contracts under discussion initially expired in Nov. 2023, the writers’ and actors’ strikes that year prompted both sides to extend the pacts by six months. Top priorities for the musicians’ union in this round of talks include instituting AI protections, raising wages and securing greater streaming residuals.
For rank-and-file writers and actors who showed up at Monday’s rally, one recurring theme was repaying the AFM for its support during their work stoppages. SAG-AFTRA member Miki Yamashita (Cobra Kai), who is also a member of the American Guild of Musical Artists, explained that during the actors’ strike she organized an opera singers-themed picket at Paramount, which AFM members asked to take part in. “Because of them, we had orchestra players and a pianist to play for us during our picket, and I’ll never forget how much that meant to me, that show of solidarity,” she said. “I promised myself that if they ever needed my presence or my help, that I would rush to help them.”
Carlos Cisco and Eric Robbins, both writers on Star Trek: Discovery and WGA members, worked as lot coordinators at Disney during the writers’ strike. They recalled AFM members providing a morale boost during the work stoppage by occasionally playing music on the picket lines. “The struggles that labor faces in this [industry] are universal, whether it’s the hours, the residual payments as we’ve moved to streaming or the concern about AI coming into various spaces. We have far more in common than separates us,” said Robbins.
The AFM’s negotiations are set to continue through Jan. 31. Though the AMPTP offices don’t often see labor demonstrations, Gagliardi says that as a former president of New York-based AFM Local 802, he staged rallies in front of employer headquarters with some frequency. “I did this on a regular basis,” he said. “It was about bringing everyone together to fight for a common cause, and that’s what we’re doing today.”
This story was originally published by The Hollywood Reporter.
Tennessee governor Bill Lee has announced a new state bill to further protect the state’s “best in class artists and songwriters” from AI deepfakes.
While the state already has laws to protect Tennesseans against the exploitation of their name, image and likeness without their consent, this new law, called the Ensuring Likeness Voice and Image Security Act (ELVIS Act), is an update to the existing law to specifically address the challenges posed by new generative AI tools. The ELVIS Act also introduces protection for voices.
The announcement arrives just hours after a bipartisan group of U.S. House lawmakers revealed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act), which aims to establish a framework for protecting one’s voice and likeness on a federal level and lays out First Amendment protections. It is said to be a complement to the Senate’s Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), a draft bill that was introduced last October.
An artist’s voice, image or likeness may be covered by “right of publicity” laws that protect them from commercial exploitation without authorization, but this right varies state by state. The ELVIS Act aims to provide Tennessee-based talent with much clearer protection for their voices in particular at the state level, while the No AI FRAUD Act hopes to establish a harmonized baseline of protection at the federal level. (If an artist lives in a state with an even stronger right of publicity law than the No AI FRAUD Act, that state-level protection still applies and may be easier to assert in court.)
The subject of AI voice cloning has been a controversial topic in the music business in the past year. In some cases, it presents novel creative opportunities — including its use for pitch records, lyric translations, estate marketing and fan engagement — but it also poses serious threats. If an artist’s voice is cloned by AI without their permission or knowledge, it can confuse, offend, mislead or even scam fans.
“From Beale Street to Broadway, to Bristol and beyond, Tennessee is known for our rich artistic heritage that tells the story of our great state,” says Gov. Lee in a statement. “As the technology landscape evolves with artificial intelligence, we’re proud to lead the nation in proposing legal protection for our best-in-class artists and songwriters.”
“As AI technology continues to develop, today marks an important step towards groundbreaking state-level AI legislation,” added Harvey Mason Jr., CEO of the Recording Academy. “This bipartisan, bicameral bill will protect Tennessee’s creative community against AI deepfakes and voice cloning and will serve as the standard for other states to follow. The Academy appreciates Governor Lee and bipartisan members of the Tennessee legislature for leading the way — we’re eager to collaborate with lawmakers to move this bill forward.”
“The emergence of generative artificial intelligence (AI) has resulted in fake recordings that are not authorized by the artist, which is wrong, period,” said a representative from the Nashville Songwriters Association International (NSAI). “The NSAI applauds Tennessee Governor Bill Lee, Senate Leader Jack Johnson and House Leader William Lamberth for introducing legislation that adds the word ‘voice’ to the existing law — making it crystal clear that unauthorized AI-generated fake recordings are subject to legal action in the State of Tennessee. This is an important step in what will be an ongoing challenge to regulate generative AI music creations.”
“I commend Governor Lee of Tennessee for this forward-thinking legislation,” said A2IM president/CEO Dr. Richard James Burgess. “Protecting the rights to an individual’s name, voice, and likeness in the digital era is not just about respecting personal identity but also about safeguarding the integrity of artistic expression. This act is a significant step towards balancing innovation with the rightful interests of creators and performers. It acknowledges the evolving landscape of technology and media, setting a precedent for responsible and ethical use of personal attributes in the music industry.”
“The Artist Rights Alliance is grateful to Gov. Lee, State Senator Jack Johnson and Rep. William Lamberth for launching this effort to prevent an artist’s voice and likeness from being exploited without permission,” said Jen Jacobsen, executive director of the Artist Rights Alliance. “Recording artists and performers put their very selves into their art. Scraping or copying their work to replicate or clone a musician’s voice or image violates the most fundamental aspects of creative identity and artistic integrity. This important bill will help ensure that creators and their livelihoods are respected and protected in the age of AI.”
“AI deepfakes and voice cloning threaten the integrity of all music,” added David Israelite, president/CEO of the National Music Publishers’ Association. “It makes sense that the state of Tennessee would pioneer these important policies, which will bolster and protect the entire industry. Music creators face enough forces working to devalue their work — technology that steals their voice and likeness should not be one of them.”
“Responsible innovation has expanded the talents of creators — artists, songwriters, producers, engineers, and visual performers, among others — for decades, but use of generative AI that exploits an individual’s most personal attributes without consent is detrimental to our humanity and culture,” said Mitch Glazier, chairman/CEO of the Recording Industry Association of America (RIAA). “We applaud Governor Bill Lee, State Senate Majority Leader Jack Johnson and House Majority Leader William Lamberth’s foresight in launching this groundbreaking effort to defend creators’ most essential rights from AI deepfakes, unauthorized digital replicas and clones. The ELVIS Act reaffirms the State of Tennessee’s commitment to creators and complements Senator Blackburn’s bipartisan work to advance strong legislation protecting creators’ voices and images at the federal level.”
“Evolving laws to keep pace with technology is essential to protecting the creative community,” said Michael Huppe, president/CEO of SoundExchange. “As we embrace the enormous potential of artificial intelligence, Tennessee is working to ensure that music and those who make it are protected under the law from exploitation without consent, credit, and compensation. We applaud the cradle of country music and the birthplace of rock ’n’ roll for leading the way.”
According to a press release from the state of Tennessee, the ELVIS Act is also supported by the Academy of Country Music, American Association of Independent Music (A2IM), The Americana Music Association, American Society of Composers, Authors and Publishers (ASCAP), Broadcast Music, Inc. (BMI), Church Music Publishers Association (CMPA), Christian Music Trade Association, Folk Alliance International, Global Music Rights, Gospel Music Association, The Living Legends Foundation, Music Artists Coalition, Nashville Musicians Association, National Music Publishers’ Association, Rhythm & Blues Foundation, Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), Society of European Stage Authors and Composers (SESAC), Songwriters of North America (SONA) and the Tennessee Entertainment Commission.
Throughout history, music has embraced constructive change and innovation. And we will do so again as we confront the opportunities and risks of artificial intelligence.
Done right, AI should offer avenues for new growth and artistic accomplishment. When creators’ rights are respected, innovation thrives.
Already, music companies have unveiled compelling projects that use AI technologies in groundbreaking ways — with full consent and participation of the artists and rights holders involved. Working together with responsible AI companies, music companies are finding new ways to enhance production and marketing, gain new understandings from data and research, and improve wellness and health. They’ve used it to help identify new audiences for artists and pioneer new ways to celebrate iconic catalogs and performers. This is just the beginning of a new era of possibilities.
But many AI developers are resisting collaborative efforts by the creative sector to develop a responsible policy framework for AI, even though the elements of such a framework are straightforward and common-sense. In short, AI companies must honor:
Authorization: only use copyrighted music if it is authorized (for example, through a license)
Transparency: keep and disclose adequately detailed records of the content on which they train their systems
Authenticity: prevent deepfakes, voice clones, and similar violations of individuals’ rights in their own voice, image, name, likeness and identity.
These foundational, consensus principles are detailed by the Human Artistry Campaign and supported by virtually the entire creative community. They set forth a baseline for responsible development and deployment of AI.
But as if on cue, some of the worst instincts of Big Technology have returned. Some AI developers claim it’s “fair use” to scrape up protected music so it can be copied and repackaged by their models. That’s just wrong.
Put bluntly, that’s digital theft.
In every legitimate market in the world, the use of others’ property requires the owner’s consent and agreed-upon compensation. Together, for example, music and technology have developed a burgeoning streaming market built on the common-sense principle that use of copyrighted creative works requires licensing and consent.
Indeed, the developers’ claim that they can use decades’ worth of iconic and extremely valuable recordings for AI without bothering to ask or pay the rightsholders is so far-fetched that former Stability AI developer Ed Newton-Rex quit his job in November rather than be party to an extreme effort to rip off artists and misappropriate their work, explaining via X:
“Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable[.]”
It’s not.
This is why transparency is essential. AI developers must keep accurate records of the copyrighted works used by their models and make them available to rights holders seeking to enforce their rights. We need rules requiring that developers maintain adequately detailed records and share this information — or bear the consequences if they fail to produce it. We were pleased to see that the European Union enshrined this as a core principle in its landmark AI Act.
AI policy must also establish clear rules protecting every performer’s right to their own voice, image, name and likeness — the most fundamental cornerstones of individual identity. AI fakes that mine an artist’s body of work to create artificial replicas and voice clones, fashion phony endorsements, or depict individuals in ways they haven’t consented to are the worst kind of personal invasion. Congress needs to put an end to wrongful appropriation of the most central components of individual human identity.
These are the challenges of 2024.
Either we work to build a strong and sustainable foundation for music in the era of generative AI, one that moves art and technology forward together, or generative AI devolves into just another “move fast and break things” novelty that fails to deliver anything of value while eroding our culture.
These are the choices policymakers will face this coming year. Let’s work to help them forge the right path.
Mitch Glazier is chairman/CEO of the Recording Industry Association of America.