Artificial Intelligence
Country star Lainey Wilson and Recording Academy president/CEO Harvey Mason voiced their support for federal regulation of AI technology at a hearing conducted by the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet in Los Angeles on Friday (Feb. 2).
“Our voices and likenesses are indelible parts of us that have enabled us to showcase our talents and grow our audiences, not mere digital kibble for a machine to duplicate without consent,” Wilson said during her comments.
“The artists and creators I talk to are concerned that there’s very little protection for artists who see their own name or likeness or voice used to create AI-generated materials,” Mason added. “This misuse hurts artists and their fans alike.”
“The problem of AI fakes is clear to everyone,” he continued later. “This is a problem that only Congress can address to protect all Americans. For this reason, the academy is grateful for the introduction of the No AI FRAUD Act,” a bill announced in January that aims to establish a federal framework for protecting voice and likeness.
The star of the hearing was not from the music industry, though. Jennifer Rothman, a professor of law at the University of Pennsylvania Law School, offered an eloquent challenge to a key provision of the No AI FRAUD Act, which would allow artists to transfer the rights to their voice and likeness to a third party.
It’s easy to imagine this provision is popular with labels, which historically built their large catalogs by taking control of artists’ recordings in perpetuity. However, Rothman argued that “any federal right to a person’s voice or likeness must not be transferable away from that person” and that “there must be significant limits on licensing” as well.
“Allowing another person or entity to own a living human being’s likeness or voice in perpetuity violates our fundamental and constitutional right to liberty,” she said.
Rothman cleverly invoked the music industry’s long history of perpetuity deals — a history that has upset many artists, including stars like Taylor Swift, over the years — as part of the reason for her objection.
“Imagine a world in which Taylor Swift‘s first record label obtained rights in perpetuity to young Swift’s voice and likeness,” Rothman explained. “The label could then replicate Swift’s voice over and over in new songs that she never wrote and have AI renditions of her perform and endorse the songs and videos and even have holograms perform them on tour. In fact, under the proposed No AI Fraud Act, the label would be able to sue Swift herself for violating her own right of publicity if she used her voice and likeness to write and record new songs and publicly perform them. This is the topsy-turvy world that the draft bills would create.”
(Rothman’s reference to Swift was just one of several at the hearing. Rep. Kevin Kiley (R-CA) alluded to the debate over whether the singer would be able to make it to the Super Bowl from her performance in Tokyo, while Rep. Nathaniel Moran (R-TX) joked, “I have not mentioned Travis Kelce’s girlfriend once during this testimony.”)
Rothman pointed out that the ability to transfer voice or likeness rights in perpetuity potentially “threatens ordinary people” as well: They “may unwittingly sign over those rights as part of online Terms of Service” that exist on so many platforms and are rarely read. The music industry already has a version of this problem: a number of young artists sign up to distribute their music through an online service, agree to the Terms of Service without reading them, and later discover that they have unknowingly locked their music into some sort of agreement. In an AI world, this problem could be magnified.
Rothman’s comments put her at odds with the Recording Academy. “In this particular bill, there are certain safeguards, there’s language that says there have to be attorneys present and involved,” Mason said during questioning. (Though many young artists can’t afford counsel or can’t find good counsel.) “But we also believe that families should have the freedom to enter into different business arrangements.”
Mason’s view was shared by Rep. Matt Gaetz (R-FL). “If tomorrow I wanted to sell my voice to a robot and let that robot say whatever in the world that it wanted to say, and I wanted to take the money from that sale and go buy a sailboat and never turn on the internet again, why should I not have the right to do that?” he asked.
In addition to Rothman, Mason and Wilson, there was one other witness at the hearing: Christopher Mohr, who serves as president of the Software & Information Industry Association. He spoke little and mostly reiterated that his members wanted the courts to answer key questions around AI. “It’s really important that these cases get thoroughly litigated,” Mohr said.
This answer did not satisfy Rep. Glenn Ivey (D-MD), a former litigator. “It could take years before all of that gets solved and you might have conflicting decisions from different courts in jury trials,” Ivey noted. “What should we be doing to try and fix it now?”
Nearly 300 artists, songwriters, actors and other creators are voicing support for a new bipartisan Congressional bill that would regulate the use of artificial intelligence for cloning voices and likenesses via a new print ad running in USA Today on Friday (Feb. 2).
The bill — dubbed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (“No AI FRAUD” Act) and introduced in the U.S. House on Jan. 10 — would establish a federal framework for protecting voices and likenesses in the age of AI.
Placed by the Human Artistry Campaign, the ad features such bold-faced names as 21 Savage, Bette Midler, Cardi B & Offset, Chuck D, Common, Gloria Estefan, Jason Isbell, the estate of Johnny Cash, Kelsea Ballerini, Lainey Wilson, Lauren Daigle, Lamb of God, Mary J. Blige, Missy Elliott, Nicki Minaj, Questlove, Reba McEntire, Sheryl Crow, Smokey Robinson, the estate of Tom Petty, Trisha Yearwood and Vince Gill.
“The No AI FRAUD Act would defend your fundamental human right to your voice & likeness, protecting everyone from nonconsensual deepfakes,” the ad reads. “Protect your individuality. Support HR 6943.”
The Human Artistry Campaign is a coalition of music industry organizations that in March 2023 released a series of seven core principles regarding artificial intelligence. They include ensuring that AI developers acquire licenses for artistic works used in developing and training AI models, as well as that governments refrain from creating “new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”
In addition to musical artists, the USA Today ad also bears the names of actors such as Bradley Cooper, Clark Gregg, Debra Messing, F. Murray Abraham, Fran Drescher, Laura Dern, Kevin Bacon, Kyra Sedgwick, Kristen Bell, Kiefer Sutherland, Julianna Margulies and Rosario Dawson.
The No AI FRAUD Act was introduced by Rep. María Elvira Salazar (R-FL) alongside Reps. Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY) and Rob Wittman (R-VA). The bill is said to be based upon the Senate discussion draft Nurture Originals, Foster Art, and Keep Entertainment Safe Act (“NO FAKES” Act), which was unveiled in October.
“It’s time for bad actors using AI to face the music,” said Rep. Salazar in a statement at the time the legislation was announced. “This bill plugs a hole in the law and gives artists and U.S. citizens the power to protect their rights, their creative work, and their fundamental individuality online.”
Spurred in part by recent incidents including the viral “fake Drake” track “Heart On My Sleeve,” the No AI FRAUD Act would establish a federal standard barring the use of AI to copy the voices and likenesses of public figures without consent. As it stands, an artist’s voice, image or likeness is typically covered by “right of publicity” laws that protect them from commercial exploitation without authorization, but those laws vary state by state.
The bill was introduced on the same day a similar piece of legislation — the Ensuring Likeness Voice and Image Security (ELVIS) Act — was unveiled in Tennessee by Governor Bill Lee. That bill would update the state’s Protection of Personal Rights law “to include protections for songwriters, performers, and music industry professionals’ voice from the misuse of artificial intelligence (AI),” according to a press release.
Since its unveiling, the No AI FRAUD Act has received support from a range of music companies and organizations including the Recording Industry Association of America (RIAA), Universal Music Group, the National Music Publishers’ Association (NMPA), the Recording Academy, SoundExchange, the American Association of Independent Music (A2IM) and the Latin Recording Academy.
To judge from the results of a report commissioned by GEMA and SACEM, the specter of artificial intelligence (AI) is haunting Europe.
A full 35% of members of the respective German and French collective management societies surveyed said they had used some kind of AI technology in their work with music, according to a Goldmedia report shared in a Tuesday (Jan. 30) press conference — but 71% were afraid that the technology would make it hard for them to earn a living. That means that some creators who are using the technology fear it, too.
The report, which involved expert interviews as well as an online survey, valued the market for generative AI music applications at $300 million last year – 8% of the total market for generative AI. By 2028, though, that market could be worth $3.1 billion. That same year, 27% of creator revenues – or $950 million – would be at risk, in large part due to AI-created music replacing that made by humans.
Although many of us think of the music business as being one where fans make deliberate choices of what to listen to – either by streaming or purchasing music – collecting societies take in a fair amount of revenue from music used in films and TV shows, in advertising, and in restaurants and stores. So even if generative AI technology isn’t developed enough to write a pop song, it could still cost the music business money – and creators part or even all of their livelihood.
“So far,” as the report points out, “there is no remuneration system that closes the AI-generated financial gap for creators.” Although some superstars are looking to license the rights to their voices, there is a lack of legal clarity in many jurisdictions about the conditions under which a generative AI can use copyrighted material for training purposes. (In the United States, this is a question of fair use, a legal doctrine that doesn’t exist in the same form in France or Germany.) Assuming that music used to train an AI would need to be licensed, however, raises other questions, such as how many times a given work would be paid for and at what rate.
Unsurprisingly, the vast majority of songwriters want credit and transparency: 95% want AI companies to disclose which copyrighted works they used for training purposes, and 89% want companies to disclose which works are generated by AI. Additionally, 90% believe they should be asked for permission before their work is used for training purposes, and the same share want to benefit financially. A full 90% want policymakers to pay more attention to issues around AI and copyright.
The report further breaks down how the creators interviewed feel about using AI. In addition to the 35% who use the technology, 13% are potential users, 26% would rather not use it and 19% would refuse. Of those who use the technology already, 54% work on electronic music, 53% work on “urban/rap,” 52% on advertising music, 47% on “music library” and 46% on “audiovisual industry.”
These statistics underscore that AI isn’t a technology that’s coming to music – it’s one that’s here now. That means that policymakers looking to regulate this technology need to act soon.
The report also shows that smart regulation could resolve the debate over the benefits and drawbacks of AI. Creators are clearly using it productively, but more still fear it: 64% think the risks outweigh the opportunities, while just 11% think the opposite. This is a familiar pattern in the music business, where new technologies tend to be both dangerous and promising. Perhaps AI will end up being both.
When fake, sexually explicit images of Taylor Swift flooded social media last week, it shocked the world. But legal experts weren’t exactly surprised, saying it’s just a glaring example of a growing problem — and one that’ll keep getting worse without changes to the law and tech industry norms.
The images, some of which were reportedly viewed millions of times on X before they were pulled down, were so-called deepfakes — computer-generated depictions of real people doing fake things. Their spread on Thursday quickly prompted outrage from Swifties, who mass-flagged the images for removal and demanded to know how something like that was allowed to happen to the beloved pop star.
But for legal experts who have been tracking the growing phenomenon of non-consensual deepfake pornography, the episode was sadly nothing new.
“This is just the highest profile instance of something that has been victimizing many people, mostly women, for quite some time now,” said Woodrow Hartzog, a professor at Boston University School of Law who studies privacy and technology law.
Experts warned Billboard that the Swift incident could be the sign of things to come — not just for artists and other celebrities, but for normal individuals with fewer resources to fight back. The explosive growth of artificial intelligence tools over the past year has made deepfakes far easier to create, and some web platforms have become less aggressive in their approach to content moderation in recent years.
“What we’re seeing now is a particularly toxic cocktail,” Hartzog said. “It’s an existing problem, mixed with these new generative AI tools and a broader backslide in industry commitments to trust and safety.”
To some extent, images like the ones that cropped up last week are already illegal. Though no federal law squarely bans them, 10 states around the country have enacted statutes criminalizing non-consensual deepfake pornography. Victims like Swift can also theoretically turn to more traditional existing legal remedies to fight back, including copyright law, likeness rights, and torts like invasion of privacy and intentional infliction of emotional distress.
Such images also clearly violate the rules on all major platforms, including X. In a statement last week, the company said it was “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” as well as “closely monitoring the situation to ensure that any further violations are immediately addressed.” From Sunday to Tuesday, the site disabled searches for “Taylor Swift” out of “an abundance of caution as we prioritize safety on this issue.”
But for the victims of such images, legal remedies and platform policies often don’t mean much in practice. Even if an image is illegal, it is difficult and prohibitively expensive to try to sue the anonymous people who posted it; even if you flag an image for breaking the rules, it’s sometimes hard to convince a platform to pull it down; even if you get one pulled down, others crop up just as quickly.
“No matter her status, or the number of resources Swift devotes to the removal of these images, she won’t be completely successful in that effort,” said Rebecca A. Delfino, a professor and associate dean at Loyola Law School who has written extensively about harm caused by pornographic deepfakes.
That process is extremely difficult, and it’s almost always reactive — started after some level of damage is already done. Think about it this way: Even for a celebrity with every legal resource in the world, the images still flooded the web. “That Swift, currently one of the most powerful and known women in the world, could not avoid being victimized shows the exploitive power of pornographic deepfakes,” Delfino said.
There’s currently no federal statute that squarely targets the problem. A bill called the Preventing Deepfakes of Intimate Images Act, introduced last year, would allow deepfake victims to file civil lawsuits and would criminalize such images when they’re sexually explicit. Another, called the Deepfake Accountability Act, would require all deepfakes to be disclaimed as such and impose criminal penalties for those that aren’t. And earlier this month, lawmakers introduced the No AI FRAUD Act, which would create a federal right for individuals to sue if their voice or any other part of their likeness is used without permission.
Could last week’s incident spur lawmakers to take action? Don’t forget: Ticketmaster’s messy 2022 rollout of tickets for Taylor’s Eras tour sparked congressional hearings, investigations by state attorneys general, new legislation proposals and calls by some lawmakers to break up Live Nation under federal antitrust laws.
Experts like Delfino are hopeful that such influence on the national discourse — call it the Taylor effect, maybe — could spark a similar conversation over the problem of deepfake pornography. And they might have reason for optimism: Polling conducted by the AI think tank Artificial Intelligence Policy Institute over the weekend showed that more than 80% of voters supported legislation making non-consensual deepfake porn illegal, and that 84% of them said the Swift incident had increased their concerns about AI.
“Her status as a worldwide celebrity shed a huge spotlight on the need for both criminal and civil remedies to address this problem, which today has victimized hundreds of thousands of others, primarily women,” Delfino said.
But even after last week’s debacle, new laws targeting deepfakes are no guarantee. Some civil liberties activists and lawmakers worry that such legislation could violate the First Amendment by imposing overly broad restrictions on free speech, including criminalizing innocent images and empowering money-making troll lawsuits. Any new law would eventually need to pass muster at the U.S. Supreme Court, which has signaled in recent years that it is highly skeptical of efforts to restrict speech.
Short of new laws that make deepfake porn even more clearly illegal, concrete solutions will likely require stronger action by the social media platforms themselves, which have created vast, lucrative networks for the spread of such material and are in the best position to police it.
But Jacob Noti-Victor, a professor at Cardozo School of Law who researches how the law impacts innovation and the deployment of new technologies, says it’s not as simple as it might seem. After all, the images of Swift last week were already clearly in violation of X’s rules, yet they spread widely on the site.
“X and other platforms certainly need to do more to tackle this problem and that requires large, dedicated content moderation teams,” Noti-Victor said. “That said, it’s not an easy task. Content detection tools have not been very good at detecting deepfakes so far, which limits the tools that platforms can use proactively to detect this kind of material as it’s being posted.”
And even if it were easy for platforms to find and stop harmful deepfakes, tech companies have hardly been beefing up their content moderation efforts in recent years.
Since Elon Musk acquired X (then named Twitter) in 2022, the company has loosened restrictions on offensive content and fired thousands of employees, including many on the “trust and safety” teams that handle content moderation. Mark Zuckerberg’s Meta, which owns Facebook and Instagram, laid off more than 20,000 employees last year, reportedly also including hundreds of moderators. Google, Microsoft and Amazon have all reportedly made similar cuts.
Amid a broader wave of tech layoffs, why were those employees some of the first to go? Because at the end of the day, there’s no real legal requirement for platforms to police offensive content. Section 230 of the Communications Decency Act, a much-debated provision of federal law, largely shields internet platforms from legal liability for materials posted by their users. That means Taylor could try to sue the anonymous X users who posted her image, but she would have a much harder time suing X itself for failing to stop them.
In the absence of regulation and legal liability, the only real incentives for platforms to do a better job at combating deepfakes are “market incentives,” said Hartzog, the BU professor — meaning, fear of negative publicity that scares away advertisers or alienates users.
On that front, maybe the Taylor fiasco is already having an impact. On Friday, X announced that it would build a “Trust and Safety center of excellence” in Austin, Texas, including hiring 100 new employees to handle content moderation.
“These platforms have an incentive to attract as many people as possible and suck out as much data as possible, with no obligation to create meaningful tools to help victims,” Hartzog said. “Hopefully, this Taylor Swift incident advances the conversation in productive ways that results in meaningful changes to better protect victims of this kind of behavior.”
BMG has entered into a strategic partnership with the TUM School of Management as it looks to fast-track the implementation of artificial intelligence across the Berlin-based company’s marketing campaigns for artists. BMG said in its announcement on Tuesday (Jan. 30) that it sees generative AI as a way to help manage the complex array of […]
Elon Musk’s social media platform X has restored searches for Taylor Swift after temporarily blocking users from seeing some results as pornographic deepfake images of the singer circulated online. Searches for the singer’s name on the site Tuesday turned up a list of tweets as normal. A day earlier, the same search resulted in an […]
Social media users searching up Taylor Swift‘s name on X (formerly Twitter) were in for a surprise on Saturday (Jan. 27). After typing in the 34-year-old pop superstar’s name in the search box on X, users received the following error message: “Something went wrong. Try reloading.” Swift’s official X account was still accessible at press […]
Lyor Cohen’s first encounter with Google’s generative artificial intelligence left him gobsmacked. “Demis [Hassabis, CEO of Google Deepmind] and his team presented a research project around genAI and music and my head came off of my shoulders,” Cohen, global head of music for Google and YouTube, told Billboard in November. “I walked around London for two days excited about the possibilities, thinking about all the issues and recognizing that genAI in music is here — it’s not around the corner.”
While some of the major labels are touting YouTube as an important partner in the evolving world of music and AI, not everyone in the music industry has been as enthusiastic about these new efforts. That’s because Google trained its model on a large set of music — including copyrighted major-label recordings — and only then showed it to rights holders, rather than asking permission first, according to four sources with knowledge of the search giant’s push into generative AI and music. That could mean artists “opting out” of such AI training — a key condition for many rights holders — is not an option.
YouTube did make sure to sign one-off licenses with some parties before rolling out a beta version of its new genAI “experiment” in November. Dream Track, the only AI product it has released publicly so far, allows select YouTube creators to soundtrack clips on Shorts with pieces of music, based on text prompts, that can include replicas of famous artists’ voices. (A handful of major-label acts participated, including Demi Lovato and Charli XCX.) “Our superpower was our deep collaboration with the music industry,” Cohen said at the time. But negotiations that many in the business see as precedent-setting for broader, labelwide licensing deals have dragged on for months.
Negotiating with a company as massive as YouTube was made harder because it had already taken what it wanted, according to multiple sources familiar with the company’s label talks. Meanwhile, other AI companies continue to move ahead with their own music products, adding pressure on YouTube to keep progressing its technology.
In a statement, a YouTube representative said, “We remain committed to working collaboratively with our partners across the music industry to develop AI responsibly and in a way that rewards participants with long-term opportunities for monetization, controls and attribution for potential genAI tools and content down the road,” declining to get specific about licenses.
GenAI models require training before they can start generating properly. “AI training is a computational process of deconstructing existing works for the purpose of modeling mathematically how [they] work,” Google explained in comments to the U.S. Copyright Office in October. “By taking existing works apart, the algorithm develops a capacity to infer how new ones should be put together.”
Whether a company needs permission before undertaking this process on copyrighted works is already the subject of several lawsuits, including Getty Images v. Stability AI and the Authors Guild v. OpenAI. In October, Universal Music Group (UMG) was among the companies that sued AI startup Anthropic, alleging that “in the process of building and operating AI models, [the company] unlawfully copies and disseminates vast amounts of copyrighted works.”
As these cases proceed, they are expected to set precedent for AI training — but that could take years. In the meantime, many technology companies seem set on adhering to the Silicon Valley rallying cry of “move fast and break things.”
While rights holders decry what they call copyright infringement, tech companies argue their activities fall under “fair use” — the U.S. legal doctrine that allows for the unlicensed use of copyrighted works in certain situations. News reporting and criticism are the most common examples, but recording a TV show to watch later, parody and other uses are also covered.
“A diverse array of cases supports the proposition that copying of a copyrighted work as an intermediate step to create a noninfringing output can constitute fair use,” Anthropic wrote in its own comments to the U.S. Copyright Office. “Innovation in AI fundamentally depends on the ability of [large language models] to learn in the computational sense from the widest possible variety of publicly available material,” Google said in its comments.
“When you think of generative AI, you mostly think of the companies taking that very modern approach — Google, OpenAI — with state-of-the-art models that need a lot of data,” says Ed Newton-Rex, who resigned as Stability AI’s vp of audio in November because the company was training on copyrighted works. “In that community, where you need a huge amount of data, you don’t see many people talking about the concerns of rights holders.”
When Dennis Kooker, president of global digital business and U.S. sales for Sony Music Entertainment, spoke at a Senate forum on AI in November, he rejected the fair use argument. “If a generative AI model is trained on music for the purpose of creating new musical works that compete in the music market, then the training is not a fair use,” Kooker said. “Training in that case cannot be without consent, credit and compensation to the artists and rights holders.”
UMG and other music companies took a similar stance in their lawsuit against Anthropic, warning that AI firms should not be “excused from complying with copyright law” simply because they claim they’ll “facilitate immense value to society.”
“Undisputedly, Anthropic will be a more valuable company if it can avoid paying for the content on which it admittedly relies,” UMG wrote at the time. “But that should hardly compel the court to provide it a get-out-of-jail-free card for its wholesale theft of copyrighted content.”
In this climate, bringing the major labels on board as Google and YouTube did last year with Dream Track — after training the model, but before releasing it — may well be a step forward from the music industry’s perspective. At least it’s better than nothing: Google infamously started scanning massive numbers of books in 2004 without asking permission from copyright holders to create what is now known as Google Books. The Authors Guild sued, accusing Google of copyright infringement, but the suit was eventually dismissed in 2013 — nearly a decade later — when a federal judge ruled that the scanning was fair use.
While AI-related bills supported by the music business have already been proposed in Congress, for now the two sides are shouting past each other. Newton-Rex summarized the different mindsets succinctly: “What we in the AI world think of as ‘training data’ is what the rest of the world has thought of for a long time as creative output.”
Additional reporting by Bill Donahue.
The leader of the American Federation of Musicians proclaimed that Hollywood labor is “in a new era” as dozens of members of various entertainment unions came to the doorstep of studio labor negotiators on Monday to support the start of his union’s contract negotiations.
As an early drizzle that morning turned into driving rain, members of the Writers Guild of America, SAG-AFTRA, IATSE and Teamsters Local 399 rallied in front of the Sherman Oaks offices of the Alliance of Motion Picture and Television Producers with picket signs, and a few umbrellas, in hand. To AFM‘s chief negotiator and international president Tino Gagliardi, this kind of unity for musicians was unlike anything he’d seen in his time in union leadership. “We’re in a new era, especially in the American labor movement, with regard to everyone coalescing and coming together and collaborating in order to get what we all need in this industry,” Gagliardi told The Hollywood Reporter. “Together we are the product, we are the ones that bring the audiences in, that controls the emotion, if you will.”
The program — which featured music performed by AFM brass musicians and speeches from labor leaders including Teamsters Local 399 secretary-treasurer Lindsay Dougherty, Writers Guild of America West vice president Michele Mulroney and L.A. County Federation of Labor president Yvonne Wheeler — took place hours before the AFM was scheduled to begin negotiations over new Basic Theatrical Motion Picture and Basic Television Motion Picture contracts with the AMPTP in an office just steps away.
The message that speakers drove home was the importance of sticking together in the wake of the actors’ and writers’ strikes that shut down much of entertainment for half a year the previous summer and fall. The 2023 WGA and SAG-AFTRA strikes produced an unusual amount of teamwork between entertainment unions, which the AFM is clearly hoping to repeat in its own contract talks. “We learned a hard, long lesson last year that we had to be together since day one. That’s going to be the difference going into this fight for the musicians, is that we’re all together in this industry,” Dougherty said in her speech.
The WGA West’s Mulroney addressed the musicians present, saying that her members “never took your support for granted” during the writers’ work stoppage. She added, “The WGA has your back just as you had our backs this past summer.” Though he wasn’t present at Monday’s event, SAG-AFTRA national executive director Duncan Crabtree-Ireland sent a message, delivered by his chief communications officer, that “the heat of the hot labor summer is as strong as ever.”
The AMPTP said in a statement on Monday that it looks forward to “productive” negotiations with the AFM, “with the goal of concluding an agreement that will ensure an active year ahead for the industry and recognize the value that musicians add to motion pictures and television.”
Though the AFM contracts under discussion initially expired in November 2023, the writers’ and actors’ strikes that year prompted both sides to extend the pacts by six months. Top priorities for the musicians’ union in this round of talks include instituting AI protections, raising wages and securing greater streaming residuals.
For rank-and-file writers and actors who showed up at Monday’s rally, one recurring theme was repaying the AFM for its support during their work stoppages. SAG-AFTRA member Miki Yamashita (Cobra Kai), who is also a member of the American Guild of Musical Artists, explained that during the actors’ strike she organized an opera singer-themed picket at Paramount, which AFM members asked to take part in. “Because of them, we had orchestra players and a pianist to play for us during our picket, and I’ll never forget how much that meant to me, that show of solidarity,” she said. “I promised myself that if they ever needed my presence or my help, that I would rush to help them.”
Carlos Cisco and Eric Robbins, both writers on Star Trek: Discovery and WGA members, worked as lot coordinators at Disney during the writers’ strike. They recalled AFM members providing a morale boost during the work stoppage by occasionally playing music on the picket lines. “The struggles that labor faces in this [industry] are universal, whether it’s the hours, the residual payments as we’ve moved to streaming or the concern about AI coming into various spaces. We have far more in common than separates us,” said Robbins.
The AFM’s negotiations are set to continue through Jan. 31. Though the AMPTP offices don’t often see labor demonstrations, Gagliardi says that as a former president of New York-based AFM Local 802, he staged rallies in front of employer headquarters with some frequency. “I did this on a regular basis,” he said. “It was about bringing everyone together to fight for a common cause, and that’s what we’re doing today.”
This story was originally published by The Hollywood Reporter.
Tennessee Gov. Bill Lee has announced a new state bill to further protect the state’s “best-in-class artists and songwriters” from AI deepfakes.
While the state already has laws to protect Tennesseans against the exploitation of their name, image and likeness without their consent, this new law, called the Ensuring Likeness Voice and Image Security Act (ELVIS Act), is an update to the existing law to specifically address the challenges posed by new generative AI tools. The ELVIS Act also introduces protection for voices.
The announcement arrives just hours after a bipartisan group of U.S. House lawmakers revealed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act), which aims to establish a framework for protecting one’s voice and likeness on a federal level and lays out First Amendment protections. It is said to be a complement to the Senate’s Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), a draft bill that was introduced last October.
An artist’s voice, image or likeness may be covered by “right of publicity” laws that protect them from commercial exploitation without authorization, but this is a right that varies state by state. The ELVIS Act aims to provide Tennessee-based talent with much clearer protection for their voices in particular at the state level, and the No AI FRAUD Act hopes to establish a harmonized baseline of protection at the federal level. (If one lives in a state with an even stronger right of publicity law than the No AI FRAUD Act, that state protection remains viable and may be easier to assert in court.)
The subject of AI voice cloning has been a controversial topic in the music business in the past year. In some cases, it presents novel creative opportunities — including its use for pitch records, lyric translations, estate marketing and fan engagement — but it also poses serious threats. If an artist’s voice is cloned by AI without their permission or knowledge, it can confuse, offend, mislead or even scam fans.
“From Beale Street to Broadway, to Bristol and beyond, Tennessee is known for our rich artistic heritage that tells the story of our great state,” says Gov. Lee in a statement. “As the technology landscape evolves with artificial intelligence, we’re proud to lead the nation in proposing legal protection for our best-in-class artists and songwriters.”
“As AI technology continues to develop, today marks an important step towards groundbreaking state-level AI legislation,” added Harvey Mason Jr., CEO of the Recording Academy. “This bipartisan, bicameral bill will protect Tennessee’s creative community against AI deepfakes and voice cloning and will serve as the standard for other states to follow. The Academy appreciates Governor Lee and bipartisan members of the Tennessee legislature for leading the way — we’re eager to collaborate with lawmakers to move this bill forward.”
“The emergence of generative Artificial Intelligence (AI) has resulted in fake recordings that are not authorized by the artist, and that is wrong, period,” said a representative from the Nashville Songwriters Association International (NSAI). “NSAI applauds Tennessee Governor Bill Lee, Senate Leader Jack Johnson and House Leader William Lamberth for introducing legislation that adds the word ‘voice’ to the existing law — making it crystal clear that unauthorized AI-generated fake recordings are subject to legal action in the State of Tennessee. This is an important step in what will be an ongoing challenge to regulate generative AI music creations.”
“I commend Governor Lee of Tennessee for this forward-thinking legislation,” said A2IM president/CEO Dr. Richard James Burgess. “Protecting the rights to an individual’s name, voice, and likeness in the digital era is not just about respecting personal identity but also about safeguarding the integrity of artistic expression. This act is a significant step towards balancing innovation with the rightful interests of creators and performers. It acknowledges the evolving landscape of technology and media, setting a precedent for responsible and ethical use of personal attributes in the music industry.”
“The Artist Rights Alliance is grateful to Gov. Lee, State Senator Jack Johnson and Rep. William Lamberth for launching this effort to prevent an artist’s voice and likeness from being exploited without permission,” said Jen Jacobsen, executive director of the Artist Rights Alliance. “Recording artists and performers put their very selves into their art. Scraping or copying their work to replicate or clone a musician’s voice or image violates the most fundamental aspects of creative identity and artistic integrity. This important bill will help ensure that creators and their livelihoods are respected and protected in the age of AI.”
“AI deepfakes and voice cloning threaten the integrity of all music,” added David Israelite, president/CEO of the National Music Publishers’ Association. “It makes sense that Tennessee state would pioneer these important policies which will bolster and protect the entire industry. Music creators face enough forces working to devalue their work – technology that steals their voice and likeness should not be one of them.”
“Responsible innovation has expanded the talents of creators — artists, songwriters, producers, engineers, and visual performers, among others — for decades, but use of generative AI that exploits an individual’s most personal attributes without consent is detrimental to our humanity and culture,” said Mitch Glazier, chairman/CEO of the Recording Industry Association of America (RIAA). “We applaud Governor Bill Lee, State Senate Majority Leader Jack Johnson and House Majority Leader William Lamberth’s foresight in launching this groundbreaking effort to defend creators’ most essential rights from AI deepfakes, unauthorized digital replicas and clones. The ELVIS Act reaffirms the State of Tennessee’s commitment to creators and complements Senator Blackburn’s bipartisan work to advance strong legislation protecting creators’ voices and images at the federal level.”
“Evolving laws to keep pace with technology is essential to protecting the creative community,” said Michael Huppe, president/CEO of SoundExchange. “As we embrace the enormous potential of artificial intelligence, Tennessee is working to ensure that music and those who make it are protected under the law from exploitation without consent, credit, and compensation. We applaud the cradle of country music and the birthplace of rock ’n’ roll for leading the way.”
According to a press release from the state of Tennessee, the ELVIS Act is also supported by Academy of Country Music, American Association of Independent Music (A2IM), The Americana Music Association, American Society of Composers, Authors and Publishers (ASCAP), Broadcast Music, Inc. (BMI), Church Music Publishers Association (CMPA), Christian Music Trade Association, Folk Alliance International, Global Music Rights, Gospel Music Association, The Living Legends Foundation, Music Artists Coalition, Nashville Musicians Association, National Music Publishers’ Association, Rhythm & Blues Foundation, Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), Society of European Stage Authors and Composers (SESAC), Songwriters of North America (SONA) and Tennessee Entertainment Commission.