The U.S. Senate Judiciary Committee convened on Tuesday (April 30) to discuss a proposed bill that would effectively create a federal publicity right for artists. The hearing featured testimony from Warner Music Group CEO Robert Kyncl, artist FKA Twigs, Digital Media Association (DiMA) CEO Graham Davies, SAG-AFTRA national executive director/chief negotiator Duncan Crabtree-Ireland, Motion Picture Association senior vp/associate general counsel Ben Sheffner and University of San Diego law professor Lisa P. Ramsey.
The draft bill — called the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act) — would create a federal right for artists, actors and others to sue those who create “digital replicas” of their image, voice or visual likeness without permission. Those individuals have previously been protected only through a patchwork of state “right of publicity” laws. First introduced in October, the NO FAKES Act is supported by a bipartisan group of U.S. senators including Sen. Chris Coons (D-Del.), Sen. Marsha Blackburn (R-Tenn.), Sen. Amy Klobuchar (D-Minn.) and Sen. Thom Tillis (R-N.C.).

Warner Music Group (WMG) supports the NO FAKES Act along with many other music businesses, the RIAA and the Human Artistry Campaign. During Kyncl’s testimony, the executive noted that “we are in a unique moment of time where we can still act and we can get it right before it gets out of hand,” pointing to how the government was not able to properly handle data privacy in the past. He added that it’s imperative to get out ahead of artificial intelligence (AI) to protect artists’ and entertainment companies’ livelihoods.

“When you have these deepfakes out there [on streaming platforms],” said Kyncl, “the artists are actually competing with themselves for revenue on streaming platforms because there’s a fixed amount of revenue within each of the streaming platforms. If somebody is uploading fake songs of FKA Twigs, for example, and those songs are eating into that revenue pool, then there is less left for her authentic songs. That’s the economic impact of it long term, and the volume of content that will then flow into the digital service providers will increase exponentially, [making it] harder for artists to be heard, and to actually reach lots of fans. Creativity over time will be stifled.”

Kyncl, who recently celebrated his first anniversary at the helm of WMG, previously held the role of chief business officer at YouTube. When questioned about whether platforms like YouTube, Spotify and others represented by DiMA should be held responsible for unauthorized AI fakes on their platforms, Kyncl had a measured take: “There has to be an opportunity for [the services] to cooperate and work together with all of us to [develop a protocol for removal],” he said.

During his testimony, Davies spoke from the perspective of the digital service providers (DSPs) DiMA represents. “There’s been no challenge [from platforms] in taking down the [deepfake] content expeditiously,” he said. “We don’t see our members needing any additional burdens or incentives here. But…if there is to be secondary liability, we would very much seek that to be a safe harbor for effective takedowns.”

Davies added, however, that the Digital Millennium Copyright Act (DMCA), which provides a notice and takedown procedure for copyright infringement, is not a perfect model to follow for right of publicity offenses. “We don’t see [that] as being a good process as [it was] designed for copyright…our members absolutely can work with the committee in terms of what we would think would be an effective [procedure],” said Davies. He added, “It’s really essential that we get specific information on how to identify the offending content so that it can be removed efficiently.”

There is currently no perfect solution for tracking AI deepfakes on the internet, making a takedown procedure tricky to implement. Kyncl said he hopes for a system that builds on the success of YouTube’s Content ID, which tracks sound recordings. “I’m hopeful we can take [a Content ID-like system] further and apply that to AI voice and degrees of similarity by using watermarks to label content and carry the provenance,” he said.

The NO FAKES draft bill as currently written would create a nationwide property right in one’s image, voice, or visual likeness, allowing an individual to sue anyone who produced a “newly-created, computer-generated, electronic representation” of it. It also includes publicity rights that would not expire at death and could be controlled by a person’s heirs for 70 years after their passing. Most state right of publicity laws were written long before the invention of AI and often limit or exclude the protection of an individual’s name, image and voice after death.

The proposed 70 years of post-mortem protection was one of the major points of disagreement between participants at the hearing. Kyncl agreed with the points made by Crabtree-Ireland of SAG-AFTRA — the actors’ union that recently came to a tentative agreement with major labels, including WMG, for “ethical” AI use — whose view was that the right should not be limited to 70 years post-mortem and should instead be “perpetual,” in his words.

“Every single one of us is unique, there is no one else like us, and there never will be,” said Crabtree-Ireland. “This is not the same thing as copyright. It’s not the same thing as ‘We’re going to use this to create more creativity on top of that later [after the copyright enters public domain].’ This is about a person’s legacy. This is about a person’s right to give this to their family.”

Kyncl added simply, “I agree with Mr. Crabtree-Ireland 100%.”

However, Sheffner shared a different perspective on post-mortem protection for publicity rights, saying that while “for living professional performers use of a digital replica without their consent impacts their ability to make a living…that job preservation justification goes away post-mortem. I have yet to hear of any compelling government interest in protecting digital replicas once somebody is deceased. I think there’s going to be serious First Amendment problems with it.”

Elsewhere during the hearing, Crabtree-Ireland expressed a need to limit how long a young artist can license out their publicity rights during their lifetime to ensure they are not exploited by entertainment companies. “If you had, say, a 21-year-old artist who’s granting a transfer of rights in their image, likeness or voice, there should not be a possibility of this for 50 years or 60 years during their life and not have any ability to renegotiate that transfer. I think there should be a shorter, perhaps seven-year, limitation on this.”

When fake, sexually explicit images of Taylor Swift flooded social media last week, it shocked the world. But legal experts weren’t exactly surprised, saying it’s just a glaring example of a growing problem — and one that’ll keep getting worse without changes to the law and tech industry norms.

The images, some of which were reportedly viewed millions of times on X before they were pulled down, were so-called deepfakes — computer-generated depictions of real people doing fake things. Their spread on Thursday quickly prompted outrage from Swifties, who mass-flagged the images for removal and demanded to know how something like that was allowed to happen to the beloved pop star.

But for legal experts who have been tracking the growing phenomenon of non-consensual deepfake pornography, the episode was sadly nothing new.

“This is just the highest profile instance of something that has been victimizing many people, mostly women, for quite some time now,” said Woodrow Hartzog, a professor at Boston University School of Law who studies privacy and technology law.

Experts warned Billboard that the Swift incident could be the sign of things to come — not just for artists and other celebrities, but for normal individuals with fewer resources to fight back. The explosive growth of artificial intelligence tools over the past year has made deepfakes far easier to create, and some web platforms have become less aggressive in their approach to content moderation in recent years.

“What we’re seeing now is a particularly toxic cocktail,” Hartzog said. “It’s an existing problem, mixed with these new generative AI tools and a broader backslide in industry commitments to trust and safety.”

To some extent, images like the ones that cropped up last week are already illegal. Though no federal law squarely bans them, 10 states around the country have enacted statutes criminalizing non-consensual deepfake pornography. Victims like Swift can also theoretically turn to more traditional existing legal remedies to fight back, including copyright law, likeness rights, and torts like invasion of privacy and intentional infliction of emotional distress.

Such images also clearly violate the rules on all major platforms, including X. In a statement last week, the company said it was “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” as well as “closely monitoring the situation to ensure that any further violations are immediately addressed.” From Sunday to Tuesday, the site disabled searches for “Taylor Swift” out of “an abundance of caution as we prioritize safety on this issue.”

But for the victims of such images, legal remedies and platform policies often don’t mean much in practice. Even if an image is illegal, it is difficult and prohibitively expensive to try to sue the anonymous people who posted it; even if you flag an image for breaking the rules, it’s sometimes hard to convince a platform to pull it down; even if you get one pulled down, others crop up just as quickly.

“No matter her status, or the number of resources Swift devotes to the removal of these images, she won’t be completely successful in that effort,” said Rebecca A. Delfino, a professor and associate dean at Loyola Law School who has written extensively about harm caused by pornographic deepfakes.

That process is extremely difficult, and it’s almost always reactive — started after some level of damage is already done. Think about it this way: Even for a celebrity with every legal resource in the world, the images still flooded the web. “That Swift, currently one of the most powerful and known women in the world, could not avoid being victimized shows the exploitive power of pornographic deepfakes,” Delfino said.

There’s currently no federal statute that squarely targets the problem. A bill called the Preventing Deepfakes of Intimate Images Act, introduced last year, would allow deepfake victims to file civil lawsuits and would criminalize such images when they’re sexually explicit. Another, called the Deepfake Accountability Act, would require all deepfakes to be disclaimed as such and impose criminal penalties for those that aren’t. And earlier this month, lawmakers introduced the No AI FRAUD Act, which would create a federal right for individuals to sue if their voice or any other part of their likeness is used without permission.

Could last week’s incident spur lawmakers to take action? Don’t forget: Ticketmaster’s messy 2022 rollout of tickets for Taylor’s Eras Tour sparked congressional hearings, investigations by state attorneys general, new legislation proposals and calls by some lawmakers to break up Live Nation under federal antitrust laws.

Experts like Delfino are hopeful that such influence on the national discourse — call it the Taylor effect, maybe — could spark a similar conversation over the problem of deepfake pornography. And they might have reason for optimism: Polling conducted by the AI think tank Artificial Intelligence Policy Institute over the weekend showed that more than 80% of voters supported legislation making non-consensual deepfake porn illegal, and that 84% of them said the Swift incident had increased their concerns about AI.

“Her status as a worldwide celebrity shed a huge spotlight on the need for both criminal and civil remedies to address this problem, which today has victimized hundreds of thousands of others, primarily women,” Delfino said.

But even after last week’s debacle, new laws targeting deepfakes are no guarantee. Some civil liberties activists and lawmakers worry that such legislation could violate the First Amendment by imposing overly broad restrictions on free speech, including criminalizing innocent images and empowering money-making troll lawsuits. Any new law would eventually need to pass muster at the U.S. Supreme Court, which has signaled in recent years that it is highly skeptical of efforts to restrict speech.

Even in the absence of new laws that make deepfake porn more explicitly illegal, concrete solutions will likely require stronger action by social media platforms themselves, which have created vast, lucrative networks for the spread of such materials and are in the best position to police them.

But Jacob Noti-Victor, a professor at Cardozo School of Law who researches how the law impacts innovation and the deployment of new technologies, says it’s not as simple as it might seem. After all, the images of Swift last week were already clearly in violation of X’s rules, yet they spread widely on the site.

“X and other platforms certainly need to do more to tackle this problem and that requires large, dedicated content moderation teams,” Noti-Victor said. “That said, it’s not an easy task. Content detection tools have not been very good at detecting deepfakes so far, which limits the tools that platforms can use proactively to detect this kind of material as it’s being posted.”

And even if it were easy for platforms to find and stop harmful deepfakes, tech companies have hardly been beefing up their content moderation efforts in recent years.

Since Elon Musk acquired X (then named Twitter) in 2022, the company has loosened restrictions on offensive content and fired thousands of employees, including many on the “trust and safety” teams that handle content moderation. Mark Zuckerberg’s Meta, which owns Facebook and Instagram, laid off more than 20,000 employees last year, reportedly also including hundreds of moderators. Google, Microsoft and Amazon have all reportedly made similar cuts.

Amid a broader wave of tech layoffs, why were those employees some of the first to go? Because at the end of the day, there’s no real legal requirement for platforms to police offensive content. Section 230 of the Communications Decency Act, a much-debated provision of federal law, largely shields internet platforms from legal liability for materials posted by their users. That means Taylor could try to sue the anonymous X users who posted her image, but she would have a much harder time suing X itself for failing to stop them.

In the absence of regulation and legal liability, the only real incentives for platforms to do a better job at combating deepfakes are “market incentives,” said Hartzog, the BU professor — meaning, fear of negative publicity that scares away advertisers or alienates users.

On that front, maybe the Taylor fiasco is already having an impact. On Friday, X announced that it would build a “Trust and Safety center of excellence” in Austin, Texas, including hiring 100 new employees to handle content moderation.

“These platforms have an incentive to attract as many people as possible and suck out as much data as possible, with no obligation to create meaningful tools to help victims,” Hartzog said. “Hopefully, this Taylor Swift incident advances the conversation in productive ways that results in meaningful changes to better protect victims of this kind of behavior.”

TikTok went on a counteroffensive Tuesday amid increasing Western pressure over cybersecurity and misinformation concerns, rolling out updated rules and standards for content as its CEO warned against a possible U.S. ban on the Chinese-owned video sharing app.

CEO Shou Zi Chew is scheduled to appear Thursday before U.S. congressional lawmakers, who will grill him about the company’s privacy and data-security practices and relationship with the Chinese government.

Chew said in a TikTok video that the hearing “comes at a pivotal moment” for the company, after lawmakers introduced measures that would expand the Biden administration’s authority to enact a U.S. ban on the app, which the CEO said more than 150 million Americans use.

“Some politicians have started talking about banning TikTok. Now this could take TikTok away from all 150 million of you,” said Chew, who was dressed casually in jeans and a blue hoodie, with the dome of the U.S. Capitol in Washington in the background.

“I’ll be testifying before Congress this week to share all that we are doing to protect Americans using the app,” he said.

TikTok has come under fire in the U.S., Europe and the Asia-Pacific region, where a growing number of governments have banned the app from devices used for official business over worries it poses risks to cybersecurity and data privacy or could be used to push pro-Beijing narratives and misinformation.

So far, there is no evidence to suggest this has happened or that TikTok has turned over user data to the Chinese government, as some of its critics have argued it would do.

Norway and the Netherlands on Tuesday warned that apps like TikTok should not be installed on phones issued to government employees, with both countries citing assessments from their security or intelligence agencies.

There’s a “high risk” if TikTok or Telegram are installed on devices that have access to “internal digital infrastructure or services,” Norway’s justice ministry said, without providing further details.

TikTok also rolled out updated rules and standards for content and users in a reorganized set of community guidelines that include eight principles to guide content moderation decisions.

“These principles are based on our commitment to uphold human rights and aligned with international legal frameworks,” said Julie de Bailliencourt, TikTok’s global head of product policy.

She said TikTok strives to be fair, protect human dignity and balance freedom of expression with preventing harm.

The guidelines, which take effect April 21, were repackaged from TikTok’s existing rules with extra details and explanations.

Among the more significant changes are additional details about its restrictions on deepfakes, also known as synthetic media created by artificial intelligence technology. TikTok more clearly spells out its policy, saying all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they’re fake or altered in some way.

TikTok had previously banned deepfakes that mislead viewers about real-world events and cause harm. Its updated guidelines say deepfakes of private figures and young people are also not allowed.

Deepfakes of public figures are OK in certain contexts, such as for artistic or educational content, but not for political or commercial endorsements.