As artificial intelligence and its potential effects on creativity, copyright and a host of other sectors continue to dominate conversation, Universal Music Group and electronic instrument maker Roland Corporation have teamed up to create a set of guidelines that the companies published under the heading “Principles for Music Creation With AI.”
The seven principles, or “clarifying statements,” as the companies put it, are an acknowledgment that AI is certainly here to stay, but that it should be used in a responsible and transparent way that protects and respects human creators. The companies say that they hope additional organizations will sign on to support the framework. The seven principles, which can be found with slightly more detail at this site, are as follows:

— We believe music is central to humanity.

— We believe humanity and music are inseparable.

— We believe that technology has long supported human artistic expression, and applied sustainably, AI will amplify human creativity.

— We believe that human-created works must be respected and protected.

— We believe that transparency is essential to responsible and trustworthy AI.

— We believe the perspectives of music artists, songwriters, and other creators must be sought after and respected.

— We are proud to help bring music to life.

The creation of the principles is part of a partnership between UMG and Roland that will also involve research projects, including one designed to create “methods for confirming the origin and ownership of music,” according to a press release.

“As companies who share a mutual history of technology innovation, both Roland and UMG believe that AI can play an important role in the creative process of producing music,” Roland’s chief innovation officer Masahiro Minowa said in a statement. “We also have a deep belief that human creativity is irreplaceable, and it is our responsibility to protect artists’ rights. The Principles for Music Creation with AI establishes a framework for our ongoing collaboration to explore opportunities that converge at the intersection of technology and human creativity.”

Universal has been proactive around the issue of AI in music over the past several months, partnering with YouTube last summer on a series of AI principles and an AI Music Incubator to help artists use AI responsibly, forming a strategic partnership with BandLab to create a set of ethical practices around music creation, and partnering with Endel on functional music, among other initiatives. But UMG has also taken stands to protect against what it sees as harmful uses of AI, including suing AI platform Anthropic for allegedly using its copyrights to train its software in creating new works, and cited AI concerns as part of its rationale for allowing its licensing agreement with TikTok to expire earlier this year.

“At UMG, we have long recognized and embraced the potential of AI to enhance and amplify human creativity, advance musical innovation, and expand the realms of audio production and sound technology,” UMG’s executive vp and chief digital officer Michael Nash said in a statement. “This can only happen if it is applied ethically and responsibly across the entire industry. We are delighted to collaborate with Roland, to explore new opportunities in this area together, while helping to galvanize consensus among key stakeholders across music’s creative community to promote adoption of these core principles with the goal of ensuring human creativity continues to thrive alongside the evolution of new technology.”

LONDON — Sweeping new laws regulating the use of artificial intelligence (AI) in Europe, including controls around the use of copyrighted music, have been approved by the European Parliament, following fierce lobbying from both the tech and music communities.
Members of the European Parliament (MEPs) voted in favor of the EU’s Artificial Intelligence Act by a clear majority of 523 votes for, 46 against and 49 abstentions. The “world first” legislation, which was first proposed in April 2021 and covers a wide range of AI applications including biometric surveillance and predictive policing, was provisionally approved in December, but Wednesday’s vote formally establishes its passage into law.

The act places a number of legal and transparency obligations on tech companies and AI developers operating in Europe, including those working in the creative sector and music business. Among them is the core requirement that companies using generative AI or foundation AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 provide detailed summaries of any copyrighted works, including music, that they have used to train their systems.

Significantly, the law’s transparency provisions apply regardless of when or where in the world a tech company scraped its data. For instance, even if an AI developer scraped copyrighted music and/or trained its systems in a non-EU country, or bought data sets from outside the 27-member bloc, as soon as they are used or made available in Europe the company is required to make publicly available a “sufficiently detailed summary” of all copyright-protected music it has used to create AI works.

There is also a requirement that any training data sets used in generative AI music or audio-visual works be watermarked, so there is a traceable path for rights holders to track and block the illegal use of their catalog.

In addition, content created by AI, as opposed to human works, must be clearly labeled as such, while tech companies have to ensure that their systems cannot be used to generate illegal and infringing content.

Large tech companies that break the rules (which govern all applications of AI inside the 27-member bloc of EU countries, including so-called “high risk” uses) will face fines of up to €35 million or 7% of global annual turnover. Start-up businesses and smaller tech operations will receive proportionate financial punishments.

Speaking ahead of Wednesday’s vote, which took place in Strasbourg, co-rapporteur Brando Benifei said the legislation means that “unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.” 

Co-rapporteur Dragos Tudorache called the AI Act “a starting point for a new model of governance built around technology.” 

European legislators first proposed introducing regulation of artificial intelligence in 2021, although it was the subsequent launch of ChatGPT (followed last April by the high-profile release of “Heart on My Sleeve,” a track that featured AI-powered imitations of vocals by Drake and The Weeknd) that made many music executives sit up and pay closer attention to the technology’s potential impact on the record business.

In response, lobbyists stepped up their efforts to convince lawmakers to add transparency provisions around the use of music in AI – a move which was fiercely opposed by the technology industry, which argued that tougher regulations would put European AI developers at a competitive disadvantage.

Now that the AI Act has been approved by the European Parliament, the legislation will undergo a number of procedural rubber-stamping stages before it is published in the EU’s Official Journal — most likely in late April or early May — with its regulations coming into force 20 days after that. 

There are, however, tiered timelines for tech companies to comply with its terms, and some of its provisions are not fully applicable for up to two years after its enactment. (The rules governing existing generative AI models commence after 12 months, although any new generative AI companies or models entering the European market after the Act has come into force must comply with its regulations immediately.)

In response to Wednesday’s vote, a coalition of European creative and copyright organizations, including global recorded-music trade body IFPI and international music publishing trade group ICMP, issued a joint statement thanking regulators and MEPs for the “essential role they have played in supporting creators and rightsholders.”

“While these obligations provide a first step for rightsholders to enforce their rights, we call on the European Parliament to continue to support the development of responsible and sustainable AI by ensuring that these important rules are put into practice in a meaningful and effective way,” said the 18 signatories, which also included European independent labels trade association IMPALA, European Authors Society GESAC and CISAC, the international trade organization for copyright collecting societies.

As broadcasters begin assembling in Nashville this morning (Feb. 28) for the Country Radio Seminar, expect a lot of talk. About talk.
Radio personalities’ importance has been on the decline for decades. They used to pick the music on their shows. That privilege was taken away. Then many were encouraged to cut down their segues and get to the music. Then syndicated morning and overnight shows moved in to replace local talent.

But once the streaming era hit and started stealing some of radio’s time spent listening, terrestrial programmers began reevaluating their product to discover what differentiates it from streaming. Thus, this year’s CRS focus is talk.

“That’s what’s so important about this year,” says iHeartMedia talent Brooke Taylor, who voicetracks weekday shows in three markets and airs on 100 stations on weekends. “The radio on-air personality is sort of regaining their importance in the stratosphere of a particular station.”

Taylor will appear on a panel designed for show hosts — “Personal Branding: It’s Not Ego, It’s Branding!” — but it’s hardly the only element geared to the talent. Other entries include “On Air Personalities: The OG Influencers,” a research study about audience expectations of their DJs; a podcasting deep dive; and four different panels devoted to the threats and opportunities in artificial intelligence (AI).

As it turns out, artifice is not particularly popular, according to the research study “On Air Talent and Their Roles on All Platforms,” conducted by media analytics firm Smith Geiger. 

“Americans have very mixed feelings about AI,” says Smith Geiger executive vp of digital media strategies Andrew Finlayson. “This research proves that the audience is very interested in authentic content and authentic voices.”

That’s not to say AI will be rejected. Sounds Profitable partner Tom Webster expects that it will be effective at matching advertisers to podcasts that fit their audience and market priorities. He also sees it as a research tool that can assist content creation.

“If I’m a DJ and I’ve got a break coming up, and I’ve pre-sold or back-sold the same record 1,000 times, why not ask an assistant, ‘Give me something new about this record to say’?” Webster suggests. “That’s the easy kind of thing right there that can actually help the DJ do their job.”

CRS has been helping country radio do its job for more than 50 years, providing networking opportunities, exposure to new artists and a steady array of educational panels that grapple with legal issues, industry trends and listener research. In the early 1980s, the format’s leaders aspired to make country more like adult contemporary, offering a predictable experience that would be easy to consume for hours in an office setting. The music, and radio production techniques, became more aggressive in the ’90s, and as technology provided a bulging wave of competitors and new ways to move around the dial, stations have been particularly challenged to maintain listeners’ attention in the 21st century.

Meanwhile, major chains have significantly cut staffs. Many stations cover at least two daily shifts with syndicated shows, and the talent that’s left often works on multiple stations in several different markets, sometimes covering more than one format. Those same personalities are expected to maintain a busy social media presence and potentially establish a podcast, too.

That’s an opportunity, according to Webster. Podcast revenue has risen to an estimated $2.5 billion in advertising and sponsorship billing, he says, while radio income has dropped from around $14 billion to $9 billion. He envisions that the two platforms will be on equal financial footing in perhaps a decade, and he believes radio companies and personalities should get involved if they haven’t already.

“It’s difficult to do a really good podcast,” Webster observes. “We talk a lot about the number of podcasts — there are a lot, and most podcasts are not great. Most podcasts are listened to by friends and family. There’s no barrier to entry to a podcast, and then radio has this stable of people whose very job it is to develop a relationship with an audience. That is the thing that they’re skilled at.”

That ’80s idea of radio as predictable background music has been amended. It’s frequently still “a lean-back soundtrack to what it is that you’re doing,” Webster suggests, though listeners want to be engaged with it.

“One of the people in the survey, verbatim, said it’s ‘a surprise box,’ ” Finlayson notes. “I think people like that serendipity that an on-air personality who really knows and understands the music can bring to the equation. And country music knowledge is one of the things that the audience craves from an on-air talent.”

It’s a challenge. Between working multiple stations, creating social media content and podcasting, many personalities are so stretched that it has become difficult to maintain a personal life, which in turn reduces their sources for new material. Add in the threat of AI, and it’s an uneasy time.

“What I see is a great deal of anxiety and stress levels, and I don’t know how we fix it,” concedes Country Radio Broadcasters executive director R.J. Curtis. “There’s just so much work put on our shoulders, it’s hard to manage that and then have a life.”

Curtis made sure that CRS addresses that, too, with “Your Brain Is a Liar: Recognizing and Understanding the Impact of Your Mental Health,” a presentation delivered by 25-year radio and label executive Jason Prinzo.

That tension is one of the ways that on-air talent likely relates to its audience — there are plenty of stressed, overbooked citizens in every market. And as tech continues to consume their lives, it naturally feeds the need for authenticity, which is likely to be a buzzword as CRS emphasizes radio’s personalities.

“Imagine having a radiothon for St. Jude with an AI talent,” Taylor says. “You’ll get a bunch of facts, but you’ll never get a tear. You’ll never get a real story. You’ll never get that shaky voice talking about somebody in your family or somebody that you know has cancer. The big thing that just will never be replaced is that emotion.” 

Each year during Grammy week, members of the Association of Independent Music Publishers (AIMP) gather at Lawry’s steakhouse in Beverly Hills to hear a speech from David Israelite, president and CEO of the National Music Publishers’ Association (NMPA). This year, Israelite discussed the successes of the Music Modernization Act, the new UMG-TikTok licensing feud, the viability of artificial intelligence regulation and more.

He started the presentation with slides showcasing publishing revenue for 2022, divided by category: performance (48.29%, or $2.7 billion), mechanical (20.27%, or $1.1 billion), synch (26.07%, or $1.4 billion) and other (5.37%, or $300 million). Synch, he said, is the fastest-growing source of revenue.

Israelite focused much of his time on the Music Modernization Act, which was passed about five years ago. “What I don’t want you to forget is just how amazing the Music Modernization Act was and is for this industry,” he said. “I believe that it is the most important legislation in the history of the music business… You’re going to start to take for granted some of the things… but we had to fight and win to get this done.” He pointed to successes of the landmark law like the change in the rate standard to a willing-seller, willing-buyer model and its creation of the Mechanical Licensing Collective (The MLC).

Earlier this week, the MLC (and the digital licensee coordinator, DLC) began the process of its first-ever re-designation. This is a routine five-year reassessment of the organization and how well it is doing its job of administering the blanket mechanical license created by the MMA. As part of the re-designation process, songwriters, publishers and digital services are allowed to submit comments to the Copyright Office about the MLC’s performance. “Many of you will have a role in offering your opinions to the Copyright Office about that,” said Israelite. “The process needs to be respected and played out, but [The MLC] will be re-designated, and it is an absolute no-brainer decision. There’s a lot about the MLC that I want to remind you about.”

Israelite then highlighted the organization’s “transparency” and its lack of administration fees for publishers, and noted that with 2023 streaming revenue projected at $6.3 billion for recorded music and $1.7 billion for publishing, “the split is the closest it has ever been,” attributing this, in part, to the MLC’s work.

He also addressed Grammy week’s biggest story: the UMG-TikTok licensing standoff. “I’m only going to say two things about TikTok: the first is I think music is tremendously important to the business model of TikTok, and, secondly, I am just stating the fact that the NMPA model license, which many of you are using, with TikTok expires in April.” At that time, the NMPA can either re-up its model license with TikTok or walk away. If it follows UMG’s lead, indie publishers could either negotiate with TikTok directly for their own licenses or walk away from the platform.

Later, addressing artificial intelligence concerns, he pledged his support for the creation of a federal right of publicity, though he admitted: “I want to be honest with you, it does not have a good chance.” Even though the music business is vying for its adoption, Israelite says the film and TV industry does not want it. “Within the copyright community we don’t agree… and guess who is bigger than music? Film and TV.”

Still, he believes there is merit in fighting for the proposed bill. “It might help with state legislative efforts and it raises the profile,” he said, adding that his priority for AI regulation is to require transparency from AI companies and records of how AI models are trained.

Country star Lainey Wilson and Recording Academy president/CEO Harvey Mason voiced their support for federal regulation of AI technology at a hearing conducted by the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet in Los Angeles on Friday (Feb. 2). 
“Our voices and likenesses are indelible parts of us that have enabled us to showcase our talents and grow our audiences, not mere digital kibble for a machine to duplicate without consent,” Wilson said during her comments. 

“The artists and creators I talk to are concerned that there’s very little protection for artists who see their own name or likeness or voice used to create AI-generated materials,” Mason added. “This misuse hurts artists and their fans alike.” 

“The problem of AI fakes is clear to everyone,” he continued later. “This is a problem that only Congress can address to protect all Americans. For this reason, the academy is grateful for the introduction of the No AI FRAUD Act,” a bill announced in January that aims to establish a federal framework for protecting voice and likeness. 

The star of the hearing was not from the music industry, though. Jennifer Rothman, a professor of law at the University of Pennsylvania Law School, offered an eloquent challenge to a key provision of the No AI FRAUD Act, which would allow artists to transfer the rights to their voice and likeness to a third party.

It’s easy to imagine this provision being popular with labels, which historically built their large catalogs by taking control of artists’ recordings in perpetuity. However, Rothman argued that “any federal right to a person’s voice or likeness must not be transferable away from that person” and that “there must be significant limits on licensing” as well.

“Allowing another person or entity to own a living human being’s likeness or voice in perpetuity violates our fundamental and constitutional right to liberty,” she said.

Rothman cleverly invoked the music industry’s long history of perpetuity deals — a history that has upset many artists, including stars like Taylor Swift, over the years — as part of the reason for her objection. 

“Imagine a world in which Taylor Swift‘s first record label obtained rights in perpetuity to young Swift’s voice and likeness,” Rothman explained. “The label could then replicate Swift’s voice over and over in new songs that she never wrote and have AI renditions of her perform and endorse the songs and videos and even have holograms perform them on tour. In fact, under the proposed No AI Fraud Act, the label would be able to sue Swift herself for violating her own right of publicity if she used her voice and likeness to write and record new songs and publicly perform them. This is the topsy-turvy world that the draft bills would create.”

(Rothman’s reference to Swift was just one of several at the hearing. Rep. Kevin Kiley (R-CA) alluded to the debate over whether or not the singer would be able to make it to the Super Bowl from her performance in Tokyo, while Rep. Nathaniel Moran (R-TX) joked, “I have not mentioned Travis Kelce’s girlfriend once during this testimony.”)

Rothman pointed out that the ability to transfer voice or likeness rights in perpetuity potentially “threatens ordinary people” as well: They “may unwittingly sign over those rights as part of online Terms of Service” that exist on so many platforms and are rarely ever read. The music industry already has a similar problem: a number of young artists sign up to distribute their music through an online service, agree to the Terms of Service without reading them, and later discover that they have unknowingly locked their music into some sort of agreement. In an AI world, this problem could be magnified.

Rothman’s comments put her at odds with the Recording Academy. “In this particular bill, there are certain safeguards, there’s language that says there have to be attorneys present and involved,” Mason said during questioning. (Though many young artists can’t afford counsel or can’t find good counsel.) “But we also believe that families should have the freedom to enter into different business arrangements.” 

Mason’s view was shared by Rep. Matt Gaetz (R-FL). “If tomorrow I wanted to sell my voice to a robot and let that robot say whatever in the world that it wanted to say, and I wanted to take the money from that sale and go buy a sailboat and never turn on the internet again, why should I not have the right to do that?” he asked.

In addition to Rothman, Mason and Wilson, there was one other witness at the hearing: Christopher Mohr, who serves as president of the Software & Information Industry Association. He spoke little and mostly reiterated that his members wanted the courts to answer key questions around AI. “It’s really important that these cases get thoroughly litigated,” Mohr said.

This answer did not satisfy Rep. Glenn Ivey (D-MD), a former litigator. “It could take years before all of that gets solved, and you might have conflicting decisions from different courts in jury trials,” Ivey noted. “What should we be doing to try and fix it now?”

Nearly 300 artists, songwriters, actors and other creators are voicing support for a new bipartisan Congressional bill that would regulate the use of artificial intelligence for cloning voices and likenesses via a new print ad running in USA Today on Friday (Feb. 2).

The bill — dubbed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (“No AI FRAUD” Act) and introduced in the U.S. House on Jan. 10 — would establish a federal framework for protecting voices and likenesses in the age of AI.

Placed by the Human Artistry Campaign, the ad features such bold-faced names as 21 Savage, Bette Midler, Cardi B & Offset, Chuck D, Common, Gloria Estefan, Jason Isbell, the estate of Johnny Cash, Kelsea Ballerini, Lainey Wilson, Lauren Daigle, Lamb of God, Mary J. Blige, Missy Elliott, Nicki Minaj, Questlove, Reba McEntire, Sheryl Crow, Smokey Robinson, the estate of Tom Petty, Trisha Yearwood and Vince Gill.

“The No AI FRAUD Act would defend your fundamental human right to your voice & likeness, protecting everyone from nonconsensual deepfakes,” the ad reads. “Protect your individuality. Support HR 6943.”

The Human Artistry Campaign is a coalition of music industry organizations that in March 2023 released a series of seven core principles regarding artificial intelligence. They include ensuring that AI developers acquire licenses for artistic works used in developing and training AI models, as well as that governments refrain from creating “new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”

In addition to musical artists, the USA Today ad also bears the names of actors such as Bradley Cooper, Clark Gregg, Debra Messing, F. Murray Abraham, Fran Drescher, Laura Dern, Kevin Bacon, Kyra Sedgwick, Kristen Bell, Kiefer Sutherland, Julianna Margulies and Rosario Dawson.

The No AI FRAUD Act was introduced by Rep. María Elvira Salazar (R-FL) alongside Reps. Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY) and Rob Wittman (R-VA). The bill is said to be based upon the Senate discussion draft Nurture Originals, Foster Art, and Keep Entertainment Safe Act (“NO FAKES” Act), which was unveiled in October.

“It’s time for bad actors using AI to face the music,” said Rep. Salazar in a statement at the time the legislation was announced. “This bill plugs a hole in the law and gives artists and U.S. citizens the power to protect their rights, their creative work, and their fundamental individuality online.”

Spurred in part by recent incidents including the viral “fake Drake” track “Heart On My Sleeve,” the No AI FRAUD Act would establish a federal standard barring the use of AI to copy the voices and likenesses of public figures without consent. As it stands, an artist’s voice, image or likeness is typically covered by “right of publicity” laws that protect them from commercial exploitation without authorization, but those laws vary state by state.

The bill was introduced on the same day a similar piece of legislation — the Ensuring Likeness Voice and Image Security (ELVIS) Act — was unveiled in Tennessee by Governor Bill Lee. That bill would update the state’s Protection of Personal Rights law “to include protections for songwriters, performers, and music industry professionals’ voice from the misuse of artificial intelligence (AI),” according to a press release.

Since its unveiling, the No AI FRAUD Act has received support from a range of music companies and organizations including the Recording Industry Association of America (RIAA), Universal Music Group, the National Music Publishers’ Association (NMPA), the Recording Academy, SoundExchange, the American Association of Independent Music (A2IM) and the Latin Recording Academy.

To judge from the results of a report commissioned by GEMA and SACEM, the specter of artificial intelligence (AI) is haunting Europe.
A full 35% of members of the respective German and French collective management societies surveyed said they had used some kind of AI technology in their work with music, according to a Goldmedia report shared in a Tuesday (Jan. 30) press conference — but 71% were afraid that the technology would make it hard for them to earn a living. That means that some creators who are using the technology fear it, too.

The report, which involved expert interviews as well as an online survey, valued the market for generative AI music applications at $300 million last year – 8% of the total market for generative AI. By 2028, though, that market could be worth $3.1 billion. That same year, 27% of creator revenues – or $950 million – would be at risk, in large part due to AI-created music replacing that made by humans.

Although many of us think of the music business as being one where fans make deliberate choices of what to listen to – either by streaming or purchasing music – collecting societies take in a fair amount of revenue from music used in films and TV shows, in advertising, and in restaurants and stores. So even if generative AI technology isn’t developed enough to write a pop song, it could still cost the music business money – and creators part or even all of their livelihood.

“So far,” as the report points out, “there is no remuneration system that closes the AI-generated financial gap for creators.” Although some superstars are looking to license the rights to their voices, in many jurisdictions there is a lack of legal clarity about the conditions under which a generative AI can use copyrighted material for training purposes. (In the United States, this is a question of fair use, a legal doctrine that doesn’t exist in the same form in France or Germany.) Assuming that music used to train an AI would need to be licensed, however, raises other questions, such as how, and how much, that use would pay.

Unsurprisingly, the vast majority of songwriters want credit and transparency: 95% want AI companies to disclose which copyrighted works they used for training purposes, and 89% want companies to disclose which works are generated by AI. Additionally, 90% believe they should be asked for permission before their work is used for training purposes, and the same share want to benefit financially. A full 90% want policymakers to pay more attention to issues around AI and copyright.

The report further breaks down how the creators interviewed feel about using AI. In addition to the 35% who use the technology, 13% are potential users, 26% would rather not use it and 19% would refuse. Of those who use the technology already, 54% work on electronic music, 53% work on “urban/rap,” 52% on advertising music, 47% on “music library” and 46% on “audiovisual industry.”

These statistics underscore that AI isn’t a technology that’s coming to music – it’s one that’s here now. That means that policymakers looking to regulate this technology need to act soon.

The report also shows that smart regulation could resolve the debate over the benefits and drawbacks of AI. Creators are clearly using it productively, but more still fear it: 64% think the risks outweigh the opportunities, while just 11% think the opposite. This is a familiar pattern in the music business, for which new technologies are both dangerous and promising. Perhaps AI could end up being both.

When fake, sexually explicit images of Taylor Swift flooded social media last week, it shocked the world. But legal experts weren’t exactly surprised, saying it’s just a glaring example of a growing problem, and one that’ll keep getting worse without changes to the law and tech industry norms.

The images, some of which were reportedly viewed millions of times on X before they were pulled down, were so-called deepfakes — computer-generated depictions of real people doing fake things. Their spread on Thursday quickly prompted outrage from Swifties, who mass-flagged the images for removal and demanded to know how something like that was allowed to happen to the beloved pop star.

But for legal experts who have been tracking the growing phenomenon of non-consensual deepfake pornography, the episode was sadly nothing new.

“This is just the highest profile instance of something that has been victimizing many people, mostly women, for quite some time now,” said Woodrow Hartzog, a professor at Boston University School of Law who studies privacy and technology law.

Experts warned Billboard that the Swift incident could be a sign of things to come, not just for artists and other celebrities but for ordinary people with fewer resources to fight back. The explosive growth of artificial intelligence tools over the past year has made deepfakes far easier to create, and some web platforms have become less aggressive in their approach to content moderation in recent years.

“What we’re seeing now is a particularly toxic cocktail,” Hartzog said. “It’s an existing problem, mixed with these new generative AI tools and a broader backslide in industry commitments to trust and safety.”

To some extent, images like the ones that cropped up last week are already illegal. Though no federal law squarely bans them, 10 states around the country have enacted statutes criminalizing non-consensual deepfake pornography. Victims like Swift can also theoretically turn to more traditional existing legal remedies to fight back, including copyright law, likeness rights, and torts like invasion of privacy and intentional infliction of emotional distress.

Such images also clearly violate the rules on all major platforms, including X. In a statement last week, the company said it was “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” as well as “closely monitoring the situation to ensure that any further violations are immediately addressed.” From Sunday to Tuesday, the site disabled searches for “Taylor Swift” out of “an abundance of caution as we prioritize safety on this issue.”

But for the victims of such images, legal remedies and platform policies often don’t mean much in practice. Even if an image is illegal, it is difficult and prohibitively expensive to try to sue the anonymous people who posted it; even if you flag an image for breaking the rules, it’s sometimes hard to convince a platform to pull it down; even if you get one pulled down, others crop up just as quickly.

“No matter her status, or the number of resources Swift devotes to the removal of these images, she won’t be completely successful in that effort,” said Rebecca A. Delfino, a professor and associate dean at Loyola Law School who has written extensively about harm caused by pornographic deepfakes.

That process is extremely difficult, and it’s almost always reactive — started after some level of damage is already done. Think about it this way: Even for a celebrity with every legal resource in the world, the images still flooded the web. “That Swift, currently one of the most powerful and known women in the world, could not avoid being victimized shows the exploitive power of pornographic deepfakes,” Delfino said.

There’s currently no federal statute that squarely targets the problem. A bill called the Preventing Deepfakes of Intimate Images Act, introduced last year, would allow deepfake victims to file civil lawsuits and would criminalize such images when they’re sexually explicit. Another, called the Deepfake Accountability Act, would require all deepfakes to be disclaimed as such and impose criminal penalties for those that aren’t. And earlier this month, lawmakers introduced the No AI FRAUD Act, which would create a federal right for individuals to sue if their voice or any other part of their likeness is used without permission.

Could last week’s incident spur lawmakers to take action? Don’t forget: Ticketmaster’s messy 2022 rollout of tickets for Taylor’s Eras tour sparked congressional hearings, investigations by state attorneys general, new legislation proposals and calls by some lawmakers to break up Live Nation under federal antitrust laws.

Experts like Delfino are hopeful that such influence on the national discourse (call it the Taylor effect, maybe) could spark a similar conversation over the problem of deepfake pornography. And they might have reason for optimism: Polling conducted by the AI think tank Artificial Intelligence Policy Institute over the weekend showed that more than 80% of voters supported legislation making non-consensual deepfake porn illegal, and that 84% of them said the Swift incident had increased their concerns about AI.

“Her status as a worldwide celebrity shed a huge spotlight on the need for both criminal and civil remedies to address this problem, which today has victimized hundreds of thousands of others, primarily women,” Delfino said.

But even after last week’s debacle, new laws targeting deepfakes are no guarantee. Some civil liberties activists and lawmakers worry that such legislation could violate the First Amendment by imposing overly broad restrictions on free speech, including criminalizing innocent images and empowering money-making troll lawsuits. Any new law would eventually need to pass muster at the U.S. Supreme Court, which has signaled in recent years that it is highly skeptical of efforts to restrict speech.

In the absence of new laws making deepfake porn even more clearly illegal, concrete solutions will likely require stronger action by the social media platforms themselves, which have created vast, lucrative networks for the spread of such materials and are in the best position to police them.

But Jacob Noti-Victor, a professor at Cardozo School of Law who researches how the law impacts innovation and the deployment of new technologies, says it’s not as simple as it might seem. After all, the images of Swift last week were already clearly in violation of X’s rules, yet they spread widely on the site.

“X and other platforms certainly need to do more to tackle this problem and that requires large, dedicated content moderation teams,” Noti-Victor said. “That said, it’s not an easy task. Content detection tools have not been very good at detecting deepfakes so far, which limits the tools that platforms can use proactively to detect this kind of material as it’s being posted.”

And even if it were easy for platforms to find and stop harmful deepfakes, tech companies have hardly been beefing up their content moderation efforts in recent years.

Since Elon Musk acquired X (then named Twitter) in 2022, the company has loosened restrictions on offensive content and fired thousands of employees, including many on the “trust and safety” teams that handle content moderation. Mark Zuckerberg’s Meta, which owns Facebook and Instagram, laid off more than 20,000 employees last year, reportedly also including hundreds of moderators. Google, Microsoft and Amazon have all reportedly made similar cuts.

Amid a broader wave of tech layoffs, why were those employees some of the first to go? Because at the end of the day, there’s no real legal requirement for platforms to police offensive content. Section 230 of the Communications Decency Act, a much-debated provision of federal law, largely shields internet platforms from legal liability for materials posted by their users. That means Taylor could try to sue the anonymous X users who posted her image, but she would have a much harder time suing X itself for failing to stop them.

In the absence of regulation and legal liability, the only real incentives for platforms to do a better job at combating deepfakes are “market incentives,” said Hartzog, the BU professor — meaning, fear of negative publicity that scares away advertisers or alienates users.

On that front, maybe the Taylor fiasco is already having an impact. On Friday, X announced that it would build a “Trust and Safety center of excellence” in Austin, Texas, including hiring 100 new employees to handle content moderation.

“These platforms have an incentive to attract as many people as possible and suck out as much data as possible, with no obligation to create meaningful tools to help victims,” Hartzog said. “Hopefully, this Taylor Swift incident advances the conversation in productive ways that results in meaningful changes to better protect victims of this kind of behavior.”

BMG has entered into a strategic partnership with the TUM School of Management as it looks to fast-track the implementation of artificial intelligence across the Berlin-based company’s marketing campaigns for artists. BMG said in its announcement on Tuesday (Jan. 30) that it sees generative AI as a way to help manage the complex array of […]

Elon Musk’s social media platform X has restored searches for Taylor Swift after temporarily blocking users from seeing some results as pornographic deepfake images of the singer circulated online. Searches for the singer’s name on the site Tuesday turned up a list of tweets as normal. A day earlier, the same search resulted in an […]