
Tennessee governor Bill Lee signed the ELVIS Act into law Thursday (Mar. 21), legislation designed to further protect the state’s artists from artificial intelligence deep fakes. The bill, more formally named the Ensuring Likeness Voice and Image Security Act of 2024, replaces the state’s old right of publicity law, which only included explicit protections for one’s “name, photograph, or likeness,” expanding protections to include voice- and AI-specific concerns for the first time.
Gov. Lee signed the bill into law from a local honky tonk, surrounded by superstar supporters like Luke Bryan and Chris Janson. Lee joked that it was “the coolest bill signing ever.”

The ELVIS Act was introduced by Gov. Lee in January along with State Senate Majority Leader Jack Johnson (R-27) and House Majority Leader William Lambert (R-44), and it has since garnered strong support from the state’s artistic class. Talents like Lindsay Ell, Michael W. Smith, Natalie Grant, Matt Maher and Evanescence’s David Hodges have been vocal in their support for the bill.

It also gained support from the recorded music industry and the Human Artistry Campaign, a global initiative of entertainment organizations that pushes for a responsible approach to AI. The initiative has buy-in from more than 180 organizations worldwide, including the RIAA, NMPA, BMI, ASCAP, Recording Academy and American Association of Independent Music (A2IM).

Right of publicity protections vary state-to-state in the United States, leading to a patchwork of laws that makes enforcing one’s ownership over one’s name, likeness and voice more complicated. Variation is even greater among postmortem right of publicity laws. As AI impersonation concerns have grown more prevalent over the last year, the music business has pushed harder for a federal right of publicity.

The ELVIS Act replaces the Personal Rights Protection Act of 1984, which was passed, in part, to extend Elvis Presley’s publicity rights after he passed away. (At the time, Tennessee did not recognize a postmortem right of publicity.) Along with explicitly including a person’s voice as a protected right for the first time, the ELVIS Act also broadens which uses of one’s name, image, photograph and voice are barred.

Previously, the Personal Rights Protection Act only banned uses of a person’s name, photograph and likeness “for purpose of advertising,” which would not include the unauthorized use of AI voices for performance purposes. The ELVIS Act does not limit liability based on context, so it would likely bar any unauthorized use, including in a documentary, song or book, among other mediums.

The federal government is also working on solutions to address publicity rights concerns. Within hours of Gov. Lee’s introduction of the ELVIS Act in Tennessee back in January, a bipartisan group of U.S. House lawmakers revealed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act), which aims to establish a framework for protecting one’s voice and likeness on a federal level and lays out First Amendment protections. It is said to complement the Senate’s Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), a draft bill that was introduced last October.

While most of the music business is aligned on creating a federal right of publicity, David Israelite, president/CEO of the National Music Publishers’ Association (NMPA), warned in a speech delivered at an Association of Independent Music Publishers (AIMP) meeting in February that “while we are 100% supportive of the record labels’ priority to get a federal right of publicity…it does not have a good chance. Within the copyright community, we don’t even agree on [it]. Guess who doesn’t want a federal right of publicity? Film and TV. Guess who’s bigger than the music industry? Film and TV.”

The subject of AI voice cloning has been a controversial topic in the music business since Ghostwriter released the so-called “Fake-Drake” song “Heart On My Sleeve,” which used AI voice-cloning technology without the artists’ permission. In some cases, this form of AI can present novel creative opportunities — including its use for pitch records, lyric translations, estate marketing and fan engagement — but it also poses serious threats. If an artist’s voice is cloned by AI without their permission or knowledge, it can confuse, offend, mislead or even scam fans.

There is no shortage of AI voice synthesis companies on the market today, but Voice-Swap, founded and led by Dan “DJ Fresh” Stein, is trying to reimagine what these companies can be. 
The music producer and technologist intends Voice-Swap to act as not just a simple conversion tool but an “agency” for artists’ AI likenesses. He’s also looking to solve the ongoing question of how to monetize these voice models in a way that gets the most money back to the artists — a hotly contested topic since anonymous TikTok user Ghostwriter employed AI renderings of Drake and The Weeknd’s voices without their permission on the viral song “Heart On My Sleeve.”

In an exclusive interview with Billboard, Stein and Michael Pelczynski, a member of the company’s advisory board and former vp at SoundCloud, explain their business goals as well as their new monetization plan, which includes providing a dividend for participating artists and payment to artists every time a user employs their AI voice — not just when the resulting song is released commercially and streamed on DSPs. The company also reveals that it’s working on a new partnership with Imogen Heap to create her voice model, which will arrive this summer.

Voice-Swap sees the voice as the “new real estate of IP,” as Pelczynski puts it — just another form of ownership that can allow a participating artist to make passive income. (The voice, along with one’s name and likeness, is considered a “right of publicity” which is currently regulated differently state-to-state.)

In addition to seeing AI voice technology as a useful tool to engage fans of notable artists like Heap and make translations of songs, the Voice-Swap team also believes AI voices represent a major opportunity for session vocalists with distinct timbres but lower public profiles to earn additional income. On its platform now, the company has a number of session vocalists of varying vocal styles available for use; Voice-Swap sees session vocalists’ AI voice models as potentially valuable to songwriters and producers who may want to shape-shift those voices during writing and recording sessions. (As Billboard reported in August, using AI voice models to better tailor pitch records to artists has become a common use-case for the emerging technology.)

“We like to think that, much like a record label, we have a brand that we want to build with the style of artists and the quality we represent at Voice-Swap,” says Stein. “It doesn’t have to be a specific genre, but it’s about hosting unique and incredible voices as opposed to [just popular artists].”

Last year, we saw a lot of fear and excitement surrounding this technology as Ghostwriter appeared on social media and Grimes introduced her own voice model soon after. How does your approach compare to these examples?

Pelczynski: This technology did stoke a lot of fear at first. This is because people see it as a magic trick. When you don’t know what’s behind it and you just see the end result and wonder how it just did that, there is wonder and fear that comes. [There is now the risk] that if you don’t work with someone you trust on your vocal rights, someone is going to pick up that magic trick and do it without you. That’s what happened with Ghostwriter and many others.

The one real main thing to emphasize is the magic trick of swapping a voice isn’t where the story ends, it’s where it begins. And I think Grimes in particular is approaching it with an intent to empower artists. We are, too. But I think where we differentiate is the revenue stream part. With the Grimes model, you create what you want to create and then the song goes into the traditional ecosystem of streaming and other ways of consuming music. That’s where the royalties are made off of that.

We are focused on the inference. Our voice artists get paid on the actual conversion of the voice. Not all of these uses of AI voices end up on streaming, so this is important to us. Of course, if the song is released, additional money for the voice can be made then, too. As far as we know, we are the first platform to pay royalties on the inference, the first conversion.

Stein: We also allow artists the right to release their results through any distributor they want. [Grimes’ model is partnered exclusively with TuneCore.] We see ourselves a bit like an agency for artists’ voices.

What do you mean by an “agency” for artists’ voices?

Stein: When we work with an artist at Voice-Swap we intend to represent them and license their voice models created with us to other platforms to increase their opportunities to earn income. It’s like working with an agent to manage your live bookings. We want to be the agent for the artists’ AI presence and help them monetize it on multiple platforms but always with their personal preferences and concerns in mind.

What kinds of platforms would be interested in licensing an AI voice model from Voice-Swap?

Stein: It is early days for all of the possible use cases, but we think the most obvious example at the moment is music production platforms [or DAWs, short for digital audio workstation] that want to use voice models in their products.

There are two approaches you can take [as an AI voice company.] We could say we are a SaaS platform, and the artist can do deals with other platforms themselves. But the way we approach this is we put a lot of focus into the quality of our models and working with artists directly to keep improving it. We want to be the one-stop solution for creating a model the artist is proud of.

I think the whole thing with AI and where this technology is going is that none of us know what it’s going to be doing 10 years from now. So for us, this was also about getting into a place where we can build that credibility in those relationships and not just with the artists. We want to work with labels, too. 

Do you have any partnerships with DAWs or other music-making platforms in place already?

Pelczynski: We are in discussions and under NDA pending an announcement. Every creator’s workflow is different — we want our users to have access to our roster of voices wherever they feel most comfortable, be that via the website, in a DAW or elsewhere. That’s why we’re exploring these partnerships, and why we’ve designed our upcoming VST [virtual studio technology] to make that experience even more seamless. We also recently announced a partnership with SoundCloud, with deeper integrations aimed at creators forthcoming.

Ultimately, the more places our voices are available, the more opportunities there are for new revenue for the artists, and that’s our priority.

Can some music editing take place on the Voice-Swap website, or do these converted voices need to be exported?

Pelczynski: Yes, Dan has always wanted to architect a VST so that it can act like a plug-in in someone’s DAW, but we also have the capability of letting users edit and do the voice conversion and some music editing on our website using our product Stem-Swap. That’s an amazing playground for people that are just coming up. It is similar to how BandLab and others are a good quick way to experiment with music creation.

How many users does Voice-Swap have?

Pelczynski: We have 140,000 verified unique users, and counting.

Can you break down the specifics of how much your site costs for users?

Pelczynski: We run a subscription and top-up pricing system. Users pay a monthly or one-off fee and receive audio credits. Credits are then used for voice conversion and stem separation, with more creator tools on the way.

How did your team get connected with Imogen Heap, and given all the competitors in the AI voice space today, why do you think she picked Voice-Swap? 

Pelczynski: We’re very excited to be working with her. She’s one of many established artists currently in our pipeline, and I think our partnership comes down to our ethos of trust and consent. I know it sounds trite, but I think it’s absolutely one of the cornerstones of our success.

As artificial intelligence and its potential effects on creativity, copyright and a host of other sectors continue to dominate conversation, Universal Music Group and electronic instruments maker Roland Corporation have teamed up to create a set of guidelines that the companies published under the heading “Principles for Music Creation With AI.”
The seven principles, or “clarifying statements,” as the companies put it, are an acknowledgment that AI is certainly here to stay, but that it should be used in a responsible and transparent way that protects and respects human creators. The companies say that they hope additional organizations will sign on to support the framework. The seven principles, which can be found with slightly more detail at this site, are as follows:

— We believe music is central to humanity.

— We believe humanity and music are inseparable.

— We believe that technology has long supported human artistic expression, and applied sustainably, AI will amplify human creativity.

— We believe that human-created works must be respected and protected.

— We believe that transparency is essential to responsible and trustworthy AI.

— We believe the perspectives of music artists, songwriters, and other creators must be sought after and respected.

— We are proud to help bring music to life.

The creation of the principles is part of a partnership between UMG and Roland that will also involve research projects, including one designed to create “methods for confirming the origin and ownership of music,” according to a press release.

“As companies who share a mutual history of technology innovation, both Roland and UMG believe that AI can play an important role in the creative process of producing music,” Roland’s chief innovation officer Masahiro Minowa said in a statement. “We also have a deep belief that human creativity is irreplaceable, and it is our responsibility to protect artists’ rights. The Principles for Music Creation with AI establishes a framework for our ongoing collaboration to explore opportunities that converge at the intersection of technology and human creativity.”

Universal has been proactive around the issue of AI in music over the past several months, partnering with YouTube last summer on a series of AI principles and an AI Music Incubator to help artists use AI responsibly, forming a strategic partnership with BandLab to create a set of ethical practices around music creation, and partnering with Endel on functional music, among other initiatives. But UMG has also taken stands to protect against what it sees as harmful uses of AI, including suing AI platform Anthropic for allegedly using its copyrights to train its software in creating new works, and cited AI concerns as part of its rationale for allowing its licensing agreement with TikTok to expire earlier this year.

“At UMG, we have long recognized and embraced the potential of AI to enhance and amplify human creativity, advance musical innovation, and expand the realms of audio production and sound technology,” UMG’s executive vp and chief digital officer Michael Nash said in a statement. “This can only happen if it is applied ethically and responsibly across the entire industry. We are delighted to collaborate with Roland, to explore new opportunities in this area together, while helping to galvanize consensus among key stakeholders across music’s creative community to promote adoption of these core principles with the goal of ensuring human creativity continues to thrive alongside the evolution of new technology.”

LONDON — Sweeping new laws regulating the use of artificial intelligence (AI) in Europe, including controls around the use of copyrighted music, have been approved by the European Parliament, following fierce lobbying from both the tech and music communities.
Members of the European Parliament (MEPs) voted in favor of the EU’s Artificial Intelligence Act by a clear majority of 523 votes for, 46 against and 49 abstentions. The “world first” legislation, which was first proposed in April 2021 and covers a wide range of AI applications including biometric surveillance and predictive policing, was provisionally approved in December, but Wednesday’s vote formally establishes its passage into law.

The act places a number of legal and transparency obligations on tech companies and AI developers operating in Europe, including those working in the creative sector and music business. Among them is the core requirement that companies using generative AI or foundation AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 provide detailed summaries of any copyrighted works, including music, that they have used to train their systems.

Significantly, the law’s transparency provisions apply regardless of when or where in the world a tech company scraped its data. For instance, even if an AI developer scraped copyrighted music and/or trained its systems in a non-EU country — or bought data sets from outside the 27-member bloc — as soon as they are used or made available in Europe, the company is required to make publicly available a “sufficiently detailed summary” of all copyright-protected music it has used to create AI works.

There is also a requirement that any training data sets used in generative AI music or audio-visual works be watermarked, so rights holders have a traceable path to track and block the illegal use of their catalogs.

In addition, content created by AI, as opposed to human works, must be clearly labeled as such, while tech companies have to ensure that their systems cannot be used to generate illegal and infringing content.

Large tech companies that break the rules (which govern all applications of AI inside the 27-member bloc of EU countries, including so-called “high risk” uses) will face fines of up to €35 million or 7% of global annual turnover. Start-ups and smaller tech operations will receive proportionate financial penalties.

Speaking ahead of Wednesday’s vote, which took place in Strasbourg, co-rapporteur Brando Benifei said the legislation means that “unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.” 

Co-rapporteur Dragos Tudorache called the AI Act “a starting point for a new model of governance built around technology.” 

European legislators first proposed introducing regulation of artificial intelligence in 2021, although it was the subsequent launch of ChatGPT — followed by the high-profile release of “Heart on My Sleeve,” a track that featured AI-powered imitations of vocals by Drake and The Weeknd, last April — that made many music executives sit up and pay closer attention to the technology’s potential impact on the record business. 

In response, lobbyists stepped up their efforts to convince lawmakers to add transparency provisions around the use of music in AI – a move which was fiercely opposed by the technology industry, which argued that tougher regulations would put European AI developers at a competitive disadvantage.

Now that the AI Act has been approved by the European Parliament, the legislation will undergo a number of procedural rubber-stamping stages before it is published in the EU’s Official Journal — most likely in late April or early May — with its regulations coming into force 20 days after that. 

There are, however, tiered compliance timelines for tech companies, and some of the act’s provisions are not fully applicable until up to two years after its enactment. (The rules governing existing generative AI models commence after 12 months, although any new generative AI companies or models entering the European market after the act comes into force must comply with its regulations immediately.)

In response to Wednesday’s vote, a coalition of European creative and copyright organizations, including global recorded-music trade body IFPI and international music publishing trade group ICMP, issued a joint statement thanking regulators and MEPs for the “essential role they have played in supporting creators and rightsholders.”

“While these obligations provide a first step for rightsholders to enforce their rights, we call on the European Parliament to continue to support the development of responsible and sustainable AI by ensuring that these important rules are put into practice in a meaningful and effective way,” said the 18 signatories, which also included European independent labels trade association IMPALA, European Authors Society GESAC and CISAC, the international trade organization for copyright collecting societies.

As broadcasters begin assembling in Nashville this morning (Feb. 28) for the Country Radio Seminar, expect a lot of talk. About talk.
Radio personalities’ importance has been on the decline for decades. They used to pick the music on their shows. That privilege was taken away. Then many were encouraged to cut down their segues and get to the music. Then syndicated morning and overnight shows moved in to replace local talent.

But once the streaming era hit and started stealing some of radio’s time spent listening, terrestrial programmers began reevaluating their product to discover what differentiates it from streaming. Thus, this year’s CRS focus is talk.

“That’s what’s so important about this year,” says iHeartMedia talent Brooke Taylor, who voicetracks weekday shows in three markets and airs on 100 stations on weekends. “The radio on-air personality is sort of regaining their importance in the stratosphere of a particular station.”

Taylor will appear on a panel designed for show hosts — “Personal Branding: It’s Not Ego, It’s Branding!” — but it’s hardly the only element geared to the talent. Other entries include “On Air Personalities: The OG Influencers,” a research study about audience expectations of their DJs; a podcasting deep dive; and four different panels devoted to the threats and opportunities in artificial intelligence (AI).

As it turns out, artifice is not particularly popular, according to the research study “On Air Talent and Their Roles on All Platforms,” conducted by media analytics firm Smith Geiger. 

“Americans have very mixed feelings about AI,” says Smith Geiger executive vp of digital media strategies Andrew Finlayson. “This research proves that the audience is very interested in authentic content and authentic voices.”

Not to say that AI will be rejected. Sounds Profitable partner Tom Webster expects that it will be effective at matching advertisers to podcasts that fit their audience and market priorities. And he also sees it as a research tool that can assist content creation.

“If I’m a DJ and I’ve got a break coming up, and I’ve pre-sold or back-sold the same record 1,000 times, why not ask an assistant, ‘Give me something new about this record to say’?” Webster suggests. “That’s the easy kind of thing right there that can actually help the DJ do their job.”

CRS has been helping country radio do its job for more than 50 years, providing networking opportunities, exposure to new artists and a steady array of educational panels that grapple with legal issues, industry trends and listener research. In the early 1980s, the format’s leaders aspired to make country more like adult contemporary, offering a predictable experience that would be easy to consume for hours in an office setting. The music, and radio production techniques, became more aggressive in the ’90s, and as technology provided a bulging wave of competitors and new ways to move around the dial, stations have been particularly challenged to maintain listeners’ attention during the 21st century.

Meanwhile, major chains have significantly cut staffs. Many stations cover at least two daily shifts with syndicated shows, and the talent that’s left often works on multiple stations in several different markets, sometimes covering more than one format. Those same personalities are expected to maintain a busy social media presence and potentially establish a podcast, too.

That’s an opportunity, according to Webster. Podcast revenue has risen to an estimated $2.5 billion in advertising and sponsorship billing, he says, while radio income has dropped from around $14 billion to $9 billion. He envisions that the two platforms will be on equal financial footing in perhaps a decade, and he believes radio companies and personalities should get involved if they haven’t already.

“It’s difficult to do a really good podcast,” Webster observes. “We talk a lot about the number of podcasts — there are a lot, and most podcasts are not great. Most podcasts are listened to by friends and family. There’s no barrier to entry to a podcast, and then radio has this stable of people whose very job it is to develop a relationship with an audience. That is the thing that they’re skilled at.”

That ’80s idea of radio as predictable background music has been amended. It’s frequently still “a lean-back soundtrack to what it is that you’re doing,” Webster suggests, though listeners want to be engaged with it.

“One of the people in the survey, verbatim, said it’s ‘a surprise box,’ ” Finlayson notes. “I think people like that serendipity that an on-air personality who really knows and understands the music can bring to the equation. And country music knowledge is one of the things that the audience craves from an on-air talent.”

It’s a challenge. Between working multiple stations, creating social media content and podcasting, many personalities are so stretched that it has become difficult to maintain a personal life, which in turn reduces their sources for new material. Add in the threat of AI, and it’s an uneasy time.

“What I see is a great deal of anxiety and stress levels, and I don’t know how we fix it,” concedes Country Radio Broadcasters executive director R.J. Curtis. “There’s just so much work put on our shoulders, it’s hard to manage that and then have a life.”

Curtis made sure that CRS addresses that, too, with “Your Brain Is a Liar: Recognizing and Understanding the Impact of Your Mental Health,” a presentation delivered by 25-year radio and label executive Jason Prinzo.

That tension is one of the ways that on-air talent likely relates to its audience — there are plenty of stressed, overbooked citizens in every market. And as tech continues to consume their lives, it naturally feeds the need for authenticity, which is likely to be a buzzword as CRS emphasizes radio’s personalities.

“Imagine having a radiothon for St. Jude with an AI talent,” Taylor says. “You’ll get a bunch of facts, but you’ll never get a tear. You’ll never get a real story. You’ll never get that shaky voice talking about somebody in your family or somebody that you know has cancer. The big thing that just will never be replaced is that emotion.” 

Each year during Grammy week, members of the Association of Independent Music Publishers (AIMP) gather at Lawry’s steakhouse in Beverly Hills to hear a speech from David Israelite, president and CEO of the National Music Publishers’ Association (NMPA). This year, Israelite discussed the successes of the Music Modernization Act, the new UMG-TikTok licensing feud, the viability of artificial intelligence regulation and more.

He started the presentation with slides showcasing publishing revenue for 2022, divided by category: performance (48.29%, or $2.7 billion), mechanical (20.27%, or $1.1 billion), synch (26.07%, or $1.4 billion) and other (5.37%, or $300 million). Synch, he says, is the fastest-growing source of revenue.

Israelite focused much of his time on addressing the Music Modernization Act, which was passed about five years ago. “What I don’t want you to forget is just how amazing the Music Modernization Act was and is for this industry,” he said. “I believe that it is the most important legislation in the history of the music business… You’re going to start to take for granted some of the things… but we had to fight and win to get this done.” He pointed to successes of the landmark law like the change in the rate standard to a willing seller, willing buyer model and its creation of the Mechanical Licensing Collective (The MLC).

Earlier this week, the MLC (and the digital licensee coordinator, DLC) began the process of its first-ever re-designation. This is a routine five-year reassessment of the organization and how well it is doing its job of administering the blanket mechanical license created by the MMA. As part of the re-designation process, songwriters, publishers and digital services are allowed to submit comments to the Copyright Office about the MLC’s performance. “Many of you will have a role in offering your opinions to the copyright office about that,” says Israelite. “The process needs to be respected and played out, but [The MLC] will be re-designated, and it is an absolute no brainer decision. There’s a lot about the MLC that I want to remind you about.”

Israelite then highlighted the organization’s “transparency” and the lack of administration fees for publishers, and noted that between projected 2023 streaming revenue for recorded music ($6.3 billion) and publishing ($1.7 billion), “the split is the closest it has ever been,” attributing this, in part, to the MLC’s work.

He also addressed Grammy week’s biggest story: the UMG-TikTok licensing standoff. “I’m only going to say two things about TikTok: the first is I think music is tremendously important to the business model of TikTok, and, secondly, I am just stating the fact that the NMPA model license, which many of you are using, with TikTok expires in April.” At that time, the NMPA can either re-up its model license with TikTok or walk away. If it were to follow UMG’s lead, indie publishers could either negotiate with TikTok directly for their own licenses or walk away from the platform.

Later, in addressing artificial intelligence concerns, he pledged his support for the creation of a federal right of publicity, though he admitted, “I want to be honest with you, it does not have a good chance.” Even though the music business is vying for its adoption, Israelite says the film and TV industry does not want it. “Within the copyright community we don’t agree… and guess who is bigger than music? Film and TV.”

Still, he believes there is merit in fighting for the proposed bill. “It might help with state legislative efforts and it raises the profile,” he said, though he stated that his priority for AI regulation is requiring transparency from AI companies and record-keeping on how AI models are trained.

Country star Lainey Wilson and Recording Academy president/CEO Harvey Mason voiced their support for federal regulation of AI technology at a hearing conducted by the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet in Los Angeles on Friday (Feb. 2). 
“Our voices and likenesses are indelible parts of us that have enabled us to showcase our talents and grow our audiences, not mere digital kibble for a machine to duplicate without consent,” Wilson said during her comments. 

“The artists and creators I talk to are concerned that there’s very little protection for artists who see their own name or likeness or voice used to create AI-generated materials,” Mason added. “This misuse hurts artists and their fans alike.” 

“The problem of AI fakes is clear to everyone,” he continued later. “This is a problem that only Congress can address to protect all Americans. For this reason, the academy is grateful for the introduction of the No AI FRAUD Act,” a bill announced in January that aims to establish a federal framework for protecting voice and likeness. 

The star of the hearing was not from the music industry, though. Jennifer Rothman, a professor of law at the University of Pennsylvania Law School, offered an eloquent challenge to a key provision of the No AI FRAUD Act, which would allow artists to transfer the rights to their voice and likeness to a third party.

It’s easy to imagine this provision is popular with labels, which historically built their large catalogs by taking control of artists’ recordings in perpetuity. However, Rothman argued that “any federal right to a person’s voice or likeness must not be transferable away from that person” and “there must be significant limits on licensing” as well.

“Allowing another person or entity to own a living human being’s likeness or voice in perpetuity violates our fundamental and constitutional right to liberty,” she said.

Rothman cleverly invoked the music industry’s long history of perpetuity deals — a history that has upset many artists, including stars like Taylor Swift, over the years — as part of the reason for her objection. 

“Imagine a world in which Taylor Swift‘s first record label obtained rights in perpetuity to young Swift’s voice and likeness,” Rothman explained. “The label could then replicate Swift’s voice over and over in new songs that she never wrote and have AI renditions of her perform and endorse the songs and videos and even have holograms perform them on tour. In fact, under the proposed No AI Fraud Act, the label would be able to sue Swift herself for violating her own right of publicity if she used her voice and likeness to write and record new songs and publicly perform them. This is the topsy-turvy world that the draft bills would create.”

(Rothman’s reference to Swift was just one of several at the hearing. Rep. Kevin Kiley [R-CA] alluded to the debate over whether or not the singer would be able to make it to the Super Bowl from her performance in Tokyo, while Rep. Nathaniel Moran [R-TX] joked, “I have not mentioned Travis Kelce’s girlfriend once during this testimony.”)

Rothman pointed out that the ability to transfer voice or likeness rights in perpetuity potentially “threatens ordinary people” as well: They “may unwittingly sign over those rights as part of online Terms of Service” that exist on so many platforms and are barely ever read. The music industry offers a preview of this problem: a number of young artists sign up to distribute their music through an online service, agree to Terms of Service without reading them, and later discover that they have unknowingly locked their music into some sort of agreement. In an AI world, this problem could be magnified.

Rothman’s comments put her at odds with the Recording Academy. “In this particular bill, there are certain safeguards, there’s language that says there have to be attorneys present and involved,” Mason said during questioning. (Though many young artists can’t afford counsel or can’t find good counsel.) “But we also believe that families should have the freedom to enter into different business arrangements.” 

Mason’s view was shared by Rep. Matt Gaetz (R-FL). “If tomorrow I wanted to sell my voice to a robot and let that robot say whatever in the world that it wanted to say, and I wanted to take the money from that sale and go buy a sailboat and never turn on the internet again, why should I not have the right to do that?” he asked.

In addition to Rothman, Mason and Wilson, there was one other witness at the hearing: Christopher Mohr, who serves as president of the Software & Information Industry Association. He spoke little and mostly reiterated that his members wanted the courts to answer key questions around AI. “It’s really important that these cases get thoroughly litigated,” Mohr said.

This answer did not satisfy Rep. Glenn Ivey (D-MD), a former litigator. “It could take years before all of that gets solved and you might have conflicting decisions from different courts in jury trials,” Ivey noted. “What should we be doing to try and fix it now?”

Nearly 300 artists, songwriters, actors and other creators are voicing support for a new bipartisan Congressional bill that would regulate the use of artificial intelligence for cloning voices and likenesses via a new print ad running in USA Today on Friday (Feb. 2).


The bill — dubbed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (“No AI FRAUD” Act) and introduced in the U.S. House on Jan. 10 — would establish a federal framework for protecting voices and likenesses in the age of AI.

Placed by the Human Artistry Campaign, the ad features such bold-faced names as 21 Savage, Bette Midler, Cardi B & Offset, Chuck D, Common, Gloria Estefan, Jason Isbell, the estate of Johnny Cash, Kelsea Ballerini, Lainey Wilson, Lauren Daigle, Lamb of God, Mary J. Blige, Missy Elliott, Nicki Minaj, Questlove, Reba McEntire, Sheryl Crow, Smokey Robinson, the estate of Tom Petty, Trisha Yearwood and Vince Gill.

“The No AI FRAUD Act would defend your fundamental human right to your voice & likeness, protecting everyone from nonconsensual deepfakes,” the ad reads. “Protect your individuality. Support HR 6943.”

The Human Artistry Campaign is a coalition of music industry organizations that in March 2023 released a series of seven core principles regarding artificial intelligence. They include ensuring that AI developers acquire licenses for artistic works used in developing and training AI models, as well as that governments refrain from creating “new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”

In addition to musical artists, the USA Today ad also bears the names of actors such as Bradley Cooper, Clark Gregg, Debra Messing, F. Murray Abraham, Fran Drescher, Laura Dern, Kevin Bacon, Kyra Sedgwick, Kristen Bell, Kiefer Sutherland, Julianna Margulies and Rosario Dawson.

The No AI FRAUD Act was introduced by Rep. María Elvira Salazar (R-FL) alongside Reps. Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY) and Rob Wittman (R-VA). The bill is said to be based upon the Senate discussion draft Nurture Originals, Foster Art, and Keep Entertainment Safe Act (“NO FAKES” Act), which was unveiled in October.

“It’s time for bad actors using AI to face the music,” said Rep. Salazar in a statement at the time the legislation was announced. “This bill plugs a hole in the law and gives artists and U.S. citizens the power to protect their rights, their creative work, and their fundamental individuality online.”

Spurred in part by recent incidents including the viral “fake Drake” track “Heart On My Sleeve,” the No AI FRAUD Act would establish a federal standard barring the use of AI to copy the voices and likenesses of public figures without consent. As it stands, an artist’s voice, image or likeness is typically covered by “right of publicity” laws that protect them from commercial exploitation without authorization, but those laws vary state by state.

The bill was introduced on the same day a similar piece of legislation — the Ensuring Likeness Voice and Image Security (ELVIS) Act — was unveiled in Tennessee by Governor Bill Lee. That bill would update the state’s Protection of Personal Rights law “to include protections for songwriters, performers, and music industry professionals’ voice from the misuse of artificial intelligence (AI),” according to a press release.

Since its unveiling, the No AI FRAUD Act has received support from a range of music companies and organizations including the Recording Industry Association of America (RIAA), Universal Music Group, the National Music Publishers’ Association (NMPA), the Recording Academy, SoundExchange, the American Association of Independent Music (A2IM) and the Latin Recording Academy.

You can view the full ad below.

To judge from the results of a report commissioned by GEMA and SACEM, the specter of artificial intelligence (AI) is haunting Europe.
A full 35% of members of the respective German and French collective management societies surveyed said they had used some kind of AI technology in their work with music, according to a Goldmedia report shared in a Tuesday (Jan. 30) press conference — but 71% were afraid that the technology would make it hard for them to earn a living. That means that some creators who are using the technology fear it, too.

The report, which involved expert interviews as well as an online survey, valued the market for generative AI music applications at $300 million last year – 8% of the total market for generative AI. By 2028, though, that market could be worth $3.1 billion. That same year, 27% of creator revenues – or $950 million – would be at risk, in large part due to AI-created music replacing that made by humans.


Although many of us think of the music business as being one where fans make deliberate choices of what to listen to – either by streaming or purchasing music – collecting societies take in a fair amount of revenue from music used in films and TV shows, in advertising, and in restaurants and stores. So even if generative AI technology isn’t developed enough to write a pop song, it could still cost the music business money – and creators part or even all of their livelihood.

“So far,” as the report points out, “there is no remuneration system that closes the AI-generated financial gap for creators.” Although some superstars are looking to license the rights to their voices, there is a lack of legal clarity in many jurisdictions about the conditions under which a generative AI can use copyrighted material for training purposes. (In the United States, this is a question of fair use, a legal doctrine that doesn’t exist in the same form in France or Germany.) Even assuming that music used to train an AI would need to be licensed, other questions remain, such as how many times a work would need to be licensed and how that usage would be paid for.

Unsurprisingly, the vast majority of songwriters want credit and transparency: 95% want AI companies to disclose which copyrighted works they used for training purposes and 89% want companies to disclose which works are generated by AI. Additionally, 90% believe they should be asked for permission before their work is used for training purposes, and the same share want to benefit financially. A full 90% want policymakers to pay more attention to issues around AI and copyright.

The report further breaks down how the creators interviewed feel about using AI. In addition to the 35% who use the technology, 13% are potential users, 26% would rather not use it and 19% would refuse. Of those who use the technology already, 54% work on electronic music, 53% work on “urban/rap,” 52% on advertising music, 47% on “music library” and 46% on “audiovisual industry.”

These statistics underscore that AI isn’t a technology that’s coming to music – it’s one that’s here now. That means that policymakers looking to regulate this technology need to act soon.

The report also shows that smart regulation could resolve the debate over the benefits and drawbacks of AI. Creators are clearly using it productively, but more still fear it: 64% think the risks outweigh the opportunities, while just 11% thought the opposite. This is a familiar pattern in the music business, where new technologies tend to be both dangerous and promising. Perhaps AI will end up being both.

When fake, sexually explicit images of Taylor Swift flooded social media last week, it shocked the world. But legal experts weren’t exactly surprised, saying it’s just a glaring example of a growing problem — and one that’ll keep getting worse without changes to the law and tech industry norms.


The images, some of which were reportedly viewed millions of times on X before they were pulled down, were so-called deepfakes — computer-generated depictions of real people doing fake things. Their spread on Thursday quickly prompted outrage from Swifties, who mass-flagged the images for removal and demanded to know how something like that was allowed to happen to the beloved pop star.

But for legal experts who have been tracking the growing phenomenon of non-consensual deepfake pornography, the episode was sadly nothing new.

“This is just the highest profile instance of something that has been victimizing many people, mostly women, for quite some time now,” said Woodrow Hartzog, a professor at Boston University School of Law who studies privacy and technology law.

Experts warned Billboard that the Swift incident could be the sign of things to come — not just for artists and other celebrities, but for normal individuals with fewer resources to fight back. The explosive growth of artificial intelligence tools over the past year has made deepfakes far easier to create, and some web platforms have become less aggressive in their approach to content moderation in recent years.

“What we’re seeing now is a particularly toxic cocktail,” Hartzog said. “It’s an existing problem, mixed with these new generative AI tools and a broader backslide in industry commitments to trust and safety.”

To some extent, images like the ones that cropped up last week are already illegal. Though no federal law squarely bans them, 10 states around the country have enacted statutes criminalizing non-consensual deepfake pornography. Victims like Swift can also theoretically turn to more traditional existing legal remedies to fight back, including copyright law, likeness rights, and torts like invasion of privacy and intentional infliction of emotional distress.

Such images also clearly violate the rules on all major platforms, including X. In a statement last week, the company said it was “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” as well as “closely monitoring the situation to ensure that any further violations are immediately addressed.” From Sunday to Tuesday, the site disabled searches for “Taylor Swift” out of “an abundance of caution as we prioritize safety on this issue.”

But for the victims of such images, legal remedies and platform policies often don’t mean much in practice. Even if an image is illegal, it is difficult and prohibitively expensive to try to sue the anonymous people who posted it; even if you flag an image for breaking the rules, it’s sometimes hard to convince a platform to pull it down; even if you get one pulled down, others crop up just as quickly.

“No matter her status, or the number of resources Swift devotes to the removal of these images, she won’t be completely successful in that effort,” said Rebecca A. Delfino, a professor and associate dean at Loyola Law School who has written extensively about harm caused by pornographic deepfakes.

That process is extremely difficult, and it’s almost always reactive — started after some level of damage is already done. Think about it this way: Even for a celebrity with every legal resource in the world, the images still flooded the web. “That Swift, currently one of the most powerful and known women in the world, could not avoid being victimized shows the exploitive power of pornographic deepfakes,” Delfino said.

There’s currently no federal statute that squarely targets the problem. A bill called the Preventing Deepfakes of Intimate Images Act, introduced last year, would allow deepfake victims to file civil lawsuits and would criminalize such images when they’re sexually explicit. Another, called the Deepfake Accountability Act, would require all deepfakes to be disclaimed as such and impose criminal penalties for those that aren’t. And earlier this month, lawmakers introduced the No AI FRAUD Act, which would create a federal right for individuals to sue if their voice or any other part of their likeness is used without permission.

Could last week’s incident spur lawmakers to take action? Don’t forget: Ticketmaster’s messy 2022 rollout of tickets for Taylor’s Eras tour sparked congressional hearings, investigations by state attorneys general, new legislation proposals and calls by some lawmakers to break up Live Nation under federal antitrust laws.

Experts like Delfino are hopeful that such influence on the national discourse — call it the Taylor effect, maybe — could spark a similar conversation over the problem of deepfake pornography. And they might have reason for optimism: Polling conducted by the AI think tank Artificial Intelligence Policy Institute over the weekend showed that more than 80% of voters supported legislation making non-consensual deepfake porn illegal, and that 84% of them said the Swift incident had increased their concerns about AI.

“Her status as a worldwide celebrity shed a huge spotlight on the need for both criminal and civil remedies to address this problem, which today has victimized hundreds of thousands of others, primarily women,” Delfino said.

But even after last week’s debacle, new laws targeting deepfakes are no guarantee. Some civil liberties activists and lawmakers worry that such legislation could violate the First Amendment by imposing overly broad restrictions on free speech, including criminalizing innocent images and empowering money-making troll lawsuits. Any new law would eventually need to pass muster at the U.S. Supreme Court, which has signaled in recent years that it is highly skeptical of efforts to restrict speech.

Short of new laws that further criminalize deepfake porn, concrete solutions will likely require stronger action by the social media platforms themselves, which have created vast, lucrative networks for the spread of such materials and are in the best position to police them.

But Jacob Noti-Victor, a professor at Cardozo School of Law who researches how the law impacts innovation and the deployment of new technologies, says it’s not as simple as it might seem. After all, the images of Swift last week were already clearly in violation of X’s rules, yet they spread widely on the site.

“X and other platforms certainly need to do more to tackle this problem and that requires large, dedicated content moderation teams,” Noti-Victor said. “That said, it’s not an easy task. Content detection tools have not been very good at detecting deepfakes so far, which limits the tools that platforms can use proactively to detect this kind of material as it’s being posted.”

And even if it were easy for platforms to find and stop harmful deepfakes, tech companies have hardly been beefing up their content moderation efforts in recent years.

Since Elon Musk acquired X (then named Twitter) in 2022, the company has loosened restrictions on offensive content and fired thousands of employees, including many on the “trust and safety” teams that handle content moderation. Mark Zuckerberg’s Meta, which owns Facebook and Instagram, laid off more than 20,000 employees last year, reportedly also including hundreds of moderators. Google, Microsoft and Amazon have all reportedly made similar cuts.

Amid a broader wave of tech layoffs, why were those employees some of the first to go? Because at the end of the day, there’s no real legal requirement for platforms to police offensive content. Section 230 of the Communications Decency Act, a much-debated provision of federal law, largely shields internet platforms from legal liability for materials posted by their users. That means Taylor could try to sue the anonymous X users who posted her image, but she would have a much harder time suing X itself for failing to stop them.

In the absence of regulation and legal liability, the only real incentives for platforms to do a better job at combating deepfakes are “market incentives,” said Hartzog, the BU professor — meaning, fear of negative publicity that scares away advertisers or alienates users.

On that front, maybe the Taylor fiasco is already having an impact. On Friday, X announced that it would build a “Trust and Safety center of excellence” in Austin, Texas, including hiring 100 new employees to handle content moderation.

“These platforms have an incentive to attract as many people as possible and suck out as much data as possible, with no obligation to create meaningful tools to help victims,” Hartzog said. “Hopefully, this Taylor Swift incident advances the conversation in productive ways that results in meaningful changes to better protect victims of this kind of behavior.”