President Joe Biden on Monday will sign a sweeping executive order to guide the development of artificial intelligence — requiring industry to develop safety and security standards, introducing new consumer protections and giving federal agencies an extensive to-do list to oversee the rapidly progressing technology.
The order reflects the government’s effort to shape how AI evolves in a way that can maximize its possibilities and contain its perils. AI has been a source of deep personal interest for Biden, with its potential to affect the economy and national security.
White House chief of staff Jeff Zients recalled Biden giving his staff a directive to move with urgency on the issue, having considered the technology a top priority.
“We can’t move at a normal government pace,” Zients said the Democratic president told him. “We have to move as fast, if not faster than the technology itself.”
In Biden’s view, the government was late to address the risks of social media, and now U.S. youth are grappling with related mental health issues. AI has the potential to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services, among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.
The order builds on voluntary commitments already made by technology companies. It’s part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate new text, images and sounds.
Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.
The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The order also touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.
An administration official who previewed the order on a Sunday call with reporters said the to-do lists within the order will be implemented and fulfilled over a range of 90 to 365 days, with the safety and security items facing the earliest deadlines. The official briefed reporters on condition of anonymity, as required by the White House.
Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes, despite other pressing matters including the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.
Biden was profoundly curious about the technology in the months of meetings that led up to drafting the order. His science advisory council devoted two meetings to AI, and his Cabinet discussed it at two more. The president also pressed tech executives and civil society advocates about the technology’s capabilities at multiple gatherings.
“He was as impressed and alarmed as anyone,” deputy White House chief of staff Bruce Reed said in an interview. “He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”
The possibility of false images and sounds led the president to prioritize the labeling and watermarking of anything produced by AI. Biden also wanted to thwart the risk of older Americans getting a phone call from someone who sounded like a loved one, only to be scammed by an AI tool.
Meetings could go beyond schedule, with Biden telling civil society advocates in a ballroom of San Francisco’s Fairmont Hotel in June: “This is important. Take as long as you need.”
The president also talked with scientists and saw the upside that AI created if harnessed for good. He listened to a Nobel Prize-winning physicist talk about how AI could explain the origins of the universe. Another scientist showed how AI could model extreme weather like 100-year floods, as the past data used to assess these events has lost its accuracy because of climate change.
The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One.” The film’s villain is a sentient and rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes.
“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said Reed, who watched the film with the president.
With Congress still in the early stages of debating AI safeguards, Biden’s order stakes out a U.S. perspective as countries around the world race to establish their own guidelines. After more than two years of deliberation, the European Union is putting the final touches on a comprehensive set of regulations that targets the riskiest applications for the technology. China, a key AI rival to the U.S., has also set some rules.
U.K. Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend.
The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.
But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to make sure its policies reflected their concerns about AI’s real-world harms.
The American Civil Liberties Union is among the groups that met with the White House to try to ensure “we’re holding the tech industry and tech billionaires accountable” so that algorithmic tools “work for all of us and not just a few,” said ReNika Moore, director of the ACLU’s racial justice program.
Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement’s use of AI tools, including at U.S. borders.
“These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology,” Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups, and has been tied to mistaken arrests.
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Universal Music Group (UMG) and other music companies file a hotly-anticipated copyright lawsuit over how artificial intelligence (AI) models are trained; DJ Envy’s business partner Cesar Pina is hit with criminal charges claiming he ran a “Ponzi-like” fraud scheme; Megan Thee Stallion reaches a settlement with her former label to end a contentious legal battle; Fyre Fest fraudster Billy McFarland is hit with a civil lawsuit by a jilted investor in his new project; and more.
Want to get The Legal Beat newsletter in your email inbox every Tuesday? Subscribe here for free.
THE BIG STORY: AI Music Heads To Court
When UMG and several other music companies filed a lawsuit last week, accusing an artificial intelligence company called Anthropic PBC of violating their copyrights en masse to “train” its AI models, my initial reaction was: “What took so long?”
The creators of other forms of content had already been in court for months. A group of photographers and Getty Images sued Stability AI over its training practices in January, and a slew of book authors, including Game of Thrones writer George R.R. Martin and legal novelist John Grisham, sued ChatGPT-maker OpenAI over the same thing in June and again in September. And music industry voices, like the RIAA and UMG itself, had repeatedly signaled that they viewed such training as illegal.
For months, we asked around, scanned dockets and waited for the music equivalent. Was the delay a deliberate litigation strategy, allowing the fast-changing market and the existing lawsuits to play out more before diving in? Was the music business focusing on legislative, regulatory or business solutions instead of the judicial warpath they chose during the file-sharing debacle of the early 2000s?
Maybe they were just waiting for the right defendant. In a complaint filed in Nashville federal court on Oct. 18, UMG claimed that Anthropic — a company that got a $4 billion investment from Amazon last month — “unlawfully copies and disseminates vast amounts of copyrighted works” in the process of teaching its models to spit out new lyrics. The lengthy complaint, co-signed by Concord Music Group, ABKCO and other music publishers, echoed arguments made by many rightsholders in the wake of the AI boom: “Copyrighted material is not free for the taking simply because it can be found on the internet.”
Like the previous cases filed by photographers and authors, the new lawsuit poses something of an existential question for AI companies. AI models are only as good as the “inputs” they ingest; if federal courts make all copyrighted material off-limits for such purposes, it would not only make current models illegal but would undoubtedly hamstring further development.
The battle ahead will center on fair use — the hugely important legal doctrine that allows for the free use of copyrighted material in certain situations. Fair use might make you think of parody or criticism, but more recently, it’s empowered new technologies: In 1984, the U.S. Supreme Court ruled that the VCR was protected by fair use; in 2007, a federal appeals court ruled that Google Image search was fair use.
Are AI models, which imbibe millions of copyrighted works to create something new, the next landmark fair use? Or are they just a new form of copyright piracy on a vast new scale? We’re about to find out.
More key details about the AI case:
– The timing of the lawsuit suggests that UMG is aiming for a carrot-and-stick approach when it comes to AI. On the same day the new case was filed, UMG announced that it was partnering with a company called BandLab Technologies to forge “an ethical approach to AI.” Hours later, news also broke that UMG and other labels were actively negotiating with YouTube on a new AI tool that would allow creators to make videos using the voices of popular (consenting) recording artists.
– The central issue in the case is whether the use of copyrighted works as training inputs amounts to infringement, but UMG’s lawyers also allege that Anthropic violates its copyrights with the outputs its models spit out: they say the models sometimes simply present verbatim lyrics to songs. That adds a dimension not present in the earlier AI cases filed by authors and photographers, and could perhaps make it a bit easier for UMG to win.
– While it’s the first such case about music, the Anthropic lawsuit deals only with song lyrics, not sound recordings, written musical notation or voice likeness rights. A ruling in any of the AI training cases would likely set precedent across different areas of copyright, but those specific issues will have to wait for a future lawsuit, or perhaps an act of Congress.
Go read the full story on UMG’s lawsuit, with access to the actual complaint filed in court.
Other top stories this week…
MEGAN THEE SETTLEMENT – Megan Thee Stallion reached an agreement with her record label 1501 Certified Entertainment to end more than three years of ugly litigation over a record deal that Megan calls “unconscionable.” After battling for more than a year over whether she owed another album under the contract, the two sides now say they will “amicably part ways.”
DJ ENVY SCANDAL DEEPENS – Cesar Pina, a celebrity house-flipper with close ties to New York City radio host DJ Envy, was arrested on federal charges that he perpetrated “a multimillion-dollar Ponzi-like investment fraud scheme.” Though Envy was not charged, federal prosecutors specifically noted that Pina had “partnered with a celebrity disc jockey and radio personality” — listed in the charges as “Individual-1” — to boost his reputation as a real estate guru. The charges came after months of criticism against Envy, who is named in a slew of civil lawsuits filed by alleged victims who say he helped promote the fraud.
FOOL ME ONCE… – Billy McFarland, the creator of the infamous Fyre Festival who served nearly four years in prison for fraud and lying to the FBI, is facing a new civil lawsuit claiming he ripped off an investor who gave him $740,000 for his new PYRT venture. The case was filed by Jonathan Taylor, a fellow felon who met McFarland in prison after pleading guilty to a single count of child sex trafficking.
AI-GENERATED CLOSING ARGS? – Months after ex-Fugees rapper Prakazrel “Pras” Michel was convicted on foreign lobbying charges, he demanded a new trial by making extraordinary accusations against his ex-lawyer David Kenner. Michel claims Kenner, a well-known L.A. criminal defense attorney, used an unproven artificial intelligence (AI) tool called EyeLevel.AI to craft closing arguments — and that he did so because he owned a stake in the tech platform. Kenner declined to comment, but EyeLevel has denied that Kenner has any equity in the company.
ROLLING STONES GET SATISFACTION – A federal judge dismissed a lawsuit accusing The Rolling Stones members Mick Jagger and Keith Richards of copying their 2020 single “Living in a Ghost Town” from a pair of little-known songs, ruling that the dispute — a Spanish artist suing two Brits — clearly didn’t belong in his Louisiana federal courthouse.
JUICE WRLD COPYRIGHT CASE – Dr. Luke and the estate of the late Juice WRLD were hit with a copyright lawsuit that claims they unfairly cut out one of the co-writers (an artist named PD Beats) from the profits of the rapper’s 2021 track “Not Enough.”
YouTube is planning to roll out a new artificial intelligence tool that will allow creators to make videos using the voices of popular recording artists — but inking deals with record companies to launch the beta version is taking longer than expected, sources tell Billboard.
The new AI tool, which YouTube had hoped to debut at its Made On YouTube event in September, will in beta let a select pool of artists give permission to a select group of creators to use their voices in videos on the platform. From there, the product could be released broadly to all users, with the voices of artists who choose to opt in. YouTube is also looking to those artists for input that will help steer the company’s AI strategy going forward, sources say.
The major labels, Universal Music Group, Sony Music Entertainment and Warner Music Group, are still negotiating licensing deals that would cover voice rights for the beta version of the tool, sources say; a wide launch would require separate agreements.

Label leaders have publicly committed to embracing AI in recent months, with UMG CEO Lucian Grainge saying the technology could “amplify human imagination and enrich musical creativity in extraordinary new ways” and WMG CEO Robert Kyncl saying, “You have to embrace the technology, because it’s not like you can put technology in a bottle.” But some music executives worry they’ve given up some of their leverage in these initial deals, given that they want to be seen as proponents of progress and not as holding up innovation. Label executives are especially conscious of projecting that image now, having shortsightedly resisted the shift from CDs to downloads two decades ago, which allowed Apple to unbundle the album and sent the music business into years of decline.

Some executives say it’s also been challenging to find top artists to participate in the new YouTube tool, with even some of the most forward-thinking acts hesitant to put their voices in the hands of unknown creators who could use them to make statements or sing lyrics they might not like.
The labels, sources say, view the deal as potentially precedent-setting for future AI deals to come — as well as creating a “framework,” as one source put it, for YouTube’s future AI initiatives. The key issues in negotiations are how the AI model is trained and whether artists have the option to opt in (or out), and how monetization works: are artists paid for the use of their music as an input into the AI model, or for the output that’s created using the AI tool? While negotiations are taking time, label sources say YouTube is seen as an important, reliable early partner in this space, based on the platform’s work developing its Content ID system, which identifies and monetizes copyrighted materials in user-generated videos.
Publishing, meanwhile, is even more complicated: even with a small sampling of artists to launch the tool at beta, there could be hundreds of songwriters with credits across their catalogs, all of which would be sampled by the model. Because of this, a source suggests that YouTube may prefer paying a lump-sum licensing fee that publishers would then need to figure out how to divide among their writers.
As complicated as the deal terms may be, sources say music rights holders are acting in good faith to get a deal done. That’s because of a dominant belief that this sort of technology is inevitable, and that if the music business doesn’t come to the table to create licensing deals now, it will get left behind. However, one source familiar with the negotiations says this attitude is also putting music companies at a disadvantage because there is less room to drive a hard bargain.
For months, AI-soundalike tools that synthesize vocals to sound like famous artists have been garnering attention and triggering debate. The issue hit the mainstream in April when an anonymous musician calling himself Ghostwriter released a song to streaming services with soundalike versions of Drake and The Weeknd on it that he said were created with artificial intelligence. The song was quickly taken down due to copyright infringement on the recording, not based on the voices’ likenesses, but in the aftermath a month later Billboard reported that the streaming services seemed amenable to requests from the major labels to remove recordings with AI-generated vocals created to sound like popular artists.
In August, YouTube announced a new initiative with UMG artists and producers it called an “AI Music Incubator” that would “explore, experiment and offer feedback on the AI-related musical tools and products,” according to a blog post by Grainge at the time. “Once these tools are launched, the hope is that more artists who want to participate will benefit from and enjoy this creative suite.” That partnership was separate from the licensing negotiations currently taking place and the beta product in development.
On Wednesday, UMG, Concord Music Group, ABKCO and other music publishers filed a lawsuit against AI platform Anthropic PBC for using copyrighted song lyrics to “train” its software. This marked the first major lawsuit in what is expected to be a key legal battle over the future of AI music — and, as one source put it, a signal that major labels will litigate against AI companies they see as bad players.
Universal Music Group (UMG) and other music companies are suing an artificial intelligence platform called Anthropic PBC for using copyrighted song lyrics to “train” its software — marking the first major lawsuit in what is expected to be a key legal battle over the future of AI music.
In a complaint filed Wednesday morning (Oct. 18) in Nashville federal court, lawyers for UMG, Concord Music Group, ABKCO and other music publishers accused Anthropic of violating the companies’ copyrights en masse by using vast numbers of songs to help its AI models learn how to spit out new lyrics.
“In the process of building and operating AI models, Anthropic unlawfully copies and disseminates vast amounts of copyrighted works,” lawyers for the music companies wrote. “Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.”
A spokesperson for Anthropic did not immediately return a request for comment.
The new lawsuit is similar to cases filed by visual artists over the unauthorized use of their works to train AI image generators, as well as cases filed by authors like Game of Thrones writer George R.R. Martin and novelist John Grisham over the use of their books. But it’s the first to squarely target music.
AI models like the popular ChatGPT are “trained” to produce new content by feeding them vast quantities of existing works known as “inputs.” In the case of AI music, that process involves huge numbers of songs. Whether doing so infringes the copyrights to that underlying material is something of an existential question for the booming sector, since depriving AI models of new inputs could limit their abilities.
Major music companies and other industry players have already argued that such training is illegal. Last year, the RIAA said that any use of copyrighted songs to build AI platforms “infringes our members’ rights.” In April, when UMG asked Spotify and other streamers to stop allowing AI companies to use their platforms to ingest music, it said it “will not hesitate to take steps to protect our rights.”
On Wednesday, the company took those steps. In the lawsuit, it said Anthropic “profits richly” from the “vast troves of copyrighted material that Anthropic scrapes from the internet.”
“Unlike songwriters, who are creative by nature, Anthropic’s AI models are not creative — they depend entirely on the creativity of others,” lawyers for the publishers wrote. “Yet, Anthropic pays nothing to publishers, their songwriters, or the countless other copyright owners whose copyrighted works Anthropic uses to train its AI models. Anthropic has never even attempted to license the use of Publishers’ lyrics.”
In the case ahead, the key battle line will be over whether the unauthorized use of proprietary music to train an AI platform is nonetheless legal under copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law.
Historically, fair use enabled critics to quote from the works they were dissecting, or parodists to use existing materials to mock them. But more recently, it’s also empowered new technologies: In 1984, the U.S. Supreme Court ruled that the VCR was protected by fair use; in 2007, a federal appeals court ruled that Google Image search was fair use.
In Wednesday’s complaint, UMG and the other publishers seemed intent on heading off any kind of fair use defense. They argued that Anthropic’s behavior would harm the market for licensing lyrics to AI services that actually pay for licenses — a key consideration in any fair use analysis.
“Anthropic is depriving Publishers and their songwriters of control over their copyrighted works and the hard-earned benefits of their creative endeavors, it is competing unfairly against those website developers that respect the copyright law and pay for licenses, and it is undermining existing and future licensing markets in untold ways,” the publishers wrote.
In addition to targeting Anthropic’s use of songs as inputs, the publishers claim that the material produced by the company’s AI model also infringes their lyrics: “Anthropic’s AI models generate identical or nearly identical copies of those lyrics, in clear violation of publishers’ copyrights.”
Such litigation might only be the first step in setting national policy on how AI platforms can use copyrighted music, with legislative efforts close behind. At a hearing in May, Sen. Marsha Blackburn (R-Tenn.) repeatedly grilled the CEO of the company behind ChatGPT about how he and others planned to “compensate the artist.”
“If I can go in and say ‘write me a song that sounds like Garth Brooks,’ and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use,” Blackburn said. “If it was radio play, it would be there. If it was streaming, it would be there.”
SINGAPORE — BandLab Technologies has pledged to engage responsibly and ethically with AI, part of a “strategic collaboration” with Universal Music Group.
Announced today (Oct. 18), Singapore-based BandLab becomes the first music creation platform to throw its support behind the Human Artistry Campaign (HAC), a global coalition devoted to ensuring fair and safe play with AI technologies.
“As the digital landscape of music continues to evolve,” reads a joint statement, “this collaboration is designed to be a beacon of innovation and ethical practice in the industry and heralds a new era where artists are supported and celebrated at every stage of their creative journey.”
Led by CEO Meng Ru Kuok, BandLab Technologies operates the largest social music creation platform, BandLab. Among the service’s breakouts is Houston artist d4vd (pronounced “David”), who, in July 2022 as a 17-year-old, released “Romantic Homicide,” a track he had made using BandLab. After going viral on TikTok, the song entered the Billboard Hot 100 (peaking at No. 45) as d4vd signed to Darkroom/Interscope. He’s one of a growing number of BandLab users who’ve developed deeper ties with UMG.
“We welcome BandLab’s commitment to an ethical approach to AI through their accessible technology, tools and platform,” comments Lucian Grainge, chairman & CEO, Universal Music Group, in a statement. “We are excited to add BandLab Technologies to a growing list of UMG partners whose responsible and innovative AI will benefit the creative community.”
Further to Grainge’s comments, Michael Nash, executive VP and chief digital officer at UMG, points to an expanding relationship with BandLab, noting “they are an excellent partner that is compelling for us on multiple fronts.”
BandLab Technologies’ assets are grouped under the holding company of Caldecott Music Group, for which Meng serves as CEO and founder. “BandLab Technologies and our wider Caldecott Music Group network is steadfast in its respect for artists’ rights,” he comments in a statement, “and the infinite potential of AI in music creation and we believe our millions of users around the world share in this commitment and excitement.”
Meng showed his support in August at Ai4, an AI conference in Las Vegas, by way of the presentation “Augmenting the Artist: How AI is Redefining Music Creation and Innovation.” During that session, he discussed the importance of ethical AI training and development and showcased the company’s AI music idea generator tool SongStarter.
New technologies promise “unbelievable possibilities to break down more barriers for creators,” he notes, but “it’s essential that artists’ and songwriters’ rights be fully respected and protected to give these future generations a chance of success.”
The Human Artistry Campaign was announced during South by Southwest in March along with a series of seven key principles for protecting artists in the age of AI. More than 150 industry organizations and businesses have signed up.
UMG’s AI collaboration with BandLab follows separate arrangements forged with Endel and YouTube.
This “first of its kind” strategic partnership with BandLab Technologies, say reps for UMG, aligns the “two organizations to promote responsible AI practices and pro-creator standards, as well as enabling new opportunities for artists.”
Six months after ex-Fugees rapper Prakazrel “Pras” Michel was convicted on foreign lobbying charges, he’s now demanding a new trial — making the extraordinary claim that his ex-lawyer used an unproven artificial intelligence (AI) tool to craft closing arguments because he owned a stake in the tech platform.
In a Monday (Oct. 16) filing in D.C. federal court, Michel claimed attorney David Kenner “utterly failed” him during the April trial, denying him his constitutional right to effective counsel. Among other shortcomings, the rapper said Kenner outsourced prep work to “inexperienced contract attorneys” and “failed to object to damaging and inadmissible testimony” during the trial.
Most unusual of all, Michel accused Kenner of using “an experimental artificial intelligence program” to draft his closing arguments for the trial, resulting in a deeply flawed presentation. And he claimed Kenner did so because he had an “undisclosed financial stake” in the company and wanted to use Michel’s trial to promote it.
“Michel never had a chance,” the rapper’s new lawyers wrote Monday. “Michel’s counsel was deficient throughout, likely more focused on promoting his AI program … than zealously defending Michel. The net effect was an unreliable verdict.”
Kenner did not immediately return a request for comment on Tuesday.
Michel was charged in 2019 with funneling money from a now-fugitive Malaysian financier through straw donors to Barack Obama’s 2012 re-election campaign. He was also accused of trying to squelch a Justice Department investigation and influence an extradition case on behalf of China under the Trump administration.
In April, following a trial that included testimony from actor Leonardo DiCaprio and former U.S. Attorney General Jeff Sessions, Michel was convicted on 10 counts including conspiracy, witness tampering and failing to register as an agent of China.
During that trial, Michel was represented by Kenner, a well-known Los Angeles criminal defense attorney who has previously repped hip-hop luminaries like Snoop Dogg, Suge Knight and, most recently, Tory Lanez. But earlier this summer, Michel asked for permission to replace Kenner with a new team of lawyers; in August, Kenner and his firm were swapped out for lawyers from the national firm ArentFox Schiff.
Now, it’s clear why. In Monday’s filing, Michel’s new lawyers accused Kenner of wide-ranging failures — including many that have nothing to do with AI tools or secret motives. They claim he “outsourced trial preparations” to other lawyers and “failed to familiarize himself with the charged statutes or required elements.” They also say he “overlooked nearly every colorable defense” and “failed to object to damaging and inadmissible testimony, betraying a failure to understand the rules of evidence.”
But the most unusual allegations concerned an alleged scheme to promote EyeLevel.AI, a computer program designed to help attorneys win cases by digesting trial transcripts and other data. Days after the trial concluded, the company put out a press release highlighting its use in the Michel trial, calling it the “first use of generative AI in a federal trial” and quoting Kenner.
“This is an absolute game changer for complex litigation,” Kenner said in the press release. “The system turned hours or days of legal work into seconds. This is a look into the future of how cases will be conducted.”
But according to Michel’s new lawyers, Kenner’s use of the program was harmful, not helpful, to his client’s case. They say it may have led to some smaller mistakes, like Kenner misattributing a Puff Daddy song to the Fugees, but also to massive legal errors, like conflating separate allegations against Michel — an error that Michel’s new lawyers say caused Kenner to make “frivolous” arguments before the jury.
“At bottom, the AI program failed Kenner, and Kenner failed Michel,” Michel’s attorneys at ArentFox Schiff wrote. “The closing argument was deficient, unhelpful, and a missed opportunity that prejudiced the defense.”
According to Michel’s new lawyers, the mistake of using the AI tools was compounded by Kenner’s alleged motive: an undisclosed ownership stake in the startup that sells the program. By using a criminal trial to promote a product, the filing says, Kenner created “an extraordinary conflict of interest.”
“Kenner and [his partner]’s decision to elevate their financial interest in the AI program over Michel’s interest in a competent and vigorous defense adversely affected Kenner’s trial performance, as the closing argument was frivolous, missed nearly every colorable argument, and damaged the defense,” they wrote.
A bipartisan group of U.S. senators released draft legislation Thursday (Oct. 12) aimed at protecting musical artists and others from artificial intelligence-generated deepfakes and other replicas of their likeness, like this year’s infamous “Fake Drake” song.
The draft bill, labeled the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act,” or NO FAKES Act, would create a federal right for artists, actors and others to sue those who create “digital replicas” of their image, voice, or visual likeness without permission.
In announcing the bill, Sen. Chris Coons (D-Del.) specifically cited the April release of “Heart On My Sleeve,” an unauthorized song that featured AI-generated fake vocals from Drake and The Weeknd.
“Generative AI has opened doors to exciting new artistic possibilities, but it also presents unique challenges that make it easier than ever to use someone’s voice, image, or likeness without their consent,” Coons said in a statement. “Creators around the nation are calling on Congress to lay out clear policies regulating the use and impact of generative AI.”
The draft bill quickly drew applause from music industry groups. The RIAA said it would push for a final version that “effectively protects against this illegal and immoral misappropriation of fundamental rights that protect human achievement.”
“Our industry has long embraced technology and innovation, including AI, but many of the recent generative AI models infringe on rights — essentially instruments of theft rather than constructive tools aiding human creativity,” the group wrote in the statement.
The American Association of Independent Music offered similar praise: “Independent record labels and the artists they work with are excited about the promise of AI to transform how music is made and how consumers enjoy art, but there must be guardrails to ensure that artists can make a living and that labels can recoup their investments.” The group said it would push to make sure that the final bill’s provisions were “accessible to small labels and working-class musicians, not just the megastars.”
A person’s name and likeness — including their distinctive voice — are already protected in most states by the so-called right of publicity, which lets individuals control how their identity is commercially exploited by others. But those rights are currently governed by a patchwork of state statutes and common law.
The NO FAKES Act would create a nationwide property right in your image, voice, or visual likeness, allowing an individual to sue anyone who produced a “newly-created, computer-generated, electronic representation” of it. Unlike many state-law systems, that right would not expire at death and could be controlled by a person’s heirs for 70 years after their passing.
A tricky balancing act for any publicity rights legislation is the First Amendment and its protections for free speech. In Thursday’s announcement, the NO FAKES Act’s authors said the bill would include specific carveouts for replicas used in news coverage, parody, historical works or criticism.
“Congress must strike the right balance to defend individual rights, abide by the First Amendment, and foster AI innovation and creativity,” Coons said.
The draft was co-authored by Sen. Marsha Blackburn (R-Tenn.), Sen. Amy Klobuchar (D-Minn.), and Sen. Thom Tillis (R-N.C.).
The RIAA has asked to have AI voice cloning added to the government’s piracy watch list, officially known as the Review of Notorious Markets for Counterfeiting and Piracy.
The RIAA typically writes in each year, requesting that forms of piracy like torrenting, stream ripping, cyber lockers and free music downloading be included in the final list. All of these categories of piracy are still present in the RIAA’s letter to the U.S. Trade Representative this year, but this is the first time the trade organization, which represents the interests of record labels, has added a form of generative AI to its recommendations.
The RIAA noted that it believes AI voice cloning, also referred to as ‘AI voice synthesis’ or ‘AI voice filters,’ infringes on its members’ copyrights and artists’ rights to their voices, and it called out one U.S.-based AI voice cloning site, Voicify.AI, as one that should face particular scrutiny.
According to the letter, Voicify.AI’s service includes voice models that emulate sound recording artists like Michael Jackson, Justin Bieber, Ariana Grande, Taylor Swift, Elvis Presley, Bruno Mars, Eminem, Harry Styles, Adele, Ed Sheeran, and others, as well as political figures including Donald Trump, Joe Biden, and Barack Obama.
The RIAA claims that this type of service infringes on copyrights because it “stream-rips the YouTube video selected by the user, copies the acapella from the track, modifies the acapella using the AI vocal model, and then provides the user unauthorized copies of the modified acapella stem, the underlying instrumental bed, and the modified remixed recording.” Essentially, some of these AI voice cloning sites train their models on copyrighted recordings without authorization.
It additionally claims that there is a violation of the artists’ right of publicity, the right that protects public figures from having their name, likeness, and voice commercially exploited without their permission. This is a more tenuous right, given that it is only a state-level protection and its strength varies by state. It also becomes more limited after a public figure’s death. However, this is possibly the most common legal argument against AI voice cloning technology in the music business.
This form of artificial intelligence first became widely recognized this past spring, when an anonymous TikTok user named Ghostwriter used AI to mimic the voices of Drake and The Weeknd in his song “Heart On My Sleeve” with shocking precision. The song was briefly available on streaming services like YouTube, but was taken down after a stern letter from the artists’ label, Universal Music Group. However, the song was ultimately removed from official services due to a copyright infringement in the track, not because of a right of publicity claim.
A few months later, Billboard reported that streamers were in talks with the three major label groups about allowing them to file takedown requests for right of publicity violations — something which previously was only allowed in cases of copyright infringement as dictated in the Digital Millennium Copyright Act (DMCA). Unlike the DMCA, the newly discussed arrangement regarding right of publicity issues would be a voluntary one. In July, UMG’s general counsel and executive vp of business and legal affairs, Jeffrey Harleston, spoke as a witness in a Senate Judiciary Committee hearing on AI and copyright and asked for a new “federal right of publicity” to be made into law to protect artists’ voices.
An additional challenge in regulating this area is that many AI models available on the internet for global users are not based in the U.S., meaning the U.S. government has little recourse to stop their alleged piracy, even if alerted by trade organizations like the RIAA. Certain countries are known to be more relaxed on AI regulation — like China, Israel, South Korea, Japan, and Singapore — which has created safe havens for AI companies to grow abroad.
The U.S. Trade Representative must still review this letter from the RIAA, as well as recommendations from other industry groups, and determine whether AI voice cloning should be included on the watch list. The office will likely issue its final review at the start of next year.
“OK — now Ghostwriter is ready for us.”
For almost three hours, I have been driving an airport rental car to an undisclosed location — accompanied by an artist manager whose name I only know in confidence — outside the U.S. city we both just flew into. I came here because, after weeks of back-and-forth email negotiations, the manager has promised that I can meet his client, whom I’ve interviewed once off-camera over Zoom, in person. In good traffic, the town we’re headed toward is about an hour from the airport, but it’s Friday rush hour, so we watch as my Google Maps ETA gets later and later with each passing minute. To fill the time, we chat about TikTok trends, our respective careers and the future of artificial intelligence.
AI is, after all, the reason we’re in this car in the first place. The mysterious man I’ve come to meet is a “well-known” professional songwriter-producer, his manager says — at least when he’s using his real name. But under his pseudonym, Ghostwriter, he is best known for creating “Heart on My Sleeve,” a song that employed AI voice filters to imitate Drake and The Weeknd’s voices with shocking precision — and without their consent. When it was posted to TikTok in the spring, it became one of the biggest music stories of the year, as well as one of the most controversial.
At the time of its release, many listeners believed that Ghost’s use of AI to make the song meant that a computer also generated the beat, lyrics or melodies, but as Ghost later explains to me, “It is definitely my songwriting, my production and my voice.” Still, “Heart on My Sleeve” posed pressing ethical questions: For one, how could an artist maintain control over their vocal likeness in this new age of AI? But as Ghost and his manager see it, AI poses a new opportunity for artists to license their voices for additional income and marketing reach, as well as for songwriters like Ghost to share their skills, improve their pitches to artists and even earn extra income.
As we finally pull into the sleepy town where we’re already late to meet with Ghost, his manager asks if I can stall. “Ghost isn’t quite ready,” he says, which I assume means he’s not yet wearing the disguise he dons in all his TikTok videos: a white bedsheet and black sunglasses. (Both the manager and Ghost agreed to this meeting under condition of total anonymity.) As I weave the car through residential streets at random, passing a few front yards already adorned in Halloween decor, I laugh to myself — it feels like an apropos precursor to our meeting.
But fifteen minutes later, when we enter Ghost’s “friend’s house,” I find him sitting at the back of an open-concept living space, at a dining room table, dressed head to toe in black: black hoodie, black sweatpants, black ski mask, black gloves and ski goggles. Not an inch of skin is visible, apart from short glimpses of the peach-colored nape of his neck when he turns his head a certain way.
Though he appears a little nervous to be talking to a reporter for the first time, Ghost is friendly, standing up from his chair to give me a hug and to greet his manager. When I decide to address the elephant in the room — “I know this is weird for all of us” — everyone laughs, maybe a little too hard.
Over the course of our first virtual conversation and, now, this face-to-masked-face one, Ghost and his manager openly discuss their last six months for the first time, from their decision to release “Heart on My Sleeve” to more recent events. Just weeks ago, Ghost returned with a second single, “Whiplash,” posted to TikTok using the voices of 21 Savage and Travis Scott — and with the ambition to get his music on the Grammy Awards ballot.
In a Sept. 5 New York Times story, Recording Academy CEO Harvey Mason Jr. said “Heart on My Sleeve” was “absolutely [Grammy-]eligible because it was written by a human,” making it the first song employing AI voices to be permitted on the ballot. Three days later, however, he appeared to walk back his comments in a video posted to his personal social media, saying, “This version of ‘Heart on My Sleeve’ using the AI voice modeling that sounds like Drake and The Weeknd, it’s not eligible for Grammy consideration.”
In conversation, Ghost and his manager maintain (and a Recording Academy representative later confirms) that “Heart on My Sleeve” will, in fact, be on the ballot because they quietly uploaded a new version of the song (without any AI voice filters) to streaming services on Sept. 8, just days before the Grammy eligibility cutoff and the same day as Mason’s statement.
When the interview concludes, Ghost’s manager asks if we will stay for the takeout barbecue the owner of the house ordered for everyone before the manager and I arrived. At this, Ghost stands up, saying his outfit is too hot and that he ate earlier anyway — or maybe he just realizes that eating would require taking his ski mask off in front of me.
When did Ghostwriter first approach you with this idea, and what were your initial thoughts?
Manager: We first discussed this not long before the first song dropped. He had just started getting into AI. We wanted to do something that could spark much needed conversation and prep us so that we can start moving toward building an environment where this can exist in an ethical and equitable way. What better way to move culture forward around AI than to create some examples of how it can be used and show how the demand and interest is there?
As the person in charge of Ghostwriter’s business affairs, what hurdles did you see to executing his idea?
Manager: When anything new happens, people don’t know how to react. I see a lot of parallels between this moment and the advent of sampling. There was an outcry [about] thievery in 1989 when De La Soul was sued for an uncleared sample. Fast-forward to now, and artist estates are jumping at the opportunity to be sampled and interpolated in the next big hit. All it took was for the industry to define an equitable arrangement for all stakeholders in order for people to see the value in that new form of creativity. I think we agreed that we had an opportunity to show people the value in AI and music here.
Ghostwriter’s songs weren’t created with the consent of Drake, The Weeknd, Travis Scott or 21 Savage. How do you justify using artists’ voices without their consent?
Manager: I like to say that everything starts somewhere, like Spotify wouldn’t exist without Napster. Nothing is perfect in the beginning. That’s just the reality of things. Hopefully, people will see all the value that lies here.
How did you get in touch with the Recording Academy?
Manager: Harvey reached out to Ghostwriter over DM. He was just curious and interested. It’s his job to keep the industry moving forward and to understand what new things are happening. I think he’s still wrapping his head around it, but I thought it was really cool that he put together an industry roundtable with some of the brightest minds — including people in the Copyright Office, legal departments at labels, Spotify, Ghostwriter. We had an open conversation.
I don’t know if Harvey has the answers — and I don’t want to put words in his mouth — but I think he sees that this is a cool tool to help people create great music. [Ultimately,] we just have to figure out the business model so that all stakeholders feel like they have control and are being taken care of.
I think in the near future, we’re going to have infrastructure that allows artists to not only license their voice, but do so with permissions. Like, say I’m artist X. I want to license my voice out, but I want to take 50% of the revenue that’s generated. Plus users can’t use my voice for hate speech or politics. It is possible to create tech that can have permissions like that. I think that’s where we are headed.
“Heart on My Sleeve” is Grammy-eligible after all, but only the version without AI voice filters. Why was it so important to keep trying for Grammy eligibility?
Manager: Our thought process was, it’s a dope record, and it resonated with people. It was a human creator who created this piece of art that made the entire music industry stop and pay attention. We aren’t worried about whether we win or not — this is about planting the seed, the idea that this is a creative tool for songwriters.
Do you still think it pushes the envelope in the same way, given that what is eligible now doesn’t have any AI filter on it?
Manager: Absolutely, because we’re just trying to highlight the fact that this song was created by a human. AI voice filters were just a tool. We haven’t changed the moment around the song that it had. I think it’s still as impactful because all of this is part of the story, the vision we are casting.
Tell me a little about yourself, Ghostwriter. What’s your background?
Ghostwriter: I’ve always been a songwriter-producer. Over time, I started to realize — as I started to get into different rooms and connect with different artists — that the business of songwriting was off. Songwriters get paid close to nothing. It caused me to think: “What can I do as a songwriter who just loves creating to maybe create another revenue stream? How do I get my voice heard as a songwriter?” That was the seed that later grew into becoming Ghostwriter.
I’ve been thinking about it for two years, honestly. The idea at first was to create music that feels like other artists and release it as Ghostwriter. Then when the AI tech came out, things just clicked. I realized, “Wait — people wouldn’t have to guess who this song was meant to sound like anymore,” now that we have this.
I did write and produce “Heart on My Sleeve” thinking that maybe this would be the one where I tried AI to add in voice filters, but the overall idea for Ghostwriter has been a piece of me for some time.
Why did you decide to take “Heart on My Sleeve” from just a fun experiment to a formal rollout?
Ghost: Up until this point, all of the AI voice stuff was jokes. Like, what if SpongeBob [SquarePants] sang this? I think it was exciting for me to try using this as a tool for actual songwriters.
When “Heart on My Sleeve” went viral, it became one of the biggest news stories at the time. Did you anticipate that?
Ghost: There was a piece of me that knew it was really special, but you just can’t predict what happens. I tried to stay realistic. When working in music, you have to remind yourself that even though you think you wrote an incredible song, there’s still a good chance the song is not going to come out or it won’t do well.
Do you think that age played a factor in how people responded to this song?
Manager: For sure. I think the older generations are more purists; it’s a tougher pill for them to swallow. I think younger generations obviously have grown up in an environment where tech moves quickly. They are more open to change and progression. I would absolutely attribute the good response on TikTok to that.
Are you still writing for other people now under your real name while you work on the Ghostwriter project, or are you solely focused on Ghostwriter right now?
Ghost: I am, but I have been placing a large amount of focus [on] Ghostwriter. For me, it’s a place that is so refreshing. Like, I love seeing that an artist is looking for pitch records and I have to figure out how to fit their sound. It’s a beautiful challenge.
This is one of the reasons I’m so passionate about Ghostwriter. There are so many talented songwriters that are able to chameleon themselves in the studio to fit the artist they are writing for. Even their vocal delivery, their timbre, where the artist is in their life story. That skill is what I get to showcase with Ghostwriter.
You’ve said songwriters aren’t treated fairly in today’s music industry. Was there a moment when you had this revelation?
Ghost: It was more of a progression…
Manager: I think the fact that Ghost’s songs feel so much like the real thing and resonate so much with those fan bases, despite the artists not actually being involved, proves how important songwriters are to the success of artists’ projects. We’re in no way trying to diminish the hard work and deserving nature of the artists and the labels that support them. We’re just trying to shine a light on the value that songwriters bring and that their compensation currently doesn’t match that contribution. We owe it to songwriters to find solutions for the new reality. Maybe this is the solution.
Ghost: How many incredible songs are sitting on songwriters and producers’ desktops that will never be heard by the world? It almost hurts me to think about that. The Ghostwriter project — if people will hopefully support it — is about not throwing art in the trash. I think there’s a way for artists to help provide that beauty to the world without having to put in work themselves. They just have to license their voices.
The counterpoint to that, though, is that artists want to curate their discographies. They make a lot of songs, but they might toss some of them so that they can present a singular vision — and many would say songs using AI to replicate an artist’s voice would confuse that vision. What do you say to that?
Ghost: I think this may be a simple solution, but the songs could be labeled as clearly separate from the artist.
Manager: That’s something we have done since the beginning. We have always clearly labeled everything as AI.
Ideally, where should these AI songs live? Do they belong on traditional streaming services?
Manager: One way that this can play out is that [digital service providers] eventually create sort of an AI section where the artist who licenses their voice can determine how much of the AI songs they want monetarily and how they want their voices to be used.
Ghost: These songs are going to live somewhere because the fans want them. We’ve experienced that with Ghostwriter. The song is not available anymore by us, but I was just out in my area and heard someone playing “Heart on My Sleeve” in their car as they drove by. One way or another, we as the music industry need to come to terms with the fact that good music is always going to win. The consumer and the listener are always in the seat of power.
There are 100,000 songs added to Spotify every day, and the scale of music creation is unprecedented. Does your vision of the future contribute to a scale problem?
Manager: We don’t really see it as a problem. Because no matter how many people are releasing music, you know, there’s only going to be so many people in the world that can write hit songs. The cream always rises to the top.
Ghost: My concern is that a lot of that cream-of-the-crop music is just sitting on someone’s desktop because an artist moved in a different direction or something beyond their control. My hope is we’ll see incredible new music become available and then we can watch as democracy pushes it to the top.
Can you explain how you think AI voice filters serve as a possible new revenue stream for artists?
Manager: Imagine singing a karaoke song in the artist’s voice; a personalized birthday message from your favorite artist; a hit record that is clearly labeled and categorized as AI. It’s also a marketing driver. I compare this to fan fiction — a fan-generated genre of music. Some might feel this creates competition or steals attention away from an artist’s own music, but I would disagree.
We shouldn’t forget that in the early days of YouTube, artists and labels fought to remove every piece of fan-generated content [that used] copyrighted material that they could. Now a decade or so later, almost every music marketing effort centers around encouraging [user-generated content]: TikTok trends, lyric videos, dance choreography, covers, etcetera. There’s inherent value in empowering fans to create content that uses your image and likeness. I think AI voice filters are another iteration of UGC.
Timbaland recently wrote a song and used an AI voice filter to map The Notorious B.I.G.’s voice on top of it, essentially bringing Biggie back from the dead. That raises more ethical questions. Do you think using the voice of someone who is dead requires different consideration?
Manager: It’s an interesting thought. Obviously, there’s a lot of value here for companies that purchase catalogs. I think this all ties back to fan fiction. I love The Doors, and I know there are people who, like me, study exactly how they wrote and performed their songs. I’d love to hear a song from them I haven’t heard before personally, as long as it’s labeled [as a fan-made AI song]. As a music fan, it would be fun for me to consume. It’s like if you watch a film franchise and the fourth film isn’t directed by the same person as before. It’s not the same, but I’m still interested.
When Ghostwriter introduced “Whiplash,” he noted that he’s down to collaborate with and send royalties to Travis Scott and 21 Savage. Have you gotten in touch with them, or Drake or The Weeknd, yet?
Manager: No, we have not been in contact with anyone.
“Heart on My Sleeve” was taken down immediately from streaming services. Are you going about the release of “Whiplash” differently?
Manager: We will not release a song on streaming platforms again without getting the artists on board. That last time was an experiment to prove the market was there, but we are not here to agitate or cause problems.
You’ve said that other artists have reached out to your team about working together and using their voices through AI. Have you started that collaboration process?
Manager: We’re still having conversations with artists we are excited about that have reached out, but they probably won’t create the sort of moment that we want to keep consistently with this project. There’s nothing I can confirm with you right now, but hopefully soon.
Why are you not interested in collaborating with those who have reached out so far? Is it because of the artists’ audience size or their genre?
Manager: It’s more like every moment we have has to add a point and purpose. There hasn’t been anyone yet that feels like they could drive things forward in a meaningful way. I mean, size for sure, and relevancy. We ask ourselves: What does doing a song with that person or act say about the utility and the value of this technology?
Ghost: We’re just always concerned with the bigger picture. When “Whiplash” happened, we all felt like it was right. It was part of a statement I wanted to make about where we were headed. This project is about messaging.
After all this back-and-forth about the eligibility of “Heart on My Sleeve,” do you both feel you’re still in a good place with Harvey Mason Jr. and the Recording Academy?
Manager: For sure, we have nothing but love for Harvey … We have a lot of respect for him, the academy and, ultimately, a lot of respect for all the opinions and arguments out there being made about this. We hear them all and are thinking deeply about it.
Ghostwriter, you’ve opted to not reveal your identity in this interview, but does any part of you wish you could shout from the rooftops that you’re the one behind this project?
Ghost: Maybe it sounds cheesy, but this is a lot bigger than me and Ghostwriter. It’s the future of music. I want to push the needle forward, and if I get to play a significant part in that, then there’s nothing cooler than that to me. I think that’s enough for me.
A version of this story originally appeared in the Oct. 7, 2023, issue of Billboard.
AI-powered hit song analytics platform ChartCipher has successfully completed its beta phase and is now accessible to the public, MyPart and Hit Songs Deconstructed jointly announced on Tuesday (Oct. 10).
“Our mission is to empower music creatives and industry professionals with comprehensive, real-time insights into the DNA of today’s most successful songs, and the trends shaping the music charts,” said Hit Songs Deconstructed co-founder Yael Penn in a statement. “Streams, engagement, and other performance metrics only tell part of the story. ChartCipher is the missing link. It provides comprehensive data reflecting the compositional, lyrical and sonic qualities fueling today’s charts.”
Added Hit Songs Deconstructed co-founder David Penn, “The correlations we can now draw between songwriting and production, spanning various genres and charts, offer unprecedented insights that have the potential to significantly enhance both the creative journey and the decision-making process.”
“ChartCipher’s beta phase confirmed that our AI analytics provide invaluable insights to music creatives and decision-makers,” said MyPart CEO Matan Kollenscher. “From selecting singles through exploring remix and collaboration opportunities to optimizing marketing investments and maximizing catalog utilization, ChartCipher equips users with unique, actionable data vital to making better informed business and creative decisions and understanding the musical landscape.”
Launched in April 2022, ChartCipher combines MyPart’s AI-powered analysis of songs’ compositional, lyrical and sonic qualities with Hit Songs Deconstructed’s analytics delivery platform and song analysis methodologies to offer real-time insights into the qualities that fuel today’s most popular music. The platform utilizes analytics from 10 of Billboard‘s most prominent charts going back to the turn of the century: the Billboard Hot 100, Hot Country Songs, Hot R&B/Hip-Hop Songs, Hot Dance/Electronic Songs, Hot Rock & Alternative Songs, Pop Airplay, Country Airplay, Streaming Songs, Radio Songs and Digital Song Sales.
“Billboard has consistently led the way in global music charts, and we are thrilled to introduce ChartCipher with analytics for 10 of their most prominent charts,” added Yael Penn. “Our longstanding relationship with Billboard, spanning over a decade, marks the start of an exciting new chapter. Together, we aim to provide even deeper, more actionable insights into the driving forces behind today’s most successful songs.”
Gary Trust, senior director of charts at Billboard, added, “Spotlighting ChartCipher’s intriguing insights about the sonic makeup of hit songs further rounds out Billboard’s coverage. We’re excited to add even more analysis of popular charting songs to our reporting on streaming, radio airplay and sales data, as provided by Luminate.”
To celebrate its official launch, ChartCipher has created a Billboard Hot 100 quiz available to anyone who would like to test their knowledge of the compositional, lyrical and production qualities driving the chart.