
Offering a preview of arguments the company might make in its upcoming legal battle with Universal Music Group (UMG), artificial intelligence (AI) company Anthropic PBC told the U.S. Copyright Office this week that the massive scraping of copyrighted materials to train AI models is “quintessentially lawful.”

Music companies, songwriters and artists have argued that such training represents infringement of their works on a vast scale, but Anthropic told the federal agency Monday (Oct. 30) that it was clearly allowed under copyright’s fair use doctrine.

“The copying is merely an intermediate step, extracting unprotectable elements about the entire corpus of works, in order to create new outputs,” the company wrote. “This sort of transformative use has been recognized as lawful in the past and should continue to be considered lawful in this case.”

The filing came as part of an agency study aimed at answering thorny questions about how existing intellectual property laws should be applied to the disruptive new tech. Other AI giants, including OpenAI, Meta, Microsoft, Google and Stability AI, lodged similar filings explaining their views.

But Anthropic’s comments will be of particular interest in the music industry because that company was sued last month by UMG over the very issues in question in the Copyright Office filing. The case, the first filed over music, claims that Anthropic unlawfully copied “vast amounts” of copyrighted songs when it trained its Claude AI tool to spit out new lyrics.

In the filing at the Copyright Office, Anthropic argued that such training was a fair use because it copied material only for the purpose of “performing a statistical analysis of the data” and was not “re-using the copyrighted expression to communicate it to users.”

“To the extent copyrighted works are used in training data, it is for analysis (of statistical relationships between words and concepts) that is unrelated to any expressive purpose of the work,” the company argued.
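
Anthropic’s filing does not describe its training pipeline, but the phrase “statistical relationships between words and concepts” can be made concrete with a toy example. The sketch below is a hypothetical illustration, not Anthropic’s method: it counts word co-occurrences in a tiny placeholder corpus, the simplest form of the statistics that language-model training distills.

```python
# Toy illustration of "statistical relationships between words": count how
# often word pairs co-occur within a small window. Hypothetical example only;
# real models learn far richer statistics from billions of documents.
from collections import Counter

corpus = [  # placeholder corpus standing in for training text
    "the band released a new song",
    "the label released the song as a single",
    "a new single from the band",
]

WINDOW = 2  # count pairs of words within 2 positions of each other
pair_counts = Counter()
for doc in corpus:
    words = doc.split()
    for i, w in enumerate(words):
        for other in words[i + 1 : i + 1 + WINDOW]:
            pair_counts[tuple(sorted((w, other)))] += 1

for pair, count in pair_counts.most_common(5):
    print(pair, count)  # frequent pairs = simple word-level statistics
```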

UMG is sure to argue otherwise, but Anthropic said legal precedent was clearly on its side. Notably, the company cited a 2015 ruling by a federal appeals court that Google was allowed to scan and upload millions of copyrighted books to create its searchable Google Books database. That ruling and others established the principle that “large-scale copying” was a fair use when done to “create tools for searching across those works and to perform statistical analysis.”

“The training process for Claude fits neatly within these same paradigms and is fair use,” Anthropic’s lawyers wrote. “Claude is intended to help users produce new, distinct works and thus serves a different purpose from the pre-existing work.”

Anthropic acknowledged that the training of AI models could lead to “short-term economic disruption.” But the company said such problems were “unlikely to be a copyright issue.”

“It is still a matter that policymakers should take seriously (outside of the context of copyright) and balance appropriately against the long-term benefits of LLMs on the well-being of workers and the economy as a whole by providing an entirely new category of tools to enhance human creativity and productivity,” the company wrote.

In the TikTok era, homemade remixes of songs — typically single tracks that have been sped up or slowed down, or two tracks mashed together — have become ever more popular. Increasingly, they are driving viral trends on the platform and garnering streams off of it. 

Just how popular? In April, Larry Mills, senior vp of sales at the digital rights tech company Pex, wrote that Pex’s tech found “hundreds of millions of modified audio tracks distributed from July 2021 to March 2023,” which appeared on TikTok, SoundCloud, Audiomack, YouTube, Instagram and more. 

On Wednesday (Nov. 1), Mills shared the results of a new Pex analysis — expanded to include streaming services like Spotify, Apple Music, Deezer, and Tidal — estimating that “at least 1% of all songs on [streaming platforms] are modified audio.”

“We’re talking more than 1 million unlicensed, manipulated songs that are diverting revenue away from rightsholders this very minute,” Mills wrote, pointing to homemade reworks of tracks by Halsey or OneRepublic that have amassed millions of plays. “These can generate millions in cumulative revenue for the uploaders instead of the correct rightsholders.”

Labels try to execute a tricky balancing act with user-generated remixes. They usually take down the most popular unauthorized reworks on streaming services and move to release their own official versions in an attempt to pull those plays in-house. But they also find ways to encourage fan remixing, because it remains an effective form of music marketing at a time when most promotional strategies have proved toothless. “Rights holders understand that this process is inevitable, and it’s one of the best ways to bring new life to tracks,” Meng Ru Kuok, CEO of music technology company BandLab, told Billboard earlier this year.

Mills argues that the industry needs a better system for tracking user-generated remixes and making sure royalties are going into the right pockets. “While these hyper-speed remixes may make songs go viral,” he wrote in April, “they’re also capable of diverting royalty payments away from rights holders and into the hands of other creators.” 
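
Pex has not published the internals of its matching system, but the core task here, recognizing a song that has been sped up or slowed down, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration rather than Pex’s actual technology: it compares average chroma (pitch-class) vectors under all 12 semitone rotations, since a simple resampled speed-up transposes pitch. The file paths are placeholders, and production fingerprinting systems are far more robust.

```python
# A toy sketch of matching modified audio to its source, assuming librosa is
# installed. NOT Pex's actual system; file paths are placeholders.
import numpy as np
import librosa

def mean_chroma(path: str) -> np.ndarray:
    """Load a track and reduce it to a normalized 12-bin pitch-class profile."""
    y, sr = librosa.load(path, mono=True)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)  # shape: (12, n_frames)
    v = chroma.mean(axis=1)
    return v / np.linalg.norm(v)

def max_rotated_similarity(ref: np.ndarray, cand: np.ndarray) -> float:
    # A resampled "sped-up" remix transposes pitch, which appears as a
    # circular shift of the chroma vector, so try all 12 rotations.
    return max(float(np.dot(ref, np.roll(cand, k))) for k in range(12))

ref = mean_chroma("original_track.wav")    # placeholder path
cand = mean_chroma("suspected_remix.wav")  # placeholder path
print(f"similarity: {max_rotated_similarity(ref, cand):.3f}")  # ~1.0 = likely match
```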

Since Pex sells technology for identifying all this modified audio, it’s not exactly an unbiased party. But it’s notable that streaming services and distributors don’t have the best track record when it comes to keeping unauthorized content of any kind off their platforms.

It hasn’t been unusual to find leaked songs, especially from rappers with impassioned fan bases like Playboi Carti and Lil Uzi Vert, on Spotify or TikTok; on Spotify, leaked tracks can often be found climbing the viral chart. An unreleased PinkPantheress song sampling Michael Jackson’s classic “Off the Wall” is currently hiding in plain sight on Spotify, masquerading as a podcast.

“Historically, streaming services don’t have an economic incentive to actually care about that,” Deezer CEO Jeronimo Folgueira told Billboard earlier this year. “We don’t care whether you listen to the original Drake, fake Drake, or a recording of the rain. We just want you to pay $10.99.” Folgueira called that incentive structure “actually a bad thing for the industry.”

In addition, many of the distribution companies that act as middlemen between artists or labels and the streaming services operate on a volume model: the more content they upload, the more money they make. That means it’s not in their financial interest to look closely at what they send along to streaming services.

However, the drive to improve this system has taken on new urgency this year. Rights holders and streaming services are going back and forth over how streaming payments should work and whether “an Ed Sheeran stream is worth exactly the same as a stream of rain falling on the roof,” as Warner Music Group CEO Robert Kyncl told financial analysts in May. As the industry moves toward a system where all streams are no longer created equal, it becomes increasingly important to know exactly what’s on these platforms so different streams can be sorted into different buckets.

In addition, the advance of artificial intelligence-driven technology has allowed for easily accessible and accurate-sounding voice-cloning, which has alarmed some executives and artists in a way that sped-up remixes have not. “In our conversations with the labels, we heard that some artists are really pissed about this stuff,” says Geraldo Ramos, co-founder/CEO of the music-tech company Moises. “They’re calling their label to say, ‘Hey, it isn’t acceptable, my voice is everywhere.’”

This presents new challenges, but also perhaps means new opportunities for digital fingerprint technology companies, whether that’s stalwarts like Audible Magic or newer players like Pex. “With AI, just think how much the creation of derivative works is going to exponentially grow — how many covers are going to get created, how many remixes are gonna get created,” Audible Magic CEO Kuni Takahashi told Billboard this summer. “The scale of what we’re trying to identify and the pace of change is going to keep getting faster.”

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.

This week: Lizzo fights back against sexual harassment allegations with the help of a famous lawyer and a creative legal argument; a federal court issues an early ruling in an important copyright lawsuit over artificial intelligence; Kobalt is hit with a lawsuit alleging misconduct by one of the company’s former executives; and much more.

Want to get The Legal Beat newsletter in your email inbox every Tuesday? Subscribe here for free.

THE BIG STORY: Lizzo Hits Back With … Free Speech?

Three months after Lizzo and her touring company were accused of subjecting three of her backup dancers to sexual harassment, religious and racial discrimination and weight-shaming, her lawyers filed their first substantive response – and they didn’t hold back.

“Salacious and specious lawsuit.” “They have an axe to grind.” “A pattern of gross misconduct and failure to perform their job up to par.” “Fabricated sob story.” “Plaintiffs are not victims.” “They are opportunists.”

“Plaintiffs had it all and they blew it,” Lizzo’s lawyers wrote. “Instead of taking any accountability for their own actions, plaintiffs filed this lawsuit against defendants out of spite and in pursuit of media attention, public sympathy and a quick payday with minimal effort.”

That’s not exactly dry legalese, but it’s par for the course in a lawsuit that has already featured its fair share of blunt language from the other side. And it’s hardly surprising given that it came from Martin Singer – an infamously tough celebrity lawyer once described by the Los Angeles Times as “Hollywood’s favorite legal hit man.”

While Singer’s quotes made the headlines, it was his legal argument that caught my attention.

Rather than filing a typical motion to dismiss, Lizzo’s lawyers invoked California’s so-called anti-SLAPP statute — a special type of law enacted in states around the country that makes it easier to end meritless lawsuits targeting free speech (the name is short for “strategic lawsuits against public participation”). Anti-SLAPP laws allow for such cases to be tossed out more quickly, and they sometimes require a plaintiff to repay the legal bills incurred by a defendant.

Anti-SLAPP motions are filed every day, but it’s pretty unusual to see one aimed at dismissing a sexual harassment and discrimination lawsuit filed by former employees against their employer. They’re more common in precisely the opposite scenario: filed by an individual who claims that they’re being unfairly sued by a powerful person to silence accusations of abuse or other wrongdoing.

But in Friday’s motion, Singer and Lizzo’s other lawyers argued that California’s anti-SLAPP law could also apply to the current case because of the creative nature of the work in question. They called the case “a brazen attempt to silence defendants’ creative voices and weaponize their creative expression against them.”

Will that argument hold up in court? Stay tuned…

Go read the full story about Lizzo’s defense, including access to the actual legal documents filed in court.

Other top stories this week…

RULING IN AI COPYRIGHT CASE – A federal judge issued an early-stage ruling in a copyright class action filed by artists against artificial intelligence (AI) firm Stability AI — one of several important lawsuits filed against AI companies over how they use copyrighted content. Though he criticized the case and dismissed many of its claims, the judge allowed it to move toward trial on its central, all-important question: whether it’s illegal to train AI models using copyrighted content.

HALLOWEEN SPECIAL – To celebrate today’s spooky holiday, Billboard turned back the clock all the way to 1988, when the studio behind “A Nightmare on Elm Street” sued Will Smith over a Fresh Prince song and music video that made repeated references to Freddy Krueger. To get the complete bizarre history of the case, go read our story here.

KOBALT FACES CASE OVER EX-EXEC – A female songwriter filed a lawsuit against Kobalt Music Group and former company executive Sam Taylor over allegations that he leveraged his position of power to demand sex from her – and that the company “ignored” and “gaslit” women who complained about him. The case came a year after Billboard’s Elias Leight first reported those allegations. Taylor did not return a request for comment; Kobalt has called the allegations against the company baseless, saying its employees never “condoned or aided any alleged wrongdoing.”

MF DOOM ESTATE BATTLE – The widow of late hip-hop legend MF Doom filed a lawsuit claiming the rapper’s former collaborator Egon stole dozens of the notebooks the rapper used to write down many of his beloved songs. The case claims that Egon took possession of the notebooks while Doom spent a decade in his native England due to visa issues; he remained there until his death in 2020. Egon’s lawyers called the allegations “frivolous and untrue.”

DJ ENVY FRAUD SCANDAL UPDATE – Cesar Pina, a celebrity house-flipper who was charged earlier this month with running a “Ponzi-like investment fraud scheme,” said publicly last week that New York City radio host DJ Envy had “nothing to do” with the real estate deals in question. Critics have argued that Envy, who hosts the popular hip-hop radio show The Breakfast Club, played a key role in Pina’s alleged fraud by promoting him on the air.

UTOPIA SUED AGAIN OVER FAILED DEAL – Utopia Music was hit with another lawsuit over an aborted $26.5 million deal to buy a U.S. music technology company called SourceAudio, this time over allegations that the company violated a $400,000 settlement that aimed to end the dispute. The allegations came after a year of repeated layoffs and restructuring at the Swiss-based music tech company.

A federal judge in San Francisco ruled Monday (Oct. 30) that artificial intelligence (AI) firm Stability AI could not dismiss a lawsuit claiming it had “trained” its platform on copyrighted images, though he also sided with AI companies on key questions.

In an early-stage order in a closely watched case, Judge William Orrick found many defects in the lawsuit’s allegations, and he dismissed some of the case’s claims. But he allowed the case to move forward on its core allegation: that Stability AI built its tools by exploiting vast numbers of copyrighted works.

“Plaintiffs have adequately alleged direct infringement based on the allegations that Stability downloaded or otherwise acquired copies of billions of copyrighted images without permission to create Stable Diffusion, and used those images to train Stable Diffusion,” the judge wrote.

The ruling came in one of many cases filed against AI companies over how they use copyrighted content to train their models. Authors, comedians and visual artists have all filed lawsuits against companies including Microsoft, Meta and OpenAI, alleging that such unauthorized use by the fast-growing industry amounts to a massive violation of copyright law.

Last week, Universal Music Group and others filed the first such case involving music, arguing that Anthropic PBC was infringing copyrights en masse by using “vast amounts” of music to teach its software how to spit out new lyrics.

Rulings in the earlier AI copyright cases could provide important guidance on how such legal questions will be handled by courts, potentially impacting how UMG’s lawsuit and others like it play out in the future.

Monday’s decision came in a class action filed by artists Sarah Andersen, Kelly McKernan and Karla Ortiz against Stability AI Ltd. over its Stable Diffusion — an AI-powered image generator. The lawsuit also targeted Midjourney Inc. and DeviantArt Inc., two companies that use Stable Diffusion as the basis for their own image generators.

In his ruling, Judge Orrick dismissed many of the lawsuit’s claims. He booted McKernan and Ortiz from the case entirely and ordered the plaintiffs to re-file an amended version of their case with much more detail about the specific allegations against Midjourney and DeviantArt.

The judge also cast doubt on the allegation that every “output” image produced by Stable Diffusion would itself be a copyright-infringing “derivative” of the images that were used to train the model — a ruling that could dramatically limit the scope of the lawsuit. The judge suggested that such images might only be infringing if they themselves looked “substantially similar” to a particular training image.

But Judge Orrick included no such critiques for the central accusation that Stability AI infringed Andersen’s copyrights by using her works for training without permission — the basic allegation at the center of all of the AI copyright lawsuits, including the one filed by UMG. Andersen will still need to prove that accusation in future litigation, but the judge said she should be given the chance to do so.

“Even Stability recognizes that determination of the truth of these allegations — whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run — cannot be resolved at this juncture,” Orrick wrote in his decision.

Attorneys for Stability AI, Midjourney and DeviantArt did not return requests for comment. Attorneys for the artists praised the judge for allowing their “core claim” to move forward and onto “a path to trial.”

“As is common in a complex case, Judge Orrick granted the plaintiffs permission to amend most of their other claims,” said plaintiffs’ attorneys Joseph Saveri and Matthew Butterick after the ruling. “We’re confident that we can address the court’s concerns.”

President Joe Biden on Monday will sign a sweeping executive order to guide the development of artificial intelligence — requiring industry to develop safety and security standards, introducing new consumer protections and giving federal agencies an extensive to-do list to oversee the rapidly progressing technology.

The order reflects the government’s effort to shape how AI evolves in a way that can maximize its possibilities and contain its perils. AI has been a source of deep personal interest for Biden, with its potential to affect the economy and national security.

White House chief of staff Jeff Zients recalled Biden giving his staff a directive to move with urgency on the issue, having considered the technology a top priority.

“We can’t move at a normal government pace,” Zients said the Democratic president told him. “We have to move as fast, if not faster than the technology itself.”

In Biden’s view, the government was late to address the risks of social media, and now U.S. youth are grappling with related mental health issues. AI has the positive ability to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services, among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.

The order builds on voluntary commitments already made by technology companies. It’s part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate new text, images and sounds.

Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.

The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The order also touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.

An administration official who previewed the order on a Sunday call with reporters said the to-do lists within the order will be implemented and fulfilled on timelines ranging from 90 to 365 days, with the safety and security items facing the earliest deadlines. The official briefed reporters on condition of anonymity, as required by the White House.

Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes, despite other pressing matters including the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.

Biden was profoundly curious about the technology in the months of meetings that led up to drafting the order. His science advisory council focused on AI at two meetings, and his Cabinet discussed it at two more. The president also pressed tech executives and civil society advocates about the technology’s capabilities at multiple gatherings.

“He was as impressed and alarmed as anyone,” deputy White House chief of staff Bruce Reed said in an interview. “He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”

The possibility of false images and sounds led the president to prioritize the labeling and watermarking of anything produced by AI. Biden also wanted to thwart the risk of older Americans getting a phone call from someone who sounded like a loved one, only to be scammed by an AI tool.

Meetings could go beyond schedule, with Biden telling civil society advocates in a ballroom of San Francisco’s Fairmont Hotel in June: “This is important. Take as long as you need.”

The president also talked with scientists and saw the upside AI could offer if harnessed for good. He listened to a Nobel Prize-winning physicist talk about how AI could explain the origins of the universe. Another scientist showed how AI could model extreme weather like 100-year floods, as the past data used to assess these events has lost its accuracy because of climate change.

The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One.” The film’s villain is a sentient and rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes.

“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said Reed, who watched the film with the president.

With Congress still in the early stages of debating AI safeguards, Biden’s order stakes out a U.S. perspective as countries around the world race to establish their own guidelines. After more than two years of deliberation, the European Union is putting the final touches on a comprehensive set of regulations that targets the riskiest applications for the technology. China, a key AI rival to the U.S., has also set some rules.

U.K. Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend.

The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.

But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to make sure its policies reflected their concerns about AI’s real-world harms.

The American Civil Liberties Union is among the groups that met with the White House to try to ensure “we’re holding the tech industry and tech billionaires accountable” so that algorithmic tools “work for all of us and not just a few,” said ReNika Moore, director of the ACLU’s racial justice program.

Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement’s use of AI tools, including at U.S. borders.

“These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology,” Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups, and has been tied to mistaken arrests.

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.

This week: Universal Music Group (UMG) and other music companies file a hotly-anticipated copyright lawsuit over how artificial intelligence (AI) models are trained; DJ Envy’s business partner Cesar Pina is hit with criminal charges claiming he ran a “Ponzi-like” fraud scheme; Megan Thee Stallion reaches a settlement with her former label to end a contentious legal battle; Fyre Fest fraudster Billy McFarland is hit with a civil lawsuit by a jilted investor in his new project; and more.

Want to get The Legal Beat newsletter in your email inbox every Tuesday? Subscribe here for free.

THE BIG STORY: AI Music Heads To Court

When UMG and several other music companies filed a lawsuit last week accusing an artificial intelligence company called Anthropic PBC of violating their copyrights en masse to “train” its AI models, my initial reaction was: “What took so long?”

The creators of other forms of content had already been in court for months. A group of photographers and Getty Images sued Stability AI over its training practices in January, and a slew of book authors, including Game of Thrones writer George R.R. Martin and legal novelist John Grisham, sued ChatGPT-maker OpenAI over the same thing in June and again in September. And music industry voices, like the RIAA and UMG itself, had repeatedly signaled that they viewed such training as illegal.

For months, we asked around, scanned dockets and waited for the music equivalent. Was the delay a deliberate litigation strategy, allowing the fast-changing market and the existing lawsuits to play out further before diving in? Was the music business focusing on legislative, regulatory or business solutions instead of the judicial warpath it chose during the file-sharing debacle of the early 2000s?

Maybe they were just waiting for the right defendant. In a complaint filed in Nashville federal court on Oct. 18, UMG claimed that Anthropic — a company that got a $4 billion investment from Amazon last month — “unlawfully copies and disseminates vast amounts of copyrighted works” in the process of teaching its models to spit out new lyrics. The lengthy complaint, co-signed by Concord Music Group, ABKCO and other music publishers, echoed arguments made by many rightsholders in the wake of the AI boom: “Copyrighted material is not free for the taking simply because it can be found on the internet.”

Like the previous cases filed by photographers and authors, the new lawsuit poses something of an existential question for AI companies. AI models are only as good as the “inputs” they ingest; if federal courts make all copyrighted material off-limits for such purposes, it would not only make current models illegal but would undoubtedly hamstring further development.

The battle ahead will center on fair use — the hugely important legal doctrine that allows for the free use of copyrighted material in certain situations. Fair use might make you think of parody or criticism, but more recently, it’s empowered new technologies: In 1984, the U.S. Supreme Court ruled that the VCR was protected by fair use; in 2007, a federal appeals court ruled that Google Image search was fair use.

Are AI models, which imbibe millions of copyrighted works to create something new, the next landmark fair use? Or are they just a new form of copyright piracy on a vast new scale? We’re about to find out.

More key details about the AI case:

– The timing of the lawsuit would suggest that UMG is aiming for a carrot-and-stick approach when it comes to AI. On the same day the new case was filed, UMG announced that it was partnering with a company called BandLab Technologies to forge “an ethical approach to AI.” Hours later, news also broke that UMG and other labels were actively negotiating with YouTube on a new AI tool that would allow creators to make videos using the voices of popular (consenting) recording artists.

– The huge issue in the case is whether the use of training inputs amounts to infringement, but UMG’s lawyers also allege that Anthropic violates its copyrights with the outputs its models spit out — that it sometimes simply presents verbatim lyrics to songs. That adds a dimension not present in the earlier AI cases filed by authors and photographers, and it could perhaps make it a bit easier for UMG to win.

– While it’s the first such case about music, it should be noted that the Anthropic lawsuit deals only with song lyrics — meaning not with sound recordings, written musical notation, or voice likeness rights. While a ruling in any of the AI training cases would likely set precedent across different areas of copyright, those specific issues will have to wait for a future lawsuit, or perhaps an act of Congress.

Go read the full story on UMG’s lawsuit, with access to the actual complaint filed in court.

Other top stories this week…

MEGAN THEE SETTLEMENT – Megan Thee Stallion reached an agreement with her record label 1501 Certified Entertainment to end more than three years of ugly litigation over a record deal that Megan calls “unconscionable.” After battling for more than a year over whether she owed another album under the contract, the two sides now say they will “amicably part ways.”

DJ ENVY SCANDAL DEEPENS – Cesar Pina, a celebrity house-flipper with close ties to New York City radio host DJ Envy, was arrested on federal charges that he perpetrated “a multimillion-dollar Ponzi-like investment fraud scheme.” Though Envy was not charged, federal prosecutors specifically noted that Pina had “partnered with a celebrity disc jockey and radio personality” — listed in the charges as “Individual-1” — to boost his reputation as a real estate guru. The charges came after months of criticism against Envy, who is named in a slew of civil lawsuits filed by alleged victims who say he helped promote the fraud.

FOOL ME ONCE… – Billy McFarland, the creator of the infamous Fyre Festival who served nearly four years in prison for fraud and lying to the FBI, is facing a new civil lawsuit claiming he ripped off an investor who gave him $740,000 for his new PYRT venture. The case was filed by Jonathan Taylor, a fellow felon who met McFarland in prison after pleading guilty to a single count of child sex trafficking.

AI-GENERATED CLOSING ARGS? – Months after ex-Fugees rapper Prakazrel “Pras” Michel was convicted on foreign lobbying charges, he demanded a new trial by making extraordinary accusations against his ex-lawyer David Kenner. Michel claims Kenner, a well-known L.A. criminal defense attorney, used an unproven artificial intelligence (AI) tool called EyeLevel.AI to craft closing arguments — and that he did so because he owned a stake in the tech platform. Kenner declined to comment, but EyeLevel has denied that Kenner has any equity in the company.

ROLLING STONES GET SATISFACTION – A federal judge dismissed a lawsuit accusing The Rolling Stones members Mick Jagger and Keith Richards of copying their 2020 single “Living in a Ghost Town” from a pair of little-known songs, ruling that the dispute — a Spanish artist suing two Brits — clearly didn’t belong in his Louisiana federal courthouse.

JUICE WRLD COPYRIGHT CASE – Dr. Luke and the estate of the late Juice WRLD were hit with a copyright lawsuit that claims they unfairly cut out one of the co-writers (an artist named PD Beats) from the profits of the rapper’s 2021 track “Not Enough.”

YouTube is planning to roll out a new artificial intelligence tool that will allow creators to make videos using the voices of popular recording artists — but inking deals with record companies to launch the beta version is taking longer than expected, sources tell Billboard.

The new AI tool, which YouTube had hoped to debut at its Made On YouTube event in September, will in beta let a select pool of artists give permission to a select group of creators to use their voices in videos on the platform. From there, the product could be released broadly to all users with the voices of artists who choose to opt in. YouTube is also looking to those artists to contribute input that will help steer the company’s AI strategy beyond this, sources say.

The major labels, Universal Music Group, Sony Music Entertainment and Warner Music Group, are still negotiating licensing deals that would cover voice rights for the beta version of the tool, sources say; a wide launch would require separate agreements. Label leaders have made public statements about their commitment to embracing AI in recent months, with UMG CEO Lucian Grainge saying the technology could “amplify human imagination and enrich musical creativity in extraordinary new ways” and WMG CEO Robert Kyncl saying, “You have to embrace the technology, because it’s not like you can put technology in a bottle.” Because they want to be seen as proponents of progress rather than as holding up innovation, some music executives worry they’ve given up some of their leverage in these initial deals. Label executives are especially conscious of projecting that image now, having shortsightedly resisted the shift from CDs to downloads two decades ago, a stance that allowed Apple to unbundle the album and sent the music business into years of decline.

Some executives say it’s also been challenging to find top artists to participate in the new YouTube tool, with even some of the most forward-thinking acts hesitant to put their voices in the hands of unknown creators who could use them to make statements or sing lyrics they might not like.

The labels, sources say, view the deal as potentially precedent-setting for future AI deals to come — as well as creating a “framework,” as one source put it, for YouTube’s future AI initiatives. The key issues in negotiations are how the AI model is trained, whether artists have the option to opt in (or out), and how monetization works: Are artists paid for the use of their music as an input into the AI model, or for the output that’s created using the AI tool? While negotiations are taking time, label sources say YouTube is seen as an important, reliable early partner in this space, based on the platform’s work developing its Content ID system, which identifies and monetizes copyrighted materials in user-generated videos.

Publishing, meanwhile, is even more complicated: even with a small sampling of artists to launch the tool in beta, there could be hundreds of songwriters with credits across their catalogs, which would be sampled by the model. Because of this, a source suggests that YouTube may prefer paying a lump-sum licensing fee that publishers would then need to figure out how to divide among their writers.

As complicated as the deal terms may be, sources say music rights holders are acting in good faith to get a deal done. That’s because of a dominant belief that this sort of technology is inevitable, and that if the music business doesn’t come to the table to create licensing deals now, it will get left behind. However, one source familiar with the negotiations says this attitude is also putting music companies at a disadvantage because it leaves less room to drive a hard bargain.

For months, AI-soundalike tools that synthesize vocals to sound like famous artists have been garnering attention and triggering debate. The issue hit the mainstream in April when an anonymous musician calling himself Ghostwriter released a song to streaming services featuring soundalike versions of Drake and The Weeknd that he said were created with artificial intelligence. The song was quickly taken down for copyright infringement of the underlying recording, not over the voices’ likenesses; a month later, Billboard reported that the streaming services seemed amenable to requests from the major labels to remove recordings with AI-generated vocals created to sound like popular artists.

In August, YouTube announced a new initiative with UMG artists and producers it called an “AI Music Incubator” that would “explore, experiment and offer feedback on the AI-related musical tools and products,” according to a blog post by Grainge at the time. “Once these tools are launched, the hope is that more artists who want to participate will benefit from and enjoy this creative suite.” That partnership was separate from the licensing negotiations currently taking place and the beta product in development.

On Wednesday, UMG, Concord Music Group, ABKCO and other music publishers filed a lawsuit against AI platform Anthropic PBC for using copyrighted song lyrics to “train” its software. This marked the first major lawsuit in what is expected to be a key legal battle over the future of AI music, and, as one source put it, a signal that major labels will litigate against AI companies they see as bad players.

Universal Music Group (UMG) and other music companies are suing an artificial intelligence platform called Anthropic PBC for using copyrighted song lyrics to “train” its software — marking the first major lawsuit in what is expected to be a key legal battle over the future of AI music.

In a complaint filed Wednesday morning (Oct. 18) in Nashville federal court, lawyers for UMG, Concord Music Group, ABKCO and other music publishers accused Anthropic of violating the companies’ copyrights en masse by using vast numbers of songs to help its AI models learn how to spit out new lyrics.

“In the process of building and operating AI models, Anthropic unlawfully copies and disseminates vast amounts of copyrighted works,” lawyers for the music companies wrote. “Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.”

A spokesperson for Anthropic did not immediately return a request for comment.

The new lawsuit is similar to cases filed by visual artists over the unauthorized use of their works to train AI image generators, as well as cases filed by authors like Game of Thrones writer George R.R. Martin and novelist John Grisham over the use of their books. But it’s the first to squarely target music.

AI models like the popular ChatGPT are “trained” to produce new content by feeding them vast quantities of existing works known as “inputs.” In the case of AI music, that process involves huge numbers of songs. Whether doing so infringes the copyrights to that underlying material is something of an existential question for the booming sector, since depriving AI models of new inputs could limit their abilities.
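
As a toy illustration of what training on “inputs” means, the sketch below builds a bigram model, which is hypothetical and vastly simpler than ChatGPT or Claude: it records next-word counts from whatever text it is fed, so everything it can output is a function of its training inputs.

```python
# A bigram model as a toy stand-in for AI training: everything it can
# generate is derived from the text it ingested. Placeholder corpus only;
# real models are neural networks trained on billions of documents.
import random
from collections import Counter, defaultdict

training_text = "the song was a hit and the song was a single"  # placeholder input

model = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    model[prev][nxt] += 1  # "train": record next-word statistics

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        nexts, weights = zip(*options.items())
        out.append(random.choices(nexts, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # output reflects only what was in the inputs
```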

Major music companies and other industry players have already argued that such training is illegal. Last year, the RIAA said that any use of copyrighted songs to build AI platforms “infringes our members’ rights.” In April, when UMG asked Spotify and other streamers to stop allowing AI companies to use their platforms to ingest music, it said it “will not hesitate to take steps to protect our rights.”

On Wednesday, the company took those steps. In the lawsuit, it said Anthropic “profits richly” from the “vast troves of copyrighted material that Anthropic scrapes from the internet.”

“Unlike songwriters, who are creative by nature, Anthropic’s AI models are not creative — they depend entirely on the creativity of others,” lawyers for the publishers wrote. “Yet, Anthropic pays nothing to publishers, their songwriters, or the countless other copyright owners whose copyrighted works Anthropic uses to train its AI models. Anthropic has never even attempted to license the use of Publishers’ lyrics.”

In the case ahead, the key battle line will be over whether the unauthorized use of proprietary music to train an AI platform is nonetheless legal under copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law.

Historically, fair use enabled critics to quote from the works they were dissecting, or parodists to use existing materials to mock them. But more recently, it’s also empowered new technologies: In 1984, the U.S. Supreme Court ruled that the VCR was protected by fair use; in 2007, a federal appeals court ruled that Google Image search was fair use.

In Wednesday’s complaint, UMG and the other publishers seemed intent on heading off any kind of fair use defense. They argued that Anthropic’s behavior would harm the market for licensing lyrics to AI services that actually pay for licenses — a key consideration in any fair use analysis.

“Anthropic is depriving Publishers and their songwriters of control over their copyrighted works and the hard-earned benefits of their creative endeavors, it is competing unfairly against those website developers that respect the copyright law and pay for licenses, and it is undermining existing and future licensing markets in untold ways,” the publishers wrote.

In addition to targeting Anthropic’s use of songs as inputs, the publishers claim that the material produced by the company’s AI model also infringes their lyrics: “Anthropic’s AI models generate identical or nearly identical copies of those lyrics, in clear violation of publishers’ copyrights.”

Such litigation might only be the first step in setting national policy on how AI platforms can use copyrighted music, with legislative efforts close behind. At a hearing in May, Sen. Marsha Blackburn (R-Tenn.) repeatedly grilled the CEO of the company behind ChatGPT about how he and others planned to “compensate the artist.”

“If I can go in and say ‘write me a song that sounds like Garth Brooks,’ and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use,” Blackburn said. “If it was radio play, it would be there. If it was streaming, it would be there.”

SINGAPORE — BandLab Technologies has pledged to engage responsibly and ethically with AI, part of a “strategic collaboration” with Universal Music Group.

Announced today (Oct. 18), Singapore-based BandLab becomes the first music creation platform to throw its support behind the Human Artistry Campaign (HAC), a global coalition devoted to ensuring fair and safe play with AI technologies.

“As the digital landscape of music continues to evolve,” reads a joint statement, “this collaboration is designed to be a beacon of innovation and ethical practice in the industry and heralds a new era where artists are supported and celebrated at every stage of their creative journey.”

Led by CEO Meng Ru Kuok, BandLab Technologies operates the largest social music creation platform, BandLab. Among the service’s breakouts is Houston artist d4vd (pronounced “David”), who, in July 2022 as a 17-year-old, released “Romantic Homicide,” a track he had made using BandLab. After going viral on TikTok, the song entered the Billboard Hot 100 (peaking at No. 45) as d4vd signed to Darkroom/Interscope. He’s one of a growing number of BandLab users who’ve developed deeper ties with UMG.

“We welcome BandLab’s commitment to an ethical approach to AI through their accessible technology, tools and platform,” comments Lucian Grainge, chairman & CEO, Universal Music Group, in a statement. “We are excited to add BandLab Technologies to a growing list of UMG partners whose responsible and innovative AI will benefit the creative community.”

Further to Grainge’s comments, Michael Nash, executive VP and chief digital officer at UMG, points to an expanding relationship with BandLab, noting “they are an excellent partner that is compelling for us on multiple fronts.”

BandLab Technologies’ assets are grouped under the holding company Caldecott Music Group, for which Meng serves as CEO and founder. “BandLab Technologies and our wider Caldecott Music Group network is steadfast in its respect for artists’ rights,” he comments in a statement, “and the infinite potential of AI in music creation and we believe our millions of users around the world share in this commitment and excitement.”

Meng showed his support in August at Ai4, an AI conference in Las Vegas, by way of the presentation “Augmenting the Artist: How AI is Redefining Music Creation and Innovation.” During that session, he discussed the importance of ethical AI training and development and showcased the company’s AI music idea generator tool SongStarter.

New technologies promise “unbelievable possibilities to break down more barriers for creators,” he notes, but “it’s essential that artists’ and songwriters’ rights be fully respected and protected to give these future generations a chance of success.”

The Human Artistry Campaign was announced during South by Southwest in March along with a series of seven key principles for protecting artists in the age of AI. More than 150 industry organizations and businesses have signed up.

UMG’s AI collaboration with BandLab follows separate arrangements forged with Endel and YouTube.

This “first of its kind” strategic partnership with BandLab Technologies, say reps for UMG, aligns the two organizations “to promote responsible AI practices and pro-creator standards, as well as enabling new opportunities for artists.”

Six months after ex-Fugees rapper Prakazrel “Pras” Michel was convicted on foreign lobbying charges, he’s now demanding a new trial — making the extraordinary claim that his ex-lawyer used an unproven artificial intelligence (AI) tool to craft closing arguments because he owned a stake in the tech platform.


In a Monday (Oct. 16) filing in D.C. federal court, Michel claimed attorney David Kenner “utterly failed” him during the April trial, denying him his constitutional right to effective counsel. Among other shortcomings, the rapper said Kenner outsourced prep work to “inexperienced contract attorneys” and “failed to object to damaging and inadmissible testimony” during the trial.

Most unusual of all, Michel accused Kenner of using “an experimental artificial intelligence program” to draft his closing arguments for the trial, resulting in a deeply flawed presentation. And he claimed Kenner did so because he had an “undisclosed financial stake” in the company and wanted to use Michel’s trial to promote it.

“Michel never had a chance,” the rapper’s new lawyers wrote Monday. “Michel’s counsel was deficient throughout, likely more focused on promoting his AI program … than zealously defending Michel. The net effect was an unreliable verdict.”

Kenner did not immediately return a request for comment on Tuesday.

Michel was charged in 2019 with funneling money from a now-fugitive Malaysian financier through straw donors to Barack Obama’s 2012 re-election campaign. He was also accused of trying to squelch a Justice Department investigation and influence an extradition case on behalf of China under the Trump administration.

In April, following a trial that included testimony from actor Leonardo DiCaprio and former U.S. Attorney General Jeff Sessions, Michel was convicted on 10 counts including conspiracy, witness tampering and failing to register as an agent of China.

During that trial, Michel was represented by Kenner, a well-known Los Angeles criminal defense attorney who has previously repped hip-hop luminaries like Snoop Dogg, Suge Knight and, most recently, Tory Lanez. But earlier this summer, Michel asked for permission to replace Kenner with a new team of lawyers; in August, Kenner and his firm were swapped out for lawyers from the national firm ArentFox Schiff.

Now, it’s clear why. In Monday’s filing, Michel’s new lawyers accused Kenner of wide-ranging failures — including many that have nothing to do with AI tools or secret motives. They claim he “outsourced trial preparations” to other lawyers and “failed to familiarize himself with the charged statutes or required elements.” They also say he “overlooked nearly every colorable defense” and “failed to object to damaging and inadmissible testimony, betraying a failure to understand the rules of evidence.”

But the most unusual allegations concerned an alleged scheme to promote EyeLevel.AI, a computer program designed to help attorneys win cases by digesting trial transcripts and other data. Days after the trial concluded, the company put out a press release highlighting its use in the Michel trial, calling it the “first use of generative AI in a federal trial” and quoting Kenner.

“This is an absolute game changer for complex litigation,” Kenner said in the press release. “The system turned hours or days of legal work into seconds. This is a look into the future of how cases will be conducted.”

But according to Michel’s new lawyers, Kenner’s use of the program was harmful, not helpful, to his client’s case. They say it may have led to some smaller mistakes, like Kenner misattributing a Puff Daddy song to the Fugees, but also to massive legal errors, like conflating separate allegations against Michel — an error that Michel’s new lawyers say caused Kenner to make “frivolous” arguments before the jury.

“At bottom, the AI program failed Kenner, and Kenner failed Michel,” Michel’s attorneys at ArentFox Schiff wrote. “The closing argument was deficient, unhelpful, and a missed opportunity that prejudiced the defense.”

According to Michel’s new lawyers, the mistake of using the AI tool was compounded by Kenner’s alleged motive: an undisclosed ownership stake in the startup that sells it. By using a criminal trial as a means to promote a product, Monday’s filing says, Kenner created “an extraordinary conflict of interest.”

“Kenner and [his partner]’s decision to elevate their financial interest in the AI program over Michel’s interest in a competent and vigorous defense adversely affected Kenner’s trial performance, as the closing argument was frivolous, missed nearly every colorable argument, and damaged the defense,” they wrote.