Offering a preview of arguments the company might make in its upcoming legal battle with Universal Music Group (UMG), artificial intelligence (AI) company Anthropic PBC told the U.S. Copyright Office this week that the massive scraping of copyrighted materials to train AI models is "quintessentially lawful."
Music companies, songwriters and artists have argued that such training represents an infringement of their works at a vast scale, but Anthropic told the federal agency Monday (Oct. 30) that it was clearly allowed under copyright's fair use doctrine.
"The copying is merely an intermediate step, extracting unprotectable elements about the entire corpus of works, in order to create new outputs," the company wrote. "This sort of transformative use has been recognized as lawful in the past and should continue to be considered lawful in this case."
The filing came as part of an agency study aimed at answering thorny questions about how existing intellectual property laws should be applied to the disruptive new tech. Other AI giants, including OpenAI, Meta, Microsoft, Google and Stability AI, all lodged similar filings explaining their views.
But Anthropic's comments will be of particular interest in the music industry because that company was sued last month by UMG over the very issues in question in the Copyright Office filing. The case, the first filed over music, claims that Anthropic unlawfully copied "vast amounts" of copyrighted songs when it trained its Claude AI tool to spit out new lyrics.
In the filing at the Copyright Office, Anthropic argued that such training was a fair use because it copied material only for the purpose of "performing a statistical analysis of the data" and was not "re-using the copyrighted expression to communicate it to users."
"To the extent copyrighted works are used in training data, it is for analysis (of statistical relationships between words and concepts) that is unrelated to any expressive purpose of the work," the company argued.
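For a concrete, if deliberately toy, sense of what "statistical relationships between words" can mean, the sketch below counts word co-occurrences across a tiny corpus. It is purely illustrative and bears no relation to Anthropic's actual training pipeline, which is vastly more complex:

```python
from collections import Counter
from itertools import combinations

# A tiny stand-in corpus; a real model ingests billions of documents.
corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the lazy dog sleeps while the quick fox runs",
    "a quick fox outruns a lazy dog",
]

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        pair_counts[(a, b)] += 1

# What remains is aggregate statistics, not any sentence stored verbatim.
for pair, count in pair_counts.most_common(5):
    print(pair, count)
```

Whether extracting this kind of aggregate information from protected works is "unrelated to any expressive purpose," as Anthropic contends, is precisely what courts will now have to decide.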
UMG is sure to argue otherwise, but Anthropic said legal precedent was clearly on its side. Notably, the company cited a 2015 ruling by a federal appeals court that Google was allowed to scan and upload millions of copyrighted books to create its searchable Google Books database. That ruling and others established the principle that "large-scale copying" was a fair use when done to "create tools for searching across those works and to perform statistical analysis."
"The training process for Claude fits neatly within these same paradigms and is fair use," Anthropic's lawyers wrote. "Claude is intended to help users produce new, distinct works and thus serves a different purpose from the pre-existing work."
Anthropic acknowledged that the training of AI models could lead to "short-term economic disruption." But the company said such problems were "unlikely to be a copyright issue."
"It is still a matter that policymakers should take seriously (outside of the context of copyright) and balance appropriately against the long-term benefits of LLMs on the well-being of workers and the economy as a whole by providing an entirely new category of tools to enhance human creativity and productivity," the company wrote.
In the TikTok era, homemade remixes of songs – typically single tracks that have been sped up or slowed down, or two tracks mashed together – have become ever more popular. Increasingly, they are driving viral trends on the platform and garnering streams off of it.
Just how popular? In April, Larry Mills, senior vp of sales at the digital rights tech company Pex, wrote that Pex's tech found "hundreds of millions of modified audio tracks distributed from July 2021 to March 2023," which appeared on TikTok, SoundCloud, Audiomack, YouTube, Instagram and more.
On Wednesday (Nov. 1), Mills shared the results of a new Pex analysis – expanded to include streaming services like Spotify, Apple Music, Deezer, and Tidal – estimating that "at least 1% of all songs on [streaming platforms] are modified audio."
"We're talking more than 1 million unlicensed, manipulated songs that are diverting revenue away from rightsholders this very minute," Mills wrote, pointing to homemade reworks of tracks by Halsey or OneRepublic that have amassed millions of plays. "These can generate millions in cumulative revenue for the uploaders instead of the correct rightsholders."
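To see the detection problem in miniature, the following toy sketch (written with numpy and purely illustrative, since Pex's actual fingerprinting technology is proprietary and far more sophisticated) matches a sped-up copy back to its original by searching over candidate speed factors:

```python
import numpy as np

SR = 8000  # sample rate (Hz) for this toy example

def change_speed(signal: np.ndarray, factor: float) -> np.ndarray:
    """Resample so the audio plays `factor` times faster."""
    n_out = int(len(signal) / factor)
    positions = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(positions, np.arange(len(signal)), signal)

# A toy "original": a decaying 220 Hz tone, two seconds long.
t = np.arange(int(SR * 2.0)) / SR
original = np.sin(2 * np.pi * 220 * t) * np.exp(-t)

# An unauthorized "remix": the same audio sped up by 25%.
remix = change_speed(original, 1.25)

# Scan candidate speed factors; the right one restores a near-perfect match.
best_factor, best_score = None, -1.0
for factor in np.arange(0.50, 2.01, 0.05):
    candidate = change_speed(remix, 1.0 / factor)  # undo hypothesized speed-up
    n = min(len(candidate), len(original))
    a = original[:n] - original[:n].mean()
    b = candidate[:n] - candidate[:n].mean()
    score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    if score > best_score:
        best_factor, best_score = factor, score

print(f"best match at speed factor {best_factor:.2f} (correlation {best_score:.3f})")
```

The point of the exercise: a naive sample-for-sample comparison fails the moment a track is sped up, which is why identifying modified audio at the scale Mills describes is a genuinely hard engineering problem.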
Labels try to execute a tricky balancing act with user-generated remixes. They usually strike down the most popular unauthorized reworks on streaming services and move to release their own official versions in an attempt to pull those plays in-house. But they also find ways to encourage fan remixing, because it remains an effective form of music marketing at a time when most promotional strategies have proved toothless. "Rights holders understand that this process is inevitable, and it's one of the best ways to bring new life to tracks," Meng Ru Kuok, CEO of music technology company BandLab, said to Billboard earlier this year.
Mills argues that the industry needs a better system for tracking user-generated remixes and making sure royalties are going into the right pockets. "While these hyper-speed remixes may make songs go viral," he wrote in April, "they're also capable of diverting royalty payments away from rights holders and into the hands of other creators."
Since Pex sells technology for identifying all this modified audio, it's not exactly an unbiased party. But it's notable that streaming services and distributors don't have the best track record when it comes to keeping unauthorized content of any kind off their platforms.
It hasn't been unusual to find leaked songs – especially from rappers with impassioned fan bases like Playboi Carti and Lil Uzi Vert – on Spotify, where leaked tracks can often be found climbing the viral chart, or on TikTok. An unreleased PinkPantheress song sampling Michael Jackson's classic "Off the Wall" is currently hiding in plain sight on Spotify, masquerading as a podcast.
"Historically, streaming services don't have an economic incentive to actually care about that," Deezer CEO Jeronimo Folgueira told Billboard earlier this year. "We don't care whether you listen to the original Drake, fake Drake, or a recording of the rain. We just want you to pay $10.99." Folgueira called that incentive structure "actually a bad thing for the industry."
In addition, many of the distribution companies that act as middlemen between artists and labels and the streaming services operate on a volume model – the more content they upload, the more money they make – which means it's not in their financial interest to look closely at what they send along to streaming services.
However, the drive to improve this system has taken on new urgency this year. Rights holders and streaming services are going back and forth over how streaming payments should work and whether "an Ed Sheeran stream is worth exactly the same as a stream of rain falling on the roof," as Warner Music Group CEO Robert Kyncl told financial analysts in May. As the industry starts to move to a system where all streams are no longer created equal, it becomes increasingly important to know exactly what's on these platforms so the industry can sort different streams into different buckets.
In addition, the advance of artificial intelligence-driven technology has allowed for easily accessible and accurate-sounding voice cloning, which has alarmed some executives and artists in a way that sped-up remixes have not. "In our conversations with the labels, we heard that some artists are really pissed about this stuff," says Geraldo Ramos, co-founder/CEO of the music-tech company Moises. "They're calling their label to say, 'Hey, it isn't acceptable, my voice is everywhere.'"
This presents new challenges, but also perhaps new opportunities for digital fingerprint technology companies, whether that's stalwarts like Audible Magic or newer players like Pex. "With AI, just think how much the creation of derivative works is going to exponentially grow – how many covers are going to get created, how many remixes are gonna get created," Audible Magic CEO Kuni Takahashi told Billboard this summer. "The scale of what we're trying to identify and the pace of change is going to keep getting faster."
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Lizzo fights back against sexual harassment allegations with the help of a famous lawyer and a creative legal argument; a federal court issues an early ruling in an important copyright lawsuit over artificial intelligence; Kobalt is hit with a lawsuit alleging misconduct by one of the company's former executives; and much more.
THE BIG STORY: Lizzo Hits Back With … Free Speech?
Three months after Lizzo and her touring company were accused of subjecting three of her backup dancers to sexual harassment, religious and racial discrimination and weight-shaming, her lawyers filed their first substantive response – and they didn't hold back.
"Salacious and specious lawsuit." "They have an axe to grind." "A pattern of gross misconduct and failure to perform their job up to par." "Fabricated sob story." "Plaintiffs are not victims." "They are opportunists."
"Plaintiffs had it all and they blew it," Lizzo's lawyers wrote. "Instead of taking any accountability for their own actions, plaintiffs filed this lawsuit against defendants out of spite and in pursuit of media attention, public sympathy and a quick payday with minimal effort."
That's not exactly dry legalese, but it's par for the course in a lawsuit that has already featured its fair share of blunt language from the other side. And it's hardly surprising given that it came from Martin Singer – an infamously tough celebrity lawyer once described by the Los Angeles Times as "Hollywood's favorite legal hit man."
While Singer's quotes made the headlines, it was his legal argument that caught my attention.
Rather than a normal motion to dismiss the case, Lizzo's motion cited California's so-called anti-SLAPP statute – a special type of law enacted in states around the country that makes it easier to end meritless lawsuits that threaten free speech (known as "strategic lawsuits against public participation"). Anti-SLAPP laws allow for such cases to be tossed out more quickly, and they sometimes require a plaintiff to repay the legal bills incurred by a defendant.
Anti-SLAPP motions are filed every day, but it's pretty unusual to see one aimed at dismissing a sexual harassment and discrimination lawsuit filed by former employees against their employer. They're more common in precisely the opposite scenario: filed by an individual who claims that they're being unfairly sued by a powerful person to silence accusations of abuse or other wrongdoing.
But in Friday's motion, Singer and Lizzo's other lawyers argued that California's anti-SLAPP law could also apply to the current case because of the creative nature of the work in question. They called the case "a brazen attempt to silence defendants' creative voices and weaponize their creative expression against them."
Will that argument hold up in court? Stay tuned…
Go read the full story about Lizzo's defense, including access to the actual legal documents filed in court.
Other top stories this week…
RULING IN AI COPYRIGHT CASE – A federal judge issued an early-stage ruling in a copyright class action filed by artists against artificial intelligence (AI) firm Stability AI – one of several important lawsuits filed against AI companies over how they use copyrighted content. Though he criticized the case and dismissed many of its claims, the judge allowed it to move toward trial on its central, all-important question: whether it's illegal to train AI models by using copyrighted content.
HALLOWEEN SPECIAL – To celebrate today's spooky holiday, Billboard turned back the clock all the way to 1988, when the studio behind "A Nightmare on Elm Street" sued Will Smith over a Fresh Prince song and music video that made repeated references to Freddy Krueger. To get the complete bizarre history of the case, go read our story here.
KOBALT FACES CASE OVER EX-EXEC – A female songwriter filed a lawsuit against Kobalt Music Group and former company executive Sam Taylor over allegations that he leveraged his position of power to demand sex from her – and that the company "ignored" and "gaslit" women who complained about him. The case came a year after Billboard's Elias Leight first reported those allegations. Taylor did not return a request for comment; Kobalt has called the allegations against the company baseless, saying its employees never "condoned or aided any alleged wrongdoing."
MF DOOM ESTATE BATTLE – The widow of late hip-hop legend MF Doom filed a lawsuit claiming the rapper's former collaborator Egon stole dozens of the notebooks the rapper used to write down many of his beloved songs. The case claims that Egon took possession of the notebooks while Doom, sidelined by visa issues, spent a decade in his native England, where he remained until his death in 2020. Egon's lawyers called the allegations "frivolous and untrue."
DJ ENVY FRAUD SCANDAL UPDATE – Cesar Pina, a celebrity house-flipper who was charged earlier this month with running a "Ponzi-like investment fraud scheme," said publicly last week that New York City radio host DJ Envy had "nothing to do" with the real estate deals in question. Critics have argued that Envy, who hosts the popular hip-hop radio show The Breakfast Club, played a key role in Pina's alleged fraud by promoting him on the air.
UTOPIA SUED AGAIN OVER FAILED DEAL – Utopia Music was hit with another lawsuit over an aborted $26.5 million deal to buy a U.S. music technology company called SourceAudio, this time over allegations that the company violated a $400,000 settlement that aimed to end the dispute. The allegations came after a year of repeated layoffs and restructuring at the Swiss-based music tech company.
A federal judge in San Francisco ruled Monday (Oct. 30) that artificial intelligence (AI) firm Stability AI could not dismiss a lawsuit claiming it had "trained" its platform on copyrighted images, though he also sided with AI companies on key questions.
In an early-stage order in a closely watched case, Judge William Orrick found many defects in the lawsuit's allegations, and he dismissed some of the case's claims. But he allowed the case to move forward on its core allegation: That Stability AI built its tools by exploiting vast numbers of copyrighted works.
"Plaintiffs have adequately alleged direct infringement based on the allegations that Stability downloaded or otherwise acquired copies of billions of copyrighted images without permission to create Stable Diffusion, and used those images to train Stable Diffusion," the judge wrote.
The ruling came in one of many cases filed against AI companies over how they use copyrighted content to train their models. Authors, comedians and visual artists have all filed lawsuits against companies including Microsoft, Meta and OpenAI, alleging that such unauthorized use by the fast-growing industry amounts to a massive violation of copyright law.
Last week, Universal Music Group and others filed the first such case involving music, arguing that Anthropic PBC was infringing copyrights en masse by using "vast amounts" of music to teach its software how to spit out new lyrics.
Rulings in the earlier AI copyright cases could provide important guidance on how such legal questions will be handled by courts, potentially impacting how UMG's lawsuit and others like it play out in the future.
Monday's decision came in a class action filed by artists Sarah Andersen, Kelly McKernan and Karla Ortiz against Stability AI Ltd. over Stable Diffusion, its AI-powered image generator. The lawsuit also targeted Midjourney Inc. and DeviantArt Inc., two companies that use Stable Diffusion as the basis for their own image generators.
In his ruling, Judge Orrick dismissed many of the lawsuit's claims. He booted McKernan and Ortiz from the case entirely and ordered the plaintiffs to re-file an amended version of their case with much more detail about the specific allegations against Midjourney and DeviantArt.
The judge also cast doubt on the allegation that every "output" image produced by Stable Diffusion would itself be a copyright-infringing "derivative" of the images that were used to train the model – a ruling that could dramatically limit the scope of the lawsuit. The judge suggested that such images might only be infringing if they themselves looked "substantially similar" to a particular training image.
But Judge Orrick included no such critiques for the central accusation that Stability AI infringed Andersen's copyrights by using them for training without permission – the basic allegation at the center of all of the AI copyright lawsuits, including the one filed by UMG. Andersen will still need to prove that accusation is true in future litigation, but the judge said she should be given the chance to do so.
"Even Stability recognizes that determination of the truth of these allegations – whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run – cannot be resolved at this juncture," Orrick wrote in his decision.
Attorneys for Stability AI, Midjourney and DeviantArt did not return requests for comment. Attorneys for the artists praised the judge for allowing their "core claim" to move forward and onto "a path to trial."
"As is common in a complex case, Judge Orrick granted the plaintiffs permission to amend most of their other claims," said plaintiffs' attorneys Joseph Saveri and Matthew Butterick after the ruling. "We're confident that we can address the court's concerns."
President Joe Biden on Monday will sign a sweeping executive order to guide the development of artificial intelligence – requiring industry to develop safety and security standards, introducing new consumer protections and giving federal agencies an extensive to-do list to oversee the rapidly progressing technology.
The order reflects the government's effort to shape how AI evolves in a way that can maximize its possibilities and contain its perils. AI has been a source of deep personal interest for Biden, with its potential to affect the economy and national security.
White House chief of staff Jeff Zients recalled Biden giving his staff a directive to move with urgency on the issue, as the president considered the technology a top priority.
"We can't move at a normal government pace," Zients said the Democratic president told him. "We have to move as fast, if not faster than the technology itself."
In Biden's view, the government was late to address the risks of social media, and now U.S. youth are grappling with related mental health issues. AI has the positive ability to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services, among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.
The order builds on voluntary commitments already made by technology companies. It's part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate new text, images and sounds.
Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.
The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The order also touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.
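What might labeling look like in its very simplest form? The toy sketch below tags generated text with invisible zero-width characters; it is a hypothetical illustration only, and real provenance schemes under discussion (cryptographic metadata, statistical watermarks embedded in model outputs) are far more robust than this and are not what the forthcoming guidance prescribes:

```python
# Toy sketch: mark AI-generated text with an invisible marker.
# Trivially strippable, so real watermarking schemes are far more
# sophisticated; this only illustrates the labeling concept.
ZERO_WIDTH_MARK = "\u200b\u200d\u200b"  # invisible zero-width sequence

def label(text: str) -> str:
    """Append the invisible marker to generated text."""
    return text + ZERO_WIDTH_MARK

def is_labeled(text: str) -> bool:
    """Check whether text carries the marker."""
    return text.endswith(ZERO_WIDTH_MARK)

generated = label("This paragraph was produced by a model.")
print(is_labeled(generated))         # True
print(is_labeled("Human-written."))  # False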
An administration official who previewed the order on a Sunday call with reporters said the to-do lists within the order will be implemented and fulfilled over a range of 90 to 365 days, with the safety and security items facing the earliest deadlines. The official briefed reporters on condition of anonymity, as required by the White House.
Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes, despite other pressing matters including the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.
Biden was profoundly curious about the technology in the months of meetings that led up to drafting the order. His science advisory council focused on AI at two meetings and his Cabinet discussed it at two meetings. The president also pressed tech executives and civil society advocates about the technologyâs capabilities at multiple gatherings.
"He was as impressed and alarmed as anyone," deputy White House chief of staff Bruce Reed said in an interview. "He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he's seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation."
The possibility of false images and sounds led the president to prioritize the labeling and watermarking of anything produced by AI. Biden also wanted to thwart the risk of older Americans getting a phone call from someone who sounded like a loved one, only to be scammed by an AI tool.
Meetings could go beyond schedule, with Biden telling civil society advocates in a ballroom of San Francisco's Fairmont Hotel in June: "This is important. Take as long as you need."
The president also talked with scientists and saw the upside that AI created if harnessed for good. He listened to a Nobel Prize-winning physicist talk about how AI could explain the origins of the universe. Another scientist showed how AI could model extreme weather like 100-year floods, as the past data used to assess these events has lost its accuracy because of climate change.
The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film "Mission: Impossible – Dead Reckoning Part One." The film's villain is a sentient and rogue AI known as "the Entity" that sinks a submarine and kills its crew in the movie's opening minutes.
"If he hadn't already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about," said Reed, who watched the film with the president.
With Congress still in the early stages of debating AI safeguards, Biden's order stakes out a U.S. perspective as countries around the world race to establish their own guidelines. After more than two years of deliberation, the European Union is putting the final touches on a comprehensive set of regulations that targets the riskiest applications for the technology. China, a key AI rival to the U.S., has also set some rules.
U.K. Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend.
The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.
But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to make sure its policies reflected their concerns about AI's real-world harms.
The American Civil Liberties Union is among the groups that met with the White House to try to ensure "we're holding the tech industry and tech billionaires accountable" so that algorithmic tools "work for all of us and not just a few," said ReNika Moore, director of the ACLU's racial justice program.
Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement's use of AI tools, including at U.S. borders.
"These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology," Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups, and has been tied to mistaken arrests.
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Universal Music Group (UMG) and other music companies file a hotly anticipated copyright lawsuit over how artificial intelligence (AI) models are trained; DJ Envy's business partner Cesar Pina is hit with criminal charges claiming he ran a "Ponzi-like" fraud scheme; Megan Thee Stallion reaches a settlement with her former label to end a contentious legal battle; Fyre Fest fraudster Billy McFarland is hit with a civil lawsuit by a jilted investor in his new project; and more.
THE BIG STORY: AI Music Heads To Court
When UMG and several other music companies filed a lawsuit last week, accusing an artificial intelligence company called Anthropic PBC of violating their copyrights en masse to "train" its AI models, my initial reaction was: "What took so long?"
The creators of other forms of content had already been in court for months. A group of photographers and Getty Images sued Stability AI over its training practices in January, and a slew of book authors, including Game of Thrones writer George R.R. Martin and legal novelist John Grisham, sued ChatGPT-maker OpenAI over the same thing in June and again in September. And music industry voices, like the RIAA and UMG itself, had repeatedly signaled that they viewed such training as illegal.
For months, we asked around, scanned dockets and waited for the music equivalent. Was the delay a deliberate litigation strategy, allowing the fast-changing market and the existing lawsuits to play out more before diving in? Was the music business focusing on legislative, regulatory or business solutions instead of the judicial warpath it chose during the file-sharing debacle of the early 2000s?
Maybe they were just waiting for the right defendant. In a complaint filed in Nashville federal court on Oct. 18, UMG claimed that Anthropic – a company that got a $4 billion investment from Amazon last month – "unlawfully copies and disseminates vast amounts of copyrighted works" in the process of teaching its models to spit out new lyrics. The lengthy complaint, co-signed by Concord Music Group, ABKCO and other music publishers, echoed arguments made by many rightsholders in the wake of the AI boom: "Copyrighted material is not free for the taking simply because it can be found on the internet."
Like the previous cases filed by photographers and authors, the new lawsuit poses something of an existential question for AI companies. AI models are only as good as the "inputs" they ingest; if federal courts make all copyrighted material off-limits for such purposes, it would not only make current models illegal but would undoubtedly hamstring further development.
The battle ahead will center on fair use – the hugely important legal doctrine that allows for the free use of copyrighted material in certain situations. Fair use might make you think of parody or criticism, but more recently, it's empowered new technologies: In 1984, the U.S. Supreme Court ruled that the VCR was protected by fair use; in 2007, a federal appeals court ruled that Google Image search was fair use.
Are AI models, which imbibe millions of copyrighted works to create something new, the next landmark fair use? Or are they just a new form of copyright piracy on a vast new scale? We're about to find out.
More key details about the AI case:
– The timing of the lawsuit would suggest that UMG is aiming for a carrot-and-stick approach when it comes to AI. On the same day the new case was filed, UMG announced that it was partnering with a company called BandLab Technologies to forge "an ethical approach to AI." Hours later, news also broke that UMG and other labels were actively negotiating with YouTube on a new AI tool that would allow creators to make videos using the voices of popular (consenting) recording artists.
– The huge issue in the case is whether the use of training inputs amounts to infringement, but UMG's lawyers also allege that Anthropic violates its copyrights with the outputs that its models spit out – that it sometimes simply presents verbatim lyrics to songs. That adds a different dimension to the case that's not present in earlier AI cases filed by authors and photographers and could perhaps make it a bit easier for UMG to win.
– While it's the first such case about music, it should be noted that the Anthropic lawsuit deals only with song lyrics – meaning not with sound recordings, written musical notation, or voice likeness rights. While a ruling in any of the AI training cases would likely set precedent across different areas of copyright, those specific issues will have to wait for a future lawsuit, or perhaps an act of Congress.
Go read the full story on UMG's lawsuit, with access to the actual complaint filed in court.
Other top stories this week…
MEGAN THEE SETTLEMENT – Megan Thee Stallion reached an agreement with her record label 1501 Certified Entertainment to end more than three years of ugly litigation over a record deal that Megan calls "unconscionable." After battling for more than a year over whether she owed another album under the contract, the two sides now say they will "amicably part ways."
DJ ENVY SCANDAL DEEPENS – Cesar Pina, a celebrity house-flipper with close ties to New York City radio host DJ Envy, was arrested on federal charges that he perpetrated "a multimillion-dollar Ponzi-like investment fraud scheme." Though Envy was not charged, federal prosecutors specifically noted that Pina had "partnered with a celebrity disc jockey and radio personality" – listed in the charges as "Individual-1" – to boost his reputation as a real estate guru. The charges came after months of criticism against Envy, who is named in a slew of civil lawsuits filed by alleged victims who say he helped promote the fraud.
FOOL ME ONCE… – Billy McFarland, the creator of the infamous Fyre Festival who served nearly four years in prison for fraud and lying to the FBI, is facing a new civil lawsuit claiming he ripped off an investor who gave him $740,000 for his new PYRT venture. The case was filed by Jonathan Taylor, a fellow felon who met McFarland in prison after pleading guilty to a single count of child sex trafficking.
AI-GENERATED CLOSING ARGS? – Months after ex-Fugees rapper Prakazrel "Pras" Michel was convicted on foreign lobbying charges, he demanded a new trial by making extraordinary accusations against his ex-lawyer David Kenner. Michel claims Kenner, a well-known L.A. criminal defense attorney, used an unproven artificial intelligence (AI) tool called EyeLevel.AI to craft closing arguments – and that he did so because he owned a stake in the tech platform. Kenner declined to comment, but EyeLevel has denied that Kenner has any equity in the company.
ROLLING STONES GET SATISFACTION – A federal judge dismissed a lawsuit accusing The Rolling Stones members Mick Jagger and Keith Richards of copying their 2020 single "Living in a Ghost Town" from a pair of little-known songs, ruling that the dispute – a Spanish artist suing two Brits – clearly didn't belong in his Louisiana federal courthouse.
JUICE WRLD COPYRIGHT CASE – Dr. Luke and the estate of the late Juice WRLD were hit with a copyright lawsuit that claims they unfairly cut out one of the co-writers (an artist named PD Beats) from the profits of the rapper's 2021 track "Not Enough."
YouTube is planning to roll out a new artificial intelligence tool that will allow creators to make videos using the voices of popular recording artists – but inking deals with record companies to launch the beta version is taking longer than expected, sources tell Billboard.
The new AI tool, which YouTube had hoped to debut at its Made On YouTube event in September, will in beta let a select pool of artists give permission to a select group of creators to use their voices in videos on the platform. From there, the product could be released broadly to all users with the voices of artists who choose to opt in. YouTube is also looking to those artists to contribute input that will help steer the company's AI strategy beyond this, sources say.
The major labels, Universal Music Group, Sony Music Entertainment and Warner Music Group, are still negotiating licensing deals that would cover voice rights for the beta version of the tool, sources say; a wide launch would require separate agreements.
Label leaders have made public statements about their commitments to embracing AI in recent months, with UMG CEO Lucian Grainge saying the technology could "amplify human imagination and enrich musical creativity in extraordinary new ways" and WMG CEO Robert Kyncl saying, "You have to embrace the technology, because it's not like you can put technology in a bottle." Because they want to be seen as proponents of progress, not as holding up innovation, some music executives worry they've given up some of their leverage in these initial deals. Label executives are especially conscious of projecting that image now, having shortsightedly resisted the shift from CDs to downloads two decades ago, which allowed Apple to unbundle the album and sent the music business into years of decline.
Some executives say it's also been challenging to find top artists to participate in the new YouTube tool, with even some of the most forward-thinking acts hesitant to put their voices in the hands of unknown creators who could use them to make statements or sing lyrics they might not like.
The labels, sources say, view the deal as potentially precedent-setting for future AI deals to come – as well as creating a "framework," as one source put it, for YouTube's future AI initiatives. The key issues in negotiations are how the AI model is trained; whether artists have the option to opt in (or out); and how monetization works – are artists paid for the use of their music as an input to the AI model, or for the output that's created using the AI tool? While negotiations are taking time, label sources say YouTube is seen as an important, reliable early partner in this space, based on the platform's work developing its Content ID system, which identifies and monetizes copyrighted materials in user-generated videos.
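The principle behind fingerprint-style matching can be sketched in a few lines. The example below is hypothetical and drastically simplified (Content ID's actual algorithms are proprietary and operate on audio and video features, not raw bytes), but it shows the basic index-and-scan idea:

```python
import hashlib

CHUNK = 16  # bytes per chunk in this toy example

def chunk_hashes(data: bytes, step: int) -> set:
    """Hash every CHUNK-byte window of `data`, advancing by `step`."""
    return {
        hashlib.sha256(data[i:i + CHUNK]).hexdigest()
        for i in range(0, len(data) - CHUNK + 1, step)
    }

# Reference catalog: non-overlapping chunks of the "master recording."
reference = b"these are the licensed master recording bytes of a song"
index = chunk_hashes(reference, step=CHUNK)

# User upload: the reference embedded after some unrelated material.
upload = b"intro chatter by an uploader these are the licensed master recording bytes of a song"

# Slide one byte at a time so matches are found at any offset.
matches = chunk_hashes(upload, step=1) & index
print(f"{len(matches)} matching chunks -> route for blocking or monetization")
```

A system built this way can not only block a matching upload but attribute it to a rightsholder, which is what makes monetization, rather than removal alone, possible.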
Publishing, meanwhile, is even more complicated: even with a small sampling of artists at the beta launch, there could be hundreds of songwriters with credits across their catalogs – all of which would be sampled by the model. Because of this, a source suggests that YouTube may prefer paying a lump-sum licensing fee, which publishers would then need to figure out how to divide among their writers.
As complicated as the deal terms may be, sources say music rights holders are acting in good faith to get a deal done. That's because of a dominant belief that this sort of technology is inevitable, and that if the music business doesn't come to the table to create licensing deals now, it will get left behind. However, one source familiar with the negotiations says this attitude is also putting music companies at a disadvantage because it leaves less room to drive a hard bargain.
For months, AI soundalike tools that synthesize vocals to sound like famous artists have been garnering attention and triggering debate. The issue hit the mainstream in April, when an anonymous musician calling himself Ghostwriter released a song to streaming services featuring soundalike versions of Drake and The Weeknd that he said were created with artificial intelligence. The song was quickly taken down due to copyright infringement on the recording, not the voices' likenesses; a month later, Billboard reported that the streaming services seemed amenable to requests from the major labels to remove recordings with AI-generated vocals created to sound like popular artists.
In August, YouTube announced a new initiative with UMG artists and producers it called an "AI Music Incubator" that would "explore, experiment and offer feedback on the AI-related musical tools and products," according to a blog post by Grainge at the time. "Once these tools are launched, the hope is that more artists who want to participate will benefit from and enjoy this creative suite." That partnership was separate from the licensing negotiations currently taking place and the beta product in development.
On Wednesday, UMG, Concord Music Group, ABKCO and other music publishers filed a lawsuit against AI platform Anthropic PBC for using copyrighted song lyrics to "train" its software. It marked the first major lawsuit in what is expected to be a key legal battle over the future of AI music – and, as one source put it, a signal that major labels will litigate against AI companies they see as bad players.
Universal Music Group (UMG) and other music companies are suing an artificial intelligence platform called Anthropic PBC for using copyrighted song lyrics to "train" its software – marking the first major lawsuit in what is expected to be a key legal battle over the future of AI music.
In a complaint filed Wednesday morning (Oct. 18) in Nashville federal court, lawyers for UMG, Concord Music Group, ABKCO and other music publishers accused Anthropic of violating the companies' copyrights en masse by using vast numbers of songs to help its AI models learn how to spit out new lyrics.
"In the process of building and operating AI models, Anthropic unlawfully copies and disseminates vast amounts of copyrighted works," lawyers for the music companies wrote. "Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis."
A spokesperson for Anthropic did not immediately return a request for comment.
The new lawsuit is similar to cases filed by visual artists over the unauthorized use of their works to train AI image generators, as well as cases filed by authors like Game of Thrones writer George R.R. Martin and novelist John Grisham over the use of their books. But it's the first to squarely target music.
AI models like the popular ChatGPT are "trained" to produce new content by feeding them vast quantities of existing works known as "inputs." In the case of AI music, that process involves huge numbers of songs. Whether doing so infringes the copyrights to that underlying material is something of an existential question for the booming sector, since depriving AI models of new inputs could limit their abilities.
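As a toy illustration of that input-to-output relationship, consider the hypothetical sketch below; real models are neural networks trained on billions of documents, and nothing here resembles ChatGPT's or Claude's actual architecture, but it shows how generated output is derived statistically from whatever training inputs the model was fed:

```python
import random
from collections import defaultdict

# Toy "inputs": a stand-in for the vast corpora real models ingest.
training_inputs = [
    "copyright law protects original works",
    "fair use allows some use of protected works",
]

# "Training": record which words tend to follow which.
model = defaultdict(list)
for text in training_inputs:
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)

# "Generation": produce new text by sampling from the learned statistics.
random.seed(1)
word = "fair"
output = [word]
for _ in range(6):
    if word not in model:
        break
    word = random.choice(model[word])
    output.append(word)
print(" ".join(output))
```

Even in this miniature form, the dependency is plain: strip out the training inputs and the model can generate nothing at all, which is why the question of whether ingestion infringes looms so large for the industry.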
Major music companies and other industry players have already argued that such training is illegal. Last year, the RIAA said that any use of copyrighted songs to build AI platforms "infringes our members' rights." In April, when UMG asked Spotify and other streamers to stop allowing AI companies to use their platforms to ingest music, it said it "will not hesitate to take steps to protect our rights."
On Wednesday, the company took those steps. In the lawsuit, it said Anthropic "profits richly" from the "vast troves of copyrighted material that Anthropic scrapes from the internet."
"Unlike songwriters, who are creative by nature, Anthropic's AI models are not creative – they depend entirely on the creativity of others," lawyers for the publishers wrote. "Yet, Anthropic pays nothing to publishers, their songwriters, or the countless other copyright owners whose copyrighted works Anthropic uses to train its AI models. Anthropic has never even attempted to license the use of Publishers' lyrics."
In the case ahead, the key battle line will be over whether the unauthorized use of proprietary music to train an AI platform is nonetheless legal under copyright's fair use doctrine – an important rule that allows people to reuse protected works without breaking the law.
Historically, fair use enabled critics to quote from the works they were dissecting, or parodists to use existing materials to mock them. But more recently, it's also empowered new technologies: In 1984, the U.S. Supreme Court ruled that the VCR was protected by fair use; in 2007, a federal appeals court ruled that Google Image search was fair use.
In Wednesday's complaint, UMG and the other publishers seemed intent on heading off any kind of fair use defense. They argued that Anthropic's behavior would harm the market for licensing lyrics to AI services that actually pay for licenses – a key consideration in any fair use analysis.
"Anthropic is depriving Publishers and their songwriters of control over their copyrighted works and the hard-earned benefits of their creative endeavors, it is competing unfairly against those website developers that respect the copyright law and pay for licenses, and it is undermining existing and future licensing markets in untold ways," the publishers wrote.
In addition to targeting Anthropic's use of songs as inputs, the publishers claim that the material produced by the company's AI model also infringes their lyrics: "Anthropic's AI models generate identical or nearly identical copies of those lyrics, in clear violation of publishers' copyrights."
Such litigation might only be the first step in setting national policy on how AI platforms can use copyrighted music, with legislative efforts close behind. At a hearing in May, Sen. Marsha Blackburn (R-Tenn.) repeatedly grilled the CEO of the company behind ChatGPT about how he and others planned to "compensate the artist."
"If I can go in and say 'write me a song that sounds like Garth Brooks,' and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use," Blackburn said. "If it was radio play, it would be there. If it was streaming, it would be there."
SINGAPORE – BandLab Technologies has pledged to engage responsibly and ethically with AI, part of a "strategic collaboration" with Universal Music Group.
Announced today (Oct. 18), the deal makes Singapore-based BandLab the first music creation platform to throw its support behind the Human Artistry Campaign (HAC), a global coalition devoted to ensuring fair and safe play with AI technologies.
"As the digital landscape of music continues to evolve," reads a joint statement, "this collaboration is designed to be a beacon of innovation and ethical practice in the industry and heralds a new era where artists are supported and celebrated at every stage of their creative journey."
Led by CEO Meng Ru Kuok, BandLab Technologies operates the largest social music creation platform, BandLab. Among the service's breakouts is Houston artist d4vd (pronounced "David"), who, in July 2022 as a 17-year-old, released "Romantic Homicide," a track he had made using BandLab. After going viral on TikTok, the song entered the Billboard Hot 100 (peaking at No. 45) as d4vd signed to Darkroom/Interscope. He's one of a growing number of BandLab users who've developed deeper ties with UMG.
"We welcome BandLab's commitment to an ethical approach to AI through their accessible technology, tools and platform," comments Lucian Grainge, chairman & CEO, Universal Music Group, in a statement. "We are excited to add BandLab Technologies to a growing list of UMG partners whose responsible and innovative AI will benefit the creative community."
Further to Grainge's comments, Michael Nash, executive VP and chief digital officer at UMG, points to an expanding relationship with BandLab, noting "they are an excellent partner that is compelling for us on multiple fronts."
BandLab Technologies' assets are grouped under the holding company of Caldecott Music Group, for which Meng serves as CEO and founder. "BandLab Technologies and our wider Caldecott Music Group network is steadfast in its respect for artists' rights," he comments in a statement, "and the infinite potential of AI in music creation and we believe our millions of users around the world share in this commitment and excitement."
Meng showed his support in August at Ai4, an AI conference in Las Vegas, by way of the presentation "Augmenting the Artist: How AI is Redefining Music Creation and Innovation." During that session, he discussed the importance of ethical AI training and development and showcased the company's AI music idea generator tool SongStarter.
New technologies promise "unbelievable possibilities to break down more barriers for creators," he notes, but "it's essential that artists' and songwriters' rights be fully respected and protected to give these future generations a chance of success."
The Human Artistry Campaign was announced during South by Southwest in March along with a series of seven key principles for protecting artists in the age of AI. More than 150 industry organizations and businesses have signed up.
UMG's AI collaboration with BandLab follows separate arrangements forged with Endel and YouTube.
This "first of its kind" strategic partnership with BandLab Technologies, say reps for UMG, aligns the "two organizations to promote responsible AI practices and pro-creator standards, as well as enabling new opportunities for artists."
Six months after ex-Fugees rapper Prakazrel "Pras" Michel was convicted on foreign lobbying charges, he's now demanding a new trial – making the extraordinary claim that his ex-lawyer used an unproven artificial intelligence (AI) tool to craft closing arguments because he owned a stake in the tech platform.
In a Monday (Oct. 16) filing in D.C. federal court, Michel claimed attorney David Kenner "utterly failed" him during the April trial, denying him his constitutional right to effective counsel. Among other shortcomings, the rapper said Kenner outsourced prep work to "inexperienced contract attorneys" and "failed to object to damaging and inadmissible testimony" during the trial.
Most unusual of all, Michel accused Kenner of using "an experimental artificial intelligence program" to draft his closing arguments for the trial, resulting in a deeply flawed presentation. And he claimed Kenner did so because he had an "undisclosed financial stake" in the company and wanted to use Michel's trial to promote it.
"Michel never had a chance," the rapper's new lawyers wrote Monday. "Michel's counsel was deficient throughout, likely more focused on promoting his AI program … than zealously defending Michel. The net effect was an unreliable verdict."
Kenner did not immediately return a request for comment on Tuesday.
Michel was charged in 2019 with funneling money from a now-fugitive Malaysian financier through straw donors to Barack Obama's 2012 re-election campaign. He was also accused of trying to squelch a Justice Department investigation and influence an extradition case on behalf of China under the Trump administration.
In April, following a trial that included testimony from actor Leonardo DiCaprio and former U.S. Attorney General Jeff Sessions, Michel was convicted on 10 counts including conspiracy, witness tampering and failing to register as an agent of China.
During that trial, Michel was represented by Kenner, a well-known Los Angeles criminal defense attorney who has previously repped hip-hop luminaries like Snoop Dogg, Suge Knight and, most recently, Tory Lanez. But earlier this summer, Michel asked for permission to replace Kenner with a new team of lawyers; in August, Kenner and his firm were swapped out for lawyers from the national firm ArentFox Schiff.
Now, it's clear why. In Monday's filing, Michel's new lawyers accused Kenner of wide-ranging failures – including many that have nothing to do with AI tools or secret motives. They claim he "outsourced trial preparations" to other lawyers and "failed to familiarize himself with the charged statutes or required elements." They also say he "overlooked nearly every colorable defense" and "failed to object to damaging and inadmissible testimony, betraying a failure to understand the rules of evidence."
But the most unusual allegations concerned an alleged scheme to promote EyeLevel.AI, a computer program designed to help attorneys win cases by digesting trial transcripts and other data. Days after the trial concluded, the company put out a press release highlighting its use in the Michel trial, calling it the "first use of generative AI in a federal trial" and quoting Kenner.
"This is an absolute game changer for complex litigation," Kenner said in the press release. "The system turned hours or days of legal work into seconds. This is a look into the future of how cases will be conducted."
But according to Michel's new lawyers, Kenner's use of the program was harmful, not helpful, to his client's case. They say it may have led to some smaller mistakes, like Kenner misattributing a Puff Daddy song to the Fugees, but also to massive legal errors, like conflating separate allegations against Michel – an error that Michel's new lawyers say caused Kenner to make "frivolous" arguments before the jury.
"At bottom, the AI program failed Kenner, and Kenner failed Michel," Michel's attorneys at ArentFox Schiff wrote. "The closing argument was deficient, unhelpful, and a missed opportunity that prejudiced the defense."
According to Michel's new lawyers, the mistake of using the AI tool was compounded by Kenner's alleged motive: an undisclosed ownership stake in the startup that sells it. By using a criminal trial as a means to promote a product, Monday's filing says, Kenner created "an extraordinary conflict of interest."
"Kenner and [his partner]'s decision to elevate their financial interest in the AI program over Michel's interest in a competent and vigorous defense adversely affected Kenner's trial performance, as the closing argument was frivolous, missed nearly every colorable argument, and damaged the defense," they wrote.