
Diaa El All, CEO/founder of generative artificial intelligence music company Soundful, remembers when the first artists were signed to major label deals based on songs using type beats — cheap, licensable beats available online that are labeled based on the artists the beat emulates (e.g., Drake Type Beat, XXXTentacion Type Beat). He also remembers the legal troubles that followed. “Those type beats are licensed to sometimes thousands of people at a time,” he explains. “If it becomes a hit for one artist, then that artist ends up with major problems to unravel.”

Perhaps the most famous example of this is Lil Nas X and his breakthrough smash “Old Town Road,” which was written over a $30 Future type beat that was also licensed by other DIY talents. After the song went viral in early 2019, the then-unknown rapper and meme maker quickly inked a deal with Columbia Records, but beneath the song’s mammoth success lay a tangle of legal issues to sort through. For one thing, the song’s type beat included an unauthorized sample of Nine Inch Nails’ “34 Ghosts IV,” which was not disclosed to Lil Nas X when he purchased it.

El All’s solution to these issues may seem counter-intuitive, but he posits that his AI models could provide an ethical alternative to the copyright nightmares of the type beat market.

Starting Wednesday (Nov. 8), Soundful is launching Soundful Collabs, a program that partners with artists, songwriters and producers in various genres — including Kaskade, Starrah, 3LAU, DJ White Shadow, Autograf and CB Mix — to train personalized AI generators that create beats akin to their specific production and writing styles. To create a realistic model, the artists, songwriters and producers provide Soundful with dozens of their favorite one-shot recordings of kick drums, snares, guitar licks and synth patches from their personal sonic libraries, as well as information about how they typically construct chord progressions and song structures.

The result is individualized AI models that can generate endless one-of-a-kind tracks that echo a hitmaker’s style while compensating them for the use of their name and sonic identity. For $15, a Soundful subscriber can download up to 10 tracks the generator comes up with. This includes stems so the user can add or subtract elements of the track to suit their tastes after exporting it to a digital audio workstation (DAW) of their choice. The hitmaker receives 80% of the monies earned from the collaboration while Soundful retains 20% — a split El All says was inspired by “flipping” major record labels’ common 80/20 split in favor of the artist.

The Soundful leader, who has a background as a classical pianist and sound engineer, sees this as a novel form of musical “merchandise” that offers talent an additional revenue stream and a chance at fostering further fan engagement and user-generated content (UGC). “We don’t use any loops, we don’t use any previous tracks as references,” El All says. As a result, he argues the product’s profits belong only to the talent, not their record label or publishers, given that it does not use any of their copyrights. Still, he says he’s met with “a lot of publishers” and some labels about the new product. (El All admits that an artist in a 360 deal — a contract which grants labels a cut of money from touring, merchandise and other forms of non-recorded music income — may have to share proceeds with their label.)

According to Kaskade, who has been a fan of Soundful’s since he tested the original beta product earlier this year, the process of training his model felt like “Splice on crack — this is the next evolution of the Splice sample packs,” where producers offer fans the opportunity to purchase a pack of their favorite loops and samples for a set price, he explains. “[With sample packs] you got access to the sounds, but now, you get an AI generator to help you put it all together.”

The new Soundful product is illustrative of a larger trend in AI towards personalized models. On Monday, OpenAI, the leading AI company behind ChatGPT and DALL-E, announced that it was launching “GPTs” – a new service that allows small businesses and individuals to build customized versions of ChatGPT attuned to their personal needs and interests.

This trend is also present in music AI, with many companies offering personalized models and collaborations with talent. This is especially popular on the voice synthesis side of the nascent industry. So far, start-ups like Kits AI, Voice-Swap, Hooky, CreateSafe and more are working with artists to feed recordings of their voices into AI models to create realistic clones of their voices for fans or the artists themselves to use — Grimes’ model being the most notable to date. Though much more ethically questionable, the popularity of Ghostwriter’s “Heart On My Sleeve” — which employed a voice model to emulate Drake and The Weeknd and which was not authorized by the artists — also proved the appetite for personalized music models.

Notably, Soundful’s product has the potential to be a producer and songwriter-friendly counterpart to voice models, which present possible monetary benefits (and threats) to recording artists and singers but do not pertain to the craftspeople behind the hits, who generally enjoy fewer financial opportunities than the artists they work with. As Starrah — who has written “Havana” by Camila Cabello, “Pick Up The Phone” by Young Thug and Travis Scott and “Girls Like You” by Maroon 5 — explains, Soundful Collabs are “an opportunity for songwriters and producers to expand what they are doing in so many ways.”

El All says keeping the needs of the producer and songwriter communities in mind was paramount in the creation of this product. For the first time, he reveals that longtime industry executive, producer manager and Hallwood Media founder Neil Jacobson is on Soundful’s founding team and board. El All says Jacobson’s expertise proved instrumental in steering the Soundful Collabs project in a direction that El All feels could “change the industry for the better.” “I think what Soundful provides here is similar to what I do in my own business,” says Jacobson. “I supply music to people who need it — with Soundful, a fan of one of these artists who wants to make music but doesn’t quite know how to work a digital audio workstation can get the boost they need to start creating.”

El All says the new product will extend beyond personalization for current songwriters, producers and artists. The Soundful team is also in talks with catalog owners and estates and working with a number of top brands in the culinary, consumer goods, hospitality, children’s entertainment and energy industries to train personalized models to create sonic “brand templates” and “generative catalogs” to be used in social media content. “This will help them create a very clear signature identification via sound,” says El All.

When asked if this business-to-business application takes away opportunities for synch licensing from composers, El All counters that some of these companies were using royalty free libraries prior to meeting with Soundful. “We’re actually creating new opportunities for musicians because we are consistently hiring those specializing in synch and sound designers to continue to evolve the brand’s sound,” he says.

In the future, Soundful will drop more artist templates every four to six weeks, and its Collabs will expand into genres like Latin, lo-fi, rock, pop and more. “Though this sounds good out of the box … what will make the music a hit is when a person downloads these stems and adds their own human imperfections and style to it,” says El All. “That’s what we are looking to encourage. It’s a jumping off point.”

Korean artist MIDNATT made history earlier this year by using AI to help him translate his debut single “Masquerade” into six different languages. Though it wasn’t a major commercial success, its seamless execution by the HYBE-owned voice synthesis company Supertone proved there was a new, positive application of musical AI on the horizon that went beyond unauthorized deepfakes and (often disappointing) lo-fi beats.


Enter Jordan “DJ Swivel” Young, a Grammy-winning mixing engineer and producer best known for his work with Beyonce, BTS and Dua Lipa. His AI voice company Hooky is one of many new start-ups trying to popularize voice cloning, but unlike much of his competition, Young is still an active and well-known collaborator for today’s musical elite. After connecting with pop star Lauv, whom he had worked with briefly years before as an engineer, Young’s Hooky developed an AI model of Lauv’s voice so that they could translate the singer-songwriter’s new single “Love U Like That” into Korean.

Lauv is the first major Western artist to take part in the AI translation trend. He wants the new translated version of “Love U Like That” to be a way of showing his love to his Korean fanbase and to celebrate his biggest headline show to date, which recently took place in Seoul.

Though many fans around the world listen to English-speaking music in high numbers, Will Page, author and former chief economist at Spotify, and Chris Dalla Riva, a musician and Audiomack employee, noted in a recent report that many international audiences are increasingly turning their interest back to their local language music – a trend they nicknamed “Glocalization.” With Hooky, Supertone and other AI voice synthesis companies all working to master translation, English-speaking artists now have the opportunity to participate in this growing movement and to form tighter bonds with international fans.

To explain the creation of “Love U Like That (Korean Version),” out Wednesday (Nov. 8), Lauv and Young spoke to Billboard in an exclusive interview.

When did you first hear what was possible with AI voice filters?

Lauv: I think the first time was that Drake and The Weeknd song [“Heart On My Sleeve” by Ghostwriter]. I thought it was crazy. Then when my friend and I were working on my album, we started playing each other’s music. He pulled out a demo. They were pitching it to Nicki Minaj, and he was singing it and then put it into Nicki Minaj’s voice. I remember thinking it’s so insane this is possible.

Why did you want to get involved with AI voice technology yourself?

Lauv: I truly believe that the only way forward is to embrace what is possible now, no matter what. I think being able to embrace a tool like this in a way that’s beneficial and able to get artists paid is great. 

Jordan, how did you get acquainted with Lauv, and why did you feel he was the right artist to mark your first major collaboration? 

Jordan “DJ Swivel” Young: We’ve done a lot of general outreach to record companies, managers, etcetera. We met Range Media Partners, Lauv’s management team, and they really resonated with Hooky. The timing was perfect: he was wrapping up his Asian tour and had done the biggest show of his life in South Korea. Plus, he has done a few collaborations with BTS. I’ve worked on a number of BTS songs too. There was a lot of synergy between us.

Why did you choose Korean as the language that you wanted to translate a song into?

Lauv: Well, in the future, I would love to have the opportunity to do this in as many different languages as possible, but Seoul has been a place that has become really close to my heart, and it was the place of my biggest headline show to date. I just wanted to start by doing something special for those Korean fans. 

What is the process of actually translating the song? 

Young: We received the original audio files for the song “Love U Like That,” and we rewrote the song with former K-Pop idol Kevin Woo. The thing with translating lyrics or poetry is it can’t be a direct translation. You have to make culturally appropriate choices, words that flow well. So Kevin did that and we re-recorded Kevin’s voice singing the translation, then we mixed the song again exactly as the original was done to match it sonically. All the background vocals were at the correct volume and the right reverbs were used. I think we’ve done a good job of matching it. Then we used our AI voice technology to match Lauv’s voice, and we converted Kevin’s Korean version into Lauv’s voice. 

Lauv: To help them make the model of my voice, I sent over a bunch of raw vocals that were just me singing in different registers. Then I met up with him and Kevin. It was riveting to hear my voice like that. I gave a couple of notes – very minor things – after hearing the initial version of the translation, and then they went back and modified. I really trusted Jordan and Kevin on how to make this authentic and respectful to Korean culture.

Is there an art to translating lyrics?

Lauv: Totally. When I was listening back to it, that’s what struck me. There’s certain parts that are so pleasing to the ear. I still love hearing the Korean version phonetically as someone from the outside. Certain parts of Kevin’s translation, like certain rhythm schemes, hit me so much harder than hearing it in English actually.

Do you foresee that there will be more opportunities for translators as this space develops?

Young: Absolutely. I call them songwriters more than translators though, actually. They play a huge role. I used to work with Beyonce as an engineer, and I’ve watched her do a couple songs in Spanish. It required a whole new vocal producer, a new team just to pull off those songs. It’s daunting to sing something that’s not your natural language. I even did some Korean background vocals myself on a BTS song I wrote. They provided me with the phonetics, and I can say it was honestly the hardest thing I’ve ever recorded. It’s hard to sing with the right emotion when you’re focused on pronouncing things correctly. But Hooky allows the artist to perform in other languages but with all the emotion that’s expected. Sure, there’s another songwriter doing the Korean performance, but Lauv was there for the whole process. His fingerprint is on it from beginning to end. I think this is the future of how music will be consumed. 

I think this could bring more opportunities for the mixing engineers too. When Dolby Atmos came out, it offered more chances for mixers, and with the translations, I think there are now even more opportunities. I think it’s empowering the songwriter, the engineer, and the artist all at once. There could even be a new opportunity created for a demo singer, if it’s different from the songwriter who translated the song.

Would you be open to making your voice model that you used for this song available to the public to use?

Lauv: Without thinking it through too much, I think my ideal self is a very open person, and so I feel like I want to say hell yeah. If people have song ideas and want to hear my voice singing their ideas, why not? As long as it’s clear to the world which songs were written and made by me and what was written by someone else using my voice tone. As long as the backend stuff makes sense, I don’t see any reason why not. 

Offering a preview of arguments the company might make in its upcoming legal battle with Universal Music Group (UMG), artificial intelligence (AI) company Anthropic PBC told the U.S. Copyright Office this week that the massive scraping of copyrighted materials to train AI models is “quintessentially lawful.”

Music companies, songwriters and artists have argued that such training represents an infringement of their works at a vast scale, but Anthropic told the federal agency Monday (Oct. 30) that it was clearly allowed under copyright’s fair use doctrine.

“The copying is merely an intermediate step, extracting unprotectable elements about the entire corpus of works, in order to create new outputs,” the company wrote. “This sort of transformative use has been recognized as lawful in the past and should continue to be considered lawful in this case.”

The filing came as part of an agency study aimed at answering thorny questions about how existing intellectual property laws should be applied to the disruptive new tech. Other AI giants, including OpenAI, Meta, Microsoft, Google and Stability AI all lodged similar filings, explaining their views.

But Anthropic’s comments will be of particular interest in the music industry because that company was sued last month by UMG over the very issues in question in the Copyright Office filing. The case, the first filed over music, claims that Anthropic unlawfully copied “vast amounts” of copyrighted songs when it trained its Claude AI tool to spit out new lyrics.

In the filing at the Copyright Office, Anthropic argued that such training was a fair use because it copied material only for the purpose of “performing a statistical analysis of the data” and was not “re-using the copyrighted expression to communicate it to users.”

“To the extent copyrighted works are used in training data, it is for analysis (of statistical relationships between words and concepts) that is unrelated to any expressive purpose of the work,” the company argued.

UMG is sure to argue otherwise, but Anthropic said legal precedent was clearly on its side. Notably, the company cited a 2015 ruling by a federal appeals court that Google was allowed to scan and upload millions of copyrighted books to create its searchable Google Books database. That ruling and others established the principle that “large-scale copying” was a fair use when done to “create tools for searching across those works and to perform statistical analysis.”

“The training process for Claude fits neatly within these same paradigms and is fair use,” Anthropic’s lawyers wrote. “Claude is intended to help users produce new, distinct works and thus serves a different purpose from the pre-existing work.”

Anthropic acknowledged that the training of AI models could lead to “short-term economic disruption.” But the company said such problems were “unlikely to be a copyright issue.”

“It is still a matter that policymakers should take seriously (outside of the context of copyright) and balance appropriately against the long-term benefits of LLMs on the well-being of workers and the economy as a whole by providing an entirely new category of tools to enhance human creativity and productivity,” the company wrote.

In the TikTok era, homemade remixes of songs — typically single tracks that have been sped up or slowed down, or two tracks mashed together — have become ever more popular. Increasingly, they are driving viral trends on the platform and garnering streams off of it. 

Just how popular? In April, Larry Mills, senior vp of sales at the digital rights tech company Pex, wrote that Pex’s tech found “hundreds of millions of modified audio tracks distributed from July 2021 to March 2023,” which appeared on TikTok, SoundCloud, Audiomack, YouTube, Instagram and more. 

On Wednesday (Nov. 1), Mills shared the results of a new Pex analysis — expanded to include streaming services like Spotify, Apple Music, Deezer, and Tidal — estimating that “at least 1% of all songs on [streaming platforms] are modified audio.”

“We’re talking more than 1 million unlicensed, manipulated songs that are diverting revenue away from rightsholders this very minute,” Mills wrote, pointing to homemade re-works of tracks by Halsey or OneRepublic that have amassed millions of plays. “These can generate millions in cumulative revenue for the uploaders instead of the correct rightsholders.”

Labels try to execute a tricky balancing act with user-generated remixes. They usually strike down the most popular unauthorized reworks on streaming services and move to release their own official versions in an attempt to pull those plays in-house. But they also find ways to encourage fan remixing, because it remains an effective form of music marketing at a time when most promotional strategies have proved toothless. “Rights holders understand that this process is inevitable, and it’s one of the best ways to bring new life to tracks,” Meng Ru Kuok, CEO of music technology company BandLab, said to Billboard earlier this year. 

Mills argues that the industry needs a better system for tracking user-generated remixes and making sure royalties are going into the right pockets. “While these hyper-speed remixes may make songs go viral,” he wrote in April, “they’re also capable of diverting royalty payments away from rights holders and into the hands of other creators.” 

Since Pex sells technology for identifying all this modified audio, it’s not exactly an unbiased party. But it’s notable that streaming services and distributors don’t have the best track record when it comes to keeping unauthorized content of any kind off their platforms.

It hasn’t been unusual to find leaked songs — especially from rappers with impassioned fan bases like Playboi Carti and Lil Uzi Vert — on Spotify, where leaked tracks can often be found climbing the viral chart, or TikTok. An unreleased PinkPantheress song sampling Michael Jackson’s classic “Off the Wall” is currently hiding in plain sight on Spotify, masquerading as a podcast.

“Historically, streaming services don’t have an economic incentive to actually care about that,” Deezer CEO Jeronimo Folgueira told Billboard earlier this year. “We don’t care whether you listen to the original Drake, fake Drake, or a recording of the rain. We just want you to pay $10.99.” Folgueira called that incentive structure “actually a bad thing for the industry.”

In addition, many of the distribution companies that act as middlemen between artists and labels and the streaming services operate on a volume model — the more content they upload, the more money they make — which means it’s not in their financial interest to look closely at what they send along to streaming services. 

However, the drive to improve this system has taken on new urgency this year. Rights holders and streaming services are going back and forth over how streaming payments should work and whether “an Ed Sheeran stream is worth exactly the same as a stream of rain falling on the roof,” as Warner Music Group CEO Robert Kyncl told financial analysts in May. As the industry starts to move to a system where all streams are no longer created equal, it becomes increasingly important to know exactly what’s on these platforms so that different streams can be sorted into different buckets.

In addition, the advance of artificial intelligence-driven technology has allowed for easily accessible and accurate-sounding voice-cloning, which has alarmed some executives and artists in a way that sped-up remixes have not. “In our conversations with the labels, we heard that some artists are really pissed about this stuff,” says Geraldo Ramos, co-founder/CEO of the music-tech company Moises. “They’re calling their label to say, ‘Hey, it isn’t acceptable, my voice is everywhere.’”

This presents new challenges, but also perhaps means new opportunities for digital fingerprint technology companies, whether that’s stalwarts like Audible Magic or newer players like Pex. “With AI, just think how much the creation of derivative works is going to exponentially grow — how many covers are going to get created, how many remixes are gonna get created,” Audible Magic CEO Kuni Takahashi told Billboard this summer. “The scale of what we’re trying to identify and the pace of change is going to keep getting faster.”

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.

This week: Lizzo fights back against sexual harassment allegations with the help of a famous lawyer and a creative legal argument; a federal court issues an early ruling in an important copyright lawsuit over artificial intelligence; Kobalt is hit with a lawsuit alleging misconduct by one of the company’s former executives; and much more.

Want to get The Legal Beat newsletter in your email inbox every Tuesday? Subscribe here for free.

THE BIG STORY: Lizzo Hits Back With … Free Speech?

Three months after Lizzo and her touring company were accused of subjecting three of her backup dancers to sexual harassment, religious and racial discrimination and weight-shaming, her lawyers filed their first substantive response – and they didn’t hold back.

“Salacious and specious lawsuit.” “They have an axe to grind.” “A pattern of gross misconduct and failure to perform their job up to par.” “Fabricated sob story.” “Plaintiffs are not victims.” “They are opportunists.”

“Plaintiffs had it all and they blew it,” Lizzo’s lawyers wrote. “Instead of taking any accountability for their own actions, plaintiffs filed this lawsuit against defendants out of spite and in pursuit of media attention, public sympathy and a quick payday with minimal effort.”

That’s not exactly dry legalese, but it’s par for the course in a lawsuit that has already featured its fair share of blunt language from the other side. And it’s hardly surprising given that it came from Martin Singer – an infamously tough celebrity lawyer once described by the Los Angeles Times as “Hollywood’s favorite legal hit man.”

While Singer’s quotes made the headlines, it was his legal argument that caught my attention.

Rather than a normal motion to dismiss the case, Lizzo’s motion cited California’s so-called anti-SLAPP statute — a special type of law enacted in states around the country that makes it easier to end meritless lawsuits that threaten free speech (known as “strategic lawsuits against public participation”). Anti-SLAPP laws allow for such cases to be tossed out more quickly, and they sometimes require a plaintiff to repay the legal bills incurred by a defendant.

Anti-SLAPP motions are filed every day, but it’s pretty unusual to see one aimed at dismissing a sexual harassment and discrimination lawsuit filed by former employees against their employer. They’re more common in precisely the opposite scenario: filed by an individual who claims that they’re being unfairly sued by a powerful person to silence accusations of abuse or other wrongdoing.

But in Friday’s motion, Singer and Lizzo’s other lawyers argued that California’s anti-SLAPP law could also apply to the current case because of the creative nature of the work in question. They called the case “a brazen attempt to silence defendants’ creative voices and weaponize their creative expression against them.”

Will that argument hold up in court? Stay tuned…

Go read the full story about Lizzo’s defense, including access to the actual legal documents filed in court.

Other top stories this week…

RULING IN AI COPYRIGHT CASE – A federal judge issued an early-stage ruling in a copyright class action filed by artists against artificial intelligence (AI) firm Stability AI — one of several important lawsuits filed against AI companies over how they use copyrighted content. Though he criticized the case and dismissed many of its claims, the judge allowed it to move toward trial on its central, all-important question: Whether it’s illegal to train AI models by using copyrighted content.

HALLOWEEN SPECIAL – To celebrate today’s spooky holiday, Billboard turned back the clock all the way to 1988, when the studio behind “A Nightmare on Elm Street” sued Will Smith over a Fresh Prince song and music video that made repeated references to Freddy Krueger. To get the complete bizarre history of the case, go read our story here.

KOBALT FACES CASE OVER EX-EXEC – A female songwriter filed a lawsuit against Kobalt Music Group and former company executive Sam Taylor over allegations that he leveraged his position of power to demand sex from her – and that the company “ignored” and “gaslit” women who complained about him. The case came a year after Billboard’s Elias Leight first reported those allegations. Taylor did not return a request for comment; Kobalt has called the allegations against the company baseless, saying its employees never “condoned or aided any alleged wrongdoing.”

MF DOOM ESTATE BATTLE – The widow of late hip-hop legend MF Doom filed a lawsuit claiming the rapper’s former collaborator Egon stole dozens of the rapper’s notebooks that were used to write down many of his beloved songs. The case claims that Egon took possession of the files as Doom spent a decade in his native England due to visa issues, where he remained until his death in 2020. Egon’s lawyers called the allegations “frivolous and untrue.”

DJ ENVY FRAUD SCANDAL UPDATE – Cesar Pina, a celebrity house-flipper who was charged earlier this month with running a “Ponzi-like investment fraud scheme,” said publicly last week that New York City radio host DJ Envy had “nothing to do” with the real estate deals in question. Critics have argued that Envy, who hosts the popular hip-hop radio show The Breakfast Club, played a key role in Pina’s alleged fraud by promoting him on the air.

UTOPIA SUED AGAIN OVER FAILED DEAL – Utopia Music was hit with another lawsuit over an aborted $26.5 million deal to buy a U.S. music technology company called SourceAudio, this time over allegations that the company violated a $400,000 settlement that aimed to end the dispute. The allegations came after a year of repeated layoffs and restructuring at the Swiss-based music tech company.

A federal judge in San Francisco ruled Monday (Oct. 30) that artificial intelligence (AI) firm Stability AI could not dismiss a lawsuit claiming it had “trained” its platform on copyrighted images, though he also sided with AI companies on key questions.

In an early-stage order in a closely watched case, Judge William Orrick found many defects in the lawsuit’s allegations, and he dismissed some of the case’s claims. But he allowed the case to move forward on its core allegation: That Stability AI built its tools by exploiting vast numbers of copyrighted works.

“Plaintiffs have adequately alleged direct infringement based on the allegations that Stability downloaded or otherwise acquired copies of billions of copyrighted images without permission to create Stable Diffusion, and used those images to train Stable Diffusion,” the judge wrote.

The ruling came in one of many cases filed against AI companies over how they use copyrighted content to train their models. Authors, comedians and visual artists have all filed lawsuits against companies including Microsoft, Meta and OpenAI, alleging that such unauthorized use by the fast-growing industry amounts to a massive violation of copyright law.

Last week, Universal Music Group and others filed the first such case involving music, arguing that Anthropic PBC was infringing copyrights en masse by using “vast amounts” of music to teach its software how to spit out new lyrics.

Rulings in the earlier AI copyright cases could provide important guidance on how such legal questions will be handled by courts, potentially impacting how UMG’s lawsuit and others like it play out in the future.

Monday’s decision came in a class action filed by artists Sarah Andersen, Kelly McKernan and Karla Ortiz against Stability AI Ltd. over its Stable Diffusion — an AI-powered image generator. The lawsuit also targeted Midjourney Inc. and DeviantArt Inc., two companies that use Stable Diffusion as the basis for their own image generators.

In his ruling, Judge Orrick dismissed many of the lawsuit’s claims. He booted McKernan and Ortiz from the case entirely and ordered the plaintiffs to re-file an amended version of their case with much more detail about the specific allegations against Midjourney and DeviantArt.

The judge also cast doubt on the allegation that every “output” image produced by Stable Diffusion would itself be a copyright-infringing “derivative” of the images that were used to train the model — a ruling that could dramatically limit the scope of the lawsuit. The judge suggested that such images might only be infringing if they themselves looked “substantially similar” to a particular training image.

But Judge Orrick included no such critiques for the central accusation that Stability AI infringed Andersen’s copyrights by using her works for training without permission — the basic allegation at the center of all of the AI copyright lawsuits, including the one filed by UMG. Andersen will still need to prove that accusation in future litigation, but the judge said she should be given the chance to do so.

“Even Stability recognizes that determination of the truth of these allegations — whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run — cannot be resolved at this juncture,” Orrick wrote in his decision.

Attorneys for Stability AI, Midjourney and DeviantArt did not return requests for comment. Attorneys for the artists praised the judge for allowing their “core claim” to move forward and onto “a path to trial.”

“As is common in a complex case, Judge Orrick granted the plaintiffs permission to amend most of their other claims,” said plaintiffs’ attorneys Joseph Saveri and Matthew Butterick after the ruling. “We’re confident that we can address the court’s concerns.”

President Joe Biden on Monday will sign a sweeping executive order to guide the development of artificial intelligence — requiring industry to develop safety and security standards, introducing new consumer protections and giving federal agencies an extensive to-do list to oversee the rapidly progressing technology.
The order reflects the government’s effort to shape how AI evolves in a way that can maximize its possibilities and contain its perils. AI has been a source of deep personal interest for Biden, with its potential to affect the economy and national security.

White House chief of staff Jeff Zients recalled Biden giving his staff a directive to move with urgency on the issue, having considered the technology a top priority.

“We can’t move at a normal government pace,” Zients said the Democratic president told him. “We have to move as fast, if not faster than the technology itself.”

In Biden’s view, the government was late to address the risks of social media, and now U.S. youth are grappling with related mental health issues. AI has the potential to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services, among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.

The order builds on voluntary commitments already made by technology companies. It’s part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate new text, images and sounds.

Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.

The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The order also touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.

An administration official who previewed the order on a Sunday call with reporters said the to-do lists within the order will be implemented and fulfilled over a range of 90 to 365 days, with the safety and security items facing the earliest deadlines. The official briefed reporters on condition of anonymity, as required by the White House.

Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes, despite other pressing matters including the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.

Biden was profoundly curious about the technology in the months of meetings that led up to drafting the order. His science advisory council focused on AI at two meetings and his Cabinet discussed it at two meetings. The president also pressed tech executives and civil society advocates about the technology’s capabilities at multiple gatherings.

“He was as impressed and alarmed as anyone,” deputy White House chief of staff Bruce Reed said in an interview. “He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”

The possibility of false images and sounds led the president to prioritize the labeling and watermarking of anything produced by AI. Biden also wanted to thwart the risk of older Americans getting a phone call from someone who sounded like a loved one, only to be scammed by an AI tool.

Meetings could go beyond schedule, with Biden telling civil society advocates in a ballroom of San Francisco’s Fairmont Hotel in June: “This is important. Take as long as you need.”

The president also talked with scientists and saw the upside that AI created if harnessed for good. He listened to a Nobel Prize-winning physicist talk about how AI could explain the origins of the universe. Another scientist showed how AI could model extreme weather like 100-year floods, as the past data used to assess these events has lost its accuracy because of climate change.

The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One.” The film’s villain is a sentient and rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes.

“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said Reed, who watched the film with the president.

With Congress still in the early stages of debating AI safeguards, Biden’s order stakes out a U.S. perspective as countries around the world race to establish their own guidelines. After more than two years of deliberation, the European Union is putting the final touches on a comprehensive set of regulations that targets the riskiest applications for the technology. China, a key AI rival to the U.S., has also set some rules.

U.K. Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend.

The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.

But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to make sure its policies reflected their concerns about AI’s real-world harms.

The American Civil Liberties Union is among the groups that met with the White House to try to ensure “we’re holding the tech industry and tech billionaires accountable” so that algorithmic tools “work for all of us and not just a few,” said ReNika Moore, director of the ACLU’s racial justice program.

Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement’s use of AI tools, including at U.S. borders.

“These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology,” Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups, and has been tied to mistaken arrests.

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.

This week: Universal Music Group (UMG) and other music companies file a hotly anticipated copyright lawsuit over how artificial intelligence (AI) models are trained; DJ Envy’s business partner Cesar Pina is hit with criminal charges claiming he ran a “Ponzi-like” fraud scheme; Megan Thee Stallion reaches a settlement with her former label to end a contentious legal battle; Fyre Fest fraudster Billy McFarland is hit with a civil lawsuit by a jilted investor in his new project; and more.

Want to get The Legal Beat newsletter in your email inbox every Tuesday? Subscribe here for free.

THE BIG STORY: AI Music Heads To Court

When UMG and several other music companies filed a lawsuit last week, accusing an artificial intelligence company called Anthropic PBC of violating its copyrights en masse to “train” its AI models, my initial reaction was: “What took so long?”

The creators of other forms of content had already been in court for months. A group of photographers and Getty Images sued Stability AI over its training practices in January, and a slew of book authors, including Game of Thrones writer George R.R. Martin and legal novelist John Grisham, sued ChatGPT-maker OpenAI over the same thing in June and again in September. And music industry voices, like the RIAA and UMG itself, had repeatedly signaled that they viewed such training as illegal.

For months, we asked around, scanned dockets and waited for the music equivalent. Was the delay a deliberate litigation strategy, allowing the fast-changing market and the existing lawsuits to play out more before diving in? Was the music business focusing on legislative, regulatory or business solutions instead of the judicial warpath it chose during the file-sharing debacle of the early 2000s?

Maybe they were just waiting for the right defendant. In a complaint filed in Nashville federal court on Oct. 18, UMG claimed that Anthropic — a company that got a $4 billion investment from Amazon last month — “unlawfully copies and disseminates vast amounts of copyrighted works” in the process of teaching its models to spit out new lyrics. The lengthy complaint, co-signed by Concord Music Group, ABKCO and other music publishers, echoed arguments made by many rightsholders in the wake of the AI boom: “Copyrighted material is not free for the taking simply because it can be found on the internet.”

Like the previous cases filed by photographers and authors, the new lawsuit poses something of an existential question for AI companies. AI models are only as good as the “inputs” they ingest; if federal courts make all copyrighted material off-limits for such purposes, it would not only make current models illegal but would undoubtedly hamstring further development.

The battle ahead will center on fair use — the hugely important legal doctrine that allows for the free use of copyrighted material in certain situations. Fair use might make you think of parody or criticism, but more recently, it’s empowered new technologies: In 1984, the U.S. Supreme Court ruled that the VCR was protected by fair use; in 2007, a federal appeals court ruled that Google Image search was fair use.

Are AI models, which imbibe millions of copyrighted works to create something new, the next landmark fair use? Or are they just a new form of copyright piracy on a vast new scale? We’re about to find out.

More key details about the AI case:

– The timing of the lawsuit would suggest that UMG is aiming for a carrot-and-stick approach when it comes to AI. On the same day the new case was filed, UMG announced that it was partnering with a company called BandLab Technologies to forge “an ethical approach to AI.” Hours later, news also broke that UMG and other labels were actively negotiating with YouTube on a new AI tool that would allow creators to make videos using the voices of popular (consenting) recording artists.

– The huge issue in the case is whether the use of training inputs amounts to infringement, but UMG’s lawyers also allege that Anthropic violates its copyrights with the outputs that its models spit out — that it sometimes simply presents verbatim lyrics to songs. That adds a different dimension to the case that’s not present in earlier AI cases filed by authors and photographers and could perhaps make it a bit easier for UMG to win.

– While it’s the first such case about music, it should be noted that the Anthropic lawsuit deals only with song lyrics — meaning not with sound recordings, written musical notation, or voice likeness rights. While a ruling in any of the AI training cases would likely set precedent across different areas of copyright, those specific issues will have to wait for a future lawsuit, or perhaps an act of Congress.

Go read the full story on UMG’s lawsuit, with access to the actual complaint filed in court.

Other top stories this week…

MEGAN THEE SETTLEMENT – Megan Thee Stallion reached an agreement with her record label 1501 Certified Entertainment to end more than three years of ugly litigation over a record deal that Megan calls “unconscionable.” After battling for more than a year over whether she owed another album under the contract, the two sides now say they will “amicably part ways.”

DJ ENVY SCANDAL DEEPENS – Cesar Pina, a celebrity house-flipper with close ties to New York City radio host DJ Envy, was arrested on federal charges that he perpetrated “a multimillion-dollar Ponzi-like investment fraud scheme.” Though Envy was not charged, federal prosecutors specifically noted that Pina had “partnered with a celebrity disc jockey and radio personality” — listed in the charges as “Individual-1” — to boost his reputation as a real estate guru. The charges came after months of criticism against Envy, who is named in a slew of civil lawsuits filed by alleged victims who say he helped promote the fraud.

FOOL ME ONCE… – Billy McFarland, the creator of the infamous Fyre Festival who served nearly four years in prison for fraud and lying to the FBI, is facing a new civil lawsuit claiming he ripped off an investor who gave him $740,000 for his new PYRT venture. The case was filed by Jonathan Taylor, a fellow felon who met McFarland in prison after pleading guilty to a single count of child sex trafficking.

AI-GENERATED CLOSING ARGS? – Months after ex-Fugees rapper Prakazrel “Pras” Michel was convicted on foreign lobbying charges, he demanded a new trial by making extraordinary accusations against his ex-lawyer David Kenner. Michel claims Kenner, a well-known L.A. criminal defense attorney, used an unproven artificial intelligence (AI) tool called EyeLevel.AI to craft closing arguments — and that he did so because he owned a stake in the tech platform. Kenner declined to comment, but EyeLevel has denied that Kenner has any equity in the company.

ROLLING STONES GET SATISFACTION – A federal judge dismissed a lawsuit accusing The Rolling Stones members Mick Jagger and Keith Richards of copying their 2020 single “Living in a Ghost Town” from a pair of little-known songs, ruling that the dispute — a Spanish artist suing two Brits — clearly didn’t belong in his Louisiana federal courthouse.

JUICE WRLD COPYRIGHT CASE – Dr. Luke and the estate of the late Juice WRLD were hit with a copyright lawsuit that claims they unfairly cut out one of the co-writers (an artist named PD Beats) from the profits of the rapper’s 2021 track “Not Enough.”

YouTube is planning to roll out a new artificial intelligence tool that will allow creators to make videos using the voices of popular recording artists — but inking deals with record companies to launch the beta version is taking longer than expected, sources tell Billboard.
The new AI tool, which YouTube had hoped to debut at its Made On YouTube event in September, will in beta let a select pool of artists give permission to a select group of creators to use their voices in videos on the platform. From there, the product could be released broadly to all users with the voices of artists who choose to opt in. YouTube is also looking to those artists to contribute input that will help steer the company’s AI strategy beyond this, sources say.

The major labels, Universal Music Group, Sony Music Entertainment and Warner Music Group, are still negotiating licensing deals that would cover voice rights for the beta version of the tool, sources say; a wide launch would require separate agreements.

Label leaders have made public statements in recent months about their commitments to embracing AI, with UMG CEO Lucian Grainge saying the technology could “amplify human imagination and enrich musical creativity in extraordinary new ways” and WMG CEO Robert Kyncl saying, “You have to embrace the technology, because it’s not like you can put technology in a bottle.” But some music executives worry they’ve given up some of their leverage in these initial deals, given that they want to be seen as proponents of progress and not as holding up innovation. Label executives are especially conscious of projecting that image now, having shortsightedly resisted the shift from CDs to downloads two decades ago, a stance that allowed Apple to unbundle the album and sent the music business into years of decline.

Some executives say it’s also been challenging to find top artists to participate in the new YouTube tool, with even some of the most forward-thinking acts hesitant to put their voices in the hands of unknown creators who could use them to make statements or sing lyrics they might not like.

The labels, sources say, view the deal as potentially precedent-setting for future AI deals, as well as creating a “framework,” as one source put it, for YouTube’s future AI initiatives. The key issues in negotiations are how the AI model is trained, whether artists will have the option to opt in (or out), and how monetization works: are artists paid for the use of their music as an input to the AI model, or for the output that’s created using the AI tool? While negotiations are taking time, label sources say YouTube is seen as an important, reliable early partner in this space, based on the platform’s work developing its Content ID system, which identifies and monetizes copyrighted material in user-generated videos.

Publishing, meanwhile, is even more complicated: even with a small sampling of artists at the beta launch, there could be hundreds of songwriters with credits across their catalogs, all of which would be sampled by the model. Because of this, a source suggests that YouTube may prefer paying a lump-sum licensing fee that publishers would then need to figure out how to divide among their writers.

As complicated as the deal terms may be, sources say music rights holders are acting in good faith to get a deal done. That’s because there’s a dominant belief that this sort of technology is inevitable, and that if the music business doesn’t come to the table to create licensing deals now, it will get left behind. However, one source familiar with the negotiations says this attitude also puts music companies at a disadvantage because it leaves less room to drive a hard bargain.

For months, AI soundalike tools that synthesize vocals to mimic famous artists have been garnering attention and triggering debate. The issue hit the mainstream in April, when an anonymous musician calling himself Ghostwriter released a song to streaming services featuring soundalike versions of Drake and The Weeknd that he said were created with artificial intelligence. The song was quickly taken down over copyright infringement in the recording, not over the likeness of the voices; in the aftermath a month later, Billboard reported that the streaming services seemed amenable to requests from the major labels to remove recordings with AI-generated vocals created to sound like popular artists.

In August, YouTube announced a new initiative with UMG artists and producers it called an “AI Music Incubator” that would “explore, experiment and offer feedback on the AI-related musical tools and products,” according to a blog post by Grainge at the time. “Once these tools are launched, the hope is that more artists who want to participate will benefit from and enjoy this creative suite.” That partnership was separate from the licensing negotiations currently taking place and the beta product in development.

On Wednesday, UMG, Concord Music Group, ABKCO and other music publishers filed a lawsuit against AI platform Anthropic PBC for using copyrighted song lyrics to “train” its software. This marked the first major lawsuit in what is expected to be a key legal battle over the future of AI music — and, as one source put it, a signal that major labels will litigate against AI companies they see as bad players.

Universal Music Group (UMG) and other music companies are suing an artificial intelligence platform called Anthropic PBC for using copyrighted song lyrics to “train” its software — marking the first major lawsuit in what is expected to be a key legal battle over the future of AI music.
In a complaint filed Wednesday morning (Oct. 18) in Nashville federal court, lawyers for UMG, Concord Music Group, ABKCO and other music publishers accused Anthropic of violating the companies’ copyrights en masse by using vast numbers of songs to help its AI models learn how to spit out new lyrics.

“In the process of building and operating AI models, Anthropic unlawfully copies and disseminates vast amounts of copyrighted works,” lawyers for the music companies wrote. “Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.”

A spokesperson for Anthropic did not immediately return a request for comment.

The new lawsuit is similar to cases filed by visual artists over the unauthorized use of their works to train AI image generators, as well as cases filed by authors like Game of Thrones writer George R.R. Martin and novelist John Grisham over the use of their books. But it’s the first to squarely target music.

AI models like the popular ChatGPT are “trained” to produce new content by feeding them vast quantities of existing works known as “inputs.” In the case of AI music, that process involves huge numbers of songs. Whether doing so infringes the copyrights to that underlying material is something of an existential question for the booming sector, since depriving AI models of new inputs could limit their abilities.

Major music companies and other industry players have already argued that such training is illegal. Last year, the RIAA said that any use of copyrighted songs to build AI platforms “infringes our members’ rights.” In April, when UMG asked Spotify and other streamers to stop allowing AI companies to use their platforms to ingest music, it said it “will not hesitate to take steps to protect our rights.”

On Wednesday, the company took those steps. In the lawsuit, it said Anthropic “profits richly” from the “vast troves of copyrighted material that Anthropic scrapes from the internet.”

“Unlike songwriters, who are creative by nature, Anthropic’s AI models are not creative — they depend entirely on the creativity of others,” lawyers for the publishers wrote. “Yet, Anthropic pays nothing to publishers, their songwriters, or the countless other copyright owners whose copyrighted works Anthropic uses to train its AI models. Anthropic has never even attempted to license the use of Publishers’ lyrics.”

In the case ahead, the key battle line will be over whether the unauthorized use of proprietary music to train an AI platform is nonetheless legal under copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law.

Historically, fair use enabled critics to quote from the works they were dissecting, or parodists to use existing materials to mock them. But more recently, it’s also empowered new technologies: In 1984, the U.S. Supreme Court ruled that the VCR was protected by fair use; in 2007, a federal appeals court ruled that Google Image search was fair use.

In Wednesday’s complaint, UMG and the other publishers seemed intent on heading off any kind of fair use defense. They argued that Anthropic’s behavior would harm the market for licensing lyrics to AI services that actually pay for licenses — a key consideration in any fair use analysis.

“Anthropic is depriving Publishers and their songwriters of control over their copyrighted works and the hard-earned benefits of their creative endeavors, it is competing unfairly against those website developers that respect the copyright law and pay for licenses, and it is undermining existing and future licensing markets in untold ways,” the publishers wrote.

In addition to targeting Anthropic’s use of songs as inputs, the publishers claim that the material produced by the company’s AI model also infringes their lyrics: “Anthropic’s AI models generate identical or nearly identical copies of those lyrics, in clear violation of publishers’ copyrights.”

Such litigation might only be the first step in setting national policy on how AI platforms can use copyrighted music, with legislative efforts close behind. At a hearing in May, Sen. Marsha Blackburn (R-Tenn.) repeatedly grilled the CEO of the company behind ChatGPT about how he and others planned to “compensate the artist.”

“If I can go in and say ‘write me a song that sounds like Garth Brooks,’ and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use,” Blackburn said. “If it was radio play, it would be there. If it was streaming, it would be there.”