
YouTube is launching an experimental feature Thursday (Nov. 16) that will create artificial intelligence-generated voices of well-known artists for use in clips on YouTube Shorts. The initial selection of acts participating in the program includes Charlie Puth, John Legend, Sia, T-Pain, Demi Lovato, Troye Sivan, Charli XCX, Alec Benjamin and Papoose.

YouTube’s feature, called Dream Track, creates pieces of music — voice along with musical accompaniment — that are up to 30 seconds in length, based on text prompts. For now, around 100 U.S.-based creators will have Dream Track access.

“At this initial phase, the experiment is designed to help explore how the technology could be used to create deeper connections between artists and creators, and ultimately, their fans,” according to a blog post from Lyor Cohen, global head of music, and Toni Reid, vp of emerging experiences and community.

The music industry has been wary of AI this year, but several prominent executives voiced their support for Dream Track. “In this dynamic and rapidly evolving market, artists gain most when together we engage with our technology partners to work towards an environment in which responsible AI can take root and grow,” Universal Music Group chairman and CEO Lucian Grainge said in a statement. “Only with active, constructive and deep engagement can we build a mutually successful future together.”

“YouTube is taking a collaborative approach with this Beta,” Robert Kyncl, CEO of Warner Music Group, said in a statement of his own. “These artists are being offered the choice to lean in, and we’re pleased to experiment and find out what the creators come up with.” 

YouTube emphasized that Dream Track is an experiment. The artists involved are “excited to help us shape the future,” Cohen said in an interview. “Being part of this experiment allows them to do it.” That also means that, for now, some of the underlying details — how is the AI tech trained? how might this feature be monetized at scale? — remain fuzzy.

While the lawyers figure all that out, the artists involved in Dream Track sounded enthusiastic. Demi Lovato: “I am open minded and hopeful that this experiment with Google and YouTube will be a positive and enlightening experience.” John Legend: “I am happy to have a seat at the table, and I look forward to seeing what the creators dream up during this period.” Sia: “I can’t wait to hear what kinds of recipes all you creators out there come up with.” 

While YouTube’s AI-generated voices are likely to get the most attention, the platform also announced the release of new AI music tools. These build on lessons learned from the “AI Music Incubator” the platform announced in August, according to Demis Hassabis, CEO of Google DeepMind. Through that program, “some of the world’s most famous musicians have given feedback on what they would like to see, and we’ve been inspired by that to build out the technology and the tools in certain ways so that it would be useful for them,” Hassabis explained in an interview.

He ticked off a handful of examples: An artist can hum something and AI-powered technology will create an instrumental based on the tune; a songwriter can pen two musical phrases on their own and rely on the tools to help craft a transition between them; a singer can come in with a fully fledged vocal melody and ask the tech to come up with musical accompaniment.   

Finally, YouTube is rolling out another feature called SynthID, which will watermark any of the AI-generated audio it produces so it can be identified as such. Earlier this week, the platform announced that it would provide labels and other music rights holders the ability “to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice.”

Moises, an AI music and audio start-up, has partnered with HYPERREAL, a visual effects company, to create a “proprietary digital human asset” called Hypermodel. This will allow artists to create digital versions of themselves for marketing, creative and fan engagement purposes.

HYPERREAL has been collaborating with musicians since 2021, when it worked with Paul McCartney and Beck on the music video for “Find My Way.” In the video, Beck went undercover as a younger version of 81-year-old McCartney, using HYPERREAL to swap and de-age their faces.

Moises is a popular AI music and audio company that provides a suite of tools for musicians, including stem separation, lyric transcription, and voice synthesis.

According to the press release, Moises and HYPERREAL believe the collaboration will be especially helpful to the estates of legacy artists looking to bring an artist’s legacy “to life.” It will also allow artists to sing or speak in other languages using AI voice modeling provided by Moises, helping to localize songs and marketing content to specific regions.

Translations and estate or legacy-artist marketing are seen as two of the most sought-after new applications of AI for musicians. Last week, pop artist Lauv collaborated with AI voice start-up Hooky to translate his song “Love U Like That” into Korean as a thank-you to his steadfast fanbase in the region. This is not the first time AI has been used to translate an artist’s voice — it was first employed in May by MIDNATT, a Korean artist who used the HYBE-owned voice synthesis company Supertone to translate his debut single into six languages — but Lauv is the first popular Western artist to try it.

Estates are starting to leverage AI as well to essentially bring a late artist back to life. On Tuesday (Nov. 14), Warner Music announced plans to use AI to recreate the voice and image of legendary “La Vie En Rose” singer Edith Piaf for an upcoming biopic about her life and career. Over in Korea, Supertone remade the voice of late South Korean folk artist Kim Kwang-seok, and Tencent’s Lingyin Engine made headlines for developing “synthetic voices in memory of legendary artists,” like Teresa Teng and Anita Mui, as a way to revive interest in their catalogs.

“Moises and HYPERREAL are each best-in-class players with a history of pushing creative boundaries enabled by technology while fully respecting the choices of artists and rights holders,” says Moises CEO Geraldo Ramos. “As their preferred partner, we’re looking forward to seeing the ways HYPERREAL can leverage Moises’s voice modeling capabilities to add incredibly realistic voices to their productions.”

“We have set the industry standard and exceeded the expectations of the most demanding directors and producers time and time again,” says Remington Scott, founder and CEO of HYPERREAL. “In addition to Moises’s artist-first approach, the quality of their voice models is the best we’ve heard.”

YouTube will introduce the ability for labels and other music rights holders “to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice,” according to a blog post published on Tuesday (Nov. 14).

Access to the request system will initially be limited: “These removal requests will be available to labels or distributors who represent artists participating in YouTube’s early AI music experiments.” However, the blog, written by vice presidents of product management Jennifer Flannery O’Connor and Emily Moxley, noted that YouTube will “continue to expand access to additional labels and distributors over the coming months.”

This marks the latest step by YouTube to try to assuage music industry fears about new AI-powered technologies — and also position itself as a leader in the space. 

In August, YouTube published its “principles for partnering with the music industry on AI technology.” Chief among them: “it must include appropriate protections and unlock opportunities for music partners who decide to participate,” wrote CEO Neal Mohan.

YouTube also partnered with a slew of artists from Universal Music Group on an “AI music incubator.” “Artists must play a central role in helping to shape the future of this technology,” the Colombian star Juanes said in a statement at the time. “I’m looking forward to working with Google and YouTube… to assure that AI develops responsibly as a tool to empower artists.”

In September, at the annual Made on YouTube event, the company announced a new suite of AI-powered video and audio tools for creators. Creators can type in an idea for a backdrop, for example, and a new feature dubbed “Dream Screen” will generate it for them. Similarly, AI can assist creators in finding the right songs for their videos.

In addition to giving labels the ability to request the takedown of unauthorized imitations, YouTube promised on Tuesday to roll out enhanced labels so that viewers know they are interacting with content that “is synthetic”: “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.” 

TikTok announced a similar feature in September. Of course, self-disclosure has its limits — especially as many creators are already reported to experiment with AI without admitting it.

According to YouTube, “creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.”

Warner Music has announced plans to use AI technology to recreate the voice and image of legendary French artist Edith Piaf in an upcoming full-length animated film. Titled EDITH, the project is being developed by production company Seriously Happy and Warner Music Entertainment in partnership with Piaf’s estate.

EDITH is set to be a 90-minute film chronicling the life and career of the famous singer as she traveled between Paris and New York. The voice clone of Piaf will narrate the story, revealing previously unknown details about her life.

The AI models used to aid EDITH’s storytelling were trained on hundreds of voice clips and images of the late French singer-songwriter to, as a press release puts it, “further enhance the authenticity and emotional impact of her story.” The story will also feature recordings of her songs “La Vie En Rose” and “Non, Je Ne Regrette Rien,” which are part of the Warner Music catalog.

The story will be told through a mix of animation and archival footage of the singer’s life, including clips of her stage and TV performances, interviews and personal archives. EDITH is the brainchild of Julie Veille, who previously created other French-language music biographies such as Stevie Wonder: Visionnaire et prophète; Diana Ross, suprême diva; and Sting, l’électron libre. The screenplay was written by Veille and Gilles Marliac and will be developed alongside Warner Music Entertainment president Charlie Cohen. A proof of concept has been created, and the team will soon partner with a studio to develop it into a full-length film.

This is not the first time AI voice clones have been used to aid in the storytelling of a film. Perhaps the most cited example is Roadrunner (2021), a documentary about the life of chef and TV host Anthony Bourdain, who died in 2018. AI was used to bring back Bourdain’s voice for about 45 seconds, during which a deepfaked Bourdain read aloud to the audience a letter he had written during his life.

Visual AI and other forms of CGI have also been employed in movies in recent years to resurrect the likenesses of deceased icons, including Carrie Fisher, Harold Ramis and Paul Walker. Even James Dean, who died in 1955 after starring in only three films, is currently being recreated using AI for an upcoming film titled Back to Eden.

The EDITH project is likely just the start of estates using AI voice or likeness recreation to rejuvenate the relevance of deceased artists and grow the value of older music catalogs. Already, HYBE-owned AI voice synthesis company Supertone remade the voice of late South Korean folk artist Kim Kwang-seok, and Tencent’s Lingyin Engine made headlines for developing “synthetic voices in memory of legendary artists,” like Teresa Teng and Anita Mui.

Veille says, “It has been the greatest privilege to work alongside Edith’s Estate to help bring her story into the 21st century. When creating the film we kept asking ourselves, ‘if Edith were still with us, what messages would she want to convey to the younger generations?’ Her story is one of incredible resilience, of overcoming struggles, and defying social norms to achieve greatness – and one that is as relevant now as it was then. Our goal is to utilize the latest advancements in animation and technology to bring the timeless story to audiences of all ages.”

Catherine Glavas and Christie Laume, executors of Edith Piaf’s estate, add, “It’s been a special and touching experience to be able to hear Edith’s voice once again – the technology has made it feel like we were back in the room with her. The animation is beautiful and through this film we’ll be able to show the real side of Edith – her joyful personality, her humor and her unwavering spirit.”

Alain Veille, CEO of Warner Music France, says, “Edith is one of France’s greatest ever artists and she is still a source of so much pride to the French people. It is such a delicate balancing act when combining new technology with heritage artists, and it was imperative to us that we worked closely with Edith’s estate and handled this project with the utmost respect. Her story is one that deserves to be told, and through this film we’ll be able to connect with a whole new audience and inspire a new generation of fans.”

Diaa El All, CEO/founder of generative artificial intelligence music company Soundful, remembers when the first artists were signed to major label deals based on songs using type beats — cheap, licensable beats available online that are labeled based on the artists the beat emulates (e.g., Drake Type Beat, XXXTentacion Type Beat). He also remembers the legal troubles that followed. “Those type beats are licensed to sometimes thousands of people at a time,” he explains. “If it becomes a hit for one artist, then that artist ends up with major problems to unravel.”

Perhaps the most famous example of this is Lil Nas X and his breakthrough smash “Old Town Road,” which was written over a $30 Future type beat that was also licensed by other DIY talents. After the song went viral in early 2019, the then-unknown rapper and meme maker quickly inked a deal with Columbia Records, but beneath the song’s mammoth success lay a tangle of legal issues to sort through. For one thing, the song’s type beat included an unauthorized sample of Nine Inch Nails’ “34 Ghosts IV,” which was not disclosed to Lil Nas X when he purchased it.

El All’s solution to these issues may seem counter-intuitive, but he posits that his AI models could provide an ethical alternative to the copyright nightmares of the type beat market.

Starting Wednesday (Nov. 8), Soundful is launching Soundful Collabs, a program that partners with artists, songwriters and producers in various genres — including Kaskade, Starrah, 3LAU, DJ White Shadow, Autograf and CB Mix — to train personalized AI generators that create beats akin to their specific production and writing styles. To create a realistic model, the artists, songwriters and producers provide Soundful with dozens of their favorite one-shot recordings of kick drums, snares, guitar licks and synth patches from their personal sonic libraries, as well as information about how they typically construct chord progressions and song structures.

The result is individualized AI models that can generate endless one-of-a-kind tracks that echo a hitmaker’s style while compensating them for the use of their name and sonic identity. For $15, a Soundful subscriber can download up to 10 tracks the generator comes up with. This includes stems so the user can add or subtract elements of the track to suit their tastes after exporting it to a digital audio workstation (DAW) of their choice. The hitmaker receives 80% of the monies earned from the collaboration while Soundful retains 20% — a split El All says was inspired by “flipping” major record labels’ common 80/20 split in favor of the artist.

The Soundful leader, who has a background as a classical pianist and sound engineer, sees this as a novel form of musical “merchandise” that offers talent an additional revenue stream and a chance at fostering further fan engagement and user-generated content (UGC). “We don’t use any loops, we don’t use any previous tracks as references,” El All says. As a result, he argues the product’s profits belong only to the talent, not their record label or publishers, given that it does not use any of their copyrights. Still, he says he’s met with “a lot of publishers” and some labels about the new product. (El All admits that an artist in a 360 deal — a contract which grants labels a cut of money from touring, merchandise and other forms of non-recorded music income — may have to share proceeds with their label.)

According to Kaskade, who has been a fan of Soundful’s since he tested the original beta product earlier this year, the process of training his model felt like “Splice on crack — this is the next evolution of the Splice sample packs,” where producers offer fans the opportunity to purchase a pack of their favorite loops and samples for a set price, he explains. “[With sample packs] you got access to the sounds, but now, you get an AI generator to help you put it all together.”

The new Soundful product is illustrative of a larger trend in AI towards personalized models. On Monday, OpenAI, the leading AI company behind ChatGPT and DALL-E, announced that it was launching “GPTs” – a new service that allows small businesses and individuals to build customized versions of ChatGPT attuned to their personal needs and interests.

This trend is also present in music AI, with many companies offering personalized models and collaborations with talent. This is especially popular on the voice synthesis side of the nascent industry. So far, start-ups like Kits AI, Voice-Swap, Hooky, CreateSafe and more are working with artists to feed recordings of their voices into AI models to create realistic clones of their voices for fans or the artists themselves to use — Grimes’ model being the most notable to date. Though much more ethically questionable, the popularity of Ghostwriter’s “Heart On My Sleeve” — which employed a voice model to emulate Drake and The Weeknd and which was not authorized by the artists — also proved the appetite for personalized music models.

Notably, Soundful’s product has the potential to be a producer and songwriter-friendly counterpart to voice models, which present possible monetary benefits (and threats) to recording artists and singers but do not pertain to the craftspeople behind the hits, who generally enjoy fewer financial opportunities than the artists they work with. As Starrah — who has written “Havana” by Camila Cabello, “Pick Up The Phone” by Young Thug and Travis Scott and “Girls Like You” by Maroon 5 — explains, Soundful Collabs are “an opportunity for songwriters and producers to expand what they are doing in so many ways.”

El All says keeping the needs of the producer and songwriter communities in mind was paramount in the creation of this product. For the first time, he reveals that longtime industry executive, producer manager and Hallwood Media founder Neil Jacobson is on Soundful’s founding team and board. El All says Jacobson’s expertise proved instrumental in steering the Soundful Collabs project in a direction that El All feels could “change the industry for the better.” “I think what Soundful provides here is similar to what I do in my own business,” says Jacobson. “I supply music to people who need it — with Soundful, a fan of one of these artists who wants to make music but doesn’t quite know how to work a digital audio workstation can get the boost they need to start creating.”

El All says the new product will extend beyond personalization for current songwriters, producers and artists. The Soundful team is also in talks with catalog owners and estates and working with a number of top brands in the culinary, consumer goods, hospitality, children’s entertainment and energy industries to train personalized models to create sonic “brand templates” and “generative catalogs” to be used in social media content. “This will help them create a very clear signature identification via sound,” says El All.

When asked if this business-to-business application takes away synch licensing opportunities from composers, El All counters that some of these companies were using royalty-free libraries prior to meeting with Soundful. “We’re actually creating new opportunities for musicians because we are consistently hiring those specializing in synch and sound designers to continue to evolve the brand’s sound,” he says.

In the future, Soundful will drop more artist templates every four to six weeks, and its Collabs will expand into genres like Latin, lo-fi, rock, pop and more. “Though this sounds good out of the box … what will make the music a hit is when a person downloads these stems and adds their own human imperfections and style to it,” says El All. “That’s what we are looking to encourage. It’s a jumping off point.”

Korean artist MIDNATT made history earlier this year by using AI to help him translate his debut single “Masquerade” into six different languages. Though it wasn’t a major commercial success, its seamless execution by the HYBE-owned voice synthesis company Supertone proved there was a new, positive application of musical AI on the horizon that went beyond unauthorized deepfakes and (often disappointing) lo-fi beats.

Enter Jordan “DJ Swivel” Young, a Grammy-winning mixing engineer and producer best known for his work with Beyonce, BTS and Dua Lipa. His new AI voice company Hooky is one of many start-ups trying to popularize voice cloning, but unlike much of his competition, Young is still an active and well-known collaborator for today’s musical elite. After connecting with pop star Lauv, whom he had worked with briefly years before as an engineer, Young’s Hooky developed an AI model of Lauv’s voice so that they could translate the singer-songwriter’s new single “Love U Like That” into Korean.

Lauv is the first major Western artist to take part in the AI translation trend. He wants the new translated version of “Love U Like That” to be a way of showing his love to his Korean fanbase and to celebrate his biggest headline show to date, which recently took place in Seoul.

Though many fans around the world listen to English-language music in high numbers, Will Page, author and former chief economist at Spotify, and Chris Dalla Riva, a musician and Audiomack employee, noted in a recent report that many international audiences are increasingly turning their interest back to their local-language music – a trend they nicknamed “Glocalization.” With Hooky, Supertone and other AI voice synthesis companies all working to master translation, English-speaking artists now have the opportunity to participate in this growing movement and to form tighter bonds with international fans.

To explain the creation of “Love U Like That (Korean Version),” out Wednesday (Nov. 8), Lauv and Young spoke to Billboard in an exclusive interview.

When did you first hear what was possible with AI voice filters?

Lauv: I think the first time was that Drake and The Weeknd song [“Heart On My Sleeve” by Ghostwriter]. I thought it was crazy. Then when my friend and I were working on my album, we started playing each other’s music. He pulled out a demo. They were pitching it to Nicki Minaj, and he was singing it and then put it into Nicki Minaj’s voice. I remember thinking it’s so insane this is possible.

Why did you want to get involved with AI voice technology yourself?

Lauv: I truly believe that the only way forward is to embrace what is possible now, no matter what. I think being able to embrace a tool like this in a way that’s beneficial and able to get artists paid is great. 

Jordan, how did you get acquainted with Lauv, and why did you feel he was the right artist to mark your first major collaboration? 

Jordan “DJ Swivel” Young: We’ve done a lot of general outreach to record companies, managers, etcetera. We met Range Media Partners, Lauv’s management team, and they really resonated with Hooky. The timing was perfect: he was wrapping up his Asian tour and had done the biggest show of his life in South Korea. Plus, he has done a few collaborations with BTS. I’ve worked on a number of BTS songs too. There was a lot of synergy between us.

Why did you choose Korean as the language that you wanted to translate a song into?

Lauv: Well, in the future, I would love to have the opportunity to do this in as many different languages as possible, but Seoul has been a place that has become really close to my heart, and it was the place of my biggest headline show to date. I just wanted to start by doing something special for those Korean fans. 

What is the process of actually translating the song? 

Young: We received the original audio files for the song “Love U Like That,” and we rewrote the song with former K-Pop idol Kevin Woo. The thing with translating lyrics or poetry is it can’t be a direct translation. You have to make culturally appropriate choices, words that flow well. So Kevin did that and we re-recorded Kevin’s voice singing the translation, then we mixed the song again exactly as the original was done to match it sonically. All the background vocals were at the correct volume and the right reverbs were used. I think we’ve done a good job of matching it. Then we used our AI voice technology to match Lauv’s voice, and we converted Kevin’s Korean version into Lauv’s voice. 

Lauv: To help them make the model of my voice, I sent over a bunch of raw vocals that were just me singing in different registers. Then I met up with him and Kevin. It was riveting to hear my voice like that. I gave a couple of notes – very minor things – after hearing the initial version of the translation, and then they went back and modified. I really trusted Jordan and Kevin on how to make this authentic and respectful to Korean culture.

Is there an art to translating lyrics?

Lauv: Totally. When I was listening back to it, that’s what struck me. There’s certain parts that are so pleasing to the ear. I still love hearing the Korean version phonetically as someone from the outside. Certain parts of Kevin’s translation, like certain rhythm schemes, hit me so much harder than hearing it in English actually.

Do you foresee that there will be more opportunities for translators as this space develops?

Young: Absolutely. I call them songwriters more than translators though, actually. They play a huge role. I used to work with Beyonce as an engineer, and I’ve watched her do a couple songs in Spanish. It required a whole new vocal producer, a new team just to pull off those songs. It’s daunting to sing something that’s not your natural language. I even did some Korean background vocals myself on a BTS song I wrote. They provided me with the phonetics, and I can say it was honestly the hardest thing I’ve ever recorded. It’s hard to sing with the right emotion when you’re focused on pronouncing things correctly. But Hooky allows the artist to perform in other languages with all the emotion that’s expected. Sure, there’s another songwriter doing the Korean performance, but Lauv was there for the whole process. His fingerprint is on it from beginning to end. I think this is the future of how music will be consumed.

I think this could bring more opportunities for the mixing engineers too. When Dolby Atmos came out, it offered more chances for mixers, and with translations, I think there are now even more opportunities. I think it’s empowering the songwriter, the engineer, and the artist all at once. There could even be a new opportunity created for a demo singer, if it’s different from the songwriter who translated the song.

Would you be open to making your voice model that you used for this song available to the public to use?

Lauv: Without thinking it through too much, I think my ideal self is a very open person, and so I feel like I want to say hell yeah. If people have song ideas and want to hear my voice singing their ideas, why not? As long as it’s clear to the world which songs were written and made by me and what was written by someone else using my voice tone. As long as the backend stuff makes sense, I don’t see any reason why not. 

Offering a preview of arguments the company might make in its upcoming legal battle with Universal Music Group (UMG), artificial intelligence (AI) company Anthropic PBC told the U.S. Copyright Office this week that the massive scraping of copyrighted materials to train AI models is “quintessentially lawful.”

Music companies, songwriters and artists have argued that such training represents an infringement of their works at a vast scale, but Anthropic told the federal agency Monday (Oct. 30) that it was clearly allowed under copyright’s fair use doctrine.

“The copying is merely an intermediate step, extracting unprotectable elements about the entire corpus of works, in order to create new outputs,” the company wrote. “This sort of transformative use has been recognized as lawful in the past and should continue to be considered lawful in this case.”

The filing came as part of an agency study aimed at answering thorny questions about how existing intellectual property laws should be applied to the disruptive new tech. Other AI giants, including OpenAI, Meta, Microsoft, Google and Stability AI, all lodged similar filings explaining their views.

But Anthropic’s comments will be of particular interest in the music industry because that company was sued last month by UMG over the very issues in question in the Copyright Office filing. The case, the first filed over music, claims that Anthropic unlawfully copied “vast amounts” of copyrighted songs when it trained its Claude AI tool to spit out new lyrics.

In the filing at the Copyright Office, Anthropic argued that such training was a fair use because it copied material only for the purpose of “performing a statistical analysis of the data” and was not “re-using the copyrighted expression to communicate it to users.”

“To the extent copyrighted works are used in training data, it is for analysis (of statistical relationships between words and concepts) that is unrelated to any expressive purpose of the work,” the company argued.

UMG is sure to argue otherwise, but Anthropic said legal precedent was clearly on its side. Notably, the company cited a 2015 ruling by a federal appeals court that Google was allowed to scan and upload millions of copyrighted books to create its searchable Google Books database. That ruling and others established the principle that “large-scale copying” was a fair use when done to “create tools for searching across those works and to perform statistical analysis.”

“The training process for Claude fits neatly within these same paradigms and is fair use,” Anthropic’s lawyers wrote. “Claude is intended to help users produce new, distinct works and thus serves a different purpose from the pre-existing work.”

Anthropic acknowledged that the training of AI models could lead to “short-term economic disruption.” But the company said such problems were “unlikely to be a copyright issue.”

“It is still a matter that policymakers should take seriously (outside of the context of copyright) and balance appropriately against the long-term benefits of LLMs on the well-being of workers and the economy as a whole by providing an entirely new category of tools to enhance human creativity and productivity,” the company wrote.

In the TikTok era, homemade remixes of songs — typically single tracks that have been sped up or slowed down, or two tracks mashed together — have become ever more popular. Increasingly, they are driving viral trends on the platform and garnering streams off of it. 

Just how popular? In April, Larry Mills, senior vp of sales at the digital rights tech company Pex, wrote that Pex’s tech found “hundreds of millions of modified audio tracks distributed from July 2021 to March 2023,” which appeared on TikTok, SoundCloud, Audiomack, YouTube, Instagram and more. 

On Wednesday (Nov. 1), Mills shared the results of a new Pex analysis — expanded to include streaming services like Spotify, Apple Music, Deezer, and Tidal — estimating that “at least 1% of all songs on [streaming platforms] are modified audio.”

“We’re talking more than 1 million unlicensed, manipulated songs that are diverting revenue away from rightsholders this very minute,” Mills wrote, pointing to homemade reworks of tracks by Halsey or OneRepublic that have amassed millions of plays. “These can generate millions in cumulative revenue for the uploaders instead of the correct rightsholders.”

Labels try to execute a tricky balancing act with user-generated remixes. They usually take down the most popular unauthorized reworks on streaming services and move to release their own official versions in an attempt to pull those plays in-house. But they also find ways to encourage fan remixing, because it remains an effective form of music marketing at a time when most promotional strategies have proved toothless. “Rights holders understand that this process is inevitable, and it’s one of the best ways to bring new life to tracks,” Meng Ru Kuok, CEO of music technology company BandLab, told Billboard earlier this year.

Mills argues that the industry needs a better system for tracking user-generated remixes and making sure royalties are going into the right pockets. “While these hyper-speed remixes may make songs go viral,” he wrote in April, “they’re also capable of diverting royalty payments away from rights holders and into the hands of other creators.” 

Since Pex sells technology for identifying all this modified audio, it’s not exactly an unbiased party. But it’s notable that streaming services and distributors don’t have the best track record when it comes to keeping unauthorized content of any kind off their platforms.

It hasn’t been unusual to find leaked songs — especially from rappers with impassioned fan bases like Playboi Carti and Lil Uzi Vert — on Spotify, where leaked tracks can often be found climbing the viral chart, or on TikTok. An unreleased PinkPantheress song sampling Michael Jackson’s classic “Off the Wall” is currently hiding in plain sight on Spotify, masquerading as a podcast.

“Historically, streaming services don’t have an economic incentive to actually care about that,” Deezer CEO Jeronimo Folgueira told Billboard earlier this year. “We don’t care whether you listen to the original Drake, fake Drake, or a recording of the rain. We just want you to pay $10.99.” Folgueira called that incentive structure “actually a bad thing for the industry.”

In addition, many of the distribution companies that act as middlemen between artists and labels and the streaming services operate on a volume model — the more content they upload, the more money they make — which means it’s not in their financial interest to look closely at what they send along to streaming services. 

However, the drive to improve this system has taken on new urgency this year. Rights holders and streaming services are going back and forth over how streaming payments should work and whether “an Ed Sheeran stream is worth exactly the same as a stream of rain falling on the roof,” as Warner Music Group CEO Robert Kyncl told financial analysts in May. As the industry starts to move to a system where all streams are no longer created equal, it becomes increasingly important to know exactly what’s on these platforms so it can sort different streams into different buckets.

In addition, the advance of artificial intelligence-driven technology has allowed for easily accessible and accurate-sounding voice-cloning, which has alarmed some executives and artists in a way that sped-up remixes have not. “In our conversations with the labels, we heard that some artists are really pissed about this stuff,” says Geraldo Ramos, co-founder/CEO of the music-tech company Moises. “They’re calling their label to say, ‘Hey, it isn’t acceptable, my voice is everywhere.’”

This presents new challenges, but also perhaps means new opportunities for digital fingerprint technology companies, whether that’s stalwarts like Audible Magic or newer players like Pex. “With AI, just think how much the creation of derivative works is going to exponentially grow — how many covers are going to get created, how many remixes are gonna get created,” Audible Magic CEO Kuni Takahashi told Billboard this summer. “The scale of what we’re trying to identify and the pace of change is going to keep getting faster.”

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.

This week: Lizzo fights back against sexual harassment allegations with the help of a famous lawyer and a creative legal argument; a federal court issues an early ruling in an important copyright lawsuit over artificial intelligence; Kobalt is hit with a lawsuit alleging misconduct by one of the company’s former executives; and much more.

Want to get The Legal Beat newsletter in your email inbox every Tuesday? Subscribe here for free.

THE BIG STORY: Lizzo Hits Back With … Free Speech?

Three months after Lizzo and her touring company were accused of subjecting three of her backup dancers to sexual harassment, religious and racial discrimination and weight-shaming, her lawyers filed their first substantive response – and they didn’t hold back.

“Salacious and specious lawsuit.” “They have an axe to grind.” “A pattern of gross misconduct and failure to perform their job up to par.” “Fabricated sob story.” “Plaintiffs are not victims.” “They are opportunists.”

“Plaintiffs had it all and they blew it,” Lizzo’s lawyers wrote. “Instead of taking any accountability for their own actions, plaintiffs filed this lawsuit against defendants out of spite and in pursuit of media attention, public sympathy and a quick payday with minimal effort.”

That’s not exactly dry legalese, but it’s par for the course in a lawsuit that has already featured its fair share of blunt language from the other side. And it’s hardly surprising given that it came from Martin Singer – an infamously tough celebrity lawyer once described by the Los Angeles Times as “Hollywood’s favorite legal hit man.”

While Singer’s quotes made the headlines, it was his legal argument that caught my attention.

Rather than filing a normal motion to dismiss the case, Lizzo’s lawyers cited California’s so-called anti-SLAPP statute — a special type of law enacted in states around the country that makes it easier to end meritless lawsuits that threaten free speech (known as “strategic lawsuits against public participation”). Anti-SLAPP laws allow for such cases to be tossed out more quickly, and they sometimes require a plaintiff to repay the legal bills incurred by a defendant.

Anti-SLAPP motions are filed every day, but it’s pretty unusual to see one aimed at dismissing a sexual harassment and discrimination lawsuit filed by former employees against their employer. They’re more common in precisely the opposite scenario: filed by an individual who claims that they’re being unfairly sued by a powerful person to silence accusations of abuse or other wrongdoing.

But in Friday’s motion, Singer and Lizzo’s other lawyers argued that California’s anti-SLAPP law could also apply to the current case because of the creative nature of the work in question. They called the case “a brazen attempt to silence defendants’ creative voices and weaponize their creative expression against them.”

Will that argument hold up in court? Stay tuned…

Go read the full story about Lizzo’s defense, including access to the actual legal documents filed in court.

Other top stories this week…

RULING IN AI COPYRIGHT CASE – A federal judge issued an early-stage ruling in a copyright class action filed by artists against artificial intelligence (AI) firm Stability AI — one of several important lawsuits filed against AI companies over how they use copyrighted content. Though he criticized the case and dismissed many of its claims, the judge allowed it to move toward trial on its central, all-important question: Whether it’s illegal to train AI models by using copyrighted content.

HALLOWEEN SPECIAL – To celebrate today’s spooky holiday, Billboard turned back the clock all the way to 1988, when the studio behind “A Nightmare on Elm Street” sued Will Smith over a Fresh Prince song and music video that made repeated references to Freddy Krueger. To get the complete bizarre history of the case, go read our story here.

KOBALT FACES CASE OVER EX-EXEC – A female songwriter filed a lawsuit against Kobalt Music Group and former company executive Sam Taylor over allegations that he leveraged his position of power to demand sex from her – and that the company “ignored” and “gaslit” women who complained about him. The case came a year after Billboard’s Elias Leight first reported those allegations. Taylor did not return a request for comment; Kobalt has called the allegations against the company baseless, saying its employees never “condoned or aided any alleged wrongdoing.”

MF DOOM ESTATE BATTLE – The widow of late hip-hop legend MF Doom filed a lawsuit claiming the rapper’s former collaborator Egon stole dozens of the notebooks Doom used to write many of his beloved songs. The case claims that Egon took possession of the notebooks while Doom spent a decade in his native England due to visa issues, remaining there until his death in 2020. Egon’s lawyers called the allegations “frivolous and untrue.”

DJ ENVY FRAUD SCANDAL UPDATE – Cesar Pina, a celebrity house-flipper who was charged earlier this month with running a “Ponzi-like investment fraud scheme,” said publicly last week that New York City radio host DJ Envy had “nothing to do” with the real estate deals in question. Critics have argued that Envy, who hosts the popular hip-hop radio show The Breakfast Club, played a key role in Pina’s alleged fraud by promoting him on the air.

UTOPIA SUED AGAIN OVER FAILED DEAL – Utopia Music was hit with another lawsuit over an aborted $26.5 million deal to buy a U.S. music technology company called SourceAudio, this time over allegations that the company violated a $400,000 settlement that aimed to end the dispute. The allegations came after a year of repeated layoffs and restructuring at the Swiss-based music tech company.

A federal judge in San Francisco ruled Monday (Oct. 30) that artificial intelligence (AI) firm Stability AI could not dismiss a lawsuit claiming it had “trained” its platform on copyrighted images, though he also sided with AI companies on key questions.

In an early-stage order in a closely watched case, Judge William Orrick found many defects in the lawsuit’s allegations, and he dismissed some of the case’s claims. But he allowed the case to move forward on its core allegation: That Stability AI built its tools by exploiting vast numbers of copyrighted works.

“Plaintiffs have adequately alleged direct infringement based on the allegations that Stability downloaded or otherwise acquired copies of billions of copyrighted images without permission to create Stable Diffusion, and used those images to train Stable Diffusion,” the judge wrote.

The ruling came in one of many cases filed against AI companies over how they use copyrighted content to train their models. Authors, comedians and visual artists have all filed lawsuits against companies including Microsoft, Meta and OpenAI, alleging that such unauthorized use by the fast-growing industry amounts to a massive violation of copyright law.

Last week, Universal Music Group and others filed the first such case involving music, arguing that Anthropic PBC was infringing copyrights en masse by using “vast amounts” of music to teach its software how to spit out new lyrics.

Rulings in the earlier AI copyright cases could provide important guidance on how such legal questions will be handled by courts, potentially impacting how UMG’s lawsuit and others like it play out in the future.

Monday’s decision came in a class action filed by artists Sarah Andersen, Kelly McKernan and Karla Ortiz against Stability AI Ltd. over Stable Diffusion, an AI-powered image generator. The lawsuit also targeted Midjourney Inc. and DeviantArt Inc., two companies that use Stable Diffusion as the basis for their own image generators.

In his ruling, Judge Orrick dismissed many of the lawsuit’s claims. He booted McKernan and Ortiz from the case entirely and ordered the plaintiffs to re-file an amended version of their case with much more detail about the specific allegations against Midjourney and DeviantArt.

The judge also cast doubt on the allegation that every “output” image produced by Stable Diffusion would itself be a copyright-infringing “derivative” of the images that were used to train the model — a ruling that could dramatically limit the scope of the lawsuit. The judge suggested that such images might only be infringing if they themselves looked “substantially similar” to a particular training image.

But Judge Orrick included no such critiques of the central accusation that Stability AI infringed Andersen’s copyrights by using her works for training without permission — the basic allegation at the center of all of the AI copyright lawsuits, including the one filed by UMG. Andersen will still need to prove that accusation is true in future litigation, but the judge said she should be given the chance to do so.

“Even Stability recognizes that determination of the truth of these allegations — whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run — cannot be resolved at this juncture,” Orrick wrote in his decision.

Attorneys for Stability AI, Midjourney and DeviantArt did not return requests for comment. Attorneys for the artists praised the judge for allowing their “core claim” to move forward and onto “a path to trial.”

“As is common in a complex case, Judge Orrick granted the plaintiffs permission to amend most of their other claims,” said plaintiffs’ attorneys Joseph Saveri and Matthew Butterick after the ruling. “We’re confident that we can address the court’s concerns.”