Universal Music Group has entered a strategic partnership with AI music company KLAY. KLAY has not yet released any products, but the AI start-up is said to be developing what it calls a “Large Music Model,” dubbed “KLayMM,” which will “help humans create new music with the help of AI,” says […]
A new Spanish-language version of Brenda Lee’s holiday hit “Rockin’ Around The Christmas Tree” was released Friday (Oct. 25), using “responsibly-trained” artificial intelligence to make the translation. A perennial hit for 66 years and counting, “Rockin’ Around The Christmas Tree” is the biggest song to have been translated into a new language using AI.
Released via MCA Nashville/Universal Music Enterprises (UMe), “Noche Buena y Navidad” was revamped by four-time Latin Grammy award-winning producer Aureo Baqueiro. Baqueiro first translated the lyrics from English to Spanish, matching the phonetics and rhyming structure to what made sense in Spanish while preserving the lyrical themes of the original English version. He then enlisted Chile-born, L.A.-based vocalist Leyla Hoyle to sing the Spanish vocals in a way that would capture Lee’s unique voice patterns, including intricacies like phrasing, tone, and breaths.
Baqueiro ultimately kept the original music and background vocals and, once Hoyle recorded the raw Spanish vocals, used AI to map Lee’s voice over Hoyle’s performance. The translation was made possible by SoundLabs AI’s MicDrop technology, a “responsibly-trained” AI audio plugin that allows users to swap their voice for another voice or an instrument.
The new AI-powered translation arrives just in time for the holiday season. Despite being released back in 1958, Lee’s “Rockin’ Around the Christmas Tree” is more popular than ever. Last year, her song hit No. 1 on the Billboard Hot 100 for the first time and stayed in the top spot for three straight weeks.
Lee is not the first artist to use AI to translate her work. In May 2023, HYBE debuted a new K-pop artist, MIDNATT, who used AI to release his first single in six different languages. In November 2023, indie-pop artist Lauv released an AI translation of his single “Love U Like That” in Korean as a nod to his strong fanbase in that country. As with “Rockin’ Around the Christmas Tree,” a bilingual human songwriter, Kevin Woo, translated the “Love U Like That” lyrics into Korean. After Woo sang through the song, Lauv’s voice was mapped onto his vocals using the AI voice platform Hooky.
Finally, country icon Randy Travis made headlines in May 2024 by using AI voice technology to record a new single, “Where That Came From,” after his vocal abilities were greatly diminished by a near-fatal stroke a decade earlier.
The Spanish-language version of “Rockin’ Around the Christmas Tree” is the first release from Universal Music Group (UMG) and SoundLabs’ partnership, which was announced in June 2024. The AI company, founded by software developer and electronic artist BT, says its models are “responsibly” trained. The partnership is part of UMG’s “responsible AI initiative,” as laid out by the company’s chairman/CEO Lucian Grainge, which involves “forg[ing] groundbreaking private-sector partnerships with AI technology companies.”
“I am so blown away by this new Spanish version of ‘Rockin’ Around The Christmas Tree,’ which was created with the help of AI,” said Lee in a statement. “Throughout my career, I performed and recorded many songs in different languages, but I never recorded ‘Rockin’’ in Spanish, which I would have loved to do. To have this out now is pretty incredible, and I’m happy to introduce the song to fans in a new way.”
“We are thrilled to work with Brenda Lee to make ‘Rockin’ Around The Christmas Tree’ the first classic song translated responsibly into another language with the power of AI,” added UMe president/CEO Bruce Resnikoff. “We are also very excited about the possibilities of this emerging technology and look forward to harnessing its capabilities to introduce new material created by and approved by our artists.”
“The minute you hear Brenda Lee’s iconic voice on ‘Rockin’ Around The Christmas Tree’ you know it’s the official start of Christmas,” said UMG Nashville chair/CEO Cindy Mabe. “The global hit has touched people all over the world and kept this young 13-year-old spirit of Christmas captured in a time capsule. We are all so excited for this new Spanish version created with the help of AI from that legendary voice and approved by Brenda Lee herself to help celebrate this enduring, timeless classic.”
Thousands of musicians, composers, actors and authors from across the creative industries, including ABBA’s Björn Ulvaeus, all five members of Radiohead and The Cure’s Robert Smith, have signed a statement opposing the unlicensed use of their work by artificial intelligence companies and developers to train generative AI systems.
Signatories also include all three major record labels — Universal Music Group, Sony Music Entertainment and Warner Music Group — as well as a wide range of music trade organizations representing record labels, publishers and creators from the U.S., Canada, Australia, France, Germany, Spain, Austria, Mexico, the U.K., Ireland, Sweden and Brazil.
“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted,” reads the single-sentence statement posted at aitrainingstatement.org.
Within several hours of going live on Tuesday (Oct. 22), the statement had been signed by more than 11,500 people from across the creative arts, including actors Kevin Bacon, Sean Astin and Rosario Dawson; authors James Patterson, Ian Rankin, Ann Patchett and Kate Mosse; and music artists Billy Bragg, Max Richter and Norwegian singer-songwriter Aurora.
The global campaign was conceived and organized by Ed Newton-Rex, a British composer now based in the U.S., who formerly held several senior executive roles at AI technology and music companies.
In 2010, Newton-Rex founded Jukedeck, a U.K.-based AI music generation company that provided music for video, TV, radio, podcasts and games. It was acquired by TikTok parent company ByteDance in 2019.
Following the acquisition, Newton-Rex, who is also a choral composer, went on to run ByteDance’s European AI Lab before becoming head of audio at tech firm Stability AI. He quit that role last year in protest of the company’s position that using copyrighted work for training without permission from rights holders is acceptable on “fair use” grounds.
Newton-Rex tells Billboard that several trade groups are supporting his campaign and helped gather signatories but have not provided funding for the initiative.
The statement comes amid increasing concern from creators and rights holders over how their works are being exploited by AI developers for generative training purposes — and how to rein those tech companies in.
Earlier this year, the three major record companies filed lawsuits against AI music firms Suno and Udio alleging the widespread infringement of copyrighted sound recordings “at an almost unimaginable scale.”
In the U.K., the government is soon to launch a consultation on how to regulate AI technology and is understood to be exploring a scheme that would allow AI companies to legally scrape copyright-protected content from artists and rights holders unless they “opt out.”
Creator groups say that any “opt out” solution would be highly damaging to the music business; they would prefer an “opt in” scheme that grants rights holders the ability to approve the use of their works by AI companies.
Tech giants Google and Microsoft are meanwhile calling for the British government to soften the country’s copyright laws for AI firms and introduce an exception for text and data mining of copyrighted works, including music, for commercial purposes. Such a proposal was raised by the previous Conservative government in 2022 but was abandoned a year later following strong criticism from musicians and creators.
“Copyright serves to safeguard the value of human creativity, while also driving value in the wider music and creative industries,” said Sophie Jones, chief strategy officer at U.K. labels trade body BPI, one of the organizations supporting Newton-Rex, in a statement. “If the U.K. is to remain a global creative powerhouse in an increasingly competitive world,” she continued, “the government must ensure that it is respected and enforced.”
Those views were echoed by the Association of Independent Music (AIM), which has also signed the statement.
“To achieve the benefits of AI for creativity, we urge policymakers not to lose sight of the need for strong copyright protections,” said AIM interim CEO Gee Davy in a statement on Tuesday (Oct. 22). She added that it was “vital” policymakers protect artists and rights holders “to ensure a healthy future for those who create, invest in and release music across genres and all communities, regions and nations of the U.K.”
Tuesday’s statement is just the latest salvo in the battle between generative AI companies and rights holders. In May, Sony Music released a statement warning more than 700 AI companies not to scrape the company’s copyrighted data, while Warner Music released a similar statement in July. That same month in the U.S. Senate, a bill dubbed the No FAKES Act, which aims to protect creators from AI deepfakes, was introduced by a bipartisan group of senators.
Grammy-winning producer Timbaland has taken on a new role as a strategic advisor to Suno, an AI music company that can generate full songs at the click of a button.
News of the deal comes four months after the three major music companies collectively sued Suno (and competitor Udio) for alleged infringement of their copyrighted sound recordings “at an almost unimaginable scale.”
According to a press release from Suno, Timbaland has been a “top user” of the platform for months, and this announcement formalizes his involvement with Suno. The partnership will be kicked off with Timbaland previewing his latest single “Love Again” exclusively on Suno’s platform.
Then, Suno users will be able to participate in a remix contest, which will include feedback and judging from Timbaland himself and over $100,000 in prizes for winning remixes. Timbaland will also release the top two remixes of “Love Again” on streaming services, including Spotify, Apple Music and more.
Additionally, as part of being a strategic advisor to Suno, Timbaland will assume an “active” role in the “day-to-day product development and strategic creative direction” of new generative AI tools, says the company in a press release.
Suno is one of the most advanced generative AI music companies on the market today. Using simple text prompts, users can generate voice, lyrics and instrumentals in seconds. On May 21, Suno announced that it had raised $125 million across multiple funding rounds, including investments from Lightspeed Venture Partners, Nat Friedman and Daniel Gross, Matrix and Founder Collective. Suno also said it had been working closely with a team of advisors, including 3LAU, Aaron Levie, Alexandr Wang, Amjad Masad, Andrej Karpathy, Aravind Srinivas, Brendan Iribe, Flosstradamus, Fred Ehrsam, Guillermo Rauch and Shane Mac.
Though many have marveled at its uncanny music-making capabilities, the music business establishment has also feared that Suno was trained on copyrighted material without consent. (At the time, Suno declined to say what materials were in its training data or whether they included copyrighted music.)
Then, Billboard broke the news on June 20 that the major labels were weighing the idea of a lawsuit against Suno and Udio, alleging widespread copyright infringement of their sound recordings for the purposes of AI training. After the lawsuit was officially filed four days later, Suno and Udio hired top law firm Latham & Watkins and filed lengthy responses firing back at the labels. Suno noted it was “no secret” that the company had ingested “essentially all music files of reasonable quality that are accessible on the open Internet” and argued that using these files was “fair use.”
“When I heard what Suno was doing, I was immediately curious,” says Timbaland of the partnership. “After witnessing the potential, I knew I had to be a part of it. By combining forces, we have a unique opportunity to make A.I. work for the artist community and not the other way around. We’re seizing that opportunity, and we’re going to open up the floodgates for generations of artists to flourish on this new frontier. I’m excited and grateful to Suno for this opportunity.”
“It’s an honor to work with a legend like Timbaland,” says Mikey Shulman, CEO of Suno. “At Suno, we’re really excited about exploring new ways for fans to engage with their favorite artists. With Timbaland’s guidance, we’re helping musicians create music at the speed of their ideas—whether they’re just starting out or already selling out stadiums. We couldn’t be more excited for what’s ahead!”
The newest upsurge in artificial intelligence technology is streamlining the tedious tasks that run beneath the glamor of the industry, from simplifying marketing strategies to easing direct fan engagement to handling financial intricacies. And as this ecosystem matures, companies are discovering unprecedented methods to not only navigate but thrive within these new paradigms.
In our previous guest column, we explored how the wave of music tech startups is empowering musicians, artists and the creative process. Now, we shift our focus to the technologies revolutionizing the business side of the industry, including artist services, ticketing, fan engagement and more.
Music marketing has continued to evolve and become increasingly data-driven. A natural next step after creation and distribution, marketing involves creating assets for a campaign to effectively engage with the right audience. Traditionally, this has been a resource-intensive task, but now, AI-driven startups are providing efficiencies by automating much of this process.
Startups like Symphony and un:hurd are now providing automated campaign management services, handling everything from social media ads to DSP and playlist pitching from a single automated hub. Some of these platforms even incorporate financial management tools into their offerings.
“Having financial management tools integrated into one platform allows for better revenue management and planning,” says Rameen Satar, founder/CEO of the financial management platform BANDS. “Overall, a unified platform simplifies the complexities of managing a music career, empowering musicians to focus more on their creative work and succeed in the industry.”
One hot topic as of late has been superfan monetization, with multiple startups creating platforms for artists to engage with and monetize their fan bases directly. From fan-designed merchandise on Softside to artist-to-fan streaming platform Vault.fm, which recently partnered with James Blake, these platforms provide personalized fan experiences including exclusive content, NFTs, merchandise, early access to tickets and bespoke offerings.
“The future of fan engagement will be community-driven. No two fan communities are alike, so engagement will be bespoke to each artist,” says Andy Apple, co-founder/CEO of superfan platform Mellomanic. “Artists will each have their own unique culture, but share one commonality: Every community will align, organize and innovate to support the goals of the artist.”
Managing metadata and accounting for royalties through the global web of streaming services is another area seeing innovation. With nearly 220 million tracks now registered at DSPs, according to content ID company Audible Magic, startups are stepping in to offer solutions across the music distribution and monetization chain. New tools are being developed to organize and search catalogs, manage track credits and splits, track income, find unclaimed royalties, and clean up metadata errors.
“While we have well-publicized challenges still around artist remuneration, there are innovation opportunities across the value chain, driving growth through improved operations and new models,” says Gareth Deakin of Sonorous Global Consulting, a London-based consultancy that works with labels and music creators to best use emerging technologies.
Another issue that some AI companies have stepped in to help solve is preventing fraud — a significant concern stemming from the ease of music distribution and the sheer volume of new music being released every day. Startups are helping labels and digital service providers address this problem with anti-piracy, content detection and audio fingerprinting technology. Beatdapp, for instance, which developed groundbreaking AI technology to detect fake streams, has partnered with Universal Music Group, SoundExchange and Napster. Elsewhere, MatchTune has patented an algorithm that detects AI-generated and manipulated audio, and a few others are developing tech to ensure the ethical use of copyrighted material by connecting rights holders and AI developers for fair compensation. Music recognition technology (MRT), which also relies on audio fingerprinting, is becoming a prominent way to identify, track and monetize music plays across various platforms, including physical venues and other commercial spaces.
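For readers unfamiliar with the term, audio fingerprinting can be pictured with a toy “constellation” sketch like the one below, written in Python with numpy and scipy. It is a minimal illustration under assumed parameters (window size, peak threshold, target zone), not how Audible Magic, Beatdapp, MatchTune or any other company named here actually implements its system.

```python
# Toy "constellation" fingerprint: hash pairs of spectrogram peaks so that two
# copies of the same recording share many hashes even after compression or noise.
# All parameters below are illustrative assumptions, not any vendor's settings.
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import spectrogram


def fingerprint(samples: np.ndarray, sample_rate: int = 22050) -> set:
    """Return a set of (freq_a, freq_b, time_delta) landmarks for one audio track."""
    # Time/frequency energy map of the audio.
    _, _, spec = spectrogram(samples, fs=sample_rate, nperseg=1024)
    log_spec = np.log1p(spec)

    # Keep only prominent local peaks; these tend to survive re-encoding.
    is_peak = (log_spec == maximum_filter(log_spec, size=20)) & (log_spec > np.median(log_spec))
    peak_freq, peak_time = np.nonzero(is_peak)
    order = np.argsort(peak_time)
    peak_freq, peak_time = peak_freq[order], peak_time[order]

    # Pair each peak with a few later peaks (its "target zone") to form landmarks.
    landmarks = set()
    for i in range(len(peak_time)):
        for j in range(i + 1, min(i + 6, len(peak_time))):
            dt = int(peak_time[j] - peak_time[i])
            if 0 < dt <= 100:
                landmarks.add((int(peak_freq[i]), int(peak_freq[j]), dt))
    return landmarks


def similarity(a: set, b: set) -> float:
    """Fraction of shared landmarks; a high value suggests the same recording."""
    return len(a & b) / max(1, min(len(a), len(b)))
```

Matching then reduces to counting how many landmarks a query clip shares with a catalog track; commercial systems add time-offset voting, error correction and far larger indexes.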
In the live music industry, there has been minimal innovation in ticketing, especially at the club level. That’s starting to turn around, however, as new technologies are emerging to automate the tracking of ticket sales and counts, thereby helping agents and promoters reduce manual workloads.
RealCount is one such startup that helps artists, agencies and promoters make sense of ticketing data. “We see RealCount as a second brain for promoters, agents and venues, automating the tracking of ticket counts and sales data from any point of sale,” says Diane Gremore, the company’s founder/CEO. Other exciting developments are taking place in how live events are experienced virtually, with platforms like Condense delivering immersive 3D content in real time.
Drew Thurlow is the founder of Opening Ceremony Media, where he advises music and music tech companies. Previously, he was senior vp of A&R at Sony Music and director of artist partnerships & industry relations at Pandora. His first book, about music and AI, will be released by Routledge in early 2026.
Rufy Anam Ghazi is a seasoned music business professional with over eight years of experience in product development, data analysis, research, business strategy, and partnerships. Known for her data-driven decision-making and innovative approach, she has successfully led product development, market analysis, and strategic growth initiatives, fostering strong industry relationships.
The ASCAP Lab, ASCAP’s innovation program, has announced this year’s cohort for its AI and the Business of Music Challenge. Featuring CRESQA, Music Tomorrow, RoEx, SoundSafe.ai and Wavelets AI, these start-ups will take part in a 12-week course, in partnership with NYC Media Lab, led by the NYU Tandon School of Engineering, and receive mentorship and small grants to develop their ideas.
As part of the initiative, the start-ups will receive hands-on support from the ASCAP Lab, as well as ASCAP’s network of writer and publisher members, to help them optimize their products for the music creator community.
While last year’s cohort of companies focused on AI for music creation and experience, the 2024 AI and the Business of Music Challenge is much more focused on commercial solutions that can help the music industry better manage data and improve workflows.
ASCAP Chief Strategy and Digital Officer Nick Lehman says of the 2024 cohort: “ASCAP’s creator-first, future-forward commitment makes it imperative for us to embrace technology while simultaneously protecting the rights of creators. The dialogue, understanding and relationships that the ASCAP Lab Challenge creates with the music startup community enable us to drive progress for the industry and deliver on this commitment.”
Meet the ASCAP Lab Challenge teams for 2024 below:
CRESQA: An AI social media content assistant designed for songwriters and musicians that automates the process of social media strategy development and helps generate fully personalized post ideas and schedules for TikTok, Instagram, YouTube Shorts, Facebook and more.
Music Tomorrow: Analytics tools that monitor and boost artists’ algorithmic performance on streaming platforms, using AI for advanced audience insights and automation that improve an artist’s content discoverability, listener engagement and team efficiency.
RoEx: AI-driven tools for multitrack mixing, mastering, audio cleanup and quality control, designed to streamline and enhance the final steps of the creative process by delivering a professional and balanced mix with ease.
SoundSafe.ai: Robust, state-of-the-art audio watermarking using AI to enhance security, reporting and the detection of real-time piracy and/or audio deepfakes.
Wavelets AI: Tools for artists, labels, copyright holders, content distributors and DSPs that help reduce IP infringement by detecting AI vocals in music.
California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday.
The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said.
Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal “can have a chilling effect on the industry.”
The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.
The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers.
The bill’s author, Democratic state Sen. Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet.”
“The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener said in a statement Sunday afternoon.
Wiener said the debate around the bill has dramatically advanced the issue of AI safety, and that he would continue pressing that point.
The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.
Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected a degree of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.
The bill targeted systems that require a high level of computing power and more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.
“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”
The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.
A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have required AI developers to follow requirements similar to those commitments, said the measure’s supporters.
But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.
Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.
Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.
The governor said earlier this summer he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are located in the state.
He has promoted California as an early adopter; the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.
Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.
But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.
“They are going to potentially either copy it or do something similar next legislative session,” Rice said. “So it’s not going away.”
On Sept. 4, the public learned of the first-ever U.S. criminal case addressing streaming fraud. In the indictment, federal prosecutors claim that a North Carolina-based musician named Michael “Mike” Smith stole $10 million from streaming services by using bots to artificially inflate the streaming numbers for hundreds of thousands of mostly AI-generated songs. A day later, Billboard reported a link between Smith and the popular generative AI music company Boomy: Boomy’s CEO Alex Mitchell and Smith were listed as co-writers on hundreds of tracks.
(The AI company and its CEO that supplied songs to Smith were not charged with any crime and were left unnamed in the indictment. Mitchell replied to Billboard’s request for comment, saying, “We were shocked by the details in the recently filed indictment of Michael Smith, which we are reviewing. Michael Smith consistently represented himself as legitimate.”)
This case marks the end of generative AI music’s honeymoon phase (or “hype” phase) with the music industry establishment. Though there have always been naysayers about AI in the music business, the industry’s top leaders have been largely optimistic about it, provided AI tools were used ethically and responsibly. “If we strike the right balance, I believe AI will amplify human imagination and enrich musical creativity in extraordinary new ways,” said Lucian Grainge, Universal Music Group’s chairman/CEO, in a statement about UMG’s partnership with YouTube for its AI Music Incubator. “You have to embrace technology [like AI], because it’s not like you can put technology in a bottle,” WMG CEO Robert Kyncl said during an onstage interview at the Code Conference last September.
Each major music label group has established its own partnerships to get in on the AI gold rush since late 2022. UMG partnered with YouTube for an AI incubator program and with SoundLabs for “responsible” AI plug-ins. Sony Music started collaborating with Vermillio on an AI remix project around David Gilmour and The Orb’s latest album. Warner Music Group’s ADA struck a deal with Boomy, which was previously distributing its tracks with Downtown, and invested in dynamic music company Lifescore.
Artists and producers jumped in, too — from Lauv’s collaboration with Hooky to create an AI-assisted Korean-language translation of his newest single to 3LAU’s investment in Suno. Songwriters reportedly used AI voices on pitch records. Artists like Drake and Timbaland used unauthorized AI voices to resurrect stars like Tupac Shakur and Notorious B.I.G. in songs they posted to social media. Metro Boomin sampled an AI song from Udio to create his viral “BBL Drizzy” remix. (Drake later sampled “BBL Drizzy” himself in his feature on the song “U My Everything” by Sexyy Red.) The estate of “La Vie En Rose” singer Edith Piaf, in partnership with WMG, developed an animated documentary of her life, using AI voices and images. The list goes on.
While these industry leaders haven’t spoken publicly about the overall state of AI music in a few months, I can’t imagine their tone is now as sunny as it once was, given the events of the summer. It all started in May, when Sony Music released a statement warning more than 700 AI companies not to scrape the label group’s copyrighted data. Then, in June, Billboard broke the news that the majors were filing a sweeping copyright infringement lawsuit against Suno and Udio. In July, WMG issued a warning to AI companies similar to Sony’s. In August, Billboard reported that AI music adoption had been much slower than anticipated, the NO FAKES Act was introduced in the Senate, and Donald Trump posted a deepfaked Taylor Swift endorsement of his presidential run on Truth Social — an event that Swift herself referenced as a driving factor in her social media post endorsing Kamala Harris for president.
And finally, the AI music streaming fraud case dropped. It proved what many had feared: AI music flooding onto streaming services is diverting significant sums of royalties away from human artists, while also making streaming fraud harder to detect. I imagine Grainge is particularly interested in this case, given that its allegations support his recent crusade to change the way streaming services pay out royalties to benefit “professional artists” over hobbyists, white-noise makers and AI content generators.
When I posted my follow-up reporting on LinkedIn, Declan McGlynn, director of communications for Voice-Swap, an “ethical” AI voice company, summed up people’s feelings well in his comment: “Can yall stop stealing shit for like, five seconds[?] Makes it so much harder for the rest of us.”
One AI music executive told me that the majors have said that they would use a “carrot and stick” approach to this growing field, providing opportunities to the good guys and meting out punishment for the bad guys. Some of those carrots were handed out early while the hype was still fresh around AI because music companies wanted to appear innovative — and because they were desperate to prove to shareholders and artists that they learned from the mistakes of Napster, iTunes, early YouTube and TikTok. Now that they’ve made their point and the initial shock of these models has worn off, the majors have started using those sticks.
This summer, then, has represented a serious vibe shift, to borrow New York magazine’s memeable term. All this recent bad press for generative AI music, including the reports about slow adoption, seems destined to result in far fewer new partnerships announced between generative AI music companies and the music business establishment, at least for the time being. Investment could be harder to come by, too. Some players who benefitted from early hype but never amassed an audience or formed a strong business will start to fall.
This doesn’t mean that generative AI music-related companies won’t find their place in the industry eventually — some certainly will. This is just a common phase in the life cycle of new tech. Investors will probably increasingly turn their attention to other AI music companies, those largely not of the generative variety, that promise to solve the problems created by generative AI music. Metadata management and attribution, fingerprinting, AI music detection, music discovery — it’s a lot less sexy than a consumer-facing product making songs at the click of a button, but it’s a lot safer, and is solving real problems in practical ways.
There’s still time to continue to set the guardrails for generative AI music before it is adopted en masse. The music business has already started working toward protecting artists’ names, images, likenesses and voices and has fought back against unauthorized AI training on their copyrights. Now it’s time for the streaming services to join in and finally set some rules for how AI-generated music is treated on their platforms.
If you have any tips about the AI music streaming fraud case, Billboard is continuing to report on it. Please reach out to krobinson@billboard.com.
You can’t say no one’s getting rich from streaming. In an indictment unsealed in early September, federal prosecutors charged musician Michael Smith with fraud and conspiracy in a scheme in which he used AI-generated songs streamed by bots to rake in $10 million in royalties. He allegedly received royalties for hundreds of thousands of songs, at least hundreds of which listed the CEO of the AI company Boomy as a co-writer; Boomy had received investment from Warner Music Group. (The CEO, Alex Mitchell, has not been charged with any crime.)
This is the first criminal case for streaming fraud in the U.S., and its size may make it an outlier. But the frightening ease of creating so many AI songs and using bots to generate royalties with them shows how vulnerable the streaming ecosystem really is. This isn’t news to some executives, but it should come as a wake-up call to the industry as a whole. And it shows how the subscription streaming business model with pro-rata royalty distribution that now powers the recorded music industry is broken — not beyond repair, but certainly to the point where serious changes need to be made.
One great thing about streaming music platforms, like the internet in general, is how open they are — anyone can upload music, just like anyone can make a TikTok video or write a blog. But that also means that these platforms are vulnerable to fraud, manipulation and undesirable content that erodes the value of the overall experience. (I don’t mean things I don’t like — I mean spam and attempts to manipulate people.) And while the pluses and minuses of this openness are impossible to calculate, there’s a sense in the industry and among creators that this has gradually become less of a feature and more of a bug.
At this point, more than 100,000 new tracks are uploaded to streaming services daily. And while some of this reflects an inspiring explosion of amateur creativity, some of it is, sometimes literally, noise (not the artistic kind). Millions of those tracks are never heard, so they provide no consumer value — they just clutter up streaming service interfaces — while others are streamed a few times a year. From the point of view of some rightsholders, part of the solution may lie in a system of “artist-centric” royalties that privileges more popular artists and tracks. Even if this can be done fairly, though, this only addresses the financial issue — it does nothing for the user experience.
For users, finding the song they want can be like looking for “Silver Threads and Golden Needles” in a fast-expanding haystack. A search for that song on Apple Music brings up five listings for the same Linda Ronstadt recording, several listings of what seems to be another Ronstadt recording, and multiple versions of a few other performances. In this case, they all seem to be professional recordings, but how many of the listings are for the same one? It’s far from obvious.
From the perspective of major labels and most indies, the problems with streaming are all about making sure consumers can filter “professional music” from tracks uploaded by amateur creators — bar bands and hobbyists. But that prioritizes sellers over consumers. The truth is that the streaming business is broken in a number of ways. The big streaming services are very effective at steering users to big new releases and mainstream pop and hip-hop, which is one reason why major labels like them so much. But they don’t do a great job of serving consumers who are not that interested in new mainstream music or old favorites. And rightsholders aren’t exactly pushing for change here. From their perspective, under the current pro-rata royalty system, it makes economic sense to focus on the mostly young users who spend hours a day streaming music. Those who listen less, who tend to be older, are literally worth less.
It shows. If you’re interested in cool new rock bands — and a substantial number of people still seem to be — the streaming experience just isn’t as good. Algorithmic recommendations aren’t great. Less popular genres aren’t served well, either. If you search for John Coltrane — hardly an obscure artist — Spotify offers icons for John Coltrane, John Coltrane & Johnny Hartman, the John Coltrane Quartet, the John Coltrane Quintet, the John Coltrane Trio and two for the John Coltrane Sextet, plus some others. It’s hard to know what this means from an accounting perspective — one entry for the Sextet has 928 monthly listeners and the other has none. If you want to listen to John Coltrane, though, it’s not a great experience.
What does this have to do with streaming fraud? Not much — but also everything. If the goal of streaming services is to offer as much music as possible, they’re kicking ass. But most consumers would prefer an experience that’s easier to navigate. This ought to mean less music, with a limit on what can be uploaded, which some services already have; the sheer amount of music Smith had online ought to have suggested a problem, and it seems to have done so after some time. It should mean rethinking the pro-rata royalty system to make everyone’s listening habits generate money for their favorite artists. And it needs to mean spending some money to make streaming services look more like a record store and less like a swap-meet table.
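To put rough numbers on why light listeners are “literally worth less” under pro-rata accounting, here is a minimal sketch in Python. The $10 subscription fees and stream counts are invented for illustration, not actual service rates, and the “user-centric” function is one version of the alternative described above, in which each subscriber’s fee is split only among the artists that subscriber actually played.

```python
# Illustrative arithmetic only: the subscription fees and stream counts are
# invented for this example and are not actual streaming-service figures.

def pro_rata(pool: float, streams_by_artist: dict) -> dict:
    """Pro-rata: one big pool split by each artist's share of *all* streams."""
    total = sum(streams_by_artist.values())
    return {artist: pool * n / total for artist, n in streams_by_artist.items()}


def user_centric(fee: float, users: list) -> dict:
    """User-centric: each subscriber's fee is split only among artists *they* played."""
    payouts = {}
    for plays in users:  # one dict of {artist: stream_count} per subscriber
        total = sum(plays.values())
        for artist, n in plays.items():
            payouts[artist] = payouts.get(artist, 0.0) + fee * n / total
    return payouts


# Two $10 subscribers: a heavy listener who streams Artist A 1,000 times a month,
# and a light listener who streams Artist B 10 times.
users = [{"Artist A": 1000}, {"Artist B": 10}]

print(pro_rata(20.0, {"Artist A": 1000, "Artist B": 10}))
# {'Artist A': 19.80..., 'Artist B': 0.19...}  -> the light listener's artist gets pennies
print(user_centric(10.0, users))
# {'Artist A': 10.0, 'Artist B': 10.0}         -> each fee follows its own subscriber
```

In the pro-rata case, the heavy listener’s habits decide where nearly the whole pool goes; in the user-centric case, the light listener’s money reaches the artist they actually played.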
These ideas may not be popular — streaming services don’t want the burden or expense of curating what they offer, and most of the labels so eager to fight fraud also fear the loss of the pro-rata system that disproportionately benefits their biggest artists. (In this industry, one illegitimate play for one song is fraud but a system that pays unpopular artists less is a business model.) But the industry needs to think about what consumers want — easy ways to find the song they want, music discovery that works in different genres, and a royalty system that benefits the artists they listen to. Shouldn’t they get it?