Grammy-winning producer Timbaland has taken on a new role as a strategic advisor to Suno, an AI music company that can generate full songs at the click of a button.

News of the deal comes four months after the three major music companies collectively sued Suno (and competitor Udio) for alleged infringement of their copyrighted sound recordings “at an almost unimaginable scale.”

According to a press release from Suno, Timbaland has been a “top user” of the platform for months, and this announcement formalizes his involvement with Suno. The partnership will be kicked off with Timbaland previewing his latest single “Love Again” exclusively on Suno’s platform.

Then, Suno users will be able to participate in a remix contest, which will include feedback and judging from Timbaland himself and over $100,000 in prizes for winning remixes. Timbaland will also release the top two remixes of “Love Again” on streaming services, including Spotify, Apple Music and more.

Additionally, as part of being a strategic advisor to Suno, Timbaland will assume an “active” role in the “day-to-day product development and strategic creative direction” of new generative AI tools, says the company in a press release.

Suno is one of the most advanced generative AI music companies on the market today. Using simple text prompts, users can generate voice, lyrics and instrumentals in seconds. On May 21, Suno announced that it had raised $125 million across multiple funding rounds, including investments from Lightspeed Venture Partners, Nat Friedman and Daniel Gross, Matrix and Founder Collective. Suno also said it had been working closely with a team of advisors, including 3LAU, Aaron Levie, Alexandr Wang, Amjad Masad, Andrej Karpathy, Aravind Srinivas, Brendan Iribe, Flosstradamus, Fred Ehrsam, Guillermo Rauch and Shane Mac.

Though many have marveled at its uncanny music-making capabilities, the music business establishment has also feared that Suno was trained on copyrighted material without consent. (At the time, Suno declined to state what materials were in its training data or whether it included copyrighted music.)

Then, Billboard broke the news on June 20 that the major labels were weighing the idea of a lawsuit against Suno and Udio, alleging widespread copyright infringement of their sound recordings for the purposes of AI training. After the lawsuit was officially filed four days later, Suno and Udio hired top law firm Latham & Watkins and filed lengthy responses firing back at the labels. Suno noted it was “no secret” that the company had ingested “essentially all music files of reasonable quality that are accessible on the open Internet” and argued that doing so was “fair use.”

“When I heard what Suno was doing, I was immediately curious,” says Timbaland of the partnership. “After witnessing the potential, I knew I had to be a part of it. By combining forces, we have a unique opportunity to make A.I. work for the artist community and not the other way around. We’re seizing that opportunity, and we’re going to open up the floodgates for generations of artists to flourish on this new frontier. I’m excited and grateful to Suno for this opportunity.”

“It’s an honor to work with a legend like Timbaland,” says Mikey Shulman, CEO of Suno. “At Suno, we’re really excited about exploring new ways for fans to engage with their favorite artists. With Timbaland’s guidance, we’re helping musicians create music at the speed of their ideas—whether they’re just starting out or already selling out stadiums. We couldn’t be more excited for what’s ahead!”

The newest upsurge in artificial intelligence technology is streamlining the tedious tasks that run beneath the glamor of the industry, from simplifying marketing strategies to easing direct fan engagement to handling financial intricacies. And as this ecosystem matures, companies are discovering unprecedented methods to not only navigate but thrive within these new paradigms.

In our previous guest column, we explored how the wave of music tech startups is empowering musicians, artists and the creative process. Now, we shift our focus to the technologies revolutionizing the business side of the industry, including artist services, ticketing, fan engagement and more.

Music marketing has continued to evolve and become increasingly data-driven. A natural next step after creation and distribution, marketing involves creating assets for a campaign to effectively engage with the right audience. Traditionally, this has been a resource-intensive task, but now, AI-driven startups are providing efficiencies by automating much of this process.  

Startups like Symphony and un:hurd are now providing automated campaign management services, handling everything from social media ads to DSP and playlist pitching from a single automated hub. Some of these platforms even incorporate financial management tools into their offerings.  

“Having financial management tools integrated into one platform allows for better revenue management and planning,” says Rameen Satar, founder/CEO of the financial management platform BANDS. “Overall, a unified platform simplifies the complexities of managing a music career, empowering musicians to focus more on their creative work and succeed in the industry.”

One hot topic as of late has been superfan monetization, with multiple startups creating platforms for artists to engage with and monetize their fan bases directly. From fan-designed merchandise on Softside to artist-to-fan streaming platform Vault.fm, which recently partnered with James Blake, these platforms provide personalized fan experiences including exclusive content, NFTs, merchandise, early access to tickets and bespoke offerings.  

“The future of fan engagement will be community-driven. No two fan communities are alike, so engagement will be bespoke to each artist,” says Andy Apple, co-founder/CEO of superfan platform Mellomanic. “Artists will each have their own unique culture, but share one commonality: Every community will align, organize and innovate to support the goals of the artist.” 

Managing metadata and accounting royalties through the global web of streaming services is another area seeing innovation. With nearly 220 million tracks now registered at DSPs, according to content ID company Audible Magic, startups are stepping in to offer solutions across the music distribution and monetization chain. New tools are being developed to organize and search catalogs, manage track credits and splits, handle incomes, find unclaimed royalties, and clean up metadata errors.  
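As a concrete illustration of what “managing splits” means in practice, here is a minimal sketch of the kind of check such tools automate (a hypothetical data model, not any particular startup’s schema): validating that a track’s credited shares actually add up before royalties flow.

```python
from dataclasses import dataclass

@dataclass
class Split:
    contributor: str
    role: str      # e.g. "writer", "producer"
    share: float   # percentage of this track's royalty pool

def validate_splits(splits: list[Split]) -> list[str]:
    """Return the metadata problems that would block a clean payout."""
    problems = []
    total = sum(s.share for s in splits)
    if abs(total - 100.0) > 0.01:  # shares must cover the whole pie
        problems.append(f"shares sum to {total}%, not 100%")
    names = [s.contributor.strip().lower() for s in splits]
    if len(names) != len(set(names)):  # duplicate credits cause double-pays
        problems.append("duplicate contributor entries")
    if any(not n for n in names):  # unattributed shares go unclaimed
        problems.append("missing contributor name")
    return problems

track = [Split("A. Writer", "writer", 50.0), Split("B. Producer", "producer", 45.0)]
print(validate_splits(track))  # ['shares sum to 95.0%, not 100%']
```

Catching a 95% split before distribution is exactly the kind of tedium these startups take off the hands of labels and administrators, at the scale of millions of tracks.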

“While we have well-publicized challenges still around artist remuneration, there are innovation opportunities across the value chain, driving growth through improved operations and new models,” says Gareth Deakin of Sonorous Global Consulting, a London-based consultancy that works with labels and music creators to best use emerging technologies.

Another issue that some AI companies have stepped in to help solve is preventing fraud — a significant concern stemming from the ease of music distribution and the sheer volume of new music being released every day. Startups are helping labels and digital service providers address this problem with anti-piracy, content detection and audio fingerprinting technology. Beatdapp, for instance, which developed groundbreaking AI technology to detect fake streams, has partnered with Universal Music Group, SoundExchange and Napster. Elsewhere, MatchTune has patented an algorithm that detects AI-generated and manipulated audio, and a few others are developing tech to ensure the ethical use of copyrighted material by connecting rights holders and AI developers for fair compensation. Music recognition technology (MRT), which also utilizes audio fingerprinting technology, is becoming a prominent way to identify, track and monetize music plays across various platforms, including on-ground venues and other commercial spaces.
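To give a flavor of how audio fingerprinting works under the hood, the toy sketch below uses the classic landmark approach: hash pairs of prominent spectrogram peaks, then compare hash sets. This is a conceptual illustration only; production systems from companies like those above are far more robust and entirely proprietary.

```python
import numpy as np
from scipy import signal

def fingerprint(audio: np.ndarray, sr: int = 22050, fan_out: int = 5) -> set[int]:
    """Toy landmark fingerprint: hash pairs of prominent time-frequency peaks.

    Two copies of the same recording share many hashes even after
    re-encoding, so set overlap gives a crude match score.
    """
    _, _, spec = signal.spectrogram(audio, fs=sr, nperseg=1024)
    # Keep the loudest frequency bin in each time slice as a "landmark".
    peaks = [(t, int(np.argmax(spec[:, t]))) for t in range(spec.shape[1])]
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        # Pair each landmark with the next few, encoding the time gap.
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes.add(hash((f1, f2, t2 - t1)))
    return hashes

def match_score(a: set[int], b: set[int]) -> float:
    """Fraction of shared hashes, relative to the smaller fingerprint."""
    return len(a & b) / max(1, min(len(a), len(b)))
```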

In the live music industry, there has been minimal innovation in ticketing, especially at the club level. That’s starting to turn around, however, as new technologies are emerging to automate the tracking of ticket sales and counts, thereby helping agents and promoters reduce manual workloads.  

RealCount is one such startup that helps artists, agencies and promoters make sense of ticketing data. “We see RealCount as a second brain for promoters, agents and venues, automating the tracking of ticket counts and sales data from any point of sale,” says Diane Gremore, the company’s founder/CEO. Other exciting developments are taking place in how live events are experienced virtually, with platforms like Condense delivering immersive 3D content in real time. 

Drew Thurlow is the founder of Opening Ceremony Media where he advises music and music tech companies. Previously he was senior vp of A&R at Sony Music, and director of artists partnerships & industry relations at Pandora. His first book, about music & AI, will be released by Routledge in early 2026.

Rufy Anam Ghazi is a seasoned music business professional with over eight years of experience in product development, data analysis, research, business strategy, and partnerships. Known for her data-driven decision-making and innovative approach, she has successfully led product development, market analysis, and strategic growth initiatives, fostering strong industry relationships.

The ASCAP Lab, ASCAP’s innovation program, has announced this year’s cohort for their AI and the Business of Music Challenge. Featuring CRESQA, Music Tomorrow, RoEx, SoundSafe.ai and Wavelets AI, these start-ups will take part in a 12-week course, in partnership with NYC Media Lab, led by the NYU Tandon School of Engineering, to receive mentorship and small grants to develop their ideas.

As part of the initiative, the start-ups will receive hands-on support from the ASCAP Lab, as well as ASCAP’s network of writer and publisher members, to help them optimize their products for the music creator community.

While last year’s cohort of companies focused on AI for music creation and experience, the 2024 AI and the Business of Music Challenge is much more focused on commercial solutions that can help the music industry better manage data and improve workflows.

ASCAP Chief Strategy and Digital Officer Nick Lehman says of the 2024 cohort: “ASCAP’s creator-first, future-forward commitment makes it imperative for us to embrace technology while simultaneously protecting the rights of creators. The dialogue, understanding and relationships that the ASCAP Lab Challenge creates with the music startup community enable us to drive progress for the industry and deliver on this commitment.”

Meet the ASCAP Lab Challenge teams for 2024 below:

CRESQA: An AI social media content assistant designed for songwriters and musicians that automates the process of social media strategy development and helps generate fully personalized post ideas and schedules for TikTok, Instagram, YouTube Shorts, Facebook and more. 

Music Tomorrow: Analytics tools that monitor and boost artists’ algorithmic performance on streaming platforms, using AI for advanced audience insights and automation that improve an artist’s content discoverability, listener engagement and team efficiency. 

RoEx: AI-driven tools for multitrack mixing, mastering, audio cleanup and quality control, designed to streamline and enhance the final steps of the creative process by delivering a professional and balanced mix with ease. 

SoundSafe.ai: Robust, state-of-the-art audio watermarking using AI to enhance security, reporting and the detection of real-time piracy and/or audio deepfakes. 

Wavelets AI: Tools for artists, labels, copyright holders, content distributors and DSPs that help reduce IP infringement by detecting AI vocals in music. 

The Warner Music Group (WMG) has struck a new multiyear licensing deal with Meta, Billboard has learned. The partnership, which covers both Warner’s recorded music and Warner Chappell publishing operations, will be across all of Meta’s platforms — Facebook, Instagram, Messenger, Horizon and Threads — and will also include WhatsApp for the first time, Billboard […]

California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday.

The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal “can have a chilling effect on the industry.”

The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers.

The bill’s author, Democratic state Sen. Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet.”

“The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener said in a statement Sunday afternoon.

Wiener said the debate around the bill has dramatically advanced the issue of AI safety, and that he would continue pressing that point.

The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take actions this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.

Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some level of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that require a high level of computing power and more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.

“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”

The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.

A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, said the measure’s supporters.

But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.

Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.

Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination from AI tools used to make employment decisions.

The governor said earlier this summer he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are located in the state.

He has promoted California as an early adopter, and the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.

But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They are going to potentially either copy it or do something similar next legislative session,” Rice said. “So it’s not going away.”

On Sept. 4, the public learned of the first-ever U.S. criminal case addressing streaming fraud. In the indictment, federal prosecutors claim that a North Carolina-based musician named Michael “Mike” Smith stole $10 million from streaming services by using bots to artificially inflate the streaming numbers for hundreds of thousands of mostly AI-generated songs. A day later, Billboard reported a link between Smith and the popular generative AI music company Boomy; Boomy’s CEO Alex Mitchell and Smith were listed on hundreds of tracks as co-writers.

(The AI company and its CEO that supplied songs to Smith were not charged with any crime and were left unnamed in the indictment. Mitchell replied to Billboard’s request for comment, saying, “We were shocked by the details in the recently filed indictment of Michael Smith, which we are reviewing. Michael Smith consistently represented himself as legitimate.”)

This case marks the end of generative AI music’s honeymoon phase (or “hype” phase) with the music industry establishment. Though there have always been naysayers about AI in the music business, the industry’s top leaders have been largely optimistic about it, provided AI tools were used ethically and responsibly. “If we strike the right balance, I believe AI will amplify human imagination and enrich musical creativity in extraordinary new ways,” said Lucian Grainge, Universal Music Group’s chairman/CEO, in a statement about UMG’s partnership with YouTube for its AI Music Incubator. “You have to embrace technology [like AI], because it’s not like you can put technology in a bottle,” WMG CEO Robert Kyncl said during an onstage interview at the Code Conference last September.

Since late 2022, each major music label group has established its own partnerships to get in on the AI gold rush. UMG partnered with YouTube for an AI incubator program and with SoundLabs for “responsible” AI plug-ins. Sony Music started collaborating with Vermillio on an AI remix project around David Gilmour and The Orb’s latest album. Warner Music Group’s ADA struck a deal with Boomy, which was previously distributing its tracks with Downtown, and invested in dynamic music company Lifescore.

Artists and producers jumped in, too — from Lauv’s collaboration with Hooky to create an AI-assisted Korean-language translation of his newest single to 3LAU’s investment in Suno. Songwriters reportedly used AI voices on pitch records. Artists like Drake and Timbaland used unauthorized AI voices to resurrect stars like Tupac Shakur and Notorious B.I.G. in songs they posted to social media. Metro Boomin sampled an AI song from Udio to create his viral “BBL Drizzy” remix. (Drake later sampled “BBL Drizzy” himself in his feature on the song “U My Everything” by Sexyy Red.) The estate of “La Vie En Rose” singer Edith Piaf, in partnership with WMG, developed an animated documentary of her life, using AI voices and images. The list goes on. 

While these industry leaders haven’t spoken publicly about the overall state of AI music in a few months, I can’t imagine their tone is now as sunny as it once was, given the events of the summer. It all started in May, when Sony Music released a statement warning over 700 AI companies not to scrape the label group’s copyrighted data. Then, in June, Billboard broke the news that the majors were filing a sweeping copyright infringement lawsuit against Suno and Udio. In July, WMG issued a warning to AI companies similar to Sony’s. In August, Billboard reported that AI music adoption has been much slower than anticipated, the NO FAKES Act was introduced in the Senate, and Donald Trump posted AI-generated images on Truth Social falsely suggesting a Taylor Swift endorsement of his presidential run — an event that Swift herself referenced as a driving factor in her social media post endorsing Kamala Harris for president.

And finally, the AI music streaming fraud case dropped. It proved what many had feared: AI music flooding onto streaming services is diverting significant sums of royalties away from human artists, while also making streaming fraud harder to detect. I imagine Grainge is particularly interested in this case, given that its findings support his recent crusade to change the way streaming services pay out royalties to benefit “professional artists” over hobbyists, white noise makers and AI content generators.

When I posted my follow-up reporting on LinkedIn, Declan McGlynn, director of communications for Voice-Swap, an “ethical” AI voice company, summed up people’s feelings well in his comment: “Can yall stop stealing shit for like, five seconds[?] Makes it so much harder for the rest of us.”

One AI music executive told me that the majors have said that they would use a “carrot and stick” approach to this growing field, providing opportunities to the good guys and meting out punishment for the bad guys. Some of those carrots were handed out early while the hype was still fresh around AI because music companies wanted to appear innovative — and because they were desperate to prove to shareholders and artists that they learned from the mistakes of Napster, iTunes, early YouTube and TikTok. Now that they’ve made their point and the initial shock of these models has worn off, the majors have started using those sticks. 

This summer, then, has represented a serious vibe shift, to borrow New York magazine’s memeable term. All this recent bad press for generative AI music, including the reports about slow adoption, seems destined to result in far fewer new partnerships announced between generative AI music companies and the music business establishment, at least for the time being. Investment could be harder to come by, too. Some players who benefitted from early hype but never amassed an audience or formed a strong business will start to fall. 

This doesn’t mean that generative AI music-related companies won’t find their place in the industry eventually — some certainly will. This is just a common phase in the life cycle of new tech. Investors will probably increasingly turn their attention to other AI music companies, those largely not of the generative variety, that promise to solve the problems created by generative AI music. Metadata management and attribution, fingerprinting, AI music detection, music discovery — it’s a lot less sexy than a consumer-facing product making songs at the click of a button, but it’s a lot safer, and is solving real problems in practical ways. 

There’s still time to continue to set the guardrails for generative AI music before it is adopted en masse. The music business has already started working toward protecting artists’ names, images, likenesses and voices and has fought back against unauthorized AI training on their copyrights. Now it’s time for the streaming services to join in and finally set some rules for how AI-generated music is treated on their platforms.

This story was published as part of Billboard’s new music technology newsletter ‘Machine Learnings.’ Sign up for ‘Machine Learnings,’ and Billboard’s other newsletters, here.

If you have any tips about the AI music streaming fraud case, Billboard is continuing to report on it. Please reach out to krobinson@billboard.com. 

You can’t say no one’s getting rich from streaming. In an indictment unsealed in early September, federal prosecutors charged musician Michael Smith with fraud and conspiracy in a scheme in which he used AI-generated songs streamed by bots to rake in $10 million in royalties. He allegedly received royalties for hundreds of thousands of songs, at least hundreds of which listed the CEO of the AI company Boomy, which had received investment from Warner Music Group, as a co-writer. (The CEO, Alex Mitchell, has not been charged with any crime.)

This is the first criminal case for streaming fraud in the U.S., and its size may make it an outlier. But the frightening ease of creating so many AI songs and using bots to generate royalties with them shows how vulnerable the streaming ecosystem really is. This isn’t news to some executives, but it should come as a wake-up call to the industry as a whole. And it shows how the subscription streaming business model with pro-rata royalty distribution that now powers the recorded music industry is broken — not beyond repair, but certainly to the point where serious changes need to be made.

One great thing about streaming music platforms, like the internet in general, is how open they are — anyone can upload music, just like anyone can make a TikTok video or write a blog. But that also means that these platforms are vulnerable to fraud, manipulation and undesirable content that erodes the value of the overall experience. (I don’t mean things I don’t like — I mean spam and attempts to manipulate people.) And while the pluses and minuses of this openness are impossible to calculate, there’s a sense in the industry and among creators that this has gradually become less of a feature and more of a bug. 

At this point, more than 100,000 new tracks are uploaded to streaming services daily. And while some of this reflects an inspiring explosion of amateur creativity, some of it is, sometimes literally, noise (not the artistic kind). Millions of those tracks are never heard, so they provide no consumer value — they just clutter up streaming service interfaces — while others are streamed a few times a year. From the point of view of some rightsholders, part of the solution may lie in a system of “artist-centric” royalties that privileges more popular artists and tracks. Even if this can be done fairly, though, this only addresses the financial issue — it does nothing for the user experience.

For users, finding the song they want can be like looking for “Silver Threads and Golden Needles” in a fast-expanding haystack. A search for that song on Apple Music brings up five listings for the same Linda Ronstadt recording, several listings of what seems to be another Ronstadt recording, and multiple versions of a few other performances. In this case, they all seem to be professional recordings, but how many of the listings are for the same one? It’s far from obvious. 

From the perspective of major labels and most indies, the problems with streaming are all about making sure consumers can filter “professional music” from tracks uploaded by amateur creators — bar bands and hobbyists. But that prioritizes sellers over consumers. The truth is that the streaming business is broken in a number of ways. The big streaming services are very effective at steering users to big new releases and mainstream pop and hip-hop, which is one reason why major labels like them so much. But they don’t do a great job of serving consumers who are not that interested in new mainstream music or old favorites. And rightsholders aren’t exactly pushing for change here. From their perspective, under the current pro-rata royalty system, it makes economic sense to focus on the mostly young users who spend hours a day streaming music. Those who listen less, who tend to be older, are literally worth less.
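To see why, consider a deliberately simplified example with invented numbers: two subscribers paying $10 each, one heavy listener and one light listener. Under pro-rata accounting, all revenue goes into one pool split by share of total plays; under a user-centric split, each subscriber’s fee follows only their own listening.

```python
# Two subscribers pay $10 each. The heavy user streams Artist A 1,000 times;
# the light user streams Artist B 10 times. (Invented numbers for illustration.)
plays = {"heavy_user": {"Artist A": 1000}, "light_user": {"Artist B": 10}}
pool = 20.0  # total subscription revenue

# Pro-rata: one pool, split by each artist's share of ALL plays.
total_plays = sum(sum(p.values()) for p in plays.values())
pro_rata = {artist: pool * n / total_plays
            for user in plays.values() for artist, n in user.items()}
print(pro_rata)  # Artist A gets ~$19.80; Artist B gets ~$0.20

# User-centric: each subscriber's $10 follows only their own listening.
user_centric: dict[str, float] = {}
for user in plays.values():
    user_total = sum(user.values())
    for artist, n in user.items():
        user_centric[artist] = user_centric.get(artist, 0.0) + 10.0 * n / user_total
print(user_centric)  # Artist A gets $10; Artist B gets $10
```

Under pro-rata, the light listener’s $10 is effectively redistributed to whatever the heavy users play; a user-centric model keeps it with the artist they actually heard. Real services layer per-track rates, minimums and label deals on top, so this is only the skeleton of the argument.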

It shows. If you’re interested in cool new rock bands — and a substantial number of people still seem to be — the streaming experience just isn’t as good. Algorithmic recommendations aren’t great. Less popular genres aren’t served well, either. If you search for John Coltrane — hardly an obscure artist — Spotify offers icons for John Coltrane, John Coltrane & Johnny Hartman, the John Coltrane Quartet, the John Coltrane Quintet, the John Coltrane Trio and two for the John Coltrane Sextet, plus some others. It’s hard to know what this means from an accounting perspective — one entry for the Sextet has 928 monthly listeners and the other has none. If you want to listen to John Coltrane, though, it’s not a great experience.  

What does this have to do with streaming fraud? Not much — but also everything. If the goal of streaming services is to offer as much music as possible, they’re kicking ass. But most consumers would prefer an experience that’s easier to navigate. This ought to mean less music, with a limit on what can be uploaded, which some services already have; the sheer amount of music Smith had online ought to have suggested a problem, and it seems to have done so after some time. It should mean rethinking the pro-rata royalty system to make everyone’s listening habits generate money for their favorite artists. And it needs to mean spending some money to make streaming services look more like a record store and less like a swap-meet table. 

These ideas may not be popular — streaming services don’t want the burden or expense of curating what they offer, and most of the labels so eager to fight fraud also fear the loss of the pro-rata system that disproportionately benefits their biggest artists. (In this industry, one illegitimate play for one song is fraud but a system that pays unpopular artists less is a business model.) But the industry needs to think about what consumers want — easy ways to find the song they want, music discovery that works in different genres, and a royalty system that benefits the artists they listen to. Shouldn’t they get it? 

When Taylor Swift endorsed Kamala Harris for president on Tuesday (Sept. 10), the singer said she was spurred to action by her fears about artificial intelligence — namely, an incident last month in which Donald Trump posted AI-generated images that falsely claimed the superstar’s support.

Swift’s endorsement, which landed on Instagram just minutes after the conclusion of the Harris-Trump debate, called the Democratic nominee a “steady-handed, gifted leader” who “fights for the rights and causes I believe need a warrior to champion them.” But before those reasons, she pointed first to last month’s deepfake debacle.

“It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift wrote. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”

Her fears are well-founded, as Swift has been one of the most prominent victims of AI deepfakes. At the start of 2024, a flood of fake, sexually explicit images of Swift were posted to the social media site X (formerly Twitter). Some were viewed millions of times before they were removed.

At the time, Woodrow Hartzog, a professor at Boston University School of Law who studies privacy and technology law, told Billboard that the Swift deepfakes highlighted a “particularly toxic cocktail” that was bubbling up on social media in 2024: “It’s an existing problem, mixed with these new generative AI tools and a broader backslide in industry commitments to trust and safety.”

Then last month, Trump posted several AI-generated images to social media falsely suggesting Swift had endorsed him. Several showed women in t-shirts with the slogan “Swifties for Trump”; another showed Swift herself, dressed up as Uncle Sam alongside the message, “Taylor wants you to vote for Donald Trump.” Trump himself responded to the false endorsement: “I accept!”

At the time, experts told Billboard that Swift likely had grounds to file a lawsuit over Trump’s phony endorsement by citing her right of publicity — the legal power to control how your name, image and likeness are used by others.

But they also predicted — accurately, it turns out — that the star was better off fighting Trump’s fake endorsement with a legitimate endorsement of her own, broadcast across social media to her millions of die-hard fans: “I think Swift probably has more effective political rather than legal recourse here.”

Whether or not Swift’s endorsement has its intended effect, the next president will have a chance to shape federal policy on AI and deepfakes. Numerous bills aimed at regulating the cutting-edge tech are pending before Congress, including one that would create a federal right of publicity that would allow people like Swift to more easily sue over the unauthorized use of their likeness.

As more music industry entrepreneurs rush into the nascent AI sector, the number of new companies seems to grow by the day. To help artists, creators and others navigate the space, Billboard has compiled a directory of music-centric AI startups.

Given how quickly the sector is growing, this is not an exhaustive list, but it will continue to be updated. The directory also does not make judgment calls about the quality of the models’ outputs and whether their training process is “ethical.” It is an agnostic directory of what is available. Potential users should research any company they are considering.

Although a number of the following companies fit into more than one business sector, for the sake of brevity, no company is listed more than once.

To learn more about what is considered to be an “ethical” AI model, please read our AI FAQs, where key questions are answered by top experts in the field, or visit Fairly Trained, a nonprofit dedicated to certifying “ethical” AI music models.

General Music Creation

AIVA: A music generator that also provides additional editing tools so that users can edit the generated songs and make them their own.

Beatoven: A text-to-music generator that provides royalty-free music for content creators.

Boomy: This music generator creates instrumentals using a number of controllable parameters such as genre and BPM. It also allows users to publish and monetize their generated works.

Create: A stem and sample arrangement tool created by Splice. This model uses AI to generate new arrangements of different Splice samples, which are intended to spark the songwriting process and help users find new samples.

Gennie: A text-to-music generator created by Soundation that produces 12-second-long samples.

Hydra II: A text-to-music generator created by Rightsify that aims to create royalty-free music for commercial spaces. It is trained on Rightsify’s owned catalog of songs.

Infinite Album: A music generator that provides “fully licensed” and “copyright safe” AI music for gamers.

Jen: A text-to-music generator created by Futureverse that was trained on 40 licensed music catalogs and uses blockchain technology to verify and timestamp its creations.

Lemonaide: A “melodic idea” generator. This model creates musical ideas in MIDI form to help songwriters get started on their next idea.

MusicGen: A text-to-music generator created by Meta.

MusicLM: A text-to-music generator created by Google.

Ripple: A music generator created by ByteDance. This product can convert a hummed melody into an instrumental and can expand upon the result.

Song Starter: A music generator created by BandLab that is designed to help young artists start new song ideas.

Soundful: This company has collaborated with Kaskade, Starrah and other artists and producers to create their own AI beat generators, a new play on the “type-beat.”

SoundGen: A text-to-music generator that can also act as a “musical assistant” to help flesh out a creator’s music.

Soundraw: A generator that creates royalty-free beats, some of which have been used by Trippie Redd, Fivio Foreign and French Montana.

Stable Audio: A text-to-music generator created by Stability AI. This model also offers audio-to-audio generation, which enables users to manipulate any uploaded audio sample using text prompts.

Suno: A text-to-music generator. This model can create lyrics, vocals and instrumentals with the click of a button. Suno and another generator, Udio, are currently being sued by the three major music companies for alleged widespread copyright infringement during the training process. Suno and Udio claim the training qualifies as fair use under U.S. copyright law and contend the lawsuits are attempts to stifle independent competition.

Tuney: A music generator. This model is known for soundtracking brand advertisements and offering “adaptive music” to make a generated track better fit any given project.

Udio: A text-to-music generator that can create lyrics, vocals and instrumentals with a keyboard stroke. This model is best known for generating “BBL Drizzy,” a parody song by comic Willonius Hatcher that was then sampled by Metro Boomin and became a viral hit. Udio, like Suno, is defending itself against a copyright infringement lawsuit filed by the three major music companies. Udio and Suno claim their training counts as fair use and accuse the label groups of attempting to stifle independent competition.

Voice Conversion

Covers.AI: A voice filter platform created by Mayk.It. The platform offers the ability to build your own AI voice, as well as try on the voices of characters like SpongeBob, Mario or Ash Ketchum.

Elf.Tech: A Grimes voice filter created by CreateSafe and Grimes. This tool is the first major artist-voice converter, and Grimes debuted it in response to the virality of Ghostwriter977’s “Heart on My Sleeve,” which deepfaked the voices of Drake and The Weeknd.

Hooky: A voice filter platform best known for its official partnership with Lauv, who used Hooky technology to translate his song “Love U Like That” into Korean.

Kits.AI: A voice filter, stem separation and mastering platform. This company can provide DIY voice cloning as well as a suite of other generic types of voices. It is certified by Fairly Trained.

Supertone: A voice filter platform, acquired by HYBE, that allows users to change their voice in real time. It also offers a tool called Clear to remove noise and reverb from vocal stems.

Voice-Swap: A voice filter and stem separation platform. This company offers an exclusive roster of artist voices to choose from, including Imogen Heap, and it hopes to become an “agency” for artists’ voices.

Vocoflex: A voice filter plug-in created by Dreamtonics that offers the ability to change the tone of a singer’s voice in real time.

Stem Separation

Audioshake: A stem separation and lyric transcription tool. This company is best known for its recent participation in Disney’s accelerator program.

LALA.AI: A stem separation and voice conversion tool.

Moises AI: A stem separation, pitch-changer, chord detection and smart metronome tool created by Music AI.

Sounds.Studio: A stem separation tool created by Never Before Heard Sounds.

Stem-Swap: A stem separation tool created by Voice-Swap.

Dynamic Music

Endel: A personalized soundscape generator that enhances activities including sleep and focus. The company also releases collaborations with artists like Grimes, James Blake and 6LACK.

Lifescore: A personalized soundtrack generator that enhances activities like driving, working out and more.

Plus Music.AI: A personalized soundtrack generator for video-game play.

Reactional Music: A personalized soundtrack generator that adapts music to actions taken in video games in real time.

Management

Drop Track: An AI-powered music publicity tool.

Musical AI: An AI-powered rights management tool that enables rights holders to manage their catalog and license their works for generative AI training as desired.

Musiio: An AI music tagging and search tool owned by SoundCloud. This tool creates fingerprints to better track and search songs, and it automates tagging songs by mood, keywords, language, genre and lyrical content.

Triniti: A suite of AI tools for music creation, marketing, management and distribution created by CreateSafe. It is best known for the AI voice application programming interface behind Grimes’ Elf.Tech synthetic voice model.

Other

Hook: An AI music remix app that allows users to create mashups and edits with proper licensing in place.

LANDR: A suite of plug-ins and producer services, many of which are powered by AI, including an AI mastering tool.

Morpho: A timbre transfer tool created by Neutone.

From Ghostwriter’s “fake Drake” song to Metro Boomin’s “BBL Drizzy,” a lot has happened in a very short time when it comes to the evolution of AI’s use in music. And it’s much more prevalent than the headlines suggest. Every day, songwriters are using AI voices to better target pitch records to artists, producers are trying out AI beats and samples, film/TV licensing experts are using AI stem separation to help them clean up old audio, estates and catalog owners are using AI to better market older songs, and superfans are using AI to create next-level fan fiction and UGC about their favorite artists.

To help those just starting out in the brave new world of AI music understand all the buzzwords that come with it, Billboard contacted some of the sector’s leading experts to get answers to top questions.

What are some of the most common ways AI is already being used by songwriters and producers?

TRINITY, music producer: As a producer and songwriter, I use AI and feel inspired by AI tools every day. For example, I love using Splice Create Mode. It allows me to search through the Splice sample catalog while coming up with ideas quickly, and then I export it into my DAW Studio One. It keeps the flow of my sessions going as I create. I heard we’ll soon be able to record vocal ideas into Create Mode, which will be even more intuitive and fun. Also, the Izotope Ozone suite is great. The suite has mastering and mixing assistant AI tools built into its plug-ins. These tools help producers and songwriters mix and master tracks and song ideas.

I’ve also heard other songwriters and producers using AI to get started with song ideas. When you feel blocked, you have AI tools like Jen, Melody Studio and Lemonaide to help you come up with new chord progressions. Also, Akai MPC AI and LALA AI are both great for stem splitting, which allows you to separate [out] any part of the music. For example, if I just want to solo and sample the drums in a record, I can do that now in minutes.

AI is not meant to replace us as producers and songwriters. It’s meant to inspire and push our creativity. It’s all about your perspective and how you use it. The future is now; we should embrace it. Just think about how far we have come from the flip phones to the phones we have now that feel more limitless every day. I believe the foundation and heart of us as producers and songwriters will never get lost. We must master our craft to become the greatest producers and songwriters. AI in music creation is meant to assist and free [up] more mental space while I create. I think of AI as my J.A.R.V.I.S. and I’m Iron Man.

How can a user tell if a generative AI company is considered “ethical” or not?

Michael Pelczynski, chief strategy and impact officer, Voice-Swap: If you’re paying for services from a generative AI company, ask yourself, “Where is my money going?” If you’re an artist, producer or songwriter, this question becomes even more crucial: “Why?” Because as a customer, the impact of your usage directly affects you and your rights as a creator. Not many companies in this space truly lead by example when it comes to ethical practices. Doing so requires effort, time and money. It’s more than just marketing yourself as ethical. To make AI use safer and more accessible for musicians, make sure the platform or company you choose compensates everyone involved, both for the final product and for the training sources.

Two of the most popular [ways to determine whether a company is ethical] are the Fairly Trained certification that highlights companies committed to ethical AI training practices, and the BMAT x Voice-Swap technical certification that sets new standards for the ethical and legal utilization of AI-generated voices.

When a generative AI company says it has “ethically” sourced the data it trained on, what does that usually mean? 

Alex Bestall, founder and CEO, Rightsify and Global Copyright Exchange (GCX): [Ethical datasets] require [an AI company to] license the works and get opt-ins from the rights holders and contributors… Beyond copyright, it is also important for vocalists whose likeness is used in a dataset to have a clear opt-in.

What are some examples of AI that can be useful to music-makers that are not generative?

Jessica Powell, CEO, AudioShake: There are loads of tools powered by AI that are not generative. Loop and sample suggestion are a great way to help producers and artists brainstorm the next steps in a track. Stem separation can open up a recording for synch licensing, immersive mixing or remixing. And metadata tagging can help prepare a song for synch-licensing opportunities, playlisting and other experiences that require an understanding of genre, BPM and other factors.

In the last year, several lawsuits have been filed by artists in various fields against generative AI companies, primarily concerning the training process. What is the controversy about?

Shara Senderoff, co-founder, Futureverse and Raised in Space: The heart of the controversy lies in generative AI companies using copyrighted work to train their models without artists’ permission. Creators argue that this practice infringes on their intellectual property rights, as these AI models can produce content closely resembling their original works. This raises significant legal and ethical questions about creative ownership and the value of human artistry in the digital age. The creator community is incensed [by] seeing AI companies profit from their efforts without proper recognition or compensation.

Are there any tools out there today that can be used to detect generative AI use in music? Why are these tools important to have?

Amadea Choplin, COO, Pex: The more reliable tools available today use automated content recognition (ACR) and music recognition technology (MRT) to identify uses of existing AI-generated music. Pex can recognize new uses of existing AI tracks, detect impersonations of artists via voice identification and help determine when music is likely to be AI-generated. Other companies that can detect AI-generated music include Believe and Deezer; however, we have not tested them ourselves. We are living in the most content-dense period in human history where any person with a smartphone can be a creator in an instant, and AI-powered technology is fueling this growth. Tools that operate at mass scale are critical to correctly identifying creators and ensuring they are properly compensated for their creations.

Romain Simiand, chief product officer, Ircam Amplify: Most AI detection tools provide only one side of the coin. As an example, tools such as aivoicedetector.com are primarily meant to detect deepfakes for speech. IRCAM Amplify focuses primarily on prompt-based tools used widely. Yet, because we know this approach is not bulletproof, we are currently supercharging our product to highlight voice clones and identify per-stem AI-generated content. Another interesting contender is resemble.ai, but while it seems their approach is similar, the methodology described diverges greatly.

Finally, we have pex.com, which focuses on voice identification. I haven’t tested the tool but this approach seems to require the original catalog to be made available, which is a potential problem.

AI recognition tools like the AI Generated Detector released by IRCAM Amplify and the others mentioned above help with the fair use and distribution of AI-generated content.

We think AI can be a creativity booster in the music sector, but it is as important to be able to recognize those tracks that have been generated with AI [automatically] as well as identifying deepfakes — videos and audio that are typically used maliciously or to spread false information.

In the United States, what laws are currently being proposed to protect artists from AI vocal deepfakes?

Morna Willens, chief policy officer, RIAA: Policymakers in the U.S. have been focused on guardrails for artificial intelligence that promote innovation while protecting all of us from unconsented use of our images and voices to create invasive deepfakes and voice clones. Across legislative efforts, First Amendment speech protections are expressly covered and provisions are in place to help remove damaging AI content that would violate these laws.

On the federal level, Reps. María Elvira Salazar (R-FL), Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY) and Rob Wittman (R-VA) introduced the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act to create a national framework that would safeguard Americans from their voice and likeness being used in nonconsensual AI-generated imitations.

Sens. Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) released a discussion draft of a bill called Nurture Originals, Foster Art and Keep Entertainment Safe Act with similar aims of protecting individuals from AI deepfakes and voice clones. While not yet formally introduced, we’re hopeful that the final version will provide strong and comprehensive protections against exploitive AI content.

Most recently, Sens. Blackburn, Maria Cantwell (D-WA) and Martin Heinrich (D-NM) introduced the Content Origin Protection and Integrity From Edited and Deepfaked Media Act, offering federal transparency guidelines for authenticating and detecting AI-generated content while also holding violators accountable for harmful deepfakes.

In the states, existing “right of publicity” laws address some of the harms caused by unconsented deepfakes and voice clones, and policymakers are working to strengthen and update these. The landmark Ensuring Likeness Voice and Image Security Act made Tennessee the first state to update its laws to address the threats posed by unconsented AI deepfakes and voice clones. Many states are similarly considering updates to local laws for the AI era.

RIAA has worked on behalf of the artists, rights holders and the creative community to educate policymakers on the impact of AI — both challenges and opportunities. These efforts are a promising start, and we’ll continue to advocate for artists and the entire music ecosystem as technologies develop and new issues emerge.

What legal consequences could a user face for releasing a song that deepfakes another artist’s voice? Could that user be shielded from liability if the song is clearly meant to be parody?

Joseph Fishman, music law professor, Vanderbilt University: The most important area of law that the user would need to worry about is publicity rights, also known as name/image/likeness laws, or NIL. For now, the scope of publicity rights varies state by state, though Congress is working on enacting an additional federal version whose details are still up for grabs. Several states include voice as a protected aspect of the rights holder’s identity. Some companies in the past have gotten in legal trouble for mimicking a celebrity’s voice, but so far those cases have involved commercial advertisements. Whether one could get in similar trouble simply for using vocal mimicry in a new song, outside of the commercial context, is a different and largely untested question. This year, Tennessee became the first state to expand its publicity rights statute to cover that scenario expressly, and other jurisdictions may soon follow. We still don’t know whether that expansion would survive a First Amendment challenge.

If the song is an obvious parody, the user should be on safer ground. There’s pretty widespread agreement that using someone’s likeness for parody or other forms of criticism is protected speech under the First Amendment. Some state publicity rights statutes even include specific parody exemptions.