California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday.
The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal “can have a chilling effect on the industry.”

The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers.

The bill’s author, Democratic state Sen. Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet.”

“The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener said in a statement Sunday afternoon.

Wiener said the debate around the bill has dramatically advanced the issue of AI safety, and that he would continue pressing that point.

The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.

Proponents of the measure, including Elon Musk and Anthropic, said the proposal would have injected a degree of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that require a high level of computing power and more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.

“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”

The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.

A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those voluntary commitments, the measure’s supporters said.

But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.

Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.

Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination in AI tools used to make employment decisions.

The governor said earlier this summer he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are located in the state.

He has promoted California as an early adopter, noting that the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.

But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They are going to potentially either copy it or do something similar next legislative session,” Rice said. “So it’s not going away.”

On Sept. 4, the public learned of the first-ever U.S. criminal case addressing streaming fraud. In the indictment, federal prosecutors claim that a North Carolina-based musician named Michael “Mike” Smith stole $10 million from streaming services by using bots to artificially inflate the streaming numbers for hundreds of thousands of mostly AI-generated songs. A day later, Billboard reported a link between Smith and the popular generative AI music company Boomy; Boomy’s CEO Alex Mitchell and Smith were listed on hundreds of tracks as co-writers.
(The AI company and its CEO that supplied songs to Smith were not charged with any crime and were left unnamed in the indictment. Mitchell replied to Billboard’s request for comment, saying, “We were shocked by the details in the recently filed indictment of Michael Smith, which we are reviewing. Michael Smith consistently represented himself as legitimate.”) 

This case marks the end of generative AI music’s honeymoon phase (or “hype” phase) with the music industry establishment. Though there have always been naysayers about AI in the music business, the industry’s top leaders have been largely optimistic about it, provided AI tools were used ethically and responsibly. “If we strike the right balance, I believe AI will amplify human imagination and enrich musical creativity in extraordinary new ways,” said Lucian Grainge, Universal Music Group’s chairman/CEO, in a statement about UMG’s partnership with YouTube for its AI Music Incubator. “You have to embrace technology [like AI], because it’s not like you can put technology in a bottle,” WMG CEO Robert Kyncl said during an onstage interview at the Code Conference last September.

Since late 2022, each major music label group has established its own partnerships to get in on the AI gold rush. UMG teamed with YouTube for an AI incubator program and with SoundLabs for “responsible” AI plug-ins. Sony Music started collaborating with Vermillio for an AI remix project around David Gilmour and The Orb’s latest album. Warner Music Group’s ADA struck a deal with Boomy, which was previously distributing its tracks with Downtown, and invested in dynamic music company Lifescore.

Artists and producers jumped in, too — from Lauv’s collaboration with Hooky to create an AI-assisted Korean-language translation of his newest single to 3LAU’s investment in Suno. Songwriters reportedly used AI voices on pitch records. Artists like Drake and Timbaland used unauthorized AI voices to resurrect stars like Tupac Shakur and Notorious B.I.G. in songs they posted to social media. Metro Boomin sampled an AI song from Udio to create his viral “BBL Drizzy” remix. (Drake later sampled “BBL Drizzy” himself in his feature on the song “U My Everything” by Sexyy Red.) The estate of “La Vie En Rose” singer Edith Piaf, in partnership with WMG, developed an animated documentary of her life, using AI voices and images. The list goes on. 

While these industry leaders haven’t spoken publicly about the overall state of AI music in a few months, I can’t imagine their tone is now as sunny as it once was, given the events of the summer. It all started in May, when Sony Music released a statement warning more than 700 AI companies not to scrape the label group’s copyrighted data. Then, in June, Billboard broke the news that the majors were filing a sweeping copyright infringement lawsuit against Suno and Udio. In July, WMG issued a warning to AI companies similar to Sony’s. In August, Billboard reported that AI music adoption has been much slower than anticipated, the NO FAKES Act was introduced in the Senate, and Donald Trump posted a deepfaked Taylor Swift endorsement of his presidential run on Truth Social, an event that Swift herself cited as a driving factor in her social media post endorsing Kamala Harris for president.

And finally, the AI music streaming fraud case dropped. It proved what many had feared: AI music flooding onto streaming services is diverting significant sums of royalties away from human artists, while also making streaming fraud harder to detect. I imagine Grainge is particularly interested in this case, given that its findings support his recent crusade to change the way streaming services pay out royalties to benefit “professional artists” over hobbyists, white noise makers and AI content generators.

When I posted my follow-up reporting on LinkedIn, Declan McGlynn, director of communications for Voice-Swap, an “ethical” AI voice company, summed up people’s feelings well in his comment: “Can yall stop stealing shit for like, five seconds[?] Makes it so much harder for the rest of us.”

One AI music executive told me that the majors have said that they would use a “carrot and stick” approach to this growing field, providing opportunities to the good guys and meting out punishment for the bad guys. Some of those carrots were handed out early while the hype was still fresh around AI because music companies wanted to appear innovative — and because they were desperate to prove to shareholders and artists that they learned from the mistakes of Napster, iTunes, early YouTube and TikTok. Now that they’ve made their point and the initial shock of these models has worn off, the majors have started using those sticks. 

This summer, then, has represented a serious vibe shift, to borrow New York magazine’s memeable term. All this recent bad press for generative AI music, including the reports about slow adoption, seems destined to result in far fewer new partnerships announced between generative AI music companies and the music business establishment, at least for the time being. Investment could be harder to come by, too. Some players who benefitted from early hype but never amassed an audience or formed a strong business will start to fall. 

This doesn’t mean that generative AI music-related companies won’t find their place in the industry eventually — some certainly will. This is just a common phase in the life cycle of new tech. Investors will probably increasingly turn their attention to other AI music companies, those largely not of the generative variety, that promise to solve the problems created by generative AI music. Metadata management and attribution, fingerprinting, AI music detection, music discovery — it’s a lot less sexy than a consumer-facing product making songs at the click of a button, but it’s a lot safer, and is solving real problems in practical ways. 

There’s still time to continue to set the guardrails for generative AI music before it is adopted en masse. The music business has already started working toward protecting artists’ names, images, likenesses and voices and has fought back against unauthorized AI training on their copyrights. Now it’s time for the streaming services to join in and finally set some rules for how AI-generated music is treated on their platforms.

This story was published as part of Billboard’s new music technology newsletter ‘Machine Learnings.’ Sign up for ‘Machine Learnings,’ and Billboard’s other newsletters, here.

If you have any tips about the AI music streaming fraud case, Billboard is continuing to report on it. Please reach out to krobinson@billboard.com. 

You can’t say no one’s getting rich from streaming. In an indictment unsealed in early September, federal prosecutors charged musician Michael Smith with fraud and conspiracy in a scheme in which he used AI-generated songs streamed by bots to rake in $10 million in royalties. He allegedly received royalties for hundreds of thousands of songs, at least hundreds of which listed the CEO of the AI company Boomy, which had received investment from Warner Music Group, as a co-writer. (The CEO, Alex Mitchell, has not been charged with any crime.)
This is the first criminal case for streaming fraud in the U.S., and its size may make it an outlier. But the frightening ease of creating so many AI songs and using bots to generate royalties with them shows how vulnerable the streaming ecosystem really is. This isn’t news to some executives, but it should come as a wake-up call to the industry as a whole. And it shows how the subscription streaming business model with pro-rata royalty distribution that now powers the recorded music industry is broken — not beyond repair, but certainly to the point where serious changes need to be made.

One great thing about streaming music platforms, like the internet in general, is how open they are — anyone can upload music, just like anyone can make a TikTok video or write a blog. But that also means that these platforms are vulnerable to fraud, manipulation and undesirable content that erodes the value of the overall experience. (I don’t mean things I don’t like — I mean spam and attempts to manipulate people.) And while the pluses and minuses of this openness are impossible to calculate, there’s a sense in the industry and among creators that this has gradually become less of a feature and more of a bug. 

At this point, more than 100,000 new tracks are uploaded to streaming services daily. And while some of this reflects an inspiring explosion of amateur creativity, some of it is, sometimes literally, noise (not the artistic kind). Millions of those tracks are never heard, so they provide no consumer value — they just clutter up streaming service interfaces — while others are streamed a few times a year. From the point of view of some rightsholders, part of the solution may lie in a system of “artist-centric” royalties that privileges more popular artists and tracks. Even if this can be done fairly, though, this only addresses the financial issue — it does nothing for the user experience.

For users, finding the song they want can be like looking for “Silver Threads and Golden Needles” in a fast-expanding haystack. A search for that song on Apple Music brings up five listings for the same Linda Ronstadt recording, several listings of what seems to be another Ronstadt recording, and multiple versions of a few other performances. In this case, they all seem to be professional recordings, but how many of the listings are for the same one? It’s far from obvious. 

From the perspective of major labels and most indies, the problems with streaming are all about making sure consumers can filter “professional music” from tracks uploaded by amateur creators — bar bands and hobbyists. But that prioritizes sellers over consumers. The truth is that the streaming business is broken in a number of ways. The big streaming services are very effective at steering users to big new releases and mainstream pop and hip-hop, which is one reason why major labels like them so much. But they don’t do a great job of serving consumers who are not that interested in new mainstream music or old favorites. And rightsholders aren’t exactly pushing for change here. From their perspective, under the current pro-rata royalty system, it makes economic sense to focus on the mostly young users who spend hours a day streaming music. Those who listen less, who tend to be older, are literally worth less.

It shows. If you’re interested in cool new rock bands — and a substantial number of people still seem to be — the streaming experience just isn’t as good. Algorithmic recommendations aren’t great. Less popular genres aren’t served well, either. If you search for John Coltrane — hardly an obscure artist — Spotify offers icons for John Coltrane, John Coltrane & Johnny Hartman, the John Coltrane Quartet, the John Coltrane Quintet, the John Coltrane Trio and two for the John Coltrane Sextet, plus some others. It’s hard to know what this means from an accounting perspective — one entry for the Sextet has 928 monthly listeners and the other has none. If you want to listen to John Coltrane, though, it’s not a great experience.  

What does this have to do with streaming fraud? Not much — but also everything. If the goal of streaming services is to offer as much music as possible, they’re kicking ass. But most consumers would prefer an experience that’s easier to navigate. This ought to mean less music, with a limit on what can be uploaded, which some services already have; the sheer amount of music Smith had online ought to have suggested a problem, and it seems to have done so after some time. It should mean rethinking the pro-rata royalty system to make everyone’s listening habits generate money for their favorite artists. And it needs to mean spending some money to make streaming services look more like a record store and less like a swap-meet table. 
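
To make the royalty debate above concrete, here is a minimal, purely illustrative Python sketch of the two payout models in question: the current pro-rata pool versus a user-centric split in which each subscriber’s fee goes only to the artists that subscriber actually played. All names and numbers are invented for illustration; real payout rates and catalogs are far messier.

```python
# Illustrative only: toy comparison of pro-rata vs. user-centric royalty
# allocation. All figures are hypothetical, not real payout rates.

subscription = 10.0  # each subscriber pays $10/month

# Hypothetical listening data: streams per user, per artist.
users = {
    "heavy_listener": {"pop_star": 950, "indie_band": 50},  # streams all day
    "light_listener": {"indie_band": 10},                   # a few plays/month
}

# Pro-rata: pool all revenue, split it by share of *total* streams.
pool = subscription * len(users)
total_streams = sum(n for plays in users.values() for n in plays.values())
pro_rata: dict[str, float] = {}
for plays in users.values():
    for artist, n in plays.items():
        pro_rata[artist] = pro_rata.get(artist, 0.0) + pool * n / total_streams

# User-centric: each user's $10 is split only among artists *they* played.
user_centric: dict[str, float] = {}
for plays in users.values():
    own_total = sum(plays.values())
    for artist, n in plays.items():
        user_centric[artist] = (
            user_centric.get(artist, 0.0) + subscription * n / own_total
        )

print({a: round(v, 2) for a, v in pro_rata.items()})
# -> {'pop_star': 18.81, 'indie_band': 1.19}
print({a: round(v, 2) for a, v in user_centric.items()})
# -> {'pop_star': 9.5, 'indie_band': 10.5}
```

Under pro-rata, the light listener’s $10 flows almost entirely to an artist they never played; under the user-centric split, it stays with their favorite. That is the arithmetic behind the observation that users who listen less are literally worth less to the artists they love.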

These ideas may not be popular — streaming services don’t want the burden or expense of curating what they offer, and most of the labels so eager to fight fraud also fear the loss of the pro-rata system that disproportionately benefits their biggest artists. (In this industry, one illegitimate play for one song is fraud but a system that pays unpopular artists less is a business model.) But the industry needs to think about what consumers want — easy ways to find the song they want, music discovery that works in different genres, and a royalty system that benefits the artists they listen to. Shouldn’t they get it? 

When Taylor Swift endorsed Kamala Harris for president on Tuesday (Sept. 10), the singer said she was spurred to action by her fears about artificial intelligence — namely, an incident last month in which Donald Trump posted AI-generated images that falsely claimed the superstar’s support.
Swift’s endorsement, which landed on Instagram just minutes after the conclusion of the Harris-Trump debate, called the Democratic nominee a “steady-handed, gifted leader” who “fights for the rights and causes I believe need a warrior to champion them.” But before those reasons, she pointed first to last month’s deepfake debacle.

“It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift wrote. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”

Her fears are well-founded, as Swift has been one of the most prominent victims of AI deepfakes. At the start of 2024, a flood of fake, sexually explicit images of Swift were posted to the social media site X (formerly Twitter). Some were viewed millions of times before they were removed.

At the time, Woodrow Hartzog, a professor at Boston University School of Law who studies privacy and technology law, told Billboard that the Swift deepfakes highlighted a “particularly toxic cocktail” that was bubbling up on social media in 2024: “It’s an existing problem, mixed with these new generative AI tools and a broader backslide in industry commitments to trust and safety.”

Then last month, Trump posted several AI-generated images to social media falsely suggesting Swift had endorsed him. Several showed women in t-shirts with the slogan “Swifties for Trump”; another showed Swift herself, dressed up as Uncle Sam alongside the message, “Taylor wants you to vote for Donald Trump.” Trump himself responded to the false endorsement: “I accept!”

At the time, experts told Billboard that Swift likely had grounds to file a lawsuit over Trump’s phony endorsement by citing her right of publicity — the legal power to control how your name, image and likeness are used by others.

But they also predicted — accurately, it turns out — that the star was better off fighting Trump’s fake endorsement with a legitimate endorsement of her own, broadcast across social media to her millions of die-hard fans: “I think Swift probably has more effective political rather than legal recourse here.”

Whether or not Swift’s endorsement has its intended effect, the next president will have a chance to shape federal policy on AI and deepfakes. Numerous bills aimed at regulating the cutting-edge tech are pending before Congress, including one that would create a federal right of publicity that would allow people like Swift to more easily sue over the unauthorized use of their likeness.

As more music industry entrepreneurs rush into the nascent AI sector, the number of new companies seems to grow by the day. To help artists, creators and others navigate the space, Billboard has compiled a directory of music-centric AI startups.
Given how quickly the sector is growing, this is not an exhaustive list, but it will continue to be updated. The directory also does not make judgment calls about the quality of the models’ outputs and whether their training process is “ethical.” It is an agnostic directory of what is available. Potential users should research any company they are considering.

Although a number of the following companies fit into more than one business sector, for the sake of brevity, no company is listed more than once.

To learn more about what is considered to be an “ethical” AI model, please read our AI FAQs, where key questions are answered by top experts in the field, or visit Fairly Trained, a nonprofit dedicated to certifying “ethical” AI music models.

General Music Creation

AIVA: A music generator that also provides additional editing tools so that users can edit the generated songs and make them their own.

Beatoven: A text-to-music generator that provides royalty-free music for content creators.

Boomy: This music generator creates instrumentals using a number of controllable parameters such as genre and BPM. It also allows users to publish and monetize their generated works.

Create: A stem and sample arrangement tool created by Splice. This model uses AI to generate new arrangements of different Splice samples, which are intended to spark the songwriting process and help users find new samples.

Gennie: A text-to-music generator created by Soundation that produces 12-second-long samples.

Hydra II: A text-to-music generator created by Rightsify that aims to create royalty-free music for commercial spaces. It is trained on Rightsify’s owned catalog of songs.

Infinite Album: A music generator that provides “fully licensed” and “copyright safe” AI music for gamers.

Jen: A text-to-music generator created by Futureverse that was trained on 40 licensed music catalogs and uses blockchain technology to verify and timestamp its creations.

Lemonaide: A “melodic idea” generator. This model creates musical ideas in MIDI form to help songwriters get started on their next idea.

MusicGen: A text-to-music generator created by Meta.

MusicLM: A text-to-music generator created by Google.

Ripple: A music generator created by ByteDance. This product can convert a hummed melody into an instrumental and can expand upon the result.

Song Starter: A music generator created by BandLab that is designed to help young artists start new song ideas.

Soundful: This company has collaborated with Kaskade, Starrah and other artists and producers to create their own AI beat generators, a new play on the “type-beat.”

SoundGen: A text-to-music generator that can also act as a “musical assistant” to help flesh out a creator’s music.

Soundraw: A generator that creates royalty-free beats, some of which have been used by Trippie Redd, Fivio Foreign and French Montana.

Stable Audio: A text-to-music generator created by Stability AI. This model also offers audio-to-audio generation, which enables users to manipulate any uploaded audio sample using text prompts.

Suno: A text-to-music generator. This model can create lyrics, vocals and instrumentals with the click of a button. Suno and another generator, Udio, are currently being sued by the three major music companies for alleged widespread copyright infringement during the training process. Suno and Udio claim the training qualifies as fair use under U.S. copyright law and contend the lawsuits are attempts to stifle independent competition.

Tuney: A music generator. This model is known for soundtracking brand advertisements and offering “adaptive music” to make a generated track better fit any given project.

Udio: A text-to-music generator that can create lyrics, vocals and instrumentals with a keyboard stroke. This model is best known for generating “BBL Drizzy,” a parody song by comic Willonius Hatcher that was then sampled by Metro Boomin and became a viral hit. Udio, like Suno, is defending itself against a copyright infringement lawsuit filed by the three major music companies. Udio and Suno claim their training counts as fair use and accuse the label groups of attempting to stifle independent competition.

Voice Conversion

Covers.AI: A voice filter platform created by Mayk.It. The platform offers the ability to build your own AI voice, as well as try on the voices of characters like SpongeBob, Mario or Ash Ketchum.

Elf.Tech: A Grimes voice filter created by CreateSafe and Grimes. This tool is the first major artist-voice converter, and Grimes debuted it in response to the virality of Ghostwriter977’s “Heart on My Sleeve,” which deepfaked the voices of Drake and The Weeknd.

Hooky: A voice filter platform best known for its official partnership with Lauv, who used Hooky technology to translate his song “Love U Like That” into Korean.

Kits.AI: A voice filter, stem separation and mastering platform. This company can provide DIY voice cloning as well as a suite of other generic types of voices. It is certified by Fairly Trained.

Supertone: A voice filter platform, acquired by HYBE, that allows users to change their voice in real time. It also offers a tool called Clear to remove noise and reverb from vocal stems.

Voice-Swap: A voice filter and stem separation platform. This company offers an exclusive roster of artist voices to choose from, including Imogen Heap, and it hopes to become an “agency” for artists’ voices.

Vocoflex: A voice filter plug-in created by Dreamtonics that offers the ability to change the tone of a singer’s voice in real time.

Stem Separation

AudioShake: A stem separation and lyric transcription tool. This company is best known for its recent participation in Disney’s accelerator program.

LALA.AI: A stem separation and voice conversion tool.

Moises AI: A stem separation, pitch-changer, chord detection and smart metronome tool created by Music AI.

Sounds.Studio: A stem separation tool created by Never Before Heard Sounds.

Stem-Swap: A stem separation tool created by Voice-Swap.

Dynamic Music

Endel: A personalized soundscape generator that enhances activities including sleep and focus. The company also releases collaborations with artists like Grimes, James Blake and 6LACK.

Lifescore: A personalized soundtrack generator that enhances activities like driving, working out and more.

Plus Music.AI: A personalized soundtrack generator for video-game play.

Reactional Music: A personalized soundtrack generator that adapts music to actions taken in video games in real time.

Management

Drop Track: An AI-powered music publicity tool.

Musical AI: An AI-powered rights management tool that enables rights holders to manage their catalog and license their works for generative AI training as desired.

Musiio: An AI music tagging and search tool owned by SoundCloud. This tool creates fingerprints to better track and search songs, and it automates tagging songs by mood, keywords, language, genre and lyrical content.

Triniti: A suite of AI tools for music creation, marketing, management and distribution created by CreateSafe. It is best known for the AI voice application programming interface behind Grimes’ Elf.Tech synthetic voice model.

Other

Hook: An AI music remix app that allows users to create mashups and edits with proper licensing in place.

LANDR: A suite of plug-ins and producer services, many of which are powered by AI, including an AI mastering tool.

Morpho: A timbre transfer tool created by Neutone.

From Ghostwriter’s “fake Drake” song to Metro Boomin’s “BBL Drizzy,” a lot has happened in a very short time when it comes to the evolution of AI’s use in music. And it’s much more prevalent than the headlines suggest. Every day, songwriters are using AI voices to better target pitch records to artists, producers are trying out AI beats and samples, film/TV licensing experts are using AI stem separation to help them clean up old audio, estates and catalog owners are using AI to better market older songs, and superfans are using AI to create next-level fan fiction and UGC about their favorite artists.
For those just starting out in the brave new world of AI music, and trying to make sense of all the buzzwords that come with it, Billboard contacted some of the sector’s leading experts to get answers to top questions.

What are some of the most common ways AI is already being used by songwriters and producers?

TRINITY, music producer: As a producer and songwriter, I use AI and feel inspired by AI tools every day. For example, I love using Splice Create Mode. It allows me to search through the Splice sample catalog while coming up with ideas quickly, and then I export it into my DAW, Studio One. It keeps the flow of my sessions going as I create. I heard we’ll soon be able to record vocal ideas into Create Mode, which will be even more intuitive and fun. Also, the iZotope Ozone suite is great. The suite has mastering and mixing assistant AI tools built into its plug-ins. These tools help producers and songwriters mix and master tracks and song ideas.

I’ve also heard other songwriters and producers using AI to get started with song ideas. When you feel blocked, you have AI tools like Jen, Melody Studio and Lemonaide to help you come up with new chord progressions. Also, Akai MPC AI and LALA AI are both great for stem splitting, which allows you to separate [out] any part of the music. For example, if I just want to solo and sample the drums in a record, I can do that now in minutes.

AI is not meant to replace us as producers and songwriters. It’s meant to inspire and push our creativity. It’s all about your perspective and how you use it. The future is now; we should embrace it. Just think about how far we have come from the flip phones to the phones we have now that feel more limitless every day. I believe the foundation and heart of us as producers and songwriters will never get lost. We must master our craft to become the greatest producers and songwriters. AI in music creation is meant to assist and free [up] more mental space while I create. I think of AI as my J.A.R.V.I.S. and I’m Iron Man.

How can a user tell if a generative AI company is considered “ethical” or not?

Michael Pelczynski, chief strategy and impact officer, Voice-Swap: If you’re paying for services from a generative AI company, ask yourself, “Where is my money going?” If you’re an artist, producer or songwriter, this question becomes even more crucial: “Why?” Because as a customer, the impact of your usage directly affects you and your rights as a creator. Not many companies in this space truly lead by example when it comes to ethical practices. Doing so requires effort, time and money. It’s more than just marketing yourself as ethical. To make AI use safer and more accessible for musicians, make sure the platform or company you choose compensates everyone involved, both for the final product and for the training sources.

Two of the most popular [ways to determine whether a company is ethical] are the Fairly Trained certification that highlights companies committed to ethical AI training practices, and the BMAT x Voice-Swap technical certification that sets new standards for the ethical and legal utilization of AI-generated voices.

When a generative AI company says it has “ethically” sourced the data it trained on, what does that usually mean? 

Alex Bestall, founder and CEO, Rightsify and Global Copyright Exchange (GCX): [Ethical datasets] require [an AI company to] license the works and get opt-ins from the rights holders and contributors… Beyond copyright, it is also important for vocalists whose likeness is used in a dataset to have a clear opt-in.

What are some examples of AI that can be useful to music-makers that are not generative?

Jessica Powell, CEO, AudioShake: There are loads of tools powered by AI that are not generative. Loop and sample suggestion are a great way to help producers and artists brainstorm the next steps in a track. Stem separation can open up a recording for synch licensing, immersive mixing or remixing. And metadata tagging can help prepare a song for synch-licensing opportunities, playlisting and other experiences that require an understanding of genre, BPM and other factors.

In the last year, several lawsuits have been filed by artists in various fields against generative AI companies, primarily concerning the training process. What is the controversy about?

Shara Senderoff, co-founder, Futureverse and Raised in Space: The heart of the controversy lies in generative AI companies using copyrighted work to train their models without artists’ permission. Creators argue that this practice infringes on their intellectual property rights, as these AI models can produce content closely resembling their original works. This raises significant legal and ethical questions about creative ownership and the value of human artistry in the digital age. The creator community is incensed [by] seeing AI companies profit from their efforts without proper recognition or compensation.

Are there any tools out there today that can be used to detect generative AI use in music? Why are these tools important to have?

Amadea Choplin, COO, Pex: The more reliable tools available today use automated content recognition (ACR) and music recognition technology (MRT) to identify uses of existing AI-generated music. Pex can recognize new uses of existing AI tracks, detect impersonations of artists via voice identification and help determine when music is likely to be AI-generated. Other companies that can detect AI-generated music include Believe and Deezer; however, we have not tested them ourselves. We are living in the most content-dense period in human history where any person with a smartphone can be a creator in an instant, and AI-powered technology is fueling this growth. Tools that operate at mass scale are critical to correctly identifying creators and ensuring they are properly compensated for their creations.

Romain Simiand, chief product officer, Ircam Amplify: Most AI detection tools provide only one side of the coin. As an example, tools such as aivoicedetector.com are primarily meant to detect deepfakes for speech. IRCAM Amplify focuses primarily on prompt-based tools used widely. Yet, because we know this approach is not bulletproof, we are currently supercharging our product to highlight voice clones and identify per-stem AI-generated content. Another interesting contender is resemble.ai, but while it seems their approach is similar, the methodology described diverges greatly.

Finally, we have pex.com, which focuses on voice identification. I haven’t tested the tool but this approach seems to require the original catalog to be made available, which is a potential problem.

AI recognition tools like the AI Generated Detector released by IRCAM Amplify and the others mentioned above help with the fair use and distribution of AI-generated content.

We think AI can be a creativity booster in the music sector, but it is as important to be able to recognize those tracks that have been generated with AI [automatically] as well as identifying deepfakes — videos and audio that are typically used maliciously or to spread false information.

In the United States, what laws are currently being proposed to protect artists from AI vocal deepfakes?

Morna Willens, chief policy officer, RIAA: Policymakers in the U.S. have been focused on guardrails for artificial intelligence that promote innovation while protecting all of us from unconsented use of our images and voices to create invasive deepfakes and voice clones. Across legislative efforts, First Amendment speech protections are expressly covered and provisions are in place to help remove damaging AI content that would violate these laws.

On the federal level, Reps. María Elvira Salazar (R-FL), Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY) and Rob Wittman (R-VA) introduced the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act to create a national framework that would safeguard Americans from their voice and likeness being used in nonconsensual AI-generated imitations.

Sens. Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) released a discussion draft of a bill called Nurture Originals, Foster Art and Keep Entertainment Safe Act with similar aims of protecting individuals from AI deepfakes and voice clones. While not yet formally introduced, we’re hopeful that the final version will provide strong and comprehensive protections against exploitive AI content.

Most recently, Sens. Blackburn, Maria Cantwell (D-WA) and Martin Heinrich (D-NM) introduced the Content Origin Protection and Integrity From Edited and Deepfaked Media Act, offering federal transparency guidelines for authenticating and detecting AI-generated content while also holding violators accountable for harmful deepfakes.

In the states, existing “right of publicity” laws address some of the harms caused by unconsented deepfakes and voice clones, and policymakers are working to strengthen and update these. The landmark Ensuring Likeness Voice and Image Security Act made Tennessee the first state to update its laws to address the threats posed by unconsented AI deepfakes and voice clones. Many states are similarly considering updates to local laws for the AI era.

RIAA has worked on behalf of the artists, rights holders and the creative community to educate policymakers on the impact of AI — both challenges and opportunities. These efforts are a promising start, and we’ll continue to advocate for artists and the entire music ecosystem as technologies develop and new issues emerge.

What legal consequences could a user face for releasing a song that deepfakes another artist’s voice? Could that user be shielded from liability if the song is clearly meant to be parody?

Joseph Fishman, music law professor, Vanderbilt University: The most important area of law that the user would need to worry about is publicity rights, also known as name/image/likeness laws, or NIL. For now, the scope of publicity rights varies state by state, though Congress is working on enacting an additional federal version whose details are still up for grabs. Several states include voice as a protected aspect of the rights holder’s identity. Some companies in the past have gotten in legal trouble for mimicking a celebrity’s voice, but so far those cases have involved commercial advertisements. Whether one could get in similar trouble simply for using vocal mimicry in a new song, outside of the commercial context, is a different and largely untested question. This year, Tennessee became the first state to expand its publicity rights statute to cover that scenario expressly, and other jurisdictions may soon follow. We still don’t know whether that expansion would survive a First Amendment challenge.

If the song is an obvious parody, the user should be on safer ground. There’s pretty widespread agreement that using someone’s likeness for parody or other forms of criticism is protected speech under the First Amendment. Some state publicity rights statutes even include specific parody exemptions.

By the mid-2010s, the power of the playlist — the Spotify playlist to be exact — loomed large in the music business: Everyone knew a spot on Rap Caviar could mint a rap hit overnight; a placement on Fresh Finds could induce a label bidding war; and a lower-than-expected ranking on New Music Friday could ruin a label project manager’s Thursday night.
But in the 2020s, challengers — namely TikTok, with its potent and mysterious algorithm that serves social media users with addictive snippets of songs as they scroll — have threatened Spotify’s reign as music industry kingmaker. Still, Spotify’s editorial playlists remain one of the most important vehicles for music promotion, and its 100-plus member global team, led by its global head of editorial Sulinna Ong, has evolved to meet the changing times.

“Our editorial expertise is both an art and a science,” says Ong, who has led the company through its recent efforts to use technology to offer more personalized playlist options, like its AI DJ, Daylist and daily mixes. “We’re always thinking about how we can introduce you to your next favorite song or your next favorite artist. How do we provide context to get you to engage? Today, the challenge is cutting through the noise to get your attention.”

In conversation with Billboard, Ong talks about training the AI DJ with the editors’ human expertise, using playlists to differentiate Spotify from its competition and looking ahead to Generation Alpha (ages 0-14). 

I’ve seen such a shift in the editorial strategy at Spotify in the last couple years. Daylist, personalized editorial playlists (marked by the “made for you” tag), daily mixes, AI DJ and more. What drove that shift?

To start off, it’s useful to zoom out and think about how people listen to music. The way people listen to music is fluid, and curation and editorial have to be fluid as well. We have to understand the changes.

Curators have always been at the core of Spotify’s identity, right from the early days of the company. Back in 2012, Spotify’s music team started with three editors, and it quickly grew to more than 100 around the world today. These curators started by curating what became known as our flagship editorial playlists — Today’s Top Hits, Rap Caviar, Viva Latino. Over time that expanded to playlists like Altar, Lorem, Pollen, etc. Those are all still important.

But around 2018, editors made their first attempts to bridge human curation from our flagship editorial playlists with personalization engines. 2018 is the year when the technology arose with personalization and machine learning to open up these possibilities. At that time, we started making more personalized playlists where the tracks fit with an overall mood or moment curated by editors but varied for each listener — like My Life Is A Movie, Beastmode, Classic Roadtrip Songs. Editors will select a number of songs that they feel fit that playlist. Let’s say, for example, we have 200 songs selected; you might see the 100 of those that are most aligned with your taste.

Discover Weekly and Release Radar are tailored to listener activity and have been around much longer. Did those inspire your team to push into these personalized editorial playlists around 2018?

Yes, exactly. Algorithmic playlists, like Release Radar [and] Discover Weekly, we found that users liked them [and] that inspired us to then work with the product teams and ask, “What is the next step of this?” Spotify has more than 500 million users. We knew that it would keep growing and as a human curator, you can’t manually curate to that entire pool. Technology can fill in that gap and increase our possibilities. A lot of times, I see narratives where people call this a dichotomy — either playlists are human-made or machine-made. We don’t see it that way.

In 2024, personalization and machine learning are even more important technologies for streaming music and watching content. We’ve kept investing in cutting-edge personalization and it’s making a real impact — 81% of our listeners cite personalization as their favorite thing about Spotify. Our static editorial playlists are still very powerful, but we also have made these other listening experiences to round out the picture.

How someone listens is never one thing. Do you only want to watch movies? No, you want to watch a movie sometimes; other times you want to watch a 20-minute TV show. We have to understand the various ways that you might like to [listen].

Daylist, for example, is very ephemeral. It only exists for a certain amount of time. The appeal is in the title — it also really resonates for a younger audience.

Did your team always intend that Daylist, which often gives users crazy titles like “Whimsical Downtown Vibes Tuesday Evening,” could be shareable — even memeable — on social media?

Absolutely. It’s very shareable. It’s a bite-sized chunk of daily joy that you get that you can post about online.

It reminds me of the innately shareable nature of Spotify Wrapped.

There is a lineage there. It is similar because it’s a reminder of what you’re listening to. But it’s repackaged in a humorous way — light and fun and it updates so it keeps people coming back.

How do you think Spotify’s editorial team differentiates itself from competitors like Apple and Amazon?

Early on, we understood that editorial expertise around the world is really valuable, and it was needed to set us apart. So we have editors all around the world. They are really the music experts of the company. They are focused on understanding the music and the cultural scenes where they are.

We have what we call “editorial philosophy.” One of the tenets of that is our Global Curation Groups, or “GCGs” for short. Once a week, editors from around the world meet and identify tracks that are doing well and should flow from one market to another. We talk about music trends, artists we are excited about. We talk about new music mainly but also music that is resurfacing from social media trends.

This is how we got ahead on spreading genres like K-pop seven years ago. We were playlisting it and advocating for it spreading around the world. Musica Mexicana and Amapiano — we were early [with those] too. We predicted that streaming would reduce the barriers of entry in terms of language, so we see genres and artists coming from non-Western, non-English speaking countries really making an impact on the global music scene.

How was the AI DJ trained to give the commentary and context it gives?

We’ve essentially spun up a writers’ room. We have our editors work with our product team and script writers to add in some context about the artists and tracks that the DJ can share with listeners. The info they feed in can be musical facts or culturally relevant insights. We want listeners to feel connected to the artists they hear on a human level. At the end of the day, this approach to programming also really helps us broaden out the pool of exposure, particularly for undiscovered artists and tracks. We’ve seen that people who hear the commentary from the DJ are more likely to listen to a song they would have otherwise skipped.

When Spotify editorial playlists started, the cool, young, influential audience was millennials. Now it’s Gen Z. What challenges did that generational shift pose?

We think about this every day in our work. Now, we’re even thinking about the next generation after Gen Z, Gen Alpha [children age 14 and younger]. I think the key difference is our move away from genre lines. Where we once had a strictly rock playlist, we are now building playlists like POV or My Life Is A Movie. It’s a lifestyle or an experience playlist. We also see that younger listeners like to experiment with lots of different listening experiences. We try to be very playful about our curation and offer those more ephemeral daily playlists.

What are you seeing with Gen Alpha so far? I’m sure many of them are still on their parents’ accounts, but do you have any insight into how they might see music differently than other generations as they mature?

Gaming. Gaming is really an important space for them. Music is part of the fabric of how we play games now — actually, that’s how these kids often discover and experience music, especially on Discord and big MMOs — massive multiplayer games. We think about this culture a lot because it is mainstream culture for someone of that age.

Gaming is so interesting because it is such a dynamic, controllable medium. Recorded music, however, is totally static. There have been a few startups, though, that are experimenting with music that can morph as you play the game.

Yeah, we’re working on making things playful. There’s a gamification in using Daylist, right? It’s a habit. You come back because you want to see what’s new. We see the AI DJ as another way to make music listening more interactive, less static.

Spotify has been known as a destination for music discovery for a long time. Now, listeners are increasingly turning to TikTok and social media for this. How do you make sure music discovery still continues within Spotify for its users?

That comes down to, again, the editorial expertise and the GCGs I mentioned before. We have 100-plus people whose job it is to be the most tapped-in people in terms of what’s happening around the world in their genre. That’s our biggest strength in terms of discovery because we have a large team of people focused on it. Technology just adds on to that human expertise.

Back when Spotify playlists first got popular, a lot of people compared the editors to the new generation of radio DJs. How do you feel about that comparison?

It’s not a one-to-one comparison. I can understand the logic of how some people might get there. But, if I’m very frank, the editorial job that we do is not about us. With radio DJs, it’s all about them, their personality. Our job is not about being a DJ or the front face of a show. Not to be disparaging to radio DJs — their role is important — it’s just not the same thing. I don’t think we are gatekeepers. I say that because it is never about me or us as editors. It’s about the music, the artist and the audience’s experience. It’s very simple: I want to introduce you to your next favorite song. Yes, we have influence, and I recognize that role in the industry; it’s one I take very seriously. That’s a privilege and a responsibility, but it is not about us at the end of the day.

This story was published as part of Billboard’s new music technology newsletter ‘Machine Learnings.’ Sign up for ‘Machine Learnings,’ and Billboard’s other newsletters, here.

A North Carolina musician has been indicted by federal prosecutors over allegations that he used AI to help create “hundreds of thousands” of songs and then used the AI tracks to earn more than $10 million in fraudulent streaming royalty payments since 2017.
In a newly unsealed indictment, Manhattan federal prosecutors charged the musician, Michael Smith, 52, with three counts of wire fraud, wire fraud conspiracy and money laundering conspiracy. According to the indictment, Smith was aided by the CEO of an unnamed AI music company as well as other co-conspirators in the U.S. and around the world, and some of the millions he was paid were funneled back to the AI music company.

According to the indictment, the hundreds of thousands of AI songs Smith allegedly helped create were available on music streaming platforms like Spotify, Amazon Music, Apple Music and YouTube Music. It also claims Smith has made “false and misleading” statements to the streaming platforms, as well as collection societies including the Mechanical Licensing Collective (the MLC) and distributors, to “promote and conceal” his alleged fraud.

Through his alleged activities, Smith diverted more than $1 million in streaming payments per year that “ultimately should have been paid to the songwriters and artists whose works were streamed legitimately by real consumers,” says the indictment.

The indictment also details exactly how Smith allegedly pulled off the scheme he’s accused of. First, it says he gathered thousands of email accounts, often in the names of fictitious identities, to create thousands of so-called “bot accounts” on the streaming platforms. At its peak, Smith’s operation allegedly had “as many as 10,000 active bot accounts” running; he also allegedly hired a number of co-conspirators in the U.S. and abroad to do the data entry work of signing up those accounts. “Make up names and addresses,” reads an email from Smith to an alleged co-conspirator dated May 11, 2017, that was included in the indictment.

To maximize income, the indictment states that Smith often paid for “family plans” on streaming platforms “typically using proceeds generated by his fraudulent scheme” because they are the “most economical way to purchase multiple accounts on streaming services.”

Smith then used cloud computing services and other means to cause the accounts to “continuously stream songs that he owned” and make it look legitimate. The indictment alleges that Smith knew he was in the wrong and used a number of methods to “conceal his fraudulent scheme,” ranging from fictitious email names and VPNs to instructing his co-conspirators to be “undetectable” in their efforts.

In emails sent in late 2018 and obtained by the government, Smith told co-conspirators to not be suspicious while running up tons of streams on the same song. “We need to get a TON of songs fast to make this work around the anti fraud policies these guys are all using now,” Smith wrote in the emails.
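
Some rough, hypothetical arithmetic shows why a large catalog matters to a scheme like this. Assuming a blended payout of about half a cent per stream (the indictment does not state a rate), reaching the roughly $1 million a year prosecutors describe would take on the order of 550,000 streams a day, and the more songs those streams are spread across, the less any single track stands out:

```python
# Back-of-the-envelope arithmetic, not data from the indictment. The
# per-stream rate below is an assumption; actual blended rates vary.

PER_STREAM = 0.005            # assumed payout per stream, in dollars
TARGET_PER_YEAR = 1_000_000   # ~$1M/year diverted, per the indictment

streams_per_day = TARGET_PER_YEAR / PER_STREAM / 365  # ~548,000

for catalog_size in (1_000, 100_000, 500_000):
    per_song = streams_per_day / catalog_size
    print(f"{catalog_size:>7,} songs -> ~{per_song:,.1f} streams per song per day")
# A thousand songs would each need hundreds of daily streams (conspicuous);
# half a million songs need roughly one apiece (much harder to flag).
```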

Indeed, there have been a number of measures taken up by the music business to try to curb this kind of fraudulent streaming activity in recent years. Anti-streaming fraud start-up Beatdapp, for example, has become an industry leader, hired by a number of top distributors, streaming services and labels to identify and prevent fraud. Additionally, several independent DIY distributors, including TuneCore, DistroKid and CD Baby, have recently banded together to form “Music Fights Fraud,” a coalition that shares a database and other resources to prevent fraudsters from hopping from service to service to avoid detection.

Last year, Spotify and Deezer rolled out revamped royalty systems that introduced new penalties for fraudulent activity. Still, fraudsters appear to study each new safeguard and keep evolving their methods to evade detection.

The rise of quickly generated AI songs has been a major point of concern for streaming fraud experts because it allows bad actors to spread their false streaming activity over a larger number of songs and create more competition for streaming dollars. To date, AI songs are not paid out any differently from human-made songs on streaming platforms. A lawsuit filed by Sony Music, Warner Music Group and Universal Music Group against AI companies Suno and Udio in June summed up the industry’s fears well, warning that AI songs from these companies “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”

Though Smith is said to be a musician himself with a small catalog of his own, the indictment states that he leaned on AI music to quickly amass a much larger catalog.

The indictment alleges that around 2018, “Smith began working with the Chief Executive Officer of an unnamed AI music company and a music promoter to create hundreds of thousands of songs that Smith could then fraudulently stream.” Within months, the CEO of the AI company was allegedly providing Smith with “thousands of songs each week.” Eventually, Smith entered into a “Master Services Agreement” under which the AI company supplied him with 1,000 to 10,000 songs per month and agreed that Smith would have “full ownership of the intellectual property rights in the songs.” In return, Smith would provide the AI company with metadata and the “greater of $2,000 or 15% of the streaming revenue” he generated from the AI songs.
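That payment term is a simple floor-or-percentage formula: the flat $2,000 governs until monthly streaming revenue passes about $13,333, after which the 15% cut takes over. A one-function sketch (the revenue figures below are invented):

```python
def ai_company_fee(monthly_revenue: float) -> float:
    """'Greater of $2,000 or 15% of the streaming revenue,' per the indictment."""
    return max(2_000.00, 0.15 * monthly_revenue)

for revenue in (5_000, 13_333, 80_000):   # hypothetical monthly revenues
    print(f"${revenue:>7,} revenue -> ${ai_company_fee(revenue):,.2f} fee")
# $  5,000 revenue -> $2,000.00 fee
# $ 13,333 revenue -> $2,000.00 fee
# $ 80,000 revenue -> $12,000.00 fee
```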

“Keep in mind what we’re doing musically here… this is not ‘music,’ it’s ‘instant music’ ;)”, reads an email from the AI company’s CEO to Smith that was included in the indictment.

Over time, various players in the music business questioned Smith’s activities, including a streaming platform, a music distributor and the MLC. In March and April 2023, the MLC halted royalty payments to Smith and confronted him about the apparent fraud. In response, Smith and his representatives “repeatedly lied” about the scheme and his use of AI-generated songs, the indictment says.

Christie M. Curtis, FBI acting assistant director, said of the indictment, “The defendant’s alleged scheme played upon the integrity of the music industry by a concerted attempt to circumvent the streaming platforms’ policies. The FBI remains dedicated to plucking out those who manipulate advanced technology to receive illicit profits and infringe on the genuine artistic talent of others.”

Kris Ahrend, CEO of the MLC, added, “Today’s DOJ indictment shines a light on the serious problem of streaming fraud for the music industry. As the DOJ recognized, The MLC identified and challenged the alleged misconduct, and withheld payment of the associated mechanical royalties, which further validates the importance of The MLC’s ongoing efforts to combat fraud and protect songwriters.”

Downtown Music has struck a deal with Hook, an AI social music app, which will pave the way for fans to create authorized remixes of the millions of licensed recordings in Downtown’s catalog.
At a time when many of music’s biggest stars are releasing sped-up or slowed-down remixes of their songs, and fans are taking to TikTok to post all kinds of musical mashups and edits, it’s clear that listeners want to do more than just play songs; they want to play with songs. Often, though, these remixes are made without proper licenses or authorization in place.

According to a recent study by Pex, nearly 40% of all the music used on TikTok is modified in some way, whether it’s pitch-altered, sped up, slowed down or spliced together with another song. Hook hopes to create a legal, licensed environment for users to participate in this rapidly growing part of online music fandom.
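The “sped up” edits Pex is counting are typically simple resampling jobs, in which tempo and pitch rise together. A minimal sketch using the pydub library, assuming a local file named track.mp3:

```python
from pydub import AudioSegment

def sped_up(audio: AudioSegment, factor: float = 1.25) -> AudioSegment:
    # Resampling trick: claim a higher frame rate, then convert back.
    # Tempo and pitch shift together, producing the classic "sped up" sound.
    faster = audio._spawn(
        audio.raw_data,
        overrides={"frame_rate": int(audio.frame_rate * factor)},
    )
    return faster.set_frame_rate(audio.frame_rate)

song = AudioSegment.from_file("track.mp3")        # hypothetical input file
sped_up(song, 1.25).export("track_sped_up.mp3", format="mp3")
```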

With Hook’s license in place, Downtown Music will receive financial compensation when its works are used in these user-generated content (UGC) remixes. Hook’s platform also gives Downtown’s artists and labels access to valuable data insights, showing them how and where their augmented music, created on Hook, is being used.

Hook sees its AI-powered remix app as a viable new revenue source for artists and labels, allowing them to better capitalize on the fact that much of music culture and fandom has shifted from traditional streaming services over to short-form apps like TikTok. Says Hook founder/CEO Gaurav Sharma: “We are challenging the idea that music on social media and UGC only provides promotional value. We believe fan remixing and UGC is a new form of active music consumption and rights holders should be paid for it. This deal represents a new model for music, social, and AI. The team at Downtown understands our mission and we’re humbled by their support.”

Before founding Hook, Sharma served as chief operating officer of JioSaavn, India’s largest music streaming platform and one of the first platforms to secure global streaming licenses with record labels. During his time at the company, Sharma and his team grew JioSaavn to more than 100 million monthly active users.

Harmen Hemminga, vp of product & services strategy at Downtown Music, says of the deal, “Whilst music consumption continues to increase, broaden and localize, the trend of music ‘prosumption’ on social platforms is ever-growing. Users of these platforms are including music in the experiences they share with others across a variety of contextual, inventive ways. Hook offers rights holders the ability to monetize these new and creative forms of use.”

These days, many in the music business are trying to harness the power of the “superfan” — the highly engaged segment of an artist’s audience that regularly shows up to concerts, buys t-shirts, orders physical albums and obsesses over the artist online. In the digital marketing space, that has meant agencies are increasingly turning their attention to fan pages, hoping to capture the attention of that top tier of listeners online. 
“The TikTok influencer campaign has been front and center for marketing songs for a while,” says Ethan Curtis, founder of PushPlay, a digital marketing agency that has promoted songs like “Bad Habit” by Steve Lacy, “Golden Hour” by JVKE and “Glimpse of Us” by Joji. “But as it’s gotten more saturated and more expensive, we found there was interest in creating your own fan pages where you can have total control of the narrative.” 

“Fan pages” made sneakily by artists’ teams may have become the digital campaign du jour in the last year or so, but the idea isn’t new. Even before TikTok took over music discovery, management and digital teams quietly used anonymous accounts to pose as fans on sites like Tumblr, Instagram and Twitter, sharing interviews, videos and other content around the artists because, as Curtis puts it, “It is a space you can own.”

Curtis is now taking that concept a step further with his innovative, albeit controversial, new company WtrCoolr, a spinoff of his digital firm that’s dedicated to creating “fan fiction” pages for artists. To put it simply, WtrCoolr is hired to create viral-worthy fake stories about its clients, which include Shaboozey and Young Nudy, among others. While Curtis says he is open to creating videos with all kinds of “imaginative” new narratives, he draws the line at any fan fiction that could be “negative” or “cause backlash” for the people featured in the videos.

The results speak for themselves. One popular WtrCoolr-made TikTok video, which falsely claimed that Dolly Parton is Shaboozey’s godmother, has 1.1 million views and 121,500 likes to date. Posted to the agency’s fan account @ShaboozeysVault, the video, Curtis says, was made by splicing together old interview clips of the two artists along with some AI voiceovers.

“We are huge fans of pop culture, fan fiction and satire,” says Curtis. “We see it as creating our own version of a Marvel Universe but with pop stars.”

All of the TikTok accounts made by WtrCoolr note in their bios that their content is “fan fiction.” The videos on these pages also include “Easter eggs,” which Curtis says point to the fact that the videos are fabrications. But plenty of fans are still falling for it. Many viewers of the Parton video, for example, took it as gospel truth, posting comments like “how many god children does Dolly have and where can I sign up?” and “Dolly is an angel on Earth.”

In the future, Curtis thinks this novel form of “fan fiction” will be useful beyond just trying to engage fan bases online. He sees potential for the pages to serve as “a testing ground” for real-life decisions — like an artist choosing to collaborate with another — to see how the fan base would react. “Traditionally, you don’t get to look before you jump,” he says. “Maybe in the future we will.”

What was the first “fan fiction” post that took off for WtrCoolr?

It was the video of Shaq being a superfan to the rapper Young Nudy [10.4 million views, 1.7 million likes on TikTok]. We had been working on [promoting] the Young Nudy song, “Peaches & Eggplants,” mostly on the influencer side. We had dances and all sorts of different trends going. It was becoming a top rap song by that point and then we sold the client [Young Nudy’s team] on doing one of these fan pages where we just tested out a bunch of stuff. The first narrative video we tried was this video where we found some footage of Shaq — I think it was at Lollapalooza — where he was in the front of the crowd [for a different artist], vibing and head banging. It was a really funny visual. We just got clever with the editing and created the story that Shaq was showing up at every Young Nudy show, and then it went crazy viral. 

It was really exciting to see. It brought fans to Nudy and also made existing Nudy fans super excited that Shaq was engaging. Then there was tons of goodwill for Shaq that came from it too. Lots of comments like “protect Shaq at all costs” or “Shaq’s a damn near perfect human being.” It was all around a positive experience. We put on our pages that this is a fan page and fan fiction. We don’t really push that it’s the truth. We’re just having fun and we let that be known. 

There was some pickup after that video went viral. Weren’t there some rap blogs posting about the video and taking it as truth?

I don’t know if they were taking it as true necessarily. We didn’t really have any conversations with anyone, but it was definitely getting shared all around — whether it was because of that or just because it was such a funny video. Even Nudy reacted and thought it was funny. I think the label may have reached out to Shaq and invited him to a show, and he thought it was funny but was on the other side of the country that day and couldn’t make it. 

I’m sure there’s some people who thought it was true, but a lot of the videos we’ll put Easter eggs at the end that make it obvious that it’s not true. Then in our bios we write that it is fan fiction. 

Do you think that there’s anything bad that could come from fans and blogs believing these videos are real, only to later realize they were fake?

I don’t know if anything is really bad. We don’t claim for it to be true, and we’re just having fun, weaving stories and basically saying, “Wouldn’t it be funny if?” or, “Wouldn’t it be heartwarming if?” I don’t think we’re really ever touching on stuff that’s of any importance, that could lead to any negative energy or backlash. We’re just trying to make fun stuff that fans enjoy. Just fun little moments. It’s no different from taking a video out of context and slapping meme headings on it.

Do you see this as the future of memes?

I do. I also think there’s a future where what we’re doing becomes sort of like a testing ground for real-life collabs or TV show concepts. I could see a label coming to us and asking us to test how a new post-beef collab between Drake and Kendrick would be received, for example. They could say, “Can you create a post about this and we can see if people turn on Kendrick for backtracking, or if fans will lose their shit over them coming together?” We could see if it’s a disaster or potentially the biggest release of their careers. Traditionally, you don’t get to look before you jump. Maybe in the future we will. But even now with the Shaq video, it basically proved that if Shaq went to an unexpected show and was raging in the front row people would love it. I mean, if it’s been so successful on socials, why wouldn’t it be so successful in real life?

It seemed like the Shaboozey and Dolly Parton video inserted Shaboozey’s name and other new phrases using an AI voice filter. Do you rely on AI in these videos a lot or is it primarily about careful editing? 

The majority of it is just clever editing. Every now and then we may change a word up or something [using AI], but the majority of it is just collaging clips together. 

How time intensive is it to create these videos? 

The process has been changing. It used to be much more time intensive back before we realized that clever editing was more efficient. In the beginning, we would write scripts for the videos, run them through AI and then try to find clips to match the scripts and stuff like that. You have to match the edit up with the artist’s lips so it looks like lip synching. That’s just super time intensive. Then we started realizing that it’s easier to just define a basic objective, go out on the internet and see what we can find. We develop a story from there so that we only have to do a few fake [AI-assisted] words here and there, and then we’ll cut away from the video, show some footage from a music video or something like that. It makes it more efficient. 

As far as you know, is WtrCoolr the first team in digital marketing that is trying to do these false-narrative, storytelling videos, or is this something that is seen all over the internet? 

We were definitely the first to do it. There’s definitely people that are imitating it now. We see it generally in the content that exists online, especially on meme pages. It’s becoming part of the culture. 

Do you run your ideas for fan fiction narratives by the artist before you do them? 

We’re working with them, and we’re talking through ideas. There’s as much communication as they want. Some artists want to know what’s going on, but some artists just don’t care to be involved. 

It seems like, so far, no one has had any issues with being used in the videos — they even see this positively — but are you concerned about the legal implications of using someone’s likeness to endorse an artist or idea that they haven’t really endorsed?

We’re not claiming it to be true. We include disclaimers that it’s just fan fiction. So, I think if we were claiming for it to be true then that’s a different story, but that’s not what we are doing. 

That’s listed on all the page bios, but it isn’t listed on the actual video captions, right? 

It’s listed on the profiles, and then a lot of videos we just do Easter eggs at the end that make it sort of apparent that it’s a joke. 

I found the idea that you mentioned earlier to be interesting — the idea that you could test out collaborations or things without having to get the artist involved initially, whether it’s Drake and Kendrick collaborating or something else. It reminds me of when people tease a song before they slate it for official release. Do you feel that is a fair comparison? 

Totally. What TikTok did for song teasing, this has done for situation teasing. 

This story was published as part of Billboard’s new music technology newsletter ‘Machine Learnings.’ Sign up for ‘Machine Learnings,’ and Billboard’s other newsletters, here.