When Michael “Mike” Smith was indicted Wednesday (Sept. 4) over allegations that he used an AI music company to create “hundreds of thousands” of songs and then used bots to artificially earn $10 million in streaming income since 2017, prosecutors claimed that some of the money flowed back to that AI music company. The indictment also claimed that Smith was in consistent contact with its CEO — but it never revealed their names.
ASCAP/BMI Songview records and the MLC database indicate that Alex Mitchell, CEO/founder of popular AI music company Boomy, is listed as a co-writer on at least several hundred of the 200,000-plus songs registered to Smith. Boomy also released a song, “This Isn’t Real Life,” jointly with Smith, CVBZ and Stunna 4 Vegas.
In a statement to Billboard, Mitchell says: “We were shocked by the details in the recently filed indictment of Michael Smith, which we are reviewing. Michael Smith consistently represented himself as legitimate.”
The indictment alleges that around 2018, “Smith began working with the Chief Executive Officer of an [unnamed] AI music company and a music promoter to create thousands of songs that Smith could then fraudulently stream.” Within months, the CEO was allegedly providing Smith with “thousands of songs each week.”
According to the indictment, in June 2019 Smith reported to the AI music CEO and the promoter that “we are at 88 million TOTAL STREAMS so far!!!” Smith explained to the two men that his streams were earning about $110,000 per month and that each of them was receiving 10% of the proceeds. Smith later asked the AI CEO to provide him with another 10,000 AI songs so that he could “spread this out more” with his streams, a move the indictment states was intended “to evade detection from streaming platforms.”
Eventually, according to the indictment, Smith entered a “Master Services Agreement” with this AI music company that supplied Smith with 1,000-10,000 songs per month. The deal stated that Smith would have “full ownership of the intellectual property rights in the songs.” In turn, Smith would provide the AI company with metadata and the “greater of $2,000 or 15% of the streaming revenue” he generated from the AI songs.
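As a rough, back-of-envelope illustration of those terms, the short Python sketch below uses only the figures cited in the indictment: the roughly $110,000 in monthly streaming income from the June 2019 email, the 10% cuts allegedly paid to the CEO and the promoter, and the Master Services Agreement’s “greater of $2,000 or 15%” fee. It assumes, purely for illustration, that the 15% would apply to that full monthly figure, which the indictment does not specify.

```python
# Illustrative arithmetic only, based on figures cited in the indictment;
# the actual accounting between the parties is not public.

monthly_streaming_revenue = 110_000  # Smith's reported monthly earnings (June 2019 email)

# Early arrangement: the AI CEO and the promoter each allegedly received 10% of proceeds
ceo_cut = 0.10 * monthly_streaming_revenue        # $11,000
promoter_cut = 0.10 * monthly_streaming_revenue   # $11,000
smith_share = monthly_streaming_revenue - ceo_cut - promoter_cut  # $88,000

# Later "Master Services Agreement": the AI company receives the greater of
# $2,000 or 15% of the streaming revenue generated from its songs
msa_fee = max(2_000, 0.15 * monthly_streaming_revenue)  # $16,500 at this revenue level

print(ceo_cut, promoter_cut, smith_share, msa_fee)
```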
“Keep in mind what we’re doing musically here… this is not ‘music,’ it’s ‘instant music’ ;)” the AI CEO wrote to Smith in an email that was included in the indictment.
Mitchell’s publisher is listed as Songtrust, a publishing administration company owned by Downtown, which typically earns a percentage of signees’ royalties in exchange for services. Smith’s publisher, Smithhouse Music Publishing, also lists Songtrust as its point of contact on Songview.
A representative for Songtrust declined Billboard’s request for comment. However, a source close to the matter tells Billboard that Smith and Mitchell’s Songtrust deals were terminated more than a year ago.
While it is not unheard of for an AI company to be approached by customers who are looking to buy a large number of songs, multiple AI music executives tell Billboard that it is common to know why the customer wants the tracks and to do “KYC,” or “know your client,” checks to ensure they are above board.
Typically, customers buying songs in bulk tend to be companies seeking cheap music alternatives, often for social media content. Other requests tend to come from unknown individuals outside of the U.S., especially in streaming fraud hotspots like Poland, Ukraine, Russia, Vietnam and Brazil. These parties are often denied. Two sources say it is surprising to see a CEO’s name listed in the credits as a songwriter when these transactions occur.
Boomy has been at the forefront of AI music since the technology’s infancy. Records vary as to when Boomy launched in beta, with some online sources saying 2018 and others saying 2019. It officially debuted in 2021, as reported by Axios. The company claims on its website to have made over 20 million AI-generated tracks to date.
Boomy has also won the respect of the music industry establishment. For years, Boomy was distributing many of its AI tracks through a partnership with New York-based music services giant Downtown. Though this partnership was in place during the same time frame as Smith’s alleged fraudulent activities, it is unclear if any of Smith’s allegedly fraudulent AI tracks were distributed through Downtown. The indictment does state, however, that Smith used two distributors to upload content from 2017-2024, one based in New York and one based in Florida.
In May 2023, Boomy told users via Discord that Spotify had shut down its ability to upload songs to the DSP and that some of their released tracks had been removed. “This decision was made by Spotify and Boomy’s distributor in order to enable a review of potentially anomalous activity,” Boomy said at the time. Spotify later confirmed that the “anomalous activity” was related to possible streaming fraud detected on certain tracks. A Spotify spokesperson said at the time, “Artificial streaming is a longstanding, industry-wide issue that Spotify is working to stamp out across our service.”
In fall 2023, Boomy announced that it had partnered with fraud detection company Beatdapp to combat streaming manipulation. A month later, Boomy also announced that it had reached a new distribution partnership with ADA Worldwide, a company under the Warner Music Group (WMG) umbrella.
WMG is one of Boomy’s top investors, having made both pre-seed and seed round investments. Other Boomy investors include Sound Media Ventures, First Check Ventures, Intonation Ventures, Future Labs, Boost VC and Scrum Venture, according to Crunchbase.
According to Songview and the MLC database, the same tracks that list Smith and Mitchell as co-writers also list a music industry veteran named Bram Bessoff, founder of promotional platform Indiehitmaker. Typically, these tracks allocate 10% of publishing ownership and royalties to Bessoff, which matches the amount the indictment indicates was paid to the unnamed promoter. Bessoff’s publisher is listed as Songtrust as well. (A source close to the matter says Bessoff’s deal with Songtrust was also terminated more than a year ago.)
Bessoff declined Billboard’s request for comment, citing his cooperation in the ongoing investigation.
By the mid-2010s, the power of the playlist — the Spotify playlist to be exact — loomed large in the music business: Everyone knew a spot on Rap Caviar could mint a rap hit overnight; a placement on Fresh Finds could induce a label bidding war; and a lower-than-expected ranking on New Music Friday could ruin a label project manager’s Thursday night.
But in the 2020s, challengers — namely TikTok, with its potent and mysterious algorithm that serves social media users with addictive snippets of songs as they scroll — have threatened Spotify’s reign as music industry kingmaker. Still, Spotify’s editorial playlists remain one of the most important vehicles for music promotion, and its 100-plus member global team, led by its global head of editorial Sulinna Ong, has evolved to meet the changing times.
“Our editorial expertise is both an art and a science,” says Ong, who has led the company through its recent efforts to use technology to offer more personalized playlist options, like its AI DJ, Daylist and daily mixes. “We’re always thinking about how we can introduce you to your next favorite song, to your next favorite artist. How do we provide context to get you to engage? Today, the challenge is cutting through the noise to get your attention.”
In conversation with Billboard, Ong talks about training the AI DJ with the editors’ human expertise, using playlists to differentiate Spotify from its competition and looking ahead to Generation Alpha (ages 0-14).
I’ve seen such a shift in the editorial strategy at Spotify in the last couple of years: Daylist, personalized editorial playlists (marked by the “made for you” tag), daily mixes, AI DJ and more. What is driving that push toward personalization?
To start off, it’s useful to zoom out and think about how people listen to music. The way people listen to music is fluid, and curation and editorial have to be fluid as well. We have to understand the changes.
Curators have always been at the core of Spotify’s identity, right from the early days of the company. Back in 2012, Spotify’s music team started with three editors; it has since grown to more than 100 around the world. These editors started by curating what became known as our flagship editorial playlists — Today’s Top Hits, Rap Caviar, Viva Latino. Over time that expanded to playlists like Altar, Lorem, Pollen, etc. Those are all still important.
But around 2018, editors made their first attempts to bridge human curation from our flagship editorial playlists with personalization engines. 2018 is the year when personalization and machine learning technology matured enough to open up these possibilities. At that time, we started making more personalized playlists where the tracks fit with an overall mood or moment curated by editors but varied for each listener — like My Life Is A Movie, Beastmode, Classic Roadtrip Songs. Editors will select a number of songs that they feel fit that playlist. Let’s say, for example, we have 200 songs selected; you might see the 100 of those that are most aligned with your taste.
Discover Weekly and Release Radar are tailored to listener activity and have been around much longer. Did those inspire your team to push into these personalized editorial playlists around 2018?
Yes, exactly. Algorithmic playlists, like Release Radar [and] Discover Weekly, we found that users liked them [and] that inspired us to then work with the product teams and ask, “What is the next step of this?” Spotify has more than 500 million users. We knew that it would keep growing and as a human curator, you can’t manually curate to that entire pool. Technology can fill in that gap and increase our possibilities. A lot of times, I see narratives where people call this a dichotomy — either playlists are human-made or machine-made. We don’t see it that way.
In 2024, personalization and machine learning are even more important technologies for streaming music and watching content. We’ve kept investing in cutting-edge personalization and it’s making a real impact — 81% of our listeners cite personalization as their favorite thing about Spotify. Our static editorial playlists are still very powerful, but we also have made these other listening experiences to round out the picture.
How someone listens is never one thing. Do you only want to watch movies? No, you want to watch a movie sometimes; other times you want to watch a 20-minute TV show. We have to understand the various ways that you might like to [listen].
Daylist, for example, is very ephemeral. It only exists for a certain amount of time. The appeal is in the title — it also really resonates for a younger audience.
Did your team always intend that Daylist, which often gives users crazy titles like “Whimsical Downtown Vibes Tuesday Evening,” could be shareable — even memeable — on social media?
Absolutely. It’s very shareable. It’s a bite-sized chunk of daily joy that you get that you can post about online.
It reminds me of the innately shareable nature of Spotify Wrapped.
There is a lineage there. It is similar because it’s a reminder of what you’re listening to. But it’s repackaged in a humorous way — light and fun and it updates so it keeps people coming back.
How do you think Spotify’s editorial team differentiates itself from competitors like Apple and Amazon?
Early on, we understood that editorial expertise around the world is really valuable, and it was needed to set us apart. So we have editors all around the world. They are really the music experts of the company. They are focused on understanding the music and the cultural scenes where they are.
We have what we call “editorial philosophy.” One of the tenets of that is our Global Curation Groups, or “GCGs” for short. Once a week, editors from around the world meet and identify tracks that are doing well and should flow from one market to another. We talk about music trends, artists we are excited about. We talk about new music mainly but also music that is resurfacing from social media trends.
This is how we got ahead on spreading genres like K-pop seven years ago. We were playlisting it and advocating for it spreading around the world. Musica Mexicana and Amapiano — we were early [with those] too. We predicted that streaming would reduce the barriers of entry in terms of language, so we see genres and artists coming from non-Western, non-English speaking countries really making an impact on the global music scene.
How was the AI DJ trained to give the commentary and context it gives?
We’ve essentially spun up a writers’ room. We have our editors work with our product team and script writers to add in context about the artists and tracks that the DJ can share with listeners. The info they feed in can be musical facts or culturally relevant insights. We want listeners to feel connected to the artists they hear on a human level. At the end of the day, this approach to programming also really helps us broaden out the pool of exposure, particularly for undiscovered artists and tracks. We’ve seen that people who hear the commentary from the DJ are more likely to listen to a song they would have otherwise skipped.
When Spotify editorial playlists started, the cool, young, influential audience was millennials. Now it’s Gen Z. What challenges did that generational shift pose?
We think about this every day in our work. Now, we’re even thinking about the next generation after Gen Z, Gen Alpha [children age 14 and younger]. I think the key difference is our move away from genre lines. Where we once had a strictly rock playlist, we are now building playlists like POV or My Life Is A Movie. It’s a lifestyle or an experience playlist. We also see that younger listeners like to experiment with lots of different listening experiences. We try to be very playful about our curation and offer those more ephemeral daily playlists.
What are you seeing with Gen Alpha so far? I’m sure many of them are still on their parents’ accounts, but do you have any insight into how they might see music differently than other generations as they mature?
Gaming. Gaming is really an important space for them. Music is part of the fabric of how we play games now — actually, that’s how these kids often discover and experience music, especially on Discord and in big MMOs, massively multiplayer online games. We think about this culture a lot because it is mainstream culture for someone of that age.
Gaming is so interesting because it is such a dynamic, controllable medium. Recorded music, however, is totally static. There have been a few startups, though, that are experimenting with music that can morph as you play the game.
Yeah, we’re working on making things playful. There’s a gamification in using Daylist, right? It’s a habit. You come back because you want to see what’s new. We see the AI DJ as another way to make music listening more interactive, less static.
Spotify has been known as a destination for music discovery for a long time. Now, listeners are increasingly turning to TikTok and social media for this. How do you make sure music discovery still continues within Spotify for its users?
That comes down to, again, the editorial expertise and the GCGs I mentioned before. We have 100-plus people whose job it is to be the most tapped-in people in terms of what’s happening around the world in their genre. That’s our biggest strength in terms of discovery because we have a large team of people focused on it. Technology just adds on to that human expertise.
Back when Spotify playlists first got popular, a lot of people compared the editors to the new generation of radio DJs. How do you feel about that comparison?
It’s not a one-to-one comparison. I can understand the logic of how some people might get there. But, if I’m very frank, the editorial job that we do is not about us. With radio DJs, it’s all about them, their personality. Our job is not about being a DJ or the front face of a show. Not to be disparaging to radio DJs — their role is important — it’s just not the same thing. I don’t think we are gatekeepers. I say that because it is never about me or us as editors. It’s about the music, the artist and the audience’s experience. It’s very simple: I want to introduce you to your next favorite song. Yes, we have influence in the industry. I recognize that, and I take it very seriously. That’s a privilege and a responsibility, but it is not about us at the end of the day.
A North Carolina musician has been indicted by federal prosecutors over allegations that he used AI to help create “hundreds of thousands” of songs and then used the AI tracks to earn more than $10 million in fraudulent streaming royalty payments since 2017.
In a newly unsealed indictment, Manhattan federal prosecutors charged the musician, Michael Smith, 52, with three counts of wire fraud, wire fraud conspiracy and money laundering conspiracy. According to the indictment, Smith was aided by the CEO of an unnamed AI music company as well as other co-conspirators in the U.S. and around the world, and some of the millions he was paid were funneled back to the AI music company.
According to the indictment, the hundreds of thousands of AI songs Smith allegedly helped create were available on music streaming platforms like Spotify, Amazon Music, Apple Music and YouTube Music. It also claims Smith has made “false and misleading” statements to the streaming platforms, as well as collection societies including the Mechanical Licensing Collective (the MLC) and distributors, to “promote and conceal” his alleged fraud.
Through his alleged activities, Smith diverted over $1 million in streaming payments per year that “ultimately should have been paid to the songwriters and artists whose works were streamed legitimately by real consumers,” says the indictment.
The indictment also details exactly how Smith allegedly pulled off the scheme he’s accused of. First, it says he gathered thousands of email accounts, often in the names of fictitious identities, to create thousands of so-called “bot accounts” on the streaming platforms. At its peak, Smith’s operation allegedly had “as many as 10,000 active bot accounts” running; he also allegedly hired a number of co-conspirators in the U.S. and abroad to do the data entry work of signing up those accounts. “Make up names and addresses,” reads an email from Smith to an alleged co-conspirator dated May 11, 2017, that was included in the indictment.
To maximize income, the indictment states that Smith often paid for “family plans” on streaming platforms “typically using proceeds generated by his fraudulent scheme” because they are the “most economical way to purchase multiple accounts on streaming services.”
Smith then used cloud computing services and other means to cause the accounts to “continuously stream songs that he owned” and make it look legitimate. The indictment alleges that Smith knew he was in the wrong and used a number of methods to “conceal his fraudulent scheme,” ranging from fictitious email names and VPNs to instructing his co-conspirators to be “undetectable” in their efforts.
In emails sent in late 2018 and obtained by the government, Smith told co-conspirators that racking up huge numbers of streams on the same songs would look suspicious. “We need to get a TON of songs fast to make this work around the anti fraud policies these guys are all using now,” Smith wrote in the emails.
Indeed, there have been a number of measures taken up by the music business to try to curb this kind of fraudulent streaming activity in recent years. Anti-streaming-fraud startup Beatdapp, for example, has become an industry leader, hired by a number of top distributors, streaming services and labels to identify and prevent fraud. Additionally, several independent DIY distributors, including TuneCore, DistroKid and CD Baby, have recently banded together to form “Music Fights Fraud,” a coalition that shares a database and other resources to prevent fraudsters from hopping from service to service to avoid detection.
Last year, Spotify and Deezer came out with revamped royalty systems that proposed new penalties for fraudulent activity. Still, it seems fraudsters study these new efforts and continue to evolve their efforts to evade detection.
The rise of quickly generated AI songs has been a major point of concern for streaming fraud experts because it allows bad actors to spread their false streaming activity over a larger number of songs and create more competition for streaming dollars. To date, AI songs are not paid out any differently from human-made songs on streaming platforms. A lawsuit filed by Sony Music, Warner Music Group and Universal Music Group against AI companies Suno and Udio in June summed up the industry’s fears well, warning that AI songs from these companies “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”
Though Smith is said to be a musician himself with a small catalog of his own, the indictment states that he leaned on AI music to quickly amass a much larger catalog.
The indictment alleges that around 2018, “Smith began working with the Chief Executive Officer of an [unnamed] AI music company and a music promoter to create thousands of songs that Smith could then fraudulently stream.” Within months, the CEO of the AI company was allegedly providing Smith with “thousands of songs each week.” Eventually, Smith entered a “Master Services Agreement” with the AI company that supplied Smith with 1,000-10,000 songs per month, agreeing that Smith would have “full ownership of the intellectual property rights in the songs.” In turn, Smith would provide the AI company with metadata and the “greater of $2,000 or 15% of the streaming revenue” he generated from the AI songs.
“Keep in mind what we’re doing musically here… this is not ‘music,’ it’s ‘instant music’ ;)” reads an email from the AI company’s CEO to Smith that was included in the indictment.
Over time, various players in the music business questioned Smith’s activities, including a streaming platform, a music distributor and the MLC. In March and April 2023, the MLC halted royalty payments to Smith and confronted him about his possible fraud. In response, Smith and his representatives “repeatedly lied” about the supposed fraud and AI-generated creations, says the indictment.
Christie M. Curtis, FBI acting assistant director, said of the indictment, “The defendant’s alleged scheme played upon the integrity of the music industry by a concerted attempt to circumvent the streaming platforms’ policies. The FBI remains dedicated to plucking out those who manipulate advanced technology to receive illicit profits and infringe on the genuine artistic talent of others.”
Kris Ahrend, CEO of the MLC, added, “Today’s DOJ indictment shines a light on the serious problem of streaming fraud for the music industry. As the DOJ recognized, The MLC identified and challenged the alleged misconduct, and withheld payment of the associated mechanical royalties, which further validates the importance of The MLC’s ongoing efforts to combat fraud and protect songwriters.”
These days, many in the music business are trying to harness the power of the “superfan” — the highly engaged segment of an artist’s audience that regularly shows up to concerts, buys t-shirts, orders physical albums and obsesses over the artist online. In the digital marketing space, that has meant agencies are increasingly turning their attention to fan pages, hoping to capture the attention of that top tier of listeners online.
“The TikTok influencer campaign has been front and center for marketing songs for a while,” says Ethan Curtis, founder of PushPlay, a digital marketing agency that has promoted songs like “Bad Habit” by Steve Lacy, “Golden Hour” by JVKE and “Glimpse of Us” by Joji. “But as it’s gotten more saturated and more expensive, we found there was interest in creating your own fan pages where you can have total control of the narrative.”
“Fan pages” made sneakily by artists’ teams may have become the digital campaign du jour in the last year or so, but the idea isn’t new. Even before TikTok took over music discovery, management and digital teams quietly used anonymous accounts to pose as fans on sites like Tumblr, Instagram and Twitter, sharing interviews, videos and other content around the artists because, as Curtis puts it, “It is a space you can own.”
Curtis is now taking that concept a step further with his innovative, albeit controversial, new company WtrCoolr, a spinoff of his digital firm that’s dedicated to creating “fan fiction” pages for artists. To put it simply, WtrCoolr is hired to create viral-worthy fake stories about their clients, which include Shaboozey and Young Nudy, among others. While Curtis says he is open to creating videos with all kinds of “imaginative” new narratives, he says he draws the line at any fan fiction that could be “negative” or “cause backlash” for the people featured in the videos.
The results speak for themselves. One popular WtrCoolr-made TikTok video that falsely claimed Dolly Parton is Shaboozey’s godmother has 1.1 million views and 121,500 likes to date. The video, posted to the digital agency’s fan account @ShaboozeysVault, was made by splicing together old interview clips of the artists along with some AI voiceovers, Curtis says.
“We are huge fans of pop culture, fan fiction and satire,” says Curtis. “We see it as creating our own version of a Marvel Universe but with pop stars.”
All of the TikTok accounts made by WtrCoolr note in their bios that their content is “fan fiction.” The videos on these pages also include “Easter eggs,” which Curtis says point to the fact that the videos are fabrications. But plenty of fans are still falling for it. Many viewers of the Parton video, for example, took it as gospel truth, posting comments like “how many god children does Dolly have and where can I sign up?” and “Dolly is an angel on Earth.”
In the future, Curtis thinks this novel form of “fan fiction” will be useful beyond just trying to engage fan bases online. He sees potential for the pages to serve as “a testing ground” for real-life decisions — like an artist choosing to collaborate with another — to see how the fan base would react. “Traditionally, you don’t get to look before you jump,” he says. “Maybe in the future we will.”
What was the first “fan fiction” post that took off for WtrCoolr?
It was the video of Shaq being a superfan to the rapper Young Nudy [10.4 million views, 1.7 million likes on TikTok]. We had been working on [promoting] the Young Nudy song, “Peaches & Eggplants,” mostly on the influencer side. We had dances and all sorts of different trends going. It was becoming a top rap song by that point and then we sold the client [Young Nudy’s team] on doing one of these fan pages where we just tested out a bunch of stuff. The first narrative video we tried was this video where we found some footage of Shaq — I think it was at Lollapalooza — where he was in the front of the crowd [for a different artist], vibing and head banging. It was a really funny visual. We just got clever with the editing and created the story that Shaq was showing up at every Young Nudy show, and then it went crazy viral.
It was really exciting to see. It brought fans to Nudy and also made existing Nudy fans super excited that Shaq was engaging. Then there was tons of goodwill for Shaq that came from it too. Lots of comments like “protect Shaq at all costs” or “Shaq’s a damn near perfect human being.” It was all around a positive experience. We put on our pages that this is a fan page and fan fiction. We don’t really push that it’s the truth. We’re just having fun and we let that be known.
There was some pickup after that video went viral. Weren’t there some rap blogs posting about the video and taking it as truth?
I don’t know if they were taking it as true necessarily. We didn’t really have any conversations with anyone, but it was definitely getting shared all around — whether it was because of that or just because it was such a funny video. Even Nudy reacted and thought it was funny. I think the label may have reached out to Shaq and invited him to a show, and he thought it was funny but was on the other side of the country that day and couldn’t make it.
I’m sure there’s some people who thought it was true, but a lot of the videos we’ll put Easter eggs at the end that make it obvious that it’s not true. Then in our bios we write that it is fan fiction.
Do you think that there’s anything bad that could come from fans and blogs believing these videos are real — only to realize later that it was fake?
I don’t know if anything is really bad. We don’t claim for it to be true, and we’re just having fun, weaving stories and basically saying, “Wouldn’t it be funny if?” or, “Wouldn’t it be heartwarming if?” I don’t think we’re really ever touching on stuff that’s of any importance, that could lead to any negative energy or backlash. We’re just trying to make fun stuff that fans enjoy. Just fun little moments. It’s no different from taking a video out of context and slapping meme headings on it.
Do you see this as the future of memes?
I do. I also think there’s a future where what we’re doing becomes sort of like a testing ground for real-life collabs or TV show concepts. I could see a label coming to us and asking us to test how a new post-beef collab between Drake and Kendrick would be received, for example. They could say, “Can you create a post about this and we can see if people turn on Kendrick for backtracking, or if fans will lose their shit over them coming together?” We could see if it’s a disaster or potentially the biggest release of their careers. Traditionally, you don’t get to look before you jump. Maybe in the future we will. But even now with the Shaq video, it basically proved that if Shaq went to an unexpected show and was raging in the front row people would love it. I mean, if it’s been so successful on socials, why wouldn’t it be so successful in real life?
It seemed like the Shaboozey and Dolly Parton video inserted Shaboozey’s name and other new phrases using an AI voice filter. Do you rely on AI in these videos a lot or is it primarily about careful editing?
The majority of it is just clever editing. Every now and then we may change a word up or something [using AI], but the majority of it is just collaging clips together.
How time intensive is it to create these videos?
The process has been changing. It used to be much more time intensive back before we realized that clever editing was more efficient. In the beginning, we would write scripts for the videos, run them through AI and then try to find clips to match the scripts and stuff like that. You have to match the edit up with the artist’s lips so it looks like lip synching. That’s just super time intensive. Then we started realizing that it’s easier to just define a basic objective, go out on the internet and see what we can find. We develop a story from there so that we only have to do a few fake [AI-assisted] words here and there, and then we’ll cut away from the video, show some footage from a music video or something like that. It makes it more efficient.
As far as you know, is WtrCoolr the first team in digital marketing that is trying to do these false-narrative, storytelling videos, or is this something that is seen all over the internet?
We were definitely the first to do it. There’s definitely people that are imitating it now. We see it generally in the content that exists online, especially on meme pages. It’s becoming part of the culture.
Do you run your ideas for fan fiction narratives by the artist before you do them?
We’re working with them, and we’re talking through ideas. There’s as much communication as they want. Some artists want to know what’s going on, but some artists just don’t care to be involved.
It seems like, so far, no one has had any issues with being used in the videos — they even see this positively — but are you concerned about the legal implications of using someone’s likeness to endorse an artist or idea that they haven’t really endorsed?
We’re not claiming it to be true. We include disclaimers that it’s just fan fiction. So, I think if we were claiming for it to be true then that’s a different story, but that’s not what we are doing.
That’s listed on all the page bios, but it isn’t listed on the actual video captions, right?
It’s listed on the profiles, and then a lot of videos we just do Easter eggs at the end that make it sort of apparent that it’s a joke.
I found the idea that you mentioned earlier to be interesting — the idea that you could test out collaborations or things without having to get the artist involved initially, whether it’s Drake and Kendrick collaborating or something else. It reminds me of when people tease a song before they slate it for official release. Do you feel that is a fair comparison?
Totally. What TikTok did for song teasing, this has done for situation teasing.
Universal Music Group (UMG) reached a strategic agreement with ProRata.ai, a new company that enables generative artificial intelligence platforms to fractionally attribute and compensate content owners. Bill Gross, chairman of technology incubator Idealab Studio — which launched ProRata — will serve as CEO. ProRata’s technology allows generative AI platforms to attribute and share revenues on a per-use basis with content owners while preventing “unreliable content from driving AI answers,” according to a press release. In addition, ProRata is building a consumer AI answer engine set to launch this fall that will feature the company’s attribution technology.
“Current AI answer engines rely on shoplifted, plagiarized content,” Gross, the inventor of the pay-per-click monetization model underlying internet search, said in a statement. “This creates an environment where creators get nothing, and disinformation thrives. ProRata is pro-author, pro-artist and pro-consumer. Our technology allows creators to get credited and compensated while consumers get attributed, accurate answers. This solution will lead to a broader movement across the entire AI industry.”
In his own statement, UMG chairman/CEO Lucian Grainge said, “We are encouraged to see new entrepreneurial innovation set into motion in the Generative AI space guided by objectives that align with our own vision of how this revolutionary technology can be used ethically and positively while rewarding human creativity. Having reached a strategic agreement to help shape their efforts in the music category, we look forward to exploring all the potential ways UMG can work with ProRata to further advance our common goals and values.”
Along with UMG, ProRata has struck early agreements with media publishers including the Financial Times, The Atlantic and Fortune.
In describing the technology, the release reads: “ProRata’s technology analyzes AI output, measures the value of contributing content and calculates proportional compensation. The company uses a proprietary algorithmic approach to score and determine attribution. This attribution method enables copyright holders to share in the upside of generative AI by being credited and compensated for their material on a per-use basis. Unlike music or video streaming, generative AI pay-per-use requires fractional attribution as responses are generated using multiple content sources.”
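ProRata has not published the details of its algorithm, but the general idea of per-use fractional attribution can be sketched in a few lines. The Python snippet below is purely a conceptual illustration, not ProRata's method: the source names, contribution scores and per-use revenue figure are hypothetical, and the sketch simply normalizes per-source scores for a single generated response into proportional revenue shares.

```python
# Conceptual sketch of per-use fractional attribution. This is NOT ProRata's
# proprietary method; source names, scores and the revenue figure are hypothetical.

def fractional_attribution(contribution_scores: dict[str, float],
                           revenue_per_use: float) -> dict[str, float]:
    """Split the revenue from one AI-generated response across content sources
    in proportion to their contribution scores."""
    total = sum(contribution_scores.values())
    if total == 0:
        return {source: 0.0 for source in contribution_scores}
    return {source: revenue_per_use * score / total
            for source, score in contribution_scores.items()}

# Example: three sources judged to have contributed to one generated response
shares = fractional_attribution(
    {"publisher_a": 0.6, "publisher_b": 0.3, "publisher_c": 0.1},
    revenue_per_use=0.02,  # assumed attributable revenue for this single response
)
print(shares)  # {'publisher_a': 0.012, 'publisher_b': 0.006, 'publisher_c': 0.002}
```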
ProRata is in “advanced discussions” with additional news publishers, authors, and media and entertainment companies. The company’s leadership team and board of directors include executives who have held senior roles at Microsoft, Google and Meta, as well as Michael Lang, the president of Lang Media Group and one of the founders of Hulu. Early investors include Revolution Ventures, Prime Movers Lab and Mayfield.
Immersive technology, media and entertainment company Cosm raised more than $250 million in funding to drive the growth of its “Shared Reality” venues — described in a press release as an “experience that seamlessly bridges the virtual and physical worlds by merging state-of-the-art visuals with the energy and excitement of the crowd and elevated food and beverage service.” The new funding round includes existing investors Steve Winn and Mirasol Capital and first-time investors Avenue Sports Fund led by Marc Lasry, Dan Gilbert‘s ROCK, Baillie Gifford, and David Blitzer‘s Bolt Ventures. Cosm will use the funds to scale, grow its technology and media business units, and speed up the development of more Cosm venues worldwide. The second Cosm venue is slated to open in Dallas later this year, with a third in Atlanta recently announced. “Cosm venues are a new paradigm in live sports, music, and artistic entertainment,” said Chris Evdaimon, investment manager at Baillie Gifford, in a statement. “The mesmerizing viewing experience guarantees the Cosm customer the best seats in the arena and the best viewing angle at any moment of the live event, at an affordable ticket price.”
HYBE Interactive Media (HYBE IM), the interactive media and games division of the storied K-pop company, raised $80 million in a round led by Makers Fund with participation from IMM Investment and parent company HYBE. The funds will be used to expand the company’s games publishing and development efforts, allowing HYBE IM to invest in more games, introduce them in global markets and bolster the division’s in-house development capabilities. HYBE IM’s previously released titles include Rhythm Hive and BTS Island: In the SEOM. It has also signed publishing contracts for Macovill’s OZ Re:write and Flint’s RPG Astra: Knights of Veda.
Believe acquired the remaining 40% of Doğan Music Company, Turkey’s largest independent record label, for 38.3 million euros ($41.84 million), four years after purchasing a 60% majority stake in the company in 2020. The transaction is pending approval by the competition regulator.
The U.K. office of Believe signed a global services deal with electronic music brand fabric. Under the agreement, fabric joins the client base of b:electronic, Believe’s electronic music imprint and part of the company’s label & artist solutions division. B:electronic will provide genre specialist label management, video and audience development, editorial and marketing partnerships internationally, and distribution for both catalog and new releases. Fabric’s labels include fabric Originals, fabric Records and Houndstooth, while a new imprint is slated to launch in the near future.
Beatchain partnered with Indian radio network Radio City India to launch Muzartdisco, a digital platform and app that will allow Indian artists to release and promote their music using Beatchain’s A&R tool and artist services platform. Through the platform, artists can also compete for opportunities including studio sessions; mentoring; collaborations with established artists, writers and producers; radio breakout campaigns, social media shoutouts and other opportunities courtesy of Radio City India; and more. Meanwhile, A&R teams using the platform will be able to find artists using a tailored filtering process that makes it easier to find talent that aligns with their mission and niche. According to a press release, Radio City India is the country’s leading radio network, boasting a listenership of more than 69 million across 39 cities.
Sports and entertainment collectibles company Panini America partnered with The Rolling Stones to produce the first fully licensed, career-spanning trading card set for the band. Titled Prizm The Rolling Stones, the set will chronicle the Stones’ 60-year recording and touring history, with additional collections to come.
AEG Presents partnered with Jacobs Entertainment — a developer, owner and operator of gaming and entertainment facilities — on Globe Iron, a new indoor 1,200-capacity venue in Cleveland that was once home to the Globe Iron Works Foundry built in 1853. AEG, which will operate and exclusively book the venue’s programming, already books and operates two other Cleveland venues: the Agora Theatre and the Jacobs Pavilion.
Indie record label The Programm, led by Peter “S.Y.” Pestano, struck a joint venture with LLC4/Capitol Records to break new artists, starting with Mexican-American rapper NHC Murda 60x. The joint venture will be steered by Orlando Wharton, executive vp at Capitol Music Group, president of Priority Records and CEO of LLC4. NHC Murda 60x and other Programm artists will have the potential to be upstreamed under the deal.
Independent entertainment company Unity 7 Entertainment announced a distribution partnership with Forecast Music Group (The Orchard/Sony), which will provide global distribution, marketing and promotional support for Unity 7’s artist roster. The partnership will kick off with the release of hip-hop artist Alantra’s debut single, “Get It,” which is set to drop on Sept. 5.
AI-powered, ethically-trained music generation company Soundful teamed with SoundCloud and Kaskade on an AI songwriting competition that will offer the winner a chance to perform alongside Kaskade and have their winning track completed and released by Kaskade as a featured artist.
Even before President Joe Biden announced that he was dropping out of the 2024 presidential race on July 21, extremely online millennials and Gen Zers had started posting memes on social media in support of Vice President Kamala Harris, who many hoped (and assumed) would take over for Biden after his disastrous debate performance in late June. And after Harris replaced him as the presumptive Democratic presidential nominee, it seemed the entire internet became completely coconut-pilled.
Along with traditional text- and image-based memes — which are nothing new — musical memes have also proliferated on short-form video sites like TikTok, Reels and Shorts, with users mashing up Harris quotes with popular songs using AI or more traditional methods of remixing. But these playful — or, in some cases, just plain strange — songs are more than just digital fun and games. The overwhelmingly pro-Harris memes are reaching millions of potential voters, and might help Harris mobilize the previously discouraged young voters she needs in order to win in November.
One audio, which has over 1.1 million likes on TikTok, pairs Harris’ memeable quote “do you think you just fell out of a coconut tree?” with the instrumental for “360” by Charli XCX. Another pitch-alters the same Harris quote over “The Star-Spangled Banner.” One anti-J.D. Vance audio pastes the Republican VP candidate saying “I’m a Never Trump Guy” over “Freek-a-Leek” by Petey Pablo. (After that clip went viral, the @KamalaHQ account also made its own video using the sound.)
There are also pro-Harris AI tracks, like one that replaces the lyrics to a Beyoncé song to make Queen Bey seemingly sing “you exist in the context of all in which you live,” another heavily memed Harris quote. A different AI track splices a Harris soundbite over DJ Johnrey’s viral track “Emergency Budots,” with an AI deepfake video of Harris and Pete Buttigieg dancing under a palm tree.
Beyond its political ramifications, this content also offers a glimpse into the future of music — one where we don’t just play our music, but where we play with it. In a sense, it’s the culmination of a trend that’s been brewing for decades. As music lovers have embraced sampling, remixing, the digital audio workstation, the Splice royalty-free sample library, Kanye West’s stem player and sped-up/slowed-down song edits, they’ve demonstrated a desire for more control over static recordings than traditional music consumption provides. And AI innovations can help to further facilitate this customizable listening experience.
Some music AI experts, including Suno CEO Mikey Shulman, are betting on a future where “anyone can make music” at the click of a button — and that everyone will want to. Often, I’ve heard folks who espouse this view of AI music compare it to photography, an art form that went from being something conducted by trained professionals in proper studio settings to a ubiquitous activity aided by smartphones.
These entrepreneurs aren’t totally misguided — it’s clear based on user interest in Suno and Udio that there is a place for songs that are completely new and individual. But right now, predictions about this technology’s role in the future of music consumption seem too bullish. Music fans still crave familiarity, community and repetition when listening to music. Research also shows that it takes multiple listens to form a bond with a new song — which is far more likely to happen with hit songs by artists you know and love than with individualized AI-generated tracks.
Instead, I think the average music listener will be way more interested in using AI to tweak their favorite hits. Listeners could use AI stem separation tools to create more bass-heavy mixes, for example, or some form of AI “timbre transfer” to make a song’s guitars sound more like a Les Paul than a Stratocaster (you could also go even further and change a guitar to be an entirely different instrument), or AI voice filters to change the lyrics of a song to include their best friend’s name.
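For the stem separation piece specifically, open-source tools already make this kind of tweaking possible. As one example (the article does not name a specific tool, so treat this as an illustrative sketch), the snippet below uses the open-source Spleeter library to split a track into stems, after which the bass stem could be boosted and the stems re-mixed.

```python
# Minimal sketch using the open-source Spleeter library (one example of an AI
# stem separation tool; not the only option). Requires: pip install spleeter

from spleeter.separator import Separator

# '4stems' splits a track into vocals, drums, bass and other
separator = Separator('spleeter:4stems')

# Writes vocals.wav, drums.wav, bass.wav and other.wav under output/song/
separator.separate_to_file('song.mp3', 'output/')

# From here, a listener could boost the bass stem and re-mix the stems
# (in a DAW or with an audio library) to get the more bass-heavy mix
# described above.
```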
Of course, there are still serious legal hurdles to customizing copyrighted sound recordings and songs if users share them publicly. Right now, any of the artists whose songs were used in these pro-Harris remixes could get them taken down upon request, citing copyright infringement. The NMPA has also expressed that it is willing to fight back against Spotify if it ever rolls out customizable song features on its platform. In a cease and desist letter, the NMPA warned the streaming service, saying, “We understand that Spotify wishes to offer a ‘remix’ feature…to ‘speed up, mash up, and otherwise edit’ their favorite songs to create derivative works. Spotify is on notice that release of any such feature without the proper licenses in place from our members may constitute additional direct infringement.”
So for now, edited songs will remain on social media platforms only, at least until they receive takedown requests. Still, consumer interest in music customization is only growing, and the popularity of pro-Harris campaign remixes serves as proof.
We are in a transformative era of music technology.
The music industry will continue to experience growth throughout the decade, with total music revenue reaching approximately $131 billion by 2030, according to Goldman Sachs. This lucrative business is built on streaming but also witnessing an unprecedented surge in innovation and entrepreneurship in artificial intelligence. With over 300,000 tech and media professionals laid off since the beginning of 2023, according to TechCrunch, a new wave of talent has been funneled into music tech startups. This influx, coupled with the dramatic decrease in cloud storage costs and the global rise of developer talent, has catalyzed the emergence of many startups (over 400 that we have tracked) dedicated to redefining the music business through AI.
These music tech startups are not just changing the way music is made and released; they are reshaping the very fabric of the industry. They’re well-funded, too. After raising over $4.8 billion in 2022, music tech startups and companies raised almost $10 billion in funding in 2023, according to Digital Music News, indicating that venture capitalists and investors are highly optimistic about the future growth of music technology.
As Matt Cartmell, CEO of Music Technology UK, said, “Our members want us to present the music tech sector as a highly investible proposition, educating investors about the opportunities that lie within. Music tech firms are also looking for innovative models of engagement with labels, DSPs and artists, as well as looking for our help to bring diverse talent into the industry, removing the barriers that continue to restrict individuals with passion and enthusiasm from a career in music technology.”
Riding this wave of investment, several startups have already made a splash in the music AI space. Below is an overview of a few of those companies and how they’re contributing to the industry’s rapid evolution.
Generative AI: The New Frontier
At the heart of this revolution is generative AI, a technology rapidly becoming indispensable for creators across the spectrum. From novices to professional artists and producers, AI-powered platforms offer unprecedented musical expression and innovation opportunities. It’s now possible for users without any formal musical training to craft songs in many genres, effectively democratizing music production. Music fans or content creators can utilize products that score their social content, while seasoned musicians can use these tools to enhance their creative workflows.
“I like to think of generative AI as a new wave of musical instruments,” says Dr. Maya Ackerman, founder of Wave AI, a company that has introduced tools to aid human songwriters. “The most useful AI tools for artists are those that the musicians can ‘play,’ meaning that the musician is in full control, using the AI to aid in their self-expression rather than be hindered.” These tools focus on generating vocal melodies, chords and lyrics, emphasizing collaboration with musicians rather than replacing them.
For non-professionals, one ambitious company, Beatoven.ai, is building a product to generate music in a host of different ways for many different use cases. “(Users) can get a piece of music generated and customize it as per their content without much knowledge of music,” says Siddharth Bhardwaj, co-founder/CTO of Beatoven. “Going forward, we are working on capturing their intent in the form of multimodal input (text, video, images and audio) and getting them as close to their requirements as possible.”
The concept of “artist as an avatar” has become increasingly popular, which draws inspiration from the gaming community. Companies like CreateSafe, the startup powering Grimes’ elf.tech, have built generative audio models that enable anyone to either license the voice of a well-known artist or replicate their own voice. This innovative approach also reflects the adaptive and forward-thinking nature of artists. Established artists like deadmau5, Richie Hawtin and Ólafur Arnalds have also delved into AI initiatives and investments. Furthermore, a few innovators are crafting AI music tools tailored for the gaming community, potentially paving the way for the fusion of music and gaming through real-time personalization and adaptive soundtracks during gameplay.
The Community and Collaboration Ecosystem
The journey of music creation is often fraught with challenges, including tedious workflows and a sense of isolation. Recognizing this, several startups are focusing on building communities around music creation and feedback. The Singapore-based music tech giant BandLab recently announced that it has reached a user base of 100 million, making it one of the biggest success stories in this arena. “Our strength lies in our comprehensive approach to our audience’s needs. From the moment of inspiration to distribution, our platform is designed to be a complete toolkit for music creators and their journey,” says founder Meng Ru Kuok. There are several startups pioneering spaces where creators can collaborate, share insights and support each other, heralding a new era of collective creativity.
A Toolkit for Every Aspect of Music Production
This landscape of music tech startups offers a comprehensive toolkit that caters to every facet of the music creation process:
Track and Stem Organization. Platforms like Audioshake simplify the management of tracks and stems, streamlining the production process.
Vocal & Instrument Addition. These technologies allow for the addition of any human voice or instrument sound to a recording environment, expanding the possibilities for frictionless creativity.
Sound Libraries. Services provide or generate extensive libraries of samples, beats and sounds, offering artists a rich palette.
Mix and Master. The process of mixing and mastering audio has historically relied heavily on human involvement. However, several startups are utilizing AI technology to automate these services for a more comprehensive audio production experience. Others also offer the ability to convert stereo songs to spatial audio.
Remixing and Freelance Musicianship. Many platforms now offer creative and innovative solutions for remixing music. Additionally, some platforms allow users to easily source and connect with talented artists, session musicians and other music professionals. Need an orchestra? There are tech platforms that can arrange and source one for you remotely.
The Future of Music Tech: A Vision of Inclusivity and Innovation
The barriers that once kept people from participating in music creation are falling away. Now, anyone with a passion for sound can create content, engage with fans, find a community and even monetize their work. This more accessible and collaborative music ecosystem offers an exciting glimpse into a future where anyone can participate in the art of creation. The explosion of creators, facilitated by these technologies, also suggests a new economic opportunity for the industry to service this growing creator class.
Drew Thurlow is the founder of Opening Ceremony Media where he advises music and music tech companies. Previously he was senior vp of A&R at Sony Music, and director of artists partnerships & industry relations at Pandora. His first book, about music & AI, will be released by Routledge in early 2026.
Rufy Anam Ghazi is a seasoned music business professional with over eight years of experience in product development, data analysis, research, business strategy, and partnerships. Known for her data-driven decision-making and innovative approach, she has successfully led product development, market analysis, and strategic growth initiatives, fostering strong industry relationships.
In March of 2023, as artificial intelligence barnstormed through the headlines, Goldman Sachs published a report on “the enormous economic potential of generative AI.” The writers explored the possibility of a “productivity boom,” comparable to those that followed seismic technological shifts like the mass adoption of personal computers.
Roughly 15 months later, Goldman Sachs published another paper on AI, this time with a sharply different tone. This one sported a blunt title — “Gen AI: Too Much Spend, Too Little Benefit?” — and it included harsh assessments from executives like Jim Covello, Goldman’s head of global equity research. “AI bulls seem to just trust that use cases will proliferate as the technology evolves,” Covello said. “But 18 months after the introduction of generative AI to the world, not one truly transformative — let alone cost-effective — application has been found.”
This skepticism has been echoed elsewhere. Daron Acemoglu, a prominent M.I.T. scholar, published a paper in May arguing that AI would lead to “much more modest productivity effects than most commentators and economists have claimed.” David Cahn, a partner at Sequoia Capital, warned in June that “we need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we’re all going to get rich quick.”
“I’m worried that we’re getting this hype cycle going by measuring aspiration and calling it adoption,” says Kristina McElheran, an assistant professor of strategic management at the University of Toronto who recently published a paper examining businesses’ attempts to implement AI technology. “Use is harder than aspiration.”
The music industry is no exception. A recent survey of music producers conducted by Tracklib, a company that supplies artists with pre-cleared samples, found that 75% of producers said they’re not using AI to make music. Among the 25% who were playing around with the technology, the most common use cases were to help with highly technical and definitely unsexy processes: stem separation (73.9%) and mastering (45.5%). (“Currently, AI has shown the most promise in making existing processes — like coding — more efficient,” Covello noted in Goldman’s report.) Another multi-country survey published in May by the Reuters Institute found that just 3% of people have used AI for making audio.
At the moment, people use AI products “to do their homework or write their emails,” says Hanna Kahlert, a cultural trends analyst at MIDiA Research, which recently conducted its own survey about AI technology adoption. “But they aren’t interested in it as a creative solution.”
When it comes to assessing AI’s impact — and the speed with which it would remake every facet of society — some recalibration was probably inevitable. “Around the launch of ChatGPT, there was so much excitement and promise, especially because this is a technology that we talk about in pop culture and see in our movies and our TV shows,” says Manav Raj, an assistant professor of management at the University of Pennsylvania’s Wharton School, who studies firms’ responses to technological change. “It was really easy to start thinking about how it could be really transformative.”
“Some of that excitement might have been a little frothy,” he continues. “Even if this is a really important and big technology, it takes time for us to see the effects of these kinds of technological changes in markets.” This was famously true with the development of computers — in 1987, the economist Robert Solow joked, “You can see the computer age everywhere but in the productivity statistics,” a phenomenon later dubbed “the productivity paradox.”
It also takes time to settle the legal and regulatory framework governing AI technologies, which will presumably influence the magnitude of their effects as well. Earlier this year, the major labels sued two genAI music platforms, Suno and Udio, accusing them of copyright infringement on a mass scale; in recently filed court documents, the companies said their activities were lawful under the doctrine of fair use, and that the major labels were just trying to eliminate “a threat to their market share.” Similar suits against AI companies have also been filed in other creative industries.
When McElheran surveyed manufacturing firms, however, few cited regulatory uncertainty as a barrier to AI use. She points out that “they may have had bigger fish to fry, like no use case.” A U.S. Census Bureau survey of businesses published in March found that 84.2% of respondents hadn’t used AI in the previous two weeks, and 80.9% of the firms that weren’t planning to implement AI in the next six months believe it “is not applicable to this business.”
Tracklib’s survey found something similar to McElheran’s. Only around 10% of respondents said concern about copyright was a reason they wouldn’t use AI tools. Instead, Tracklib’s results indicated that producers’ most common objections to using AI were moral, not legal — explanations like, “I want my art to be my own.”
“Generative AI comes up against this wall where it’s so easy, it’s just a push of a button,” Kahlert says. “It’s a fun gimmick, but there’s no real investment on the part of the user, so there’s not much value that they actually place in the outcome.”
In contrast, MIDiA’s survey found that respondents were interested in AI tech that can help them modify tracks by adjusting tempo — a popular TikTok alteration that can be done without AI — and customizing song lyrics. This interest was especially pronounced among younger music fans: Over a third of 20-to-24-year-olds were intrigued by AI tools that could help them play with tempo, and around 20% of that age group liked the idea of being able to personalize song lyrics.
Antony Demekhin, co-founder of the AI music company Tuney, sees a market for “creative tools” that enable “making, editing, or remixing beats and songs without using a complicated DAW, while still giving users a feeling of ownership over the output.”
“Up until recently,” he adds, “the addressable market for those kinds of tools has been small because the number of producers that use professional production software has been limited, so early-stage tech investors don’t frequently back stuff like that.”
Demekhin launched Tuney in 2020, well before the general public was thinking about products like ChatGPT. In the wake of that platform’s explosion, “Investors started throwing money around,” he recalls. At the same time, “nobody knew what questions to ask. What is this trained on? Are you exposed to legal risk? How easy would it be for Meta to replicate this and then make it available on Instagram?”
Today, investors are far better informed, and conversations with them sound very different, Demekhin says. “Cooler heads are prevailing,” he continues. “Now there’s going to be a whole wave of companies that make more sense because people have figured out where these technologies can be useful — and where they can’t.”
AI-focused music production, distribution and education platform LANDR has devised a new way for musicians to capitalize on the incoming AI age with consent and compensation in mind. With its new Fair Trade AI program, any musician who wishes to join can be part of this growing pool of songs that will be used to […]
Eminem’s latest album, The Death of Slim Shady (Coup de Grâce), serves as funeral rites for one of rap’s most popular and divisive characters.
Slim Shady, the foul-mouthed alter-ego of Detroit’s most famous MC, was first seen by the masses in the video for “My Name Is,” the debut single from Em’s major label debut, The Slim Shady LP. With a slightly eerie but bemused grin, Slim Shady told kids to “stick nine-inch nails through one of my eyelids.” From there, the character went on to help Eminem, né Marshall Bruce Mathers III, sell millions of albums with an ingenious mix of up-to-the-minute cultural commentary, razor-sharp wit, and a fondness for boundary pushing.
But as rap has grown up, Eminem has had to reckon with a changing listenership that increasingly views Slim’s trademark blue bars as inappropriate and offensive. So now, at 51 years old, Em has decided it’s the right time to say goodbye to his beloved alter ego for good. Earlier this year, his team took out an ad in the Detroit Free Press in the form of a fake obituary for Slim Shady that read in part, “His complex and tortured existence has come to a close, and the legacy he leaves behind is no closer to resolution than the manner in which this character departed this world.”
To properly say “peace,” Em decided to bring Slim Shady back not only in song but in video form, too. “Houdini,” the first single from The Death of Slim Shady (Coup de Grâce), is a throwback of sorts. Produced by Eminem and Luis Resto, it has the same feel as Em singles of old. With a playful beat (partially lifted from The Steve Miller Band’s 1982 smash “Abracadabra”) that sounds like the music for a demented carnival ride, “Houdini” could be “Without Me Part 2.” The refrain at the top of the song even drives the point home by quoting that 2002 single: “Shady’s back. Tell a friend.” But what puts it over the top is the comic-book-brought-to-life nature of the video, and the inclusion of a young Slim Shady starring alongside a modern-day Eminem.
But just how did Eminem manage to recreate a version of himself from 20 years ago? With the help of AI and Metaphysic. Founded in 2021, Metaphysic offers a suite of tools that allows artists to create and manage digital versions of themselves, which they can then manipulate and use for their own projects or license out to third parties for movies, TV shows or other commercial projects.
Metaphysic Pro, its premier offering, allows creatives and artists to, as the website says, create a “portfolio of high-quality data assets used to create your AI, voice, and performance.” So, if you were a platinum-selling rapper who wanted to protect your image and likeness against the pending artificial intelligence onslaught, you could register with Metaphysic to build a database of your face, voice and performance videos from any point in your career. Metaphysic will then help you paper licensing deals so you remain in control of your AI self.
Right now, the law is trailing behind the state of the art, so there is little to stop companies and rogue actors from exploiting celebrities’ images and likenesses. But if some third party decides to just create a digital version of you without your approval, Metaphysic will alert you to any instances it finds on social networks or video platforms. At a time when actors, musicians and other creatives are increasingly terrified about unauthorized use of their face or voice, Metaphysic works to provide some sort of protection and control.
“We’re here to help people protect themselves and at least understand what’s going on,” says Ed Ulbrich, chief content officer and EVP of production at Metaphysic. He’s running to catch a flight but still manages to exuberantly extol the virtues of Metaphysic and AI. “It is not unreasonable to believe that people should own their own likeness. They should own their own biometric data. They should have access to their AI self. They should be able to control it. And if you are an individual that is in command, we don’t own that. We maintain it for them, but it’s up to them if they want to license it to someone.”
On paper, Ulbrich is the last person you would expect to tout the benefits of the AI revolution. He went to art school and trained traditionally as a painter, but when he got his first glimpse of CGI, he knew everything was going to change. After a short stint in advertising, he saw James Cameron’s 1991 blockbuster Terminator 2: Judgment Day and realized what he wanted to do with the rest of his life. So, after packing up and moving to L.A., Ulbrich managed to get a job with his filmmaking hero, working at Cameron’s Digital Domain VFX shop. After rising to the role of CEO, he left to lead Deluxe Entertainment’s VFX and virtual reality teams.
In the more than 30 years he’s worked in VFX, Ulbrich has contributed to some of Hollywood’s biggest movies, including Titanic, The Curious Case of Benjamin Button, Black Panther and, most recently, Top Gun: Maverick. By all measures, he’s had a hell of a career, one that he believes may no longer be accessible to young artists.
Ed Ulbrich
Courtesy of Metaphysic
“I watched what I love doing back in the ’90s and the early 2000s become factory work,” he says. “The movie business expanded so much. No longer were we sitting with filmmakers in the theater [with] laser pens looking at shots and getting notes and helping craft the movie together with the directors. It became a global business. It became [a business where you had] to have factories all around the planet to get government rebates that get passed back to the studios. I found myself running manufacturing facilities. I never set out to do factory work. I went to art school.”
He believes the tools Metaphysic is building will bring about a “whole renaissance of creativity.” That brings us back to “Houdini.” The video for the lead single from Eminem’s 12th album was made possible by Metaphysic’s Live product, which allows for, among other things, real-time photoreal face swaps driven by live actor performances. The tool allowed Eminem to look 20 years younger without much time (or, according to Ulbrich, money).
Here’s how it works: The Metaphysic team first scopes the project, determining who is going to be de-aged or have their face swapped. They then gather all the assets they need to build their models (old photos, videos, audio samples and the like). It takes a little under two months for Metaphysic to train its AI model on the collected assets. The team tests the model to make sure it looks accurate and works properly before setting up its production equipment on location. Then it’s showtime.
“If you would have asked me if we could produce a video like this in the amount of time we had with the budget we had two years ago, I would have laughed,” he says.
Resources aside, the most impressive part of the video is just how lifelike and real the younger Slim Shady looks. Everything from his facial features to his movements could be mistaken for a real person. Ulbrich says that’s because it is: “People said, ‘You created Slim Shady!’ We did not create Slim Shady. Let’s be very clear … the real Slim Shady played the real Slim Shady. We just helped by giving him something that makeup couldn’t pull off. We let him play himself but through the interpretation of CGI or any other technology. It’s him playing him.”
To help Em channel his younger self, Metaphysic provided another of its products, the AI Mirror. Built around a huge 85-inch LED monitor, the AI Mirror pairs a camera with a soft light so that people can walk up and see live AI projected onto their face. So before the cameras started rolling, Em could see exactly what a young Slim Shady would look like and how he would move, helping him get into character. “You can see yourself 30-40 years ago looking right back at you,” Ulbrich says. “It’s pretty magical to actually go through that.”
Abracadabra, indeed.