
A new song purportedly created with AI that mashes up Bad Bunny and Rihanna was uploaded on Tuesday (April 25), apparently by the same mystery maker behind the controversial AI Drake/Weeknd tune “Heart On My Sleeve” that led to immediate takedown notices earlier this month.

Uploaded directly to SoundCloud, YouTube and TikTok (but not Spotify, Apple Music or other major streaming services) under a different handle, Ghostwrider777, the 1:07 song, “Por Qué,” features a spot-on yearning Bad Bunny vocal over a shuffling reggaeton beat along with RihRih singing “Baby you shine so bright/ Like diamonds in the sky/ I feel alive/ But now you got me thinkin.’”

The message accompanying the clip reads, “they tried to shut us down,” and the video running underneath the song features a character in a white bucket hat, green ski goggles, winter gloves and a white sheet covering their face and upper body. As the song plays out, the on-screen graphics read: “i used AI to make a Bad Bunny song ft. Rihanna… this video will be removed in 24 hours… they tried to quiet my brother, but we will prevail.”

Representatives for Bad Bunny and Rihanna did not immediately respond to Billboard’s requests for comment.

The Drake/Weeknd fake was quickly pulled from most streaming platforms after the artists’ label, Universal Music Group, condemned it in a statement on April 17 that said the track was “infringing content created with generative AI.” That song was posted by the anonymous TikTok user Ghostwriter977 and credited to “Ghostwriter” on streaming platforms; at press time it was not clear whether both songs were created by the same person. Before it was pulled, “Heart On My Sleeve” racked up millions of views and listens on Spotify, TikTok and YouTube, with the creator saying “This is just the beginning” in a comment beneath the YouTube clip.

The short-lived success of the song amplified a growing worry in the music industry over the potential impact of AI, even as a newly label-free Grimes mused this week about “killing copyright” by offering to split royalties 50/50 with fans who create a successful AI-generated song using her voice.


For the last week, the most talked-about song in the music business has been “Heart on My Sleeve,” the track said to have been created by using artificial intelligence to imitate vocals from Drake and The Weeknd and uploaded to TikTok by the user Ghostwriter977. And while most reactions were impressed, there was a big difference between those of fans (“This isn’t bad, which is pretty cool!”) and executives (“This isn’t bad, which is really scary!”). As with much online technology, however, what’s truly remarkable, and frightening, isn’t the quality – it’s the potential quantity.

This particular track didn’t do much damage. Streaming services pulled it down after receiving a request from Universal Music Group, for which both Drake and The Weeknd record. YouTube says the track was removed because of a copyright claim, and “Heart on My Sleeve” contains at least one obvious infringement in the form of a Metro Boomin producer tag. But it’s not as clear as creators and rightsholders might like that imitating Drake’s voice qualifies as copyright infringement.

In a statement released around the time the track was taken down, Universal said that “the training of generative AI using our artists’ music” violated copyright. But it’s a bit more complicated than that. Whether that’s true in the U.S. depends on whether training AI with music qualifies as fair use – which will not be clear until courts rule on the matter. Whether it’s true in other countries depends on local statutory exceptions for text and data mining, which vary from country to country. Either way, though, purposefully imitating Drake’s voice would almost certainly violate what an American lawyer might call his right of publicity and a fan would more likely call his artistic identity. There are precedents for this: A court held that Frito-Lay violated the rights of Tom Waits by imitating his voice for a commercial, and Bette Midler won a similar lawsuit against Ford. Both of those cases involved an implied endorsement – the suggestion of approval where none existed.

The violation of an AI imitation is far more fundamental, though. The essence of Drake’s art – the essence of his Drakeness, if you will – is his voice. (That voice isn’t great by any technical definition, any more than Tom Waits’ is, but it’s a fundamental part of his creativity, even his very identity.) Imitating that is fair enough when it comes to parody – this video of takes on Bob Dylan‘s vocal style seems like it should be fair game because it’s commenting on Dylan instead of ripping him off – but creating a counterfeit Drake might be even more of a moral violation than a commercial one. Bad imitators may be tacky, but people tend to regard very accurate ones as spooky. “Heart on My Sleeve” isn’t Drake Lite so much as an early attempt at Drakenstein – interesting to look at, but fundamentally alarming in the way it imitates humanity. (Myths and stories return to this theme all the time, and it’s hard to think of many with happy endings.) Universal executives know that – they have talked internally about the coming challenges of AI for years – which is why the company’s comment asked stakeholders “which side of history” they want to be on.

This track is just the sign of a coming storm. The history of technology is filled with debates about when new forms of media will surpass old ones in terms of quality, when what often matters much more is how cheap and easy they are. No one thinks movies look better on a phone screen than in a theater, but the device is right there in your hand. Travel agents might be better at booking flights than Expedia, but – well, the fact that there aren’t that many of them anymore makes my point. Here, the issue isn’t whether AI can make a Drake track better than Drake – which is actually impossible by definition, because a Drake track without Drake isn’t really a Drake track at all – but rather how much more productive AI can be than human artists, and what happens once it starts operating at scale.

Imagine the most prolific artist you can think of – say, an insomniac YoungBoy Never Broke Again crossed with King Gizzard & the Lizard Wizard. Then imagine that this hypothetical artist never needs to eat or sleep or do anything else that interferes with work. Then imagine that he – or, really, it – never varies from a proven commercial formula. Now clone that artist thousands of times. Because that’s the real threat of AI to the music business – not the quality that could arrive someday but the quantity that’s coming sooner than we can imagine.

It has been said that 100,000 tracks get uploaded to streaming services every day. What happens once algorithms can make pop music at scale? Will that turn into a million tracks a day? Or 100 million? Could music recorded by humans become the exception instead of the rule? In the immediate future, most of this music wouldn’t be very interesting – but the sheer scale and inevitable variety could cut into the revenue collected by creators and rightsholders. The music business doesn’t need an umbrella – it needs a flood barrier.

In the long run, that barrier should be legal – some combination of copyright, personality rights and unfair competition law. That will take time to build, though. For now, streaming services need to continue to work with creators and rightsholders to make clear the difference between artists and their artificial imitators.

Fans who want to hear Drake shouldn’t have to guess which songs are really his.

For the Record is a regular column from deputy editorial director Robert Levine analyzing news and trends in the music industry.

Spotify CEO Daniel Ek said Tuesday (April 25) that, contrary to the widespread backlash artificial intelligence (AI) tools have faced, he’s optimistic the technology could actually be a good thing for musicians and for Spotify.

Speaking on a Spotify conference call and podcast, Ek acknowledged the copyright infringement concerns presented by songs like the AI-generated Drake fake “Heart on My Sleeve” — which racked up 600,000 streams on Spotify before the platform took it down — but said AI tools could ease the learning curve for first-time music creators and spark a new era of artistic expression.

“On the positive side, this could be potentially huge for creativity,” Ek said on a conference call discussing the company’s first-quarter earnings. “That should lead to more music [which] we think is great culturally, but it also benefits Spotify because the more creators we have on our service the better it is and the more opportunity we have to grow engagement and revenue.”

Ek’s entrepreneurial confidence that AI can be an industry boon in certain instances stands in contrast to a steady campaign of condemnation for generative machine learning tools coming from Universal Music Group, the National Music Publishers’ Association (NMPA) and others.

At the same time, companies including Spotify, Warner Music Group, HYBE, ByteDance, SoundCloud and a host of start-ups have leaned into the potential of AI, investing in or partnering with machine-learning companies.

The industry is still sorting out the ways in which AI can be used and attempting to delineate between AI tools that are helpful and those that are potentially harmful. The use case causing the most consternation employs machine learning to identify the patterns and characteristics that make songs irresistible and reproduce them in new creations.

Functional music — i.e., sounds designed to promote sleep, studying or relaxation — has become a fertile genre for AI, and playlists featuring AI-enhanced or AI-generated music have racked up millions of followers on Spotify and other streaming services. This has led to concerns among some record executives who have noted that functional music eats into major-label market share.

For Spotify’s part, in February the platform launched an “AI DJ,” which uses AI technology to gin up song recommendations for premium subscribers based on their listening history, with commentary delivered by an AI voice platform.

“I’m very familiar with the scary parts … the complete generative stuff or even the so-called deep fakes that pretend to be someone they’re not,” Ek said on Tuesday’s episode of Spotify’s For the Record podcast. “I choose to look at the glass as more half-full than half-empty. I think if it’s done right, these AIs will be incorporated into almost every product suite to enable creativity to be available to many more people around the world.”

Grimes loves to push the envelope. But after telling her fans that she’s down with “open sourcing all art and killing copyright” in a series of tweets on Sunday night, and offering to split royalties 50/50 on any successful AI-generated song that uses her voice, the no-rules singer realized she might need some guardrails after all.
“Ok hate this part but we may do copyright takedowns ONLY for rly rly toxic lyrics w Grimes voice,” she tweeted on Monday afternoon (April 24). “imo you’d rly have to push it for me to wanna take smthn down but I guess plz don’t be *the worst*. as in, try not to exit the current Overton window of lyrical content w regards to sex/violence. Like no baby murder songs plz.”

The mother of two with ex Elon Musk then went further, openly debating with herself whether issuing takedown notices after making the open call for facsimile Grimes songs with no limits would make her a hypocrite. “I think I’m Streisand effecting this now but I don’t wanna have to issue a takedown and be a hypocrite later,” she said, referring to the Streisand effect, in which an attempt to censor or hide a piece of information only serves to further shine a spotlight on it.

“***That’s the only rule. Rly don’t like to do a rule but don’t wanna be responsible for a Nazi anthem unless it’s somehow in jest a la producers I guess,” she added in a nod to the 1967 Mel Brooks satirical comedy The Producers, about the staging of a Nazi musical. (Grimes admitted in a later tweet that she has never seen The Producers and that the plan was to wing it and “send takedown notices to scary stuff,” before adding that she’s not even sure her team is capable of sending takedown notices.)

“wud prefer avoiding political stuff but If it’s a small meme with ur friends we prob won’t penalize that. Probably just if smthn is viral and anti abortion or smthn like that,” she said, reiterating that she really doesn’t like adding rules after the fact and apologizing and saying “but this is the only thing.”

When a commenter said it sounded like Grimes was definitely “streisand-ing this situation,” she responded, “Yes but I gotta say it. And if it’s a meme to make awful grimes songs it’ll prob be a week of hard work for us but not a boring outcome. I imagine the DAN// Sidney Bing going murderous equivalent will have to happen with vocal deepfakes and I’m entertained if that happens to us.”

Another commenter noted that the potential for offensive or gross posts “should’ve been their [Grimes’ team’s] first thought,” which the singer said it actually was. “I just didn’t think the original post abt ai wud be a thing, like it was sort of a casual post so my poor team is just catching up with now having to organize all this,” she said.

In a tweet referencing this weekend’s chaotic removal of legacy blue checkmarks from non-paying users on Musk’s Twitter — a move later reversed in part for some well-known users who adamantly refused to pay — a user joked that Grimes had “learned from Elon! Twitter announcement first, then let’s figure out the details after.”

Grimes turned what could have been a diss into a positive, noting, “In my defense this has always been a Grimes feature too.”

The back-and-forth continued when a user said even with takedowns Grimes could still end up in an “uncomfortable situation” where an offensive song could still be out in the world “misleading people until the end of time,” as things tend to do on the internet.

Her response to that one was classic Grimes: “We expect a certain amount of chaos,” she said. “Grimes is an art project, not a music project. The ultimate goal has always been to push boundaries rather than have a nice song. The point is to poke holes in the simulation and see what happens even if it’s a bad outcome for us.”

Most importantly, fans wanted to know when the software would be available for other artists to try out with their voices. The good news, according to Grimes, is that it’s already out there and that she’s busy collecting resources. In even better news, she also told her followers that she has “lots of real Grimes songs ready to go too.”

Fans have been eagerly awaiting any news about Grimes’ next album, the as-yet-unscheduled BOOK 1, after she recently said that “music is my side quest now. Tbh reduced pressure x increased freedom = prob more music just ideally ‘Low key I’ll always do my best to entertain whilst depleting my literal reputation I hope that’s ok I love y’all.”

The musician’s most recent album was 2020’s Miss Anthropocene, which included the singles “Violence,” “So Heavy I Fell Through the Earth,” “My Name Is Dark” and “Delete Forever.” Since then, she’s also released one-off songs including 2021’s “Player of Games” and last year’s “Shinigami Eyes.”

Grimes’ AI tease came a week after a fake song featuring A.I.-generated vocals from Drake and The Weeknd, “Heart on My Sleeve,” was pulled from streaming services after going viral.

“I’ll split 50% royalties on any successful AI generated song that uses my voice,” she promised while announcing the AI project, a stance that was in stark opposition to Universal Music Group, which acted quickly to condemn the “infringing content created with generative AI” that produced the phony superstar duet.

Check out Grimes’ tweets below.

Ok hate this part but we may do copyright takedowns ONLY for rly rly toxic lyrics w grimes voice: imo you’d rly have to push it for me to wanna take smthn down but I guess plz don’t be *the worst*. as in, try not to exit the current Overton window of lyrical content w regards to…— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

Yes but I gotta say it. And if it’s a meme to make awful grimes songs it’ll prob be a week of hard work for us but not a boring outcome. I imagine the DAN// Sidney Bing going murderous equivalent will have to happen with vocal deepfakes and I’m entertained if that happens to us— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

This was their first thought haha – I just didn’t think the original post abt ai wud be a thing, like it was sort of a casual post so my poor team is just catching up with now having to organize all this— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

In my defense this has always been a grimes feature too— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

Oh, I never saw the producers – I guess we just play it by ear and send takedown notices to scary stuff. I’m not sure we can even send takedown notices tbh. Like curious what the actual legality is, i think I chose not to copyright my name and likeness back when that was a…— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

We expect a certain amount of chaos. grimes is an art project, not a music project. The ultimate goal has always been to push boundaries rather than have a nice song. The point is to poke holes in the simulation and see what happens even if it’s a bad outcome for us https://t.co/RSAW4xQCAi— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

While most of the music industry is scrambling to figure out how to combat fake A.I.-generated vocals, Grimes is running in the other direction. “I think it’s cool to be fused w a machine and I like the idea of open sourcing all art and killing copyright,” the genre-pushing singer tweeted on Sunday night (April 23).
The series of tweets found Grimes doubling- and tripling-down on her quest to blur the lines between humans and machines and reconfigure the traditional copyright guardrails that have been in place for more than half a century in the music industry.

In a follow-up tweet Grimes linked to a recent story about how a fake song featuring A.I.-generated vocals from Drake and The Weeknd, “Heart on My Sleeve,” had been pulled from streaming services after going viral. Rather than demanding takedowns, Grimes said she’s willing to go halfsies with her fans if they create something worthy with her vocals.

“I’ll split 50% royalties on any successful AI generated song that uses my voice,” she promised of her stance, which is in stark opposition to that of Universal Music Group, which acted quickly to condemn the “infringing content created with generative AI” that produced the phony superstar duet.

“Same deal as I would with any artist i collab with,” she continued. “Feel free to use my voice without penalty. I have no label and no legal bindings.” In addition, she said she feels like we “shouldn’t force approvals — but rather work out publishing with stuff that’s super popular. That seems most efficient? We cud use elf tech for it tho – but I think we’ll notice if a grimes song goes viral.”

Furthermore, apparently free of label contract constraints, Grimes said she and her team are working on a program that should simulate her voice pretty convincingly, but that they could also upload stems and samples for people to train their own A.I. vocal generators. As for a timeline for the Grimes A.I., the singer said her crew was “p far along last I checked. I sorta just spur of the moment decided to do this lol but we were making a sim of my voice for our own plans and they were almost done.” She also was open to taking suggestions and tips on technology from her followers, as evidenced by a series of back-and-forth tweets with supportive AI enthusiasts.

Fans have been eagerly awaiting an update on the status of Grimes’ next album, the as-yet-unscheduled BOOK 1; the singer recently alluded to some unspecified personal and professional hang-ups before revealing that “music is my side quest now. Tbh reduced pressure x increased freedom = prob more music just ideally ‘Low key I’ll always do my best to entertain whilst depleting my literal reputation I hope that’s ok I love y’all.”

The musician’s most recent album was 2020’s Miss Anthropocene, which included the singles “Violence,” “So Heavy I Fell Through the Earth,” “My Name Is Dark” and “Delete Forever.” Since then, she’s also released one-off songs including 2021’s “Player of Games” and last year’s “Shinigami Eyes.”

Check out Grimes’ tweets below.

I think it’s cool to be fused w a machine and I like the idea of open sourcing all art and killing copyright— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

I feel like we shouldn’t force approvals – but rather work out publishing with stuff that’s super popular. That seems most efficient? We cud use elf tech for it tho – but I think we’ll notice if a grimes song goes viral— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

We’re making a program that should simulate my voice well but we could also upload stems and samples for ppl to train their own— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

My team is asleep but I’ll see what’s up tomorrow- we were p far along last I checked. I sorta just spur of the moment decided to do this lol but we were making a sim of my voice for our own plans and they were almost done— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

When Universal Music Group emailed Spotify, Apple Music and other streaming services in March asking them to stop artificial-intelligence companies from using its labels’ recordings to train their machine-learning software, it fired the first howitzer shell of what’s shaping up to be the next conflict between creators and computers. As Warner Music Group, HYBE, ByteDance, Spotify and other industry giants invest in AI development, along with a plethora of small startups, artists and songwriters are clamoring for protection against developers that use music created by professionals to train AI algorithms. Developers, meanwhile, are looking for safe havens where they can continue their work unfettered by government interference.

To someday generate music that rivals the work of human creators, AI models use machine learning to identify patterns in, and mimic, the characteristics that make a song irresistible, like the sticky verse-chorus structure of pop, the 808 drums that define the rhythm of hip-hop or the meteoric drop that defines electronic dance music. These are distinctions human musicians have to learn during their lives, either through osmosis or music education.

Machine learning is exponentially faster, though; it’s usually achieved by feeding millions, even billions of so-called “inputs” into an AI model to build its musical vocabulary. Because of the sheer scale of data needed to train current systems, that data almost always includes the work of professionals — and, to many copyright owners’ dismay, almost no one asks their permission to use it.
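To make that idea concrete, here is a deliberately tiny, hypothetical sketch in Python — a first-order Markov model over note names, nothing like the large-scale generative systems described in this story — showing how patterns are extracted from a handful of example “inputs” and then reproduced in a new sequence:

```python
import random
from collections import defaultdict

# Toy "training data": note sequences standing in for the article's "inputs".
# Real systems ingest millions of recordings; this is only an illustration.
inputs = [
    ["C", "E", "G", "C", "E", "G", "A", "G"],
    ["C", "E", "G", "A", "G", "E", "C"],
]

# "Training": count which note tends to follow which (the learned patterns).
transitions = defaultdict(list)
for seq in inputs:
    for current_note, next_note in zip(seq, seq[1:]):
        transitions[current_note].append(next_note)

# "Generation": reproduce those learned patterns in a new sequence.
def generate(start: str = "C", length: int = 8) -> list[str]:
    note, output = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, ["C"]))
        output.append(note)
    return output

print(generate())
```

Commercial systems replace these short note lists with enormous audio or symbolic-music corpora and the simple counting step with deep neural networks, but the train-then-generate loop is the same in outline.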

Countries around the world have various ways of regulating what’s allowed when it comes to what’s called the text and data mining of copyrighted material for AI training. And some territories are concluding that fewer rules will lead to more business.

China, Israel, Japan, South Korea and Singapore are among the countries that have largely positioned themselves as safe havens for AI companies in terms of industry-friendly regulation. In January, Israel’s Ministry of Justice defined its stance on the issue, saying that “lifting the copyright uncertainties that surround this issue [of training AI generators] can spur innovation and maximize the competitiveness of Israeli-based enterprises in both [machine-learning] and content creation.”

Singapore also “certainly strives to be a hub for AI,” says Bryan Tan, attorney and partner at Reed Smith, which has an office there. “It’s one of the most permissive places. But having said that, I think the world changes very quickly,” Tan says. He adds that even in countries where exceptions in copyright for text and data mining are established, there is a chance that developments in the fast-evolving AI sector could lead to change.

In the United States, Amir Ghavi, a partner at Fried Frank who is representing open-source text-to-image developer Stability AI in a number of upcoming landmark cases, says that though the country has a “strong tradition of fair use … this is all playing out in real time,” with decisions in upcoming cases like his setting significant precedents for AI and copyright law.

Many rights owners, including musicians like Helienne Lindevall, president of the European Composers and Songwriters Alliance, are hoping to establish consent as a basic practice. But, she asks, “How do you know when AI has used your work?”

AI companies tend to keep their training process secret, but Mat Dryhurst, a musician, podcast host and co-founder of music technology company Spawning, says many rely on just a few data sets, such as LAION-5B (as in 5 billion data points) and Common Crawl, a web-scraping tool used by Google. To help establish a compromise between copyright owners and AI developers, Spawning has created a website called HaveIBeenTrained.com, which helps creators determine whether their work is found in these common data sets and, free of charge, opt out of being used as fodder for training.

These requests are not backed by law, although Dryhurst says, “We think it’s in every AI organization’s best interest to respect our active opt-outs. One, because this is the right thing to do, and two, because the legality of this varies territory to territory. This is safer legally for AI companies, and we don’t charge them to partner with us. We do the work for them.”

The concept of opting out was first popularized by the European Union’s Copyright Directive, passed in 2019. Though Sophie Goossens, a partner at Reed Smith who works in Paris and London on entertainment, media and technology law, says the definition of “opt out” was initially vague, its inclusion makes the EU one of the strictest jurisdictions when it comes to AI training.

There is a fear, however, that passing strict AI copyright regulations could result in a country missing the opportunity to establish itself as a next-generation Silicon Valley and reap the economic benefits that would follow. Russian President Vladimir Putin believes the stakes are even higher. In 2017, he stated that the nation that leads in AI “will be the ruler of the world.” The United Kingdom’s Intellectual Property Office seemed to be moving in that direction when it published a statement last summer recommending that text and data mining be exempt from opt-outs in hopes of becoming Europe’s haven for AI. In February, however, the British government put the brakes on the IPO’s proposal, leaving its future uncertain.

Lindevall and others in the music industry say they are fighting for even better standards. “We don’t want to opt out, we want to opt in,” she says. “Then we want a clear structure for remuneration.”

The lion’s share of U.S.-based music and entertainment organizations — more than 40, including ASCAP, BMI, the RIAA, SESAC and the National Music Publishers’ Association — are in agreement and recently launched the Human Artistry Campaign, which established seven principles advocating best practices for AI intended to protect creators’ copyrights. No. 4: “Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”

Today, the idea that rights holders could one day license works for machine learning still seems far off. Among the potential solutions for remuneration are blanket licenses, something like the blank-tape levies used in parts of Europe. But given the patchwork of international law on this subject, and the complexities of tracking down and paying rights holders, some feel these fixes are not viable.

Dryhurst says he and the Spawning team are working on a concrete solution: an “opt in” tool. Stability AI has signed on as its first partner for this innovation, and Dryhurst says the newest version of its text-to-image AI software, Stable Diffusion 3, will not include any of the 78 million artworks that opted out prior to this advancement. “This is a win,” he says. “I am really hopeful others will follow suit.”

Over the weekend, a track called “Heart on My Sleeve,” allegedly created with artificial intelligence to sound like it was by Drake and The Weeknd, became the hottest thing in music. By Monday evening, it was all but gone after most streaming platforms pulled it. But in that short time online, it earned thousands of dollars.
“Fake Drake” has a nice ring to it, but the music industry was less than charmed by the fact that a TikToker with just 131,000 followers (as of Tuesday evening) operating under the name Ghostwriter could rack up millions of streams with such a track in only a few days. Even though the legal issues around these kinds of AI-generated soundalikes are still murky, streaming services quickly pulled the track, largely without explanation. Universal Music Group, which reps both Drake and The Weeknd, issued a statement Monday in response, claiming these kinds of songs violate both copyright law and its agreements with the streaming services and “demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.” While a spokesperson would not say whether the company had sent formal takedown requests over the song, a rep for YouTube said on Tuesday that the platform “removed the video in question after receiving a valid takedown notice,” noting that the track was pulled because it used a copyrighted music sample. As of Wednesday, the song had also been removed from TikTok.

What sets “Heart on My Sleeve” apart from other AI-generated deepfakes — including one that had Drake covering Ice Spice‘s “Munch,” which the rapper himself called “the last straw” on Instagram — is that it was actually uploaded to streaming services, rather than just living on social media like so many others. It also was a hit — or could have been one — as the track drew rave reviews online. Once the song caught fire, daily U.S. streams increased exponentially, from about 2,000 on Friday to 362,000 on Saturday to 407,000 on Sunday and 652,000 on Monday before it was taken down, according to Luminate. Globally, the song started taking off too, racking up 1,140,000 streams worldwide on Monday alone.

Those streams are worth real money, too. And since streaming royalties are distributed on a pro-rata basis — meaning an overall revenue pool is divided based on the total popularity of tracks — the royalties earned by “Heart on My Sleeve” represent revenue that then doesn’t go to other artists. That’s how streaming works for any song — or sleep sound — but in this case it’s an AI-generated song pulling potential revenue from actual living beings creating music.

Aside from the rights issues at play, that money underlines one of rights holders’ key concerns around AI-generated music: that it threatens to take money away from them. For “Heart on My Sleeve,” the 1,423,000 U.S. streams it received over four days were worth about $7,500, Billboard estimates, while the 2,125,000 total global streams were worth closer to $9,400.
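For readers who want to check the math, here is a minimal sketch of the arithmetic behind those estimates, using only the stream counts and dollar figures cited above; the per-stream rates it derives are rough blended averages implied by Billboard’s estimates, not published payout rates, and real payouts vary by service, country and rights-holder agreements.

```python
# Back-of-the-envelope pro-rata streaming math, using only the figures cited above.
# The dollar amounts are Billboard's estimates; actual payouts vary by service,
# country, subscription mix and rights-holder agreements.

# Daily U.S. streams, Friday through Monday, per Luminate.
daily_us_streams = [2_000, 362_000, 407_000, 652_000]
us_streams = sum(daily_us_streams)        # 1,423,000 over four days
us_payout_usd = 7_500                     # Billboard's U.S. estimate

global_streams = 2_125_000                # total global streams
global_payout_usd = 9_400                 # Billboard's global estimate

def implied_rate(payout_usd: float, streams: int) -> float:
    """Blended payout per stream implied by the estimates."""
    return payout_usd / streams

print(f"U.S. streams over four days: {us_streams:,}")
print(f"Implied U.S. rate:   ${implied_rate(us_payout_usd, us_streams):.4f} per stream")
print(f"Implied global rate: ${implied_rate(global_payout_usd, global_streams):.4f} per stream")

# Pro-rata model: a track's share of a service's revenue pool is proportional to
# its share of total streams, so streams captured by an AI-generated track shift
# revenue away from every other track in the pool.
def pro_rata_payout(track_streams: int, total_streams: int, revenue_pool_usd: float) -> float:
    return revenue_pool_usd * track_streams / total_streams
```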

However, streaming royalties are typically paid out on a monthly basis, which allows time for platforms to detect copyright infringement and other attempts to game the system. In a case such as “Heart on My Sleeve,” a source at a streaming company says that might mean Ghostwriter’s royalties will be withheld.

“Heart on My Sleeve” was a wake-up call to the music business and music fans alike, who until now may not have taken the threat, or promise, of AI-generated music seriously. But as this technology becomes increasingly accessible — coupled with the ease of music distribution in the streaming era — concern around the issue is growing quickly. As Ghostwriter — who did not respond to a request for comment — promises on his TikTok profile, “I’m just getting started.”

Imagine if the classic 1995-1997 lineup of beloved battling Britpop band Oasis had stayed together and continued making music. Now you don’t have to, thanks to the British band Breezer, who spent their pandemic lockdown writing and recording an album that taps into the classic everything-all-at-once sound and fury of Oasis’ landmark first three albums: Definitely Maybe (1994), (What’s the Story) Morning Glory? (1995) and Be Here Now (1997).
AISIS is a mind-expanding eight-song album that eerily mimics the Gallagher brothers’ sound on tracks written and recorded by Breezer in the group’s style, with the original singer’s voice later replaced by AI-generated vocals in the style of Oasis singer Liam Gallagher.

“AISIS is an alternate reality concept album where the band’s 95-97 line-up continued to write music, or perhaps all got together years later to write a record akin to the first 3 albums, and only now has the master DAT tape from that session surfaced,” reads a note from the band. “We’re bored of waiting for Oasis to reform, so we’ve got an AI modelled Liam Gallagher (inspired by @JekSpek) to step in and help out on some tunes that were written during lockdown 2021 for a short lived, but much loved band called Breezer.”

While some labels and artists are scrambling in a panic to stop AI versions of their music — with a fake Drake and The Weeknd viral hit quickly pulled from streamers this week — notoriously cantankerous vocalist Gallagher responded to a fan’s question on Wednesday (April 19) about whether he’d heard it and what he thinks. Yes, he said, he had, and in classic Liam fashion he added that he’d only heard one tune but that it was “better than all the other snizzle out there.”

Better still, in response to another query about his thoughts on the computer-generated Liam, the perma-swaggering singer proclaimed “Mad as f–k I sound mega.” He’s not wrong, as songs such as the bullrushing openers “Out of My Mind” and “Time” perfectly capture peak Oasis’ signature mix of swirling guitars, hedonistic fury and Liam’s snarling, nasally vocals. The psychedelic rager “Forever” and expansive ballad “Tonight” nail songwriter/guitarist/singer Noel Gallagher’s stuffed-to-exploding arrangements and Beatles fetish, amid such spot-on touches as the sound of the tide washing out, layers of sitar and a lyrical nod to Mott the Hoople’s David Bowie-penned 1972 smash “All the Young Dudes.”

As any Oasis fan knows, AISIS is as close as anyone is likely to come to an actual reunion of the group that split in 2009 after Noel left, setting off more than a decade of acrimonious back-and-forth between the famously battling brothers as each has pursued his own solo projects.

See Gallagher’s tweets and listen to the AISIS album below.

Not the album heard a tune it’s better than all the other snizzle out there— Liam Gallagher (@liamgallagher) April 19, 2023

Mad as fuck I sound mega— Liam Gallagher (@liamgallagher) April 19, 2023

A song featuring AI-generated fake vocals from Drake and The Weeknd might be a scary moment for artists and labels whose livelihoods feel threatened, but does it violate the law? It’s a complicated question.

The song “Heart on My Sleeve,” which also featured Metro Boomin’s distinctive producer tag, racked up hundreds of thousands of spins on streaming services before it was pulled down on Monday evening, powered to viral status by uncannily similar vocals over a catchy instrumental track. Millions more have viewed shorter snippets of the song that the anonymous creator posted to TikTok.

It’s unclear whether only the soundalike vocals were created with AI tools – a common trick used for years in internet parody videos and deepfakes – or if the entire song was created solely by a machine based purely on a prompt to create a Drake track, a more novel and potentially disruptive development. 

For an industry already on edge about the sudden growth of artificial intelligence, the appearance of a song that convincingly replicated the work product of two of music’s biggest stars and one of its top producers and won over likely millions of listeners has set off serious alarm bells.

“The ability to create a new work this realistic and specific is disconcerting, and could pose a range of threats and challenges to rightsowners, musicians, and the businesses that invest in them,” says Jonathan Faber, the founder of Luminary Group and an attorney who specializes in protecting the likeness rights of famous individuals. “I say that without attempting to get into even thornier problems, which likely also exist as this technology demonstrates what it may be capable of.”

“Heart On My Sleeve” was quickly pulled down, disappearing from most streaming services by Monday evening. Representatives for Drake, The Weeknd and Spotify all declined to comment when asked about the song on Monday. And while the artists’ label, Universal Music Group, issued a strongly worded statement condemning “infringing content created with generative AI,” a spokesperson would not say whether the company had sent formal takedown requests over the song. 

A rep for YouTube said on Tuesday that the platform “removed the video in question after receiving a valid takedown notice,” noting that the track was removed because it used a copyrighted music sample.

Highlighted by the debacle is a monumental legal question for the music industry that will likely be at the center of legal battles for years to come: To what extent do AI-generated songs violate the law? Though “Heart on My Sleeve” was removed relatively quickly, it’s a more complicated question than it might seem.

For starters, the song appears to be an original composition that doesn’t directly copy any of Drake or the Weeknd’s songs, meaning that it could be hard to make a claim that it infringes their copyrights, like when an artist uses elements of someone else’s song without permission. While Metro Boomin’s tag may have been illegally sampled, that element likely won’t exist in future fake songs.

By mimicking their voices, however, the track represents a clearer potential violation of Drake and Weeknd’s so-called right of publicity – the legal right to control how your individual identity is commercially exploited by others. Such rights are more typically invoked when someone’s name or visual likeness is stolen, but they can extend to someone’s voice if it’s particularly well-known – think Morgan Freeman or James Earl Jones.

“The right of publicity provides recourse for rights owners who would otherwise be very vulnerable to technology like this,” Faber said. “It fits here because a song is convincingly identifiable as Drake and the Weeknd.”

Whether a right of publicity lawsuit is legally viable against this kind of voice mimicry might be tested in court soon, albeit in a case dealing with decidedly more old school tech.

Back in January, Rick Astley sued Yung Gravy over the rapper’s breakout 2022 hit that heavily borrowed from the singer’s iconic “Never Gonna Give You Up.” While Yung Gravy had licensed the underlying composition, Astley claimed Yung Gravy violated his right of publicity when he hired a singer who mimicked his distinctive voice.

That case has key differences from the situation with “Heart on My Sleeve,” like the allegation that Gravy falsely suggested to his listeners that Astley had actually endorsed his song. In the case of “Heart on My Sleeve,” the anonymous creator Ghostwriter omitted any reference to Drake and The Weeknd on streaming platforms; on TikTok, he directly stated that he, and not the two superstars, had created his song using AI.

But for Richard Busch of the law firm King & Ballow, a veteran music industry litigator who brought the lawsuit on behalf of Astley, the right of publicity and its protections for likeness still provides the most useful tool for artists and labels confronted with such a scenario in the future.

“If you are creating a song that sounds identical to, let’s say, Rihanna, regardless of what you say people are going to believe that it was Rihanna. I think there’s no way to get around that,” Busch said. “The strongest claim here would be the use of likeness.”

But do AI companies themselves break the law when they create programs that can so effectively mimic Drake and The Weeknd’s voices? That would seem to be the far larger looming crisis, and one without the same kind of relatively clear legal answers.

The fight ahead will likely be over how AI platforms are “trained” – the process whereby machines “learn” to spit out new creations by ingesting millions of existing works. From the point of view of many in the music industry, if that process is accomplished by feeding a platform copyrighted songs — in this case, presumably, recordings by Drake and The Weeknd — then those platforms and their owners are infringing copyrights on a mass scale.

In UMG’s statement Monday, the label said clearly that it believes such training to be a “violation of copyright law,” and the company previously warned that it “will not hesitate to take steps to protect our rights and those of our artists.” The RIAA has said the same, blasting AI companies for making “unauthorized copies of our members works” to train their machines.

While the training issue is legally novel and unresolved, it could be answered in court soon. A group of visual artists has filed a class action over the use of their copyrighted images to train AI platforms, and Getty Images has filed a similar case against AI companies that allegedly “scraped” its database for training materials. 

And after this week’s incident over “Heart on My Sleeve,” a similar lawsuit against AI platforms filed by artists or music companies gets more likely by the day.

National Association of Broadcasters president and CEO Curtis LeGeyt spoke out on the potential dangers of artificial intelligence on Monday at the NAB Show in Las Vegas. “This is an area where NAB will absolutely be active,” he asserted of AI, which is one of the buzziest topics this week at the annual convention. “It is just amazing how quickly the relevance of AI to our entire economy — but specifically, since we’re in this room, the broadcast industry — has gone from amorphous concept to real.”

LeGeyt warned of several concerns that he has for local broadcasters, the first being issues surrounding “big tech” taking broadcast content and not fairly compensating broadcasters for its use. “We have been fighting for legislation to put some guardrails on it,” LeGeyt said. “AI has the potential to put that on overdrive. We need to ensure that our stations, our content creators are going to be fairly compensated.”

He added that he worries for journalists. “We’re already under attack for any slip-up we might have with regard to misreporting on a story. Well, you’re gonna have to do a heck of a lot more diligence to ensure that whatever you are reporting on is real, fact-based information and not just some AI bot that happens to look like Joe Biden.” Finally, he warned of images and likenesses being misappropriated where AI is involved.

“I want to wave the caution flag on some of these areas,” he said. “I think this could be really damaging for local broadcast.”

During his talk, he also outlined what he sees as potential opportunities. “My own view is there are some real potentially hyperlocal benefits to AI,” he said, citing as examples translation services and the ability to speed up research at “resource-constrained local stations.” He asserted, “Investigative journalism is never going to be replaced by AI. Our role at local community events, philanthropic work, is never going to be replaced by AI. But to the degree that we can leverage AI to do some of the things that are time-consuming and take away your ability to be boots on the ground doing the things that only you can do well, I think that’s a positive.”

Also addressed during the session was the voluntary rollout of the next generation of digital television, known as ATSC 3.0, which may include capabilities such as free, live broadcasting to mobile devices. A change of this magnitude has a lot of moving parts and has a long way to go before its potential can be realized.

At NAB, FCC chairwoman Jessica Rosenworcel was on hand to announce the Future of Television Initiative, which she described as a public-private partnership among stakeholders to support a transition to ATSC 3.0. “With over 60 percent of Americans already in range of a Next Gen TV signal, we are excited to work closely with all stakeholders, including the FCC, to bring Next Gen TV and all of its benefits to all viewers,” said LeGeyt.

During his session, LeGeyt also addressed “fierce competition for the dashboard” as part of a discussion of connected cars. “It’s not enough for any one [broadcaster] to innovate. If we are all not rowing in the same direction as an industry, … we are going to lose this arms race,” he warned.

Citing competition from the likes of Spotify, he contended that the local content offered by broadcasters gives them a “competitive advantage.”

The NAB Show runs through Wednesday.

This article was originally published by The Hollywood Reporter.