
While most of the music industry is scrambling to figure out how to combat fake A.I.-generated vocals, Grimes is running in the other direction. “I think it’s cool to be fused w a machine and I like the idea of open sourcing all art and killing copyright,” the genre-pushing singer tweeted on Sunday night (April 23).
The series of tweets found Grimes doubling- and tripling-down on her quest to blur the lines between humans and machines and reconfigure the traditional copyright guardrails that have been in place for more than half a century in the music industry.

In a follow-up tweet Grimes linked to a recent story about how a fake song featuring A.I.-generated vocals from Drake and The Weeknd, “Heart on My Sleeve,” had been pulled from streaming services after going viral. Rather than demanding takedowns, Grimes said she’s willing to go halfsies with her fans if they create something worthy with her vocals.

“I’ll split 50% royalties on any successful AI generated song that uses my voice,” she promised of her stance, which is in stark opposition to that of Universal Music Group, which acted quickly to condemn the “infringing content created with generative AI” that produced the phony superstar duet.

“Same deal as I would with any artist i collab with,” she continued. “Feel free to use my voice without penalty. I have no label and no legal bindings.” In addition, she said she feels like we “shouldn’t force approvals — but rather work out publishing with stuff that’s super popular. That seems most efficient? We cud use elf tech for it tho – but I think we’ll notice if a grimes song goes viral.”

Furthermore, apparently free of label contract constraints, Grimes said she and her team are working on a program that should simulate her voice pretty convincingly, and that they could also upload stems and samples for people to train their own A.I. vocal generators. As for a timeline for the Grimes A.I., the singer said her crew was “p far along last I checked. I sorta just spur of the moment decided to do this lol but we were making a sim of my voice for our own plans and they were almost done.” She also was open to taking suggestions and tips on technology from her followers, as evidenced by a series of back-and-forth tweets with supportive fans.

Fans have been eagerly awaiting an update on the status of Grimes’ next album, the as-yet-unscheduled BOOK 1. The singer alluded to some unspecified personal and professional hang-ups before revealing that “music is my side quest now. Tbh reduced pressure x increased freedom = prob more music just ideally,” adding, “Low key I’ll always do my best to entertain whilst depleting my literal reputation I hope that’s ok I love y’all.”

The musician’s most recent album was 2020’s Miss Anthropocene, which included the singles “Violence,” “So Heavy I Fell Through the Earth,” “My Name Is Dark” and “Delete Forever.” Since then, she’s also released one-off songs including 2021’s “Player of Games” and last year’s “Shinigami Eyes.”

Check out Grimes’ tweets below.

I think it’s cool to be fused w a machine and I like the idea of open sourcing all art and killing copyright— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

I feel like we shouldn’t force approvals – but rather work out publishing with stuff that’s super popular. That seems most efficient? We cud use elf tech for it tho – but I think we’ll notice if a grimes song goes viral— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

We’re making a program that should simulate my voice well but we could also upload stems and samples for ppl to train their own— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

My team is asleep but I’ll see what’s up tomorrow- we were p far along last I checked. I sorta just spur of the moment decided to do this lol but we were making a sim of my voice for our own plans and they were almost done— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023

When Universal Music Group emailed Spotify, Apple Music and other streaming services in March asking them to stop artificial-intelligence companies from using its labels’ recordings to train their machine-learning software, it fired the first Howitzer shell of what’s shaping up as the next conflict between creators and computers. As Warner Music Group, HYBE, ByteDance, Spotify and other industry giants invest in AI development, along with a plethora of small startups, artists and songwriters are clamoring for protection against developers that use music created by professionals to train AI algorithms. Developers, meanwhile, are looking for safe havens where they can continue their work unfettered by government interference.

To someday generate music that rivals the work of human creators, AI models use machine learning to identify patterns in, and mimic, the characteristics that make a song irresistible, like the sticky verse-chorus structure of pop, the 808 drums that define the rhythm of hip-hop or the meteoric drop at the heart of electronic dance music. These are distinctions human musicians have to learn over their lifetimes, whether through osmosis or music education.

Machine learning is exponentially faster, though; it’s usually achieved by feeding millions, even billions, of so-called “inputs” into an AI model to build its musical vocabulary. Because of the sheer scale of data needed to train current systems, those inputs almost always include the work of professionals; to many copyright owners’ dismay, almost no one asks their permission to use it.

Countries around the world have various ways of regulating what’s allowed when it comes to what’s called the text and data mining of copyrighted material for AI training. And some territories are concluding that fewer rules will lead to more business.

China, Israel, Japan, South Korea and Singapore are among the countries that have largely positioned themselves as safe havens for AI companies in terms of industry-friendly regulation. In January, Israel’s Ministry of Justice defined its stance on the issue, saying that “lifting the copyright uncertainties that surround this issue [of training AI generators] can spur innovation and maximize the competitiveness of Israeli-based enterprises in both [machine-learning] and content creation.”

Singapore also “certainly strives to be a hub for AI,” says Bryan Tan, attorney and partner at Reed Smith, which has an office there. “It’s one of the most permissive places. But having said that, I think the world changes very quickly,” Tan says. He adds that even in countries where exceptions in copyright for text and data mining are established, there is a chance that developments in the fast-evolving AI sector could lead to change.

In the United States, Amir Ghavi, a partner at Fried Frank who is representing open-source text-to-image developer Stability AI in a number of upcoming landmark cases, says that though the country has a “strong tradition of fair use … this is all playing out in real time,” with decisions in upcoming cases like his setting significant precedents for AI and copyright law.

Many rights owners, including musicians like Helienne Lindevall, president of the European Composers and Songwriters Alliance, are hoping to establish consent as a basic practice. But, she asks, “How do you know when AI has used your work?”

AI companies tend to keep their training process secret, but Mat Dryhurst, a musician, podcast host and co-founder of music technology company Spawning, says many rely on just a few data sets, such as Laion 5B (as in 5 billion data points) and Common Crawl, a web-scraping tool used by Google. To help establish a compromise between copyright owners and AI developers, Spawning has created a website called HaveIBeenTrained.com, which helps creators determine whether their work is found in these common data sets and, free of charge, opt out of being used as fodder for training.

These requests are not backed by law, although Dryhurst says, “We think it’s in every AI organization’s best interest to respect our active opt-outs. One, because this is the right thing to do, and two, because the legality of this varies territory to territory. This is safer legally for AI companies, and we don’t charge them to partner with us. We do the work for them.”

The concept of opting out was first popularized by the European Union’s Copyright Directive, passed in 2019. Though Sophie Goossens, a partner at Reed Smith who works in Paris and London on entertainment, media and technology law, says the definition of “opt out” was initially vague, its inclusion makes the EU one of the strictest jurisdictions when it comes to AI training.

There is a fear, however, that passing strict AI copyright regulations could result in a country missing the opportunity to establish itself as a next-generation Silicon Valley and reap the economic benefits that would follow. Russian President Vladimir Putin believes the stakes are even higher. In 2017, he stated that the nation that leads in AI “will be the ruler of the world.” The United Kingdom’s Intellectual Property Office seemed to be moving in that direction when it published a statement last summer recommending that text and data mining be exempt from opt-outs in hopes of becoming Europe’s haven for AI. In February, however, the British government put the brakes on the IPO’s proposal, leaving its future uncertain.

Lindevall and others in the music industry say they are fighting for even better standards. “We don’t want to opt out, we want to opt in,” she says. “Then we want a clear structure for remuneration.”

The lion’s share of U.S.-based music and entertainment organizations — more than 40, including ASCAP, BMI, RIAA, SESAC and the National Music Publishers’ Association — are in agreement and recently launched the Human Artistry Campaign, which established seven principles advocating best practices for AI intended to protect creators’ copyrights. No. 4: “Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”

Today, the idea that rights holders could one day license works for machine learning still seems far off. Among the potential solutions for remuneration are blanket licenses, something like the blank-tape levies used in parts of Europe. But given the patchwork of international law on this subject, and the complexities of tracking down and paying rights holders, some feel these fixes are not viable.

Dryhurst says he and the Spawning team are working on a concrete solution: an “opt in” tool. Stability AI has signed on as its first partner for this innovation, and Dryhurst says the newest version of its text-to-image AI software, Stable Diffusion 3, will not include any of the 78 million artworks that opted out prior to this advancement. “This is a win,” he says. “I am really hopeful others will follow suit.”

Over the weekend, a track called “Heart on My Sleeve,” allegedly created with artificial intelligence to sound like it was by Drake and The Weeknd, became the hottest thing in music. By Monday evening, it was all but gone after most streaming platforms pulled it. But in that short time online, it earned thousands of dollars.
“Fake Drake” has a nice ring to it, but the music industry was less than charmed by the fact that a TikToker with just 131,000 followers (as of Tuesday evening) operating under the name Ghostwriter could rack up millions of streams with such a track in only a few days. Even though the legal issues around these kinds of AI-generated soundalikes are still murky, streaming services quickly pulled the track, largely without explanation. Universal Music Group, which reps both Drake and The Weeknd, issued a statement Monday in response, claiming these kinds of songs violate both copyright law and its agreements with the streaming services and “demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.” While a spokesperson would not say whether the company had sent formal takedown requests over the song, a rep for YouTube said on Tuesday that the platform “removed the video in question after receiving a valid takedown notice,” noting that the track was pulled because it used a copyrighted music sample. As of Wednesday, the song had also been removed from TikTok.

What sets “Heart on My Sleeve” apart from other AI-generated deepfakes — including one that had Drake covering Ice Spice’s “Munch,” which the rapper himself called “the last straw” on Instagram — is that it was actually uploaded to streaming services, rather than just living on social media like so many others. It also was a hit — or could have been one — as the track drew rave reviews online. Once the song caught fire, daily U.S. streams increased exponentially, from about 2,000 on Friday to 362,000 on Saturday to 407,000 on Sunday and 652,000 on Monday before it was taken down, according to Luminate. Globally, the song started taking off too, racking up 1,140,000 streams worldwide on Monday alone.

Those streams are worth real money, too. And since streaming royalties are distributed on a pro-rata basis — meaning an overall revenue pool is divided based on the total popularity of tracks — the royalties earned by “Heart on My Sleeve” are revenue that is then not going to other artists. That’s how streaming works for any song — or sleep sound — but in this case it’s an AI-generated song pulling potential revenue from actual living beings creating music.

Aside from the rights issues at play, that money underlines one of rights holders’ key concerns around AI-generated music: that it threatens to take money away from them. For “Heart on My Sleeve,” the 1,423,000 U.S. streams it received over four days were worth about $7,500, Billboard estimates, while the 2,125,000 total global streams were worth closer to $9,400.
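Billboard’s estimates imply an average payout per stream, which can be sanity-checked with quick arithmetic. The sketch below uses only the figures reported in the article; the derived rates are rough averages, not official platform numbers, since real pro-rata payouts vary by platform, territory and subscriber mix.

```python
# Back-of-the-envelope check of the royalty estimates for
# "Heart on My Sleeve". Stream counts and dollar figures come from
# the article; the per-stream rates are derived, not official.

def per_stream_rate(payout_usd: float, streams: int) -> float:
    """Average payout per stream implied by a total payout."""
    return payout_usd / streams

us_rate = per_stream_rate(7_500, 1_423_000)      # U.S. streams over four days
global_rate = per_stream_rate(9_400, 2_125_000)  # total global streams

print(f"Implied U.S. rate:   ${us_rate:.4f} per stream")
print(f"Implied global rate: ${global_rate:.4f} per stream")
```

Both figures land in the vicinity of half a cent per stream, consistent with the commonly cited ballpark for on-demand audio royalties.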

However, streaming royalties are typically paid out on a monthly basis, which allows time for platforms to detect copyright infringement and other attempts to game the system. In a case such as “Heart on My Sleeve,” a source at a streaming company says that might mean Ghostwriter’s royalties will be withheld.

“Heart on My Sleeve” was a wake-up call to the music business and music fans alike, who until now may not have taken the threat, or promise, of AI-generated music seriously. But as this technology becomes increasingly accessible — coupled with the ease of music distribution in the streaming era — concern around the issue is growing quickly. As Ghostwriter — who did not respond to a request for comment — promises on his TikTok profile, “I’m just getting started.”

Imagine if the classic 1995-1997 lineup of beloved battling Britpop band Oasis had stayed together and continued making music. Now you don’t have to, thanks to the British band Breezer, who spent their pandemic lockdown writing and recording an album that taps into the classic everything-all-at-once sound and fury of Oasis’ landmark first three albums: Definitely Maybe (1994), (What’s the Story) Morning Glory? (1995) and Be Here Now (1997).
AISIS is a mind-expanding 8-song album that eerily mimics the Gallagher brothers’ sound on tracks written and recorded by Breezer in the group’s style, with the original singer’s voice later replaced by AI-generated vocals in the style of Oasis frontman Liam Gallagher.

“AISIS is an alternate reality concept album where the band’s 95-97 line-up continued to write music, or perhaps all got together years later to write a record akin to the first 3 albums, and only now has the master DAT tape from that session surfaced,” reads a note from the band. “We’re bored of waiting for Oasis to reform, so we’ve got an AI modelled Liam Gallagher (inspired by @JekSpek) to step in and help out on some tunes that were written during lockdown 2021 for a short lived, but much loved band called Breezer.”

While some labels and artists are scrambling in a panic to stop AI versions of their music — with a fake Drake and The Weeknd viral hit quickly pulled from streamers this week — notoriously cantankerous vocalist Gallagher responded to a fan’s question on Wednesday (April 19) about whether he’d heard the album and what he thinks. Yes, he said, he had, and in classic Liam fashion he added that he’d only heard one tune but that it was “better than all the other snizzle out there.”

Better still, in response to another query about his thoughts on the computer-generated Liam, the perma-swaggering singer proclaimed “Mad as f–k I sound mega.” He’s not wrong, as songs such as the bullrushing openers “Out of My Mind” and “Time” perfectly capture peak Oasis’ signature mix of swirling guitars, hedonistic fury and Liam’s snarling, nasally vocals. The psychedelic rager “Forever” and expansive ballad “Tonight” nail songwriter/guitarist/singer Noel Gallagher’s stuffed-to-exploding arrangements and Beatles fetish, amid such spot-on touches as the sound of the tide washing out, layers of sitar and a lyrical nod to Mott the Hoople’s David Bowie-penned 1972 smash “All the Young Dudes.”

As any Oasis fan knows, AISIS is as close as anyone is likely to come to an actual reunion of the group, which split in 2009 after Noel quit, setting off more than a decade of acrimonious back-and-forth between the famously battling brothers as each has pursued solo projects.

See Gallagher’s tweets and listen to AISIS album below.

Not the album heard a tune it’s better than all the other snizzle out there— Liam Gallagher (@liamgallagher) April 19, 2023

Mad as fuck I sound mega— Liam Gallagher (@liamgallagher) April 19, 2023

A song featuring AI-generated fake vocals from Drake and The Weeknd might be a scary moment for artists and labels whose livelihoods feel threatened, but does it violate the law? It’s a complicated question.

The song “Heart on My Sleeve,” which also featured Metro Boomin’s distinctive producer tag, racked up hundreds of thousands of spins on streaming services before it was pulled down on Monday evening, powered to viral status by uncannily similar vocals over a catchy instrumental track. Millions more have viewed shorter snippets of the song that the anonymous creator posted to TikTok.

It’s unclear whether only the soundalike vocals were created with AI tools – a common trick used for years in internet parody videos and deepfakes – or if the entire song was created solely by a machine based purely on a prompt to create a Drake track, a more novel and potentially disruptive development. 

For an industry already on edge about the sudden growth of artificial intelligence, the appearance of a song that convincingly replicated the work product of two of music’s biggest stars and one of its top producers and won over likely millions of listeners has set off serious alarm bells.

“The ability to create a new work this realistic and specific is disconcerting, and could pose a range of threats and challenges to rightsowners, musicians, and the businesses that invest in them,” says Jonathan Faber, the founder of Luminary Group and an attorney who specializes in protecting the likeness rights of famous individuals. “I say that without attempting to get into even thornier problems, which likely also exist as this technology demonstrates what it may be capable of.”

“Heart On My Sleeve” was quickly pulled down, disappearing from most streaming services by Monday evening. Representatives for Drake, The Weeknd and Spotify all declined to comment when asked about the song on Monday. And while the artists’ label, Universal Music Group, issued a strongly worded statement condemning “infringing content created with generative AI,” a spokesperson would not say whether the company had sent formal takedown requests over the song. 

A rep for YouTube said on Tuesday that the platform “removed the video in question after receiving a valid takedown notice,” noting that the track was removed because it used a copyrighted music sample.

Highlighted by the debacle is a monumental legal question for the music industry that will likely be at the center of legal battles for years to come: To what extent do AI-generated songs violate the law? Though “Heart on My Sleeve” was removed relatively quickly, it’s a more complicated question than it might seem.

For starters, the song appears to be an original composition that doesn’t directly copy any of Drake or the Weeknd’s songs, meaning that it could be hard to make a claim that it infringes their copyrights, like when an artist uses elements of someone else’s song without permission. While Metro Boomin’s tag may have been illegally sampled, that element likely won’t exist in future fake songs.

By mimicking their voices, however, the track represents a clearer potential violation of Drake and Weeknd’s so-called right of publicity – the legal right to control how your individual identity is commercially exploited by others. Such rights are more typically invoked when someone’s name or visual likeness is stolen, but they can extend to someone’s voice if it’s particularly well-known – think Morgan Freeman or James Earl Jones.

“The right of publicity provides recourse for rights owners who would otherwise be very vulnerable to technology like this,” Faber said. “It fits here because a song is convincingly identifiable as Drake and the Weeknd.”

Whether a right of publicity lawsuit is legally viable against this kind of voice mimicry might be tested in court soon, albeit in a case dealing with decidedly more old school tech.

Back in January, Rick Astley sued Yung Gravy over the rapper’s breakout 2022 hit that heavily borrowed from the singer’s iconic “Never Gonna Give You Up.” While Yung Gravy had licensed the underlying composition, Astley claimed Yung Gravy violated his right of publicity when he hired a singer who mimicked his distinctive voice.

That case has key differences from the situation with “Heart on My Sleeve,” like the allegation that Gravy falsely suggested to his listeners that Astley had actually endorsed his song. In the case of “Heart on My Sleeve,” the anonymous creator Ghostwriter omitted any reference to Drake and The Weeknd on streaming platforms; on TikTok, he directly stated that he, and not the two superstars, had created his song using AI.

But for Richard Busch of the law firm King & Ballow, a veteran music industry litigator who brought the lawsuit on behalf of Astley, the right of publicity and its protections for likeness still provides the most useful tool for artists and labels confronted with such a scenario in the future.

“If you are creating a song that sounds identical to, let’s say, Rihanna, regardless of what you say people are going to believe that it was Rihanna. I think there’s no way to get around that,” Busch said. “The strongest claim here would be the use of likeness.”

But do AI companies themselves break the law when they create programs that can so effectively mimic Drake and The Weeknd’s voices? That would seem to be the far larger looming crisis, and one without the same kind of relatively clear legal answers.

The fight ahead will likely be over how AI platforms are “trained” – the process whereby machines “learn” to spit out new creations by ingesting millions of existing works. From the point of view of many in the music industry, if that process is accomplished by feeding a platform copyrighted songs — in this case, presumably, recordings by Drake and The Weeknd — then those platforms and their owners are infringing copyrights on a mass scale.

In UMG’s statement Monday, the label said clearly that it believes such training to be a “violation of copyright law,” and the company previously warned that it “will not hesitate to take steps to protect our rights and those of our artists.” The RIAA has said the same, blasting AI companies for making “unauthorized copies of our members’ works” to train their machines.

While the training issue is legally novel and unresolved, it could be answered in court soon. A group of visual artists has filed a class action over the use of their copyrighted images to train AI platforms, and Getty Images has filed a similar case against AI companies that allegedly “scraped” its database for training materials. 

And after this week’s incident over “Heart on My Sleeve,” a similar lawsuit against AI platforms filed by artists or music companies gets more likely by the day.

National Association of Broadcasters president and CEO Curtis LeGeyt spoke out on the potential dangers of Artificial Intelligence on Monday at the NAB Show in Las Vegas. “This is an area where NAB will absolutely be active,” he asserted of AI, which is one of the buzziest topics this week at the annual convention. “It is just amazing how quickly the relevance of AI to our entire economy — but specifically, since we’re in this room, the broadcast industry — has gone from amorphous concept to real.”

LeGeyt warned of several concerns that he has for local broadcasters, the first being issues surrounding “big tech” taking broadcast content and not fairly compensating broadcasters for its use. “We have been fighting for legislation to put some guardrails on it,” LeGeyt said. “AI has the potential to put that on overdrive. We need to ensure that our stations, our content creators are going to be fairly compensated.”

He added that he worries for journalists. “We’re already under attack for any slip-up we might have with regard to misreporting on a story. Well, you’re gonna have to do a heck of a lot more diligence to ensure that whatever you are reporting on is real, fact-based information and not just some AI bot that happens to look like Joe Biden.” Finally, he warned of images and likenesses being misappropriated where AI is involved.

“I want to wave the caution flag on some of these areas,” he said. “I think this could be really damaging for local broadcast.”

During his talk, he also outlined what he sees as potential opportunities. “My own view is there are some real potentially hyperlocal benefits to AI,” he said, citing as examples translation services and the ability to speed up research at “resource-constrained local stations.” He asserted, “Investigative journalism is never going to be replaced by AI. Our role at local community events, philanthropic work, is never going to be replaced by AI. But to the degree that we can leverage AI to do some of the things that are time-consuming and take away your ability to be boots on the ground doing the things that only you can do well, I think that’s a positive.”

Also addressed during the session was the voluntary rollout of the next generation of digital television, known as ATSC 3.0, which may include capabilities such as free, live broadcasting to mobile devices. A change of this magnitude has a lot of moving parts and has a long way to go before its potential can be realized.

At NAB, FCC chairwoman Jessica Rosenworcel was on hand to announce the Future of Television Initiative, which she described as a public-private partnership among stakeholders to support a transition to ATSC 3.0. “With over 60 percent of Americans already in range of a Next Gen TV signal, we are excited to work closely with all stakeholders, including the FCC, to bring Next Gen TV and all of its benefits to all viewers,” said LeGeyt.

During his session, LeGeyt also addressed “fierce competition for the dashboard” as part of a discussion of connected cars. “It’s not enough for any one [broadcaster] to innovate. If we are all not rowing in the same direction as an industry, … we are going to lose this arms race,” he warned.

Citing competition from the likes of Spotify, he contends that the local content offered by broadcasters gives them a “competitive advantage.”

The NAB Show runs through Wednesday.

This article was originally published by The Hollywood Reporter.

A new song believed to feature AI-generated fake vocals from Drake and The Weeknd that went viral over the weekend has been pulled from most streaming platforms after their label, Universal Music Group, released a statement Monday (April 17) condemning “infringing content created with generative AI.”
Released by an anonymous TikTok user called Ghostwriter977 and credited as Ghostwriter on streaming platforms, where it racked up hundreds of thousands of streams, the track “Heart On My Sleeve” features uncannily similar voices to the two superstars — a trick that the creator says was accomplished by using artificial intelligence. It’s unclear if the entire song was created with AI, or just the soundalike vocals.

By Monday afternoon, the song had generated more than 600,000 spins on Spotify, and Ghostwriter977’s TikTok videos had been viewed more than 15 million times. A YouTube video had another 275,000 views, with an ominous comment from the creator below it: “This is just the beginning.”

Many music fans seemed impressed. One comment on TikTok with more than 75,000 likes said it was the “first AI song that has actually impressed me.” Another said Ghostwriter was “putting out better drake songs than drake himself.” A third said AI was “getting dangerously good.”

But the end could already be in sight. At time of publishing on Monday evening, “Heart On My Sleeve” had been pulled from Spotify, following its earlier removal from Apple Music, Deezer and TIDAL.

Even if short-lived, the sensational success of “Heart On My Sleeve” will no doubt underscore growing concerns over the impact of AI on the music industry. Last week, UMG urged streaming platforms like Spotify to block AI companies from accessing the label’s songs to “train” their machines, and the RIAA has warned that doing so infringes copyrights on a mass scale. Last month, a large coalition of industry organizations warned that AI technology should not be used to “replace or erode” human artistry.

Representatives for Drake and The Weeknd declined to comment on Monday. But in a statement to Billboard, UMG said the viral postings “demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

“The training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” a UMG spokesman said in a statement. “We’re encouraged by the engagement of our platform partners on these issues – as they recognize they need to be part of the solution.”

UMG declined to comment on whether it had sent formal takedown requests to streaming services and social media websites.

Drake is in his feelings.
On Friday (April 14), the chart-topping artist took to Instagram to voice his opinion about AI-generated versions of his voice, particularly a video that features him rapping Bronx artist Ice Spice’s “Munch.”

“This is the last straw,” he wrote on his story, along with a post about the AI clip. The pairing of Drake with Ice Spice is particularly interesting, given the rappers’ history. While Drake was an early advocate of Ice Spice, born Isis Gaston, he unfollowed her on Instagram, something Gaston had no explanation for in interviews. However, shortly after, he re-followed her.


Drake’s complaint comes after Universal Music Group asked streaming services including Spotify and Apple Music to prevent artificial intelligence companies from accessing their copyrighted songs. AI companies would use the music to “train” their machines, something that is becoming a cause for concern within the music industry.

In an email sent to Spotify, Apple Music and other streaming platforms, UMG said that it had become aware that certain AI services had been trained on copyrighted music “without obtaining the required consents” from those who own the songs.

“We will not hesitate to take steps to protect our rights and those of our artists,” UMG warned in the email, first obtained by the Financial Times. Billboard confirmed the details with sources on both sides. Although it remains unclear what those steps would be, or what streaming platforms can actually do to block such access, labels and artists alike appear aligned on the need for change.

UMG later issued a statement regarding the email sent to DSPs. “We have a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators. We expect our platform partners will want to prevent their services from being used in ways that harm artists,” it read.

Other AI covers making the rounds include Rihanna singing Beyoncé’s “Cuff It,” which sounded relatively believable, aside from a glitch during a melodic run.

While the implications of artificial intelligence poking its head into music can be scary for artists and labels alike, it’s hard not to smirk at Drizzy rapping, “A– too fat, can’t fit in no jeans.”

President Joe Biden said Tuesday it remains to be seen if artificial intelligence is dangerous, but that he believes technology companies must ensure their products are safe before releasing them to the public.

Biden met with his council of advisers on science and technology about the risks and opportunities that rapid advancements in artificial intelligence pose for individual users and national security.

“AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security,” Biden told the group, which includes academics as well as executives from Microsoft and Google.

Artificial intelligence burst to the forefront of the national and global conversation in recent months after the release of the popular ChatGPT chatbot, which helped spark a race among tech giants to unveil similar tools. It has also raised ethical and societal concerns about technology that can generate convincing prose or imagery resembling the work of humans.

The White House said the Democratic president was using the AI meeting to “discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards” and to reiterate his call for Congress to pass legislation to protect children and curtail data collection by technology companies.

Italy last week temporarily blocked ChatGPT over data privacy concerns, and European Union lawmakers have been negotiating the passage of new rules to limit high-risk AI products across the 27-nation bloc.

The U.S. so far has taken a different approach. The Biden administration last year unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for how to protect people’s personal data and limit surveillance.

The Blueprint for an AI Bill of Rights notably did not set out specific enforcement actions, but instead was intended as a call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world.

Biden’s council, known as PCAST, is composed of science, engineering, technology and medical experts and is co-chaired by the Cabinet-ranked director of the White House Office of Science and Technology Policy, Arati Prabhakar.

Asked if AI is dangerous, Biden said Tuesday, “It remains to be seen. Could be.”

In a new open letter signed by Elon Musk, Steve Wozniak, Andrew Yang and more on Wednesday (March 29), leaders in technology, academia and politics came together to call for a moratorium of “at least 6 months” on training AI systems “more powerful than GPT-4.”

The letter states that “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” including the increased spread of propaganda and fake news as well as automation leading to widespread job loss. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks.

By drawing the line at AI models “more powerful than GPT-4,” the signees are likely pointing to generative artificial intelligence — a term encompassing a subset of AI that can create new content after being trained on millions or even billions of pieces of data. While some companies license or create their own training data, many AI models are trained on data sets scraped from the web that contain copyright-protected material, including songs, books, articles, images and more. This practice has sparked widespread debate over whether AI companies should be required to obtain consent from or compensate rights holders, and whether the fast-evolving models will endanger the livelihoods of musicians, illustrators and other creatives.

Before late 2022, generative AI was little discussed outside of tech-savvy circles, but it has gained national attention over the last six months. Popular examples of generative AI today include image generators like DALL-E 2, Stable Diffusion and Midjourney, which use simple text prompts to conjure up realistic pictures. Chatbots built on large language models (LLMs), like ChatGPT, are also considered generative, as are systems that can create new music at the touch of a button. Though generative AI models in music have yet to make as many headlines as chatbots and image generators, companies like Boomy, Soundful, Beatlab, Google’s Magenta, OpenAI and others are already building them, leading to fears that their output could one day threaten human-made music.

The letter urging the pause in AI training was signed by some of AI’s biggest executives. They notably include Stability AI CEO Emad Mostaque, Conjecture AI CEO Connor Leahy, Unanimous AI CEO and chief scientist Louis Rosenberg and Scale AI CEO Julien Billot. It was also signed by Pinterest co-founder Evan Sharp, Skype co-founder Jaan Tallinn and Ripple CEO Chris Larsen.

Other signees include several engineers and researchers at Microsoft, Google and Meta, though the list notably does not include any names from OpenAI, the firm behind GPT-4.

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter continues. Rather, the industry must “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

The letter comes only a few weeks after several major organizations in the entertainment industry, including in music, came together to release a list of seven principles, detailing how they hope to protect and support “human creativity” in the wake of the AI boom. “Policymakers must consider the interests of human creators when crafting policy around AI,” the coalition wrote. “Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table.”