Country music star Martina McBride headed to Capitol Hill on Wednesday (May 21) to speak out in support of the NO FAKES Act, arguing the legislation is necessary to protect artists in the AI age.
If passed, the bill (officially titled the Nurture Originals, Foster Art and Keep Entertainment Safe Act), which was recently reintroduced in the U.S. House of Representatives and the U.S. Senate, would for the first time create a federal protection against unauthorized deepfakes of a person’s name, image, likeness or voice. It is widely supported by the music industry, the film industry and other groups.

Just prior to McBride’s testimony, the Human Artistry Campaign sent out a press release stating that 393 artists have signed on in support of the NO FAKES Act, including Cardi B, Randy Travis, Mary J. Blige and the Dave Matthews Band.

In her testimony to the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, McBride called unauthorized deepfakes “just terrifying” and added, “I’m pleading with you to give me the tools to stop that kind of betrayal.” She continued that passing the NO FAKES Act could “set America on the right course to develop the world’s best AI while preserving the sacred qualities that make our country so special: authenticity, integrity, humanity and our endlessly inspiring spirit…I urge you to pass this bill now.”

McBride went on to express the challenges that musicians face as unauthorized AI deepfakes proliferate online. “I worked so hard to establish trust with my fans,” she said. “They know when I say something, they can believe it… I don’t know how I can stress enough how [much unauthorized deepfakes] can impact the careers [of] artists.”

During her testimony, the singer-songwriter pointed to more specific concerns, like what can happen to individuals after they pass away. “Far into the future after I’m gone,” she said, there is the threat now that someone could “creat[e] a piece of music or [a video of] me saying something that I never did.” She added that this issue is especially challenging for emerging musicians: “I think for younger artists, to be new, and to have to set up what you stand for and who you are as a person as an artist and what you endorse what you believe in…on top of having to navigate this… is devastating.”

Suzana Carlos, head of music policy for YouTube, also expressed her company’s support for the NO FAKES Act during the hearing. “As technology evolves, we must collectively ensure that it is used responsibly, including when it comes to protecting our creators and viewers,” she said. “Platforms have a responsibility to address the challenges posed by AI-generated content, and Google and YouTube staff [is] ready to apply our expertise to help tackle them on our services and across the digital ecosystem. We know that a practical regulatory framework addressing digital replicas is critical.”

Carlos noted that a person’s name, image, likeness and voice (also known as “publicity rights” or “rights of publicity”) are currently protected only on a state-by-state basis, creating a “patchwork of inconsistent legal frameworks.” She added that YouTube would like a federal standard to “streamline global operations for platforms.”

Mitch Glazier, CEO/president of the Recording Industry Association of America (RIAA), which has played a strong role in pushing the NO FAKES Act forward, added during his testimony that time is of the essence to pass the bill. “I think there’s a very small window, and an unusual window, for Congress to get ahead of what is happening before it becomes irreparable,” he said.

Also during the hearing, Senator Amy Klobuchar (D-MN) brought up concerns about a “10-year moratorium” that would ban states and localities from implementing AI regulation — a clause the Republican-led House of Representatives baked into the Republicans’ so-called “big beautiful” tax bill last week. “I’m very concerned, having spent years trying to pass some of these things,” Klobuchar said. “If you just put a moratorium [on]…the ELVIS law [a new Tennessee state law that updated protections for deepfakes in the AI age] coming out of Tennessee…and some of the other things, this would stop all of that.”

The NO FAKES Act was introduced by Senators Marsha Blackburn (R-TN), Chris Coons (D-DE), Thom Tillis (R-NC) and Klobuchar along with Representatives María Elvira Salazar (R-FL-27), Madeleine Dean (D-PA-4), Nathaniel Moran (R-TX-1) and Becca Balint (D-VT-At Large). It was first introduced as a draft bill in 2023 and formally introduced in the Senate in summer 2024.

Unlike some of the state publicity rights laws, the NO FAKES Act would create a federal right of publicity that would not expire after death and could be controlled by a person’s heirs for 70 years after their passing. It also includes specific carve-outs for replicas used in news, parody, historical works and criticism to ensure the First Amendment right to free speech remains protected.

LONDON — When the European Parliament passed sweeping new laws governing the use of artificial intelligence (AI) last March, the “world first” legislation was hailed as an important victory by music executives and rights holders. Just over one year later — and with less than three months until the European Union’s Artificial Intelligence Act is due to come fully into force — those same execs say they now have “serious concerns” about how the laws are being implemented amid a fierce lobbying battle between creator groups and big tech.

“[Tech companies] are really aggressively lobbying the [European] Commission and the [European] Council to try and water down these provisions wherever they can,” John Phelan, director general of international music publishing trade association ICMP, tells Billboard. “The EU is at a junction and what we’re trying to do is try to push as many people [as possible] in the direction of: ‘The law is the law’. The copyright standards in there are high. Do not be afraid to robustly defend what you’ve got in the AI Act.”

One current source of tension between creator groups, tech lobbyists and policy makers is the generative AI “Code of Practice” being developed by the EU’s newly formed AI Office in consultation with almost 1,000 stakeholders, including music trade groups, tech companies, academics, and independent experts. The code, which is currently on its third draft, is intended to set clear, but not legally binding, guidelines for generative AI models such as OpenAI’s ChatGPT to follow to ensure they are complying with the terms of the AI Act.

Those obligations include the requirement for generative AI developers to provide a “sufficiently detailed summary” of all copyright-protected works, including music, that they have used to train their systems. Under the AI Act, tech companies are also required to watermark training data sets used in generative AI music or audio-visual works, so there is a traceable path for rights holders to track the use of their catalog. Significantly, the laws apply to any generative AI company operating within the 27-member EU, regardless of where it is based, acquired data or trained its systems.
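Neither the AI Act nor the draft code fixes a file format for that summary or for the traceability requirement, but the "traceable path" idea can be sketched as a simple provenance manifest: for every training item, record where it came from plus a content hash that later lets a rights holder check whether a given file was in the set. The URLs and field names below are invented purely for illustration.

```python
import hashlib

def manifest_entry(source_url: str, data: bytes) -> dict:
    """Record a training item's provenance: origin and a content fingerprint."""
    return {
        "source": source_url,
        "sha256": hashlib.sha256(data).hexdigest(),
        "bytes": len(data),
    }

def build_manifest(items):
    """items: iterable of (source_url, raw_bytes) pairs -> disclosure manifest."""
    return [manifest_entry(url, data) for url, data in items]

# Hypothetical training items, stand-ins for real audio/text files.
manifest = build_manifest([
    ("https://example.com/track-0001.wav", b"\x00\x01fake-audio-bytes"),
    ("https://example.com/lyrics-0001.txt", b"fake lyrics text"),
])
for entry in manifest:
    print(entry["source"], entry["sha256"][:12])
```

A real disclosure would operate at the scale of millions of items, but the principle is the same: content hashes make a dataset auditable without republishing the works themselves.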

“The obligations of the AI Act are clear: you need to respect copyright, and you need to be transparent about the data you have trained on,” says Matthieu Philibert, public affairs director at European independent labels trade body IMPALA.

Putting those provisions into practice is proving less straightforward, however, with the latest version of the code, published in March, provoking a strong backlash from music execs who say the draft text risks undermining the very laws it is designed to support.

“Rather than providing a robust framework for compliance, [the code] sets the bar so low as to provide no meaningful assistance for authors, performers, and other right holders to exercise or enforce their rights,” said a coalition of creators and music associations, including ICMP, IMPALA, international labels trade body IFPI and Paris-based collecting societies trade organization CISAC, in a joint statement published March 28.

Causing the biggest worry for rights holders is the text’s instruction that generative AI providers need only make “reasonable efforts” to comply with European copyright law, including the weakened requirement that signatories undertake “reasonable efforts to not crawl from piracy domains.”

There’s also strong opposition over a lack of meaningful guidance on what AI companies must do to comply with a label, artist or publisher’s right to reserve (block) their rights, including the code’s insistence that robots.txt is the “only” method generative AI models must use to identify rights holders’ opt-out reservations. Creator groups say that robots.txt (a root-directory file that tells search engine crawlers which URLs they can access on a website) works for only a fraction of rights holders and is unfit for purpose, as it takes effect at the point of web crawling, not scraping, training or other downstream uses of their work.
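For readers unfamiliar with the mechanism, here is a minimal sketch of how a robots.txt reservation behaves, using Python's standard library. The crawler name "GPTBot" is just an example; actual AI user-agent strings vary by company.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that opts out of one AI crawler
# while leaving the site open to everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The named AI crawler is blocked; any other agent is allowed.
print(rp.can_fetch("GPTBot", "https://example.com/tracks/song.mp3"))    # False
print(rp.can_fetch("SearchBot", "https://example.com/tracks/song.mp3"))  # True
```

The creator groups' complaint is visible here: a compliant crawler consults this file only at fetch time, so it offers no control over copies already scraped, or over works obtained through third-party datasets.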

“Every draft we see coming out is basically worse than the previous one,” Philibert tells Billboard. “As it stands, the code of practice leaves a lot to be desired.”

Caught Between Creators, Big Tech and U.S. Pressure

The general view within the music business is that the concessions introduced in the third draft are a response to lobbying from tech companies and to outside pressure from the Trump administration, which is pursuing a wider deregulation agenda both at home and abroad. In April, the U.S. government’s Mission to the EU (USEU) sent a letter to the European Commission pushing back against the code, which it said contained “flaws.” The Trump administration is also demanding changes to the EU’s Digital Services Act, which governs digital services such as X and Facebook, and the EU’s Digital Markets Act, which looks to curb the power of large digital platforms.

The perception that the draft code favors Big Tech is not shared by their lobby group representatives, however.

“The code of practice for general-purpose AI is a vital step in implementing the EU’s AI Act, offering much-needed guidance [to tech providers] … However, the drafting process has been troubled from the very outset,” says Boniface de Champris, senior policy manager at the European arm of the Computer and Communications Industry Association (CCIA), which counts Google, Amazon, Meta and Apple among its members.

De Champris says that generative AI developers accounted for around 50 of the nearly 1,000 stakeholders that the EU consulted with on the drafting of the code, allowing the process “to veer off course, with months lost to debates that went beyond the AI Act’s agreed scope, including proposals explicitly rejected by EU legislators.” He calls a successful implementation of the code “a make-or-break moment for AI innovation in Europe.”

In response to the backlash from creator groups and the tech sector, the EU’s AI Office recently postponed publishing the final code of practice from May 2 to an unspecified date later this summer to allow for changes to be made.

The AI Act’s key provisions for generative AI models come into force Aug. 2, after which all of its regulations will be legally enforceable, with fines of up to 35 million euros ($38 million at current exchange rates) or up to 7% of global annual turnover for large companies that breach the rules. Start-ups and smaller tech operations face proportionately smaller fines.
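As a back-of-the-envelope illustration of how the two caps interact, assuming (as is common in EU enforcement regimes, though the act sets different tiers for different types of breach) that the higher of the two figures applies:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for the most serious breaches:
    EUR 35M or 7% of worldwide annual turnover, taken here as the
    higher of the two (an assumption; exact tiers vary by breach)."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140M, above the flat cap.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
# A firm with EUR 100M turnover: 7% = EUR 7M, so the EUR 35M figure governs.
print(f"{max_fine_eur(100_000_000):,.0f}")    # 35,000,000
```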

Creators Demand Stronger Rules

Meanwhile, work continues behind the scenes on what many music executives consider to be the key part of the legislation: the so-called “training template” that is being developed by the AI Office in parallel with the code of practice. The template, which is also overdue and causing equal concern among rights holders, will set the minimum requirements of training data that AI developers have to publicly disclose, including copyright-protected songs that they have used in the form of a “sufficiently detailed summary.”

According to preliminary proposals published in January, the training summary will not require tech companies to specify each work or song they have used to train AI systems, or be “technically detailed,” but will instead be a “generally comprehensive” list of the data sets used and sources.

“For us, the [transparency] template is the most important thing and what we have seen so far, which had been presented in the context of the code, is absolutely not meeting the required threshold,” says Lodovico Benvenuti, managing director of IFPI’s European office. “The act’s obligations on transparency are not only possible but they are needed in order to build a fair and competitive licensing market.”

“Unless we get detailed transparency, we won’t know what works have been used and if that happens most of this obligation will become an empty promise,” agrees IMPALA’s Philibert. “We hear claims [from the European Commission] that the training data is protected as a trade secret. But it’s not a trade secret to say: ‘This is what I trained on.’ The trade secret is how they put together their models, not the ingredients.”

“The big tech companies do not want to disclose [training data] because if they disclose, you will be able to understand if copyrighted material [has been used]. This is why they are trying to dilute this [requirement],” Brando Benifei, a Member of the European Parliament (MEP) and co-rapporteur of the AI Act, tells Billboard. Benifei is co-chair of a working group focused on the implementation of the AI Act and says that he and colleagues are urging policymakers to make sure that the final legislation achieves its overarching aim of defending creators’ rights.

“We think it is very important in this moment to protect human creativity, including the music sector,” warns Benifei, who this week co-hosted a forum in Brussels that brought together voices from music and other media to warn that current AI policies could erode copyright protections and compromise cultural integrity. Speakers, including ABBA member and CISAC president Björn Ulvaeus and Universal Music France CEO Olivier Nusse, stressed that AI must support — and not replace — human creativity, and criticized the lack of strong transparency requirements in AI development. They emphasized that AI-generated content should not be granted the same legal standing as human-created works. The event aligned with the “Stay True to the Act, Stay True to Culture” campaign, which advocates for equitable treatment and fair compensation for creators.   

“A lot is happening, almost around the clock, in front of and behind the scenes,” ICMP’s Phelan tells Billboard. He says he and other creator groups are “contesting hard” with the EU’s executive branch, the European Commission, to achieve the transparency standards required by the music business.    

“The implementation process doesn’t redefine the law or reduce what was achieved within the AI Act,” says Phelan. “But it does help to create the enforcement tools and it’s those tools which we are concerned about.”

Source: Anadolu / Getty

On Monday (May 19), the biotech giant Regeneron Pharmaceuticals announced that it is acquiring the DNA testing company 23andMe, which was put up for auction after declaring bankruptcy. Regeneron is best known for its antibody drug cocktail, which was used by President Donald Trump when he fell ill with COVID-19 during his first term.

23andMe was a dominant product as genetic testing gained public acceptance in recent years, with customers buying its test kits to learn more about their family ancestry and traits as well as potential health risks. It reached a peak valuation of $6 billion after going public in 2021. The company declined afterward, due to a combination of fewer customers buying kits and a 2023 data breach that exposed the genetic information of seven million of its customers.

A class-action lawsuit was subsequently filed against 23andMe, with litigants claiming they were never notified of the breach. Last September, seven members of its board resigned, citing a lack of confidence in chief executive Anne Wojcicki, according to The New York Times. The company filed for bankruptcy in March to “facilitate a sale process to maximize the value of its business,” and Wojcicki resigned during that period as well.

The deal is worth $256 million and will give Regeneron full control of 23andMe’s business assets, including its Personal Genome Service, Total Health and Research Services divisions. It is subject to review by a corporate privacy ombudsman, who will deliver the final report on June 10. Approval must then come under federal antitrust laws and from the U.S. Bankruptcy Court for the Eastern District of Missouri, where a hearing is set for June 17. If all proceeds according to plan, the sale would close in the third quarter of 2025.

Regeneron has already pledged to keep customers’ data and associated materials secure. “[W]e want to let 23andMe customers know that we are committed to protecting this dataset with our high standards of data privacy, security and ethical oversight. We believe that Regeneron has the right expertise, leadership and vision to revitalize 23andMe, protect its existing customer base and benefit society overall,” the company said in a statement.

All products and services featured are independently chosen by editors. However, Billboard may receive a commission on orders placed through its retail links, and the retailer may receive certain auditable data for accounting purposes.
If you’re looking to improve your home audio on the cheap, you don’t have to pay an arm and a leg for premium sound. Start with this cinema audio system from LG, especially since it’s on sale for almost 60% off the list price.

Available on Amazon, the LG S80QR Dolby Atmos Soundbar System is on sale for $446.99 — $650 off its list price. In addition, the home soundbar system has a 4.2 out of 5-star rating from more than 335 of the retail giant’s shoppers.

LG’s home theater soundbar system comes with the soundbar itself, a wireless subwoofer for additional bass and rumble, a pair of wireless rear satellites for surround sound, and a wireless hub. It’s a 5.1.3-channel audio setup (five main channels, one subwoofer channel and three up-firing height channels) with Dolby Atmos and DTS:X audio support.

As a result, audio from music, movies, TV shows and video games is bigger, cleaner, crisper and sharper, and multiple ports provide connectivity options, including HDMI and Bluetooth for pairing smartphones, tablets and laptops.

If you’re an Amazon Prime member, you can order now and have the LG S80QR Dolby Atmos Soundbar System delivered to your home in as little as two days, thanks to Prime Delivery.

Not a member? Sign up for a 30-day free trial to take advantage of all that Amazon Prime has to offer, including access to Amazon Music for online music streaming, Prime Video and Prime Gaming; fast free shipping in less than two days with Prime Delivery; in-store discounts at Whole Foods Market; access to exclusive shopping events — such as Prime Day and Black Friday — and much more. Learn more about Amazon Prime and its benefits here.

Marked down to $446.99 (regularly $1,096.99), the LG S80QR Dolby Atmos Soundbar System is available on Amazon.

Want more? For more product recommendations, check out our roundups of the best Xbox deals, studio headphones and Nintendo Switch accessories.

Source: Epic Games / Fortnite AI-Powered Darth Vader / Getty Images / James Earl Jones
Did Epic Games cross the line with its AI-powered Darth Vader companion that uses James Earl Jones’ voice? Gamers are currently debating.
The iconic actor James Earl Jones passed away last year, but his contributions to cinema, especially his unmistakable voice, will forever be linked to the Star Wars villain Darth Vader.

Starting today, Walt Disney Co. and Epic Games will allow Fortnite players to recruit and speak to Darth Vader.

Thanks to the power of generative AI, the Vader in-game character will use the late actor’s voice. Epic Games said they are doing so while remaining in “close consultation” with Jones’ family.

How Epic Games Is Bringing James Earl Jones To Fortnite
Epic Games uses Google’s Gemini 2.0 Flash model to generate Vader’s responses to players, while ElevenLabs’ Flash v2.5 model generates Jones’ voice.
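Epic has not published implementation details, but the two-model pipeline described above amounts to chaining a text generator with a voice synthesizer. The sketch below uses invented placeholder functions rather than real SDK calls:

```python
def generate_reply(player_line: str) -> str:
    # Placeholder for a call to a text model (e.g., Gemini 2.0 Flash),
    # prompted to stay in character as Darth Vader.
    return f"[in-character reply to: {player_line!r}]"

def synthesize_voice(text: str) -> bytes:
    # Placeholder for a call to a licensed voice model (e.g., ElevenLabs
    # Flash v2.5) trained, with the estate's consent, on the actor's voice.
    return text.encode("utf-8")  # stands in for audio bytes

def vader_responds(player_line: str) -> bytes:
    # Stage 1: generate the reply text; stage 2: render it as speech.
    return synthesize_voice(generate_reply(player_line))
```

In production, each placeholder would be a network call, with latency budgets and content filtering (see the post-launch profanity patch below) layered on top.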
“James Earl felt that the voice of Darth Vader was inseparable from the story of Star Wars, and he always wanted fans of all ages to continue to experience it,” the family of James Earl Jones said in a statement. “We hope that this collaboration with Fortnite will allow both longtime fans of Darth Vader and newer generations to share in the enjoyment of this iconic character.”
This decision marks a shift for the entertainment industry in situations like this; in previous Star Wars games, actors were hired to mimic Jones’ voice while he was still alive.
“Epic Games and Disney have worked together to thoughtfully develop this innovative feature with a strong focus on transparency, consent, and safety — ensuring that creators, Disney IP, and players are protected in interactive experiences,” Disney and Epic said in a statement announcing the decision.
As with anything, nothing goes off without a hitch. IGN reported that Epic Games had to issue a patch within an hour of AI Darth Vader going live, after videos of him dropping f-bombs hit timelines on X, formerly Twitter.

Some gamers aren’t keen on conversing with an AI-powered chatbot in Jones’ voice, either.
“We deadass having a conversation with James Earl Jones via AI…? Yea you can keep it,” one post on X read. 
Welp, it looks like it’s not going anywhere, and as long as they got permission from the Jones family, we don’t see an issue, but this is still opening a can of worms for many.

Source: RICHARD A. BROOKS / Getty / PlayStation 5
Sony announced its financial forecast for the next year, and the company expects Donald Trump’s tariffs to impact its bottom line greatly.
Spotted on The Verge, Sony expects Trump’s stupid tariffs to dent its financial forecast by 100 billion yen (about $680 million). The company is weighing different options to counter them, including moving manufacturing to the United States, which could raise the price of PS5 consoles.

Sony CFO Lin Tao confirmed during a company earnings call that the company is considering “passing on” the cost of the tariffs to consumers. Tao did not mention the PlayStation 5 by name during the call, signaling that the company could look to raise prices elsewhere to avoid raising the price of the PS5 console.
The PlayStation 5 has already seen price increases outside of the United States in Australia, the UK, New Zealand, and Europe.
While Tao didn’t mention the console, CEO Hiroki Totoki did when discussing possibly moving PS5 production to the US to minimize the effect of Donald Trump’s tariffs.
Totoki said the PS5 console “can be produced locally,” calling it “an efficient strategy” that “has to be considered going forward.”

Sony Still Has To Worry About The 30 Percent Tariff On China
Sony still makes the PS5 in China, and even after the 90-day pause, the 30 percent tariff on Chinese imports (a significant reduction from 145 percent) is still well above the 10 percent rate applied to imports into the United States from most other countries.
If Sony does decide to raise its prices, it won’t be the first company to do so. Microsoft recently raised the price of its Xbox consoles by $100, while Nintendo has kept the price of the Switch 2 the same, opting to raise the prices of accessories instead. 
We may learn soon enough if Sony does raise the price of the PS5. Until then, this is one more piece of unfortunate news we can all blame on Donald Trump.


Source: Samsung / Samsung Galaxy S25 Edge
Samsung isn’t done dropping entries into its flagship smartphone line.
Yesterday, Samsung unveiled its thinnest smartphone yet, the Galaxy S25 Edge, giving the Korean tech giant a leg up on the competition, including Apple, which is reportedly set to drop a thin flagship smartphone of its own, the iPhone 17 Air.

The Galaxy S25 Edge lives up to its name, coming in at 5.8 millimeters thin and weighing only 163 grams, making it one of the thinnest smartphones on the market.

The S25 Edge, which will cost $1,099 and officially goes on sale May 30, comes just four months after the launch of the S25 smartphone series. That’s a different path for Samsung, which doesn’t usually introduce new smartphones this early, typically opting for the middle of the year to launch the latest models of its foldable smartphones.

The S25 Edge is basically a lite model of the S25 Ultra, and it’s for the Samsung fan who loves a premium phone but doesn’t care for features like the S-Pen.
The thin smartphone features a durable titanium build paired with the latest Corning Gorilla Glass Ceramic 2. It also has two premium cameras: a 200MP wide lens and a 12MP ultra-wide sensor with autofocus, allowing for detailed macro photography and Nightography.
Under the hood is the powerful Snapdragon 8 Elite Mobile Platform for Galaxy, which delivers snappy performance, on-device AI processing, and features like the Now Brief and Now Bar.
Here is the full breakdown via Samsung:

6.7″ Dynamic AMOLED 2X display
Slim and lightweight body at 5.8mm thick and 163 grams
Enhanced camera featuring a 200MP wide lens and 12MP ultra-wide sensor
ProVisual Engine optimized for Galaxy S25
Built with the latest Corning Gorilla Armor 2
AI image processing with ProScaler and Digital Natural Image engine (mDNIe)
Galaxy AI features including Now Brief and Now Bar, plus editing features including Audio Eraser and Drawing Assist
Thinner, reconfigured vapor chamber to keep the phone cool
Snapdragon 8 Elite Mobile Platform for Galaxy
Available starting at $1,099.99 in Titanium Silver, Titanium Jetblack and Titanium Icyblue

Again, we’re not sure who the market is for the S25 Edge, but it’s definitely appealing thanks to its features.



Rosie Greenway / GTA 4

GTA VI won’t arrive until next year, but there are now rumors of a remaster of one of the most underrated entries in the Grand Theft Auto franchise, and a former developer feels it needs to happen.

Word on the video game streets is that 2008’s GTA IV is getting a remaster for current consoles, and ex-Rockstar Games technical director Obbe Vermeij would love to see Niko Bellic make a triumphant return.

Vermeij, who worked at DMA Design, which later became Rockstar North, from 1995 until 2009 on titles like Space Station Silicon Valley, Manhunt, and other fan-favorite GTA titles, was asked how he feels about a remaster of GTA IV.

“I hadn’t heard those rumours. I think GTA 4 should be remastered. It’s a great game and there have been a number of successful remasters recently,” he told the person on X, formerly Twitter. 

Unlike other studios, Rockstar Games doesn’t often get into its remastering bag. It did make an attempt with Grand Theft Auto: The Trilogy – The Definitive Edition, which was a disaster at launch.

Vermeij has spoken on that lackluster remaster in the past, noting, “It would be better if Rockstar did quality remasters of their classic games.”

No lies detected.

Again, there is no confirmation that Rockstar Games is working on a remaster of GTA 4. Honestly, based on the studio’s track record, we won’t be shocked if the game is just a straight port with minimal improvements, much like other classic titles brought back such as L.A. Noire and Red Dead Redemption.

We got our fingers crossed on this one.

The U.K. government’s plans to allow artificial intelligence firms to use copyrighted work, including music, have been dealt another setback by the House of Lords.
An amendment to the data bill requiring AI companies to disclose the copyrighted works their models are trained on was backed by peers in the upper chamber of U.K. Parliament, despite government opposition.

The U.K. government has proposed an “opt out” approach for copyrighted material, meaning that the creator or owner must explicitly choose for their work not to be eligible for training AI models. The amendment was tabled by crossbench peer Beeban Kidron and passed by 272 votes to 125 on Monday (May 12).

The data bill will now return to the House of Commons, though the government could remove Kidron’s amendment and send the bill back to the House of Lords next week.

Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.

“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn ($158bn) to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”

The “opt out” move has proved unpopular with many in the creative fields, particularly in the music space. Prior to the vote, over 400 British musicians including Elton John, Paul McCartney, Dua Lipa, Coldplay, Kate Bush and more signed an open letter calling on U.K. prime minister Sir Keir Starmer to update copyright laws to protect their work from AI companies. 

The letter said that such an approach would threaten “the UK’s position as a creative powerhouse,” and signatories included major players such as Sir Lucian Grainge (Universal Music Group CEO), Jason Iley MBE (Sony Music UK CEO), Tony Harlow (Warner Music UK CEO) and Dickon Stainer (Universal Music UK CEO).

A spokesperson for the government responded to the letter, saying: “We want our creative industries and AI companies to flourish, which is why we’re consulting on a package of measures that we hope will work for both sectors.”

They added: “We’re clear that no changes will be considered unless we are completely satisfied they work for creators.”

Sophie Jones, chief strategy officer for the BPI, said: “The House of Lords has once again taken the right decision by voting to establish vital transparency obligations for AI companies. Transparency is crucial in ensuring that the creative industries can retain control over how their works are used, enabling both the licensing and enforcement of rights. If the Government chooses to remove this clause in the House of Commons, it would be preventing progress on a fundamental cornerstone which can help build trust and greater collaboration between the creative and tech sectors, and it would be at odds with its own ambition to build a licensing market in the UK.”

On Friday (May 9), SoundCloud encountered user backlash after AI music expert and Fairly Trained founder Ed Newton-Rex posted on X that SoundCloud’s terms of service had been quietly changed in February 2024 to allow user content to “inform, train, develop or serve as input” to AI models. Over the weekend, SoundCloud clarified via a statement, originally sent to The Verge and also obtained by Billboard, that reads in part: “SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes.”
The streaming service added that the change was made last year “to clarify how content may interact with AI technologies within SoundCloud’s own platform,” including AI-powered personalized recommendation tools and streaming fraud detection, and that it did not mean SoundCloud was allowing external AI companies to train on its users’ songs.


SoundCloud seems to claim the right to train on people’s uploaded music in their terms. I think they have major questions to answer over this. I checked the wayback machine – it seems to have been added to their terms on 12th Feb 2024. I’m a SoundCloud user and I can’t see any… pic.twitter.com/NIk7TP7K3C

— Ed Newton-Rex (@ednewtonrex) May 9, 2025

Over the years, SoundCloud has announced various partnerships with AI companies, including its acquisition of Singapore-based AI music curation company Musiio in 2022. SoundCloud’s statement added, “Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.” SoundCloud also has integrations in place with AI firms like Tuney, Voice-Swap, Fadr, Soundful, Tuttii, AIBeatz, TwoShot, Starmony and ACE Studio, and it has teamed up with content identification companies Pex and Audible Magic to ensure these integrations provide rights holders with proper credit and compensation.

The company doesn’t entirely rule out the possibility that users’ works will be used for AI training in the future, but says “no such use has taken place to date,” adding that “SoundCloud will introduce robust internal permissioning controls to govern any potential future use. Should we ever consider using user content to train generative AI models, we would introduce clear opt-out mechanisms in advance—at a minimum—and remain committed to transparency with our creator community.”

Read the full statement from SoundCloud below.

“SoundCloud has always been and will remain artist-first. Our focus is on empowering artists with control, clarity, and meaningful opportunities to grow. We believe AI, when developed responsibly, can expand creative potential—especially when guided by principles of consent, attribution, and fair compensation.

SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes. In fact, we implemented technical safeguards, including a “no AI” tag on our site to explicitly prohibit unauthorized use.

The February 2024 update to our Terms of Service was intended to clarify how content may interact with AI technologies within SoundCloud’s own platform. Use cases include personalized recommendations, content organization, fraud detection, and improvements to content identification with the help of AI Technologies.

Any future application of AI at SoundCloud will be designed to support human artists, enhancing the tools, capabilities, reach and opportunities available to them on our platform. Examples include improving music recommendations, generating playlists, organizing content, and detecting fraudulent activity. These efforts are aligned with existing licensing agreements and ethical standards. Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.

We understand the concerns raised and remain committed to open dialogue. Artists will continue to have control over their work, and we’ll keep our community informed every step of the way as we explore innovation and apply AI technologies responsibly, especially as legal and commercial frameworks continue to evolve.”