artificial intelligence

SoundCloud CEO Eliah Seton has today (May 14) published an open letter clarifying the company’s position on AI.
The letter follows backlash last week after AI music expert and Fairly Trained founder Ed Newton-Rex posted that SoundCloud’s terms of service had quietly changed in February 2024 to allow user content to “inform, train, develop or serve as input” to AI models.

In his letter, Seton repeats what SoundCloud shared in a statement last week, noting that the platform “has never used artist content to train AI models. Not for music creation. Not for large language models. Not for anything that tries to mimic or replace your work. Period. We don’t build generative AI tools, and we don’t allow third parties to scrape or use artist content from SoundCloud to train them either.”

The letter then goes on to directly address the 2024 Terms of Service changes, which were made, Seton writes, “to clarify how we may use AI internally to improve the platform for both artists and fans. This includes powering smarter recommendations, search, playlisting, content tagging and tools that help prevent fraud.”

But he acknowledges that “the language in the Terms of Use was too broad and wasn’t clear enough. It created confusion, and that’s on us. That’s why we’re fixing it.” He writes that the company is revising the Terms of Use to make it “absolutely clear” that “SoundCloud will not use your content to train generative AI models that aim to replicate or synthesize your voice, music, or likeness.” He notes that the updated terms will be reflected online in the coming weeks.

Seton adds that given the rapidly changing landscape, “If there is an opportunity to use generative AI for the benefit of our human artists, we may make this opportunity available to our human artists with their explicit consent, via an opt-in mechanism. We don’t know what we don’t know, and we have a responsibility to give our human artists the opportunities, choices and control to advance their creative journeys.”

Finally, he notes that the platform is “making a formal commitment that any use of AI on SoundCloud will be based on consent, transparency and artist control.”

Read his complete letter here.  

The U.K. government’s plans to allow artificial intelligence firms to use copyrighted work, including music, have been dealt another setback by the House of Lords.
An amendment to the data bill which required AI companies to disclose the copyrighted works their models are trained on was backed by peers in the upper chamber of U.K. Parliament, despite government opposition.

The U.K. government has proposed an “opt out” approach to copyrighted material, meaning the creator or owner must explicitly choose for their work not to be used to train AI models. The amendment was tabled by crossbench peer Beeban Kidron and passed by 272 votes to 125 on Monday (May 12).

The data bill will now return to the House of Commons, though the government could remove Kidron’s amendment and send the bill back to the House of Lords next week.

Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.

“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn ($158bn) to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”

The “opt out” move has proved unpopular with many in the creative fields, particularly in the music space. Prior to the vote, more than 400 British musicians, including Elton John, Paul McCartney, Dua Lipa, Coldplay and Kate Bush, signed an open letter calling on U.K. prime minister Sir Keir Starmer to update copyright laws to protect their work from AI companies.

The letter said that such an approach would threaten “the UK’s position as a creative powerhouse,” and signatories included major players such as Sir Lucian Grainge (Universal Music Group CEO), Jason Iley MBE (Sony Music UK CEO), Tony Harlow (Warner Music UK CEO) and Dickon Stainer (Universal Music UK CEO).

A spokesperson for the government responded to the letter, saying: “We want our creative industries and AI companies to flourish, which is why we’re consulting on a package of measures that we hope will work for both sectors.”

They added: “We’re clear that no changes will be considered unless we are completely satisfied they work for creators.”

Sophie Jones, chief strategy officer for the BPI, said: “The House of Lords has once again taken the right decision by voting to establish vital transparency obligations for AI companies. Transparency is crucial in ensuring that the creative industries can retain control over how their works are used, enabling both the licensing and enforcement of rights. If the Government chooses to remove this clause in the House of Commons, it would be preventing progress on a fundamental cornerstone which can help build trust and greater collaboration between the creative and tech sectors, and it would be at odds with its own ambition to build a licensing market in the UK.”

On Friday (May 9), SoundCloud encountered user backlash after AI music expert and Fairly Trained founder Ed Newton-Rex posted on X that SoundCloud’s terms of service had quietly changed in February 2024 to allow user content to “inform, train, develop or serve as input” to AI models. Over the weekend, SoundCloud clarified via a statement, originally sent to The Verge and also obtained by Billboard, that reads in part: “SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes.”
The streaming service adds that this change was made last year “to clarify how content may interact with AI technologies within SoundCloud’s own platform,” including AI-powered personalized recommendation tools, streaming fraud detection, and more, and it apparently did not mean that SoundCloud was allowing external AI companies to train on its users’ songs.

“SoundCloud seems to claim the right to train on people’s uploaded music in their terms. I think they have major questions to answer over this. I checked the wayback machine – it seems to have been added to their terms on 12th Feb 2024. I’m a SoundCloud user and I can’t see any… pic.twitter.com/NIk7TP7K3C” — Ed Newton-Rex (@ednewtonrex), May 9, 2025

Over the years, SoundCloud has announced various partnerships with AI companies, including its acquisition of Singapore-based AI music curation company Musiio in 2022. SoundCloud’s statement added, “Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.” SoundCloud also has integrations in place with AI firms like Tuney, Voice-Swap, Fadr, Soundful, Tuttii, AIBeatz, TwoShot, Starmony and ACE Studio, and it has teamed up with content identification companies Pex and Audible Magic to ensure these integrations provide rights holders with proper credit and compensation.

The company doesn’t totally rule out the possibility that users’ works will be used for AI training in the future, but says “no such use has taken place to date,” adding that “SoundCloud will introduce robust internal permissioning controls to govern any potential future use. Should we ever consider using user content to train generative AI models, we would introduce clear opt-out mechanisms in advance—at a minimum—and remain committed to transparency with our creator community.”

Read the full statement from SoundCloud below.

“SoundCloud has always been and will remain artist-first. Our focus is on empowering artists with control, clarity, and meaningful opportunities to grow. We believe AI, when developed responsibly, can expand creative potential—especially when guided by principles of consent, attribution, and fair compensation.

SoundCloud has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes. In fact, we implemented technical safeguards, including a “no AI” tag on our site to explicitly prohibit unauthorized use.

The February 2024 update to our Terms of Service was intended to clarify how content may interact with AI technologies within SoundCloud’s own platform. Use cases include personalized recommendations, content organization, fraud detection, and improvements to content identification with the help of AI Technologies.

Any future application of AI at SoundCloud will be designed to support human artists, enhancing the tools, capabilities, reach and opportunities available to them on our platform. Examples include improving music recommendations, generating playlists, organizing content, and detecting fraudulent activity. These efforts are aligned with existing licensing agreements and ethical standards. Tools like Musiio are strictly used to power artist discovery and content organization, not to train generative AI models.

We understand the concerns raised and remain committed to open dialogue. Artists will continue to have control over their work, and we’ll keep our community informed every step of the way as we explore innovation and apply AI technologies responsibly, especially as legal and commercial frameworks continue to evolve.”

On Friday afternoon, the U.S. Copyright Office released a report examining copyrights and generative AI training, which supported the idea of licensing copyrights when they are used in commercial AI training.
On Saturday (May 10), the nation’s top copyright official – Register of Copyrights Shira Perlmutter – was terminated by President Donald Trump. Her dismissal came shortly after the firing of the Librarian of Congress, Carla Hayden, who had appointed and supervised Perlmutter. In response, Rep. Joe Morelle (D-NY) of the House Administration Committee, which oversees the Copyright Office and the Library of Congress, said that he feels it is “no coincidence [Trump] acted less than a day after [Perlmutter] refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.”

The report was largely seen as a win for copyright owners in the music industry, and it staked out three key positions: the Office’s support for licensing copyrighted material when a “commercial” AI model uses it for training, its dismissal of compulsory licensing as the right framework for a future licensing model, and its rejection of “the idea of any opt-out approach.”

The Office affirms that in “commercial” cases, licensing copyrights for training could be a “practical solution” and that using copyrights without a license “[goes] beyond established fair use boundaries.” It also notes that some commercial AI models “compete with [copyright owners] in existing markets.” However, if an AI model has been created for “purposes such as analysis or research – the types of uses that are critical to international competitiveness,” the Office says “the outputs are unlikely to substitute” for the works on which they were trained.

“In our view, American leadership in the AI space would best be furthered by supporting both of these world-class industries that contribute so much to our economic and cultural advancement. Effective licensing options can ensure that innovation continues to advance without undermining intellectual property rights,” the report reads.

While it is supportive of licensing efforts between copyright owners and AI firms, the report recognizes that most stakeholders do not hold support “for any statutory change” or “government intervention” in this area. “The Office believes…[that] would be premature at this time,” the report reads. Later, it adds “we agree with commenters that a compulsory licensing regime for AI training would have significant disadvantages. A compulsory license establishes fixed royalty rates and terms and can set practices in stone; they can become inextricably embedded in an industry and become difficult to undo. Premature adoption also risks stifling the development of flexible and creative market-based solutions. Moreover, compulsory licenses can take years to develop, often requiring painstaking negotiation of numerous operational details.”

The Office notes the perspectives of music-related organizations, like the National Music Publishers’ Association (NMPA), American Association of Independent Music (A2IM), and Recording Industry Association of America (RIAA), which all hold a shared distaste for the idea of a future compulsory or government-controlled license for AI training. Already, the music industry deals with a compulsory license for mechanical royalties, allowing the government to control rates for one of the types of royalties earned from streaming and sales.

“Most commenters who addressed this issue opposed or raised concerns about the prospect of compulsory licensing,” the report says. “Those representing copyright owners and creators argued that the compulsory licensing of works for use in AI training would be detrimental to their ability to control uses of their works, and asserted that there is no market failure that would justify it. A2IM and RIAA described compulsory licensing as entailing ‘below-market royalty rates, additional administrative costs, and… restrictions on innovation’… and NMPA saw it as ‘an extreme remedy that deprives copyright owners of their right to contract freely in the market, and takes away their ability to choose whom they do business with, how their works are used, and how much they are paid.’”

The Office leaves it to copyright owners and AI companies to figure out the right way to license and compensate for training data, but it does explore a few options. These include “compensation structures based on a percentage of revenue or profits,” and if the free market fails to produce a workable licensing solution, the report suggests that “targeted intervention such as [extended collective licensing (ECL)] should be considered.”

ECL, which is employed in some European countries, would allow a collective management organization (CMO) to issue and administer blanket licenses for “all copyrighted works within a particular class,” much like the music industry is already accustomed to with organizations like The MLC (The Mechanical Licensing Collective) and performing rights organizations (PROs) like ASCAP and BMI. The difference between an ECL and a traditional CMO, however, is that under an ECL system, the CMO can license on behalf of rights holders who have not affirmatively joined it. Though these ECL licenses are still negotiated in a “free market,” the government would “regulat[e] the overall system and exercis[e] some degree of oversight.”

While some AI firms expressed concerns that blanket licensing by copyright holders would lead to antitrust issues, the Copyright Office sided with copyright holders, saying “[the] courts have found that there is nothing intrinsically anticompetitive about the collective, or even blanket, licensing of copyrighted works, as long as certain safeguards are incorporated— such as ensuring that licensees can still obtain direct licenses from copyright owners as an alternative.”

This is a “pre-publication” version of a forthcoming final report, which will be published in the “near future without any substantive changes expected,” according to the Copyright Office. The Office noted this “pre-publication” was pushed out early in an attempt to address inquiries from Congress and key stakeholders.

It marks the Office’s third report about generative AI and its impact on copyrights since it launched an initiative on the matter in 2023. The first report, released July 31, 2024, focused on the topic of digital replicas. The second, from Jan. 29, 2025, addressed the copyrightability of outputs created with generative AI.

Udio, a generative AI music company backed by will.i.am, Common and a16z, has partnered with Audible Magic to fingerprint all tracks made using the platform at the moment they are created and to check the generated works, using Audible Magic’s “content control pipeline,” for any infringing copyrighted material.
By doing this, Udio and Audible Magic have created a way for streaming services and distributors to trace which songs submitted to their platforms are made with Udio’s AI. The company also aims to proactively detect and block use of copyrighted material that users don’t own or control.
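
Audible Magic’s actual integration and Udio’s pipeline are not public, so the following is only a conceptual sketch of what “fingerprinting at the point of generation” implies: derive a compact identifier from the rendered audio before it leaves the service, record its provenance in a shared registry, and let downstream platforms match uploads against that registry. Every name in the sketch (fingerprint, Registry, register_generated_track) is hypothetical, and a cryptographic hash stands in for a real acoustic fingerprint.

```python
# Conceptual sketch only -- not Audible Magic's API or Udio's implementation.
# A real system would use perceptual/acoustic fingerprints that survive
# re-encoding; sha256 is used here purely to keep the example self-contained.
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackRecord:
    fingerprint: str   # identifier derived from the audio itself
    source: str        # e.g. "udio", marking the track as AI-generated
    track_id: str

class Registry:
    """In-memory stand-in for a shared fingerprint registry."""
    def __init__(self) -> None:
        self._by_fingerprint: dict[str, TrackRecord] = {}

    def add(self, record: TrackRecord) -> None:
        self._by_fingerprint[record.fingerprint] = record

    def lookup(self, fingerprint: str) -> Optional[TrackRecord]:
        return self._by_fingerprint.get(fingerprint)

def fingerprint(audio_bytes: bytes) -> str:
    return hashlib.sha256(audio_bytes).hexdigest()

def register_generated_track(registry: Registry, audio_bytes: bytes, track_id: str) -> None:
    """Fingerprint a track at the moment it is generated and record its provenance."""
    registry.add(TrackRecord(fingerprint(audio_bytes), source="udio", track_id=track_id))

def is_ai_generated(registry: Registry, uploaded_audio: bytes) -> bool:
    """What a distributor or streaming service could check at upload time."""
    return registry.lookup(fingerprint(uploaded_audio)) is not None
```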

“Working with Audible Magic allows us to create a transparent signal in the music supply chain. By fingerprinting at the point of generation, we’re helping establish a new benchmark for accountability and clarity in the age of generative music,” says Andrew Sanchez, co-founder of Udio. “We believe that this partnership will open the door for new licensing structures and monetization pathways that will benefit stakeholders across the industry from artists to rights holders to technology platforms.”

Last summer, Udio and its top competitor Suno were both sued by the three major record companies for training their AI music models on the companies’ copyrighted master recordings. In the lawsuits, the majors argued this constituted copyright infringement “at an almost unimaginable scale.” Additionally, the lawsuits pointed out that the resulting AI-generated songs from Udio and Suno could “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”

Udio’s new partnership with Audible Magic stops short of promising to eliminate copyrighted material from its training process, as the majors want, but it shows that Udio is trying out alternative solutions to appease the music establishment. Suno also has a partnership with Audible Magic, announced in October 2024, but the two deals hold key differences. Suno’s integration focuses more specifically on its “audio inputs” and “covers” features, which allow users to generate songs based on an audio file they upload. With Audible Magic’s technology, Suno prevents users from uploading copyrighted material without authorization.

“This partnership demonstrates Udio’s substantial commitment to rights holder transparency and content provenance,” says Kuni Takahashi, CEO of Audible Magic. “Registering files directly from the first-party source is a clean and robust way to identify the use of AI-generated music in the supply chain.”

BeatStars has partnered with Sureel, an AI music detection and attribution company, to provide its creators with the ability to express their desire to “opt out” of their works being used in AI training.
To date, AI music companies in the United States are not required to honor opt-outs, but through this partnership, Sureel and BeatStars, the world’s largest music marketplace, hope to create clarity for AI companies wishing to avoid legal and reputational risks, as well as a digital ledger that keeps track of beatmakers’ wishes regarding AI training.

Here’s how it works: BeatStars will send formal opt-out notices for every music asset and artist on its platform, and all of the creators’ choices will be documented on a portal that any AI company can access. By default, all tracks will be marked as shielded from AI training unless permission is granted. Companies can also access creators’ wishes using Sureel’s API. The system will also automatically communicate creators’ preferences via a robots.txt file, a standard mechanism for blocking AI companies that crawl the web for new training data.
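
As a rough illustration of that last mechanism, the sketch below builds a robots.txt that disallows AI-training crawlers from a catalog path. Neither BeatStars nor Sureel has published its actual directives; the path, helper function and crawler list here are assumptions, though the user-agent tokens shown (GPTBot, CCBot, Google-Extended, ClaudeBot) are real ones published by their operators.

```python
# Illustrative sketch only: not BeatStars' or Sureel's actual robots.txt.
# The protected path and helper are hypothetical; the user-agent names are
# real tokens that major AI-training crawlers identify themselves with.

AI_TRAINING_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "ClaudeBot"]

def build_robots_txt(protected_path: str = "/tracks/") -> str:
    """Return robots.txt text that disallows AI-training crawlers from a path."""
    lines = []
    for agent in AI_TRAINING_CRAWLERS:
        lines.append(f"User-agent: {agent}")
        lines.append(f"Disallow: {protected_path}")
        lines.append("")  # blank line between rule groups
    # Ordinary crawlers (search engines, etc.) remain unrestricted.
    lines.append("User-agent: *")
    lines.append("Allow: /")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_robots_txt())
```

Since robots.txt is advisory rather than enforceable, the portal and API described above matter: they give AI companies a positive record of each creator’s choice instead of relying on crawlers to behave.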

As the U.S. — and countries around the world — continue to debate how to properly regulate issues related to AI, start-ups in the private sector, like Sureel, are trying to find faster solutions, including tools for opting in and out of AI training, detection technology to flag and verify AI generated works, and more.

“This partnership is an extension of our longstanding commitment to put creators first,” said Abe Batshon, CEO of BeatStars, in a statement. “We recognize that some AI companies might not respect intellectual property, so we are taking definitive action to ensure our community’s work remains protected and valued. Ethical AI is the future, and we’re leading the charge in making sure creators are not left behind.”

“BeatStars isn’t just a marketplace — it’s one of the most important creator communities in the world,” added Dr. Tamay Aykut, founder/CEO of Sureel. “They’ve built their platform around trust, transparency, and putting artists in control. That’s exactly the type of environment where our technology belongs. This partnership proves you can scale innovation and ethics together — and shows the rest of the industry what responsible AI collaboration looks like.”

French streaming service Deezer reported in a company blog post on Wednesday (April 16) that it is now receiving over 20,000 fully AI-generated tracks on a daily basis, amounting to 18% of its daily uploaded content — nearly double what it reported in January 2025.
Back in January, Deezer launched a new AI detection tool to try to balance the interests of human creators with the rapidly growing number of AI-generated tracks uploaded to the service. At the time of the tool’s launch, Deezer said it had discovered that roughly 10,000 fully AI-generated tracks were being uploaded to the platform every day. Rather than banning these fully AI-generated tracks, Deezer uses its AI detection tool to remove them from its recommendation algorithm and editorial playlisting — meaning users can still find AI-generated music if they choose to, though it won’t be promoted to them.
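
As a minimal sketch of how such a policy can work in practice (assumptions only, not Deezer’s actual implementation), a detection flag set when a track is ingested can gate recommendation and editorial surfaces while leaving search untouched:

```python
# Minimal sketch, not Deezer's code: a per-track AI flag controls which
# surfaces a track appears on. Detection itself is out of scope here.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    fully_ai_generated: bool  # set by a detection model at ingestion time

def recommendable(tracks: list[Track]) -> list[Track]:
    """Tracks eligible for algorithmic recommendations and editorial playlists."""
    return [t for t in tracks if not t.fully_ai_generated]

def search(tracks: list[Track], query: str) -> list[Track]:
    """Search is unaffected: flagged tracks remain findable on demand."""
    return [t for t in tracks if query.lower() in t.title.lower()]
```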

The tool may even be underestimating the number of AI tracks on Deezer. At launch, the company noted that the tool can detect fully AI-generated music from certain models, including Suno and Udio, two of the most popular AI music models on the market today, “with the possibility to add on detection capabilities for practically any other similar tool as long as there’s access to relevant data examples,” as the company put it. Still, it’s possible there’s more AI-generated music on the platform than the tool can currently catch.

Deezer’s tool also does not detect or penalize partially AI-generated works, which likely make up a significant portion of AI-inflected songs today. According to guidance from the U.S. Copyright Office, as long as “a human author has determined sufficient expressive elements,” an AI-assisted work can be eligible for copyright protection.

Deezer is one of the first streaming services to create a policy against fully AI-generated songs, and the first to report how often such tracks are uploaded to the service. As Billboard reported in February, most DSPs do not have AI-specific policies, with SoundCloud the only other streamer that has publicly stated that it penalizes AI music. Its policy is to “prohibit the monetization of songs and content that are exclusively generated through AI, encouraging creators to use AI as a tool rather than a replacement of human creation.”

Still, some other streaming services have taken steps to police some of the negative impacts of AI, even though their policies aren’t specific to AI. For example, Spotify, YouTube Music and others have created procedures for users to report impersonations of likenesses and voices, a major risk posed by (but not unique to) AI. Spotify also screens for users who spam the platform with too many uploads at once, a tactic used by bad actors trying to earn extra streaming royalties, often (though not always) by deploying quickly made AI-generated tracks.

“AI generated content continues to flood streaming platforms like Deezer, and we see no sign of it slowing down,” said Aurelien Herault, chief innovation officer at Deezer, in a statement. “Generative AI has the potential to positively impact music creation and consumption, but we need to approach the development with responsibility and care in order to safeguard the rights and revenues of artists and songwriters, while maintaining transparency for the fans. Thanks to our cutting-edge tool we are already removing fully AI generated content from the algorithmic recommendations.”

As artificial intelligence continues to blur the lines of creativity in music, South Korea’s largest music copyright organization, KOMCA (Korea Music Copyright Association), is drawing a hard line: No AI-created compositions will be accepted for registration. The controversial decision took effect on March 24, sending ripples through Korea’s music scene and sparking broader conversations about AI’s role in global songwriting.
In an official statement on its website, KOMCA explained that due to the lack of legal frameworks and clear management guidelines for AI-generated content, it will suspend the registration of any works involving AI in the creative process. This includes any track where AI was used — even in part — to compose, write lyrics or contribute melodically.

Now, every new registration must be accompanied by an explicit self-declaration confirming that no AI was involved at any stage of the song’s creation. The declaration is made by checking a box on the required registration form, and false declarations carry significant legal and financial consequences: delayed royalty payments, complete removal of songs from the registry, and even civil or criminal liability.

“KOMCA only recognizes songs that are wholly the result of human creativity,” the association said, noting that even a 1% contribution from AI makes a song ineligible for registration. “Until there is clear legislation or regulatory guidance, this is a precautionary administrative policy.”

The non-profit organization represents over 30,000 members, including songwriters, lyricists, and publishers, and oversees copyright for more than 3.7 million works from artists like PSY, BTS, EXO and Super Junior.

Importantly, the policy applies to the composition and lyric-writing stages of song creation, not necessarily the production or recording phase. That means high-profile K-pop companies like HYBE, which have used AI to generate multilingual vocal lines for existing songs, are not directly affected — at least not yet.

While South Korea’s government policy allows for partial copyright protection when human creativity is involved, KOMCA’s stance is notably stricter, requiring a total absence of AI involvement for a song to be protected.

This move comes amid growing international debate over the copyrightability of AI-generated art. In the U.S., a federal appeals court recently upheld a lower court’s decision to reject copyright registration for a work created entirely by an AI system called Creativity Machine. The U.S. Copyright Office maintains that only works with “human authorship” are eligible for protection, though it allows for copyright in cases where AI is used as a tool under human direction.

“Allowing copyright for machine-determined creative elements could undermine the constitutional purpose of copyright law,” U.S. Register of Copyrights Shira Perlmutter said.

With AI tools becoming increasingly sophisticated — and accessible — KOMCA’s policy underscores a growing tension within the global music industry: Where do we draw the line between assistance and authorship?

This article originally appeared on Billboard Korea.

HYBE is continuing to work to protect its artists. Korea’s Northern Gyeonggi Provincial Police Agency (NGPPA) worked with the global entertainment company to arrest eight individuals who are suspected of creating and distributing deepfake videos of HYBE Music Group artists, Billboard can confirm. Deepfakes are false images, videos or audio that have been edited or generated using […]

HYBE Interactive Media (HYBE IM) secured an additional KRW 30 billion ($21 million) investment, with existing investor IMM Investment contributing another KRW 15 billion ($10 million) in follow-on funding. Shinhan Venture Investment and Daesung Private Equity joined as new investors in the company, which plans to expand its game business using HYBE’s K-pop artist IPs. To date, HYBE IM has raised a total of KRW 137.5 billion ($100 million). With the new money, the company plans to enhance its publishing capabilities and execute its long-term growth strategy by allocating it to marketing, operations and localization strategies to support the launch of its gaming titles.
Live Nation acquired a stake in 356 Entertainment Group, a leading promoter in Malta’s festival and outdoor concert scene that operates the country’s largest club, Uno, which hosts more than 100 events a year. The two companies have a longstanding partnership that has resulted in events including Take That’s The Greatest Weekend Malta and Liam Gallagher and Friends Malta Weekender being held in the island country. According to a press release, 356’s festival season brought 56,000 visitors to the island, generating an economic impact of 51.8 million euros ($56.1 million). Live Nation is looking to build on that success by bringing more diverse international acts to the market.

ATC Group acquired a majority stake in indie management company, record label and PR firm Easy Life Entertainment. The company’s management roster includes Bury Tomorrow, SOTA, Bears in Trees, Lexie Carroll, Mouth Culture and Anaïs; while its label roster boasts Lower Than Atlantis, Tonight Alive, Softcult, Normandie, Amber Run, Bryde and Lonely The Brave. Its PR arm has worked on campaigns for All Time Low, 41, Deaf Havana, Neck Deep, Simple Plan, Travie McCoy and Tool.

Triple 8 Management partnered with Sureel, which provides AI attribution, detection, protection and monetization for artists. Through the deal, Triple 8 artists including Drew Holcomb & the Neighbors, Local Natives, JOHNNYSWIM, Mat Kearney and Charlotte Sands will have access to tools that allow them to opt in to or out of AI training with custom thresholds; protect their artistic styles from being used in AI training without consent by setting a time-and-date stamp behind ownership; monetize themselves in the AI ecosystem through ethical licensing that can generate revenue for them; and access real-time reporting through Sureel’s AI dashboard. Sureel makes this possible by providing AI companies “with easy-to-integrate tools to ensure responsible AI that fully respects artist preferences,” according to a press release.

Merlin signed a licensing deal with Coda Music, a new social/streaming platform that “is reimagining streaming as an interactive, artist-led experience, where fans discover music through community-driven recommendations, discussions, and exclusive content” while allowing artists “to cultivate more meaningful relationships with their audiences,” according to a press release. Through the deal, Merlin’s global membership will have access to Coda Music’s suite of social and discovery-driven features, allowing artists to engage with fan communities by sharing exclusive content and more. Users can also follow artists and fellow fans on the platform and exchange music recommendations with them.

AEG Presents struck a partnership with The Boston Beer Company that will bring the beverage maker’s portfolio of brands — including Sun Cruiser Iced Tea & Vodka, Truly Hard Seltzer, Twisted Tea Hard Iced Tea and Angry Orchard Hard Cider — to nearly 30 AEG Presents venues nationwide including Brooklyn Steel in New York, Resorts World Theatre in Las Vegas and Roadrunner in Boston, as well as festivals including Electric Forest in Rothbury, Mich., and the New Orleans Jazz & Heritage Festival.

Armada Music struck a deal with Peloton to bring an exclusive lineup of six live DJ-led classes featuring Armada artists to Peloton studios in both New York and London this year. Artists taking part include ARTY and Armin van Buuren.

Venu Holding Corporation acquired the Celebrity Lanes bowling alley in the Denver suburb of Centennial, Colo., for an undisclosed amount. It will transform the business into an indoor music hall, private rental space and restaurant.

Secretly Distribution renewed its partnership with Sufjan Stevens‘ Asthmatic Kitty Records, which has released works by Angelo De Augustine, My Brightest Diamond, Helado Negro, Linda Perhacs, Lily & Madeleine, Denison Witmer and others. Secretly will continue handling physical and digital music distribution, digital and retail marketing, and technological support for all Asthmatic Kitty releases.

Symphonic Distribution partnered with digital marketing platform SymphonyOS in a deal that will give Symphonic users discounted access to SymphonyOS via Symphonic’s client offerings page. Through SymphonyOS, artists can launch and manage targeted ad campaigns on Meta, TikTok and Google; access personalized analytics for a full view of fan interactions across platforms; build tailored pre-save links, link-in-bio pages and tour info pages; and get AI-powered real time recommendations to improve marketing campaigns.

Bootleg.live, a platform that turns high-quality concert audio into merch, partnered with Evan Honer and Judah & the Lion to offer fans unique audio collectibles on tour. Both acts are on tour this fall. The collectibles, called “bootlegs,” are concert recordings taken directly from the board, enhanced using Bootleg’s proprietary process, and combined with photos and short videos.