AudioShake

Disney Music Group and AudioShake, an AI stem separation company, are teaming up. As part of their partnership, AudioShake will help Disney separate the individual instrument tracks (“stems”) for some of its classic back catalog and provide AI lyric transcription.
In a press release, Disney says it hopes the partnership will “unlock new listening and fan engagement experiences” for the legendary catalog, which includes everything from the earliest recordings of Steamboat Willie (now in the public domain), to Cinderella, to The Lion King, to contemporary hits like High School Musical: The Musical: The Series.

Because so many of Disney’s greatest hits were recorded decades ago with now-outdated technology, the partnership will let the company use that music in ways that were not previously possible. Stem separation, for instance, can help with remixing and remastering old audio from classic Disney films. It could also allow Disney to isolate the vocals or the instrumentals on old hits.

Disney’s partnership with AudioShake began when the tech start-up was selected for the 2024 Disney Accelerator, a business development program designed to accelerate the growth of new companies around the world. Other participants included AI voice company Eleven Labs and autonomous vehicle company Nuro.

At the accelerator’s Demo Day on May 23, 2024, AudioShake offered a hint of what could come from the partnership when CEO Jessica Powell showed how stem separation could be used to fix muffled dialogue and music in the opening scene of Peter Pan.

At the Demo Day, Disney explained that many of its earlier recordings are missing their original stems, which has limited its ability to use older hits in sync licensing, remastering, and emerging formats like immersive audio and lyric videos.

“We were deeply impressed by AudioShake’s sound separation technology, and were among its early adopters,” says David Abdo, SVP and General Manager of Disney Music Group. “We’re excited to expand our existing stem separation work as well as integrate AudioShake’s lyric transcription system. AudioShake is a great example of cutting-edge technology that can benefit our artists and songwriters, and the team at AudioShake have been fantastic partners.”

“Stems and lyrics are crucial assets that labels and artists can use to open up recordings to new music experiences,” says Powell. “We’re thrilled to deepen our partnership with Disney Music Group, and honored to work with their extensive and iconic catalog.”

Dennis Murcia was excited to get an email from Disney, but the thrill was short-lived. As an A&R and global development executive for Codiscos, a label founded in 1950 that he likens to the “Motown of Latin America,” Murcia spends much of his job finding new listeners for a catalog of older songs. Disney reached out in 2020 hoping to use Juan Carlos Coronel’s zippy recording of “Colombia Tierra Querida,” written by Lucho Bermudez, in the trailer for an upcoming film titled Encanto. The problem was that the movie company wanted an instrumental version of the track, and Codiscos didn’t have one.

“I had to scramble,” Murcia recalls. A friend recommended that he try AudioShake, a company that uses artificial intelligence-powered technology to dissect songs into their component parts, known as stems. Murcia was hesitant — “removing vocals is not new, but it was never ideal; they always came out with a little air.” He needed to try something, though, and it turned out that AudioShake was able to create an instrumental version of “Colombia Tierra Querida” that met Disney’s standards, allowing the track to appear in the trailer. 

“It was a really important synch placement for us,” Murcia says. He calls quality stem-separation technology “one of the best uses of AI I’ve seen,” capable of opening “a whole new profit center” for Codiscos.

Catalog owners and estate administrators are increasingly interested in tapping into this technology, which allows them to cut and slice music in new ways for remixing, sampling or placements in commercials and advertisements. Often “you can’t rely on your original listeners to carry you into the future,” says Jessica Powell, co-founder and CEO of AudioShake. “You have to think creatively about how to reintroduce that music.”

Outside of the more specialized world of estates and catalogs, stem-separation is also being used widely by workaday musicians. Moises is another company that offers the technology; on some days, the platform’s users stem-separate 1 million different songs. “We have musicians all across the globe using it for practice purposes” — isolating guitar parts in songs to learn them better, or removing drums from a track to play along — says Geraldo Ramos, Moises’ co-founder and CEO.

While the ability to create missing stems has been around for at least a decade, the tech has been advancing especially rapidly since 2019 — when Deezer released Spleeter, which offered up “already trained state of the art models for performing various flavors of separation” — and 2020, when Meta released its own model called Demucs. Those “really opened the field and inspired a lot of people to build experiences based on stem separation, or even to work on it themselves,” Powell says. (She notes that AudioShake’s research was under way well before those releases.)

As a result, stem separation has “become super accessible,” according to Matt Henninger, Moises’ vp of sales and business development. “It might have been buried in Pro Tools five years ago, but now everyone can get their hands on it.” 
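
That accessibility is easy to see with the open-source tools Powell mentions. As a rough illustration (not AudioShake’s or Moises’ proprietary pipelines), separating a track into four stems with Deezer’s Spleeter takes only a few lines of Python; the file names below are placeholders.

```python
# A minimal sketch using Deezer's open-source Spleeter, assuming it is
# installed (`pip install spleeter`) and that song.mp3 is a local file
# (the name is a placeholder).
from spleeter.separator import Separator

# Load the pretrained 4-stem model: vocals, drums, bass and "other".
separator = Separator("spleeter:4stems")

# Writes output/song/vocals.wav, drums.wav, bass.wav and other.wav.
separator.separate_to_file("song.mp3", "output/")
```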

Where does artificial intelligence come in? Generative AI refers to programs that ingest reams of data and find patterns they can use to generate new datasets of a similar type. (Popular examples include DALL-E, which does this with images, and ChatGPT, which does it with text.) Stem separation tech finds the patterns corresponding to the different instruments in songs so that they can be isolated and removed from the whole.

“We basically train a model to recognize the frequencies and everything that’s related to a drum, to a bass, to vocals, both individually and how they relate to each other in a mix,” Ramos explains. Done at scale, with many thousands of tracks licensed from independent artists, the model eventually gets good enough to pull apart the constituent parts of a song it’s never seen before.
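
Ramos’ description maps onto a common research recipe: learn a soft “mask” for each instrument over a spectrogram of the mix, supervised by tracks where the isolated stems are known. The toy PyTorch sketch below is purely illustrative (the shapes, layer sizes and loss are invented for brevity; the actual architectures used by Moises and AudioShake are not public).

```python
# Toy sketch of supervised stem separation via spectrogram masking.
# Purely illustrative; real systems are far larger and more sophisticated.
import torch
import torch.nn as nn

N_BINS = 513                                   # frequency bins per spectrogram frame
STEMS = ["vocals", "drums", "bass", "other"]

class MaskNet(nn.Module):
    """Predicts one soft mask per stem for every frame of a mixture spectrogram."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BINS, 1024), nn.ReLU(),
            nn.Linear(1024, N_BINS * len(STEMS)), nn.Sigmoid(),
        )

    def forward(self, mix):                    # mix: (batch, frames, N_BINS)
        masks = self.net(mix)
        return masks.view(*mix.shape[:2], len(STEMS), N_BINS)

model = MaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(mix_spec, stem_specs):
    """mix_spec: (batch, frames, N_BINS); stem_specs: (batch, frames, 4, N_BINS).
    The model learns masks that, applied to the mixture, reproduce each stem."""
    masks = model(mix_spec)
    estimates = masks * mix_spec.unsqueeze(2)  # apply each mask to the mixture
    loss = torch.mean(torch.abs(estimates - stem_specs))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Random tensors stand in for real licensed training data, just to show shapes.
mix = torch.rand(8, 100, N_BINS)
stems = torch.rand(8, 100, len(STEMS), N_BINS)
print(training_step(mix, stems))
```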

A lot of recordings are missing those building blocks. They could be older tracks that were cut in mono, meaning that individual parts were never tracked separately when the song was recorded. Or the original multi-track recordings could have been lost or damaged in storage.

Even in the modern world, it’s possible for stems to disappear in hard-drive crashes or other technical mishaps. The opportunity to create high-quality stems for recordings “where multi-track recordings aren’t available effectively unlocks content that is frozen in time,” says Steven Ames Brown, who administers Nina Simone’s estate, among others.

Arron Saxe of Kinfolk Management, which includes the Otis Redding Estate, believes stem-separation can enhance the appeal of the soul great’s catalog for sample-based producers. “We have 280 songs, give or take, that Otis Redding wrote that sit in a pot,” he says. “How do you increase the value of each one of those? If doing that is pulling out a 1-second snare drum from one of those songs to sample, that’s great.” And it’s an appealing alternative to well-worn legacy marketing techniques, which Saxe jokes are “just box sets and new track listings of old songs.” 

Harnessing the tech is only “half the battle,” though. “The second part is a harder job,” Saxe says. “Do you know how to get the music to a big-name producer?” Murcia has been actively pitching electronic artists, hoping to pique their interest in sampling stems from Codiscos.

It can be similarly challenging to get the attention of a brand or music supervisor working in film and TV. But again, stem separation “allows editors to interact with or customize the music a lot more for a trailer in a way that is not usually possible with this kind of catalog material,” says Garret Morris, owner of Blackwatch Dominion, a full-service music publishing, licensing and rights management company that oversees a catalog extending from blues to boogie to Miami bass. 

Simpler than finding ways to open catalogs up to samplers is retooling old audio for the latest listening formats. Simone’s estate used stem-separation technology to create a spatial audio mix of her album Little Girl Blue as this style of listening continues to grow in popularity. (The number of Amazon Music tracks mixed in immersive audio has jumped over 400% since 2019, for example.)

Powell expects that the need for this adaptation will continue to grow. “If you buy into the vision presented by Apple, Facebook, and others, we will be interacting in increasingly immersive environments in the future,” she adds. “And audio that is surrounding us, just like it does in the real world, is a core component to have a realistic immersive experience.”

Brown says the spatial audio re-do of Simone’s album resulted in “an incremental increase in quality, and that can be enough to entice a brand new group of listeners.” “Most recording artists are not wealthy,” he continues. “Things that you can do to their catalogs so that the music can be fresh again, used in commercials and used in soundtracks of movies or TV shows, gives them something that makes a difference in their lives.” 

“How magical would it be if we listened to music and music listened back to us?” asks Philip Sheppard, the co-founder/CEO of Lifescore, a U.K. startup that creates soundtracks tailored to users’ functional needs, from sleep to focus to fitness.

Though the premise sounds like science fiction, a number of new companies are already working with technology that attunes music to listeners’ movements in video games, workouts, virtual reality — even the aesthetics of their Snapchat lenses. Much as a film composer accentuates pivotal moments in the story with perfectly timed swells and crescendos, these innovations are being used to create bespoke soundtracks in real time.

One of the most fertile testing grounds for “dynamic” or “personalized” music, as it is called, is the gaming sector. Gamers tend to be avid music fans who stream songs an average of 7.6 hours a week — more than double the rate of the average consumer, according to MIDiA Research — and for some time now, game developers have linked players’ in-game movements to changes in lighting, setting and other parameters to enhance their storytelling.

David Knox, president of Reactional Music, the maker of an interactive music engine for video games, says “the final frontier for innovation in gaming is music.” Until recently, video-game music has consisted of loop-based scores or licensed tracks. Because of its repetitiveness, Knox says many users mute game soundtracks in favor of Spotify or Apple Music.

To compete with this, Knox says Reactional’s “rules-based music engine” applies the same reactive technology used in gaming graphics to the soundtrack, enabling, for example, a player in a first-person-shooter game to fire a gun in time with the beat of a song. As the technology evolves, Knox says soundtracks could transform to reflect the state of a player’s health or the level of danger.
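
Reactional has not published how its engine works, but the “fire a gun in time with the beat” behavior Knox describes can be pictured as quantizing a game event to the track’s beat grid. A purely hypothetical sketch (the BPM and timings are invented):

```python
# Hypothetical sketch: delay a game-triggered sound so it lands on the next beat
# of the current track. Reactional's actual engine is proprietary; this only
# illustrates the "rules-based" idea of aligning events to a beat grid.
def time_until_next_beat(now_s: float, track_start_s: float, bpm: float) -> float:
    """Seconds to wait so an event (e.g. a gunshot sample) fires on the beat."""
    beat_len = 60.0 / bpm
    elapsed = now_s - track_start_s
    return beat_len - (elapsed % beat_len)

# A shot requested 10.3 seconds into a 120 BPM cue waits about 0.2 seconds.
print(round(time_until_next_beat(now_s=10.3, track_start_s=0.0, bpm=120), 2))
```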

This same technology could work with augmented reality and the so-called metaverse. Minibeats, a company that creates social media lenses with interactive musical elements, is in the process of incorporating dynamic music technology, which it calls “musical cameras,” into its AR filters for Snapchat. For one of its first launches, Minibeats partnered with Rhino and Stax Records in February to promote the 60th anniversary of the release of Booker T. & The M.G.’s’ “Green Onions.” One Minibeats filter turns users’ heads into green onions and lets them control when the song’s signature Hammond organ riff plays through body and facial movements. Another filter morphs users’ faces into spinning vinyl records, allowing them to control when the song’s guitar and keys start and stop by opening and closing their mouths.

When imagining the future of dynamic music, Mike Butera, the company’s founder and CEO, says Disney’s Fantasia comes to mind. The ambitious 1940 film, which mixes animation and live action and features Mickey Mouse in his iconic sorcerer’s hat, syncs vibrantly colored dream-like visuals with a score that enhances what’s transpiring onscreen. “Imagine if we transformed your day-to-day life into something like that?” Butera says. “The mundanity of drinking coffee, walking the dog and driving to work [turned] into something [that] can be soundtracked with your own personal score that you control, whether that’s through a phone camera or AR glasses.”

These startups all claim that they have received only glowing feedback from the music business so far, and many have formed key partnerships. Hipgnosis recently announced a deal with Reactional Music to help bring its catalog of hit songs to the startup. Bentley and Audi have made deals with Lifescore to get dynamic soundtracks into cars, and Warner Music Group counts itself as an investor as well. Minibeats says it’s “in discussion with all the major labels,” though beyond its Rhino-Stax partnership, the company declined to disclose more details.

These emerging capabilities typically rely on artificial intelligence to adapt recorded music to malleable experiences. But unlike AI companies trying to create machine-made music at the touch of a button, these dynamic music startups either license preexisting, human-made songs or commission composers to create new, more dynamic compositions.

Lifescore pays composers to record a number of separate elements of a song, known as “stems,” and then, Sheppard says, its technology works with the resulting audio files like “shuffling a deck of cards,” assembling newfound arrangements in configurations intended to support a user’s need for focus while studying or working, for example, or sleep.
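
Lifescore’s internals are not public, but the “deck of cards” idea can be pictured as drawing compatible stem segments that match a listener’s stated goal. The sketch below is purely hypothetical (the stem names, tags and selection rule are invented):

```python
# Hypothetical sketch of stem "shuffling": assemble an arrangement by drawing
# from pre-recorded stems tagged by intensity. All names here are invented.
import random

STEM_LIBRARY = {
    "low":  ["piano_sparse", "strings_pad", "soft_percussion"],
    "high": ["drums_driving", "synth_arp", "bass_pulse"],
}

def build_arrangement(goal: str, sections: int = 4) -> list:
    """Pick calmer layers for 'sleep' or 'focus', busier ones for 'fitness'."""
    intensity = "low" if goal in {"sleep", "focus"} else "high"
    deck = STEM_LIBRARY[intensity]
    # Each section layers two stems drawn from the matching deck.
    return [random.sample(deck, k=2) for _ in range(sections)]

print(build_arrangement("focus"))
```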

In the case of preexisting tracks, companies like Minibeats partner with AudioShake, a firm that uses AI to break down songs into individual, standardized stems, so that they can easily manipulate a song’s instrumental mix — guitar, drums, vocals, etc. — in real time. AudioShake’s proprietary technology is especially helpful in the case of older recordings in which the copyright owner no longer has the stems.

AudioShake founder/CEO Jessica Powell says one reason she thinks the music industry has embraced this innovation is its potential to spur music discovery. “I think the same way TikTok pushes new songs, gaming — as well as other use cases — have enormous potential to introduce people to music,” whether that be a catalog track or a new release.
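
As a rough picture of what manipulating a mix in real time means once a track exists as separate stems, the hypothetical sketch below maps a user gesture (like the mouth-open trigger in the Minibeats filter) to per-stem volume; the stem names and event labels are invented.

```python
# Hypothetical sketch: remix a song live by changing per-stem gain in response
# to user gestures. Stem names and gesture events are invented examples.
stem_gains = {"vocals": 1.0, "guitar": 0.0, "keys": 0.0, "drums": 1.0}

def on_gesture(event: str) -> None:
    """Bring the guitar and keys stems in or out as the user opens or closes their mouth."""
    level = 1.0 if event == "mouth_open" else 0.0
    stem_gains["guitar"] = level
    stem_gains["keys"] = level

on_gesture("mouth_open")
print(stem_gains)  # guitar and keys are now audible alongside vocals and drums
```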

Though this technology is new, interactivity has long been seen as a way to create powerful bonds between fans and songs. Powell points to popular video games like Guitar Hero and Rock Band as successful examples. Karaoke is another. One could even point to the more recent novelty of stem players, like those Ye peddled during the release of his album Donda 2, as a way of engaging listeners. At a time when much of music discovery is passive — scrolling TikTok, streaming an editorial playlist or turning on the radio — musical interactivity, and now personalization, promises a stronger bond.

Knox at Reactional says interactive music also has economic potential. In-game purchases — which allow players to buy customizable elements like cars, weapons and outfits — dwarfed global recorded-music revenue in 2020, with players spending $97 billion in-game compared with music’s $35.9 billion (retail values), according to MIDiA Research. “In the same way you put hooks into a game, allowing someone to pay to change their appearance at a certain point, a game developer working with us could create a hook that unlocks access to the Reactional platform, letting players buy their favorite songs,” he says.

Since at least the advent of the transistor radio, consumers have used music to soundtrack their lives, but until recently, personalization of those soundtracks was limited to song selection and playlist curation. The songs themselves were unchangeable. Those at the forefront of dynamic music contend that it marries recorded music with the goosebump-inducing, real-time experience of listening to something live.

“You know how you listen to a live performance, and the musicians are influenced by your energy in the room?” asks Erin Corsi, director of communications for Lifescore. “That’s what this is. Though this also feels like something new, it feels like we are finally able to get back to how music started.”