LifeScore
“How magical would it be if we listened to music and music listened back to us?” asks Philip Sheppard, the co-founder/CEO of LifeScore, a U.K. startup that creates soundtracks tailored to users’ functional needs, from sleep to focus to fitness.
Though the premise sounds like science fiction, a number of new companies are already working with technology that attunes music to listeners’ movements in video games, workouts, virtual reality — even the aesthetics of their Snapchat lenses. Much as a film composer accentuates pivotal moments in the story with perfectly timed swells and crescendos, these innovations are being used to create bespoke soundtracks in real time.
One of the most fertile testing grounds for “dynamic” or “personalized” music, as it is called, is the gaming sector. Gamers tend to be avid music fans who stream songs an average of 7.6 hours a week — more than double the rate of the average consumer, according to MIDiA Research — and for some time now, game developers have linked players’ in-game movements to changes in lighting, setting and other parameters to enhance their storytelling.
David Knox, president of Reactional Music, the maker of an interactive music engine for video games, says “the final frontier for innovation in gaming is music.” Until recently, video-game music consisted of loop-based scores or licensed tracks. Because of that repetitiveness, Knox says, many users mute game soundtracks in favor of Spotify or Apple Music.
To compete with this, Knox says Reactional’s “rules-based music engine” applies the same reactive technology used in gaming graphics to the soundtrack, enabling, for example, a player in a first-person-shooter game to fire a gun in time with the beat of a song. As the technology evolves, Knox says soundtracks could transform to reflect the state of a player’s health or the level of danger.
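Reactional has not published the internals of its engine, but the core idea Knox describes — quantizing a game event to a song’s beat grid — can be sketched in a few lines. Everything below (the tempo, the timestamps, the function name) is an invented illustration, not Reactional’s code:

```python
def next_beat_time(now: float, song_start: float, bpm: float) -> float:
    """Return the timestamp of the first beat boundary after `now`."""
    beat_len = 60.0 / bpm                            # seconds per beat
    beats_elapsed = int((now - song_start) // beat_len)
    return song_start + (beats_elapsed + 1) * beat_len

# A shot fired 12.37s into a 120 BPM track is held ~130 ms so its
# sound lands exactly on the next downbeat.
fire_at = next_beat_time(now=12.37, song_start=0.0, bpm=120)
print(fire_at)  # 12.5
```

In practice an engine like this would schedule the sound effect (and perhaps the muzzle flash) for `fire_at` rather than playing it immediately, trading a few milliseconds of latency for musical timing.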
This same technology could work with augmented reality and the so-called metaverse. Minibeats, a company that creates social media lenses with interactive musical elements, is in the process of incorporating dynamic music technology, which it calls “musical cameras,” into its AR filters for Snapchat. For one of its first launches, Minibeats partnered with Rhino and Stax Records in February to promote the 60th anniversary of the release of Booker T. & The M.G.’s’ “Green Onions.” One Minibeats filter turns users’ heads into green onions and lets them control when the song’s signature Hammond organ riff plays through body and facial movements. Another filter morphs users’ faces into spinning vinyl records, allowing them to control when the song’s guitar and keys start and stop by opening and closing their mouths.
When imagining the future of dynamic music, Mike Butera, the company’s founder and CEO, says Disney’s Fantasia comes to mind. The ambitious 1940 film, which mixes animation and live action and features Mickey Mouse in his iconic sorcerer’s hat, syncs vibrantly colored dream-like visuals with a score that enhances what’s transpiring onscreen. “Imagine if we transformed your day-to-day life into something like that?” Butera says. “The mundanity of drinking coffee, walking the dog and driving to work [turned] into something [that] can be soundtracked with your own personal score that you control, whether that’s through a phone camera or AR glasses.”
These startups all say they have received only glowing feedback from the music business so far, and many have formed key partnerships. Hipgnosis recently announced a deal with Reactional Music to help bring its catalog of hit songs to the startup. Bentley and Audi have made deals with LifeScore to get dynamic soundtracks into cars, and Warner Music Group counts itself as an investor as well. Minibeats says it’s “in discussion with all the major labels,” though beyond its Rhino-Stax partnership, the company declined to disclose more details.
These emerging capabilities are typically powered by artificial intelligence to adapt recorded music to malleable experiences, but unlike other AI companies trying to create machine-made music with the touch of a button, these dynamic music startups either license preexisting, human-made songs or commission composers to create new or more dynamic compositions.
LifeScore pays composers to record a number of separate elements of a song, known as “stems,” and then, Sheppard says, its technology works with the resulting audio files like “shuffling a deck of cards,” assembling new arrangements in configurations intended to support a user’s need for focus while studying or working, for example, or for sleep.
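LifeScore has not disclosed how that assembly actually works, but the card-shuffling metaphor suggests something like the following sketch, in which the stem files, instrument roles and “intensity” knob are all hypothetical stand-ins:

```python
import random

# Invented stem pool: file names and roles are placeholders, not
# LifeScore's actual catalog.
stems = {
    "pads":    ["pads_a.wav", "pads_b.wav", "pads_c.wav"],
    "piano":   ["piano_a.wav", "piano_b.wav"],
    "texture": ["stream_recording.wav", "soft_noise.wav"],
}

def build_arrangement(sections: int, intensity: float) -> list[list[str]]:
    """Deal a fresh 'hand' of stems for each section; lower intensity
    (e.g. for sleep) layers fewer of them at once."""
    layers = max(1, round(intensity * len(stems)))
    return [
        [random.choice(stems[role]) for role in random.sample(list(stems), layers)]
        for _ in range(sections)
    ]

print(build_arrangement(sections=4, intensity=0.4))  # calm, sparse mix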
In the case of preexisting tracks, companies like Minibeats partner with Audioshake, a firm that uses AI to break down songs into individual, standardized stems, so that they can easily manipulate a song’s instrumental mix — guitar, drums, vocals, etc. — in real time. Audioshake’s proprietary technology is especially helpful in the case of older recordings for which the copyright owner no longer has the stems.

Audioshake founder/CEO Jessica Powell says one reason she thinks the music industry has embraced this innovation is its potential to spur music discovery. “I think the same way TikTok pushes new songs, gaming — as well as other use cases — [has] enormous potential to introduce people to music,” whether that be a catalog track or a new release.
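Once a song exists as separate, time-aligned stems, the real-time manipulation described above largely reduces to per-stem gain control. Neither company has published its playback code; this minimal sketch assumes equal-length float audio buffers, and the instrument names and gain values are invented:

```python
import numpy as np

def mix(stems: dict[str, np.ndarray], gains: dict[str, float]) -> np.ndarray:
    """Sum equal-length per-instrument buffers, scaled by live gain values."""
    out = np.zeros_like(next(iter(stems.values())))
    for name, audio in stems.items():
        out += gains.get(name, 1.0) * audio
    return out

# e.g. the vinyl-record filter: an open mouth brings guitar and keys back in
gains = {"organ": 1.0, "guitar": 1.0, "keys": 1.0, "drums": 0.8}
```

A filter would update `gains` every frame from the camera’s face tracking and re-mix the next audio buffer on the fly.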
Though this technology is new, interactivity has long been seen as a way to create powerful bonds between fans and songs. Powell points to popular video games like Guitar Hero and Rock Band as successful examples. Karaoke is another. One could even point to the more recent novelty of stem players, like those Ye peddled during the release of his album Donda 2, as a way of engaging listeners. At a time when much of music discovery is passive — scrolling TikTok, streaming an editorial playlist or turning on the radio — musical interactivity, and now personalization, promises a stronger bond.
Knox at Reactional says interactive music also has economic potential. In-game purchases — which allow players to buy customizable elements like cars, weapons and outfits — dwarfed global recorded-music revenue in 2020, with players spending $97 billion in-game compared with music’s $35.9 billion (retail values), according to MIDiA Research. “In the same way you put hooks into a game, allowing someone to pay to change their appearance at a certain point, a game developer working with us could create a hook that unlocks access to the Reactional platform, letting players buy their favorite songs,” he says.
Since at least the advent of the transistor radio, consumers have used music to soundtrack their lives, but until recently, personalization of those soundtracks was limited to song selection and playlist curation. The songs themselves were unchangeable. Those at the forefront of dynamic music contend that it marries recorded music with the goosebump-inducing, real-time experience of listening to something live.
“You know how you listen to a live performance, and the musicians are influenced by your energy in the room?” asks Erin Corsi, director of communications for Lifescore. “That’s what this is. Though this also feels like something new, it feels like we are finally able to get back to how music started.”
Generative artificial intelligence is currently one of the hottest topics in Silicon Valley, and its impact is already being felt in the music industry. BandLab — the music-creation app that has become popular on TikTok — relies on AI as the engine for its tool SongStarter. Users can lean on it to generate beats or melodies at random, or prompt it to spit something out based on specific lyrics and emojis; BandLab’s 60 million registered creators are churning out more than 17 million songs each month, including breakout hits for dv4d and ThxSoMch.
The tracks that emerge from BandLab depend on the interaction of human creators and AI. That holds true for some of the companies focusing on functional audio as well. LifeScore, which uses AI to “create unique, real-time soundtracks for every journey,” relies on “Lego blocks of sound all made in a studio by real musicians playing real instruments through lovely microphones,” Sheppard says. Even the sound of a stream trickling through a forest comes from “someone going out with a rig and standing in that stream and recording it.”
The AI kicks in when it comes to assembling that sonic Lego. “The AI is saying, ‘Hey, wouldn’t it be delightful if these could arrange themselves in this different way?’” Sheppard explains. “‘How about if we could turn that into eight hours that felt like it was original every time you listened to it?’”
Not all results of these processes work. “Unsuccessful soundscapes are generated all the time,” says Oleg Stavitsky, co-founder/CEO of Endel, which offers an app that generates music designed to help users focus, relax or sleep. “Each soundscape goes through a multi-step testing process: from automated testing, detecting sound artifacts and bad sound combinations to in-house testing to our community testing.” That community includes some 4,000 people who provide feedback through Endel’s Discord channel.
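Endel has not detailed its automated checks, but as one hypothetical example of what “detecting sound artifacts” might involve, a pre-screen could flag clipped samples and dead air before a soundscape ever reaches in-house or community review (thresholds invented; audio assumed normalized to ±1.0):

```python
import numpy as np

def detect_artifacts(audio: np.ndarray, sr: int) -> list[str]:
    """Flag clipping and near-silent one-second stretches in a soundscape."""
    flags = []
    if np.mean(np.abs(audio) >= 0.999) > 0.001:      # >0.1% of samples clipped
        flags.append("clipping")
    peaks = [np.max(np.abs(audio[i:i + sr])) for i in range(0, len(audio), sr)]
    if any(p < 1e-4 for p in peaks):                 # a window of near-silence
        flags.append("silent_section")
    return flags
```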
“We put human eyes on everything before it goes out,” says Alex Mitchell, founder/CEO of Boomy, a company that offers aspiring musicians the chance to make songs in seconds with help from AI tools. Since 2019, Boomy users have created over 12 million songs. “We have a generic content policy that basically means if all you’re doing is pressing buttons and we detect that, then your release probably won’t be eligible for distribution,” says Mitchell. “We reject way more releases than what gets submitted [to stores]. That way we’re not flooding the [digital service providers] with a bunch of nonsense.”
How will Boomy scale this approach as it attracts even more users and generates even more millions of songs? “We’re hiring,” Mitchell says.