Functional content — think rain noises, whale sounds, recordings of wind rustling the leaves and the like — will be significantly devalued under Spotify’s new royalty system: Plays of this audio will generate one fifth of the royalties generated by a play of a musical track, according to a source with knowledge of the streaming service’s new policy.

In response to a request for comment, a Spotify spokesperson pointed Billboard to the streaming service’s blog post from Tuesday (Nov. 21). The blog notes that, “over the coming months,” Spotify will “work with licensors to value noise streams at a fraction of the value of music streams.” The blog does not say what the fractional amount will be.

Spotify’s decision to count functional content at 20% of the rate for music tracks is the culmination of nearly a year’s worth of bad press for rain sounds and the like. While this type of audio is often used for the seemingly innocuous purpose of relaxing after a long and stressful day, Spotify wrote on its blog that the space is “sometimes exploited by bad actors who cut their tracks artificially short — with no artistic merit — in order to maximize royalty-bearing streams.”

This initiative, says Spotify’s blog post, is intended to free up “extra money to go back into the royalty pool for honest, hard working artists.”

Over the course of 2023, some of the most powerful executives in music have waged a sustained assault on rain and its various non-musical cousins. “It can’t be that an Ed Sheeran stream is worth exactly the same as a stream of rain falling on the roof,” Warner Music Group CEO Robert Kyncl told analysts in May.

Two months later, Universal Music Group CEO Lucian Grainge told analysts that streaming services must ensure that “real artists don’t have their royalties diluted by noise and other content that has no meaningful engagement from music fans.” He later amped up the rhetoric by describing companies that upload this content as “merchants of garbage” that were “flooding the platform with content that has absolutely no engagement with fans, doesn’t help churn, doesn’t merchandise great music and professional artists.”

When UMG rolled out a new royalty system with Deezer in September, the streaming service said it would replace “non-artist noise content” with its own functional music, while also excluding this audio from the royalty pool. “The sound of rain or a washing machine is not as valuable as a song from your favorite artist streamed in HiFi,” Deezer CEO Jeronimo Folgueira said. Deezer said plays of rain, washing machines and other non-music noise content account for roughly 2% of all streams.

Spotify did not provide a comparable number in its blog post. It is taking one other step to limit the impact of functional content on the royalty pool: To generate royalties, a functional audio track must be longer than two minutes.
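
For readers who want to see the arithmetic, here is a minimal sketch of how a pro-rata royalty pool behaves once noise streams are weighted at one fifth of a music stream and tracks of two minutes or less are excluded, as the policy described above. This is an illustration only: the function names, track names, play counts and pool size are hypothetical, and Spotify has not published the exact mechanics of its calculation.

# Minimal sketch (Python) of a pro-rata royalty pool in which "functional"
# (noise) streams count at one fifth of a music stream and earn nothing if
# the track is two minutes or shorter. All names and numbers are hypothetical.

def stream_weight(kind: str, duration_seconds: int) -> float:
    """Royalty weight of a single play under the assumed rules."""
    if kind == "functional":
        if duration_seconds <= 120:   # two minutes or shorter: no royalties
            return 0.0
        return 0.2                    # one fifth of a music stream
    return 1.0                        # ordinary music stream

def split_pool(pool: float, tracks: list) -> dict:
    """Divide a fixed royalty pool pro rata by weighted play counts."""
    weighted = {t["title"]: stream_weight(t["kind"], t["duration"]) * t["plays"]
                for t in tracks}
    total = sum(weighted.values())
    return {title: pool * w / total for title, w in weighted.items()}

# Hypothetical catalogue: one song and one rain recording with equal plays.
tracks = [
    {"title": "Song A", "kind": "music", "duration": 210, "plays": 1_000_000},
    {"title": "Rain on a Roof", "kind": "functional", "duration": 180, "plays": 1_000_000},
]
print(split_pool(1_000_000.0, tracks))
# Under these assumptions, the song earns roughly five times what the rain recording earns.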

“These policies will right-size the revenue opportunity for noise uploaders,” Spotify wrote. “Currently, the opportunity is so large that uploaders flood streaming services with undifferentiated noise recordings, hoping to attract enough search traffic to generate royalties.”

Earlier this year, Oleg Stavitsky, co-founder/CEO of Endel, laid out a vision for how his company’s AI-driven functional soundscapes could help the major labels — even as anxiety around AI was reaching new heights. “We can process the stems [the audio building blocks of a track] from Miles Davis’ Kind of Blue and come back with a functional sleep version of that album,” Stavitsky told Billboard. At the time, he said his company was in talks with all the major labels about this possibility.

A few short months later, Stavitsky will have a chance to do exactly that: Endel announced a new partnership with Universal Music Group on Tuesday (May 23). In a statement, Endel’s CEO said his company will put “AI to work and help UMG build new and exciting offerings to promote wellness and banish the perceived threat around AI.”

“Our goal was always to help people focus, relax, and sleep with the power of sound,” Stavitsky added. “AI is the perfect tool for this. Today, seeing our technology being applied to turn your favorite music into functional soundscapes is a dream come true.” Artists from Republic and Interscope will be the first to participate — though the announcement omitted any names — with their soundscapes arriving “within the next few months.”

Endel focuses on creating “sound that is not designed for conscious listening,” Stavitsky told Billboard earlier this year. “Music is something you consciously listen to when you actually want to listen to a song or an album or a melody,” he explained. “What we produce is something that blends with the background and is scientifically engineered to put you in a certain cognitive state.”

Endel’s technology can spit out these soundscapes effectively at the click of a button. “The model is trained using the stems that are either produced in-house by our team, led by co-founder and chief composer Dmitry Evgrafov (who’s himself an established neo-classical artist), or licensed from artists that we’ve worked with,” Stavitsky said. “The trick is all of the stems” — Endel has used stems from James Blake, Miguel and Grimes — “are created following the scientific framework created by our product team in consultation with neuroscientists.”

Some people in the music industry have taken to calling sounds designed for sleep, study, or soothing frayed nerves “functional music.” And while it maintains a low profile, it’s an increasingly popular and lucrative space. “Science tells us that nature sounds and water sounds have a calming effect on your cognitive state,” Stavitsky noted this winter. “So naturally, people are turning to this type of content more and more.”

Early in 2022, Endel estimated the size of the functional music market at 10 billion streams a month across all platforms. (The company has since raised its estimate to 15 billion streams a month.) If true, that would mean functional music is several times more popular than the biggest superstars. “Every day, hundreds of millions of people are self-medicating with sound,” Stavitsky wrote in March. “If you look at the top 10 most popular playlists at any major streaming service, you’ll see at least 3-4 ‘functional’ playlists: meditation, studying, reading, relaxation, focus, sleep, and so on.”

But this has caused the music industry some concern. Major labels have not historically focused on making this kind of music. Most streaming services pay rights holders according to their share of total plays; when listeners turn to functional music to read a book or wind down after a long day, that means they’re not playing major label artists, and the companies make less money. In a memo to staff in January, UMG CEO Lucian Grainge complained that “consumers are increasingly being guided by algorithms to lower-quality functional content that in some cases can barely pass for ‘music.’”

But record companies can’t eliminate listener demand for functional music. It makes sense, then, that they would try to take over a chunk of the market. And Stavitsky has been savvy, actively pushing Endel’s technology as a way for the labels to “win back market share.”

Back in 2019, Endel entered into a distribution agreement for 20 albums with Warner Music Group. And the company announced its new partnership with UMG this week. In a statement, Michael Nash, UMG’s evp and chief digital officer, praised Endel’s “impressive ingenuity and scientific innovation.”

“We are excited to work together,” Nash continued, “and utilize their patented AI technology to create new music soundscapes — anchored in our artist-centric philosophy — that are designed to enhance audience wellness, powered by AI that respects artists’ rights in its development.”

Generative AI is hot right now. Over the last several years, music artists and labels have opened up to the idea of AI as an exciting new tool. Yet when Dall-E 2, Midjourney and GPT-3 became available to the public, the fear that AI would render artists obsolete came roaring back.

I am here from the world of generative AI with a message: We come in peace. And music and AI can work together to address one of society’s ongoing crises: mental wellness.

While AI can already create visual art and text that are quite convincing versions of their human-made originals, it’s not quite there for music. AI music might be fine for soundtracking UGC videos and ads. But clearly we can do much better with AI and music.

There’s one music category where AI can help solve actual problems and open new revenue streams for everyone, from music labels to artists to DSPs. It’s the functional sound market. Largely overlooked and very lucrative, the functional sound market has been steadily growing over the past 10 years as a societal need for music to heal increases across the globe.

Sound is powerful. It’s the easiest way to control your environment. Sound can change your mood, trigger a memory, or lull you to sleep. It can make you buy more or make you run in terror (think about the music played in stores intentionally to facilitate purchasing behavior or the sound of alarms and sirens). Every day, hundreds of millions of people are self-medicating with sound. If you look at the top 10 most popular playlists at any major streaming service, you’ll see at least 3-4 “functional” playlists: meditation, studying, reading, relaxation, focus, sleep, and so on.

This is the market UMG chief Sir Lucian Grainge singled out in his annual staff memo earlier this year. He’s not wrong: DSPs are flooded with playlists consisting of dishwasher sounds and white noise, which divert revenue and attention from music artists. Functional sound is a vast ocean of content with no clear leader or even a clear product.

The nuance here is that the way people consume functional sound is fundamentally different from the way they consume traditional music. When someone tunes into a sleep playlist, they care first and foremost if it works. They want it to help them fall asleep, as fast as possible. It’s counterintuitive to listen to your favorite artist when you’re trying to go to sleep (or focus, study, read, meditate). Most artist-driven music is not scientifically engineered to put you into a desired cognitive state. It’s designed to hold your attention or express some emotion or truth the artist holds dear. That’s why ambient music — which, as Brian Eno put it, is as ignorable as it is interesting — had its renaissance moment a few years ago, arguably propelled by the mental health crisis. 

How can AI help music artists and labels win back market share from white noise and dishwasher sounds playlists? Imagine that your favorite music exists in two forms: the songs and albums that you know and love, and a functional soundscape version that you can sleep, focus, or relax to. The soundscape version is produced by feeding the source stems from the album or song into a neuroscience-informed Generative AI engine. The stems are processed, multiplied, spliced together and overlaid with FX, birthing a functional soundscape built from the DNA of your favorite music. This is when consumers finally have a choice: fall asleep or study/read/focus to a no-name white-noise playlist, or do it with a scientifically engineered functional soundscape version of their favorite music. 

This is how generative AI can create new revenue streams for every part of the music industry today: music labels win a piece of the market with differentiated functional content built from their catalog; artists expand their music universe, connect with their audience in new and meaningful ways, and extend the shelf life of their material; DSPs get ample, quality-controlled content that increases engagement. Once listeners find sounds that achieve their goals, they often stick with them. For example, Wind Down, James Blake’s sleep soundscape album, shows 50% listener retention in its seventh month after release. This shows that, when done right, functional sound has an incredibly long shelf life.

This win-win-win future is already here. By combining art, generative AI technology and science, plus business structures that enable such deals, we can transform amazing artist-driven sounds into healing soundscapes that listeners crave. In an age that yearns for calm, clarity, and better mental health, we can utilize AI to create new music formats that rights holders can embrace and listeners can appreciate. It promises AI-powered music that not only sounds good but improves people’s lives and supports artists. This is how you ride the functional music wave and create something listeners will find real value in and keep coming back to. Do not be afraid. Work with us. Embrace the future.

Oleg Stavitsky is co-founder and CEO of Endel, a sound wellness company that utilizes generative AI and science-backed research.