Endel
R&B singer 6LACK has partnered with Endel to create alternate versions of his latest album Since I Have A Lover that are specially formulated to promote “restorative rest and mental balance” as part of BIPOC Mental Health Awareness Month. One alternate version is designed to promote sleep, out July 8, and the second will promote focus, out July 24.
To promote the collaboration, 6LACK will host a live pre-listening session on the Endel app on July 6, and two in-person events will take place in LA and Berlin that same day.
Endel is a start-up that creates what it calls “functional sound,” a form of ambient music that supports listeners’ day-to-day wellness needs, including sleep, meditation and focus. While Endel’s proprietary music-making technology is powered by artificial intelligence, Endel does not create new songs out of thin air. Instead, it generates ambient soundscapes by rearranging pieces of music provided by artists in ways that Endel says promote specific wellness goals.
6LACK joins artists such as Grimes and James Blake, who have partnered with the soundscape start-up in the past. However, the alternate version of Since I Have A Lover marks a new kind of partnership for the wellness brand: it is the first full album Endel has remixed and released on streaming services.
In March, Oleg Stavitsky, co-founder and CEO of Endel, told Billboard he felt this was a way for the soundscape company to help major labels evolve. “We can process the stems [the audio building blocks of a track] from Miles Davis’ Kind of Blue and come back with a functional sleep version of that album,” Stavitsky said, adding that Endel was in talks with all the major labels about trying this at the time.
Since then, Endel has partnered with UMG to do just that, and Since I Have A Lover marks the companies’ first attempt. Their partnership came as a surprise to some, given UMG chief Lucian Grainge‘s recent negative remarks about AI and functional music. Lamenting that functional music drives streaming dollars away from pop music and toward rain sounds, white noise and other ambient recordings, he said in a memo to staff in January that “consumers are increasingly being guided by algorithms to lower-quality functional content that in some cases can barely pass for ‘music.’”
But at the time the partnership was announced, Michael Nash, UMG evp and chief digital officer, praised Endel for “utiliz[ing] their patented AI technology” to create ambient music because it is “anchored in our artist-centric philosophy” and “powered by AI that respects artists’ rights in its development.”
Endel’s ambient soundscapes have been on streaming services since 2019, thanks to UMG competitor Warner Music Group. Known to be bullish in its investment and partnership strategy with emerging music tech companies, WMG signed Endel to a 20-album distribution deal.
“This is about letting people experience my music in a new way,” 6LACK says of his collaboration with Endel. “These sounds can be for rest and relaxation, or for helping you feel inspired and creative. It’s for finding a sense of balance in life. Since I Have a Lover has plenty of magical sounds, and combined with Endel’s AI and science, it was easy to create something that felt healing.”
“Using AI to reimagine your favorite music as a functional soundscape, designed to help solve the biggest mental health challenges we’re facing as a species, is our mission. 6LACK’s openness to experimentation and his ability to let go and trust the process was crucial to the success of this project,” says Stavitsky. “We’re extremely proud of the result and can’t wait for millions of people to experience the healing power of these soundscapes.”
Generative AI is hot right now. Over the last several years, music artists and labels have opened up to the idea of AI as an exciting new tool. Yet when DALL-E 2, Midjourney and GPT-3 opened up to the public, the fear that AI would render artists obsolete came roaring back.
I am here from the world of generative AI with a message: We come in peace. And music and AI can work together to address one of society’s ongoing crises: mental wellness.
While AI can already create visual art and text that convincingly rival their human-made counterparts, it’s not quite there yet for music. AI music might be fine for soundtracking UGC videos and ads. But clearly we can do much better with AI and music.
There’s one music category where AI can help solve actual problems and open new revenue streams for everyone, from music labels to artists to DSPs: the functional sound market. Largely overlooked and very lucrative, it has been growing steadily over the past 10 years as the societal need for music that heals increases across the globe.
Sound is powerful. It’s the easiest way to control your environment. Sound can change your mood, trigger a memory, or lull you to sleep. It can make you buy more or make you run in terror (think about the music played in stores intentionally to facilitate purchasing behavior or the sound of alarms and sirens). Every day, hundreds of millions of people are self-medicating with sound. If you look at the top 10 most popular playlists at any major streaming service, you’ll see at least 3-4 “functional” playlists: meditation, studying, reading, relaxation, focus, sleep, and so on.
This is the market UMG chief Sir Lucian Grainge singled out in his annual staff memo earlier this year. He’s not wrong: DSPs are flooded with playlists of dishwasher sounds and white noise, which divert revenue and attention from music artists. Functional sound is a vast ocean of content with no clear leader or even a clear product.
The nuance here is that the way people consume functional sound is fundamentally different from the way they consume traditional music. When someone tunes into a sleep playlist, they care first and foremost if it works. They want it to help them fall asleep, as fast as possible. It’s counterintuitive to listen to your favorite artist when you’re trying to go to sleep (or focus, study, read, meditate). Most artist-driven music is not scientifically engineered to put you into a desired cognitive state. It’s designed to hold your attention or express some emotion or truth the artist holds dear. That’s why ambient music — which, as Brian Eno put it, is as ignorable as it is interesting — had its renaissance moment a few years ago, arguably propelled by the mental health crisis.
How can AI help music artists and labels win back market share from white noise and dishwasher sounds playlists? Imagine that your favorite music exists in two forms: the songs and albums that you know and love, and a functional soundscape version that you can sleep, focus, or relax to. The soundscape version is produced by feeding the source stems from the album or song into a neuroscience-informed Generative AI engine. The stems are processed, multiplied, spliced together and overlaid with FX, birthing a functional soundscape built from the DNA of your favorite music. This is when consumers finally have a choice: fall asleep or study/read/focus to a no-name white-noise playlist, or do it with a scientifically engineered functional soundscape version of their favorite music.
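To make the idea concrete, here is a minimal sketch, in Python, of the kind of splice, loop and layer pipeline described above. It is only an illustration of the general approach: the slice lengths, the simple low-pass “calm” effect and the synthetic stand-in stems are all assumptions, not Endel’s actual engine.

```python
# Illustrative only: take short stems, slice them, loop and layer the slices,
# and apply a gentle effect before mixing. The parameters and "calm_filter"
# effect are assumptions for illustration, not any company's real pipeline.
import numpy as np

SR = 44_100  # sample rate (Hz)

def slice_stem(stem: np.ndarray, slice_seconds: float, rng: np.random.Generator) -> np.ndarray:
    """Pick a random slice of the stem to reuse as a building block."""
    slice_len = int(slice_seconds * SR)
    start = rng.integers(0, max(1, len(stem) - slice_len))
    return stem[start:start + slice_len]

def calm_filter(block: np.ndarray) -> np.ndarray:
    """Soften transients with a simple moving-average low-pass (stand-in for real FX)."""
    kernel = np.ones(64) / 64
    return np.convolve(block, kernel, mode="same")

def build_soundscape(stems: list[np.ndarray], minutes: float, seed: int = 0) -> np.ndarray:
    """Splice, loop and layer stem slices into a long-form ambient bed."""
    rng = np.random.default_rng(seed)
    total = int(minutes * 60 * SR)
    mix = np.zeros(total)
    for stem in stems:
        pos = 0
        while pos < total:
            block = calm_filter(slice_stem(stem, slice_seconds=8.0, rng=rng))
            end = min(pos + len(block), total)
            fade = np.linspace(0.0, 1.0, end - pos)          # fade-in envelope
            mix[pos:end] += block[: end - pos] * fade * 0.3  # layer quietly
            pos += len(block) // 2                           # overlap blocks
    return np.clip(mix, -1.0, 1.0)

# Example: two synthetic "stems" stand in for real instrument recordings.
if __name__ == "__main__":
    t = np.linspace(0, 10, 10 * SR)
    pad = np.sin(2 * np.pi * 220 * t) * 0.5
    texture = np.sin(2 * np.pi * 330 * t) * 0.3
    soundscape = build_soundscape([pad, texture], minutes=1.0)
    print(soundscape.shape)
```

With real stems in place of the synthetic tones, that same loop structure is what turns a few minutes of source material into hours of ambient listening.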
This is how Generative AI can create new revenue streams for every player in the music industry, today: music labels win a piece of the market with differentiated functional content built from their catalogs; artists expand their musical universe, connect with their audiences in new and meaningful ways, and extend the shelf life of their material; DSPs get ample, quality-controlled content that increases engagement. Once listeners find sounds that achieve their goals, they often stick with them. For example, Wind Down, James Blake’s sleep soundscape album, shows 50% listener retention in its seventh month after release. This shows that, when done right, functional sound has an incredibly long shelf life.
This win-win-win future is already here. By combining art, generative AI technology and science, plus business structures that enable such deals, we can transform amazing artist-driven sounds into healing soundscapes that listeners crave. In an age that yearns for calm, clarity and better mental health, we can utilize AI to create new music formats that rights holders can embrace and listeners can appreciate. It promises AI-powered music that not only sounds good but also improves people’s lives and supports artists. This is how you ride the functional music wave and create something listeners will find real value in and keep coming back to. Do not be afraid. Work with us. Embrace the future.
Oleg Stavitsky is co-founder and CEO of Endel, a sound wellness company that utilizes generative AI and science-backed research.
Generative artificial intelligence is currently one of the hottest topics in Silicon Valley, and its impact is already being felt in the music industry. BandLab — the music-creation app that has become popular on TikTok — relies on AI as the engine for its tool SongStarter. Users can lean on it to generate beats or melodies at random, or prompt it to spit something out based on specific lyrics and emojis; BandLab’s 60 million registered creators are churning out more than 17 million songs each month, including breakout hits for d4vd and ThxSoMch.
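BandLab has not published how SongStarter works internally; the short sketch below only illustrates, under stated assumptions, how a lyric-and-emoji prompt could deterministically seed starting parameters such as key, tempo and genre. Every parameter choice here is hypothetical.

```python
# Hypothetical sketch: turn a text/emoji prompt into reproducible "starter"
# musical parameters by hashing the prompt into a random seed. This is not
# BandLab's actual implementation; the key, BPM and genre lists are made up.
import hashlib
import random

KEYS = ["C minor", "D minor", "E minor", "F major", "G major", "A minor"]
GENRES = ["trap", "lo-fi", "house", "R&B", "drill"]

def song_starter(prompt: str) -> dict:
    """Derive a reproducible starter (key, tempo, genre) from a prompt string."""
    seed = int.from_bytes(hashlib.sha256(prompt.encode("utf-8")).digest()[:8], "big")
    rng = random.Random(seed)
    return {
        "key": rng.choice(KEYS),
        "bpm": rng.randint(70, 160),
        "genre": rng.choice(GENRES),
    }

print(song_starter("late night drive 🌙🚗"))  # the same prompt always yields the same starter
```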
The tracks that emerge from BandLab depend on the interaction of human creators and AI. That holds true for some of the companies focusing on functional audio as well. LifeScore, which uses AI to “create unique, real-time soundtracks for every journey,” relies on “Lego blocks of sound all made in a studio by real musicians playing real instruments through lovely microphones,” says co-founder/CEO Philip Sheppard. Even the sound of a stream trickling through a forest comes from “someone going out with a rig and standing in that stream and recording it.”
The AI kicks in when it comes to assembling that sonic Lego. “The AI is saying, ‘Hey, wouldn’t it be delightful if these could arrange themselves in this different way?’” Sheppard explains. “’How about if we could turn that into eight hours that felt like it was original every time you listened to it?’”
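A toy sketch of that “sonic Lego” idea follows: pre-recorded blocks carry musical metadata, and a seeded arranger chains compatible blocks so a long session never plays out the same way twice. The Block fields and the compatibility rule are illustrative assumptions, not LifeScore’s actual system.

```python
# Toy arranger: chain pre-recorded, metadata-tagged blocks into a long session.
# The fields and the compatibility rule are assumptions for illustration only.
from dataclasses import dataclass
import random

@dataclass
class Block:
    name: str
    key: str      # musical key the block was recorded in
    bpm: int      # tempo of the block
    seconds: int  # block duration

def compatible(a: Block, b: Block) -> bool:
    """Toy rule: keep the key and stay within a small tempo window."""
    return a.key == b.key and abs(a.bpm - b.bpm) <= 5

def arrange(blocks: list[Block], target_minutes: int, seed: int) -> list[str]:
    """Chain compatible blocks until the target duration is reached."""
    rng = random.Random(seed)
    current = rng.choice(blocks)
    sequence, elapsed = [current.name], current.seconds
    while elapsed < target_minutes * 60:
        options = [b for b in blocks if compatible(current, b)] or blocks
        current = rng.choice(options)
        sequence.append(current.name)
        elapsed += current.seconds
    return sequence

library = [
    Block("cello_drone", "D", 60, 45),
    Block("piano_motif", "D", 62, 30),
    Block("stream_field_recording", "D", 60, 60),
]
print(arrange(library, target_minutes=5, seed=42)[:6])  # a new seed = a new arrangement
```

Swapping the seed produces a different but equally valid arrangement, which is the property Sheppard describes as feeling “original every time you listened to it.”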
Not all results of these processes work. “Unsuccessful soundscapes are generated all the time,” says Oleg Stavitsky, co-founder/CEO of Endel, which offers an app that generates music designed to help users focus, relax or sleep. “Each soundscape goes through a multi-step testing process: from automated testing, detecting sound artifacts and bad sound combinations to in-house testing to our community testing.” That community includes some 4,000 people who provide feedback through Endel’s Discord channel.
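As a rough illustration of what the automated first gate of such a testing pipeline might look like, the sketch below runs simple signal checks (clipping and long silent gaps) before anything would reach human or community review. The thresholds and checks are assumptions, not Endel’s actual tests.

```python
# Illustrative automated QA gate: flag obvious signal problems in a rendered
# soundscape. Thresholds and checks are assumptions, not Endel's real tests.
import numpy as np

def automated_checks(audio: np.ndarray, sr: int = 44_100) -> list[str]:
    """Return a list of detected issues; an empty list means 'pass to human review'."""
    issues = []
    if np.max(np.abs(audio)) >= 1.0:
        issues.append("clipping detected")
    # Flag any stretch of near-silence lasting 5 seconds or more.
    window = sr  # 1-second windows
    rms = [np.sqrt(np.mean(audio[i:i + window] ** 2)) for i in range(0, len(audio), window)]
    silent_run = 0
    for level in rms:
        silent_run = silent_run + 1 if level < 1e-4 else 0
        if silent_run >= 5:
            issues.append("long silent gap")
            break
    return issues

# Example: a quiet sine passes; a clipped signal is flagged for review.
sr = 44_100
t = np.linspace(0, 10, 10 * sr)
print(automated_checks(0.5 * np.sin(2 * np.pi * 220 * t), sr))  # []
print(automated_checks(1.5 * np.sin(2 * np.pi * 220 * t), sr))  # ['clipping detected']
```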
“We put human eyes on everything before it goes out,” says Alex Mitchell, founder/CEO of Boomy, a company that offers aspiring musicians the chance to make songs in seconds with help from AI tools. Since 2019, Boomy users have created over 12 million songs. “We have a generic content policy that basically means if all you’re doing is pressing buttons and we detect that, then your release probably won’t be eligible for distribution,” says Mitchell. “We reject way more releases than what gets submitted. That way we’re not flooding the [digital service providers] with a bunch of nonsense.”
How will Boomy scale this approach as it attracts even more users and generates even more millions of songs? “We’re hiring,” Mitchell says.
If you think 100,000 songs a day going into the market is a big number, “you have no idea what’s coming next,” says Alex Mitchell, founder/CEO of Boomy, a music creation platform that can compose an instrumental at the click of an icon.
Boomy is one of many so-called “generative artificial intelligence” music companies — others include Soundful, BandLab’s SongStarter and Authentic Artists — founded to democratize songwriting and production even more than the synthesizer did in the 1970s, the drum machine in the ’80s and ’90s, digital audio workstations in the 2000s and sample and beat libraries in the 2010s.
In each of those cases, however, trained musicians were required to operate this technology in order to produce songs. The selling point of generative AI is that no musical knowledge or training is necessary. Anyone can potentially create a hit song with the help of computers that evolve with each artificially produced guitar lick or drumbeat.
Not surprisingly, this technological breakthrough has also generated considerable anxiety among professional musicians, producers, engineers and others in the recorded-music industry who worry that their livelihoods could be threatened.
“In our pursuit of the next best technology, we don’t think enough about the impact [generative AI] could have on real people,” says Abe Batshon, CEO of BeatStars, a subscription-based platform that licenses beats. “Are we really helping musicians create, or are we just cutting out jobs for producers?”
Not so, say the entrepreneurs who work in the emerging business. From their perspective, generative AI tools are simply the next step in technology’s long legacy of shaping the way music is created and recorded.
“When the drum machine came out, drummers were scared it would take their jobs away,” says Diaa El All, founder/CEO of Soundful, another AI music-generation application that was tested by hit-makers such as Caroline Pennell, Madison Love and Matthew Koma at a recent songwriting camp in Los Angeles. “But then they saw what Prince and others were able to create with it.”
El All says the music that Soundful can instantly generate, based on user-set parameters, like beats per minute or genre, is simply meant to be a “jumping-off point” for writers to build songs. “The human element,” he says, “will never be replaced.”
BandLab CEO Meng Ru Kuok says that having tools to spark song creation makes a huge difference for young music-makers, who, so far, seem to be the biggest adopters of this technology. Meng claims his AI-powered SongStarter tool, which generates a simple musical loop over which creators can fashion a song, makes new BandLab users “80% more likely to actually share their music as opposed to writing from zero.” (Billboard and BandLab collaborated on Bringing BandLab to Billboard, a portal that highlights emerging artists.)
Other applications for generative AI include creating “entirely new formats for listening,” as Endel co-founder/CEO Oleg Stavitsky says. This includes personalized music for gaming, wellness and soundtracks. LifeScore modulates human-made scores in real time, which can reflect how a player is faring in a video game, for example; Endel generates soundscapes, based on user biometrics, to promote sleep, focus or other states (LifeScore also has a similar wellness application); and Tuney targets creators who need dynamic, personalized background music for videos or podcasts but do not have a budget for licensing.
These entrepreneurs contend that generative AI will empower the growth of the “creator economy,” which is already worth over $100 billion and counting, according to Influencer Marketing Hub. “We’re seeing the blur of the line between creator and consumer, audience and performer,” says Mitchell. “It’s a new creative class.”
In the future Mitchell and El All both seem to imagine, every person can have the ability to create songs, much like the average iPhone user already has the ability to capture high-quality photos or videos on the fly. It doesn’t mean everyone will be a professional, but it could become a similarly common pastime.
The public’s fascination with — and fear of — generative AI reached a new milestone this year with the introduction of DALL-E 2, a generator that instantaneously creates images based on text inputs and with a surprising level of precision.
Musician Holly Herndon, who has used AI tools in her songwriting and creative direction for years, says that in the next decade, it will be as easy to generate a great song as it is to generate an image. “The entertainment industries we are familiar with will change radically when media is so easy and abundant,” she says. “The impact is going to be dramatic and very alien to what we are used to.”
Mac Boucher, creative technologist and co-creator of non-fungible token project WarNymph along with his sister Grimes, agrees. “We will all become creators and be capable of creating anything.”
If these predictions are fulfilled, the music business, which is already grappling with oversaturation, will need to recalibrate. Instead of focusing on consumption and owning intellectual property, more companies may shift to artist services and the development of tools that aid song creation — similar to Downtown Music Holdings’ decision to sell off its 145,000-song catalog over the last two years and focus on serving the needs of independent talent.
Major music companies are also investing in and establishing relationships with AI startups. Hipgnosis, Reservoir, Concord and Primary Wave are among those that have worked with AI stem separation company AudioShake, while Warner Music Group has invested in Boomy, Authentic Artists and LifeScore.
The advancement of AI-generated music has understandably sparked a debate over its ethical and legal use. Currently, the U.S. Copyright Office will not register a work created solely by AI, but it will register works created with human input. However, what constitutes that input has yet to be clearly defined.
Answers to these questions are being worked out in court. In 2019, industry leader OpenAI issued a comment to the U.S. Patent and Trademark Office arguing that using copyrighted material to train an AI program should be considered fair use, although many copyright owners and some other AI companies disagree.
Now one of OpenAI’s projects, made in collaboration with Microsoft and GitHub, is battling a class-action suit over a similar issue. Copilot, an AI tool designed to generate computer code, has been accused of frequently replicating copyrighted code because it was trained on billions of lines of protected material written by human developers.
The executives interviewed for this story say they hire musicians to create training material for their programs and do not touch copyright-protected songs.
“I don’t think songwriters and producers are talking about [AI] enough,” says music attorney Karl Fowlkes. “This kind of feels like a dark, impending thing coming our way, and we need to sort out the legal questions.”
Fowlkes says the most important challenge to AI-generated music will come when these tools begin creating songs that emulate specific musicians, much like DALL-E 2 can generate images clearly inspired by copyrighted works from talents like Andy Warhol or Jean-Michel Basquiat.
Mitchell says that Boomy may cross that threshold in the next year. “I don’t think it would be crazy to say that if we can line up the right framework to pay for the rights [to copyrighted music], to see something from us sooner than people might think on that front,” he says. “We’re looking at what it’s going to take to produce at the level of DALL-E 2 for music.”