artificial intelligence
Universal Music Group announced on Monday (Aug. 21) a partnership with YouTube to create a set of principles and best practices around the use of artificial intelligence within the music community, as well as a Music AI Incubator bringing together several UMG artists and producers to help study the effect of the technology, including Anitta, Juanes, Yo Gotti, Louis Bell, ABBA’s Björn Ulvaeus, Ryan Tedder and the estate of Frank Sinatra, among others.
In announcing the new incubator and the three principles — which boil down to embracing the new technological possibilities while protecting creators and establishing content and safety policies — UMG chairman/CEO Lucian Grainge penned an op-ed for YouTube’s blog, in which he acknowledged both the possibilities and the potential dangers of AI.
“Given this tension, our challenge and opportunity as an industry is to establish effective tools, incentives and rewards – as well as rules of the road – that enable us to limit AI’s potential downside while promoting its promising upside,” Grainge writes. “If we strike the right balance, I believe AI will amplify human imagination and enrich musical creativity in extraordinary new ways.”
In reference to the collaboration with YouTube, Grainge points to the video streamer’s development of its Content ID system, which screens user-generated content uploaded to the service for copyrighted works and helps get creators (and copyright owners, such as UMG) paid for their use on the platform. That type of collaboration between DSPs and music companies is foundational to the work YouTube and UMG are beginning with respect to AI, Grainge says.
“The truth is, great entertainment doesn’t just reach audiences on its own,” he writes. “It also requires the global infrastructure, new business models, scaled distribution, innovative partnerships and effective safeguards that enable talented artists to create with freedom and receive fair compensation. … Today, our partnership is building on that foundation with a shared commitment to lead responsibly, as outlined in YouTube’s AI principles, where Artificial Intelligence is built to empower human creativity, and not the other way around. AI will never replace human creativity because it will always lack the essential spark that drives the most talented artists to do their best work, which is intention. From Mozart to The Beatles to Taylor Swift, genius is never random.”
YouTube announced a new initiative with artists and producers from Universal Music Group on Monday (Aug. 21): an “AI Music Incubator” that will include input from Anitta, Juanes, Ryan Tedder, Björn Ulvaeus of ABBA, Rodney Jerkins, d4vd, Max Richter and the estate of Frank Sinatra, among others.
“This group will explore, experiment and offer feedback on the AI-related musical tools and products they are researching,” Universal CEO Lucian Grainge wrote in a blog post. “Once these tools are launched, the hope is that more artists who want to participate will benefit from and enjoy this creative suite.”
Grainge added that “our challenge and opportunity as an industry is to establish effective tools, incentives and rewards — as well as rules of the road — that enable us to limit AI’s potential downside while promoting its promising upside.”
In a statement, Ulvaeus said that “while some may find my decision controversial, I’ve joined this group with an open mind and purely out of curiosity about how an AI model works and what it could be capable of in a creative process. I believe that the more I understand, the better equipped I’ll be to advocate for and to help protect the rights of my fellow human creators.”
Juanes noted in a statement of his own that “artists must play a central role in helping to shape the future of this technology” so “that it is used respectfully and ethically in ways that amplify human musical expression for generations to come.”
This sentiment was echoed by Richter: “Unless artists are part of this process, there is no way to ensure that our interests will be taken into account,” the composer said in a statement. “We have to be in this conversation, or our voices won’t be heard.”
Neal Mohan, YouTube’s CEO, also published the company’s “AI music principles” on Monday. The company promised to “embrace” AI “responsibly together with our music partners” and noted that any AI initiatives “must include appropriate protections and unlock opportunities for music partners who decide to participate.”
YouTube’s “AI music principles” as posted:
AI is here, and we will embrace it responsibly together with our music partners.
AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.
A federal judge ruled Friday (Aug. 18) that U.S. copyright law does not cover creative works created by artificial intelligence, weighing in on an issue that’s being closely watched by the music industry.
In a 15-page written opinion, Judge Beryl Howell upheld a decision by the U.S. Copyright Office to deny a copyright registration to computer scientist Stephen Thaler for an image created solely by an AI model. The judge cited decades of legal precedent that such protection is only afforded to works created by humans.
“The act of human creation — and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts — was … central to American copyright from its very inception,” the judge wrote. “Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.”
In a statement Friday, Thaler’s attorney Ryan Abbott said he and his client “disagree with the district court’s judgment” and vowed to appeal: “In our view, copyright law is clear that the public is the main beneficiary of the law and this is best achieved by promoting the generation and dissemination of new works, regardless of how they are created.”
Though novel, the decision was not entirely surprising. Federal courts have long strictly limited copyright protection to content created by humans, rejecting it for works created by animals, by forces of nature, and even for works claimed to have been authored by divine spirits, like religious texts.
But the ruling was nonetheless important because it came amid growing interest in the role that so-called generative AI tools, similar to the much-discussed ChatGPT, could play in the creation of music and other content. The question of copyright protection is crucial to the future role of AI since works that are not protected would be difficult to monetize.
“Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works,” the judge wrote. “The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions.”
The current case, however — dealing with a work that was admittedly created solely by a computer — “is not nearly so complex,” the judge wrote. Given the lack of any human input at all, she said, Thaler’s case presented a “clear and straightforward answer.”
Though Friday’s ruling came with a clear answer, more challenging legal dilemmas will come in the future from more subtle uses of AI. What if an AI-powered tool is used in the studio to create parts of a song, but human artists add other elements to the final product? How much human direction on the use of those tools is needed for the output to count as “human authorship”?
Earlier this year, a report by the U.S. Copyright Office said that AI-assisted works could still be copyrighted, so long as the ultimate author remains a human being. The report avoided offering easy answers, saying that protection for AI works would be “necessarily a case-by-case inquiry,” and that the final outcome would always depend on individual circumstances.
As artificial intelligence remains the hottest topic of 2023, last month President Biden stood alongside big tech leaders as they pledged impotent “voluntary commitments” to control the emerging technology. Those leaders did so fully comprehending the dangers posed by the rapid and unrestricted penetration of AI through society — which are especially grave for artists and creators.
None of the commitments can give anyone in the music or creative industries comfort. They are as basic and malleable as “prioritizing research” and “sharing information on managing AI risks.” It is impossible to monitor big tech’s compliance with them. Even worse, all of the commitments are unenforceable.
We are already experiencing the consequences of the unbridled development and use of AI. Copyrighted material is being routinely ingested and used by AI conglomerates without the consent or even knowledge of rights holders. AI-generated vocals and deepfakes — given prominence by the fake Drake and The Weeknd track “Heart On My Sleeve” — are prevalent and becoming more realistic by the day. “Artificial streaming” — whereby AI bots create and upload songs, and then artificially inflate streaming numbers — is a massive issue for the streaming industry. Misinformation and inaccuracies in AI output are rampant. Generative AI programs suffer from what experts call “hallucination” — where they make up or misrepresent facts. The victims are widespread, ranging from lawyers given fake legal cases to cite in court papers to a professor named as the accused in a sexual harassment scandal fabricated by AI.
The lacuna of AI regulation and the long wait for decisions in significant AI court cases leave rights holders, and broader society, at the mercy of big tech. That includes the protagonists of the 2016 Cambridge Analytica scandal (where there were laws in place prohibiting such misconduct); the companies implicated in Frances Haugen’s 2021 revelations about Facebook and its platforms’ impact on issues such as teen health and even human trafficking; and those being investigated by the FTC for engaging in unfair or deceptive practices that harm consumers. Some in this group have recently shown immense hostility to lawmakers and threatened to leave the EU if it regulates AI. None of them has a history, or current conduct, that compels public trust.
We also have minimal, if any, transparency around the ingestion and scraping of data by companies that own generative AI, a point recently illustrated by the California Stability AI lawsuit, in which the key issues are copyright infringement and data scraping. Content and data protections were not formulated with current forms of AI in mind. Nor were copyright and right-of-publicity laws. It is unclear whether they provide sufficient protections, and in any case they are difficult to enforce amid the black box of AI data ingestion.
Draconian regulation is not the answer to these issues. Nor is inaction. Congress has been exceptionally slow to move. While some lawmakers have proposed legislation, held a few congressional hearings and suggested new federal agencies to deal with AI, nothing meaningful has resulted. The significant AI litigation is also not progressing quickly. Recently, a federal judge indicated that he was inclined to dismiss the California Stability AI lawsuit but would give the plaintiffs a chance to reformulate their case, meaning any decision or guidance is unlikely this year. Similar court cases will not provide definitive guidance soon, and many may settle. Even if decisions are released, they will not be universally applicable.
U.S. Congress’ inaction starkly contrasts with the European Union’s continued development of its “AI Act,” which includes important guardrails for the use of AI. For generative AI such as ChatGPT, the AI Act requires disclosure of content generated by AI, prohibitions on the ability to generate illegal content and publication of copyrighted data used for training. It also prohibits real-time and remote biometric identification systems and cognitive behavioral manipulation using AI. While the final form of the Act is being negotiated, the EU hopes to “reach an agreement by the end” of 2023.
However, the EU AI Act is not a panacea — especially not for artists and creators, as music industry organizations like GESAC, ICMP, IFPI, IMPALA and IMPF recently pointed out to the EU. To properly protect human creativity and rights in creative output, more can be done to increase transparency regarding the data on which AI is trained and record keeping of the same — particularly content which is protected by registered copyrights. Such publicly accessible information will enable artists and creators to determine if their content has been ingested by AI, and also to make fully informed assessments as to whether AI outputs constitute infringement or fair use of their content. Stronger protections around the use of video and audio deepfakes and digital recreations of humans, particularly celebrities, are becoming an increasing priority to protect privacy, creativity and livelihoods.
The stakes are too high for a wait-and-see approach. We saw what happened when that approach was employed with cryptocurrency: FTX and other similar debacles. The potential impact of generative AI is greater because of its universal application and almost limitless potential. As history and the cacophony of current AI lawsuits make clear, big tech has little regard for the intellectual property, livelihoods and creativity of artists and creators, nor for individuals’ privacy and personal information.
The recent “voluntary commitments” won’t change a thing. Congress should take swift action. It has a unique opportunity to lead the world in AI regulation by passing an enhanced version of the EU AI Act. The risks of inaction amidst the rapid development and use of generative AI — like AI’s capabilities — are in many respects, existentially threatening.
Nicholas Saady is a U.S. and Australian lawyer, who represents high-profile organizations and individuals — including major artists, labels and agents — regarding complex intellectual property, technology and commercial matters. He has also published widely on issues relating to technology, AI, NFTs and cryptocurrency.
Futureverse — a multi-hyphenate AI company — published a new research paper on Thursday (June 9) introducing its forthcoming text-to-music generator. Called Jen-1, the unreleased model is designed to address shortcomings of currently available music generators like Google’s MusicLM, offering higher-fidelity audio and longer, more complex musical works than what is on the market today.
“Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool,” says Shara Senderoff, co-founder of Futureverse and co-founder of Raised in Space, about the model in an exclusive first look with Billboard. Expected to launch in early 2024, Jen can generate songs up to three minutes long and can also help producers finish half-written songs by offering ‘continuation’ and ‘in-painting.’
‘Continuation’ lets a music maker upload an incomplete song to Jen and direct the model to generate a plausible way to finish it, while ‘in-painting’ refers to a process by which the model fills in sections in the middle of a work that are damaged or missing. To Aaron McDonald, the company’s co-founder, Jen’s role is to “extend creativity” of human artists.
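Futureverse has not shared Jen’s code, but mask-based in-painting in diffusion models generally follows a well-known recipe: start from noise, let the model propose the whole signal at each denoising step, then re-impose the known audio everywhere outside the gap so that only the gap is truly generated. Below is a minimal illustrative sketch in Python; the denoiser is a placeholder stub, and nothing here is Futureverse’s actual implementation.

```python
import numpy as np

def denoise_step(x, t):
    """Placeholder for one reverse-diffusion step of a trained audio
    model; Jen's real denoiser is not public."""
    return x * 0.98  # stub: a real model would predict and remove noise

def add_noise(x0, t, steps):
    """Forward-diffuse clean audio to the noise level of step t
    (simplified linear schedule)."""
    alpha = 1.0 - t / steps
    return np.sqrt(alpha) * x0 + np.sqrt(1.0 - alpha) * np.random.randn(*x0.shape)

def inpaint(audio, mask, steps=50):
    """Regenerate only the masked-out region of a waveform.

    audio: 1-D waveform whose gap contents will be ignored
    mask:  1.0 where audio is known and kept, 0.0 where it must be generated
    """
    x = np.random.randn(*audio.shape)  # start from pure noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t)  # the model proposes the entire signal
        # Re-impose the known samples at the matching noise level, so the
        # model only truly generates inside the gap.
        x = mask * add_noise(audio, t, steps) + (1.0 - mask) * x
    return x

# Usage: repair a missing middle second of a three-second clip at 44.1 kHz.
sr = 44100
audio = np.random.randn(3 * sr)  # stand-in for a real waveform
mask = np.ones_like(audio)
mask[sr:2 * sr] = 0.0            # seconds 1-2 are missing
restored = inpaint(audio, mask)
```

In this framing, ‘continuation’ is simply the special case where the mask covers everything after the last bar the writer uploaded.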
When asked why Jen is a necessary invention during a time in which producers, songwriters and artists are more bountiful than ever, McDonald replied, “I think musicians throughout the ages have always embraced new technology that expands the way they can create music,” pointing to electronic music as one example of how new tools shape musical evolution. “To imply that music doesn’t need [any new] technology to expand and become better now is kind of silly… and arbitrary.”
He also sees this as a way to “democratize” the “high end of music [quality],” which he says is now only accessible to musicians with the means to record at a well-equipped studio with trained technicians. With Jen, McDonald and Senderoff hope to satisfy the interests of professional musicians and to encourage newcomers to dabble in songwriting, perhaps for the first time. The two co-founders imagine a world in which everyday people can create music, and have nicknamed this type of user-made output ‘AIGC,’ a twist on the term user-generated content (or ‘UGC’).
Futureverse was formed piecemeal over the last 18 months, merging eleven pre-existing AI and metaverse start-ups into one company to build a number of creative AI models, including those that produce animations, music, sound effects and more. To power its inventions, the company employs the AI protocol from Altered State Machine, a company co-founded by McDonald and included in the merger.
Senderoff says Jen will also be a superior product because Futureverse created it with the input of some of music’s top business executives and creators, unlike its competitors. She does not reveal who those industry partners are or how Jen will be a more ethical and cooperative model for musicians, but she says an announcement providing more information will be released soon.
Despite its proposed upgrades, Jen could face significant challenges from the other text-to-music generators named in the new research paper, some of which were made by the world’s most established tech giants and have already hit the market. But McDonald is unperturbed. “That forces us to think differently. We don’t have the resources that they do, but we started our process with that in mind. I think we can beat them with a different approach: the key insight is working with the music industry as a way to produce a better product.”
Universal Music Group is in the early stages of talks with Google about licensing artists’ voices for songs created by artificial intelligence, according to The Financial Times. Warner Music Group has also discussed this possibility, The Financial Times reported.
Advances in artificial-intelligence-driven technology have made it relatively easy for a producer sitting at home to create a song involving a convincing facsimile of a superstar’s voice — without that artist’s permission. Hip-hop super-fans have been using the technology to flesh out unfinished leaks of songs from their favorite rappers.
One track in particular grabbed the industry’s attention in March: “Heart On My Sleeve,” which masqueraded as a new collaboration between Drake and the Weeknd. At the time, a Universal Music spokesperson issued a statement saying that “stakeholders in the music ecosystem” have to choose “which side of history… to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”
“In our conversations with the labels, we heard that the artists are really pissed about this stuff,” Geraldo Ramos, co-founder and CEO of the music technology company Moises, told Billboard recently. (Moises has developed its own AI-driven voice-cloning technology, along with the technology to detect whether a song clones someone else’s voice.) “How do you protect that artist if you’re a label?” added Matt Henninger, Moises’ vp of sales and business development.
The answer is probably licensing: Develop a system in which artists who are fine with having their voices cloned clear those rights — in exchange for some sort of compensation — while those acts who are uncomfortable with being replicated by technology can opt out. Just as there is a legal framework in place that allows producers to sample 1970s soul, for example, by clearing both the master and publishing rights, in theory there could be some sort of framework through which producers obtain permission to clone a superstar’s voice.
AI-driven technology could “enable fans to pay their heroes the ultimate compliment through a new level of user-driven content,” Warner CEO Robert Kyncl told financial analysts this week. (“There are some [artists] that may not like it,” he continued, “and that’s totally fine.”)
On the same investor call, Kyncl also singled out “one of the first official and professionally AI-generated songs featuring a deceased artist, which came through our ADA Latin division”: a new Pedro Capmany track featuring AI-generated vocals from his father, Jose, who died in 2001. “After analyzing hundreds of hours of interviews, a cappellas, recorded songs and live performances from Jose’s career, every nuance and pattern of his voice was modeled using AI and machine learning,” Kyncl explained.
After the music industry’s initial wave of alarm about AI, the conversation has shifted, according to Henninger. With widely accessible voice-cloning technology available, labels can’t really stop civilians from making fake songs accurately mimicking their artists’ vocals. But maybe there’s a way they can make money from all the replicants.
Henninger is starting to hear different questions around the music industry. “How can [AI] be additive?” he asks. “How can it help revenue? How can it build someone’s brand?”
Reps for Universal and Warner did not respond to requests for comment.
“Fake Drake” and similar controversies have gotten most of the attention, but not all uses of artificial intelligence in music are cause for concern.
When a young Evan Bogart tried his hand at writing a few pop songs for a girl group he managed, he had no idea he would score one of the biggest Billboard hits of 2006.
After the act disbanded, Bogart decided to pitch the songs to labels. One of them landed with a then-fledgling pop artist named Rihanna, who was signed to Def Jam Recordings. Bogart’s song, “S.O.S.,” not only broke Rihanna — it jumped 33 spots to No. 1 on the Billboard Hot 100 in a single week — it minted his songwriting career.
Multiple hits later, Bogart runs his own publishing company and label, Seeker Music, where he encourages his songwriters to create “pitch records” — songs written by songwriters, recorded as demos, and then shopped to various artists. It’s a common practice that increasingly employs a new — albeit controversial — hack: artificial intelligence voice synthesis, which mimics the voice of the artist being pitched.
Bogart says the technology helps his roster better tailor pitches to talent and enables the artists to envision themselves on the track. At a time when acts are demanding a weightier role in the song creation process, AI voice generation offers a creative way to get their attention.
“Producers and writers have always tried to mimic the artists’ voice on these demos anyway,” says attorney Jason Berger, whose producer and songwriter clients are beginning to experiment with AI vocals for their pitches. “I feel like this technology is very impactful because now you can skip that step with AI.”
Traditionally, songwriters will either sing through the track themselves for a demo recording or employ a demo singer. In cases when writers have a specific artist in mind, a soundalike demo singer may be employed to mimic the artist’s voice for about $250-$500 per cut. (One songwriter manager said there are a few in particular who make good money imitating Maroon 5’s Adam Levine, Justin Bieber and other top-tier acts. In general, however, nearly all demo singers hold other jobs in music like background singing, writing, producing or engineering.)
The emerging technology doesn’t generate a melody and vocal from scratch but instead maps the AI-generated tone of the artist’s voice atop a prerecorded vocal. Popular platforms include CoversAI, Uberduck, KitsAI, and Grimes’ own voice model, which she made available for public use in May. Still, these models yield mixed results.
Some artists’ voices might be easier for AI to imitate because they employ Auto-Tune or other voice-processing technology when they record, normalizing the voice and giving it an already computerized feel. A large catalog of recordings also helps because it offers more training material.
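None of these platforms document their internals, but the division of labor described above can be sketched in the abstract: the human demo contributes melody, timing and lyrics, and the voice model contributes only timbre. In the Python sketch below, every function is a hypothetical placeholder for illustration, not the API of any vendor named in this story.

```python
import numpy as np

# All names below are hypothetical stand-ins, not the actual API of
# CoversAI, Uberduck, Kits AI or any other platform.

def extract_performance(vocal):
    """Pull out what the demo singer contributes: pitch contour,
    timing and phonetic content (placeholder implementation)."""
    return {"f0": np.zeros(1000), "phonemes": np.zeros((1000, 64))}

def render_with_timbre(performance, voice_model_path):
    """Re-synthesize the same performance in the target artist's
    learned tone of voice (placeholder implementation)."""
    return np.zeros(44100)

demo_vocal = np.zeros(44100)  # the songwriter's own recorded take
performance = extract_performance(demo_vocal)
converted = render_with_timbre(performance, "artist_voice_model.ckpt")
# Melody, phrasing and lyrics are unchanged; only the tone of voice is
# mapped onto the prerecorded vocal, which is why a thin training
# catalog shows up as audio artifacts rather than wrong notes.
```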
“Certain voices sound really good, but others are not so good,” Bogart says, but he adds that he actually “likes that it sounds a little different from a real voice. I’m not trying to pretend the artist is truly on the song. I’m just sending people a robotic version of the artist to help them hear if the song is a good fit.”
Training is one of the most contentious areas of generative AI because the algorithms are often fed copyrighted material, like sound recordings, without owners’ knowledge or compensation. The legality of this is still being determined in the United States and other countries, but any restrictions that arise probably won’t apply to pitch records because they aren’t released commercially.
“I really haven’t had any negative reactions,” Bogart says of his efforts. “No one’s said ‘did you just pitch your song with my artists’ voice on it to me?’”
Stefán Heinrich, founder and CEO of CoversAI creator mayk.it, says voice re-creation tools could even democratize the songwriting profession altogether, allowing talented unknown writers a chance at getting noticed. “Until now, you had to have the right connections to pitch your songs to artists,” he says. “Now an unknown songwriter can use the power of the technology and the reach of TikTok to show your skills to others and get invited into those rooms.”
While Nick Jarjour — founder and CEO of JarjourCo, advisor to mayk.it and former global head of song management at Hipgnosis — supports the ethical use of this technology, he believes that the industry should take a different approach to applying AI voices on pitches. “The solution is letting the artist who is receiving the demos decide to put their AI voice onto it themselves,” he says, as opposed to publishers and writers sending over demos with the AI treatment already provided. To do this, artists can create their own personal voice models that are more accurate and tailored to their needs, much like Grimes has already done, and then apply those to pitches they receive.
Still, as Berger says, “this is evolving by the day.” Most publishers haven’t put this technology into everyday practice yet, but more are now discussing the idea publicly. At the Association of Independent Music Publishers (AIMP) annual conference in New York City last month, Katie Fagan, head of A&R for Prescription Songs Nashville, said that she recently saw AI vocals on a pitch record for the first time. One of her writers had tested AI to add the voice of Cardi B to the demo. “It could be an interesting pitch tool in the future,” she said, noting that this technology could be used even more simply to change the gender of the demo singer when pitching the same demo to a mix of male and female artists.
“I really don’t see why you wouldn’t pitch a song with a voice that sounds as close as possible to the artist, given the goal is helping the artist hear themselves on the track,” says Berger. “My guess is that people will get used to this pretty quick. I think in six months we are going to have even more to talk about.”
In the more distant future, Bogart wonders what might happen if, as the technology advances, pitch records become the final step in the creative process. “What would be really scary is if someone asks the artist, ‘Hey, do you want to cut this?’ And they reply, ‘I don’t have to, that’s me.’”
Dennis Murcia was excited to get an email from Disney, but the thrill was short-lived. As an A&R and global development executive for the label Codiscos — founded in 1950, Murcia likens it to the “Motown of Latin America” — part of his job revolves around finding new listeners for a catalog of older songs. Disney reached out in 2020 hoping to use Juan Carlos Coronel’s zippy recording of “Colombia Tierra Querida,” written by Lucho Bermudez, in the trailer for an upcoming film titled Encanto. The problem: The movie company wanted an instrumental version of the track, and Codiscos didn’t have one.
“I had to scramble,” Murcia recalls. A friend recommended that he try AudioShake, a company that uses artificial intelligence-powered technology to dissect songs into their component parts, known as stems. Murcia was hesitant — “removing vocals is not new, but it was never ideal; they always came out with a little air.” He needed to try something, though, and it turned out that AudioShake was able to create an instrumental version of “Colombia Tierra Querida” that met Disney’s standards, allowing the track to appear in the trailer.
“It was a really important synch placement for us,” Murcia says. He calls quality stem-separation technology “one of the best uses of AI I’ve seen,” capable of opening “a whole new profit center” for Codiscos.
Catalog owners and estate administrators are increasingly interested in tapping into this technology, which allows them to cut and slice music in new ways for remixing, sampling or placements in commercials and advertisements. Often “you can’t rely on your original listeners to carry you into the future,” says Jessica Powell, co-founder and CEO of AudioShake. “You have to think creatively about how to reintroduce that music.”
Outside of the more specialized world of estates and catalogs, stem-separation is also being used widely by workaday musicians. Moises is another company that offers the technology; on some days, the platform’s users stem-separate 1 million different songs. “We have musicians all across the globe using it for practice purposes” — isolating guitar parts in songs to learn them better, or removing drums from a track to play along — says Geraldo Ramos, Moises’ co-founder and CEO.
While the ability to create missing stems has been around for at least a decade, the tech has been advancing especially rapidly since 2019 — when Deezer released Spleeter, which offered up “already trained state of the art models for performing various flavors of separation” — and 2020, when Meta released its own model called Demucs. Those “really opened the field and inspired a lot of people to build experiences based on stem separation, or even to work on it themselves,” Powell says. (She notes that AudioShake’s research was under way well before those releases.)
As a result, stem separation has “become super accessible,” according to Matt Henninger, Moises’ vp of sales and business development. “It might have been buried in Pro Tools five years ago, but now everyone can get their hands on it.”
Where does artificial intelligence come in? Generative AI refers to programs that ingest reams of data and find patterns they can use to generate new datasets of a similar type. (Popular examples include DALL-E, which does this with images, and ChatGPT, which does it with text.) Stem separation tech finds the patterns corresponding to the different instruments in songs so that they can be isolated and removed from the whole.
“We basically train a model to recognize the frequencies and everything that’s related to a drum, to a bass, to vocals, both individually and how they relate to each other in a mix,” Ramos explains. Done at scale, with many thousands of tracks licensed from independent artists, the model eventually gets good enough to pull apart the constituent parts of a song it’s never seen before.
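For a sense of how little code this takes once a model is trained, Deezer’s open-source Spleeter (mentioned above) packages pretrained separation models behind a two-line interface. A minimal sketch with illustrative filenames (install with pip install spleeter):

```python
from spleeter.separator import Separator

# Load Deezer's pretrained 4-stem model: vocals, drums, bass and "other."
separator = Separator('spleeter:4stems')

# Writes vocals.wav, drums.wav, bass.wav and other.wav under output/song/.
separator.separate_to_file('song.mp3', 'output/')
```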
A lot of recordings are missing those building blocks. They could be older tracks that were cut in mono, meaning that individual parts were never tracked separately when the song was recorded. Or the original multi-track recordings could have been lost or damaged in storage.
Even in the modern world, it’s possible for stems to disappear in hard-drive crashes or other technical mishaps. The opportunity to create high-quality stems for recordings “where multi-track recordings aren’t available effectively unlocks content that is frozen in time,” says Steven Ames Brown, who administers Nina Simone‘s estate, among others.
Arron Saxe of Kinfolk Management, which includes the Otis Redding Estate, believes stem-separation can enhance the appeal of the soul great’s catalog for sample-based producers. “We have 280 songs, give or take, that Otis Redding wrote that sit in a pot,” he says. “How do you increase the value of each one of those? If doing that is pulling out a 1-second snare drum from one of those songs to sample, that’s great.” And it’s an appealing alternative to well-worn legacy marketing techniques, which Saxe jokes are “just box sets and new track listings of old songs.”
Harnessing the tech is only “half the battle,” though. “The second part is a harder job,” Saxe says. “Do you know how to get the music to a big-name producer?” Murcia has been actively pitching electronic artists, hoping to pique their interest in sampling stems from Codiscos.
It can be similarly challenging to get the attention of a brand or music supervisor working in film and TV. But again, stem separation “allows editors to interact with or customize the music a lot more for a trailer in a way that is not usually possible with this kind of catalog material,” says Garret Morris, owner of Blackwatch Dominion, a full-service music publishing, licensing and rights management company that oversees a catalog extending from blues to boogie to Miami bass.
Simpler than finding ways to open catalogs up to samplers is retooling old audio for the latest listening formats. Simone’s estate used stem-separation technology to create a spatial audio mix of her album Little Girl Blue as this style of listening continues to grow in popularity. (The number of Amazon Music tracks mixed in immersive audio has jumped over 400% since 2019, for example.)
Powell expects that the need for this adaptation will continue to grow. “If you buy into the vision presented by Apple, Facebook, and others, we will be interacting in increasingly immersive environments in the future,” she adds. “And audio that is surrounding us, just like it does in the real world, is a core component to have a realistic immersive experience.”
Brown says the spatial audio re-do of Simone’s album resulted in “an incremental increase in quality, and that can be enough to entice a brand new group of listeners.” “Most recording artists are not wealthy,” he continues. “Things that you can do to their catalogs so that the music can be fresh again, used in commercials and used in soundtracks of movies or TV shows, gives them something that makes a difference in their lives.”