Warner Music Group (WMG) sent letters to tech companies this week instructing them not to use the label’s music to train artificial intelligence technology without permission. Sony Music sent out similar letters to over 700 companies in May.
“It is imperative that all uses and implementations of machine learning and AI technologies respect the rights of all those involved in the creation, marketing, promotion, and distribution of music,” Warner’s notice reads.
It continues, “all parties must obtain an express license from WMG to use… any creative works owned or controlled by WMG or to link to or ingest such creative works in connection with the creation of datasets, as inputs for any machine learning or AI technologies, or to train or develop any machine learning or AI technologies.”
The notices from Sony and Warner come in the wake of the AI Act, legislation that was passed in the European Union in May. “Any use of copyright protected content requires the authorization of the rightsholder concerned unless relevant copyright exceptions and limitations apply,” the act notes. “Rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research.”
If rightsholders take that step, then “providers of general-purpose AI models need to obtain an authorization from rightsholders if they want to carry out text and data mining over such works.”
The Cold War between the music industry and much of the AI world has been heating up in recent months. Labels are adamant that AI companies should license their music if they want to use those massive catalogs of recordings to develop song generation technology.
Most AI companies, however, aren’t interested in paying. They often argue that their activities fall under “fair use” — the U.S. legal doctrine that allows for the unlicensed use of copyrighted works in certain situations.
In June, the three major labels sued two AI music companies, Suno and Udio, accusing them both of “willful copyright infringement on an almost unimaginable scale.” “These lawsuits are necessary to reinforce the most basic rules of the road for the responsible, ethical, and lawful development of generative AI systems and to bring Suno’s and Udio’s blatant infringement to an end,” RIAA Chief Legal Officer Ken Doroshow said in a statement.
In a response to the suits, Suno CEO Mikey Shulman said his company’s tech is “designed to generate completely new outputs, not to memorize and regurgitate pre-existing content.” Udio said it “stand[s] behind our technology.”
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Pharrell Williams and Louis Vuitton face a trademark lawsuit over “Pocket Socks”; Diplo is hit with a lawsuit claiming he distributed “revenge porn”; the Village People move forward with a lawsuit against Disney; a longtime attorney repping Britney Spears moves on; and much more.
Top stories this week…
SOCKED WITH A LAWSUIT – Pharrell Williams and Louis Vuitton were hit with a trademark lawsuit over their launch of a high-end line of “Pocket Socks,” a literal sock-with-a-pocket that launched at Paris Fashion Week last year and sells for the whopping price of $530. The case was filed by a California company called Pocket Socks Inc. that says it’s been using that same name for more than a decade on a similar product.

AI FIRMS FIRE BACK – Suno and Udio, the two AI music startups sued by the major record labels last week over allegations that they had stolen copyrighted works on a mass scale to create their models, fired back with statements in their defense. Suno called its tech “transformative” and promised that it would only generate “completely new outputs”; Udio said it was “completely uninterested in reproducing content in our training set.”

REVENGE PORN CLAIMS – Diplo was sued by an unnamed former romantic partner who accused him of violating “revenge porn” laws by sharing sexually explicit videos and images of her without permission. The NYPD confirmed to Billboard that a criminal investigation into the alleged incident was also underway.

DISCO v. DISNEY – A California judge refused to dismiss a lawsuit filed by the Village People that claims the Walt Disney Co. has blackballed the legendary disco band from performing at Disney World. Disney had invoked California’s anti-SLAPP law and argued it had a free speech right to book whatever bands it chooses, but the judge ruled that the company had failed to show the issue was linked to the kind of “public conversation” that’s protected under the statute.

WRIT ME BABY ONE MORE TIME – More than two years after Mathew Rosengart helped Britney Spears escape the longstanding legal conservatorship imposed by her father, the powerhouse litigator is no longer representing the pop star. In a statement, the Greenberg Traurig attorney said he was shifting his focus to other clients: “It’s been an honor to serve as Britney’s litigator and primarily to work with her to achieve her goals.”

PHONY FEES? – SiriusXM was hit with a class action lawsuit that claims the company has been earning billions in revenue by tacking a shady “U.S. Music Royalty Fee” onto consumers’ bills. The fee — allegedly 21.4% of the actual advertised price — represents a “deceptive pricing scheme whereby SiriusXM falsely advertises its music plans at lower prices than it actually charges,” the suit claims.

DIVORCE DRAMA – Amid an increasingly ugly divorce case, Billy Ray Cyrus filed a new response claiming that he had been abused physically, verbally and emotionally by his soon-to-be ex-wife, Firerose. The filing came in response to allegations that it was Cyrus who had subjected Firerose to “psychological abuse” during their short-lived marriage.

UK ROYALTIES LAWSUIT – A group of British musicians filed a joint lawsuit against U.K. collecting society PRS, accusing the organization of a “lack of transparency” and “unreasonable” terms in how it licenses and administers live performance rights. The case, filed at London’s High Court, was brought by King Crimson’s Robert Fripp, as well as rock band The Jesus and Mary Chain and numerous other artists.
On Monday (June 24), the three major music companies filed lawsuits against artificial intelligence (AI) music startups Suno and Udio, alleging the widespread infringement of copyrighted sound recordings “at an almost unimaginable scale.” Spearheaded by the RIAA, the two similar lawsuits arrived four days after Billboard first reported that the labels were seriously considering legal action against the two startups.
Filed by plaintiffs Sony Music, Warner Music Group and Universal Music Group, the lawsuits allege that Suno and Udio have unlawfully copied the labels’ sound recordings to train their AI models to generate music that could “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”
Hours later, Suno CEO Mikey Shulman responded to the lawsuit with a statement sent to Billboard. “Suno’s mission is to make it possible for everyone to make music,” he said. “Our technology is transformative; it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content. That is why we don’t allow user prompts that reference specific artists. We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.”
An RIAA spokesperson fired back at Shulman’s comment, saying: “Suno continues to dodge the basic question: what sound recordings have they illegally copied? In an apparent attempt to deceive working artists, rightsholders, and the media about its technology, Suno refuses to address the fact that its service has literally been caught on tape — as part of the evidence in this case — doing what Mr. Shulman says his company doesn’t do: memorizing and regurgitating the art made by humans. Winners of the streaming era worked cooperatively with artists and rightsholders to properly license music. The losers did exactly what Suno and Udio are doing now.”
Udio responded on Tuesday (June 25) with a lengthy statement posted to the company’s website. You can read it in full below.
In the past two years, AI has become a powerful tool for creative expression across many media – from text to images to film, and now music. At Udio, our mission is to empower artists of all kinds to create extraordinary music. In our young life as a company, we have sat in the studios of some of the world’s greatest musicians, workshopped lyrics with up-and-coming songwriters, and watched as millions of users created extraordinary new music, ranging from the funny to the profound.
We have heard from a talented musician who, after losing the ability to use his hands, is now making music again. Producers have sampled AI-generated tracks to create hit songs, like ‘BBL Drizzy’, and everyday music-lovers have used the technology to express the gamut of human emotions from love to sorrow to joy. Groundbreaking technologies entail change and uncertainty. Let us offer some insight into how our technology works.
Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has “listened” to and learned from a large collection of recorded music.
The goal of model training is to develop an understanding of musical ideas — the basic building blocks of musical expression that are owned by no one. Our system is explicitly designed to create music reflecting new musical ideas. We are completely uninterested in reproducing content in our training set, and in fact, have implemented and continue to refine state-of-the-art filters to ensure our model does not reproduce copyrighted works or artists’ voices.
We stand behind our technology and believe that generative AI will become a mainstay of modern society.
Virtually every new technological development in music has initially been greeted with apprehension, but has ultimately proven to be a boon for artists, record companies, music publishers, technologists, and the public at large. Synthesizers, drum machines, digital recording technology, and the sound recording itself are all examples of once-controversial music creation tools that were feared in their early days. Yet each of these innovations ultimately expanded music as an art and as a business, leading to entirely new genres of music and billions of dollars in the pockets of artists, songwriters and the record labels and music publishers who profit from their creations.
We know that many musicians — especially the next generation — are eager to use AI in their creative workflows. In the near future, artists will compose music alongside their fans, amateur musicians will create entirely new musical genres, and talented creators — regardless of means — will be able to scale the heights of the music industry.
The future of music will see more creative expression than ever before. Let us use this watershed moment in technology to expand the circle of creators, empower artists, and celebrate human creativity.
AI has incredible promise and music creators are first in line exploring just how far these tools and innovations can take us.
At the same time, like every new technology, AI has risks and music creators are also first in line working to ensure it develops in lawful, responsible ways that respect individual autonomy and extend human creativity and possibility.
Yet today, a year and a half after the first mass market AI services were released, we still don’t know whether the promise or the peril of AI will win out.
Too many developers and investors seem to see a zero-sum game – where AI behemoths scrape artists’ and songwriters’ life’s work off the internet for free and without any opportunity for individual choice, autonomy, or values. Where most of us see music, art, and culture to be cherished, they see soulless data to be copied, “tokenized,” and exploited. Where most of us look to collaborate and reach for new horizons, they prefer to exploit art and culture for their own narrow gains. On the road to society’s AI future, it’s their way or no way.
At the top of the list of irresponsible developers are two music generation services, Suno and Udio, who claim to offer the ability to generate “new” music based on simple text prompts – a feat that’s only possible because these models have copied and exploited human-created music on a mass scale without authorization. Both have clearly chosen the low road of secretive, unconsented scraping and exploitation of copyrighted creative works instead of the high road of licensing and partnership.
To address this egregious conduct, a group of music companies have today filed lawsuits against Suno and Udio in federal court in Boston and New York City, respectively. These lawsuits seek to stop the companies’ industrial scale infringement and steer generative AI back onto a healthy, responsible, lawful path.
Suno and Udio clearly recognize the business risks they are taking, going to extreme lengths to avoid transparency and refusing to disclose even the most obvious facts about how they have exploited copyrighted works or to even show us what works they have copied and used. If they really believed their own “fair use” rhetoric, if they really believe what they are doing is legal, would they work so hard to hide the ball?
The worst part is, these are multi-million-dollar companies funded by the deepest pockets in the world who know the long-term value music brings to their projects and who can well afford to pay fair rates for it; they just don’t want to. They willingly invest massive sums in compute and engineering, but want to take the most important ingredient – high-quality human creativity – for free.

It’s a deeply shortsighted gamble – and one that has a track record of failing to deliver. Early internet services that relied on similar arguments and failed to get permission before launching are the ones that flamed out most spectacularly. Meanwhile, digital streamers that partnered with artists and rightsholders to gain permission and innovate a healthy, sustainable marketplace together are today’s leading global music services.

And it’s totally unnecessary. Music creators are reaching out and leaning into opportunities in AI that support both innovation and the rights of artists and songwriters, and they have extended the hand of partnership and licensing to responsible AI companies.
In the last year, Sony, Warner and Universal have used creative AI tools to deliver breathtaking new moments with iconic artists including The Beatles, Roberta Flack, and David Gilmour and the Orb, all with appropriate partnership and consent. Music companies have partnered with ethical cutting-edge AI firms like BandLab, Endel and SoundLabs. And singer/songwriter Randy Travis used AI to record his first new song since largely losing his voice after a 2013 stroke.
But AI platforms should not mistake the music community’s embrace of AI as a willingness to accept continuing mass infringement. While free-market partnerships are the best path forward, we will not allow the status quo scraping and copying of artists’ creative legacies without permission to stand unchallenged. As in the past, music creators will enforce their rights to protect the creative engine of human artistry and enable the development of a healthy and sustainable licensed market that recognizes the value of both creativity and technology.
Generative AI has extraordinary promise. But realizing it will take collaboration, partnership, and genuine respect for human creativity. It’s time for AI companies to choose – go nowhere alone or explore a rich, amazing future together.
Mitch Glazier is Chairman and CEO of the Recording Industry Association of America (RIAA).
The three major music companies filed lawsuits against AI music companies Suno and Udio on Monday, alleging the widespread infringement of copyrighted sound recordings “at an almost unimaginable scale.” The lawsuits, spearheaded by the Recording Industry Association of America (RIAA), arrive four days after Billboard first reported the news the labels were seriously considering legal action against the two start-ups.
Filed by plaintiffs that include Sony Music, Warner Music Group and Universal Music Group, the lawsuits allege that Suno and Udio have unlawfully copied the labels’ sound recordings to train their AI models to generate music that could “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.”
“Building and operating [these services] requires at the outset copying and ingesting massive amounts of data to ‘train’ a software ‘model’ to generate outputs,” the lawyers for the major labels explain. “For [these services], this process involved copying decades worth of the world’s most popular sound recordings and then ingesting those copies [to] generate outputs that imitate the qualities of genuine human sound recordings.”
“Since the day it launched, Udio has flouted the rights of copyright owners in the music industry as part of a mad dash to become the dominant AI music generation service,” the lawsuit against Udio reads. “Neither Udio, nor any other generative AI company, can be allowed to advance toward this goal by trampling the rights of copyright owners.”
The lawsuit is seeking both an injunction to bar the companies from continuing to train on the copyrighted songs, as well as damages from the infringements that have already taken place. Neither Suno nor Udio immediately returned requests for comment on Monday.
Suno and Udio have quickly become two of the most advanced and important players in the emerging field of generative AI music. While many competitors only create instrumentals or lyrics or vocals, Suno and Udio can generate all three at the click of a button with shocking precision. Udio has already produced what could be considered the first AI-generated hit song with the Drake diss track “BBL Drizzy,” which was generated on the platform by comedian King Willonius and popularized by a Metro Boomin remix. Suno has also achieved early success since its December 2023 launch, raising $125 million in funding from investors like Lightspeed Venture Partners, Matrix, Nat Friedman and Daniel Gross.

Both companies have declined to comment on whether or not unlicensed copyrights were part of their datasets. In a previous interview with Billboard, Udio co-founder David Ding said simply that the company trained on “good music.” However, in a series of articles for Music Business Worldwide, Ed Newton-Rex, founder of the AI music safety nonprofit Fairly Trained, found that he was able to generate music from Suno and Udio that “bears a striking resemblance to copyrighted music. This is true across melody, chords, style and lyrics,” he wrote.

The complaints against the two companies also make the case that copyrighted material was used to train these models. Some of the circumstantial evidence cited in the lawsuits includes generated songs by Suno and Udio that sound just like the voices of Bruce Springsteen, Lin-Manuel Miranda, Michael Jackson and ABBA; outputs that parrot the producer tags of Cash Money AP and Jason Derulo; and outputs that sound nearly identical to Mariah Carey’s “All I Want For Christmas Is You,” The Beach Boys’ “I Get Around,” ABBA’s “Dancing Queen,” The Temptations’ “My Girl,” Green Day’s “American Idiot,” and more.
In a recent Rolling Stone profile of Suno, investor Antonio Rodriguez admitted that the start-up does not have licenses for whatever music it has trained on but added that it was not a concern to him. Knowing that labels and publishers could sue was just “the risk we had to underwrite when we invested in the company, because we’re the fat wallet that will get sued right behind these guys… Honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.”
Many AI companies argue that training is protected by copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law. Though fair use has historically allowed for things like news reporting and parody, AI firms say it applies equally to the “intermediate” use of millions of works to build a machine that spits out entirely new creations.
Anticipating that defense from Suno and Udio, the lawyers for the major labels argue that “[Suno and Udio] cannot avoid liability for [their] willful copyright infringement by claiming fair use. The doctrine of fair use promotes human expression by permitting the unlicensed use of copyrighted works in certain, limited circumstances, but [the services] offe[r] imitative machine-generated music—not human creativity or expression.”
News of the complaints filed against Suno and Udio follows a previous lawsuit that also concerned the use of copyrighted materials to train models without a license. Filed by UMG, Concord and ABKCO in October against Anthropic, a major AI company, that case focused more specifically on copied lyrics.
In a statement about the lawsuits, RIAA CEO and chairman Mitch Glazier says, “The music community has embraced AI and we are already partnering and collaborating with responsible developers to build sustainable AI tools centered on human creativity that put artists and songwriters in charge. But we can only succeed if developers are willing to work together with us. Unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work and exploit it for their own profit without consent or pay set back the promise of genuinely innovative AI for us all.”
RIAA Chief Legal Officer Ken Doroshow adds, “These are straightforward cases of copyright infringement involving unlicensed copying of sound recordings on a massive scale. Suno and Udio are attempting to hide the full scope of their infringement rather than putting their services on a sound and lawful footing. These lawsuits are necessary to reinforce the most basic rules of the road for the responsible, ethical, and lawful development of generative AI systems and to bring Suno’s and Udio’s blatant infringement to an end.”
A few weeks back, a member of the team at my company, Ircam Amplify, signed up for one of the many AI music generators available online and input a brief prompt for a song. Within minutes, a new track was generated and promptly uploaded to a distribution platform. In just a couple of hours, that song, in which no human creativity played a part, was available on various streaming platforms. We diligently took action to remove the track from all of them, but the experiment highlighted a significant point.
It is now that simple! My aim here is not to pass judgment on whether AI-generated music is a good or a bad thing — from that perspective, we are neutral — but we think it is important to emphasize that, while the process is easy and cost-effective, there are absolutely no safeguards currently in place to ensure that consumers know if the music they are listening to is AI-generated. Consequently, they cannot make an informed choice about whether they want to listen to such music.
With AI-generated songs inundating digital platforms, streaming services require vast technological resources to manage the volume of tracks, diverting attention away from the promotion of music created by “human” artists and diluting the royalty pool.
Like it or not, AI is here to stay, and more and more songs will find their way onto streaming platforms given how quick and easy the process is. We already know that there are AI-generated music “farms” flooding streaming platforms; over 25 million tracks were recently removed by Deezer, and it is reasonable to speculate that a significant proportion of these were AI-generated.
In the interest of transparency, consumers surely deserve to know whether the music they are tuning into is the genuine product of human creativity or derived from computer algorithms. But how can AI-generated tracks be easily distinguished? Solutions already exist. At Ircam Amplify, we offer a series of audio tools, from spatial sound to vocal separation, that cover the full audio supply chain. One of the latest technologies we have launched is a detector for AI-generated music, designed to help rights holders, as well as platforms, identify tracks that are AI-generated. Through a series of benchmarks, we have been able to determine the “fingerprints” of AI models and apply them to their output to identify tracks coming from AI-music factories.
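The column does not spell out how such a detector works. As a rough, minimal sketch of the fingerprint-matching idea described above (the band-energy feature, function names and threshold are illustrative assumptions, not Ircam Amplify’s actual method), a detector can reduce each track to a compact feature profile and compare it against reference profiles previously measured for known AI generators:

```python
# Illustrative sketch only: reduce a track to a coarse spectral profile and
# compare it against reference "fingerprints" measured for known AI generators.
# The feature, threshold and profiles are placeholders, not Ircam Amplify's method.
import numpy as np

def spectral_fingerprint(samples, n_bands=32):
    """Summarize a mono signal as normalized average energy per frequency band."""
    power = np.abs(np.fft.rfft(samples)) ** 2
    bands = np.array_split(power, n_bands)
    profile = np.array([band.mean() for band in bands])
    return profile / (profile.sum() + 1e-12)

def closest_known_generator(track, reference_profiles, threshold=0.1):
    """Return the name of the nearest reference profile if it is close enough, else None."""
    fingerprint = spectral_fingerprint(track)
    distances = {name: np.abs(fingerprint - ref).sum() for name, ref in reference_profiles.items()}
    name, distance = min(distances.items(), key=lambda item: item[1])
    return name if distance < threshold else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical profiles, as if benchmarked from two known AI music generators.
    references = {
        "generator_a": spectral_fingerprint(rng.normal(size=44100)),
        "generator_b": spectral_fingerprint(np.sin(np.linspace(0, 2000.0, 44100))),
    }
    suspect_upload = rng.normal(size=44100)  # stand-in for a track pulled from a platform
    print(closest_known_generator(suspect_upload, references))
```

In practice such a classifier would use far richer audio features and a trained model, but the workflow is the same: benchmark known generators, then score incoming tracks against those profiles.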
The purpose of any solution should be to support the whole music ecosystem by providing a technical answer to a real problem while contributing to a more fluid and transparent digital music market.
Discussions around transparency and AI are gaining traction all around the world. From Tokyo to Washington, D.C., from Brussels to London, policymakers are considering new legislation that would require platforms to identify AI-generated content. That is the second recommendation in the recent report “Artificial Intelligence and the Music Industry — Master or Servant?” published by the UK Parliament.
Consumers are also demanding it. A recent UK Music survey of more than 2,000 people, conducted by Whitestone Insight, emphatically revealed that more than four out of five people (83%) agree that if AI technology has been used to create a song, it should be distinctly labeled as such.
Similarly, a survey conducted by Goldmedia in 2023 on behalf of rights societies GEMA and SACEM found that 89% of the collective management organizations’ members expressed a desire for AI-generated music tracks and other works to be clearly identified.
These overwhelming numbers tell us that concerns about AI are prevalent within creative circles and are also shared by consumers. There are multiple calls for the ethical use of AI, mostly originating from rights holders — artists, record labels, music publishers, collective management organizations, etc. — and transparency is usually at the core of these initiatives.
Simply put, if there’s AI in the recipe, then it should be flagged. If we can collectively find a way to ensure that AI-generated music is identified, then we will have made serious progress towards transparency.
Nathalie Birocheau currently serves as CEO at Ircam Amplify and is also a certified engineer (Centrale-Supélec) and former strategy consultant who has led several major cultural and media projects, notably within la Maison de la Radio. She became Deputy Director of France Info in 2016, where she led the creation of the franceinfo global news brand.
The three major music companies are weighing a lawsuit against AI startups Suno and Udio for allegedly training on copyrighted sound recordings, according to multiple sources.
The potential lawsuit, which would include Universal Music Group, Warner Music Group and Sony Music, would target a pair of companies that have quickly become two of the most important players in the emerging field of generative AI music. While many of their competitors focus on generating either music or lyrics or vocals, Suno and Udio both allow users to generate all three at the click of a button. Two sources said the lawsuit could come as soon as next week. Reps for the three majors, as well as Suno and Udio, did not respond to requests for comment.

Music companies, including UMG, have already filed a lawsuit against Anthropic, another major AI firm, over the use of copyrighted materials to train models. But that case dealt only with lyrics, which in many ways are legally similar to other written works. The new suit would deal with music and sound itself.
Just a few months from its launch, Udio has already produced what could be considered an AI-generated hit song with “BBL Drizzy,” a parody track created by comedian King Willonius and popularized via a remix by super producer Metro Boomin. Later, the song reached new heights when it was sampled in Sexyy Red and Drake‘s song “U My Everything,” becoming the first major example of sampling an AI-generated song.
Suno has also achieved early success since its launch in December 2023. In May, the company announced via a blog post that it had raised a total of $125 million in funding from a group of notable investors, including Lightspeed Venture Partners, Nat Friedman and Daniel Gross.
Both companies, however, have drawn criticism from many members of the music business who believe that the models train on vast swathes of copyrighted material, including hit songs, without consent, compensation or credit to rights holders. Representatives for Suno and Udio have previously declined to comment on whether or not they train on protected copyrights, with Udio’s co-founders telling Billboard they simply train on “good music.”
In a recent Rolling Stone story about Suno, investor Antonio Rodriguez admitted that Suno does not have licenses for whatever music it has trained on, but he said that was not a concern to him, adding that this lack of such licenses is “the risk we had to underwrite when we invested in the company, because we’re the fat wallet that will get sued right behind these guys… Honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.”
In a series of articles for Music Business Worldwide, Ed Newton-Rex, founder of the AI safety nonprofit Fairly Trained, found that he was able to generate music from Suno and Udio that “bears a striking resemblance to copyrighted music. This is true across melody, chords, style and lyrics,” he wrote. Both companies, however, bar users from prompting the models to copy artists’ styles by typing out sentiments like “a rock song in the style of Radiohead” or from using specific artists’ voices.
The case, if it is filed, would hinge on whether the use of unlicensed materials to train AI models amounts to copyright infringement — something of an existential question for the booming sector, since depriving AI models of new inputs could limit their abilities. Content owners in many sectors, including book authors, comedians and visual artists, have all filed similar lawsuits over training.
Many AI companies argue that such training is protected by copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law. Though fair use has historically allowed for things like news reporting and parody, AI firms say it applies equally to the “intermediate” use of millions of works to build a machine that spits out entirely new creations. That argument will likely be the central question in any lawsuit over AI training.
Some AI companies have taken what is often called a more “ethical” approach to AI training by working directly with companies and rights holders to license their copyrights or form official partnerships instead.
So far, the majors have embraced partnering with AI companies in this way. Already, UMG and WMG have worked with YouTube for its AI voice experiment DreamTrack; Sony has partnered with Vermillio on a remix project for The Orb and David Gilmour; WMG has worked with Edith Piaf’s estate to recreate her voice using AI for an upcoming biopic; UMG launched an AI music incubator with YouTube Music; and most recently, UMG has teamed up with SoundLabs to let their artists create their own AI voice models for personal use in the studio.
Futureverse, an AI music company co-founded by music technology veteran Shara Senderoff, has announced the alpha launch of Jen, its text-to-music AI model. Available for anyone to use on its website, Jen-1 is an AI model that can be safely used by creators, given it was trained on 40 different fully-licensed catalogs, containing about 150 million works in total.
The company’s co-founders, Senderoff and Aaron McDonald, first teased Jen’s launch by releasing a research paper and conducting an interview with Billboard in August 2023. In the interview, Senderoff explained that “Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool.”
Some of Jen’s capabilities, available at its alpha launch, include the ability to generate 10-45 second song snippets using text prompts. To lengthen a song to a full 3:30 duration, users can invoke its “continuation” feature to re-prompt and add additional segments to the track. With a focus on “its commitment to transparency, compensation and copyright identification,” as its press release states, Jen has made much of its inner workings available to the public via its research papers, including the fact that the model uses “latent diffusion,” a technique related to the diffusion processes behind image generators such as Stable Diffusion, DALL-E 2 and Imagen. (It is unclear which other music AI models also use latent diffusion, given many do not share this information publicly.)
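The term “latent diffusion” has a concrete shape worth illustrating. In broad strokes, generation starts from random noise in a compressed latent space, a trained model repeatedly removes a little of that noise while conditioning on the prompt, and a decoder turns the final latent into audio. The toy sketch below shows only that structure; the denoiser, decoder, prompt embedding and step rule are placeholder stand-ins, not anything from Jen’s actual implementation.

```python
# Toy illustration of the latent-diffusion structure referenced above; not Jen's code.
# A real system uses a trained neural denoiser conditioned on the text prompt and a
# trained autoencoder; here both are simple stand-in functions.
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 128   # size of the compressed "latent" representation
STEPS = 50         # number of reverse-diffusion (denoising) steps

def fake_denoiser(latent, step, prompt_embedding):
    """Placeholder for a trained network that predicts the noise present at this step."""
    return 0.1 * latent + 0.01 * prompt_embedding

def fake_decoder(latent):
    """Placeholder for a trained decoder that maps a latent back to an audio waveform."""
    return np.tanh(np.repeat(latent, 256))  # pretend waveform samples

def generate(prompt_embedding):
    latent = rng.normal(size=LATENT_DIM)                   # start from pure noise
    for step in range(STEPS, 0, -1):                       # walk the noise schedule backwards
        predicted_noise = fake_denoiser(latent, step, prompt_embedding)
        latent = latent - (1.0 / STEPS) * predicted_noise  # strip away a little noise each step
    return fake_decoder(latent)

audio = generate(prompt_embedding=rng.normal(size=LATENT_DIM))
print(audio.shape)  # (32768,) pretend audio samples
```

A “continuation” feature like Jen’s can be pictured as running the same loop again while conditioning on the material generated so far, which is how a 10-45 second snippet is extended toward a full-length track.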
Additionally, when works are created with Jen, users receive a JENUINE indicator, verifying that the song was made with Jen at a specific timestamp. To be more specific, this indicator is a cryptographic hash that is then recorded on The Root Network blockchain.
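The JENUINE indicator is described only as a cryptographic hash plus a timestamp recorded on The Root Network; the exact format is not public. As a minimal sketch of that general pattern (field names and the chain write are made up for illustration), hashing the rendered audio and bundling it with a timestamp yields a compact record that anyone holding the file can later re-verify:

```python
# Minimal sketch of a hash-plus-timestamp provenance record; field names and the
# blockchain write are illustrative stubs, not Futureverse's actual scheme.
import hashlib
import json
import time

def make_provenance_record(audio_bytes, generator="jen-alpha"):
    """Hash the generated audio and bundle the digest with a creation timestamp."""
    digest = hashlib.sha256(audio_bytes).hexdigest()
    return {"sha256": digest, "generator": generator, "created_at": int(time.time())}

def publish_to_chain(record):
    """Stand-in for recording the record on a blockchain; returns a fake transaction id."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:16]

if __name__ == "__main__":
    audio_bytes = b"\x00\x01" * 1000            # stand-in for rendered audio data
    record = make_provenance_record(audio_bytes)
    tx_id = publish_to_chain(record)
    # Anyone holding the audio can recompute the hash and compare it to the stored record.
    assert record["sha256"] == hashlib.sha256(audio_bytes).hexdigest()
    print(record, tx_id)
```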
In an effort to work more closely with the music business, Futureverse brought on APG founder/CEO Mike Caren as a founding partner in fall 2023. While its mid-2024 release date makes it a late entrant in the music AI space, the company attributes this delay to making sure its 40 licenses were secured.
For now, Futureverse has declined to comment on which songs are included in Jen’s overall training catalog, but a representative for the company says the 40 catalogs include a number of production libraries. Futureverse says it is also in talks with all major music companies and will have more licenses secured soon for Jen’s beta launch, expected in September 2024. Some licensing partners could be announced as soon as 4-6 weeks from the alpha launch.

For September, Futureverse has more capabilities planned, including longer initial song results, inpainting (the process of filling in missing sections or gaps in a musical piece) and a capability the company calls “StyleFilter,” which lets users upload an audio snippet of an instrument or track and then change its genre or timbre at the click of a button.

Also in September, Futureverse plans to launch a beat marketplace called R3CORD to go along with Jen. The marketplace will let users upload whatever they produce with the model and sell those works to others.

So far, the U.S. Copyright Office has advised that fully AI-generated creations are not eligible for copyright protection. Instead, they are considered “public domain” works that cannot earn royalties the way copyrighted works do, though any human additions made to an AI-assisted work can be copyright protected. (This guidance has already been applied in the music business in the case of Drake and Sexyy Red’s “U My Everything,” which sampled the fully AI-generated sound recording “BBL Drizzy.”)
“We have reached a defining moment for the future of the music industry. To ensure artistry maintains the value it deserves, we must commit to honor the creativity and copyrights of the past, while embracing the tools that will shape the next generation of music creation,” says Senderoff. “Jen empowers creators with tools that enhance their creative process. Jen is a collaborator; a friend in the studio that works with you to ideate and iterate. As we bring Jen to market, we are partnering with music rights holders and aligning with the industry at large to deliver what we believe is the most ethical approach to generative AI.”
“We’re incredibly proud of the work that’s gone into building Jen, from our research and technology to a strategy that we continue to develop with artists’ rights top of mind,” says Caren. “We welcome an open dialogue for those who’ve yet to meet Jen. There’s a seat at the table for every rightsholder to participate in building this next chapter of the music industry.”
Universal Music Group has announced a new partnership with SoundLabs, a “responsible” AI music tools company, to provide AI music editing tools to UMG talent. This includes a real-time personalized AI voice clone plug-in, called “MicDrop,” due to launch later this summer.
The SoundLabs AI-powered music editing tools (AU, VST3, AAX) plug into all major digital audio workstations (DAWs), including Logic, Pro Tools, Ableton and more, letting musicians clean up their vocals, make changes, and even shape-shift their voices at the click of a button, thanks to AI technology. With MicDrop, UMG artists can create their own AI voice models, but these custom models will be exclusive to their creative use and not available to the general public.

The news falls perfectly in line with the company’s “responsible” AI strategy, laid out by UMG chairman and CEO Lucian Grainge at the beginning of 2024. In a January memo to staff obtained by Billboard, Grainge wrote that even though some experts viewed AI as “a looming threat,” UMG’s view was that AI would be “presenting opportunities” for the company: “Just as we did with streaming, we went out to turn those opportunities into reality.”
He went on to explain his two-pronged approach to embracing what he called UMG’s “responsible AI initiative.” First, he would lobby for “guardrails,” or public policies to protect artists’ name, image, voice and likeness from wrongful impersonation, and other “basic rules.”
Second, Grainge set out “to forge groundbreaking private-sector partnerships with AI technology companies,” which now includes its deal with SoundLabs. “In the past, new and often disruptive technology was simply released into the world, leaving the music community to develop the model by which artists would be fairly compensated and their rights protected,” Grainge continued. “In a sharp break with that past, we formed a historic relationship with our longtime partner, YouTube, that gives artists a seat at the table before any product goes to market, including helping to shape AI products’ development and a path to monetization.”
SoundLabs was co-founded by Grammy-nominated producer, composer, software developer and electronic artist BT. After a 25-year career working with David Bowie, Madonna, Sting, Death Cab for Cutie, Peter Gabriel and Seal, he turned to software development to create new music tools to help producers innovate. Over the years, his software products — including patented audio plugins like Stutter Edit, BreakTweaker (iZotope), Polaris and Phobos (Spitfire Audio) — have generated $70 million in gross sales.
The company’s other co-founders — Joshua Dickinson, Dr. Michael Hetrick and Lacy Transeau — are all aligned with BT on an “artist-first” approach to making AI music tools. As a press release from UMG about the partnership states: “SoundLabs was founded with a foundational respect for intellectual property rights and is focused on helping artists retain creative control over their data and models.”
“It’s a tremendous honor to be working with the forward-thinking and creatively aligned Universal Music Group. We believe the future of music creation is decidedly human. Artificial intelligence, when used ethically and trained consensually, has the promethean ability to unlock unimaginable new creative insights, diminish friction in the creative process and democratize creativity for artists, fans, and creators of all stripes. We are designing tools not to replace human artists, but to amplify human creativity,” says BT.
Chris Horton, svp of strategic technology at Universal Music, adds: “UMG strives to keep artists at the center of our AI strategy, so that technology is used in service of artistry, rather than the other way around. We are thrilled to be working with SoundLabs and BT, who has a deep and personal understanding of both the technical and ethical issues related to AI. Through direct experience as a singer and in partnership with many vocal collaborators, BT understands how performers view and value their voices, and SoundLabs will allow UMG artists to push creative boundaries using voice-to-voice AI to sing in languages they don’t speak, perform duets with their younger selves, restore imperfect vocal recordings, and more.”

In the mid-1990s, Jason Paige, then a struggling singer trying to break through with his rock band, could make a solid living by writing Mountain Dew, Taco Bell and Pepto Bismol earworms for jingle houses that dominated the music-in-advertising industry for decades. But during an interview a few weeks ago, Paige — who ultimately became most famous as the voice of the Pokemon theme song “Gotta Catch ‘Em All” — fires up an artificial-intelligence program. Within minutes, he emails eight studio-quality, terrifyingly catchy punk, hip-hop, EDM and klezmer MP3s centered on the reporter’s name, the word Billboard and the phrase “the jingle industry and how it’s changed so much over the years.”
The point is self-evident. “Yeah,” Paige says, about the industry that once sustained him. “It is dark.”
Today, the jingle business has evolved from an assembly line of composers and performers competing to make the next “plop plop fizz fizz” into a more multifaceted relationship between artists and companies, involving brand relationships (like Taylor Swift’s long-standing Target deal); Super Bowl synchs worth hundreds of thousands of dollars; production-house music allowing brands to pick from hundreds of thousands of pre-recorded tracks; and “sonic branding,” in which the Intel bong or Netflix’s tudum are used in a variety of marketing contexts. Performers and songwriters make plenty of revenue on this kind of commercial music, and they’re far more open to doing so than they were in the corporation-skeptical ‘90s. But AI, which allows machines to make all these sounds far more cheaply and quickly for brands than human musicians could ever do, remains a looming threat.
“It definitely has the potential to be disruptive,” says Zeno Harris, a creative and licensing manager for West One Music Group, an LA company that licenses its 85,000-song catalog of original music to brands. “If we could use it as a tool, instead of replacing [musicians], that’s where I see it heading. But money dictates where the industry goes, so we’ll have to wait and see.”
This vision of an AI-dominated future in a crucial revenue-producing business is as disturbing for singers and songwriters as it is for Hollywood screenwriters, radio DJs and voiceover actors. “I just took a life-insurance-brand deal to pay for making my record,” says Grace Bowers, 17, a Nashville blues guitarist. “I’m definitely not the only one who’s doing that. Artists are turning to anyone they can to [make] money, because touring and putting out music isn’t the biggest money-maker. If Arby’s came to me and said, ‘Can you write me a jingle?,’ I’d say, ‘Hell, yeah!’”
End of an Era
From the late 1920s, when a barbershop quartet sang “Have You Tried Wheaties?” on the air for a Minneapolis radio station, through the late ’90s, jingles dominated the music-in-advertising business. Jingle houses like Jam, JSM and Rave competed ferociously to procure contracts with major brands and advertising agencies. In the process, they created lucrative side gigs for decades for rising talents like Luther Vandross, Patti Austin and Richard Marx, who, as jingle veteran Michael Bolton wrote in his memoir, “all shook the jingle-house tree.”
“If you wrote a jingle that was going to be a national campaign, and you sang on it, you could make $50,000, and you could do three of those a year,” recalls John Loeffler, a singer-songwriter who worked on 2,500 jingle campaigns as the head of the Rave Music jingle house, before serving as a BMG executive for years.
John Stamos and Dave Coulier played jingle writers on ABC’s Full House. In this scene from “Jingle Hell,” Mary Kate or Ashley Olsen gives “Uncle Jesse” a high five.
The jingle era ended, for the most part, by the late 1990s, as TV splintered from four must-see broadcast networks to dozens of cable channels, followed by video streaming networks such as Netflix. (Steve Karmen, the ad-agency vet who wrote “Nationwide … is on your side,” authored what many consider the post-mortem for the era with his 2005 book, Who Killed the Jingle?) “I wish the young artists these days could have the opportunities I had,” Loeffler says. “It’s very different.”
Today, artists are far more likely to have broad branding relationships with corporations such as Target — Swift has appeared in commercials and the retailer has sold exclusive versions of her albums for years, and Billie Eilish, Olivia Rodrigo and others have made similar deals — than they are to write catchy ditties for TV and radio. “I personally haven’t heard the word ‘jingle’ in the lifespan of Citizen,” says Theo de Gunzburg, managing partner of Citizen, a five-year-old music house that employs studio artists to create original music for advertisers. “The clients we deal with want to be taken more seriously. The audience is more discerning.”
Citizen employs 10 full-time staff members, including five composers, to create original music for ad campaigns, and, like West One and many other music houses, maintains a library of licensable tracks. The company’s commercial work includes Adidas’ “Runner 321,” which juxtaposes Michael Jordan and Babe Ruth with clips of athletes who have Down syndrome, all set to its own sports percussion tracks. Major music publishers also maintain in-house services for this kind of production music. Warner Chappell Music’s extensive online library includes a hip-hop-style track called “Ready to Fight,” described as “driving trap drums, electric guitar, bold brass, cerebral synths and go-getter male vocals.” WCM represents “specialized songwriters who like to write in short form” and “are also great at writing pop hits,” says Dan Gross, the publisher’s creative sync director, who previously was a music supervisor at top ad agency McCann.
Ba Da Ba Ba Ba
The prevailing catchphrase for music in advertising today is “sonic branding” — designing a brief musical calling card, like the Intel bong, which reflects the feel of a product and can be used in ads, promotions, app tones, TikTok and Instagram videos and even virtual-reality games. “The message of flexibility is really the key thing,” says Simon Kringel, sonic director for Unmute, a Copenhagen agency that has worked with brands such as magazine publisher Aller Media to develop catchy musical snippets that serve as what he calls “watermarks.” “The only chance we have is to make sure every time we interact with our audience, there is something that triggers this brand recall.”
Kringel avoids using the term “jingle” — “that whole approach kind of faded out,” he says — but the most memorable old-school jingles have taken on a classic-rock quality in recent years. McDonald’s 20-year-old “ba da ba ba ba,” “Nationwide … is on your side” and many others are repeated endlessly in TV-streaming commercial breaks. State Farm’s “like a good neighbor … “ remains the emperor of earworms, and the company deploys the Barry Manilow-penned jingle in strategic ways. Around 2020, says State Farm head of marketing Alyson Griffin, the insurance giant conducted a study about its own marketing assets. “They found 80% of people recognized the notes, 95% recognized the slogan — and when they put the two together, there was nearly 100% recognition,” she says. “We recently tripled down on the jingle.”
Similarly, Chili’s recently went retro, hiring Boyz II Men to update its ’90s “baby back ribs” jingle with a new advertisement. “Jingles don’t feel as modern as maybe brands want to be,” says George Felix, chief marketing officer for Chili’s Grill and Bar. “But there’s certainly still runway for jingles if you do it right.”
For now, brands are still spending copiously on advertising music of all kinds — and every once in a while, an actual jingle emerges. Temu, a new e-commerce company owned by a Chinese retail giant, will reportedly spend $3 billion on advertising this year, emphasizing its insanely catchy “ooh, ooh, Temu” jingle that aired during the Super Bowl.
Keeping an Eye on AI
Yet some in the commercial-music industry worry about what Paige’s punk-EDM-hip-hop-klezmer AI-jingle exercise portends. “Do I think the [AI] fears are overblown? No. Am I concerned? Yes,” adds Sally House, CEO of The Hit House, a 19-year-old Los Angeles company that hires composers, engineers, sound designers and performers for music in Progressive, Marvel, HBO and Amazon Prime Video spots. “We’re all waiting for copyright to save us and the government to do something about it.”
But Warner Chappell’s Shaw says his team receives requests for “custom compositions” because brands want to work with the publisher’s stable of A-list songwriters. “AI doesn’t really factor in for us in this instance,” he says.
At Mastercard, which underwent a two-year process to unveil a piece of mellow, new-age-y instrumental music as part of its sonic brand in 2019, AI may be useful for future ad campaigns. But not for creating music. Mastercard employed its own creative people, plus composers, musicologists, sound engineers and even neuroscientists, to work on its distinctive tone. “If I tell the AI engine who is the audience, what am I trying to create, what is the context, and ask it to compose something based on the Mastercard melody, it will do a very fine job,” says Raja Rajamannar, a classically trained musician who is the company’s chief marketing and communications officer. “But if I had to create the Mastercard sonic architecture, I cannot delegate it to AI. The original creation, at this stage, clearly has to come from human beings.”
Paige agrees. Even if AI ultimately takes a cut out of the space — and certainly out of the potential profits for writers — it won’t completely gut the need for real musicians making music for advertising. Classic jingles endure, he says, because they contain humanity and spirit — and because people “know there’s a human being behind the Folgers theme song.”