Symphonic Distribution has forged a partnership with Musical AI, an AI attribution and license management company, that will allow its users to become part of a licensed dataset used in AI training. Joining the dataset is strictly opt-in for Symphonic users, and participating artists can earn additional income for their contribution.
Musical AI’s goal is to clean up what it calls the “Wild West of AI” by tracking every time an AI model uses a given song in the dataset, in hopes that this will help ensure the proper copyright owner is compensated each time their work is employed by the model. Symphonic is the first major rights holder to partner with Musical AI, and Musical AI’s co-founder and COO Matt Adell says his team is currently “build[ing] a new layer based on attribution and security for training AI to the benefit of all involved.”
The AI training process is one of the most contentious areas of the burgeoning tech field. To learn how to generate realistic results, generative AI models must train on millions, if not billions, of works. Often, this includes copyrighted material that the AI company has not licensed or otherwise paid for. Today, many of the world’s biggest AI companies, including ChatGPT creator OpenAI and music AI generators Suno and Udio, take the stance that ingesting this copyrighted material is a form of “fair use” and that compensation is not required. Many copyright owners, however, believe that AI companies must obtain their consent prior to using their works and that they should receive some form of compensation.
Already, this issue has sparked major legal battles in the music business. The three major music companies — Universal Music Group, Warner Music Group and Sony Music — filed a lawsuit against Suno and Udio in June, arguing that training on their copyrights without permission or compensation was a form of widespread copyright infringement. A similar argument was made in a 2023 lawsuit filed by UMG, Concord, and ABKCO against Anthropic for allegedly using their copyrighted lyrics in training without proper licenses.
According to a spokesperson for the companies, one AI firm, which wishes to remain anonymous, has already signed up to use the Symphonic-affiliated dataset, and more are likely to follow. Artists can only opt in if they fully control their own publishing and recordings, to ensure there are no rights issues.
Licenses between AI companies, Musical AI and Symphonic will vary, but each will stipulate that a certain percentage of the revenue generated belongs to the rights holders represented in the dataset. Musical AI will create an attribution report detailing how each song in the dataset was used by the AI company, and AI companies will then pay rights holders either directly or through Musical AI, depending on the terms of the deal.
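Neither company has published the mechanics of those attribution reports, but the basic arithmetic they describe, splitting a licensing pool pro rata by how heavily each song was used in training, can be sketched in a few lines. The following is a minimal, hypothetical illustration; the song names, usage counts and revenue-share figure are assumptions, not details disclosed by Musical AI or Symphonic.

```python
# Hypothetical sketch of a pro-rata payout from an attribution report.
# All names and figures below are invented for illustration only.

attribution_report = {
    # song_id -> number of times the AI model drew on the track during training
    "song_a": 1_200,
    "song_b": 300,
    "song_c": 500,
}

ai_company_revenue = 100_000.00   # assumed revenue reported by the AI company
rights_holder_share = 0.20        # assumed share owed to the dataset's rights holders

pool = ai_company_revenue * rights_holder_share
total_uses = sum(attribution_report.values())

payouts = {
    song_id: round(pool * uses / total_uses, 2)
    for song_id, uses in attribution_report.items()
}

for song_id, amount in payouts.items():
    print(f"{song_id}: ${amount:,.2f}")
```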
“Symphonic’s catalog has clear value to AI companies who need both excellent music by passionate artists and a broad representation of genres and sounds,” says Adell. “We’re thrilled to make them our first major rights holder partner.”
“We strive to make our services the most advanced in the business to support our artists. But any new technology needs to work for our artists and clients — not against them,” adds Jorge Brea, founder and CEO of Symphonic. “By partnering with Musical AI, we’re unlocking a truly sustainable approach to generative AI that honors our community.”
We are in a transformative era of music technology.
The music industry will continue to experience growth throughout the decade, with total music revenue reaching approximately $131 billion by 2030, according to Goldman Sachs. This lucrative business is built on streaming but is also witnessing an unprecedented surge in innovation and entrepreneurship in artificial intelligence. With over 300,000 tech and media professionals laid off since the beginning of 2023, according to TechCrunch, a new wave of talent has been funneled into music tech startups. This influx, coupled with the dramatic decrease in cloud storage costs and the global rise of developer talent, has catalyzed the emergence of many startups (over 400 that we have tracked) dedicated to redefining the music business through AI.
These music tech startups are not just changing the way music is made and released; they are reshaping the very fabric of the industry. They’re well-funded, too. After raising over $4.8 billion in 2022, music tech startups and companies raised almost $10 billion in funding in 2023, according to Digital Music News, indicating that venture capitalists and investors are highly optimistic about the future growth of music technology.
As Matt Cartmell, CEO of Music Technology UK, said, “Our members want us to present the music tech sector as a highly investible proposition, educating investors about the opportunities that lie within. Music tech firms are also looking for innovative models of engagement with labels, DSPs and artists, as well as looking for our help to bring diverse talent into the industry, removing the barriers that continue to restrict individuals with passion and enthusiasm from a career in music technology.”
Riding this wave of investment, several startups have already made a splash in the music AI space. Below is an overview of a few of those companies and how they’re contributing to the industry’s rapid evolution.
Generative AI: The New Frontier
At the heart of this revolution is generative AI, a technology rapidly becoming indispensable for creators across the spectrum. From novices to professional artists and producers, AI-powered platforms offer unprecedented opportunities for musical expression and innovation. It is now possible for users without any formal musical training to craft songs in many genres, effectively democratizing music production. Music fans and content creators can use products that score their social content, while seasoned musicians can use these tools to enhance their creative workflows.
“I like to think of generative AI as a new wave of musical instruments,” says Dr. Maya Ackerman, founder of Wave AI, a company that has introduced tools to aid human songwriters. “The most useful AI tools for artists are those that the musicians can ‘play,’ meaning that the musician is in full control, using the AI to aid in their self-expression rather than be hindered.” These tools focus on generating vocal melodies, chords and lyrics, emphasizing collaboration with musicians rather than replacing them.
For non-professionals, one ambitious company, Beatoven.ai, is building a product to generate music in a host of different ways for many different use cases. “(Users) can get a piece of music generated and customize it as per their content without much knowledge of music,” says Siddharth Bhardwaj, co-founder/CTO of Beatoven. “Going forward, we are working on capturing their intent in the form of multimodal input (text, video, images and audio) and getting them as close to their requirements as possible.”
The concept of the “artist as an avatar,” which draws inspiration from the gaming community, has become increasingly popular. Companies like CreateSafe, the startup powering Grimes’ elf.tech, have built generative audio models that enable anyone to either license the voice of a well-known artist or replicate their own voice. This approach also reflects the adaptive and forward-thinking nature of artists: established acts like deadmau5, Richie Hawtin and Ólafur Arnalds have delved into AI initiatives and investments. A few innovators are also crafting AI music tools tailored for the gaming community, potentially paving the way for the fusion of music and gaming through real-time personalization and adaptive soundtracks during gameplay.
The Community and Collaboration Ecosystem
The journey of music creation is often fraught with challenges, including tedious workflows and a sense of isolation. Recognizing this, several startups are focusing on building communities around music creation and feedback. The Singapore-based music tech giant BandLab recently announced that its user base has reached 100 million, making it one of the biggest success stories in this arena. “Our strength lies in our comprehensive approach to our audience’s needs. From the moment of inspiration to distribution, our platform is designed to be a complete toolkit for music creators and their journey,” says founder Meng Ru Kuok. Several startups are pioneering spaces where creators can collaborate, share insights and support each other, heralding a new era of collective creativity.
A Toolkit for Every Aspect of Music Production
This landscape of music tech startups offers a comprehensive toolkit that caters to every facet of the music creation process:
Track and Stem Organization. Platforms like Audioshake simplify the management of tracks and stems, streamlining the production process.
Vocal & Instrument Addition. These technologies allow any human voice or instrument sound to be added to a recording, expanding the possibilities for frictionless creativity.
Sound Libraries. Services provide or generate extensive libraries of samples, beats and sounds, offering artists a rich palette.
Mix and Master. The process of mixing and mastering audio has historically relied heavily on human involvement, but several startups are using AI to automate these services for a more complete audio production experience. Others also offer the ability to convert stereo songs to spatial audio. (A simplified sketch of one automated mastering step appears after this list.)
Remixing and Freelance Musicianship. Many platforms now offer creative and innovative solutions for remixing music. Additionally, some platforms allow users to easily source and connect with talented artists, session musicians and other music professionals. Need an orchestra? There are tech platforms that can arrange and source one for you remotely.
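To make the idea of automated mastering concrete, one building block such services commonly handle is loudness normalization to a streaming target. The sketch below is a generic illustration under stated assumptions, not any particular startup’s pipeline; it uses the open-source soundfile and pyloudnorm packages, an invented file name and a -14 LUFS target.

```python
# Generic illustration of one automated-mastering step: loudness normalization.
# Assumes the open-source soundfile and pyloudnorm packages; the file names and
# the -14 LUFS target are illustrative choices, not any vendor's settings.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")            # load a stereo mix (hypothetical file)

meter = pyln.Meter(rate)                   # ITU-R BS.1770 loudness meter
current_lufs = meter.integrated_loudness(data)

target_lufs = -14.0                        # a common streaming loudness target
normalized = pyln.normalize.loudness(data, current_lufs, target_lufs)

sf.write("mix_mastered.wav", normalized, rate)
print(f"Adjusted from {current_lufs:.1f} LUFS to roughly {target_lufs} LUFS")
```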
The Future of Music Tech: A Vision of Inclusivity and Innovation
The barriers that once kept people from participating in music creation are falling away. Now, anyone with a passion for sound can create content, engage with fans, find a community and even monetize their work. This more accessible and collaborative music ecosystem offers an exciting glimpse into a future where anyone can participate in the art of creation. The explosion of creators, facilitated by these technologies, also suggests a new economic opportunity for the industry to service this growing creator class.
Drew Thurlow is the founder of Opening Ceremony Media, where he advises music and music tech companies. Previously he was senior vp of A&R at Sony Music and director of artist partnerships & industry relations at Pandora. His first book, about music and AI, will be released by Routledge in early 2026.
Rufy Anam Ghazi is a seasoned music business professional with over eight years of experience in product development, data analysis, research, business strategy, and partnerships. Known for her data-driven decision-making and innovative approach, she has successfully led product development, market analysis, and strategic growth initiatives, fostering strong industry relationships.
In March of 2023, as artificial intelligence barnstormed through the headlines, Goldman Sachs published a report on “the enormous economic potential of generative AI.” The writers explored the possibility of a “productivity boom,” comparable to those that followed seismic technological shifts like the mass adoption of personal computers.
Roughly 15 months later, Goldman Sachs published another paper on AI, this time with a sharply different tone. This one sported a blunt title — “Gen AI: Too Much Spend, Too Little Benefit?” — and it included harsh assessments from executives like Jim Covello, Goldman’s head of global equity research. “AI bulls seem to just trust that use cases will proliferate as the technology evolves,” Covello said. “But 18 months after the introduction of generative AI to the world, not one truly transformative — let alone cost-effective — application has been found.”
This skepticism has been echoed elsewhere. Daron Acemoglu, a prominent M.I.T. scholar, published a paper in May arguing that AI would lead to “much more modest productivity effects than most commentators and economists have claimed.” David Cahn, a partner at Sequoia Capital, warned in June that “we need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we’re all going to get rich quick.”
“I’m worried that we’re getting this hype cycle going by measuring aspiration and calling it adoption,” says Kristina McElheran, an assistant professor of strategic management at the University of Toronto who recently published a paper examining businesses’ attempts to implement AI technology. “Use is harder than aspiration.”
The music industry is no exception. A recent survey of music producers conducted by Tracklib, a company that supplies artists with pre-cleared samples, found that 75% of producers said they’re not using AI to make music. Among the 25% who were playing around with the technology, the most common use cases were to help with highly technical and definitely unsexy processes: stem separation (73.9%) and mastering (45.5%). (“Currently, AI has shown the most promise in making existing processes — like coding — more efficient,” Covello noted in Goldman’s report.) Another multi-country survey published in May by the Reuters Institute found that just 3% of people have used AI for making audio.
At the moment, people use AI products “to do their homework or write their emails,” says Hanna Kahlert, a cultural trends analyst at MIDiA Research, which recently conducted its own survey about AI technology adoption. “But they aren’t interested in it as a creative solution.”
When it comes to assessing AI’s impact — and the speed with which it would remake every facet of society — some recalibration was probably inevitable. “Around the launch of ChatGPT, there was so much excitement and promise, especially because this is a technology that we talk about in pop culture and see in our movies and our TV shows,” says Manav Raj, an assistant professor of management at the University of Pennsylvania’s Wharton School, who studies firms’ responses to technological change. “It was really easy to start thinking about how it could be really transformative.”
“Some of that excitement might have been a little frothy,” he continues. “Even if this is a really important and big technology, it takes time for us to see the effects of these kinds of technological changes in markets.” This was famously true with the development of computers — in 1987, the economist Robert Solow joked, “You can see the computer age everywhere but in the productivity statistics,” a phenomenon later dubbed “the productivity paradox.”
It also takes time to settle the legal and regulatory framework governing AI technologies, which will presumably influence the magnitude of their effects as well. Earlier this year, the major labels sued two genAI music platforms, Suno and Udio, accusing them of copyright infringement on a mass scale; in recently filed court documents, the companies said their activities were lawful under the doctrine of fair use, and that the major labels were just trying to eliminate “a threat to their market share.” Similar suits against AI companies have also been filed in other creative industries.
When McElheran surveyed manufacturing firms, however, few cited regulatory uncertainty as a barrier to AI use. She points out that “they may have had bigger fish to fry, like no use case.” A U.S. Census Bureau survey of businesses published in March found that 84.2% of respondents hadn’t used AI in the previous two weeks, and 80.9% of the firms that weren’t planning to implement AI in the next six months believe it “is not applicable to this business.”
Tracklib’s survey found something similar to McElheran’s. Only around 10% of respondents said concern about copyright was a reason they wouldn’t use AI tools. Instead, Tracklib’s results indicated that producers’ most common objections to using AI were moral, not legal — explanations like, “I want my art to be my own.”
“Generative AI comes up against this wall where it’s so easy, it’s just a push of a button,” Kahlert says. “It’s a fun gimmick, but there’s no real investment on the part of the user, so there’s not much value that they actually place in the outcome.”
In contrast, MIDiA’s survey found that respondents were interested in AI tech that can help them modify tracks by adjusting tempo — a popular TikTok alteration that can be done without AI — and customizing song lyrics. This interest was especially pronounced among younger music fans: Over a third of 20-to-24-year-olds were intrigued by AI tools that could help them play with tempo, and around 20% of that age group liked the idea of being able to personalize song lyrics.
Antony Demekhin, co-founder of the AI music company Tuney, sees a market for “creative tools” that enable “making, editing, or remixing beats and songs without using a complicated DAW, while still giving users a feeling of ownership over the output.”
“Up until recently,” he adds, “the addressable market for those kinds of tools has been small because the number of producers that use professional production software has been limited, so early-stage tech investors don’t frequently back stuff like that.”
Demekhin launched Tuney in 2020, well before the general public was thinking about products like ChatGPT. In the wake of that platform’s explosion, “Investors started throwing money around,” he recalls. At the same time, “nobody knew what questions to ask. What is this trained on? Are you exposed to legal risk? How easy would it be for Meta to replicate this and then make it available on Instagram?”
Today, investors are far better informed, and conversations with them sound very different, Demekhin says. “Cooler heads are prevailing,” he continues. “Now there’s going to be a whole wave of companies that make more sense because people have figured out where these technologies can be useful — and where they can’t.”
AI music firms Suno and Udio are firing back with their first responses to sweeping lawsuits filed by the major record labels, arguing that they were free to use copyrighted songs to train their models and claiming the music industry is abusing intellectual property to crush competition.
In legal filings on Thursday, the two firms admitted to using proprietary materials to create their artificial intelligence, with Suno saying it was “no secret” that the company had ingested “essentially all music files of reasonable quality that are accessible on the open Internet.”
But both companies said that such use was clearly lawful under copyright’s fair use doctrine, which allows for the reuse of existing materials to create new works.
“What Udio has done — use existing sound recordings as data to mine and analyze for the purpose of identifying patterns in the sounds of various musical styles, all to enable people to make their own new creations — is a quintessential ‘fair use,’” Udio wrote in its filing. “Plaintiffs’ contrary vision is fundamentally inconsistent with the law and its underlying values.”
The filings, lodged by the same law firm (Latham & Watkins) that reps both companies, go beyond the normal “answer” to a lawsuit — typically a sparse document that simply denies each claim. Instead, Suno and Udio went on offense, with extended introductions that attempt to frame the narrative of a looming legal battle that could take years to resolve.
In doing so, they took square aim at the major labels (Universal Music Group, Warner Music Group and Sony Music Entertainment) that filed the case in June — a group that they said “dominates the music industry” and is now abusing copyright law to maintain that power.
“What the major record labels really don’t want is competition,” Suno wrote in its filing. “Where Suno sees musicians, teachers and everyday people using a new tool to create original music, the labels see a threat to their market share.”
Suno and Udio have quickly become two of the most important players in the emerging field of AI-generated music. Udio has already produced what could be considered an AI-generated hit with “BBL Drizzy,” a parody track popularized with a remix by super-producer Metro Boomin and later sampled by Drake himself. And as of May, Suno had raised a total of $125 million in funding to create what Rolling Stone called a “ChatGPT for music.”
In June, the major labels sued both companies, claiming they had infringed copyrighted music on an “unimaginable scale” to train their models. The lawsuits accused the two firms of “trampling the rights of copyright owners” as part of a “mad dash to become the dominant AI music generation service.”
The case followed similar lawsuits filed by book authors, visual artists, newspaper publishers and other creative industries, which collectively pose what could be a trillion-dollar legal question: Is it infringement to use vast troves of proprietary works to build an AI model that spits out new creations? Or is it just a form of legal fair use, transforming all those old works into something entirely new?
In Thursday’s response, Suno and Udio argued unequivocally that it was the latter. They likened their machines to a “human musician” who had played earlier songs to learn the “building blocks of music” — and then used what they had learned to create entirely new works in existing styles.
“Those genres and styles — the recognizable sounds of opera, or jazz, or rap music — are not something that anyone owns,” Suno wrote in its filing. “Our intellectual property laws have always been carefully calibrated to avoid allowing anyone to monopolize a form of artistic expression, whether a sonnet or a pop song.”
The lawsuits from the labels, Suno and Udio say, are thus an abuse of copyright law, aimed at claiming improper ownership over “entire genres of music.” They called the litigation an “attempt to misuse IP rights to shield incumbents from competition and reduce the universe of people who are equipped to create new expression.”
Both filings hint at how Suno and Udio will make their fair use arguments. The two companies say the cases will not really turn on the “inputs” — the millions of songs used to train the models — but rather on the “outputs,” or the new songs that are created. While the labels are claiming that the inputs were illegally copied, the AI firms say the music companies “explicitly disavow” that any output was a copycat.
“That concession will ultimately prove fatal to plaintiffs’ claims,” Suno wrote in its filing. “It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product.”
A spokeswoman and an attorney for the labels did not immediately return a request for comment.
A bipartisan group of U.S. senators introduced the highly anticipated NO FAKES Act on Wednesday (July 31), which aims to protect artists and others from AI deepfakes and other nonconsensual replicas of their voices, images and likenesses.
If passed, the legislation would create federal intellectual property protections for the so-called right of publicity for the first time, which restricts how someone’s name, image, likeness and voice can be used without consent. Currently, such rights are only protected at the state level, leading to a patchwork of different rules across the country.
Unlike many existing state-law systems, the federal right that the NO FAKES Act would create would not expire at death and could be controlled by a person’s heirs for 70 years after their passing. To balance personal publicity rights and the First Amendment right to free speech, the NO FAKES Act also includes specific carveouts for replicas used in news coverage, parody, historical works or criticism.
Non-consensual AI deepfakes are of great concern to the music business, given so many of its top-billing talent have already been exploited in this way. Taylor Swift, for example, was the subject of a number of sexually-explicit AI deepfakes of her body; the late Tupac Shakur‘s voice was recently deepfaked by fellow rapper Drake in his Kendrick Lamar diss track “Taylor Made Freestyle,” which was posted, and then deleted, on social media; and Drake and The Weeknd had their own voices cloned by AI without their permission in the TikTok viral track “Heart On My Sleeve.”
The NO FAKES Act was first released as a draft bill by the same group of lawmakers — Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) — last October, and its formal introduction in the U.S. Senate builds on the same principles laid out in the No AI FRAUD Act, a similar bill introduced in the U.S. House of Representatives earlier this year.
While the music industry is overwhelmingly supportive of the creation of a federal right of publicity, there are detractors in other creative fields, including film/TV, who pose a threat to the passage of bills like the NO FAKES Act. In a speech during Grammy week earlier this year, National Music Publishers’ Association (NMPA) president/CEO David Israelite explained that “[a federal right of publicity] does not have a good chance… Within the copyright community we don’t agree. … Guess who is bigger than music? Film and TV.” Still, the introduction of the NO FAKES Act and the No AI FRAUD Act shows there is bicameral and bipartisan support for the idea.
Earlier this year, proponents for strengthened publicity rights laws celebrated a win on the state level in their fight to regulate AI deepfakes with the passage of the ELVIS Act in Tennessee. The landmark law greatly expanded protections for artists and others in the state, and explicitly protected voices for the first time.
Though it was celebrated by a who’s who of the music business — from the Recording Academy, Recording Industry Association of America (RIAA), Human Artistry Campaign, NMPA and more — the act also drew a few skeptics, like Professor Jennifer Rothman of the University of Pennsylvania’s law school, who raised concerns that the law may have been an “overreaction” that could potentially expose tribute bands, interpolations or the sharing of photos a celebrity didn’t authorize to lawsuits.
“The Human Artistry Campaign applauds Senators Coons, Blackburn, Klobuchar and Tillis for crafting strong legislation establishing a fundamental right putting every American in control of their own voices and faces against a new onslaught of highly realistic voice clones and deepfakes,” Dr. Moiya McTier, senior advisor of the Human Artistry Campaign — a global initiative for responsible AI use, supported by 185 organizations in the music business and beyond — says of the bill. “The NO FAKES Act will help protect people, culture and art — with clear protections and exceptions for the public interest and free speech. We urge the full Senate to prioritize and pass this vital, bipartisan legislation. The abusive deepfake ecosystem online destroys more lives and generates more victims every day — Americans need these protections now.”
The introduction of the bill is also celebrated by American Federation of Musicians (AFM), ASCAP, Artist Rights Alliance (ARA), American Association of Independent Music (A2IM), Association of American Publishers, Black Music Action Coalition (BMAC), BMI, Fan Alliance, The Azoff Company co-president Susan Genco, Nashville Songwriters Association International (NSAI), National Association of Voice Actors (NAVA), National Independent Talent Organization, National Music Publishers’ Association (NMPA), Organización de Voces Unidas (OVU), Production Music Association, Recording Academy, Recording Industry Association of America (RIAA), SAG-AFTRA, SESAC Music Group, Songwriters of North America (SoNA), SoundExchange, United Talent Agency (UTA) and WME.
The lawsuits filed by the major labels against the AI companies Suno and Udio could be the most important cases to the music business since the Supreme Court Grokster decision, as I explained in last week’s Follow the Money column. The outcomes are hard to predict, however, because the central issue will be “fair use,” a U.S. legal doctrine shaped by judicial decisions that involves famously — sometimes notoriously — nuanced determinations about art and appropriation. And although most creators focus more on issues around generative AI “outputs” — music they’ll have to compete with or songs that might sound similar to theirs — these cases involve the legality of copying music for the purposes of training AI.
Neither Suno nor Udio has said how they trained their AI programs, but both have essentially said that copying music in order to do so would qualify as fair use. Determining whether it does could touch on the development of Google Books, the compatibility of the Android operating system and even a Supreme Court case that involves Prince, Andy Warhol and Vanity Fair. It’s the kind of fair use case that once inspired a judge to call copyright “the metaphysics of the law.” So let’s get metaphysical!
Fair use essentially provides exceptions to copyright, usually for the purpose of free expression, allowing for quotation (as in book or film reviews) and parody (to comment on art), among other things. (The iconic example in music is the Supreme Court case over 2 Live Crew’s parody of Roy Orbison’s “Oh, Pretty Woman.”) These determinations involve a four-factor test that weighs “the purpose and character of the use”; “the nature of the copyrighted work”; how much, and how important a part, of the work is used; and the effect of the use upon the potential market for or value of the copyrighted work. Over the last decade or so, though, the concept of “transformative use,” derived from the first factor, expanded in a way that allowed the development of Google Books (the copying of books to create a database and excerpts) and the use of some Oracle API code in Google’s Android system — which could arguably be said to go beyond the origins of the concept.
Could copying music for the purposes of machine learning qualify as well?
In a paper on the topic, “Fair Use in the U.S. Redux: Reformed or Still Deformed,” the influential Columbia Law School professor Jane Ginsburg suggests that the influence of the transformative use argument might have reached its peak. (I am oversimplifying a very smart paper, and if you are interested in this topic, you should read it.)
The Supreme Court decision on the Google-Oracle case involved part of a computer program, far from the creative “core” of copyright, and music recordings would presumably be judged differently. The Supreme Court also made a very different decision last year in a case that pitted the Andy Warhol Foundation for the Visual Arts against prominent rock photographer Lynn Goldsmith. The case involved an Andy Warhol silkscreen of Prince, based on a Goldsmith photograph that the magazine Vanity Fair had licensed for Warhol to use. Warhol used the photo for an entire series — which Goldsmith only found out about when the magazine used the silkscreen image again for a commemorative issue after Prince died.
On the surface, this seemed to cast the Supreme Court Justices as modern art critics, in a position to judge all appropriation art as infringing. But the case wasn’t about whether Warhol’s silkscreen inherently infringed Goldsmith’s copyright but about whether it infringed it for licensed use by a magazine, in a way where it could compete with the original photo. There was a limit to transformative use, after all. “The same copying,” the court decided, “may be fair when used for one purpose but not another.”
So it might constitute fair use for Google to copy entire books for the purpose of creating a searchable database about those books with excerpts from them, as it did for Google Books — but not necessarily for Suno or Udio to copy terabytes of recordings to spur the creation of new works to compete with them, especially if it results in similar works. In the first case, it’s hard to find real economic harm — there will never be much of a market for licensing book databases — but there’s already a nascent market for licensing music to train AI programs. And, unlike Google Books, the AI programs are designed to make music to compete with the recordings used to train them. Obviously, licensing music to train an AI program is what we might call a secondary use — but so is turning a book into a film, and no one doubts they need permission for that.
All of this might seem like I think the major labels will win their cases, but that’s a tough call — the truth is that I just don’t think they’ll lose. And there’s a lot of space between victory and defeat here. If one of these cases ends up going to the Supreme Court — and if one of these doesn’t, another case about AI training surely will within the next few years — the decision might be more limited than either side is looking for, since the court has tended to step lightly around technology issues.
It’s also possible that the decision could depend on whether the outputs that result from all of this training are similar enough to copyrighted works to qualify, or plausibly qualify, as infringing. Both label lawsuits are full of such examples, presumably because that could make a difference. These cases are about the legality of AI inputs, but a fair use determination on that issue could easily involve whether those inputs lead to infringing output.
In the end, Ginsburg suggests, “system designers may need to disable features that would allow users to create recognizable copies.” Except that — let’s face it — isn’t that really part of the fun? Sure, AI music creation might eventually grow to maturity as some kind of art form — it already has enormous practical value for songwriters — but for ordinary consumers it’s still hard to beat Frank Sinatra singing Lil Jon’s “Get Low.” Of course, that could put a significant burden on AI companies — with severe consequences for crossing a line that won’t always be obvious. It might be easier to just license the content they need. The next questions, which will be the subject of future columns, involve exactly what they need to license and how they might do that, since it won’t be easy to get all the rights they need — or in some cases even agree on who controls them.
AI-focused music production, distribution and education platform LANDR has devised a new way for musicians to capitalize on the incoming AI age with consent and compensation in mind. With its new Fair Trade AI program, any musician who wishes to join can be part of this growing pool of songs that will be used to […]
Disney Music Group and AudioShake, an AI stem separation company, are teaming up. As part of their partnership, AudioShake will help Disney separate the individual instrument tracks (“stems”) for some of its classic back catalog and provide AI lyric transcription.
According to a press release, Disney says that it hopes this will “unlock new listening and fan engagement experiences” for the legendary catalog, which includes everything from the earliest recordings of Steamboat Willie (which is now in the public domain), to Cinderella, to The Lion King, to contemporary hits like High School Musical: The Musical: The Series.
Because so many of Disney’s greatest hits were recorded decades ago with now-outdated technology, the new partnership will allow the company to use that music in new ways. Stem separation, for instance, can help with remixing and mastering old audio for classic Disney films. It could also allow Disney to isolate the vocals or the instrumentals on old hits.
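AudioShake’s technology is proprietary, but the general idea of AI stem separation can be illustrated with the open-source Demucs model. The sketch below is a stand-in under stated assumptions, not AudioShake’s or Disney’s pipeline: it shells out to the demucs command-line tool (pip install demucs) to split a mixed recording into a vocal stem and an accompaniment stem, using an invented file name.

```python
# Illustrative only: splitting a mixed recording into stems with the
# open-source Demucs tool. This is a generic stand-in, not AudioShake's
# proprietary system; "old_hit.wav" is a made-up file name.
import subprocess
from pathlib import Path

track = Path("old_hit.wav")

# --two-stems=vocals asks Demucs for just a vocal stem and an accompaniment stem.
subprocess.run(["demucs", "--two-stems=vocals", str(track)], check=True)

# Demucs writes its results under ./separated/<model_name>/<track_name>/
for stem in Path("separated").rglob("*.wav"):
    print("Extracted stem:", stem)
```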
Disney’s partnership with AudioShake began when the tech start-up was selected for the 2024 Disney Accelerator, a business development program designed to accelerate the growth of new companies around the world. Other participants included AI voice company Eleven Labs and autonomous vehicle company Nuro.
At the accelerator’s Demo Day 2024, held May 23, AudioShake hinted at what could be to come in its partnership with Disney when CEO Jessica Powell showed how stem separation could be used to fix muffled dialogue and music in the opening scene of Peter Pan.
Disney explained at the Demo Day that many of its earlier recordings are missing their original stems, which has limited its ability to use older hits in sync licensing, remastering and emerging formats like immersive audio and lyric videos.
“We were deeply impressed by AudioShake’s sound separation technology, and were among its early adopters,” says David Abdo, SVP and General Manager of Disney Music Group. “We’re excited to expand our existing stem separation work as well as integrate AudioShake’s lyric transcription system. AudioShake is a great example of cutting-edge technology that can benefit our artists and songwriters, and the team at AudioShake have been fantastic partners.”
“Stems and lyrics are crucial assets that labels and artists can use to open up recordings to new music experiences,” says Powell. “We’re thrilled to deepen our partnership with Disney Music Group, and honored to work with their extensive and iconic catalog.”
SEVENTEEN singer/producer WOOZI has responded to a British news feature claiming that the K-pop group is using artificial intelligence when recording its music. WOOZI, who also co-writes the group’s songs, reportedly had thoughts after a BBC News story posted last week claimed that SEVENTEEN’s most recent single, “Maestro,” was an example of the group’s dive […]
For a little over a year, since the Fake Drake track bum rushed the music business, executives have been debating whether generative artificial intelligence is a threat or an opportunity. The answer is both — creators are already using AI tools and they already compete with AI music. But the future will be shaped by the lawsuits the major labels filed two weeks ago against Suno and Udio for copyright infringement for allegedly using the labels’ music to train their AI programs.
Like most debates about technology, this one will be resolved in real time — Internet start-ups tend to believe that it’s easier to ask forgiveness than to get permission. Although neither Suno nor Udio has said that it trained its program on major label music, the label lawsuits point out that both companies have said that using copyrighted works in this manner would be “fair use,” a defense for otherwise infringing conduct. They’re not admitting they did it — just defending themselves in case they did.
Whether this qualifies as fair use is well over a million-dollar question, since statutory damages can reach $150,000 per work infringed. The stakes are even higher than that, though. If ingesting copyrighted works on a mass scale to train an AI is allowed under fair use, the music business could have a hard time limiting, controlling, or making money on this technology.
If it’s not, the labels will gain at least some control over these companies, and perhaps the entire nascent sector. There are other ways to limit AI, from legislation to likeness rights, but only copyright law has the kind of statutory damages that offer real leverage.
Although neither Suno nor Udio has issued a legal response, Suno CEO Mikey Shulman released a statement that said the labels had “reverted to their old lawyer-led playbook.” The obvious reference is Napster, since most people believe that in the late ‘90s the music business saw the future and decided to sue it.
That’s not exactly what happened. The major labels knew that the future was digital — they lobbied for the 1995 Digital Performance Right in Sound Recordings Act, which ensured that streaming services had to pay to play recordings in the U.S., even though traditional radio stations don’t. They just didn’t want peer-to-peer services to distribute their content for nothing — or to have to negotiate with them while they were doing so. In July 2000, three months after the major labels sued Napster, leading executives sat down with the company to try to figure out a deal, but they couldn’t agree; the labels negotiated as though Napster needed a license and Napster negotiated as though it didn’t. In the end, after a decade of lawsuits and lost business, creators and rightsholders established their right to be paid for online distribution and the music sector began recovering.
And here we are again: History isn’t repeating itself, but it seems to be rhyming. If the labels negotiated with Suno and Udio now, how much would those companies be willing to pay for rights they may or may not need? It’s easy to make fun of either side, but it’s hard to know how much to charge for rights, or pay for them, before you even know if you need them.
These lawsuits aren’t about whether creators and rightsholders should embrace or avoid AI — it’s coming, for good and ill. The question, in modern terminology, is whether the embrace will be consensual, and under what terms. Most creators and rightsholders want to do business with AI companies, as long as that actually means business — negotiating deals in something that resembles a free market.
What they’re afraid of is having technology companies build empires on their work without paying to use it — especially to create a product that creates music that will compete with them. That depends on the outcome of these lawsuits. Because if you don’t have the right to say no, you can’t really get to a fair yes.
A couple of weeks ago, at a culture conference organized by the German recorded music trade organization, I heard German Justice Minister Marco Buschmann put this as well as anyone I’ve ever heard. “The moment people have the opportunity to say ‘No’ and to enforce this ‘No,’ they gain a legal negotiating position,” he said in a speech. (Buschmann also makes electronic music, as it happens.) In the European Union, rightsholders can opt out of AI ingestion, which is far from ideal but better than nothing.
What happens in the U.S. — which often shapes the global media business — might hinge on the results of these lawsuits. There are two dozen copyright lawsuits about AI, but these look to be among the most important. Some of the others are mired in jurisdictional maneuvering, while others simply aren’t as strong: a lawsuit filed by The New York Times could involve a different fair use determination if the ingested articles are used as sources but not to generate new work. These cases are straightforward, but they won’t move fast: It’s easy to imagine the issue going to the Supreme Court.
Despite the high stakes — and what will almost certainly be a rap beef’s worth of sniping back and forth — determinations of fair use involve a considerable amount of nuance. Fair use makes it legal in some cases to excerpt or even use all of a copyrighted work without permission, usually for the purposes of commentary. (An iconic Supreme Court case involved 2 Live Crew’s parody of the Roy Orbison song “Oh, Pretty Woman.”) This is far from that, but Suno and Udio will presumably argue that their actions qualify as “transformative use” in the way the Google Books project did. Next week I’ll write about the arguments we can expect to hear, the decisions we could see, and what could happen while we wait for them.