Artificial Intelligence
In March of 2023, as artificial intelligence barnstormed through the headlines, Goldman Sachs published a report on “the enormous economic potential of generative AI.” The writers explored the possibility of a “productivity boom,” comparable to those that followed seismic technological shifts like the mass adoption of personal computers.
Roughly 15 months later, Goldman Sachs published another paper on AI, this time with a sharply different tone. This one sported a blunt title — “Gen AI: Too Much Spend, Too Little Benefit?” — and it included harsh assessments from executives like Jim Covello, Goldman’s head of global equity research. “AI bulls seem to just trust that use cases will proliferate as the technology evolves,” Covello said. “But 18 months after the introduction of generative AI to the world, not one truly transformative — let alone cost-effective — application has been found.”
This skepticism has been echoed elsewhere. Daron Acemoglu, a prominent M.I.T. scholar, published a paper in May arguing that AI would lead to “much more modest productivity effects than most commentators and economists have claimed.” David Cahn, a partner at Sequoia Capital, warned in June that “we need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we’re all going to get rich quick.”
“I’m worried that we’re getting this hype cycle going by measuring aspiration and calling it adoption,” says Kristina McElheran, an assistant professor of strategic management at the University of Toronto who recently published a paper examining businesses’ attempts to implement AI technology. “Use is harder than aspiration.”
The music industry is no exception. A recent survey of music producers conducted by Tracklib, a company that supplies artists with pre-cleared samples, found that 75% of producers said they’re not using AI to make music. Among the 25% who were playing around with the technology, the most common use cases were to help with highly technical and definitely unsexy processes: stem separation (73.9%) and mastering (45.5%). (“Currently, AI has shown the most promise in making existing processes — like coding — more efficient,” Covello noted in Goldman’s report.) Another multi-country survey published in May by the Reuters Institute found that just 3% of people have used AI for making audio.
At the moment, people use AI products “to do their homework or write their emails,” says Hanna Kahlert, a cultural trends analyst at MIDiA Research, which recently conducted its own survey about AI technology adoption. “But they aren’t interested in it as a creative solution.”
When it comes to assessing AI’s impact — and the speed with which it would remake every facet of society — some recalibration was probably inevitable. “Around the launch of ChatGPT, there was so much excitement and promise, especially because this is a technology that we talk about in pop culture and see in our movies and our TV shows,” says Manav Raj, an assistant professor of management at the University of Pennsylvania’s Wharton School, who studies firms’ responses to technological change. “It was really easy to start thinking about how it could be really transformative.”
“Some of that excitement might have been a little frothy,” he continues. “Even if this is a really important and big technology, it takes time for us to see the effects of these kinds of technological changes in markets.” This was famously true with the development of computers — in 1987, the economist Robert Solow joked, “You can see the computer age everywhere but in the productivity statistics,” a phenomenon later dubbed “the productivity paradox.”
It also takes time to settle the legal and regulatory framework governing AI technologies, which will presumably influence the magnitude of their effects as well. Earlier this year, the major labels sued two genAI music platforms, Suno and Udio, accusing them of copyright infringement on a mass scale; in recently filed court documents, the companies said their activities were lawful under the doctrine of fair use, and that the major labels were just trying to eliminate “a threat to their market share.” Similar suits against AI companies have also been filed in other creative industries.
When McElheran surveyed manufacturing firms, however, few cited regulatory uncertainty as a barrier to AI use. She points out that “they may have had bigger fish to fry, like no use case.” A U.S. Census Bureau survey of businesses published in March found that 84.2% of respondents hadn’t used AI in the previous two weeks, and 80.9% of the firms that weren’t planning to implement AI in the next six months believe it “is not applicable to this business.”
Tracklib’s survey found something similar to McElheran’s. Only around 10% of respondents said concern about copyright was a reason they wouldn’t use AI tools. Instead, Tracklib’s results indicated that producers’ most common objections to using AI were moral, not legal — explanations like, “I want my art to be my own.”
“Generative AI comes up against this wall where it’s so easy, it’s just a push of a button,” Kahlert says. “It’s a fun gimmick, but there’s no real investment on the part of the user, so there’s not much value that they actually place in the outcome.”
In contrast, MIDiA’s survey found that respondents were interested in AI tech that can help them modify tracks by adjusting tempo — a popular TikTok alteration that can be done without AI — and customizing song lyrics. This interest was especially pronounced among younger music fans: Over a third of 20-to-24-year-olds were intrigued by AI tools that could help them play with tempo, and around 20% of that age group liked the idea of being able to personalize song lyrics.
Antony Demekhin, co-founder of the AI music company Tuney, sees a market for “creative tools” that enable “making, editing, or remixing beats and songs without using a complicated DAW, while still giving users a feeling of ownership over the output.”
“Up until recently,” he adds, “the addressable market for those kinds of tools has been small because the number of producers that use professional production software has been limited, so early-stage tech investors don’t frequently back stuff like that.”
Demekhin launched Tuney in 2020, well before the general public was thinking about products like ChatGPT. In the wake of that platform’s explosion, “Investors started throwing money around,” he recalls. At the same time, “nobody knew what questions to ask. What is this trained on? Are you exposed to legal risk? How easy would it be for Meta to replicate this and then make it available on Instagram?”
Today, investors are far better informed, and conversations with them sound very different, Demekhin says. “Cooler heads are prevailing,” he continues. “Now there’s going to be a whole wave of companies that make more sense because people have figured out where these technologies can be useful — and where they can’t.”
AI music firms Suno and Udio are firing back with their first responses to sweeping lawsuits filed by the major record labels, arguing that they were free to use copyrighted songs to train their models and claiming the music industry is abusing intellectual property to crush competition.
In legal filings on Thursday, the two firms admitted to using proprietary materials to create their artificial intelligence, with Suno saying it was “no secret” that the company had ingested “essentially all music files of reasonable quality that are accessible on the open Internet.”
But both companies said that such use was clearly lawful under copyright’s fair use doctrine, which allows for the reuse of existing materials to create new works.
“What Udio has done — use existing sound recordings as data to mine and analyze for the purpose of identifying patterns in the sounds of various musical styles, all to enable people to make their own new creations — is a quintessential ‘fair use,’” Udio wrote in its filing. “Plaintiffs’ contrary vision is fundamentally inconsistent with the law and its underlying values.”
The filings, lodged by the same law firm (Latham & Watkins) that reps both companies, go beyond the normal “answer” to a lawsuit — typically a sparse document that simply denies each claim. Instead, Suno and Udio went on offense, with extended introductions that attempt to frame the narrative of a looming legal battle that could take years to resolve.
In doing so, they took square aim at the major labels (Universal Music Group, Warner Music Group and Sony Music Entertainment) that filed the case in June — a group that they said “dominates the music industry” and is now abusing copyright law to maintain that power.
“What the major record labels really don’t want is competition,” Suno wrote in its filing. “Where Suno sees musicians, teachers and everyday people using a new tool to create original music, the labels see a threat to their market share.”
Suno and Udio have quickly become two of the most important players in the emerging field of AI-generated music. Udio has already produced what could be considered an AI-generated hit with “BBL Drizzy,” a parody track popularized with a remix by super-producer Metro Boomin and later sampled by Drake himself. And as of May, Suno had raised a total of $125 million in funding to create what Rolling Stone called a “ChatGPT for music.”
In June, the major labels sued both companies, claiming they had infringed copyrighted music on an “unimaginable scale” to train their models. The lawsuits accused the two firms of “trampling the rights of copyright owners” as part of a “mad dash to become the dominant AI music generation service.”
The case followed similar lawsuits filed by book authors, visual artists, newspaper publishers and other creative industries, which collectively pose what could be a trillion-dollar legal question: Is it infringement to use vast troves of proprietary works to build an AI model that spits out new creations? Or is it just a form of legal fair use, transforming all those old works into something entirely new?
In Thursday’s response, Suno and Udio argued unequivocally that it was the latter. They likened their machines to a “human musician” who had played earlier songs to learn the “building blocks of music” — and then used what they had learned to create entirely new works in existing styles.
“Those genres and styles — the recognizable sounds of opera, or jazz, or rap music — are not something that anyone owns,” Suno wrote in its filing. “Our intellectual property laws have always been carefully calibrated to avoid allowing anyone to monopolize a form of artistic expression, whether a sonnet or a pop song.”
The lawsuits from the labels, Suno and Udio say, are thus an abuse of copyright law, aimed at claiming improper ownership over “entire genres of music.” They called the litigation an “attempt to misuse IP rights to shield incumbents from competition and reduce the universe of people who are equipped to create new expression.”
Both filings hint at how Suno and Udio will make their fair use arguments. The two companies say the cases will not really turn on the “inputs” — the millions of songs used to train the models — but rather on the “outputs,” or the new songs that are created. While the labels are claiming that the inputs were illegally copied, the AI firms say the music companies “explicitly disavow” that any output was a copycat.
“That concession will ultimately prove fatal to plaintiffs’ claims,” Suno wrote in its filing. “It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product.”
A spokeswoman and an attorney for the labels did not immediately return a request for comment.
A bipartisan group of U.S. senators introduced the highly anticipated NO FAKES Act on Wednesday (July 31), which aims to protect artists and others from AI deepfakes and other nonconsensual replicas of their voices, images and likenesses.
If passed, the legislation would for the first time create federal intellectual property protection for the so-called right of publicity, which restricts how someone’s name, image, likeness and voice can be used without consent. Currently, such rights are protected only at the state level, leading to a patchwork of different rules across the country.
Unlike many existing state-law systems, the federal right that the NO FAKES Act would create would not expire at death and could be controlled by a person’s heirs for 70 years after their passing. To balance personal publicity rights and the First Amendment right to free speech, the NO FAKES Act also includes specific carveouts for replicas used in news coverage, parody, historical works or criticism.
Nonconsensual AI deepfakes are of great concern to the music business, given that so many of its biggest artists have already been exploited this way. Taylor Swift, for example, was the subject of a number of sexually explicit AI deepfakes; the late Tupac Shakur’s voice was recently deepfaked by fellow rapper Drake in his Kendrick Lamar diss track “Taylor Made Freestyle,” which was posted, and then deleted, on social media; and Drake and The Weeknd had their own voices cloned by AI without their permission in the viral TikTok track “Heart On My Sleeve.”
The NO FAKES Act was first released as a draft bill by the same group of lawmakers — Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) — last October, and its formal introduction to the U.S. Senate builds on the same principles laid out in the No AI FRAUD Act, a similar bill that was introduced in the U.S. House of Representatives earlier this year.
While the music industry is overwhelmingly supportive of the creation of a federal right of publicity, there are some detractors in other creative fields, including film and TV, that pose a threat to the passage of bills like the NO FAKES Act. In a speech during Grammy week earlier this year, National Music Publishers’ Association (NMPA) president/CEO David Israelite explained that “[a federal right of publicity] does not have a good chance… Within the copyright community we don’t agree. … Guess who is bigger than music? Film and TV.” Still, the introduction of the NO FAKES Act and the No AI FRAUD Act proves there is bicameral and bipartisan support for the idea.
Earlier this year, proponents for strengthened publicity rights laws celebrated a win on the state level in their fight to regulate AI deepfakes with the passage of the ELVIS Act in Tennessee. The landmark law greatly expanded protections for artists and others in the state, and explicitly protected voices for the first time.
Though it was celebrated by a who’s who of the music business — from the Recording Academy, Recording Industry Association of America (RIAA), Human Artistry Campaign, NMPA and more — the act also drew a few skeptics, like professor Jennifer Rothman of the University of Pennsylvania’s law school, who raised concerns that the law could be an “overreaction” that might expose tribute bands, interpolations or the sharing of unauthorized photos of celebrities to lawsuits.
“The Human Artistry Campaign applauds Senators Coons, Blackburn, Klobuchar and Tillis for crafting strong legislation establishing a fundamental right putting every American in control of their own voices and faces against a new onslaught of highly realistic voice clones and deepfakes,” Dr. Moiya McTier, senior advisor of the Human Artistry Campaign — a global initiative for responsible AI use, supported by 185 organizations in the music business and beyond — says of the bill. “The NO FAKES Act will help protect people, culture and art — with clear protections and exceptions for the public interest and free speech. We urge the full Senate to prioritize and pass this vital, bipartisan legislation. The abusive deepfake ecosystem online destroys more lives and generates more victims every day — Americans need these protections now.”
The introduction of the bill is also celebrated by American Federation of Musicians (AFM), ASCAP, Artist Rights Alliance (ARA), American Association of Independent Music (A2IM), Association of American Publishers, Black Music Action Coalition (BMAC), BMI, Fan Alliance, The Azoff Company co-president Susan Genco, Nashville Songwriters Association International (NSAI), National Association of Voice Actors (NAVA), National Independent Talent Organization, National Music Publishers’ Association (NMPA), Organización de Voces Unidas (OVU), Production Music Association, Recording Academy, Recording Industry Association of America (RIAA), SAG-AFTRA, SESAC Music Group, Songwriters of North America (SoNA), SoundExchange, United Talent Agency (UTA) and WME.
The lawsuits filed by the major labels against the AI companies Suno and Udio could be the most important cases to the music business since the Supreme Court Grokster decision, as I explained in last week’s Follow the Money column. The outcomes are hard to predict, however, because the central issue will be “fair use,” a U.S. legal doctrine shaped by judicial decisions that involves famously — sometimes notoriously — nuanced determinations about art and appropriation. And although most creators focus more on issues around generative AI “outputs” — music they’ll have to compete with or songs that might sound similar to theirs — these cases involve the legality of copying music for the purposes of training AI.
Neither Suno nor Udio has said how it trained its AI programs, but both have essentially said that copying music in order to do so would qualify as fair use. Determining that could touch on the development of Google Books, the compatibility of the Android operating system, and even a Supreme Court case that involves Prince, Andy Warhol and Vanity Fair. It’s the kind of fair use case that once inspired a judge to call copyright “the metaphysics of the law.” So let’s get metaphysical!
Fair use essentially provides exceptions to copyright, usually for the purpose of free expression, allowing for quotation (as in book or film reviews) and parody (to comment on art), among other things. (The iconic example in music is the Supreme Court case over 2 Live Crew’s parody of Roy Orbison’s “Oh, Pretty Woman.”) These determinations involve a four-factor test that weighs “the purpose and character of the use”; “the nature of the copyrighted work”; how much and how important a part of the work is used; and the effect of the use upon the potential market for or value of the copyrighted work. Over the last decade or so, though, the concept of “transformative use,” derived from the first factor, expanded in a way that allowed the development of Google Books (the copying of books to create a database and excerpts) and the use of some Oracle API code in Google’s Android system — which could arguably be said to go beyond the origins of the concept.
Could copying music for the purposes of machine learning qualify as well?
In a paper on the topic, “Fair Use in the U.S. Redux: Reformed or Still Deformed,” the influential Columbia Law School professor Jane Ginsburg suggests that the influence of the transformative use argument might have reached its peak. (I am oversimplifying a very smart paper, and if you are interested in this topic, you should read it.)
The Supreme Court decision on the Google-Oracle case involved part of a computer program, far from the creative “core” of copyright, and music recordings would presumably be judged differently. The Supreme Court also made a very different decision last year in a case that pitted the Andy Warhol Foundation for the Visual Arts against prominent rock photographer Lynn Goldsmith. The case involved an Andy Warhol silkscreen of Prince, based on a Goldsmith photograph that the magazine Vanity Fair had licensed for Warhol to use. Warhol used the photo for an entire series — which Goldsmith only found out about when the magazine used the silkscreen image again for a commemorative issue after Prince died.
On the surface, this seemed to cast the Supreme Court justices as modern art critics, in a position to judge all appropriation art as infringing. But the case wasn’t about whether Warhol’s silkscreen inherently infringed Goldsmith’s copyright; it was about whether it infringed it when licensed for use by a magazine, in a way that could compete with the original photo. There was a limit to transformative use, after all. “The same copying,” the court decided, “may be fair when used for one purpose but not another.”
So it might constitute fair use for Google to copy entire books for the purpose of creating a searchable database about those books with excerpts from them, as it did for Google Books — but not necessarily for Suno or Udio to copy terabytes of recordings to spur the creation of new works to compete with them, especially if it results in similar works. In the first case, it’s hard to find real economic harm — there will never be much of a market for licensing book databases — but there’s already a nascent market for licensing music to train AI programs. And, unlike Google Books, the AI programs are designed to make music to compete with the recordings used to train them. Obviously, licensing music to train an AI program is what we might call a secondary use — but so is turning a book into a film, and no one doubts they need permission for that.
All of this might seem like I think the major labels will win their cases, but that’s a tough call — the truth is that I just don’t think they’ll lose. And there’s a lot of space between victory and defeat here. If one of these cases ends up going to the Supreme Court — and if one of these doesn’t, another case about AI training surely will within the next few years — the decision might be more limited than either side is looking for, since the court has tended to step lightly around technology issues.
It’s also possible that the decision could depend on whether the outputs that result from all of this training are similar enough to copyrighted works to qualify, or plausibly qualify, as infringing. Both label lawsuits are full of such examples, presumably because that could make a difference. These cases are about the legality of AI inputs, but a fair use determination on that issue could easily involve whether those inputs lead to infringing output.
In the end, Ginsburg suggests, “system designers may need to disable features that would allow users to create recognizable copies.” Except that — let’s face it — isn’t that really part of the fun? Sure, AI music creation might eventually grow to maturity as some kind of art form — it already has enormous practical value for songwriters — but for ordinary consumers it’s still hard to beat Frank Sinatra singing Lil Jon’s “Get Low.” Of course, that could put a significant burden on AI companies — with severe consequences for crossing a line that won’t always be obvious. It might be easier to just license the content they need. The next questions, which will be the subject of future columns, involve exactly what they need to license and how they might do that, since it won’t be easy to get all the rights they need — or in some cases even agree on who controls them.
AI-focused music production, distribution and education platform LANDR has devised a new way for musicians to capitalize on the incoming AI age with consent and compensation in mind. With its new Fair Trade AI program, any musician who wishes to join can be part of this growing pool of songs that will be used to […]
Disney Music Group and AudioShake, an AI stem separation company, are teaming up. As part of their partnership, AudioShake will help Disney separate the individual instrument tracks (“stems”) for some of its classic back catalog and provide AI lyric transcription.
According to a press release, Disney says that it hopes this will “unlock new listening and fan engagement experiences” for the legendary catalog, which includes everything from the earliest recordings of Steamboat Willie (which is now in the public domain), to Cinderella, to The Lion King, to contemporary hits like High School Musical: The Musical: The Series.
Given that so many of Disney’s greatest hits were made decades ago with now-outdated recording technology, the new partnership will allow the company to use that music in ways that weren’t previously possible. Stem separation, for instance, can help with remixing and mastering old audio from classic Disney films. It could also allow Disney to isolate the vocals or the instrumentals on old hits.
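To make the process concrete, here is a minimal sketch of what stem separation looks like using Deezer’s open-source Spleeter library. This is an illustration only, not AudioShake’s proprietary system, and the file names are hypothetical.

```python
# Minimal stem-separation sketch using the open-source Spleeter library
# (pip install spleeter). This is NOT AudioShake's proprietary system;
# the file paths below are hypothetical examples.
from spleeter.separator import Separator

# Load Spleeter's pretrained 4-stem model: vocals, drums, bass and "other".
separator = Separator("spleeter:4stems")

# Split a (hypothetical) vintage recording into individual stems.
# Spleeter writes vocals.wav, drums.wav, bass.wav and other.wav into a
# subfolder of stems_out/ named after the input file.
separator.separate_to_file("when_you_wish_upon_a_star.wav", "stems_out/")
```

Once the vocals and instrumentals exist as separate files, they can be remixed, remastered or licensed for sync independently of the original mono or stereo master.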
Disney’s partnership with AudioShake began when the tech start-up was selected for the 2024 Disney Accelerator, a business development program designed to accelerate the growth of new companies around the world. Other participants included AI voice company Eleven Labs and autonomous vehicle company Nuro.
At the accelerator’s 2024 Demo Day on May 23, AudioShake gave a hint of what could come from its partnership with Disney when CEO Jessica Powell showed how stem separation could be used to fix muffled dialogue and music in the opening scene of Peter Pan.
Disney explained at the Demo Day that many of their earlier recordings are missing their original stems and that has limited their ability to use older hits in sync licensing, remastering, and emerging formats like immersive audio and lyric videos.
“We were deeply impressed by AudioShake’s sound separation technology, and were among its early adopters,” says David Abdo, SVP and General Manager of Disney Music Group. “We’re excited to expand our existing stem separation work as well as integrate AudioShake’s lyric transcription system. AudioShake is a great example of cutting-edge technology that can benefit our artists and songwriters, and the team at AudioShake have been fantastic partners.”
“Stems and lyrics are crucial assets that labels and artists can use to open up recordings to new music experiences,” says Powell. “We’re thrilled to deepen our partnership with Disney Music Group, and honored to work with their extensive and iconic catalog.”

SEVENTEEN singer/producer WOOZI has responded to a British news feature claiming that the K-pop group is using artificial intelligence when recording its music. WOOZI, who also co-writes the group’s songs, reportedly had thoughts after a BBC News story posted last week claimed that SEVENTEEN’s most recent single, “Maestro,” was an example of the group’s dive […]
For a little over a year, since the Fake Drake track bum rushed the music business, executives have been debating whether generative artificial intelligence is a threat or an opportunity. The answer is both — creators are already using AI tools and they already compete with AI music. But the future will be shaped by the lawsuits the major labels filed two weeks ago against Suno and Udio for copyright infringement for allegedly using the labels’ music to train their AI programs.
Like most debates about technology, this one will be resolved in real time — Internet start-ups tend to believe that it’s easier to ask forgiveness than to get permission. Although neither Suno nor Udio has said that it trained its program on major label music, the label lawsuits point out that both companies have said that using copyrighted works in this manner would be “fair use,” a defense for otherwise infringing conduct. They’re not admitting they did it — just defending themselves in case they did.
Whether this qualifies as fair use is well over a million-dollar question, since statutory damages can reach $150,000 per work infringed. The stakes are even higher than that, though. If ingesting copyrighted works on a mass scale to train an AI is allowed under fair use, the music business could have a hard time limiting, controlling, or making money on this technology.
If it’s not, the labels will gain at least some control over these companies, and perhaps the entire nascent sector. There are other ways to limit AI, from legislation to likeness rights, but only copyright law has the kind of statutory damages that offer real leverage.
Although neither Suno nor Udio has issued a legal response, Suno CEO Mikey Shulman released a statement that said the labels had “reverted to their old lawyer-led playbook.” The obvious reference is Napster, since most people believe that in the late ‘90s the music business saw the future and decided to sue it.
That’s not exactly what happened. The major labels knew that the future was digital — they lobbied for the 1995 Digital Performance Right in Sound Recordings Act, which ensured that streaming services had to pay to play recordings in the U.S., even though traditional radio stations don’t. They just didn’t want peer-to-peer services to distribute their content for nothing — or to have to negotiate with them while they were doing so. In July 2000, three months after the major labels sued Napster, leading executives sat down with the company to try to figure out a deal, but they couldn’t agree; the labels negotiated as though Napster needed a license and Napster negotiated as though it didn’t. In the end, after a decade of lawsuits and lost business, creators and rightsholders established their right to be paid for online distribution and the music sector began recovering.
And here we are again: History isn’t repeating itself, but it seems to be rhyming. If the labels negotiated with Suno and Udio now, how much would those companies be willing to pay for rights they may or may not need? It’s easy to make fun of either side, but it’s hard to know how much to charge for rights, or pay for them, before you even know if you need them.
These lawsuits aren’t about whether creators and rightsholders should embrace or avoid AI — it’s coming, for good and ill. The question, in modern terminology, is whether the embrace will be consensual, and under what terms. Most creators and rightsholders want to do business with AI companies, as long as that actually means business — negotiating deals in something that resembles a free market.
What they’re afraid of is having technology companies build empires on their work without paying to use it — especially to build a product that generates music to compete with them. That depends on the outcome of these lawsuits. Because if you don’t have the right to say no, you can’t really get to a fair yes.
A couple of weeks ago, at a culture conference organized by the German recorded music trade organization, I heard German Justice Minister Marco Buschmann put this as well as anyone I’ve ever heard. “The moment people have the opportunity to say ‘No’ and to enforce this ‘No,’ they gain a legal negotiating position,” he said in a speech. (Buschmann also makes electronic music, as it happens.) In the European Union, rightsholders can opt out of AI ingestion, which is far from ideal but better than nothing.
What happens in the U.S. — which often shapes the global media business — might hinge on the results of these lawsuits. There are two dozen copyright lawsuits about AI, but these look to be among the most important. Some of the others are mired in jurisdictional maneuvering, while others simply aren’t as strong: a lawsuit filed by The New York Times could involve a different fair use determination if the ingested articles are used as sources but not to generate new work. These cases are straightforward, but they won’t move fast: It’s easy to imagine the issue going to the Supreme Court.
Despite the high stakes — and what will almost certainly be a rap beef’s worth of sniping back and forth — determinations of fair use involve a considerable amount of nuance. Fair use makes it legal in some cases to excerpt or even use all of a copyrighted work without permission, usually for the purposes of commentary. (An iconic Supreme Court case involved 2 Live Crew’s parody of the Roy Orbison song “Oh, Pretty Woman.”) This is far from that, but Suno and Udio will presumably argue that their actions qualify as “transformative use” in the way the Google Books project did. Next week I’ll write about the arguments we can expect to hear, the decisions we could see, and what could happen while we wait for them.
Warner Music Group (WMG) sent letters to tech companies this week instructing them not to use the label’s music to train artificial intelligence technology without permission. Sony Music sent out similar letters to over 700 companies in May.
“It is imperative that all uses and implementations of machine learning and AI technologies respect the rights of all those involved in the creation, marketing, promotion, and distribution of music,” Warner’s notice reads.
It continues, “all parties must obtain an express license from WMG to use… any creative works owned or controlled by WMG or to link to or ingest such creative works in connection with the creation of datasets, as inputs for any machine learning or AI technologies, or to train or develop any machine learning or AI technologies.”
The notices from Sony and Warner come in the wake of the AI Act, legislation that was passed in the European Union in May. “Any use of copyright protected content requires the authorization of the rightsholder concerned unless relevant copyright exceptions and limitations apply,” the act notes. “Rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research.”
If companies take this action, then “providers of general-purpose AI models need to obtain an authorization from rightsholders if they want to carry out text and data mining over such works.”
The Cold War between the music industry and much of the AI world has been heating up in recent months. Labels are adamant that AI companies should license their music if they want to use those massive catalogs of recordings to develop song generation technology.
Most AI companies, however, aren’t interested in paying. They often argue that their activities fall under “fair use” — the U.S. legal doctrine that allows for the unlicensed use of copyrighted works in certain situations.
In June, the three major labels sued two AI music companies, Suno and Udio, accusing them both of “willful copyright infringement on an almost unimaginable scale.” “These lawsuits are necessary to reinforce the most basic rules of the road for the responsible, ethical, and lawful development of generative AI systems and to bring Suno’s and Udio’s blatant infringement to an end,” RIAA Chief Legal Officer Ken Doroshow said in a statement.
In a response to the suits, Suno CEO Mikey Shulman said his company’s tech is “designed to generate completely new outputs, not to memorize and regurgitate pre-existing content.” Udio said it “stand[s] behind our technology.”
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Pharrell Williams and Louis Vuitton face a trademark lawsuit over “Pocket Socks”; Diplo is hit with a lawsuit claiming he distributed “revenge porn”; the Village People move forward with a lawsuit against Disney; a longtime attorney repping Britney Spears moves on; and much more.
Top stories this week…
SOCKED WITH A LAWSUIT – Pharrell Williams and Louis Vuitton were hit with a trademark lawsuit over their launch of a high-end line of “Pocket Socks,” a literal sock-with-a-pocket that debuted at Paris Fashion Week last year and sells for the whopping price of $530. The case was filed by a California company called Pocket Socks Inc. that says it’s been using that same name for more than a decade on a similar product.
AI FIRMS FIRE BACK – Suno and Udio, the two AI music startups sued by the major record labels last week over allegations that they had stolen copyrighted works on a mass scale to create their models, fired back with statements in their defense. Suno called its tech “transformative” and promised that it would only generate “completely new outputs”; Udio said it was “completely uninterested in reproducing content in our training set.”
REVENGE PORN CLAIMS – Diplo was sued by an unnamed former romantic partner who accused him of violating “revenge porn” laws by sharing sexually explicit videos and images of her without permission. The NYPD confirmed to Billboard that a criminal investigation into the alleged incident was also underway.
DISCO v. DISNEY – A California judge refused to dismiss a lawsuit filed by the Village People that claims the Walt Disney Co. has blackballed the legendary disco band from performing at Disney World. Disney had invoked California’s anti-SLAPP law and argued it had a free speech right to book whatever bands it chooses, but the judge ruled that the company had failed to show the issue was linked to the kind of “public conversation” that’s protected under the statute.
WRIT ME BABY ONE MORE TIME – More than two years after Mathew Rosengart helped Britney Spears escape the longstanding legal conservatorship imposed by her father, the powerhouse litigator is no longer representing the pop star. In a statement, the Greenberg Traurig attorney said he was shifting his focus to other clients: “It’s been an honor to serve as Britney’s litigator and primarily to work with her to achieve her goals.”
PHONY FEES? – SiriusXM was hit with a class action lawsuit that claims the company has been earning billions in revenue by tacking a shady “U.S. Music Royalty Fee” onto consumers’ bills. The fee — allegedly 21.4% of the actual advertised price — represents a “deceptive pricing scheme whereby SiriusXM falsely advertises its music plans at lower prices than it actually charges,” the suit claims.
DIVORCE DRAMA – Amid an increasingly ugly divorce case, Billy Ray Cyrus filed a new response claiming that he had been abused physically, verbally and emotionally by his soon-to-be-ex-wife, Firerose. The filing actually came in response to allegations that it was Cyrus who had subjected Firerose to “psychological abuse” during their short-lived marriage.
UK ROYALTIES LAWSUIT – A group of British musicians filed a joint lawsuit against U.K. collecting society PRS, accusing the organization of a “lack of transparency” and “unreasonable” terms in how it licenses and administers live performance rights. The case, filed at London’s High Court, was brought by King Crimson’s Robert Fripp as well as the rock band The Jesus and Mary Chain and numerous other artists.