This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: A federal judge rules that works created by A.I. are not covered by copyrights; an appeals court revives abuse lawsuits against Michael Jackson’s companies; Smokey Robinson beats a lawsuit claiming he owed $1 million to a former manager; SoundExchange sues SiriusXM for “gaming the system” on royalties; and much more.
Want to get The Legal Beat newsletter in your email inbox every Tuesday? Subscribe here for free.
No Copyrights For A.I. Works – But Tougher Questions Loom
The rise of artificial intelligence will pose many difficult legal questions for the music business, likely requiring some combination of litigation, regulation and legislation before all the dust settles. But on at least one A.I. issue, a federal judge just gave us a clean, straightforward answer.
In a decision issued Friday, U.S. District Judge Beryl Howell ruled that American copyright law does not cover works created entirely by artificial intelligence – full stop. That’s because, the judge said, the essential purpose of copyright law is to encourage human beings to create new works.
“Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them,” the judge wrote.
Though novel, the decision was not entirely surprising. Federal courts have long strictly limited copyrights to content created by humans, rejecting protection for works created by animals, by forces of nature, and even for those claimed to have been authored by divine spirits, like religious texts.
But the ruling was nonetheless important because it came amid growing interest in the future role that could be played in the creation of music and other content by so-called generative AI tools, similar to the much-discussed ChatGPT. The issue of copyright protection is crucial to the future role of AI, since works that are not protected would be difficult to monetize.
Trickier legal dilemmas lie ahead. What if an AI-powered tool is used in the studio to create parts of a song, but human artists then add other elements? How much human direction on the use of AI tools is needed for the output to count as “human authorship”? How can a court filter out, in practical terms, elements authored by computers?
On those questions, the current answers are much squishier – something that Judge Howell hinted at in her decision: “Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works. The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions.”
“This case, however, is not nearly so complex.”
Other top stories this week…
MJ ABUSE CASES REVIVED – A California appeals court revived lawsuits filed by two men who claim Michael Jackson sexually abused them as children, ruling that they can pursue negligence claims against his companies. A lower court dismissed the cases on the grounds that staffers had no power to control Jackson, who was the sole owner of the companies. But the appeals court called such a ruling “perverse” and overturned it: “A corporation that facilitates the sexual abuse of children by one of its employees is not excused from an affirmative duty to protect those children merely because it is solely owned by the perpetrator.”
SMOKEY ROBINSON TRIAL VICTORY – The legendary Motown singer won a jury trial against a former manager who claimed he was owed nearly $1 million in touring profits, capping off more than six years of litigation over the soured partnership. Robinson himself took the stand during the case, telling jurors that the deal was never intended to cover concert revenue.
“GAMING THE SYSTEM” – SoundExchange filed a lawsuit against SiriusXM claiming the satellite radio giant is using bookkeeping trickery to withhold more than $150 million in royalties owed to artists. The case centers on allegations that SiriusXM is manipulating how it bundles satellite services with web streaming services to “grossly underpay the royalties it owes.”
TIKTOK JUDGE RESPONDS – A judge in New Jersey defended himself against misconduct allegations over TikTok videos in which he lip-synced to Rihanna’s “Jump” and other popular songs, admitting to “poor judgment” and “vulgar” lyrics but saying he should receive only a light reprimand for what he intended as “silly, harmless, and innocent fun.”
LAWSUIT OVER TAKEOFF SHOOTING – Joshua Washington, an assistant to the rapper Quavo, filed a lawsuit over last year’s shooting in Houston that killed fellow Migos rapper Takeoff. He claims the injuries he sustained during the attack are the fault of the bowling alley where the shooting took place, which he says failed to provide adequate security, screening or emergency assistance.
GUNPLAY FACING FELONY COUNTS – The rapper Gunplay was arrested in Miami and hit with three felony charges over an alleged domestic violence incident in which he is reportedly accused of drunkenly pointing an AK-47 assault rifle at his wife and child during an argument.
FRENCH DIDN’T CLEAR SAMPLE? – The rapper French Montana was hit with a copyright lawsuit claiming his 2022 song “Blue Chills” features an unlicensed sample from singer-songwriter Skylar Gudasz. She claims he tentatively agreed to pay her for the clip – both an upfront payment and a 50 percent share of the publishing copyright – but then never actually signed the deal.
YOUTUBE FRAUDSTER SENTENCED – Webster “Yenddi” Batista Fernandez, one of the leaders of the largest-known YouTube music royalty scam in history, was sentenced to nearly four years in prison after pleading guilty to one count of wire fraud and one count of conspiracy. Under the name MediaMuv, Batista and an accomplice fraudulently collected roughly $23 million in royalties from over 50,000 songs by Latin musicians ranging from small artists to global stars like Daddy Yankee.
A federal judge ruled Friday (Aug. 18) that U.S. copyright law does not cover creative works created by artificial intelligence, weighing in on an issue that’s being closely watched by the music industry.
In a 15-page written opinion, Judge Beryl Howell upheld a decision by the U.S. Copyright Office to deny a copyright registration to computer scientist Stephen Thaler for an image created solely by an AI model. The judge cited decades of legal precedent that such protection is only afforded to works created by humans.
“The act of human creation — and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts — was … central to American copyright from its very inception,” the judge wrote. “Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.”
In a statement Friday, Thaler’s attorney Ryan Abbott said he and his client “disagree with the district court’s judgment” and vowed to appeal: “In our view, copyright law is clear that the public is the main beneficiary of the law and this is best achieved by promoting the generation and dissemination of new works, regardless of how they are created.”
Though novel, the decision was not entirely surprising. Federal courts have long strictly limited copyrights to content created by humans, rejecting protection for works created by animals, by forces of nature, and even for those claimed to have been authored by divine spirits, like religious texts.
But the ruling was nonetheless important because it came amid growing interest in the future role that could be played in the creation of music and other content by so-called generative AI tools, similar to the much-discussed ChatGPT. The question of copyright protection is crucial to the future role of AI since works that are not protected would be difficult to monetize.
“Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works,” the judge wrote. “The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions.”
The current case, however — dealing with a work that was admittedly created solely by a computer — “is not nearly so complex,” the judge wrote. Given the lack of any human input at all, she said, Thaler’s case presented a “clear and straightforward answer.”
Though Friday’s ruling came with a clear answer, more challenging legal dilemmas will come in the future from more subtle uses of AI. What if an AI-powered tool is used in the studio to create parts of a song, but human artists add other elements to the final product? How much human direction on the use of those tools is needed for the output to count as “human authorship”?
Earlier this year, a report by the U.S. Copyright Office said that AI-assisted works could still be copyrighted, so long as the ultimate author remains a human being. The report avoided offering easy answers, saying that protection for AI works would be “necessarily a case-by-case inquiry,” and that the final outcome would always depend on individual circumstances.
Futureverse — a multi-hyphenate AI company — published a new research paper on Thursday (June 9) to introduce its forthcoming text-to-music generator. Called Jen-1, the unreleased model is designed to improve upon issues found in currently available music generators like Google’s MusicLM, providing higher fidelity audio and longer, more complex musical works than what is on the market today.
“Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool,” says Shara Senderoff, co-founder of Futureverse and co-founder of Raised in Space, about the model in an exclusive first look with Billboard. Expected to be released in early 2024, Jen can generate songs up to three minutes long and can also help producers finish half-written songs by offering ‘continuation’ and ‘in-painting.’
‘Continuation’ allows a music maker to upload an incomplete song to Jen and direct the model to generate a plausible way to finish it, while ‘in-painting’ refers to a process by which the model fills in sections of a song that are damaged or missing in the middle of the work. To Aaron McDonald, the company’s co-founder, Jen’s role is to “extend creativity” of human artists.
When asked why Jen is a necessary invention at a time when producers, songwriters and artists are more plentiful than ever, McDonald replied, “I think musicians throughout the ages have always embraced new technology that expands the way they can create music,” pointing to electronic music as one example of how new tools shape musical evolution. “To imply that music doesn’t need [any new] technology to expand and become better now is kind of silly… and arbitrary.”
He also sees this as a way to “democratize” the “high end of music [quality],” which he says is now only accessible to musicians with the means to record at a well-equipped studio with trained technicians. With Jen, McDonald and Senderoff hope to satisfy the interests of professional musicians and to encourage newcomers to dabble in songwriting, perhaps for the first time. The two co-founders imagine a world in which everyday people can create music, and have nicknamed the output of this type of user ‘AIGC,’ a twist on the term User Generated Content (or ‘UGC’).
Futureverse was formed piecemeal over the last 18 months, merging eleven pre-existing AI and metaverse start-ups into one company that makes a number of creative AI models, including those that produce animations, music, sound effects and more. To power its inventions, the company employs the AI protocol from Altered State Machine, a company co-founded by McDonald and included in the merger.
Senderoff says Jen will also be a superior product because Futureverse created it with the input of some of music’s top business executives and creators, unlike its competitors. She does not reveal who those industry partners are or how Jen will be a more ethical and cooperative model for musicians, but she says an announcement providing more information will be released soon.
Despite its proposed upgrades, Futureverse’s Jen could face significant challenges from the other text-to-music generators named in the new research paper, given that some were made by the world’s most established tech giants and have already hit the market. But McDonald is unperturbed. “That forces us to think differently. We don’t have the resources that they do, but we started our process with that in mind. I think we can beat them with a different approach: the key insight is working with the music industry as a way to produce a better product.”
Ed Sheeran is hoping we heed Hollywood’s warnings against artificial intelligence as it becomes more prominent in mainstream media, music and everyday life. The “Shape of You” singer shared his thoughts on the technology while chatting with Audacy Live before his private show at the Hard Rock Hotel in New York recently. “What I don’t […]
“Fake Drake” and similar controversies have gotten most of the attention, but not all uses of artificial intelligence in music are cause for concern.
President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.
Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don’t detail who will audit the technology or hold the companies accountable.
“We must be clear eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.
“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”
A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.
The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.
That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.
The companies have also committed to methods for reporting vulnerabilities to their systems and to using digital watermarking to help distinguish real images from AI-generated ones known as deepfakes.
They will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.
The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology. Company executives planned to gather with Biden at the White House on Friday as they pledged to follow the standards.
Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.
“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”
Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He said in a statement that he will work closely with the Biden administration “and our bipartisan colleagues” to build upon the pledges made Friday.
A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”
But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.
The White House pledge notes that it mostly only applies to models that “are overall more powerful than the current industry frontier,” set by currently available models such as OpenAI’s GPT-4 and image generator DALL-E 2 and similar releases from Anthropic, Google and Amazon.
A number of countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.
U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.
Guterres also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House said Friday that it has already consulted on the voluntary commitments with a number of countries.
The pledge is heavily focused on safety risks but doesn’t address other worries about the latest AI technology, including the effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about the writings, art and other human handiwork being used to teach AI systems how to produce human-like content.
Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.
LONDON — When the European Union announced plans to regulate artificial intelligence in 2021, legislators started focusing on “high risk” systems that could threaten human rights, such as biometric surveillance and predictive policing. Amid increasing concern among artists and rights holders about the potential impact of AI on the creative sector, however, EU legislators are also now looking at the intersection of this new technology and copyright.
The EU’s Artificial Intelligence Act, which is now being negotiated among politicians in different branches of government, is the first comprehensive legislation in the world to regulate AI. In addition to banning “intrusive and discriminatory uses” of the technology, the current version of the legislation addresses generative AI, mandating that companies disclose content that is created by AI to differentiate it from works authored by humans. Other provisions in the law would require companies that use generative AI to provide details of copyrighted works, including music, on which they trained their systems. (The AI Act is a regulation, so it would pass directly into law in all 27 member states.)
Music executives began paying closer attention to the legislation after the November launch of ChatGPT. In April, around the time that “Heart on My Sleeve,” a track that featured AI-powered imitations of vocals by Drake and The Weeknd, drove home the issue posed by AI, industry lobbyists convinced lawmakers to add the transparency provisions.
So far, big technology companies, including Alphabet, Meta and Microsoft, have publicly stated that they, too, support AI regulation, at least in the abstract. Behind the scenes, however, multiple music executives tell Billboard that technology lobbyists are trying to weaken these transparency provisions by arguing that such obligations could put European AI developers at a competitive disadvantage.
“They want codes of conduct” — as opposed to laws — “and very low forms of regulation,” says John Phelan, director general of international music publishing trade association ICMP.
Another argument is that summarizing training data “would basically come down to providing a summary of half, or even the entire, internet,” says Boniface de Champris, Brussels-based policy manager at the Computer and Communications Industry Association Europe, which counts Alphabet, Apple, Amazon and Meta among its members. “Europe’s existing copyright rules already cover AI applications sufficiently.”
In May, Sam Altman, CEO of ChatGPT developer OpenAI, emerged as the highest-profile critic of the EU’s proposals, accusing it of “overregulating” the nascent business. He even said that his company, which is backed by Microsoft, might consider leaving Europe if it could not comply with the legislation, although he walked back this statement a few days later. OpenAI and other companies lobbied — successfully — to have an early draft of the legislation changed so that “general-purpose AI systems” like ChatGPT would no longer be considered high risk and thus subject to stricter rules, according to documents Time magazine obtained from the European Commission. (OpenAI didn’t respond to Billboard’s requests for comment.)
The lobbying over AI echoes some of the other political conflicts between media and technology companies — especially the one over the EU Copyright Directive, which passed in 2019. While that “was framed as YouTube versus the music industry, the narrative has now switched to AI,” says Sophie Goossens, a partner at global law firm Reed Smith. “But the argument from rights holders is much the same: They want to stop tech companies from making a living on the backs of their content.”
Several of the provisions in the Copyright Directive deal with AI, including an exception in the law for text- and data-mining of copyrighted content, such as music, in certain cases. Another exception allows scientific and research institutions to engage in text- and data-mining on works to which they have lawful access.
So far, the debate around generative AI in the United States has focused on whether performers can use state laws on right of publicity to protect their distinctive voices and images — the so-called “output side” of generative AI. In contrast, both the Copyright Directive and the AI Act address the “input side,” meaning ways that rights holders can either stop AI systems from using their content for training purposes or limit which ones can in order to license that right.
Another source of tension created by the Copyright Directive is the potential for blurred boundaries between research institutions and commercial businesses. Microsoft, for example, refers to its Muzic venture as “a research project on AI music,” while Google regularly partners with independent research, academic and scientific bodies on technology developments, including AI. To close potential loopholes, Phelan wants lawmakers to strengthen the bill’s transparency provisions, requiring specific details of all music accessed for training, instead of the “summary” that’s currently called for. IFPI, the global recorded-music trade organization, regards the transparency provisions as “a meaningful step in the right direction,” according to Lodovico Benvenuti, managing director of its European office, and he says he hopes lawmakers won’t water that down.
The effects of the AI Act will be felt far outside Europe, partly because they will apply to any company that does business in the 27-country bloc and partly because it will be the first comprehensive set of rules on the use of the technology. In the United States, the Biden administration has met with technology executives to discuss AI but has yet to lay out a legislative strategy. On June 22, Senate Majority Leader Chuck Schumer, D-N.Y., said that he was working on “exceedingly ambitious” bipartisan legislation on the topic, though political divides in the United States as the next presidential election approaches would make passage difficult. China unveiled its own draft laws in April, although other governments may be reluctant to look to legislation there as a model.
“The rest of the world is looking at the EU because they are leading the way in terms of how to regulate AI,” says Goossens. “This will be a benchmark.”
Universal Music Group general counsel/executive vp of business and legal affairs, Jeffery Harleston, spoke as a witness in a Senate Judiciary Committee hearing on AI and copyright on Wednesday (July 12) to represent the music industry. In his remarks, the executive called for a “federal right of publicity” — the state-by-state right that protects artists’ likenesses, names, and voices — as well as for “visibility into AI training data” and for “AI-generated content to be labeled as such.”
Harleston was joined by other witnesses including Karla Ortiz, a conceptual artist and illustrator who is waging a class action lawsuit against Stability AI; Matthew Sag, professor of artificial intelligence at Emory University School of Law; Dana Rao, executive vp/general counsel at Adobe; and Ben Brooks, head of public policy at Stability AI.
“I’d like to make four key points to you today,” Harleston began. “First, copyright, artists, and human creativity must be protected. Art and human creativity are central to our identity.” He clarified that AI is not necessarily always an enemy to artists, and can be used in “service” to them as well. “If I leave you with one message today, it is this: AI in the service of artists and creativity can be a very, very good thing. But AI that uses, or, worse yet, appropriates the work of these artists and creators and their creative expression, their name, their image, their likeness, their voice, without authorization, without consent, simply is not a good thing,” he said.
Second, he noted the challenges that generative AI poses to copyright. In written testimony, he noted the concern of “AI-generated music being used to generate fraudulent plays on streaming services, siphoning income from human creators.” And while testifying at the hearing, he added, “At Universal, we are the stewards of tens of thousands, if not hundreds of thousands, of copyrighted creative works from our songwriters and artists, and they’ve entrusted us to honor, value and protect them. Today, they are being used to train generative AI systems without authorization. This irresponsible AI is violative of copyright law and completely unnecessary.”
Training is one of the most contentious areas of generative AI for the music industry. To learn how to generate a human voice, a drum beat or lyrics, an AI model trains on up to billions of data points. Often this data contains copyrighted material, like sound recordings, used without the owner’s knowledge or compensation. And while many believe this should be considered a form of copyright infringement, the legality of using copyrighted works as training data is still being determined in the United States and other countries.
The topic is also the source of Ortiz’s class action lawsuit against Stability AI. Her complaint, filed in California federal court along with two other visual artists, alleges that the “new” images generated by Stability AI’s Stable Diffusion model used their art “without the consent of the artists and without compensating any of those artists,” which they feel makes any resulting generation from the AI model a “derivative work.”
In his spoken testimony, Harleston pointed to today’s “robust digital marketplace” — including social media sites, apps and more — in which “thousands of responsible companies properly obtained the rights they need to operate. There is no reason that the same rules should not apply equally to AI companies.”
Third, he reiterated that “AI can be used responsibly…just like other technologies before.” Among his examples of positive uses of AI, he pointed to Lee Hyun [aka MIDNATT], a K-pop artist distributed by UMG who used generative AI to simultaneously release the same single in six languages using his voice on the same day. “The generative AI tool extended the artist’s creative intent and expression with his consent to new markets and fans instantly,” Harleston said. “In this case, consent is the key,” he continued, echoing Ortiz’s complaint.
While making his final point, Harleston urged Congress to act in several ways — including by enacting a federal right of publicity. Currently, rights of publicity vary widely state by state, and many states’ versions include limitations, including less protection for some artists after their deaths.
The shortcomings of this state-by-state system were highlighted when an anonymous internet user called Ghostwriter posted a song – apparently using AI to mimic the voices of Drake and The Weeknd – called “Heart On My Sleeve.” The track’s uncanny rendering of the two major stars immediately went viral, forcing the music business to confront the new, fast-developing concern of AI voice impersonation.
A month later, sources told Billboard that the three major label groups — UMG, Warner Music Group and Sony Music — had been in talks with the big music streaming services to allow them to cite “right of publicity” violations as a reason to take down songs with AI vocals. Removing songs based on right of publicity violations is not required by law, so any cooperation from the streamers would be voluntary.
“Deep fakes, and/or unauthorized recordings or visuals of artists generated by AI, can lead to consumer confusion, unfair competition against the artists that actually were the original creator, market dilution and damage to the artists’ reputation or potentially irreparably harming their career. An artist’s voice is often the most valuable part of their livelihood and public persona. And to steal it, no matter the means, is wrong,” said Harleston.
In his written testimony, Harleston went deeper, stating UMG’s position that “AI generated, mimicked vocals trained on vocal recordings from our copyrighted recordings go beyond Right of Publicity violations… copyright law has clearly been violated.” Many AI voice uses circulating on the internet involve users mashing up a previously released song topped with a different artist’s voice. These types of uses, Harleston wrote, mean “there are likely multiple infringements occurring.”
Harleston added that “visibility into AI training data is also needed. If the data on AI training is not transparent, the potential for a healthy marketplace will be stymied as information on infringing content will be largely inaccessible to individual creators.”
Another witness at the hearing raised the idea of an “opt-out” system so that artists who do not wish to be part of an AI’s training data set will have the option of removing themselves. Already, Spawning, a music-tech start-up, has launched a website to put this possible remedy into practice for visual art. Called “HaveIBeenTrained.com,” the service helps creators opt out of training data sets commonly used by an array of AI companies, including Stability AI, which previously agreed to honor the HaveIBeenTrained.com opt-outs.
Harleston, however, said he did not believe opt-outs are enough. “It will be hard to opt out if you don’t know what’s been opted in,” he said. Spawning co-founder Mat Dryhurst previously told Billboard that HaveIBeenTrained.com is working on an opt-in tool, though this product has yet to be released.
Finally, Harleston urged Congress to require that AI-generated content be labeled as such. “Consumers deserve to know exactly what they’re getting,” he said.
LONDON — Amid increasing concern among artists, songwriters, record labels and publishers over the impact of artificial intelligence (AI) on the music industry, European regulators are finalizing sweeping new laws that will help determine what AI companies can and cannot do with copyrighted music works.
On Wednesday (June 14), Members of the European Parliament (MEPs) voted overwhelmingly in favor of the Artificial Intelligence (AI) Act, with 499 votes for, 28 against and 93 abstentions. The draft legislation, which was first proposed in April 2021 and covers a wide range of AI applications, including its use in the music industry, will now be negotiated among the European Parliament, the European Commission and the European Council, which can make further amendments ahead of its planned adoption by the end of the year.
For music rightsholders, the European Union’s (EU) AI Act is the world’s first legal framework for regulating AI technology in the record business and comes as other countries, including the United States, China and the United Kingdom, explore their own paths to policing the rapidly evolving AI sector.
The EU proposals state that generative AI systems will be required to disclose when the content they produce is AI-generated — helping distinguish deep-fake content from the real thing — and to provide detailed, publicly available summaries of any copyright-protected music or data they have used for training purposes.
“The AI Act will set the tone worldwide in the development and governance of artificial intelligence,” MEP and co-rapporteur Dragos Tudorache said following Wednesday’s vote. The EU legislation would ensure that AI technology “evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law,” he added.
The EU’s AI Act arrives as the music business is urgently trying to respond to recent advances in the technology. The issue came to a head in April with the release of “Heart on My Sleeve,” the now-infamous song uploaded to TikTok that is said to have been created using AI to imitate vocals from Drake and The Weeknd. The song was quickly pulled from streaming services following a request from Universal Music Group, which represents both artists, but not before it had racked up hundreds of thousands of streams.
A few days before “Heart on My Sleeve” became a short-lived viral hit, UMG wrote to streaming services, including Spotify and Apple Music, asking them to stop AI companies from accessing the label’s copyrighted songs “without obtaining the required consents” to “train” their machines. The Recording Industry Association of America (RIAA) has also warned against AI companies violating copyrights by using existing music to generate new tunes.
If the EU’s AI Act passes in its present draft form, it will strengthen supplementary protections against the unlawful use of music in training AI systems. Existing European laws dealing with text and data-mining copyright exceptions mean that rightsholders will still technically need to opt out of those exceptions if they want to ensure their music is not used by AI companies that are either operating or accessible in the European Union.
The AI Act would not undo or change any of the copyright protections currently provided under EU law, including the Copyright Directive, which came into force in 2019 and effectively ended safe harbor provisions for digital platforms in Europe.
That means that if an AI company were to use copyright-protected songs for training purposes — and publicly declare the material it had used as required by the AI Act — it would still be subject to infringement claims for any AI-generated content it then tried to commercially release, including infringement of the copyright, legal, personality and data rights of artists and rightsholders.
“What cannot, is not, and will not be tolerated anywhere is infringement of songwriters’ and composers’ rights,” said John Phelan, director general of international music publishing trade association ICMP, in a statement. The AI Act, he says, will ensure “special attention for intellectual property rights” but further improvements to the legislation “are there to be won.”