AI
A U.K. Parliament committee is calling on the British government to ensure that artificial intelligence (AI) developers cannot freely use copyright-protected musical works for training purposes — and to commit to abandoning much-criticized plans that opponents say would significantly weaken copyright protections for artists and rights holders.
A report from the Culture, Media and Sport (CMS) Committee published Wednesday (Aug. 30) says that any future legislation governing the use of AI technology in the United Kingdom, the world’s third-biggest music market, must not risk “reducing arts and cultural production to mere ‘inputs’ in AI development.”
Committee members also state that urgent action must be taken to improve protections for artists and creators against the misuse of their likenesses, image rights and performances by emerging technologies such as generative AI.
The report comes more than a year after U.K. government body the Intellectual Property Office (IPO) first proposed the introduction of a new text and data mining (TDM) exception allowing AI developers to freely use copyright-protected works for commercial purposes.
Those plans, announced by the IPO last June, gave rights holders no option to opt out of the TDM exception, although they did state that tech developers would still require “lawful access” to any copyright-protected data, enabling rights holders to agree to license fees and charge for access.
The proposals drew strong criticism from across the creative industries, with Jamie Njoku-Goodwin, CEO of umbrella trade body UK Music, describing them as a “green light to music laundering.” In response, the government announced in February that it had listened to the objections and would no longer be proceeding with the original plans.
The CMS Committee welcomed the change of course but warned that the government’s handling “shows a clear lack of understanding of the needs of the U.K.’s creative industries.”
“The chorus of warnings from musicians, authors and artists about the real and lasting harm a failure to protect intellectual property in a world where the influence of AI is growing should be enough for ministers to sit up and take notice,” said CMS Committee chair Dame Caroline Dinenage in a statement.
Dinenage said the government must follow through on its pledge to abandon plans for a text and data mining exception to copyright-protected works and regain the trust of the creative industries by developing “a copyright and regulatory regime that properly protects them” from the potential risks of AI.
The U.K.’s current legal framework, which contains TDM allowances for non-commercial research purposes while also allowing rights holders to commercially license their work, “provides an appropriate balance between innovation and creator rights,” said the committee report.
The U.K.’s moves to police the rapidly evolving AI sector come as other countries and jurisdictions, including the United States, China and the European Union, explore their own paths toward regulating the nascent technology.
The EU’s Artificial Intelligence Act, which was first proposed in April 2021 and is now being negotiated among politicians in different branches of government, is leading the way as the world’s first comprehensive legislation around AI. It states that generative AI systems will be required to disclose when the content they produce is AI-generated — helping differentiate computer-created works from those authored by humans — and to provide detailed, publicly available summaries of any copyright-protected music or data they have used for training purposes.
Other provisions in European law, most notably those contained in 2019’s EU Copyright Directive, also deal with AI and with text and data mining exceptions for copyrighted content such as music, although these are more robust than those initially proposed — and since abandoned — by the U.K. government. The EU provisions allow rights holders to stop AI systems from using their content for training purposes, or to limit which systems may do so in order to license that right.
Responding to the CMS Committee’s recommendations, BPI chief executive Jo Twist said it was “essential that artists and rightsholders can work in partnership with technology and that policies do not allow AI to get a free ride, but to always respect human creativity by seeking permission and remunerating the use of creative content.”
Grimes is among the first wave of featured speakers for the 2024 South by Southwest (SXSW) conference, an event which promises to lean into AI-focused programming.
Announced today (Aug. 29), the session, dubbed “AI and the Independent Artist,” will feature the multidisciplinary artist and will explore how artificial intelligence is changing the way artists create and market their music and engage with their fans, as well as, of course, the challenges and responsibilities it brings for the music industry.
The Canadian artist is known for pushing boundaries in the creative space. She enhanced that reputation by unveiling her Elf.Tech project earlier in the year, an open-source software program which encourages fans to make music (and money) with replications of her voice.
TuneCore CEO Andreea Gleeson and CreateSafe CEO Daouda Leonard are also confirmed for the panel, on which they will “present principles for companies to consider” and share results and lessons learned from early AI pilot programs, according to a SXSW statement.
The conversation on AI is only getting started. Just last week, streaming giant YouTube and Universal Music Group, the world’s biggest music company, announced a new initiative with artists and producers for an “AI Music Incubator,” and YT unveiled its own set of principles as it promised to “embrace” AI “responsibly together” with its music partners.
Other SXSW daytime discussions will drill into “AI and Humanity’s Co-evolution,” with speakers Josh Constine, venture partner at SignalFire, and Peter Deng, OpenAI’s VP of consumer product and head of ChatGPT; “Building the Next Era of the Internet” with Chris Dixon, author, general partner at Andreessen Horowitz and founder/managing partner at a16z crypto; and a conversation between Alex Cooper, creator, host and executive producer of the podcast Call Her Daddy, and Matt Kaplan, founder and CEO of ACE Entertainment.
Also slated for the conference schedule, Amy Webb, CEO of the Future Today Institute and professor at NYU Stern School of Business, will launch the 2024 Emerging Tech Trend Report.
SXSW 2024 will take place March 8–16 in Austin, TX.
Established in 1987, SXSW celebrates the convergence of tech, film and television, music, education, and culture and is recognized as an important destination for professionals who play in those spaces.
SXSW 2024 is sponsored by Porsche, C4 Energy, and The Austin Chronicle.
Visit sxsw.com for more.
08/23/2023
Here are all the music stars who have spoken out about the growing technology.
08/23/2023
Ask 100 people how they feel about AI-generated songs and you will likely get 100 different answers. But ask Selena Gomez how she feels about someone who cobbled together an AI version of The Weeknd‘s “Starboy” featuring her computer-generated vocals layered next to those of her ex and, well, her answer is swift and succinct. […]
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: A federal judge rules that works created by A.I. are not covered by copyrights; an appeals court revives abuse lawsuits against Michael Jackson’s companies; Smokey Robinson beats a lawsuit claiming he owed $1 million to a former manager; SoundExchange sues SiriusXM for “gaming the system” on royalties; and much more.
No Copyrights For A.I. Works – But Tougher Questions Loom
The rise of artificial intelligence will pose many difficult legal questions for the music business, likely requiring some combination of litigation, regulation and legislation before all the dust settles. But on at least one A.I. issue, a federal judge just gave us a clean, straightforward answer.
In a decision issued Friday, U.S. District Judge Beryl Howell ruled that American copyright law does not cover works created entirely by artificial intelligence – full stop. That’s because, the judge said, the essential purpose of copyright law is to encourage human beings to create new works.
“Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them,” the judge wrote.
Though novel, the decision was not entirely surprising. Federal courts have long strictly limited copyright to content created by humans, rejecting protection for works created by animals, by forces of nature, and even those claimed to have been authored by divine spirits, like religious texts.
But the ruling was nonetheless important because it came amid growing interest in the role that so-called generative AI tools, similar to the much-discussed ChatGPT, could play in the future creation of music and other content. The issue of copyright protection is crucial to the future role of AI, since works that are not protected would be difficult to monetize.
Trickier legal dilemmas lie ahead. What if an AI-powered tool is used in the studio to create parts of a song, but human artists then add other elements? How much human direction on the use of AI tools is needed for the output to count as “human authorship”? How can a court filter out, in practical terms, elements authored by computers?
On those questions, the current answers are much squishier – something that Judge Howell hinted at in her decision. “Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works. The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions.”
“This case, however, is not nearly so complex.”
Other top stories this week…
MJ ABUSE CASES REVIVED – A California appeals court revived lawsuits filed by two men who claim Michael Jackson sexually abused them as children, ruling that they can pursue negligence claims against his companies. A lower court dismissed the cases on the grounds that staffers had no power to control Jackson, who was the sole owner of the companies. But the appeals court called such a ruling “perverse” and overturned it: “A corporation that facilitates the sexual abuse of children by one of its employees is not excused from an affirmative duty to protect those children merely because it is solely owned by the perpetrator.”
SMOKEY ROBINSON TRIAL VICTORY – The legendary Motown singer won a jury trial against a former manager who claimed he was owed nearly $1 million in touring profits, capping off more than six years of litigation over the soured partnership. Robinson himself took the stand during the case, telling jurors that the deal was never intended to cover concert revenue.
“GAMING THE SYSTEM” – SoundExchange filed a lawsuit against SiriusXM claiming the satellite radio giant is using bookkeeping trickery to withhold more than $150 million in royalties owed to artists. The case centers on allegations that SiriusXM is manipulating how it bundles satellite services with web streaming services to “grossly underpay the royalties it owes.”
TIKTOK JUDGE RESPONDS – A judge in New Jersey defended himself against misconduct allegations over TikTok videos in which he lip-synced to Rihanna’s “Jump” and other popular songs, admitting “poor judgment” and “vulgar” lyrics but saying he should receive only a light reprimand for what he intended as “silly, harmless, and innocent fun.”
LAWSUIT OVER TAKEOFF SHOOTING – Joshua Washington, an assistant to the rapper Quavo, filed a lawsuit over last year’s shooting in Houston that killed fellow Migos rapper Takeoff. He claims injuries sustained during the attack are the fault of the bowling alley where the shooting took place, which he says failed to provide adequate security, screening or emergency assistance.
GUNPLAY FACING FELONY COUNTS – The rapper Gunplay was arrested in Miami and hit with three felony charges over an alleged domestic violence incident in which he is reportedly accused of drunkenly pointing an AK-47 assault rifle at his wife and child during an argument.
FRENCH DIDN’T CLEAR SAMPLE? – The rapper French Montana was hit with a copyright lawsuit claiming his 2022 song “Blue Chills” features an unlicensed sample from singer-songwriter Skylar Gudasz. She claims he tentatively agreed to pay her for the clip – both in an upfront payment and a 50 percent share of the publishing copyright — but then never actually signed the deal.
YOUTUBE FRAUDSTER SENTENCED – Webster “Yenddi” Batista Fernandez, one of the leaders of the largest-known YouTube music royalty scam in history, was sentenced to nearly four years in prison after pleading guilty to one count of wire fraud and one count of conspiracy. Under the name MediaMuv, Batista and an accomplice fraudulently collected roughly $23 million in royalties from over 50,000 songs by Latin musicians ranging from small artists to global stars like Daddy Yankee.
A federal judge ruled Friday (Aug. 18) that U.S. copyright law does not cover creative works created by artificial intelligence, weighing in on an issue that’s being closely watched by the music industry.
In a 15-page written opinion, Judge Beryl Howell upheld a decision by the U.S. Copyright Office to deny a copyright registration to computer scientist Stephen Thaler for an image created solely by an AI model. The judge cited decades of legal precedent that such protection is only afforded to works created by humans.
“The act of human creation — and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts — was … central to American copyright from its very inception,” the judge wrote. “Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.”
In a statement Friday, Thaler’s attorney Ryan Abbott said he and his client “disagree with the district court’s judgment” and vowed to appeal: “In our view, copyright law is clear that the public is the main beneficiary of the law and this is best achieved by promoting the generation and dissemination of new works, regardless of how they are created.”
Though novel, the decision was not entirely surprising. Federal courts have long strictly limited copyright protection to content created by humans, rejecting it for works created by animals, by forces of nature, and even those claimed to have been authored by divine spirits, like religious texts.
But the ruling was nonetheless important because it came amid growing interest in the role that so-called generative AI tools, similar to the much-discussed ChatGPT, could play in the future creation of music and other content. The question of copyright protection is crucial to the future role of AI, since works that are not protected would be difficult to monetize.
“Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works,” the judge wrote. “The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions.”
The current case, however — dealing with a work that was admittedly created solely by a computer — “is not nearly so complex,” the judge wrote. Given the lack of any human input at all, she said, Thaler’s case presented a “clear and straightforward answer.”
Though Friday’s ruling came with a clear answer, more challenging legal dilemmas will come in the future from more subtle uses of AI. What if an AI-powered tool is used in the studio to create parts of a song, but human artists add other elements to the final product? How much human direction on the use of those tools is needed for the output to count as “human authorship”?
Earlier this year, a report by the U.S. Copyright Office said that AI-assisted works could still be copyrighted, so long as the ultimate author remains a human being. The report avoided offering easy answers, saying that protection for AI works would be “necessarily a case-by-case inquiry,” and that the final outcome would always depend on individual circumstances.
Futureverse — a multi-hyphenate AI company — published a new research paper on Thursday (June 9) to introduce its forthcoming text-to-music generator. Called Jen-1, the unreleased model is designed to address shortcomings of currently available music generators like Google’s MusicLM, offering higher-fidelity audio and longer, more complex musical works than what is on the market today.
“Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool,” says Shara Senderoff, co-founder of Futureverse and of Raised in Space, about the model in an exclusive first look with Billboard. Expected to be released in early 2024, Jen can generate songs up to three minutes long and can also help producers finish half-written songs by offering ‘continuation’ and ‘in-painting.’
‘Continuation’ allows a music maker to upload an incomplete song to Jen and direct the model to generate a plausible way to finish it, while ‘in-painting’ refers to a process by which the model fills in sections in the middle of a work that are damaged or incomplete. To Aaron McDonald, the company’s co-founder, Jen’s role is to “extend creativity” for human artists.
When asked why Jen is a necessary invention during a time in which producers, songwriters and artists are more bountiful than ever, McDonald replied, “I think musicians throughout the ages have always embraced new technology that expands the way they can create music,” pointing to electronic music as one example of how new tools shape musical evolution. “To imply that music doesn’t need [any new] technology to expand and become better now is kind of silly… and arbitrary.”
He also sees this as a way to “democratize” the “high end of music [quality],” which he says is now only accessible to musicians with the means to record at a well-equipped studio with trained technicians. With Jen, Johnson and Senderoff hope to satisfy the interests of professional musicians and to encourage newcomers to dabble in songwriting, perhaps for the first time. The two co-founders imagine a world in which everyday people can create music, and have nicknamed this type of user-made output ‘AIGC,’ a twist on the term User Generated Content (or ‘UGC’).
Futureverse was formed piecemeal over the last 18 months, merging eleven pre-existing AI and metaverse start-ups into one company to build a range of creative AI models, including those that produce animations, music, sound effects and more. To power its inventions, the company employs the AI protocol from Altered State Machine, a company that was founded by Johnson and included in the merger.
Senderoff says Jen will also be a superior product because Futureverse created it with the input of some of music’s top business executives and creators, unlike its competitors. Senderoff does not reveal who the industry partners are or how Jen will be a more ethical and cooperative model for musicians, but she says an announcement providing more information will be released soon.
Despite its proposed upgrades, Futureverse’s Jen could face significant challenges from other text-to-music generators named in the new research paper, some of which were made by the world’s most established tech giants and have already hit the market. But McDonald is unperturbed. “That forces us to think differently. We don’t have the resources that they do, but we started our process with that in mind. I think we can beat them with a different approach: the key insight is working with the music industry as a way to produce a better product.”
Ed Sheeran is hoping we heed Hollywood’s warnings against artificial intelligence as it becomes more prominent in mainstream media, music and everyday life. The “Shape of You” singer shared his thoughts on the technology while chatting with Audacy Live before his private show at the Hard Rock Hotel in New York recently. “What I don’t […]
“Fake Drake” and similar controversies have gotten most of the attention, but not all uses of artificial intelligence in music are cause for concern.